============================
Kernel-provided User Helpers
============================

These are segments of kernel-provided user code reachable from user space
at a fixed address in kernel memory. They are used to provide user space
with some operations which require kernel help because of unimplemented
native features and/or instructions in many ARM CPUs. The idea is for this
code to be executed directly in user mode for best efficiency, but it is
too intimate with the kernel counterpart to be left to user libraries.
In fact this code might even differ from one CPU to another depending on
the available instruction set, or whether it is an SMP system. In other
words, the kernel reserves the right to change this code as needed without
warning. Only the entry points and their results as documented here are
guaranteed to be stable.

This is different from (but doesn't preclude) a full blown VDSO
implementation, however a VDSO would prevent some assembly tricks with
constants that allow for efficient branching to those code segments. And
since those code segments only use a few cycles before returning to user
code, a VDSO indirect far call would add measurable overhead to such
minimalistic operations.

User space is expected to bypass those helpers and implement those things
inline (either in the code emitted directly by the compiler, or part of
the implementation of a library call) when optimizing for a recent enough
processor that has the necessary native support, but only if resulting
binaries would already be incompatible with earlier ARM processors due to
usage of similar native instructions for other things. In other words
don't make binaries unable to run on earlier processors just for the sake
of not using these kernel helpers if your compiled code is not going to
use new instructions for other purposes.

New helpers may be added over time, so an older kernel may be missing some
helpers present in a newer kernel. For this reason, programs must check
the value of __kuser_helper_version (see below) before assuming that it is
safe to call any particular helper. This check should ideally be
performed only once at process startup time, and execution aborted early
if the required helpers are not provided by the kernel version that
process is running on.

kuser_helper_version
--------------------

Location: 0xffff0ffc

Reference declaration::

  extern int32_t __kuser_helper_version;

Definition:

  This field contains the number of helpers being implemented by the
  running kernel. User space may read this to determine the availability
  of a particular helper.

Usage example::

  #define __kuser_helper_version (*(int32_t *)0xffff0ffc)

  void check_kuser_version(void)
  {
          if (__kuser_helper_version < 2) {
                  fprintf(stderr, "can't do atomic operations, kernel too old\n");
                  abort();
          }
  }

Notes:

  - User space may assume that the value of this field never changes
    during the lifetime of any single process. This means that this
    field can be read once during the initialisation of a library or
    startup phase of a program.

kuser_get_tls
-------------

Location: 0xffff0fe0

Reference prototype::

  void * __kuser_get_tls(void);

Input:

  lr = return address

Output:

  r0 = TLS value

Clobbered registers:

  none

Definition:

  Get the TLS value as previously set via the __ARM_NR_set_tls syscall.

Usage example::

  typedef void * (__kuser_get_tls_t)(void);
  #define __kuser_get_tls (*(__kuser_get_tls_t *)0xffff0fe0)

  void foo(void)
  {
          void *tls = __kuser_get_tls();
          printf("TLS = %p\n", tls);
  }

Notes:

  - Valid only if __kuser_helper_version >= 1 (from kernel version 2.6.12).

kuser_cmpxchg
-------------

Location: 0xffff0fc0

Reference prototype::

  int __kuser_cmpxchg(int32_t oldval, int32_t newval, volatile int32_t *ptr);

Input:

  r0 = oldval
  r1 = newval
  r2 = ptr
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, ip, flags

Definition:

  Atomically store newval in `*ptr` only if `*ptr` is equal to oldval.
  Return zero if `*ptr` was changed or non-zero if no exchange happened.
  The C flag is also set if `*ptr` was changed to allow for assembly
  optimization in the calling code.

Usage example::

  typedef int (__kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
  #define __kuser_cmpxchg (*(__kuser_cmpxchg_t *)0xffff0fc0)

  int atomic_add(volatile int *ptr, int val)
  {
          int old, new;

          do {
                  old = *ptr;
                  new = old + val;
          } while (__kuser_cmpxchg(old, new, ptr));

          return new;
  }

Notes:

  - This routine already includes memory barriers as needed.

  - Valid only if __kuser_helper_version >= 2 (from kernel version 2.6.12).

kuser_memory_barrier
--------------------

Location: 0xffff0fa0

Reference prototype::

  void __kuser_memory_barrier(void);

Input:

  lr = return address

Output:

  none

Clobbered registers:

  none

Definition:

  Apply any needed memory barrier to preserve consistency with data modified
  manually and __kuser_cmpxchg usage.

Usage example::

  typedef void (__kuser_dmb_t)(void);
  #define __kuser_dmb (*(__kuser_dmb_t *)0xffff0fa0)

Notes:

  - Valid only if __kuser_helper_version >= 3 (from kernel version 2.6.15).

kuser_cmpxchg64
---------------

Location: 0xffff0f60

Reference prototype::

  int __kuser_cmpxchg64(const int64_t *oldval,
                        const int64_t *newval,
                        volatile int64_t *ptr);

Input:

  r0 = pointer to oldval
  r1 = pointer to newval
  r2 = pointer to target value
  lr = return address

Output:

  r0 = success code (zero or non-zero)
  C flag = set if r0 == 0, clear if r0 != 0

Clobbered registers:

  r3, lr, flags

Definition:

  Atomically store the 64-bit value pointed to by `newval` in `*ptr` only if
  `*ptr` is equal to the 64-bit value pointed to by `oldval`. Return zero if
  `*ptr` was changed or non-zero if no exchange happened.

  The C flag is also set if `*ptr` was changed to allow for assembly
  optimization in the calling code.

Usage example::

  typedef int (__kuser_cmpxchg64_t)(const int64_t *oldval,
                                    const int64_t *newval,
                                    volatile int64_t *ptr);
  #define __kuser_cmpxchg64 (*(__kuser_cmpxchg64_t *)0xffff0f60)

  int64_t atomic_add64(volatile int64_t *ptr, int64_t val)
  {
          int64_t old, new;

          do {
                  old = *ptr;
                  new = old + val;
          } while (__kuser_cmpxchg64(&old, &new, ptr));

          return new;
  }

Notes:

  - This routine already includes memory barriers as needed.

  - Due to the length of this sequence, this spans 2 conventional kuser
    "slots", therefore 0xffff0f80 is not used as a valid entry point.

  - Valid only if __kuser_helper_version >= 5 (from kernel version 3.1).