		Semantics and Behavior of Atomic and
			 Bitmask Operations

			  David S. Miller

	This document is intended to serve as a guide to Linux port
maintainers on how to implement atomic counter, bitops, and spinlock
interfaces properly.

	The atomic_t type should be defined as a signed integer and
the atomic_long_t type as a signed long integer.  Also, they should
be made opaque such that any kind of cast to a normal C integer type
will fail.  Something like the following should suffice:

	typedef struct { int counter; } atomic_t;
	typedef struct { long counter; } atomic_long_t;

Historically, counter has been declared volatile.  This is now discouraged.
See Documentation/volatile-considered-harmful.txt for the complete rationale.

local_t is very similar to atomic_t.  If the counter is per CPU and only
updated by one CPU, local_t is probably more appropriate.  Please see
Documentation/local_ops.txt for the semantics of local_t.

The first operations to implement for atomic_t's are the initializers and
plain reads.

	#define ATOMIC_INIT(i)		{ (i) }
	#define atomic_set(v, i)	((v)->counter = (i))

The first macro is used in definitions, such as:

static atomic_t my_counter = ATOMIC_INIT(1);

The initializer is atomic in that the return values of the atomic operations
are guaranteed to reflect the initialized value if the initializer is used
before runtime.  If the initializer is used at runtime, a proper implicit or
explicit read memory barrier is needed before reading the value with
atomic_read from another thread.

As with all of the atomic_ interfaces, replace the leading "atomic_"
with "atomic_long_" to operate on atomic_long_t.

The second interface can be used at runtime, as in:

	struct foo { atomic_t counter; };
	...

	struct foo *k;

	k = kmalloc(sizeof(*k), GFP_KERNEL);
	if (!k)
		return -ENOMEM;
	atomic_set(&k->counter, 0);

The setting is atomic in that the return values of the atomic operations by
all threads are guaranteed to reflect either the value that has been set
with this operation or set with another operation.  A proper implicit or
explicit memory barrier is needed before the value set with the operation
is guaranteed to be readable with atomic_read from another thread.

Next, we have:

	#define atomic_read(v)	((v)->counter)

which simply reads the counter value currently visible to the calling thread.
The read is atomic in that the return value is guaranteed to be one of the
values initialized or modified with the interface operations if a proper
implicit or explicit memory barrier is used after possible runtime
initialization by any other thread and the value is modified only with the
interface operations.  atomic_read does not guarantee that the runtime
initialization by any other thread is visible yet, so the user of the
interface must take care of that with a proper implicit or explicit memory
barrier.

*** WARNING: atomic_read() and atomic_set() DO NOT IMPLY BARRIERS! ***

Some architectures may choose to use the volatile keyword, barriers, or inline
assembly to guarantee some degree of immediacy for atomic_read() and
atomic_set().  This is not uniformly guaranteed, and may change in the future,
so all users of atomic_t should treat atomic_read() and atomic_set() as simple
C statements that may be reordered or optimized away entirely by the compiler
or processor, and explicitly invoke the appropriate compiler and/or memory
barrier for each use case.  Failure to do so will result in code that may
suddenly break when used with different architectures or compiler
optimizations, or even changes in unrelated code that change how the
compiler optimizes the section accessing atomic_t variables.

*** YOU HAVE BEEN WARNED! ***

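As an illustration of pairing these calls with explicit barriers, here is a
minimal producer/consumer sketch.  The obj structure, its fields, and the
compute_data()/use_data() helpers are hypothetical, not taken from existing
kernel code:

	/* Producer: publish the payload, then set the flag.  atomic_set()
	 * itself orders nothing, so an explicit write barrier is needed.
	 */
	obj->data = compute_data();
	smp_wmb();
	atomic_set(&obj->ready, 1);

	/* Consumer: only touch the payload after seeing the flag set,
	 * with a read barrier in between.
	 */
	if (atomic_read(&obj->ready)) {
		smp_rmb();
		use_data(obj->data);
	}
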
Properly aligned pointers, longs, ints, and chars (and unsigned
equivalents) may be atomically loaded from and stored to in the same
sense as described for atomic_read() and atomic_set().  The ACCESS_ONCE()
macro should be used to prevent the compiler from using optimizations
that might otherwise optimize accesses out of existence on the one hand,
or that might create unsolicited accesses on the other.

For example consider the following code:

	while (a > 0)
		do_something();

If the compiler can prove that do_something() does not store to the
variable a, then the compiler is within its rights transforming this to
the following:

	tmp = a;
	if (tmp > 0)
		for (;;)
			do_something();

If you don't want the compiler to do this (and you probably don't), then
you should use something like the following:

	while (ACCESS_ONCE(a) > 0)
		do_something();

Alternatively, you could place a barrier() call in the loop.

For another example, consider the following code:

	tmp_a = a;
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

If the compiler can prove that do_something_with() does not store to the
variable a, then the compiler is within its rights to manufacture an
additional load as follows:

	tmp_a = a;
	do_something_with(tmp_a);
	tmp_a = a;
	do_something_else_with(tmp_a);

This could fatally confuse your code if it expected the same value
to be passed to do_something_with() and do_something_else_with().

The compiler would be likely to manufacture this additional load if
do_something_with() was an inline function that made very heavy use
of registers: reloading from variable a could save a flush to the
stack and later reload.  To prevent the compiler from attacking your
code in this manner, write the following:

	tmp_a = ACCESS_ONCE(a);
	do_something_with(tmp_a);
	do_something_else_with(tmp_a);

For a final example, consider the following code, assuming that the
variable a is set at boot time before the second CPU is brought online
and never changed later, so that memory barriers are not needed:

	if (a)
		b = 9;
	else
		b = 42;

The compiler is within its rights to manufacture an additional store
by transforming the above code into the following:

	b = 42;
	if (a)
		b = 9;

This could come as a fatal surprise to other code running concurrently
that expected b to never have the value 42 if a was zero.  To prevent
the compiler from doing this, write something like:

	if (a)
		ACCESS_ONCE(b) = 9;
	else
		ACCESS_ONCE(b) = 42;

Don't even -think- about doing this without proper use of memory barriers,
locks, or atomic operations if variable a can change at runtime!

*** WARNING: ACCESS_ONCE() DOES NOT IMPLY A BARRIER! ***

Now, we move on to the atomic operation interfaces typically implemented with
the help of assembly code.

	void atomic_add(int i, atomic_t *v);
	void atomic_sub(int i, atomic_t *v);
	void atomic_inc(atomic_t *v);
	void atomic_dec(atomic_t *v);

These four routines add and subtract integral values to/from the given
atomic_t value.  The first two routines pass explicit integers by
which to make the adjustment, whereas the latter two use an implicit
adjustment value of "1".

One very important aspect of these routines is that they DO NOT
require any explicit memory barriers.  They need only perform the
atomic_t counter update in an SMP safe manner.

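For instance, a bare event counter needs nothing beyond the SMP-safe update.
This is a minimal sketch; the counter and function names are made up:

	static atomic_t rx_packets = ATOMIC_INIT(0);

	static void note_rx_packet(void)
	{
		/* No ordering against surrounding memory accesses is
		 * needed here, so plain atomic_inc() is enough.
		 */
		atomic_inc(&rx_packets);
	}
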
Next, we have:

	int atomic_inc_return(atomic_t *v);
	int atomic_dec_return(atomic_t *v);

These routines add 1 and subtract 1, respectively, from the given
atomic_t and return the new counter value after the operation is
performed.

Unlike the above routines, it is required that these primitives
include explicit memory barriers that are performed before and after
the operation.  It must be done such that all memory operations before
and after the atomic operation calls are strongly ordered with respect
to the atomic operation itself.

For example, it should behave as if a smp_mb() call existed both
before and after the atomic operation.

If the atomic instructions used in an implementation provide explicit
memory barrier semantics which satisfy the above requirements, that is
fine as well.

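Conceptually, on an architecture whose atomic instructions do not order
memory by themselves, the requirement amounts to something like the
following sketch.  arch_add_return() stands in for a hypothetical,
unordered, architecture-internal read-modify-write primitive:

	static inline int atomic_inc_return(atomic_t *v)
	{
		int ret;

		smp_mb();			/* order all earlier memory operations */
		ret = arch_add_return(1, v);	/* hypothetical unordered RMW */
		smp_mb();			/* order all later memory operations */

		return ret;
	}
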
Let's move on:

	int atomic_add_return(int i, atomic_t *v);
	int atomic_sub_return(int i, atomic_t *v);

These behave just like atomic_{inc,dec}_return() except that an
explicit counter adjustment is given instead of the implicit "1".
This means that like atomic_{inc,dec}_return(), the memory barrier
semantics are required.

Next:

	int atomic_inc_and_test(atomic_t *v);
	int atomic_dec_and_test(atomic_t *v);

These two routines increment and decrement by 1, respectively, the
given atomic counter.  They return a boolean indicating whether the
resulting counter value was zero or not.

Again, these primitives provide explicit memory barrier semantics around
the atomic operation.

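A typical use of atomic_dec_and_test() is dropping the last reference to an
object.  This is a minimal sketch; obj and its refcnt field are hypothetical:

	static void put_obj(struct obj *obj)
	{
		/* Only the thread that drops the count to zero frees
		 * the object, so the kfree() cannot race.
		 */
		if (atomic_dec_and_test(&obj->refcnt))
			kfree(obj);
	}
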
	int atomic_sub_and_test(int i, atomic_t *v);

This is identical to atomic_dec_and_test() except that an explicit
decrement is given instead of the implicit "1".  This primitive must
provide explicit memory barrier semantics around the operation.

	int atomic_add_negative(int i, atomic_t *v);

The given increment is added to the given atomic counter value.  A boolean
is returned which indicates whether the resulting counter value is negative.
This primitive must provide explicit memory barrier semantics around
the operation.

Then:

	int atomic_xchg(atomic_t *v, int new);

This performs an atomic exchange operation on the atomic variable v, setting
the given new value.  It returns the old value that the atomic variable v had
just before the operation.

atomic_xchg must provide explicit memory barriers around the operation.

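One common pattern is consuming a "pending" flag exactly once, since the old
value tells the caller whether it won the race.  A sketch, with made-up
names (process_work() is hypothetical):

	static atomic_t work_pending = ATOMIC_INIT(0);

	static void run_pending_work(void)
	{
		/* Atomically clear the flag; only the caller that saw it
		 * set actually processes the work.
		 */
		if (atomic_xchg(&work_pending, 0))
			process_work();
	}
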
	int atomic_cmpxchg(atomic_t *v, int old, int new);

This performs an atomic compare exchange operation on the atomic value v,
with the given old and new values.  Like all atomic_xxx operations,
atomic_cmpxchg will only satisfy its atomicity semantics as long as all
other accesses of *v are performed through atomic_xxx operations.

atomic_cmpxchg must provide explicit memory barriers around the operation.

The semantics for atomic_cmpxchg are the same as those defined for 'cas'
below.

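atomic_cmpxchg() is usually used in a retry loop that recomputes the new
value whenever another thread got in first.  Here is a minimal sketch of a
saturating increment built this way (the function name is made up):

	static void atomic_inc_saturating(atomic_t *v)
	{
		int old, new;

		do {
			old = atomic_read(v);
			if (old == INT_MAX)
				return;		/* already saturated */
			new = old + 1;
		} while (atomic_cmpxchg(v, old, new) != old);
	}
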
Finally:

	int atomic_add_unless(atomic_t *v, int a, int u);

If the atomic value v is not equal to u, this function adds a to v, and
returns non-zero.  If v is equal to u then it returns zero.  This is done as
an atomic operation.

atomic_add_unless must provide explicit memory barriers around the
operation unless it fails (returns 0).

atomic_inc_not_zero(v) is equivalent to atomic_add_unless(v, 1, 0).

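The classic use of atomic_inc_not_zero() is taking a new reference only
while the object is still live.  A sketch; obj and the surrounding lookup
are hypothetical:

	static struct obj *get_obj(struct obj *obj)
	{
		/* Refuse to resurrect an object whose reference count has
		 * already hit zero and is on its way to being freed.
		 */
		if (!atomic_inc_not_zero(&obj->refcnt))
			return NULL;

		return obj;
	}
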
If a caller requires memory barrier semantics around an atomic_t
operation which does not return a value, a set of interfaces are
defined which accomplish this:

	void smp_mb__before_atomic(void);
	void smp_mb__after_atomic(void);

For example, smp_mb__before_atomic() can be used like so:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

It makes sure that all memory operations preceding the atomic_dec()
call are strongly ordered with respect to the atomic counter
operation.  In the above example, it guarantees that the assignment of
"1" to obj->dead will be globally visible to other cpus before the
atomic counter decrement.

Without the explicit smp_mb__before_atomic() call, the
implementation could legally allow the atomic counter update to become
visible to other cpus before the "obj->dead = 1;" assignment.

A missing memory barrier in the cases where they are required by the
atomic_t implementation above can have disastrous results.  Here is
an example, which follows a pattern occurring frequently in the Linux
kernel.  It is the use of atomic counters to implement reference
counting, and it works such that once the counter falls to zero it can
be guaranteed that no other entity can be accessing the object:

static void obj_list_add(struct obj *obj, struct list_head *head)
{
	obj->active = 1;
	list_add(&obj->list, head);
}

static void obj_list_del(struct obj *obj)
{
	list_del(&obj->list);
	obj->active = 0;
}

static void obj_destroy(struct obj *obj)
{
	BUG_ON(obj->active);
	kfree(obj);
}

struct obj *obj_list_peek(struct list_head *head)
{
	if (!list_empty(head)) {
		struct obj *obj;

		obj = list_entry(head->next, struct obj, list);
		atomic_inc(&obj->refcnt);
		return obj;
	}
	return NULL;
}

void obj_poke(void)
{
	struct obj *obj;

	spin_lock(&global_list_lock);
	obj = obj_list_peek(&global_list);
	spin_unlock(&global_list_lock);

	if (obj) {
		obj->ops->poke(obj);
		if (atomic_dec_and_test(&obj->refcnt))
			obj_destroy(obj);
	}
}

void obj_timeout(struct obj *obj)
{
	spin_lock(&global_list_lock);
	obj_list_del(obj);
	spin_unlock(&global_list_lock);

	if (atomic_dec_and_test(&obj->refcnt))
		obj_destroy(obj);
}

(This is a simplification of the ARP queue management in the generic
 neighbour discovery code of the networking subsystem.  Olaf Kirch
 found a bug wrt. memory barriers in kfree_skb() that exposed the
 atomic_t memory barrier requirements quite clearly.)

Given the above scheme, it must be the case that the obj->active
update done by the obj list deletion be visible to other processors
before the atomic counter decrement is performed.

Otherwise, the counter could fall to zero, yet obj->active would still
be set, thus triggering the assertion in obj_destroy().  The error
sequence looks like this:

	cpu 0				cpu 1
	obj_poke()			obj_timeout()
	obj = obj_list_peek();
	... gains ref to obj, refcnt=2
					obj_list_del(obj);
					obj->active = 0 ...
					... visibility delayed ...
	atomic_dec_and_test()
	... refcnt drops to 1 ...
					atomic_dec_and_test()
					... refcount drops to 0 ...
						obj_destroy()
					BUG() triggers since obj->active
					still seen as one
	obj->active update visibility occurs

With the memory barrier semantics required of the atomic_t operations
which return values, the above sequence of memory visibility can never
happen.  Specifically, in the above case the atomic_dec_and_test()
counter decrement would not become globally visible until the
obj->active update does.

As a historical note, 32-bit Sparc used to only allow usage of
24 bits of its atomic_t type.  This was because it used 8 bits
as a spinlock for SMP safety.  Sparc32 lacked a "compare and swap"
type instruction.  However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme that allows the full 32-bit
counter to be realized.  Essentially, an array of spinlocks is
indexed into based upon the address of the atomic_t being operated
on, and that lock protects the atomic operation.  Parisc uses the
same scheme.

Another note is that the atomic_t operations returning values are
extremely slow on an old 386.

We will now cover the atomic bitmask operations.  You will find that
their SMP and memory barrier semantics are similar in shape and scope
to the atomic_t ops above.

Native atomic bit operations are defined to operate on objects aligned
to the size of an "unsigned long" C data type, and are at least of that
size.  The endianness of the bits within each "unsigned long" is the
native endianness of the cpu.

	void set_bit(unsigned long nr, volatile unsigned long *addr);
	void clear_bit(unsigned long nr, volatile unsigned long *addr);
	void change_bit(unsigned long nr, volatile unsigned long *addr);

These routines set, clear, and change, respectively, the bit number
indicated by "nr" on the bit mask pointed to by "addr".

They must execute atomically, yet there are no implicit memory barrier
semantics required of these interfaces.

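For example, setting a status flag whose ordering against other accesses is
enforced elsewhere needs nothing more.  A sketch; the flag name and the
structure are made up:

	/* Request a restart; the caller orders this against other
	 * accesses with an explicit barrier or a lock, as appropriate.
	 */
	set_bit(MYDEV_FLAG_RESTART, &dev->flags);
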
	int test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

Like the above, except that these routines return a boolean which
indicates whether the changed bit was set _BEFORE_ the atomic bit
operation.

WARNING! It is incredibly important that the value be a boolean,
ie. "0" or "1".  Do not try to be fancy and save a few instructions by
declaring the above to return "long" and just returning something like
"old_val & mask" because that will not work.

For one thing, this return value gets truncated to int in many code
paths using these interfaces, so on 64-bit if the bit is set in the
upper 32-bits then testers will never see that.

One great example of where this problem crops up is the thread_info
flag operations.  Routines such as test_and_set_ti_thread_flag() chop
the return value into an int.  There are other places where things
like this occur as well.

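To make the truncation pitfall concrete, compare a broken return value with
a correct one.  This is a sketch; the helper names are made up:

	/* BROKEN: if the tested bit lies above bit 31, the non-zero
	 * result is lost as soon as a caller narrows it to "int".
	 */
	static long broken_result(unsigned long old_val, unsigned long mask)
	{
		return old_val & mask;
	}

	/* Correct: reduce to a strict 0/1 boolean before returning. */
	static int correct_result(unsigned long old_val, unsigned long mask)
	{
		return (old_val & mask) != 0;
	}
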
These routines, like the atomic_t counter operations returning values,
must provide explicit memory barrier semantics around their execution.
All memory operations before the atomic bit operation call must be
made visible globally before the atomic bit operation is made visible.
Likewise, the atomic bit operation must be visible globally before any
subsequent memory operation is made visible.  For example:

	obj->dead = 1;
	if (test_and_set_bit(0, &obj->flags))
		/* ... */;
	obj->killed = 1;

The implementation of test_and_set_bit() must guarantee that
"obj->dead = 1;" is visible to cpus before the atomic memory operation
done by test_and_set_bit() becomes visible.  Likewise, the atomic
memory operation done by test_and_set_bit() must become visible before
"obj->killed = 1;" is visible.

Finally there is the basic operation:

	int test_bit(unsigned long nr, __const__ volatile unsigned long *addr);

Which returns a boolean indicating if bit "nr" is set in the bitmask
pointed to by "addr".

If explicit memory barriers are required around {set,clear}_bit() (which do
not return a value, and thus do not need to provide memory barrier
semantics), two interfaces are provided:

	void smp_mb__before_atomic(void);
	void smp_mb__after_atomic(void);

They are used as follows, and are akin to their atomic_t operation
brothers:

	/* All memory operations before this call will
	 * be globally visible before the clear_bit().
	 */
	smp_mb__before_atomic();
	clear_bit( ... );

	/* The clear_bit() will be visible before all
	 * subsequent memory operations.
	 */
	smp_mb__after_atomic();

There are two special bitops with lock barrier semantics (acquire/release,
same as spinlocks).  These operate in the same way as their non-_lock/unlock
postfixed variants, except that they provide acquire/release semantics,
respectively.  This means they can be used for bit_spin_trylock and
bit_spin_unlock type operations without specifying any more barriers.

	int test_and_set_bit_lock(unsigned long nr, unsigned long *addr);
	void clear_bit_unlock(unsigned long nr, unsigned long *addr);
	void __clear_bit_unlock(unsigned long nr, unsigned long *addr);

The __clear_bit_unlock version is non-atomic; however, it still implements
unlock barrier semantics.  This can be useful if the lock itself is protecting
the other bits in the word.

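As an illustration, a bit in a word can serve as a tiny spinlock.  This is a
sketch with made-up names; in real code the bit_spin_lock() helpers already
wrap this pattern:

	/* Acquire: spin until the lock bit is won; acquire semantics
	 * order the critical section after the bit is set.
	 */
	while (test_and_set_bit_lock(MY_LOCK_BIT, &word))
		cpu_relax();

	/* ... critical section protected by the lock bit ... */

	/* Release: release semantics order the critical section before
	 * the bit is cleared.
	 */
	clear_bit_unlock(MY_LOCK_BIT, &word);
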
Finally, there are non-atomic versions of the bitmask operations
provided.  They are used in contexts where some other higher-level SMP
locking scheme is being used to protect the bitmask, and thus less
expensive non-atomic operations may be used in the implementation.
They have names similar to the above bitmask operation interfaces,
except that two underscores are prefixed to the interface name.

	void __set_bit(unsigned long nr, volatile unsigned long *addr);
	void __clear_bit(unsigned long nr, volatile unsigned long *addr);
	void __change_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_set_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_clear_bit(unsigned long nr, volatile unsigned long *addr);
	int __test_and_change_bit(unsigned long nr, volatile unsigned long *addr);

These non-atomic variants also do not require any special memory
barrier semantics.

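For example, when a spinlock already serializes every writer of a bitmap,
the cheaper variants are sufficient.  A sketch; the lock and bitmap names
are made up:

	spin_lock(&map_lock);		/* map_lock protects every bit in bitmap */
	__set_bit(nr, bitmap);		/* no atomic read-modify-write needed */
	spin_unlock(&map_lock);
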
The routines xchg() and cmpxchg() must provide the same exact
memory-barrier semantics as the atomic and bit operations returning
values.

Spinlocks and rwlocks have memory barrier expectations as well.
The rule to follow is simple:

1) When acquiring a lock, the implementation must make it globally
   visible before any subsequent memory operation.

2) When releasing a lock, the implementation must make it such that
   all previous memory operations are globally visible before the
   lock release.

Which finally brings us to _atomic_dec_and_lock().  There is an
architecture-neutral version implemented in lib/dec_and_lock.c,
but most platforms will wish to optimize this in assembler.

	int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock);

Atomically decrement the given counter, and if it will drop to zero
atomically acquire the given spinlock and perform the decrement
of the counter to zero.  If it does not drop to zero, do nothing
with the spinlock.

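The typical caller looks something like the following sketch, where obj, its
list, and obj_list_lock are made-up names: if the reference count hits zero,
the object is unlinked under the lock and freed.

	if (_atomic_dec_and_lock(&obj->refcnt, &obj_list_lock)) {
		/* Last reference dropped: we hold obj_list_lock, so no
		 * new reference can be taken through the list.
		 */
		list_del(&obj->list);
		spin_unlock(&obj_list_lock);
		kfree(obj);
	}
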
It is actually pretty simple to get the memory barrier correct.
Simply satisfy the spinlock grab requirements, which is to make
sure the spinlock operation is globally visible before any
subsequent memory operation.

We can demonstrate this operation more clearly if we define
an abstract atomic operation:

	long cas(long *mem, long old, long new);

"cas" stands for "compare and swap".  It atomically:

1) Compares "old" with the value currently at "mem".
2) If they are equal, "new" is written to "mem".
3) Regardless, the current value at "mem" is returned.

As an example usage, here is what an atomic counter update
might look like:

void example_atomic_inc(long *counter)
{
	long old, new, ret;

	while (1) {
		old = *counter;
		new = old + 1;

		ret = cas(counter, old, new);
		if (ret == old)
			break;
	}
}

Let's use cas() in order to build a pseudo-C atomic_dec_and_lock():

int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
{
	long old, new, ret;
	int went_to_zero;

	went_to_zero = 0;
	while (1) {
		old = atomic_read(atomic);
		new = old - 1;
		if (new == 0) {
			went_to_zero = 1;
			spin_lock(lock);
		}
		ret = cas(atomic, old, new);
		if (ret == old)
			break;
		if (went_to_zero) {
			spin_unlock(lock);
			went_to_zero = 0;
		}
	}

	return went_to_zero;
}

Now, as far as memory barriers go, as long as spin_lock()
strictly orders all subsequent memory operations (including
the cas()) with respect to itself, things will be fine.

Said another way, _atomic_dec_and_lock() must guarantee that
a counter dropping to zero is never made visible before the
spinlock is acquired.

Note that this also means that for the case where the counter
is not dropping to zero, there are no memory ordering
requirements.