============================
LINUX KERNEL MEMORY BARRIERS
============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete.  This document
is meant as a guide to using the various memory barriers provided by Linux,
but in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory model
should be viewed as the collective opinion of its maintainers rather than as
an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.

========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Address-dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.

============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1                   CPU 2
        ===============         ===============
        { A == 1; B == 2 }
        A = 3;                  x = B;
        B = 4;                  y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,      STORE B=4,      y=LOAD A->3,    x=LOAD B->4
        STORE A=3,      STORE B=4,      x=LOAD B->4,    y=LOAD A->3
        STORE A=3,      y=LOAD A->3,    STORE B=4,      x=LOAD B->4
        STORE A=3,      y=LOAD A->3,    x=LOAD B->2,    STORE B=4
        STORE A=3,      x=LOAD B->2,    STORE B=4,      y=LOAD A->3
        STORE A=3,      x=LOAD B->2,    y=LOAD A->3,    STORE B=4
        STORE B=4,      STORE A=3,      y=LOAD A->3,    x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1                   CPU 2
        ===============         ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;                  Q = P;
        P = &B;                 D = *Q;

There is an obvious address dependency here, as the value loaded into D depends
on the address retrieved from P by CPU 2.  At the end of the sequence, any of
the following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

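In practice, a driver would avoid raw pointer dereferences here.  As a minimal
sketch only (assuming a hypothetical ioremap()ed register block 'regs' with
made-up offsets REG_ADDR and REG_DATA), the kernel's I/O accessors keep
accesses to the same device in program order, so the read cannot overtake the
store that selects the register:

        /* Sketch: 'regs', REG_ADDR and REG_DATA are illustrative names. */
        writel(5, regs + REG_ADDR);     /* select internal register 5 */
        x = readl(regs + REG_DATA);     /* ordered after the writel() above */

The ordering properties of these accessors are discussed properly in the
"KERNEL I/O BARRIER EFFECTS" section.
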
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

        Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section (see also the sketch after this list).

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms (see the sketch after this list).

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.

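To make the first two anti-guarantees concrete, here is a sketch (with
hypothetical field and lock names) of how two bitfields sharing a word can
corrupt each other even when each is "protected" by its own lock:

        struct foo {
                int f1 : 1;     /* supposedly protected by lock_a */
                int f2 : 1;     /* supposedly protected by lock_b */
        };

        /* An update to f1 under lock_a may be compiled as a non-atomic
         * read-modify-write of the whole containing word, silently
         * overwriting a concurrent update to f2 made under lock_b. */
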

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or
     address-dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Address-dependency barriers (historical).
     [!] This section is marked as HISTORICAL: For more up-to-date
     information, including how compiler transformations related to pointer
     comparisons can sometimes cause problems, see
     Documentation/RCU/rcu_dereference.rst.

     An address-dependency barrier is a weaker form of read barrier.  In the
     case where two loads are performed such that the second depends on the
     result of the first (eg: the first load retrieves the address to which
     the second load will be directed), an address-dependency barrier would
     be required to make sure that the target of the second load is updated
     after the address obtained by the first load is accessed.

     An address-dependency barrier is a partial ordering on interdependent
     loads only; it is not required to have any effect on stores, independent
     loads or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  An address-dependency barrier issued by
     the CPU under consideration guarantees that for any load preceding it,
     if that load touches one of a sequence of stores from another CPU, then
     by the time the barrier completes, the effects of all the stores prior to
     that touched by the load will be perceptible to any loads issued after
     the address-dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have an _address_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that address-dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

     [!] Kernel release v5.9 removed kernel APIs for explicit address-
     dependency barriers.  Nowadays, APIs for marking loads from shared
     variables such as READ_ONCE() and rcu_dereference() provide implicit
     address-dependency barriers.

 (3) Read (or load) memory barriers.

     A read barrier is an address-dependency barrier plus a guarantee that all
     the LOAD operations specified before the barrier will appear to happen
     before all the LOAD operations specified after the barrier with respect to
     the other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply address-dependency barriers, and so can
     substitute for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier.  In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier.  However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

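As a minimal message-passing sketch of ACQUIRE/RELEASE pairing (the variable
and function names are illustrative, not from this document):

        int data;
        int flag;

        void producer(void)                     /* runs on one CPU */
        {
                data = 42;
                /* RELEASE: the store to data cannot appear to happen
                 * after the store to flag. */
                smp_store_release(&flag, 1);
        }

        void consumer(void)                     /* runs on another CPU */
        {
                /* ACQUIRE: once flag is seen as 1, the load of data
                 * below cannot read a stale value. */
                while (!smp_load_acquire(&flag))
                        cpu_relax();
                BUG_ON(data != 42);
        }
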
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/driver-api/pci/pci.rst
            Documentation/core-api/dma-api-howto.rst
            Documentation/core-api/dma-api.rst


ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
----------------------------------------

[!] This section is marked as HISTORICAL: For more up-to-date information,
including how compiler transformations related to pointer comparisons can
sometimes cause problems, see Documentation/RCU/rcu_dereference.rst.

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself.  For those who need it, and for
those who are interested in the history, here is the story of
address-dependency barriers.

[!] While address dependencies are observed in both load-to-load and
load-to-store relations, address-dependency barriers are not necessary
for load-to-store situations.

The requirement of address-dependency barriers is a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1                   CPU 2
        ===============         ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                                Q = READ_ONCE_OLD(P);
                                D = *Q;

[!] READ_ONCE_OLD() corresponds to READ_ONCE() of pre-4.15 kernel, which
doesn't imply an address-dependency barrier.

There's a clear address dependency here, and it would seem that by the end of
the sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, READ_ONCE() provides an implicit address-dependency barrier
since kernel release v4.15:

        CPU 1                   CPU 2
        ===============         ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                                Q = READ_ONCE(P);
                                <implicit address-dependency barrier>
                                D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


An address-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes until they
are certain (1) that the write will actually happen, (2) of the location of
the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file:  The compiler can and does break
dependencies in a great many highly creative ways.

        CPU 1                   CPU 2
        ===============         ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                                Q = READ_ONCE_OLD(P);
                                WRITE_ONCE(*Q, 5);

Therefore, no address-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without an implicit address-dependency barrier of modern READ_ONCE():

        (Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by an address dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The address-dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

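A minimal sketch of the RCU publish/subscribe pattern (the structure and
global pointer are hypothetical):

        struct foo {
                int a;
        };
        struct foo __rcu *gp;

        void publish(struct foo *p)
        {
                p->a = 1;                       /* initialise first... */
                rcu_assign_pointer(gp, p);      /* ...then publish; this has
                                                 * release semantics */
        }

        void subscribe(void)
        {
                struct foo *p;

                rcu_read_lock();
                p = rcu_dereference(gp);        /* implies an address-
                                                 * dependency barrier */
                if (p)
                        BUG_ON(p->a != 1);      /* initialisation visible */
                rcu_read_unlock();
        }
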
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply an (implicit) address-dependency barrier to make it work correctly.
Consider the following bit of code:

        q = READ_ONCE(a);
        <implicit address-dependency barrier>
        if (q) {
                /* BUG: No address dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual address
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a case
what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        }

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional!  Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, it does
not necessarily apply to code following the if-statement:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores.  Please note that it is -not- sufficient
     to use barrier() at beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the COMPILER BARRIER section for more information.

 (*) Control dependencies apply only to the then-clause and else-clause
     of the if-statement containing the control dependency, including
     any functions that these two clauses call.  Control dependencies
     do -not- apply to code following the if-statement containing the
     control dependency.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide multicopy atomicity.  If you
     need all the CPUs to see a given store at the same time, use smp_mb().

 (*) Compilers do not understand control dependencies.  It is therefore
     your job to ensure that they do not break your code.

SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with an address-dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or an address-dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

        CPU 1                   CPU 2
        ===============         ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);       x = READ_ONCE(b);
                                <read barrier>
                                y = READ_ONCE(a);

Or:

        CPU 1                   CPU 2
        ===============         ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);      x = READ_ONCE(b);
                                <implicit address-dependency barrier>
                                y = *x;

Or even:

        CPU 1                   CPU 2
        ===============         ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);       if (r2 = READ_ONCE(x)) {
                                        <implicit control dependency>
                                        WRITE_ONCE(y, 1);
                                }

        assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the address-dependency barrier, and
vice versa:

        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);

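Expressed in C, the first pairing above might be sketched as follows (the
function names are illustrative; smp_wmb() and smp_rmb() are the kernel's
SMP write and read barriers):

        int a, b;

        void writer(void)               /* CPU 1 */
        {
                WRITE_ONCE(a, 1);
                smp_wmb();              /* pairs with smp_rmb() below */
                WRITE_ONCE(b, 2);
        }

        void reader(void)               /* CPU 2 */
        {
                int x, y;

                x = READ_ONCE(b);
                smp_rmb();              /* pairs with smp_wmb() above */
                y = READ_ONCE(a);
                BUG_ON(x == 2 && y == 0);       /* prohibited outcome */
        }
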
EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, address-dependency barriers act as partial orderings on address-
dependent loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, an address-dependency barrier were to be placed between the load
of C and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <address-dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   aaaaaaaaaaaaaaaaa   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B ---->  ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B ---->  ---->| A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is they see that they will need to load an
item from memory, and they find a time where they're not using the bus for any
other loads, and so do the load in advance - even though they haven't actually
got to that point in the instruction execution flow yet.  This permits the
actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+


Placing a read barrier or an address-dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->   --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+

f1ab25a3
PM
1370MULTICOPY ATOMICITY
1371--------------------
1372
1373Multicopy atomicity is a deeply intuitive notion about ordering that is
1374not always provided by real computer systems, namely that a given store
0902b1f4
AS
1375becomes visible at the same time to all CPUs, or, alternatively, that all
1376CPUs agree on the order in which all stores become visible. However,
1377support of full multicopy atomicity would rule out valuable hardware
1378optimizations, so a weaker form called ``other multicopy atomicity''
1379instead guarantees only that a given store becomes visible at the same
1380time to all -other- CPUs. The remainder of this document discusses this
1381weaker form, but for brevity will call it simply ``multicopy atomicity''.
241e6663 1382
f1ab25a3 1383The following example demonstrates multicopy atomicity:
241e6663
PM
1384
1385 CPU 1 CPU 2 CPU 3
1386 ======================= ======================= =======================
1387 { X = 0, Y = 0 }
f1ab25a3
PM
1388 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1389 <general barrier> <read barrier>
1390 STORE Y=r1 LOAD X
241e6663 1391
0902b1f4
AS
1392Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1393and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1394to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1395CPU 3's load from Y. In addition, the memory barriers guarantee that
1396CPU 2 executes its load before its store, and CPU 3 loads from Y before
1397it loads from X. The question is then "Can CPU 3's load from X return 0?"
241e6663 1398
0902b1f4 1399Because CPU 3's load from X in some sense comes after CPU 2's load, it
241e6663 1400is natural to expect that CPU 3's load from X must therefore return 1.
0902b1f4
AS
1401This expectation follows from multicopy atomicity: if a load executing
1402on CPU B follows a load from the same variable executing on CPU A (and
1403CPU A did not originally store the value which it read), then on
1404multicopy-atomic systems, CPU B's load must return either the same value
1405that CPU A's load did or some later value. However, the Linux kernel
1406does not require systems to be multicopy atomic.
1407
1408The use of a general memory barrier in the example above compensates
1409for any lack of multicopy atomicity. In the example, if CPU 2's load
1410from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1411from X must indeed also return 1.
f1ab25a3
PM
1412
1413However, dependencies, read barriers, and write barriers are not always
1414able to compensate for non-multicopy atomicity. For example, suppose
1415that CPU 2's general barrier is removed from the above example, leaving
1416only the data dependency shown below:
241e6663
PM
1417
1418 CPU 1 CPU 2 CPU 3
1419 ======================= ======================= =======================
1420 { X = 0, Y = 0 }
f1ab25a3
PM
1421 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1422 <data dependency> <read barrier>
1423 STORE Y=r1 LOAD X (reads 0)
1424
1425This substitution allows non-multicopy atomicity to run rampant: in
1426this example, it is perfectly legal for CPU 2's load from X to return 1,
1427CPU 3's load from Y to return 1, and its load from X to return 0.
1428
1429The key point is that although CPU 2's data dependency orders its load
0902b1f4
AS
1430and store, it does not guarantee to order CPU 1's store. Thus, if this
1431example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1432store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1433writes. General barriers are therefore required to ensure that all CPUs
1434agree on the combined order of multiple accesses.
f1ab25a3
PM
1435
General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses.  For example, switching to C code
in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores.  Therefore, the following outcome is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order.  This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering.  It does
-not- ensure that any particular value will be read.  Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.
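
For instance, here is a sketch of the classic "independent reads of
independent writes" (IRIW) pattern.  With the general barriers shown,
CPUs 3 and 4 cannot disagree about the order in which CPU 1's and
CPU 2's stores became visible:

	CPU 1		CPU 2		CPU 3			CPU 4
	===============	===============	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1	STORE Y=1	r1=LOAD X (reads 1)	r3=LOAD Y (reads 1)
					<general barrier>	<general barrier>
					r2=LOAD Y		r4=LOAD X

That is, the outcome r1 == 1 && r2 == 0 && r3 == 1 && r4 == 0 is
forbidden, precisely because general barriers restore agreement on the
combined order of the stores.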


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
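
As a minimal sketch of the second property (the flag name here is
hypothetical), a busy-wait loop can use barrier() to force the flag to
be reloaded from memory on every pass:

	while (!stop_requested)
		barrier();

Without the barrier(), the compiler could legitimately hoist the load
out of the loop and spin forever on a stale value.  In new code,
READ_ONCE(stop_requested) would be the more selective way to get the
same effect.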

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective: With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.
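
     As a sketch of the barrier()-based alternative just described,
     process_level() could instead pin the compile-time order of its two
     plain stores like this (remember that barrier(), like READ_ONCE()
     and WRITE_ONCE(), constrains only the compiler, not the CPU; that
     suffices here because interrupt_handler() runs on the same CPU):

	void process_level(void)
	{
		msg = get_message();
		barrier();	/* no compiler reordering across this */
		flag = true;
	}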

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect
when their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has seven basic CPU memory barriers:

	TYPE			MANDATORY	SMP CONDITIONAL
	=======================	===============	===============
	GENERAL			mb()		smp_mb()
	WRITE			wmb()		smp_wmb()
	READ			rmb()		smp_rmb()
	ADDRESS DEPENDENCY			READ_ONCE()


All memory barriers except the address-dependency barriers imply a compiler
barrier.  Address dependencies do not impose any additional compiler ordering.

Aside: In the case of address dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]); however, there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
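
For example, a minimal sketch of the usual SMP barrier pairing (the
variable names are hypothetical): the producer's smp_wmb() pairs with
the consumer's smp_rmb(), so a consumer that observes 'ready' set is
also guaranteed to observe the payload:

	CPU 1 (producer)		CPU 2 (consumer)
	=======================	=======================
	WRITE_ONCE(data, 42);	while (!READ_ONCE(ready))
	smp_wmb();			cpu_relax();
	WRITE_ONCE(ready, 1);	smp_rmb();
				r1 = READ_ONCE(data); /* sees 42 */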

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
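
     For instance, a sketch of the classic store-buffering pattern (the
     names are hypothetical): because each smp_store_mb() places a full
     barrier between its store and the subsequent load, at least one of
     the two loads below must observe the other CPU's store:

	CPU 1				CPU 2
	===============		===============
	smp_store_mb(X, 1);		smp_store_mb(Y, 1);
	r1 = READ_ONCE(Y);		r2 = READ_ONCE(X);

     The outcome r1 == 0 && r2 == 0 is therefore forbidden.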


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier.  Examples of
     atomic RMW functions that do not imply a memory barrier are add,
     subtract, (failed) conditional operations, and the _relaxed functions,
     but not atomic_read or atomic_set.  A common example where a memory
     barrier may be required is when atomic ops are used for reference
     counting.

     These are also used for atomic RMW bitop functions that do not imply a
     memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();
 (*) dma_mb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.  See Documentation/core-api/dma-api.rst for more
     information about consistent memory.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* Make descriptor status visible to the device followed by
		 * notify device of new descriptor
		 */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee that the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The dma_mb() implies both a dma_rmb() and
     a dma_wmb().

     Note that the dma_*() barriers do not provide any ordering guarantees for
     accesses to MMIO regions.  See the later "KERNEL I/O BARRIER EFFECTS"
     subsection for more information about I/O accessors and MMIO ordering.

 (*) pmem_wmb();

     This is for use with persistent memory to ensure that stores whose
     modifications are written to persistent storage have reached a
     platform durability domain.

     For example, after a non-temporal write to a pmem region, we use
     pmem_wmb() to ensure that stores have reached a platform durability
     domain.  This ensures that stores have updated persistent storage
     before any data access or data transfer caused by subsequent
     instructions is initiated.  This is in addition to the ordering done
     by wmb().

     For loads from persistent memory, existing read memory barriers are
     sufficient to ensure read ordering.
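
     As a sketch (the buffer names and length are hypothetical, and this
     assumes a directly mapped pmem region), a cache-flushing copy
     followed by pmem_wmb() makes the copied bytes durable before
     anything that depends on them proceeds:

	memcpy_flushcache(pmem_dst, src, len);	/* flushed/non-temporal stores */
	pmem_wmb();	/* stores now within the platform durability domain */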

 (*) io_stop_wc();

     For memory accesses with write-combining attributes (e.g. those returned
     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
     subsequent ones.  io_stop_wc() can be used to prevent the merging of
     write-combining memory accesses before this macro with those after it when
     such wait has performance implications.
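
     A minimal sketch (the mapping and offsets are hypothetical),
     assuming 'wc_base' was obtained from ioremap_wc(): the io_stop_wc()
     keeps the first write-combined store from being merged with the
     second:

	writel_relaxed(a, wc_base + BUF0);	/* first burst */
	io_stop_wc();				/* no merging across this point */
	writel_relaxed(b, wc_base + BUF1);	/* second burst */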

===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers,
amongst which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal while asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
	*A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
	*A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
	*B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
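
A minimal sketch of this point (the variable names are hypothetical):
disabling interrupts provides no ordering as seen by other CPUs, so any
required SMP ordering must be supplied explicitly:

	local_irq_save(flags);
	WRITE_ONCE(shared_a, 1);
	smp_wmb();	/* still needed: IRQ disabling orders nothing for SMP */
	WRITE_ONCE(shared_b, 1);
	local_irq_restore(flags);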


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();

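For reference, the first of these canned forms would replace the whole
sleeper loop shown above with a single call (using the names from that
example):

	wait_event(event_wait_queue, event_indicated);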

Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it.  The barrier occurs before the task state
is accessed, in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

	CPU 1 (Sleeper)			CPU 2 (Waker)
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  ...
	    <general barrier>		  <general barrier>
	LOAD event_indicated		  if ((LOAD task->state) & TASK_NORMAL)
					    STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up()
if something is actually awakened, but otherwise there is no such guarantee.
To see this, consider the following sequence of events, where X and Y are both
initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				Y = 1;
	smp_mb();			wake_up();
	LOAD Y				LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier.  The barrier again
occurs before the task state is accessed.  In particular, if the wake_up() in
the previous snippet were replaced by a call to wake_up_process() then one of
the two loads would be guaranteed to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
	down_xxx()
	Queue waiter
	Sleep
					up_yyy()
					LOAD waiter->task;
					STORE waiter->task;
	Woken up by other event
	<preempt>
	Resume processing
	down_xxx() returns
	call foo()
	foo() clobbers *waiter
	</preempt>
					LOAD waiter->list.next;
					--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

See Documentation/atomic_t.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  While this, for the most part, renders the explicit
use of memory barriers unnecessary, if the accessor functions are used to refer
to an I/O memory window with relaxed memory access properties, then _mandatory_
memory barriers are required to enforce ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  While the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA

If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.

==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and device
specific.  Therefore, drivers which are inherently non-portable may rely on
specific behaviours of their target systems in order to achieve synchronization
in the most lightweight manner possible.  For drivers intending to be portable
between multiple architectures and bus implementations, the kernel offers a
series of accessor functions that provide various degrees of ordering
guarantees:

 (*) readX(), writeX():

	The readX() and writeX() MMIO accessors take a pointer to the
	peripheral being accessed as an __iomem * parameter.  For pointers
	mapped with the default I/O attributes (e.g. those returned by
	ioremap()), the ordering guarantees are as follows:

	1. All readX() and writeX() accesses to the same peripheral are ordered
	   with respect to each other.  This ensures that MMIO register accesses
	   by the same CPU thread to a particular device will arrive in program
	   order.

	2. A writeX() issued by a CPU thread holding a spinlock is ordered
	   before a writeX() to the same peripheral from another CPU thread
	   issued after a later acquisition of the same spinlock.  This ensures
	   that MMIO register writes to a particular device issued while holding
	   a spinlock will arrive in an order consistent with acquisitions of
	   the lock.

	3. A writeX() by a CPU thread to the peripheral will first wait for the
	   completion of all prior writes to memory either issued by, or
	   propagated to, the same thread.  This ensures that writes by the CPU
	   to an outbound DMA buffer allocated by dma_alloc_coherent() will be
	   visible to a DMA engine when the CPU writes to its MMIO control
	   register to trigger the transfer.

	4. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent reads from memory by the same thread can begin.  This
	   ensures that reads by the CPU from an incoming DMA buffer allocated
	   by dma_alloc_coherent() will not see stale data after reading from
	   the DMA engine's MMIO status register to establish that the DMA
	   transfer has completed.

	5. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent delay() loop can begin execution on the same thread.
	   This ensures that two MMIO register writes by the CPU to a peripheral
	   will arrive at least 1us apart if the first write is immediately read
	   back with readX() and udelay(1) is called prior to the second
	   writeX():

		writel(42, DEVICE_REGISTER_0); // Arrives at the device...
		readl(DEVICE_REGISTER_0);
		udelay(1);
		writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

	The ordering properties of __iomem pointers obtained with non-default
	attributes (e.g. those returned by ioremap_wc()) are specific to the
	underlying architecture and therefore the guarantees listed above cannot
	generally be relied upon for accesses to these types of mappings.

 (*) readX_relaxed(), writeX_relaxed():

	These are similar to readX() and writeX(), but provide weaker memory
	ordering guarantees.  Specifically, they do not guarantee ordering with
	respect to locking, normal memory accesses or delay() loops (i.e.
	bullets 2-5 above) but they are still guaranteed to be ordered with
	respect to other accesses from the same CPU thread to the same
	peripheral when operating on __iomem pointers mapped with the default
	I/O attributes.
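
	For instance, a sketch of a driver fast path (the register offsets
	are hypothetical): the relaxed accessors suffice for back-to-back
	register writes to a single device, with a final non-relaxed
	writel() where ordering against a prior DMA-buffer fill in normal
	memory is required:

		writel_relaxed(lo, base + REG_ADDR_LO);	/* ordered vs next write:
							   same CPU, same peripheral */
		writel_relaxed(hi, base + REG_ADDR_HI);
		writel(START_DMA, base + REG_CTRL);	/* also waits for prior writes
							   to the DMA buffer in memory */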

 (*) readsX(), writesX():

	The readsX() and writesX() MMIO accessors are designed for accessing
	register-based, memory-mapped FIFOs residing on peripherals that are not
	capable of performing DMA.  Consequently, they provide only the ordering
	guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

 (*) inX(), outX():

	The inX() and outX() accessors are intended to access legacy port-mapped
	I/O peripherals, which may require special instructions on some
	architectures (notably x86).  The port number of the peripheral being
	accessed is passed as an argument.

	Since many CPU architectures ultimately access these peripherals via an
	internal virtual memory mapping, the portable ordering guarantees
	provided by inX() and outX() are the same as those provided by readX()
	and writeX() respectively when accessing a mapping with the default I/O
	attributes.

	Device drivers may expect outX() to emit a non-posted write transaction
	that waits for a completion response from the I/O peripheral before
	returning.  This is not guaranteed by all architectures and is therefore
	not part of the portable ordering semantics.

 (*) insX(), outsX():

	As above, the insX() and outsX() accessors provide the same ordering
	guarantees as readsX() and writesX() respectively when accessing a
	mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

	These will perform appropriately for the type of access they're actually
	doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on big-endian
architectures.


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until at such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
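
As a sketch (the device, buffer and length are hypothetical), drivers
normally get these flushes and invalidations for free by using the
streaming DMA API rather than doing cache maintenance by hand:

	addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	/* ... device DMAs into buf ... */
	dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
	/* the CPU may now safely read buf */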

See Documentation/core-api/cachetlb.rst for more information on cache
management.
108b42b4
DH
2773
2774
CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that is assigned different properties from
the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case; rather, the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
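
In practice, drivers usually sidestep the flush by keeping device-visible
structures in coherent (uncached) DMA memory, after which ordering alone
suffices.  A sketch, with hypothetical descriptor and register names:

	/* Publish a descriptor, then ring the device doorbell (sketch).
	 * 'desc' lives in memory from dma_alloc_coherent(), so no cache
	 * flush is needed; the wmb() prevents the descriptor stores from
	 * being reordered after the MMIO write below.  On most
	 * architectures writel() itself also provides this ordering. */
	desc->addr = cpu_to_le64(buf_dma);
	desc->len  = cpu_to_le32(len);
	wmb();
	writel(tail_index, ioaddr + DOORBELL);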


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance, with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
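
If those accesses must actually be emitted, for example because *A is shared
with an interrupt handler or another CPU, READ_ONCE() and WRITE_ONCE() force
the compiler to perform them:

	WRITE_ONCE(*A, Y);	/* the store must be emitted */
	Z = READ_ONCE(*A);	/* the load must be emitted */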


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the address-dependency barrier really becomes necessary, as this synchronises
both caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
reduced its impact on the memory model.
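
Concretely, a sketch of the consumer side of the usual pointer-publication
idiom (gp and the foo structure are made up for illustration):

	/* On Alpha, the barrier folded into READ_ONCE() orders the load
	 * of the pointer against the later load through it; on other
	 * architectures the address dependency alone is enough. */
	struct foo *p = READ_ONCE(gp);
	if (p)
		d = p->data;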


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to the smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.
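
For instance, a guest publishing a buffer index to a (possibly SMP) host might
do the following (a sketch; the ring and index names are made up):

	/* Guest side: fill the ring entry, then publish the new index.
	 * virt_wmb() orders the two stores as seen by the host even if
	 * this kernel was built without CONFIG_SMP. */
	ring[idx] = buf_addr;
	virt_wmb();
	WRITE_ONCE(*avail_idx, idx + 1);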


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/core-api/circular-buffers.rst

for details.
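
As a taster, the producer side of that scheme boils down to something like the
following (condensed and lightly adapted from that document; the buffer is
assumed to carry its own power-of-two size):

	unsigned long head, tail;

	spin_lock(&producer_lock);

	head = buffer->head;
	/* The spin_unlock() and next spin_lock() provide needed ordering. */
	tail = READ_ONCE(buffer->tail);

	if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
		produce_item(&buffer->buf[head]);	/* write the item */

		/* Commit the item before publishing the new head. */
		smp_store_release(&buffer->head,
				  (head + 1) & (buffer->size - 1));
	}

	spin_unlock(&producer_lock);

The consumer pairs this with smp_load_acquire(&buffer->head), ensuring the
item's contents are visible before it reads them.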


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access