============================
LINUX KERNEL MEMORY BARRIERS
============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask. Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/. Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations. In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained. Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1; B == 2 }
        A = 3;                x = B;
        B = 4;                y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
        STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
        STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
        STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
        STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
        STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
        STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;                Q = P;
        P = &B;               D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2. At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important. For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

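One way to avoid this on Linux, sketched here on the assumption that the two
ports are memory-mapped registers obtained via ioremap() (the base, size and
register offsets below are hypothetical), is to use the readl()/writel()
accessors, which are guaranteed to be ordered with respect to each other for
accesses to the same peripheral:

        void __iomem *regs = ioremap(base, size);      /* hypothetical mapping */

        writel(5, regs + A_OFFSET);     /* select internal register 5 */
        x = readl(regs + D_OFFSET);     /* ordered after the address write */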

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself. This means that for:

        Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order. However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

        Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU. This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE(). Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given. This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded. This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences. Do not attempt to use bitfields to synchronize parallel
     algorithms (see the sketch after this list).

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock. If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables. "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long". "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively. Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6). The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.

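As an illustration of the first two anti-guarantees (a hypothetical sketch,
not taken from kernel code), consider a structure in which two bitfields are
protected by different locks:

        struct foo {
                int a : 4;      /* protected by lock_a */
                int b : 4;      /* BUG: protected by lock_b, but shares a
                                 * memory location with 'a' */
        };

Because the compiler may implement each store as a non-atomic
read-modify-write of the word containing both fields, a CPU updating 'a'
under lock_a can race with a CPU updating 'b' under lock_b, and one update
can silently overwrite the other.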

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions. They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching. Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses. All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.

 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier. In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated after the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive. A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency. If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required. See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.

 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.

And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier. It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.

 (6) RELEASE operations.

     This also acts as a one-way permeable barrier. It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier. In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier. However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible. In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions. For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

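For example, here is a minimal message-passing sketch built from the
operations described above (the variable names are illustrative, and r1 and
r2 stand for per-CPU local variables):

        CPU 1                           CPU 2
        =============================== ===============================
        WRITE_ONCE(msg, 42);
        smp_store_release(&ready, 1);   r1 = smp_load_acquire(&ready);
                                        if (r1)
                                                r2 = READ_ONCE(msg);

If CPU 2's ACQUIRE load observes CPU 1's RELEASE store (r1 == 1), then r2 is
guaranteed to be 42: the RELEASE keeps the store to msg from appearing to
happen after the store to ready, and the ACQUIRE keeps the load from msg from
appearing to happen before the load from ready.
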
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system. The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses. CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/driver-api/pci/pci.rst
            Documentation/DMA-API-HOWTO.txt
            Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS (HISTORICAL)
-------------------------------------

As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
added to READ_ONCE(), which means that about the only people who
need to pay attention to this section are those working on DEC Alpha
architecture-specific code and those working on READ_ONCE() itself.
For those who need it, and for those who are interested in the history,
here is the story of data-dependency barriers.

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines. The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line. Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


A data-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes
until they are certain (1) that the write will actually happen, (2)
of the location of the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.txt file: The compiler can and does
break dependencies in a great many highly creative ways.

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into
Q with the store into *Q. In other words, this outcome is prohibited,
even without a data-dependency barrier:

        (Q == &B) && (B == 4)

Please note that this pattern should be rare. After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes. This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by a data dependency is local to
the CPU containing it. See the section on "Multicopy atomicity" for
more information.


The data dependency barrier is very important to the RCU system,
for example. See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h. This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
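
A minimal sketch of that usage (assuming a hypothetical RCU-protected global
pointer 'gp' and a hypothetical reader function do_something()):

        /* Updater */                           /* Reader */
        p = kmalloc(sizeof(*p), GFP_KERNEL);    rcu_read_lock();
        p->a = 1;                               q = rcu_dereference(gp);
        p->b = 2;                               if (q)
        rcu_assign_pointer(gp, p);                      do_something(q->a, q->b);
                                                rcu_read_unlock();

rcu_assign_pointer() provides the write barrier and rcu_dereference() the
dependency ordering, so a reader that sees the new pointer is also guaranteed
to see the initialised values of 'a' and 'b'.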

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them. The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly. Consider the
following bit of code:

        q = READ_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a. In such a
case what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        }

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
is optional! Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'. It is
tempting to add a barrier(), but this does not help. The conditional
is gone, and the barrier won't bring it back. Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code. More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do
not necessarily apply to code following the if-statement:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition. Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'. The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it. See the section on "Multicopy atomicity"
for more information.


In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything. If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores. Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load. If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence. Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the COMPILER BARRIER section for more information.

 (*) Control dependencies apply only to the then-clause and else-clause
     of the if-statement containing the control dependency, including
     any functions that these two clauses call. Control dependencies
     do -not- apply to code following the if-statement containing the
     control dependency.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide multicopy atomicity. If you
     need all the CPUs to see a given store at the same time, use smp_mb().

 (*) Compilers do not understand control dependencies. It is therefore
     your job to ensure that they do not break your code.


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity. An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers. A write barrier pairs
with a data dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier. Similarly a
read barrier, control dependency, or a data dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);     x = READ_ONCE(b);
                              <read barrier>
                              y = READ_ONCE(a);

Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);    x = READ_ONCE(b);
                              <data dependency barrier>
                              y = *x;

Or even:

        CPU 1                 CPU 2
        ===============       ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
                                 <implicit control dependency>
                                 WRITE_ONCE(y, 1);
                              }

        assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads. Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   ddddddddddddddddd   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads. Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2. No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet. This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used. If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->   --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible. However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called "other multicopy atomicity"
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs. The remainder of this document discusses this
weaker form, but for brevity will call it simply "multicopy atomicity".

The following example demonstrates multicopy atomicity:

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
                                <general barrier>       <read barrier>
                                STORE Y=r1              LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1. This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y. In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X. The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value. However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity. In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.
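
Written as C code in the style of the functions shown later in this section
(a sketch; as in those examples, r1, r2 and r3 stand for per-CPU local
variables), the test looks something like this:

        void cpu1(void)
        {
                WRITE_ONCE(X, 1);
        }

        void cpu2(void)
        {
                r1 = READ_ONCE(X);
                smp_mb();
                WRITE_ONCE(Y, r1);
        }

        void cpu3(void)
        {
                r2 = READ_ONCE(Y);
                smp_rmb();
                r3 = READ_ONCE(X);
        }

Here the guarantee just described is that the outcome (r1 == 1 && r2 == 1 &&
r3 == 0) is prohibited: CPU 2's general barrier (smp_mb()) compensates for
any lack of multicopy atomicity.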

However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity. For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
                                <data dependency>       <read barrier>
                                STORE Y=r1              LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store. Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes. General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations. In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses. For example, switching to C code
in deference to the ghost of Herman Hollerith:

        int u, v, x, y, z;

        void cpu0(void)
        {
                r0 = smp_load_acquire(&x);
                WRITE_ONCE(u, 1);
                smp_store_release(&y, 1);
        }

        void cpu1(void)
        {
                r1 = smp_load_acquire(&y);
                r4 = READ_ONCE(v);
                r5 = READ_ONCE(u);
                smp_store_release(&z, 1);
        }

        void cpu2(void)
        {
                r2 = smp_load_acquire(&z);
                smp_store_release(&x, 1);
        }

        void cpu3(void)
        {
                WRITE_ONCE(v, 1);
                smp_mb();
                r3 = READ_ONCE(u);
        }

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

        r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

        r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores. Therefore, the following outcome is possible:

        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order. This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases. This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering. It does
-not- ensure that any particular value will be read. Therefore, the
following outcome is possible:

        r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.


108b42b4
DH
========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.

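As an illustration of the second property, consider this minimal sketch
(the 'flag' variable and its interrupt-handler writer are hypothetical):

        static int flag;        /* set to 1 by an interrupt handler */

        while (!flag)
                barrier();      /* forces 'flag' to be re-read each pass */

Without the barrier(), the compiler would be within its rights to load
'flag' once, cache it in a register, and spin forever on the stale value.
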
The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code. Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable. This means that
     the following code:

        a[0] = x;
        a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

        a[0] = READ_ONCE(x);
        a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable. Such merging can cause the compiler to "optimize"
     the following code:

        while (tmp = a)
                do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

        if (tmp = a)
                for (;;)
                        do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers. The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

        while (tmp = a)
                do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

        while (a)
                do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack. The overhead of this saving and later restoring
     is why compilers reload variables. Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be. For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

        while (tmp = a)
                do_something_with(tmp);

     Into this:

        do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch. The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'. If variable 'a' is shared, then the
     compiler's proof will be erroneous. Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE(). For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

        while ((tmp = READ_ONCE(a)) % MAX)
                do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence. (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables. For example, suppose you have
     the following:

        a = 0;
        ... Code that does not store to variable a ...
        a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store. This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

        WRITE_ONCE(a, 0);
        ... Code that does not store to variable a ...
        WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to. For example, consider the following interaction
     between process-level code and an interrupt handler:

        void process_level(void)
        {
                msg = get_message();
                flag = true;
        }

        void interrupt_handler(void)
        {
                if (flag)
                        process_message(msg);
        }

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

        void process_level(void)
        {
                flag = true;
                msg = get_message();
        }

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
     to prevent this as follows:

        void process_level(void)
        {
                WRITE_ONCE(msg, get_message());
                WRITE_ONCE(flag, true);
        }

        void interrupt_handler(void)
        {
                if (READ_ONCE(flag))
                        process_message(READ_ONCE(msg));
        }

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes. (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective: With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers. Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

        if (a)
                b = a;
        else
                b = 42;

     The compiler might save a branch by optimizing this as follows:

        b = 42;
        if (a)
                b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch. Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

        if (a)
                WRITE_ONCE(b, a);
        else
                WRITE_ONCE(b, 42);

     The compiler can also invent loads. These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability. Use READ_ONCE() to prevent
     invented loads (see the sketch after this list).

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

        p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store. In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

        WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...

        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores. This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

        foo2.a = foo1.a;
        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
        foo2.c = foo1.c;

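As promised above, here is a sketch of the invented-load hazard (the
variable names are illustrative). Absent READ_ONCE(), the compiler may
emit a separate load of 'a' for each use, so the two tests and the call
can each observe a different value:

        if (a > 0 && a < 10)
                do_something_with(a);   /* 'a' may be loaded three times */

        tmp = READ_ONCE(a);             /* one pinned-down load... */
        if (tmp > 0 && tmp < 10)
                do_something_with(tmp); /* ...used consistently throughout */
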
All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile. For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has the following basic CPU memory barriers:

        TYPE            MANDATORY               SMP CONDITIONAL
        =============== ======================= ===========================
        GENERAL         mb()                    smp_mb()
        WRITE           wmb()                   smp_wmb()
        READ            rmb()                   smp_rmb()
        DATA DEPENDENCY                         READ_ONCE()


All memory barriers except the data dependency barriers imply a compiler
barrier. Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler will not speculate the value of b
(eg. guess that it is equal to 1) and load a[b] before b (eg.
tmp = a[1]; if (b != 1) tmp = a[b]; ). There is also the problem of a
compiler reloading b after having loaded a[b], thus having a newer copy
of b than a[b]. A consensus has not yet been reached about these
problems, however the READ_ONCE() macro is a good place to start looking.

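For example, in the following sketch the address dependency is headed by
READ_ONCE(), which (as of v4.15, see below) also supplies the data
dependency barrier, so the two loads are ordered even on DEC Alpha
(assuming 'p' points to initialized data):

        q = READ_ONCE(p);
        d = *q;                 /* ordered after the load of p */
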
SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems. They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows. These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it. It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation. (See the sketch after this
     list for the rough equivalence this implies.)

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier. Examples of
     atomic RMW functions that do not imply a memory barrier are, e.g.,
     add, subtract, (failed) conditional operations and _relaxed
     functions, but not atomic_read or atomic_set. A common example
     where a memory barrier may be required is when atomic ops are used
     for reference counting.

     These are also used for atomic RMW bitop functions that do not imply a
     memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

        if (desc->status != DEVICE_OWN) {
                /* do not read data until we own descriptor */
                dma_rmb();

                /* read/modify data */
                read_data = desc->data;
                desc->data = write_data;

                /* flush modifications before status update */
                dma_wmb();

                /* assign ownership */
                desc->status = DEVICE_OWN;

                /* notify device of new descriptors */
                writel(DESC_NOTIFY, doorbell);
        }

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership. Note that, when using writel(), a prior
     wmb() is not needed to guarantee that the cache coherent memory writes
     have completed before writing to the MMIO region. The cheaper
     writel_relaxed() does not provide this guarantee and must not be used
     here.

     See the subsection "Kernel I/O barrier effects" for more information on
     relaxed I/O accessors and the Documentation/DMA-API.txt file for more
     information on consistent memory.

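As promised above, a rough sketch of what the smp_store_mb() guarantee
implies: under the minimum guarantees stated in this document, the first
form below is broadly equivalent in ordering effect to the second
(illustrative only, not a definition of the implementation):

        smp_store_mb(var, value);

        WRITE_ONCE(var, value);
        smp_mb();
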
===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers,
amongst which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct. These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal while asleep waiting for the lock to become available. Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

        *A = a;
        ACQUIRE M
        RELEASE M
        *B = b;

may occur as:

        ACQUIRE M, STORE *B, STORE *A, RELEASE M

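To make this concrete with real primitives (an illustrative sketch; 'm'
is an ordinary spinlock), the CPU may let both WRITE_ONCE()s migrate
into the critical section and cross there:

        WRITE_ONCE(*A, a);
        spin_lock(&m);          /* ACQUIRE: *A may slip in after this */
        spin_unlock(&m);        /* RELEASE: *B may slip in before this */
        WRITE_ONCE(*B, b);
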
When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock. In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier. Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

        *A = a;
        RELEASE M
        ACQUIRE N
        *B = b;

could occur as:

        ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

        Why does this work?

        One key point is that we are only talking about the CPU doing
        the reordering, not the compiler. If the compiler (or, for
        that matter, the developer) switched the operations, deadlock
        -could- occur.

        But suppose the CPU reordered the operations. In this case,
        the unlock precedes the lock in the assembly code. The CPU
        simply elected to try executing the later lock operation first.
        If there is a deadlock, this lock operation will simply spin (or
        try to sleep, but more on that later). The CPU will eventually
        execute the unlock operation (which preceded the lock operation
        in the assembly code), which will unravel the potential deadlock,
        allowing the lock operation to succeed.

        But what if the lock is a sleeplock? In that case, the code will
        try to enter the scheduler, where it will eventually encounter
        a memory barrier, which will force the earlier unlock operation
        to complete, again unraveling the deadlock. There might be
        a sleep-unlock race, but the locking primitive needs to resolve
        such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

        *A = a;
        *B = b;
        ACQUIRE
        *C = c;
        *D = d;
        RELEASE
        *E = e;
        *F = f;

The following sequence of events is acceptable:

        ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

        [+] Note that {*F,*A} indicates a combined access.

But none of the following are:

        {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
        *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
        *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
        *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.

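As a sketch of what this means in practice (the 'shared' structure is
hypothetical), any ordering that other CPUs must observe has to come
from explicit barriers; the interrupt disable/enable pair contributes
nothing at run time:

        local_irq_save(flags);
        WRITE_ONCE(shared->data, x);
        smp_wmb();              /* this, not the IRQ operations, makes the
                                 * two stores visible in order elsewhere */
        WRITE_ONCE(shared->ready, 1);
        local_irq_restore(flags);
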

SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event. To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

        for (;;) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (event_indicated)
                        break;
                schedule();
        }

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

        CPU 1
        ===============================
        set_current_state();
          smp_store_mb();
            STORE current->state
            <general barrier>
        LOAD event_indicated

set_current_state() may be wrapped by:

        prepare_to_wait();
        prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

        wait_event();
        wait_event_interruptible();
        wait_event_interruptible_exclusive();
        wait_event_interruptible_timeout();
        wait_event_killable();
        wait_event_timeout();
        wait_on_bit();
        wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

        event_indicated = 1;
        wake_up(&event_wait_queue);

or:

        event_indicated = 1;
        wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it. The barrier occurs before the task state
is accessed, in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

        CPU 1 (Sleeper)                 CPU 2 (Waker)
        =============================== ===============================
        set_current_state();            STORE event_indicated
          smp_store_mb();               wake_up();
            STORE current->state          ...
            <general barrier>             <general barrier>
        LOAD event_indicated            if ((LOAD task->state) & TASK_NORMAL)
                                          STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up()
if something is actually awakened, but otherwise there is no such guarantee.
To see this, consider the following sequence of events, where X and Y are both
initially zero:

        CPU 1                           CPU 2
        =============================== ===============================
        X = 1;                          Y = 1;
        smp_mb();                       wake_up();
        LOAD Y                          LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1. If, on
the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier. The barrier again
occurs before the task state is accessed. In particular, if the wake_up() in
the previous snippet were replaced by a call to wake_up_process() then one of
the two loads would be guaranteed to see 1.

The available waker functions include:

        complete();
        wake_up();
        wake_up_all();
        wake_up_bit();
        wake_up_interruptible();
        wake_up_interruptible_all();
        wake_up_interruptible_nr();
        wake_up_interruptible_poll();
        wake_up_interruptible_sync();
        wake_up_interruptible_sync_poll();
        wake_up_locked();
        wake_up_locked_poll();
        wake_up_nr();
        wake_up_poll();
        wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state(). For instance, if the
sleeper does:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated)
                break;
        __set_current_state(TASK_RUNNING);
        do_something(my_data);

and the waker does:

        my_data = value;
        event_indicated = 1;
        wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data. In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses. Thus the above sleeper ought to do:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated) {
                smp_rmb();
                do_something(my_data);
        }

and the waker should do:

        my_data = value;
        smp_wmb();
        event_indicated = 1;
        wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

        CPU 1                           CPU 2
        =============================== ===============================
        WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
        ACQUIRE M                       ACQUIRE Q
        WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
        WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
        RELEASE M                       RELEASE Q
        WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs. It might, for example, see:

        *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

        *B, *C or *D preceding ACQUIRE M
        *A, *B or *C following RELEASE M
        *F, *G or *H preceding ACQUIRE Q
        *E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel. There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time. This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks. Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible. In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path. Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

        struct rw_semaphore {
                ...
                spinlock_t lock;
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know as to where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

        LOAD waiter->list.next;
        LOAD waiter->task;
        STORE waiter->task;
        CALL wakeup
        RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding. Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

        CPU 1                           CPU 2
        =============================== ===============================
                                        down_xxx()
                                        Queue waiter
                                        Sleep
        up_yyy()
        LOAD waiter->task;
        STORE waiter->task;
                                        Woken up by other event
        <preempt>
                                        Resume processing
                                        down_xxx() returns
                                        call foo()
                                        foo() clobbers *waiter
        </preempt>
        LOAD waiter->list.next;
        --- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

        LOAD waiter->list.next;
        LOAD waiter->task;
        smp_mb();
        STORE waiter->task;
        CALL wakeup
        RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system. It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU. Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

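For instance (an illustrative sketch; free_object() is a hypothetical
helper), a value-returning atomic RMW operation such as
atomic_dec_and_test() implies a full memory barrier on either side of
the operation, whereas a non-value-returning one such as atomic_inc()
implies no barrier at all:

        atomic_inc(&obj->ref_count);            /* no implied barrier */

        if (atomic_dec_and_test(&obj->ref_count))
                free_object(obj);       /* fully ordered against the
                                         * caller's earlier accesses */
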
See Documentation/atomic_t.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations. To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential. While this, for the most part, renders the explicit
use of memory barriers unnecessary, if the accessor functions are used to refer
to an I/O memory window with relaxed memory access properties, then _mandatory_
memory barriers are required to enforce ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver. While the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register. If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

        LOCAL IRQ DISABLE
        writew(ADDR, 3);
        writew(DATA, y);
        LOCAL IRQ ENABLE
        <interrupt>
        writew(ADDR, 4);
        q = readw(DATA);
        </interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

        STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other. If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and device
specific. Therefore, drivers which are inherently non-portable may rely on
specific behaviours of their target systems in order to achieve synchronization
in the most lightweight manner possible. For drivers intending to be portable
between multiple architectures and bus implementations, the kernel offers a
series of accessor functions that provide various degrees of ordering
guarantees:

 (*) readX(), writeX():

        The readX() and writeX() MMIO accessors take a pointer to the
        peripheral being accessed as an __iomem * parameter. For pointers
        mapped with the default I/O attributes (e.g. those returned by
        ioremap()), the ordering guarantees are as follows:

        1. All readX() and writeX() accesses to the same peripheral are ordered
           with respect to each other. This ensures that MMIO register accesses
           by the same CPU thread to a particular device will arrive in program
           order.

        2. A writeX() issued by a CPU thread holding a spinlock is ordered
           before a writeX() to the same peripheral from another CPU thread
           issued after a later acquisition of the same spinlock. This ensures
           that MMIO register writes to a particular device issued while holding
           a spinlock will arrive in an order consistent with acquisitions of
           the lock.

        3. A writeX() by a CPU thread to the peripheral will first wait for the
           completion of all prior writes to memory either issued by, or
           propagated to, the same thread. This ensures that writes by the CPU
           to an outbound DMA buffer allocated by dma_alloc_coherent() will be
           visible to a DMA engine when the CPU writes to its MMIO control
           register to trigger the transfer.

        4. A readX() by a CPU thread from the peripheral will complete before
           any subsequent reads from memory by the same thread can begin. This
           ensures that reads by the CPU from an incoming DMA buffer allocated
           by dma_alloc_coherent() will not see stale data after reading from
           the DMA engine's MMIO status register to establish that the DMA
           transfer has completed.

        5. A readX() by a CPU thread from the peripheral will complete before
           any subsequent delay() loop can begin execution on the same thread.
           This ensures that two MMIO register writes by the CPU to a peripheral
           will arrive at least 1us apart if the first write is immediately read
           back with readX() and udelay(1) is called prior to the second
           writeX():

                writel(42, DEVICE_REGISTER_0); // Arrives at the device...
                readl(DEVICE_REGISTER_0);
                udelay(1);
                writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

        The ordering properties of __iomem pointers obtained with non-default
        attributes (e.g. those returned by ioremap_wc()) are specific to the
        underlying architecture and therefore the guarantees listed above cannot
        generally be relied upon for accesses to these types of mappings.

 (*) readX_relaxed(), writeX_relaxed():

        These are similar to readX() and writeX(), but provide weaker memory
        ordering guarantees. Specifically, they do not guarantee ordering with
        respect to locking, normal memory accesses or delay() loops (i.e.
        bullets 2-5 above) but they are still guaranteed to be ordered with
        respect to other accesses from the same CPU thread to the same
        peripheral when operating on __iomem pointers mapped with the default
        I/O attributes. (See the sketch after this list for one consequence
        of the missing guarantee against normal memory accesses.)

 (*) readsX(), writesX():

        The readsX() and writesX() MMIO accessors are designed for accessing
        register-based, memory-mapped FIFOs residing on peripherals that are not
        capable of performing DMA. Consequently, they provide only the ordering
        guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

 (*) inX(), outX():

        The inX() and outX() accessors are intended to access legacy port-mapped
        I/O peripherals, which may require special instructions on some
        architectures (notably x86). The port number of the peripheral being
        accessed is passed as an argument.

        Since many CPU architectures ultimately access these peripherals via an
        internal virtual memory mapping, the portable ordering guarantees
        provided by inX() and outX() are the same as those provided by readX()
        and writeX() respectively when accessing a mapping with the default I/O
        attributes.

        Device drivers may expect outX() to emit a non-posted write transaction
        that waits for a completion response from the I/O peripheral before
        returning. This is not guaranteed by all architectures and is therefore
        not part of the portable ordering semantics.

 (*) insX(), outsX():

        As above, the insX() and outsX() accessors provide the same ordering
        guarantees as readsX() and writesX() respectively when accessing a
        mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

        These will perform appropriately for the type of access they're actually
        doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on big-endian
architectures.

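As promised above, a sketch of the consequence of the missing guarantee
against normal memory accesses (reusing the names from the dma_wmb()
example earlier): because writeX_relaxed() is not ordered against prior
writes to coherent memory, a driver kicking off DMA through a relaxed
accessor must supply the barrier that writel() would otherwise provide:

        desc->status = DEVICE_OWN;      /* coherent-memory write */
        wmb();                          /* order it before the doorbell... */
        writel_relaxed(DESC_NOTIFY, doorbell);  /* ...which writel() would
                                                 * have ordered implicitly */
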

========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself. Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect. For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        | CPU    |    | Memory |  :   | CPU    |    |           |    |        |
        | Core   |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        | CPU    |    | Memory |  :   | CPU    |    |           |--->| Device |
        | Core   |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    +--------+
                                  :                 |           |
                                  :                 +-----------+
                                  :
                                  :

2685Although any particular load or store may not actually appear outside of the
2686CPU that issued it since it may have been satisfied within the CPU's own cache,
2687it will still appear as if the full memory access had taken place as far as the
2688other CPUs are concerned since the cache coherency mechanisms will migrate the
2689cacheline over to the accessing CPU and propagate the effects upon conflict.
2690
2691The CPU core may execute instructions in any order it deems fit, provided the
2692expected program causality appears to be maintained. Some of the instructions
2693generate load and store operations which then go into the queue of memory
2694accesses to be performed. The core may place these in the queue in any order
2695it wishes, and continue execution until it is forced to wait for an instruction
2696to complete.
2697
2698What memory barriers are concerned with is controlling the order in which
2699accesses cross from the CPU side of things to the memory side of things, and
2700the order in which the effects are perceived to happen by the other observers
2701in the system.
2702
2703[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2704their own loads and stores as if they had happened in program order.
2705
2706[!] MMIO or other device accesses may bypass the cache system. This depends on
2707the properties of the memory window through which devices are accessed and/or
2708the use of any special device communication instructions the CPU may have.
2709
2710
2711CACHE COHERENCY
2712---------------
2713
2714Life isn't quite as simple as it may appear above, however: for while the
2715caches are expected to be coherent, there's no guarantee that that coherency
806654a9 2716will be ordered. This means that while changes made on one CPU will
108b42b4
DH
2717eventually become visible on all CPUs, there's no guarantee that they will
2718become apparent in the same order on those other CPUs.
2719
2720
81fc6323
JP
2721Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2722has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
108b42b4
DH
2723
2724 :
2725 : +--------+
2726 : +---------+ | |
2727 +--------+ : +--->| Cache A |<------->| |
2728 | | : | +---------+ | |
2729 | CPU 1 |<---+ | |
2730 | | : | +---------+ | |
2731 +--------+ : +--->| Cache B |<------->| |
2732 : +---------+ | |
2733 : | Memory |
2734 : +---------+ | System |
2735 +--------+ : +--->| Cache C |<------->| |
2736 | | : | +---------+ | |
2737 | CPU 2 |<---+ | |
2738 | | : | +---------+ | |
2739 +--------+ : +--->| Cache D |<------->| |
2740 : +---------+ | |
2741 : +--------+
2742 :
2743
2744Imagine the system has the following properties:
2745
2746 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2747 resident in memory;
2748
2749 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2750 resident in memory;
2751
806654a9 2752 (*) while the CPU core is interrogating one cache, the other cache may be
108b42b4
DH
2753 making use of the bus to access the rest of the system - perhaps to
2754 displace a dirty cacheline or to do a speculative load;
2755
2756 (*) each cache has a queue of operations that need to be applied to that cache
2757 to maintain coherency with the rest of the system;
2758
2759 (*) the coherency queue is not flushed by normal loads to lines already
2760 present in the cache, even though the contents of the queue may
81fc6323 2761 potentially affect those loads.
108b42b4
DH
2762
2763Imagine, then, that two writes are made on the first CPU, with a write barrier
2764between them to guarantee that they will appear to reach that CPU's caches in
2765the requisite order:
2766
2767 CPU 1 CPU 2 COMMENT
2768 =============== =============== =======================================
2769 u == 0, v == 1 and p == &u, q == &u
2770 v = 2;
81fc6323 2771 smp_wmb(); Make sure change to v is visible before
108b42b4
DH
2772 change to p
2773 <A:modify v=2> v is now in cache A exclusively
2774 p = &v;
2775 <B:modify p=&v> p is now in cache B exclusively
2776
2777The write memory barrier forces the other CPUs in the system to perceive that
2778the local CPU's caches have apparently been updated in the correct order. But
81fc6323 2779now imagine that the second CPU wants to read those values:
108b42b4
DH
2780
2781 CPU 1 CPU 2 COMMENT
2782 =============== =============== =======================================
2783 ...
2784 q = p;
2785 x = *q;
2786
81fc6323 2787The above pair of reads may then fail to happen in the expected order, as the
806654a9 2788cacheline holding p may get updated in one of the second CPU's caches while
108b42b4
DH
2789the update to the cacheline holding v is delayed in the other of the second
2790CPU's caches by some other cache event:
2791
2792 CPU 1 CPU 2 COMMENT
2793 =============== =============== =======================================
2794 u == 0, v == 1 and p == &u, q == &u
2795 v = 2;
2796 smp_wmb();
2797 <A:modify v=2> <C:busy>
2798 <C:queue v=2>
79afecfa 2799 p = &v; q = p;
108b42b4
DH
2800 <D:request p>
2801 <B:modify p=&v> <D:commit p=&v>
e0edc78f 2802 <D:read p>
108b42b4
DH
2803 x = *q;
2804 <C:read *q> Reads from v before v updated in cache
2805 <C:unbusy>
2806 <C:commit v=2>
2807
806654a9 2808Basically, while both cachelines will be updated on CPU 2 eventually, there's
108b42b4
DH
2809no guarantee that, without intervention, the order of update will be the same
2810as that committed on CPU 1.
2811
2812
2813To intervene, we need to interpolate a data dependency barrier or a read
f28f0868
PM
2814barrier between the loads (which as of v4.15 is supplied unconditionally
2815by the READ_ONCE() macro). This will force the cache to commit its
2816coherency queue before processing any further requests:
108b42b4
DH
2817
	CPU 1           CPU 2           COMMENT
	=============== =============== =======================================
	                u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>  <C:busy>
	                <C:queue v=2>
	p = &v;         q = p;
	                <D:request p>
	<B:modify p=&v> <D:commit p=&v>
	                <D:read p>
	                smp_read_barrier_depends()
	                <C:unbusy>
	                <C:commit v=2>
	                x = *q;
	                <C:read *q>     Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
While most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for hardware coordination in the absence of memory barriers, which
permitted Alpha to sport higher CPU clock rates back in the day.  However,
please note that (again, as of v4.15) smp_read_barrier_depends() should not
be used except in Alpha arch-specific code and within the READ_ONCE() macro.
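
As a rough illustration of the discussion above, the pointer-publication
pattern might be written as follows (a minimal sketch, not taken from the
kernel sources; the variable names are illustrative):

	/* CPU 1: fill in the data, then publish a pointer to it. */
	v = 2;
	smp_wmb();              /* order the data before the pointer */
	WRITE_ONCE(p, &v);

	/* CPU 2: consume it. */
	q = READ_ONCE(p);       /* supplies the dependency barrier */
	x = *q;                 /* sees v == 2 whenever q == &v */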


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache management.
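
In practice, drivers rarely flush or invalidate caches by hand: the streaming
DMA mapping API does so as needed when a buffer is mapped and unmapped.  A
minimal sketch, assuming illustrative dev, buf and len variables:

	dma_addr_t handle;

	/* Mapping flushes the CPU caches for the buffer as needed. */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the device at 'handle' and run the transfer ... */

	/* For DMA_FROM_DEVICE, unmapping instead invalidates stale lines. */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);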


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
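
For most drivers the usual way to sidestep an explicit cache flush is to keep
device-visible data in coherent DMA memory and to use the writeX() accessors
for the MMIO access, since these are ordered after earlier writes to normal
memory.  A hedged sketch, with illustrative desc, dev and register names:

	desc->addr = cpu_to_le64(dma_handle);   /* store to coherent RAM */
	desc->len  = cpu_to_le32(len);

	/* writel() is ordered after the descriptor writes above. */
	writel(1, dev->mmio_base + DOORBELL);   /* kick the device */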


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation need never appear outside of the CPU.
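
By way of contrast, a brief sketch of what the *_ONCE() accessors buy here:
the compiler must emit every marked access and may not merge or discard them:

	WRITE_ONCE(*A, V);      /* both stores must now be emitted... */
	WRITE_ONCE(*A, W);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);      /* ...and the load must actually be performed */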


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary, as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
greatly reduced Alpha's impact on the memory model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.
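
For instance, a guest publishing an entry in a ring shared with the host might
look like this minimal sketch (the ring layout and names are illustrative,
loosely modelled on virtio, not a real interface definition):

	ring->desc[idx] = entry;        /* fill in the descriptor first... */

	virt_wmb();                     /* ...order it before the index... */

	WRITE_ONCE(ring->avail_idx, idx + 1);   /* ...which the host polls */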


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:

	Documentation/core-api/circular-buffers.rst

for details.
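
As a taste of what that document covers, a minimal sketch of the producer side
(buffer and item are illustrative names; CIRC_SPACE() is from linux/circ_buf.h
and assumes a power-of-two size):

	unsigned long head = buffer->head;
	unsigned long tail = READ_ONCE(buffer->tail);   /* consumer position */

	if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
		buffer->buf[head] = item;       /* write the data first... */

		/* ...then publish the new head; the release orders the two. */
		smp_store_release(&buffer->head,
				  (head + 1) & (buffer->size - 1));
	}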


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access