                         ============================
                         LINUX KERNEL MEMORY BARRIERS
                         ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete.  This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.
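
For instance, the classic message-passing question - may a reader that sees a
flag rely on also seeing the payload written before it? - can be posed to that
model mechanically.  The sketch below is modelled on the litmus tests shipped
in tools/memory-model/litmus-tests/ (see tools/memory-model/README for how to
run such tests with the herd7 tool); the variable names are illustrative, and
for this test the model's answer is that the final condition is never
satisfied:

        C MP+pooncerelease+poacquireonce

        {}

        P0(int *buf, int *flag)
        {
                WRITE_ONCE(*buf, 1);
                smp_store_release(flag, 1);
        }

        P1(int *buf, int *flag)
        {
                int r0;
                int r1;

                r0 = smp_load_acquire(flag);
                r1 = READ_ONCE(*buf);
        }

        exists (1:r0=1 /\ 1:r1=0)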

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Address-dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).

For example, consider the following sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = B;
        B = 4;          y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,      STORE B=4,      y=LOAD A->3,    x=LOAD B->4
        STORE A=3,      STORE B=4,      x=LOAD B->4,    y=LOAD A->3
        STORE A=3,      y=LOAD A->3,    STORE B=4,      x=LOAD B->4
        STORE A=3,      y=LOAD A->3,    x=LOAD B->2,    STORE B=4
        STORE A=3,      x=LOAD B->2,    STORE B=4,      y=LOAD A->3
        STORE A=3,      x=LOAD B->2,    y=LOAD A->3,    STORE B=4
        STORE B=4,      STORE A=3,      y=LOAD A->3,    x=LOAD B->4
        STORE B=4, ...
        ...

and can thus result in four different combinations of values:

        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3

Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B;         D = *Q;

There is an obvious address dependency here, as the value loaded into D depends
on the address retrieved from P by CPU 2.  At the end of the sequence, any of
the following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.

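Rendered as C for concreteness (a sketch only: cpu1() and cpu2() are
hypothetical thread bodies, and the plain accesses here carry none of the
marking or ordering discussed later in this document):

        int A = 1, B = 2, C = 3;
        int *P = &A, *Q = &C;
        int D;

        void cpu1(void)
        {
                B = 4;
                P = &B;
        }

        void cpu2(void)
        {
                Q = P;          /* may observe &A or &B... */
                D = *Q;         /* ...and D follows whatever Q points to */
        }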

DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

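In the kernel itself, such an access pair would normally use the MMIO
accessors rather than raw pointer dereferences.  A minimal sketch, assuming
regs was obtained from ioremap() and that ADDR_PORT and DATA_PORT are
hypothetical register offsets for this imaginary card:

        void __iomem *regs;     /* assumed: obtained from ioremap() */
        u32 x;

        writel(5, regs + ADDR_PORT);    /* select internal register 5 */
        x = readl(regs + DATA_PORT);    /* then read its value */

readl() and writel() to the same device are guaranteed to be ordered with
respect to each other, preserving the required STORE-then-LOAD sequence.
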

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

        Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A, Y = LOAD *B, STORE *D = Z
        X = LOAD *A, STORE *D = Z, Y = LOAD *B
        Y = LOAD *B, X = LOAD *A, STORE *D = Z
        Y = LOAD *B, STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A, Y = LOAD *B
        STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.  (See the sketch
     following this list.)

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

        memory location
                either an object of scalar type, or a maximal sequence
                of adjacent bit-fields all having nonzero width

                NOTE 1: Two threads of execution can update and access
                separate memory locations without interfering with
                each other.

                NOTE 2: A bit-field and an adjacent non-bit-field member
                are in separate memory locations. The same applies
                to two bit-fields, if one is declared inside a nested
                structure declaration and the other is not, or if the two
                are separated by a zero-length bit-field declaration,
                or if they are separated by a non-bit-field member
                declaration. It is not safe to concurrently update two
                bit-fields in the same structure if all members declared
                between them are also bit-fields, no matter what the
                sizes of those intervening bit-fields happen to be.

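To illustrate the first two anti-guarantees, consider this sketch (the
structure and locks are hypothetical):

        struct flags {
                spinlock_t lock_a;      /* intended to protect 'a' */
                spinlock_t lock_b;      /* intended to protect 'b' */
                int a : 4;
                int b : 4;              /* shares a machine word with 'a' */
        };

        /* CPU 1 */                     /* CPU 2 */
        spin_lock(&f->lock_a);          spin_lock(&f->lock_b);
        f->a++;                         f->b++;
        spin_unlock(&f->lock_a);        spin_unlock(&f->lock_b);

Each increment is compiled as a non-atomic read-modify-write of the machine
word holding both fields, so CPU 2's store back of that word can silently undo
CPU 1's update of 'a' (and vice versa), even though both locks are held.
Protecting both fields with a single lock, or making each field a separate
scalar, avoids this.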

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or
     address-dependency barriers; see the "SMP barrier pairing" subsection.

 (2) Address-dependency barriers (historical).

     An address-dependency barrier is a weaker form of read barrier.  In the
     case where two loads are performed such that the second depends on the
     result of the first (eg: the first load retrieves the address to which
     the second load will be directed), an address-dependency barrier would
     be required to make sure that the target of the second load is updated
     after the address obtained by the first load is accessed.

     An address-dependency barrier is a partial ordering on interdependent
     loads only; it is not required to have any effect on stores, independent
     loads or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  An address-dependency barrier issued by
     the CPU under consideration guarantees that for any load preceding it,
     if that load touches one of a sequence of stores from another CPU, then
     by the time the barrier completes, the effects of all the stores prior to
     that touched by the load will be perceptible to any loads issued after
     the address-dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have an _address_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that address-dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.

     [!] Kernel release v5.9 removed kernel APIs for explicit address-
     dependency barriers.  Nowadays, APIs for marking loads from shared
     variables such as READ_ONCE() and rcu_dereference() provide implicit
     address-dependency barriers.

 (3) Read (or load) memory barriers.

     A read barrier is an address-dependency barrier plus a guarantee that all
     the LOAD operations specified before the barrier will appear to happen
     before all the LOAD operations specified after the barrier with respect to
     the other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply address-dependency barriers, and so can
     substitute for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.

 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.

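     For example, the classic "store buffering" case needs a general barrier
     on each CPU (a sketch; x, y, r0 and r1 are assumed to be initially zero):

        CPU 1                   CPU 2
        ===============         ===============
        WRITE_ONCE(x, 1);       WRITE_ONCE(y, 1);
        smp_mb();               smp_mb();
        r0 = READ_ONCE(y);      r1 = READ_ONCE(x);

     With both smp_mb() invocations present, the outcome (r0 == 0 && r1 == 0)
     is forbidden; remove either one and that outcome becomes possible.
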
And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.

 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier.  In addition, a RELEASE+ACQUIRE pair is
     -not- guaranteed to act as a full memory barrier.  However, after an
     ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

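As an illustration, the following message-passing handshake relies only on a
RELEASE pairing with an ACQUIRE (a sketch; writer() and reader() are
hypothetical thread bodies):

        int data;
        int flag;

        void writer(void)
        {
                data = 42;                      /* ordered before the...  */
                smp_store_release(&flag, 1);    /* ...RELEASE of flag     */
        }

        void reader(void)
        {
                while (!smp_load_acquire(&flag))        /* ACQUIRE */
                        cpu_relax();
                BUG_ON(data != 42);     /* guaranteed by the pairing */
        }
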
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/driver-api/pci/pci.rst
            Documentation/core-api/dma-api-howto.rst
            Documentation/core-api/dma-api.rst


ADDRESS-DEPENDENCY BARRIERS (HISTORICAL)
----------------------------------------

As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
DEC Alpha, which means that about the only people who need to pay attention
to this section are those working on DEC Alpha architecture-specific code
and those working on READ_ONCE() itself.  For those who need it, and for
those who are interested in the history, here is the story of
address-dependency barriers.

[!] While address dependencies are observed in both load-to-load and
load-to-store relations, address-dependency barriers are not necessary
for load-to-store situations.

The requirement of address-dependency barriers is a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE_OLD(P);
                              D = *Q;

[!] READ_ONCE_OLD() corresponds to READ_ONCE() of pre-4.15 kernel, which
doesn't imply an address-dependency barrier.

There's a clear address dependency here, and it would seem that by the end of
the sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, READ_ONCE() provides an implicit address-dependency barrier
since kernel release v4.15:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <implicit address-dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).

An address-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes until they
are certain (1) that the write will actually happen, (2) of the location of
the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.rst file:  The compiler can and does break
dependencies in a great many highly creative ways.

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C = 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE_OLD(P);
                              WRITE_ONCE(*Q, 5);

Therefore, no address-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without an implicit address-dependency barrier of modern READ_ONCE():

        (Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by an address dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The address-dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
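
A minimal sketch of that pattern (the structure and functions here are
hypothetical, and error handling is elided):

        struct foo {
                int a;
        };
        static struct foo __rcu *gp;

        /* Updater: initialise the new object, then publish it. */
        void publish(struct foo *p)
        {
                p->a = 42;
                rcu_assign_pointer(gp, p);      /* RELEASE-style publication */
        }

        /* Reader: the address dependency orders the dereference. */
        int read_a(void)
        {
                struct foo *p;
                int ret = 0;

                rcu_read_lock();
                p = rcu_dereference(gp);        /* implicit addr-dep barrier */
                if (p)
                        ret = p->a;             /* sees a fully-initialised p */
                rcu_read_unlock();
                return ret;
        }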

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply an (implicit) address-dependency barrier to make it work correctly.
Consider the following bit of code:

        q = READ_ONCE(a);
        <implicit address-dependency barrier>
        if (q) {
                /* BUG: No address dependency!!! */
                p = READ_ONCE(b);
        }

This will not have the desired effect because there is no actual address
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a case
what's actually required is:

        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        }

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional!  Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }

Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:

        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at beginning of each leg of the "if" statement
      because, as shown by the example above, optimizing compilers can
      destroy the control dependency while respecting the letter of the
      barrier() law.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies apply only to the then-clause and else-clause
      of the if-statement containing the control dependency, including
      any functions that these two clauses call.  Control dependencies
      do -not- apply to code following the if-statement containing the
      control dependency.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide multicopy atomicity.  If you
      need all the CPUs to see a given store at the same time, use smp_mb().

  (*) Compilers do not understand control dependencies.  It is therefore
      your job to ensure that they do not break your code.


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with an address-dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or an address-dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);     x = READ_ONCE(b);
                              <read barrier>
                              y = READ_ONCE(a);

Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);    x = READ_ONCE(b);
                              <implicit address-dependency barrier>
                              y = *x;

Or even:

        CPU 1                 CPU 2
        ===============       ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
                                 <implicit control dependency>
                                 WRITE_ONCE(y, 1);
                              }

        assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the address-dependency barrier, and
vice versa:

        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);

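The first pairing above might be written out in C like this (a sketch;
produce() and consume() are hypothetical, and new code would more usually use
the smp_store_release()/smp_load_acquire() pair described earlier):

        int payload;
        int ready;

        void produce(void)
        {
                WRITE_ONCE(payload, 42);
                smp_wmb();              /* pairs with the smp_rmb() below */
                WRITE_ONCE(ready, 1);
        }

        void consume(void)
        {
                if (READ_ONCE(ready)) {
                        smp_rmb();      /* pairs with the smp_wmb() above */
                        BUG_ON(READ_ONCE(payload) != 42);
                }
        }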

EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |       +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V

Secondly, address-dependency barriers act as partial orderings on address-
dependent loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
          Apparently incorrect --->    |        | B->7  |------>|       |
          perception of B (!)          |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
          The load of X holds --->      \       | X->9  |------>|       |
          up the maintenance             \      +-------+       |       |
          of coherence of B               ----->| B->2  |       +-------+
                                                +-------+
                                                :       :

In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, an address-dependency barrier were to be placed between the load
of C and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <address-dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   aaaaaaaaaaaaaaaaa   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+

And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :

If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+

1195 | To illustrate this more completely, consider what could happen if the code | |
1196 | contained a load of A either side of the read barrier: | |
1197 | ||
1198 | CPU 1 CPU 2 | |
1199 | ======================= ======================= | |
1200 | { A = 0, B = 9 } | |
1201 | STORE A=1 | |
1202 | <write barrier> | |
1203 | STORE B=2 | |
1204 | LOAD B | |
1205 | LOAD A [first load of A] | |
1206 | <read barrier> | |
1207 | LOAD A [second load of A] | |
1208 | ||
1209 | Even though the two loads of A both occur after the load of B, they may both | |
1210 | come up with different values: | |
1211 | ||
1212 | +-------+ : : : : | |
1213 | | | +------+ +-------+ | |
1214 | | |------>| A=1 |------ --->| A->0 | | |
1215 | | | +------+ \ +-------+ | |
1216 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | | |
1217 | | | +------+ | +-------+ | |
1218 | | |------>| B=2 |--- | : : | |
1219 | | | +------+ \ | : : +-------+ | |
1220 | +-------+ : : \ | +-------+ | | | |
1221 | ---------->| B->2 |------>| | | |
1222 | | +-------+ | CPU 2 | | |
1223 | | : : | | | |
1224 | | : : | | | |
1225 | | +-------+ | | | |
1226 | | | A->0 |------>| 1st | | |
1227 | | +-------+ | | | |
1228 | At this point the read ----> \ rrrrrrrrrrrrrrrrr | | | |
1229 | barrier causes all effects \ +-------+ | | | |
1230 | prior to the storage of B ---->| A->1 |------>| 2nd | | |
1231 | to be perceptible to CPU 2 +-------+ | | | |
1232 | : : +-------+ | |
1233 | ||
1234 | ||
1235 | But it may be that the update to A from CPU 1 becomes perceptible to CPU 2 | |
1236 | before the read barrier completes anyway: | |
1237 | ||
1238 | +-------+ : : : : | |
1239 | | | +------+ +-------+ | |
1240 | | |------>| A=1 |------ --->| A->0 | | |
1241 | | | +------+ \ +-------+ | |
1242 | | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 | | |
1243 | | | +------+ | +-------+ | |
1244 | | |------>| B=2 |--- | : : | |
1245 | | | +------+ \ | : : +-------+ | |
1246 | +-------+ : : \ | +-------+ | | | |
1247 | ---------->| B->2 |------>| | | |
1248 | | +-------+ | CPU 2 | | |
1249 | | : : | | | |
1250 | \ : : | | | |
1251 | \ +-------+ | | | |
1252 | ---->| A->1 |------>| 1st | | |
1253 | +-------+ | | | |
1254 | rrrrrrrrrrrrrrrrr | | | |
1255 | +-------+ | | | |
1256 | | A->1 |------>| 2nd | | |
1257 | +-------+ | | | |
1258 | : : +-------+ | |
1259 | ||

The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.

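For reference, the same pattern can be sketched in kernel C; this is a
minimal illustration, assuming A and B are the shared variables used above:

        int A = 0, B = 9;       /* shared */

        void cpu1(void)         /* the writer */
        {
                WRITE_ONCE(A, 1);
                smp_wmb();      /* order the store to A before the store to B */
                WRITE_ONCE(B, 2);
        }

        void cpu2(void)         /* the reader */
        {
                int b, a;

                b = READ_ONCE(B);
                smp_rmb();      /* order the load of B before the load of A */
                a = READ_ONCE(A);
                /* if b == 2, then a is guaranteed to be 1 */
        }
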

READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, find a time when they're not using the bus for any other
loads, and do the load in advance - even though they haven't actually got to
that point in the instruction execution flow yet.  This permits the actual
load instruction to potentially complete immediately because the CPU already
has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                            :       :       +-------+
                                            +-------+       |       |
                                        --->| B->2  |------>|       |
                                            +-------+       | CPU 2 |
                                            :       :DIVIDE |       |
                                            +-------+       |       |
        The CPU being busy doing a ---> --->| A->0  |~~~~   |       |
        division speculates on the          +-------+   ~   |       |
        LOAD of A                           :       :   ~   |       |
                                            :       :DIVIDE |       |
                                            :       :   ~   |       |
        Once the divisions are complete --> :       :   ~-->|       |
        the CPU can then perform the        :       :       |       |
        LOAD with immediate effect          :       :       +-------+


Placing a read barrier or an address-dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                            :       :       +-------+
                                            +-------+       |       |
                                        --->| B->2  |------>|       |
                                            +-------+       | CPU 2 |
                                            :       :DIVIDE |       |
                                            +-------+       |       |
        The CPU being busy doing a ---> --->| A->0  |~~~~   |       |
        division speculates on the          +-------+   ~   |       |
        LOAD of A                           :       :   ~   |       |
                                            :       :DIVIDE |       |
                                            :       :   ~   |       |
                                            :       :   ~   |       |
                                        rrrrrrrrrrrrrrrr~   |       |
                                            :       :   ~   |       |
                                            :       :   ~-->|       |
                                            :       :       |       |
                                            :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                            :       :       +-------+
                                            +-------+       |       |
                                        --->| B->2  |------>|       |
                                            +-------+       | CPU 2 |
                                            :       :DIVIDE |       |
                                            +-------+       |       |
        The CPU being busy doing a ---> --->| A->0  |~~~~   |       |
        division speculates on the          +-------+   ~   |       |
        LOAD of A                           :       :   ~   |       |
                                            :       :DIVIDE |       |
                                            :       :   ~   |       |
                                            :       :   ~   |       |
                                        rrrrrrrrrrrrrrrrr   |       |
                                            +-------+       |       |
        The speculation is discarded ------>| A->1  |------>|       |
        and an updated value is             +-------+       |       |
        retrieved                           :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible.  However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs.  The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
                                <general barrier>       <read barrier>
                                STORE Y=r1              LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y.  In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value.  However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity.  In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.

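The same example can be sketched in kernel C, in the style of the examples
later in this section (r1, r2 and r3 are per-CPU registers; smp_mb() supplies
the general barrier and smp_rmb() the read barrier):

        int X, Y;

        void cpu1(void)
        {
                WRITE_ONCE(X, 1);
        }

        void cpu2(void)
        {
                r1 = READ_ONCE(X);      /* reads 1 */
                smp_mb();               /* the general barrier */
                WRITE_ONCE(Y, r1);
        }

        void cpu3(void)
        {
                r2 = READ_ONCE(Y);      /* reads 1 */
                smp_rmb();              /* the read barrier */
                r3 = READ_ONCE(X);      /* must then also read 1 */
        }
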
However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity.  For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               r1=LOAD X (reads 1)     LOAD Y (reads 1)
                                <data dependency>       <read barrier>
                                STORE Y=r1              LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store.  Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes.  General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses.  For example, switching to C code
in deference to the ghost of Herman Hollerith:

        int u, v, x, y, z;

        void cpu0(void)
        {
                r0 = smp_load_acquire(&x);
                WRITE_ONCE(u, 1);
                smp_store_release(&y, 1);
        }

        void cpu1(void)
        {
                r1 = smp_load_acquire(&y);
                r4 = READ_ONCE(v);
                r5 = READ_ONCE(u);
                smp_store_release(&z, 1);
        }

        void cpu2(void)
        {
                r2 = smp_load_acquire(&z);
                smp_store_release(&x, 1);
        }

        void cpu3(void)
        {
                WRITE_ONCE(v, 1);
                smp_mb();
                r3 = READ_ONCE(u);
        }

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

        r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

        r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores.  Therefore, the following outcome is possible:

        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

        r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order.  This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering.  It does
-not- ensure that any particular value will be read.  Therefore, the
following outcome is possible:

        r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.

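As an illustration of the second property, a busy-wait loop along the
following lines relies on barrier() to stop the compiler from hoisting the
load out of the loop (a sketch; 'flag' is assumed to be set elsewhere, for
example by an interrupt handler):

        while (!flag)
                barrier();      /* force 'flag' to be re-loaded each pass */
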
The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

        a[0] = x;
        a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

        a[0] = READ_ONCE(x);
        a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

1570 | (*) The compiler is within its rights to merge successive loads from |
1571 | the same variable. Such merging can cause the compiler to "optimize" | |
1572 | the following code: | |
1573 | ||
1574 | while (tmp = a) | |
1575 | do_something_with(tmp); | |
1576 | ||
1577 | into the following code, which, although in some sense legitimate | |
1578 | for single-threaded code, is almost certainly not what the developer | |
1579 | intended: | |
1580 | ||
1581 | if (tmp = a) | |
1582 | for (;;) | |
1583 | do_something_with(tmp); | |
1584 | ||
9af194ce | 1585 | Use READ_ONCE() to prevent the compiler from doing this to you: |
692118da | 1586 | |
9af194ce | 1587 | while (tmp = READ_ONCE(a)) |
692118da PM |
1588 | do_something_with(tmp); |
1589 | ||
 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

        while (tmp = a)
                do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

        while (a)
                do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

        while (tmp = a)
                do_something_with(tmp);

     Into this:

        do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

        while ((tmp = READ_ONCE(a)) % MAX)
                do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

        a = 0;
        ... Code that does not store to variable a ...
        a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

        WRITE_ONCE(a, 0);
        ... Code that does not store to variable a ...
        WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

        void process_level(void)
        {
                msg = get_message();
                flag = true;
        }

        void interrupt_handler(void)
        {
                if (flag)
                        process_message(msg);
        }

     There is nothing to prevent the compiler from transforming
     process_level() to the following, in fact, this might well be a
     win for single-threaded code:

        void process_level(void)
        {
                flag = true;
                msg = get_message();
        }

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

        void process_level(void)
        {
                WRITE_ONCE(msg, get_message());
                WRITE_ONCE(flag, true);
        }

        void interrupt_handler(void)
        {
                if (READ_ONCE(flag))
                        process_message(READ_ONCE(msg));
        }

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels, in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

        if (a)
                b = a;
        else
                b = 42;

     The compiler might save a branch by optimizing this as follows:

        b = 42;
        if (a)
                b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

        if (a)
                WRITE_ONCE(b, a);
        else
                WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, prevents "load tearing"
     and "store tearing," in which a single large access is replaced by
     multiple smaller accesses.  For example, given an architecture having
     16-bit store instructions with 7-bit immediate fields, the compiler
     might be tempted to use two 16-bit store-immediate instructions to
     implement the following 32-bit store:

        p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

        WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...

        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

        foo2.a = foo1.a;
        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
        foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has seven basic CPU memory barriers:

        TYPE                    MANDATORY       SMP CONDITIONAL
        ======================= =============== ===============
        GENERAL                 mb()            smp_mb()
        WRITE                   wmb()           smp_wmb()
        READ                    rmb()           smp_rmb()
        ADDRESS DEPENDENCY                      READ_ONCE()


All memory barriers except the address-dependency barriers imply a compiler
barrier.  Address dependencies do not impose any additional compiler ordering.

Aside: In the case of address dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler will not speculate the value of b
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

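As a sketch of the address-dependency case (the variable names are
illustrative), it is the READ_ONCE() on the pointer load that keeps the
dependency intact:

        int *q;

        q = READ_ONCE(p);       /* head of the address dependency */
        d = READ_ONCE(*q);      /* dependent load, ordered after the load of p */
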
SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.

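     In other words, on an SMP build it behaves roughly like the following
     sketch:

        WRITE_ONCE(var, value);
        smp_mb();
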

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic RMW functions that do not imply memory
     barriers, but where the code needs a memory barrier.  Examples of
     atomic RMW functions that do not imply a memory barrier include add,
     subtract, (failed) conditional operations and the _relaxed functions,
     but not atomic_read or atomic_set.  A common example where a memory
     barrier may be required is when atomic ops are used for reference
     counting.

     These are also used for atomic RMW bitop functions that do not imply a
     memory barrier (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();
 (*) dma_mb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

        if (desc->status != DEVICE_OWN) {
                /* do not read data until we own descriptor */
                dma_rmb();

                /* read/modify data */
                read_data = desc->data;
                desc->data = write_data;

                /* flush modifications before status update */
                dma_wmb();

                /* assign ownership */
                desc->status = DEVICE_OWN;

                /* notify device of new descriptors */
                writel(DESC_NOTIFY, doorbell);
        }

     The dma_rmb() allows us to guarantee that the device has released
     ownership before we read the data from the descriptor, and the dma_wmb()
     allows us to guarantee the data is written to the descriptor before the
     device can see it now has ownership.  The dma_mb() implies both a
     dma_rmb() and a dma_wmb().  Note that, when using writel(), a prior
     wmb() is not needed to guarantee that the cache coherent memory writes
     have completed before writing to the MMIO region.  The cheaper
     writel_relaxed() does not provide this guarantee and must not be
     used here.

     See the subsection "Kernel I/O barrier effects" for more information on
     relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
     more information on consistent memory.

 (*) pmem_wmb();

     This is for use with persistent memory to ensure that stores whose
     modifications have been written out to persistent storage have reached
     a platform durability domain.

     For example, after a non-temporal write to a pmem region, we use
     pmem_wmb() to ensure that stores have reached a platform durability
     domain.  This ensures that stores have updated persistent storage before
     any data access or data transfer caused by subsequent instructions is
     initiated.  This is in addition to the ordering done by wmb().

     For loads from persistent memory, existing read memory barriers are
     sufficient to ensure read ordering.

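     As a sketch of the non-temporal-write case above (the destination,
     source and length names are illustrative):

        /* non-temporal copy into persistent memory */
        memcpy_flushcache(pmem_dst, src, len);
        pmem_wmb();     /* the stores now reach the durability domain */
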
 (*) io_stop_wc();

     For memory accesses with write-combining attributes (e.g. those returned
     by ioremap_wc()), the CPU may wait for prior accesses to be merged with
     subsequent ones.  io_stop_wc() can be used to prevent the merging of
     write-combining memory accesses before this macro with those after it
     when such waiting has performance implications.

===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal while asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

        *A = a;
        ACQUIRE M
        RELEASE M
        *B = b;

may occur as:

        ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

        *A = a;
        RELEASE M
        ACQUIRE N
        *B = b;

could occur as:

        ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

        Why does this work?

        One key point is that we are only talking about the CPU doing
        the reordering, not the compiler.  If the compiler (or, for
        that matter, the developer) switched the operations, deadlock
        -could- occur.

        But suppose the CPU reordered the operations.  In this case,
        the unlock precedes the lock in the assembly code.  The CPU
        simply elected to try executing the later lock operation first.
        If there is a deadlock, this lock operation will simply spin (or
        try to sleep, but more on that later).  The CPU will eventually
        execute the unlock operation (which preceded the lock operation
        in the assembly code), which will unravel the potential deadlock,
        allowing the lock operation to succeed.

        But what if the lock is a sleeplock?  In that case, the code will
        try to enter the scheduler, where it will eventually encounter
        a memory barrier, which will force the earlier unlock operation
        to complete, again unraveling the deadlock.  There might be
        a sleep-unlock race, but the locking primitive needs to resolve
        such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

        *A = a;
        *B = b;
        ACQUIRE
        *C = c;
        *D = d;
        RELEASE
        *E = e;
        *F = f;

The following sequence of events is acceptable:

        ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

        [+] Note that {*F,*A} indicates a combined access.

But none of the following are:

        {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
        *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
        *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
        *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E


INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.

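For example, if two stores made inside an interrupt-disabled region must be
seen in order by another CPU, an explicit SMP barrier is still required; a
sketch, with illustrative variable names:

        unsigned long flags;

        local_irq_save(flags);
        WRITE_ONCE(shared_data, 1);
        smp_wmb();      /* disabling interrupts is only a compiler barrier */
        WRITE_ONCE(shared_flag, 1);
        local_irq_restore(flags);
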

SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

        for (;;) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (event_indicated)
                        break;
                schedule();
        }

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

        CPU 1
        ===============================
        set_current_state();
          smp_store_mb();
            STORE current->state
            <general barrier>
        LOAD event_indicated

set_current_state() may be wrapped by:

        prepare_to_wait();
        prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

        wait_event();
        wait_event_interruptible();
        wait_event_interruptible_exclusive();
        wait_event_interruptible_timeout();
        wait_event_killable();
        wait_event_timeout();
        wait_on_bit();
        wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

        event_indicated = 1;
        wake_up(&event_wait_queue);

or:

        event_indicated = 1;
        wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it.  The barrier occurs before the task state
is accessed, in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

        CPU 1 (Sleeper)                 CPU 2 (Waker)
        =============================== ===============================
        set_current_state();            STORE event_indicated
          smp_store_mb();               wake_up();
            STORE current->state          ...
            <general barrier>             <general barrier>
        LOAD event_indicated            if ((LOAD task->state) & TASK_NORMAL)
                                          STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up()
if something is actually awakened, but otherwise there is no such guarantee.
To see this, consider the following sequence of events, where X and Y are both
initially zero:

        CPU 1                           CPU 2
        =============================== ===============================
        X = 1;                          Y = 1;
        smp_mb();                       wake_up();
        LOAD Y                          LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier.  The barrier again
occurs before the task state is accessed.  In particular, if the wake_up() in
the previous snippet were replaced by a call to wake_up_process() then one of
the two loads would be guaranteed to see 1.

The available waker functions include:

        complete();
        wake_up();
        wake_up_all();
        wake_up_bit();
        wake_up_interruptible();
        wake_up_interruptible_all();
        wake_up_interruptible_nr();
        wake_up_interruptible_poll();
        wake_up_interruptible_sync();
        wake_up_interruptible_sync_poll();
        wake_up_locked();
        wake_up_locked_poll();
        wake_up_nr();
        wake_up_poll();
        wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees of
a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated)
                break;
        __set_current_state(TASK_RUNNING);
        do_something(my_data);

and the waker does:

        my_data = value;
        event_indicated = 1;
        wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated) {
                smp_rmb();
                do_something(my_data);
        }

and the waker should do:

        my_data = value;
        smp_wmb();
        event_indicated = 1;
        wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

        CPU 1                           CPU 2
        =============================== ===============================
        WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
        ACQUIRE M                       ACQUIRE Q
        WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
        WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
        RELEASE M                       RELEASE Q
        WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

        *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

        *B, *C or *D preceding ACQUIRE M
        *A, *B or *C following RELEASE M
        *F, *G or *H preceding ACQUIRE Q
        *E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

        struct rw_semaphore {
                ...
                spinlock_t lock;
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know as to where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

        LOAD waiter->list.next;
        LOAD waiter->task;
        STORE waiter->task;
        CALL wakeup
        RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

        CPU 1                           CPU 2
        =============================== ===============================
        down_xxx()
        Queue waiter
        Sleep
                                        up_yyy()
                                        LOAD waiter->task;
                                        STORE waiter->task;
        Woken up by other event
        <preempt>
        Resume processing
        down_xxx() returns
        call foo()
        foo() clobbers *waiter
        </preempt>
                                        LOAD waiter->list.next;
                                        --- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

        LOAD waiter->list.next;
        LOAD waiter->task;
        smp_mb();
        STORE waiter->task;
        CALL wakeup
        RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

While they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

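For example, under the kernel's atomic ordering rules a non-value-returning
atomic implies no memory barrier, while a fully-ordered value-returning
atomic implies a full barrier on either side; a sketch (the object-freeing
function is illustrative):

        atomic_inc(&obj->ref_count);            /* no implied memory barrier */

        if (atomic_dec_and_test(&obj->ref_count))
                free_the_object(obj);           /* the value-returning atomic
                                                 * above implies a full barrier */
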
See Documentation/atomic_t.txt for more information.


2469 | ACCESSING DEVICES | |
2470 | ----------------- | |
2471 | ||
2472 | Many devices can be memory mapped, and so appear to the CPU as if they're just | |
2473 | a set of memory locations. To control such a device, the driver usually has to | |
2474 | make the right memory accesses in exactly the right order. | |
2475 | ||
2476 | However, having a clever CPU or a clever compiler creates a potential problem | |
2477 | in that the carefully sequenced accesses in the driver code won't reach the | |
2478 | device in the requisite order if the CPU or the compiler thinks it is more | |
2479 | efficient to reorder, combine or merge accesses - something that would cause | |
2480 | the device to malfunction. | |
2481 | ||
2482 | Inside of the Linux kernel, I/O should be done through the appropriate accessor | |
2483 | routines - such as inb() or writel() - which know how to make such accesses | |
806654a9 | 2484 | appropriately sequential. While this, for the most part, renders the explicit |
91553039 WD |
2485 | use of memory barriers unnecessary, if the accessor functions are used to refer |
2486 | to an I/O memory window with relaxed memory access properties, then _mandatory_ | |
2487 | memory barriers are required to enforce ordering. | |
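
For example, a driver writing a descriptor through a write-combining mapping
obtained with ioremap_wc() might need an explicit mandatory barrier before
ringing a doorbell through a separate, default-attribute mapping (a sketch;
the names and offsets are hypothetical):

	void __iomem *wc = ioremap_wc(phys_base, SZ_4K);  /* relaxed window */

	writel_relaxed(desc_lo, wc + DESC_OFF);
	writel_relaxed(desc_hi, wc + DESC_OFF + 4);
	wmb();				/* mandatory barrier: commit the
					 * descriptor writes before the
					 * doorbell below */
	writel(1, regs + DOORBELL_OFF);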

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  While the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA

If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt-disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.
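
If the device mapping is relaxed, one way to keep the accesses confined to the
critical section is to read back from the device before re-enabling
interrupts (a sketch; the base address and register offsets are hypothetical,
and note that the kernel's writew() takes the value first):

	unsigned long flags;

	local_irq_save(flags);
	writew(3, ioaddr + ADDR_REG);
	writew(y, ioaddr + DATA_REG);
	(void)readw(ioaddr + DATA_REG);	/* synchronous read: the writes
					 * above must reach the device
					 * before this load completes */
	local_irq_restore(flags);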


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

Interfacing with peripherals via I/O accesses is deeply architecture and device
specific.  Therefore, drivers which are inherently non-portable may rely on
specific behaviours of their target systems in order to achieve synchronization
in the most lightweight manner possible.  For drivers intending to be portable
between multiple architectures and bus implementations, the kernel offers a
series of accessor functions that provide various degrees of ordering
guarantees:

 (*) readX(), writeX():

	The readX() and writeX() MMIO accessors take a pointer to the
	peripheral being accessed as an __iomem * parameter.  For pointers
	mapped with the default I/O attributes (e.g. those returned by
	ioremap()), the ordering guarantees are as follows:

	1. All readX() and writeX() accesses to the same peripheral are ordered
	   with respect to each other.  This ensures that MMIO register accesses
	   by the same CPU thread to a particular device will arrive in program
	   order.

	2. A writeX() issued by a CPU thread holding a spinlock is ordered
	   before a writeX() to the same peripheral from another CPU thread
	   issued after a later acquisition of the same spinlock.  This ensures
	   that MMIO register writes to a particular device issued while holding
	   a spinlock will arrive in an order consistent with acquisitions of
	   the lock.

	3. A writeX() by a CPU thread to the peripheral will first wait for the
	   completion of all prior writes to memory either issued by, or
	   propagated to, the same thread.  This ensures that writes by the CPU
	   to an outbound DMA buffer allocated by dma_alloc_coherent() will be
	   visible to a DMA engine when the CPU writes to its MMIO control
	   register to trigger the transfer.

	4. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent reads from memory by the same thread can begin.  This
	   ensures that reads by the CPU from an incoming DMA buffer allocated
	   by dma_alloc_coherent() will not see stale data after reading from
	   the DMA engine's MMIO status register to establish that the DMA
	   transfer has completed.

	5. A readX() by a CPU thread from the peripheral will complete before
	   any subsequent delay() loop can begin execution on the same thread.
	   This ensures that two MMIO register writes by the CPU to a peripheral
	   will arrive at least 1us apart if the first write is immediately read
	   back with readX() and udelay(1) is called prior to the second
	   writeX():

		writel(42, DEVICE_REGISTER_0); // Arrives at the device...
		readl(DEVICE_REGISTER_0);
		udelay(1);
		writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.

	The ordering properties of __iomem pointers obtained with non-default
	attributes (e.g. those returned by ioremap_wc()) are specific to the
	underlying architecture and therefore the guarantees listed above cannot
	generally be relied upon for accesses to these types of mappings.
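
	Guarantees 3 and 4 are what make the common descriptor-then-doorbell
	DMA pattern work without explicit barriers.  A minimal sketch (the
	register offsets, buffer layout and variable declarations here are
	hypothetical):

		dma_addr_t dma_handle;
		__le32 *buf;

		buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);

		buf[0] = cpu_to_le32(cmd);	/* fill the outbound buffer */
		writel(lower_32_bits(dma_handle), base + REG_DMA_ADDR);
		writel(1, base + REG_DOORBELL);	/* guarantee 3: buf[0] is
						 * visible to the device
						 * before the doorbell hits */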

 (*) readX_relaxed(), writeX_relaxed():

	These are similar to readX() and writeX(), but provide weaker memory
	ordering guarantees.  Specifically, they do not guarantee ordering with
	respect to locking, normal memory accesses or delay() loops (i.e.
	bullets 2-5 above) but they are still guaranteed to be ordered with
	respect to other accesses from the same CPU thread to the same
	peripheral when operating on __iomem pointers mapped with the default
	I/O attributes.
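
	For example, a write that merely pokes an unrelated register can
	safely be relaxed, whereas a doorbell that triggers a DMA transfer of
	data from normal memory must not be (a sketch; the register names are
	hypothetical):

		writel_relaxed(mask, base + REG_IRQ_MASK); /* no ordering
							    * against memory
							    * required here */
		desc->len = cpu_to_le32(len);	/* outbound DMA descriptor */
		writel(1, base + REG_DOORBELL);	/* must observe desc->len,
						 * so the full writel() is
						 * required */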

 (*) readsX(), writesX():

	The readsX() and writesX() MMIO accessors are designed for accessing
	register-based, memory-mapped FIFOs residing on peripherals that are not
	capable of performing DMA.  Consequently, they provide only the ordering
	guarantees of readX_relaxed() and writeX_relaxed(), as documented above.

 (*) inX(), outX():

	The inX() and outX() accessors are intended to access legacy port-mapped
	I/O peripherals, which may require special instructions on some
	architectures (notably x86).  The port number of the peripheral being
	accessed is passed as an argument.

	Since many CPU architectures ultimately access these peripherals via an
	internal virtual memory mapping, the portable ordering guarantees
	provided by inX() and outX() are the same as those provided by readX()
	and writeX() respectively when accessing a mapping with the default I/O
	attributes.

	Device drivers may expect outX() to emit a non-posted write transaction
	that waits for a completion response from the I/O peripheral before
	returning.  This is not guaranteed by all architectures and is therefore
	not part of the portable ordering semantics.
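
	For instance, driving a legacy indexed register pair might look like
	this (hypothetical port numbers; note that outb() takes the value
	first):

		#define SIO_INDEX_PORT	0x2e
		#define SIO_DATA_PORT	0x2f

		outb(0x07, SIO_INDEX_PORT);	/* select a register */
		id = inb(SIO_DATA_PORT);	/* read its contents back */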

 (*) insX(), outsX():

	As above, the insX() and outsX() accessors provide the same ordering
	guarantees as readsX() and writesX() respectively when accessing a
	mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

	These will perform appropriately for the type of access they're actually
	doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on big-endian
architectures.


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.

CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache
management.
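
In portable driver code, these flushes and invalidations are obtained through
the streaming DMA mapping API rather than by operating on the cache directly.
A minimal sketch (dev, buf and size are assumed to be set up elsewhere):

	dma_addr_t handle;

	/* Device -> memory: arranges for overlapping cachelines to be
	 * invalidated so that the CPU cannot see stale data afterwards. */
	handle = dma_map_single(dev, buf, size, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the device at handle and let it DMA into buf ... */

	dma_unmap_single(dev, handle, size, DMA_FROM_DEVICE);
	/* The CPU may now safely read buf. */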


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and while cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this, for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the address-dependency barrier really becomes necessary as this synchronises
both caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
reduced its impact on the memory model.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to smp_mb() etc counterparts in all other respects,
in particular, they do not control MMIO effects: to control
MMIO effects, use mandatory barriers.
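
For example, a UP guest publishing an entry into a ring shared with a
(possibly SMP) host might do the following (a sketch; the ring layout is
hypothetical, but the virt_*() macros are real):

	ring->slot[idx & (RING_SIZE - 1)] = desc;	/* fill the slot */
	virt_wmb();			/* order the slot contents before the
					 * index update, even in a UP guest */
	WRITE_ONCE(ring->avail_idx, idx + 1);		/* publish to host */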


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/core-api/circular-buffers.rst

for details.
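
The heart of such a scheme is a release/acquire pairing on the producer and
consumer indices.  A minimal sketch, assuming a power-of-2 BUF_SIZE and a
hypothetical struct circ_buf_state (CIRC_SPACE() and CIRC_CNT() come from
linux/circ_buf.h):

	/* Producer: called from one thread only. */
	static int produce(struct circ_buf_state *b, int item)
	{
		unsigned long head = b->head;
		unsigned long tail = READ_ONCE(b->tail);

		if (CIRC_SPACE(head, tail, BUF_SIZE) < 1)
			return -EAGAIN;

		b->items[head & (BUF_SIZE - 1)] = item;
		smp_store_release(&b->head, head + 1);	/* publish the item */
		return 0;
	}

	/* Consumer: called from one other thread only. */
	static int consume(struct circ_buf_state *b, int *item)
	{
		unsigned long head = smp_load_acquire(&b->head);
		unsigned long tail = b->tail;

		if (CIRC_CNT(head, tail, BUF_SIZE) < 1)
			return -EAGAIN;

		*item = b->items[tail & (BUF_SIZE - 1)];
		smp_store_release(&b->tail, tail + 1);	/* free the slot */
		return 0;
	}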


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access