On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()


Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()

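
For orientation, a small usage sketch follows. The names nr_active and
wake_up_waiters() are made up for this illustration (they are not kernel
interfaces), and the statements are shown outside a function for brevity:

  static atomic_t nr_active = ATOMIC_INIT(0);	   /* declaration + init   */

  atomic_inc(&nr_active);			   /* unordered RMW        */
  if (atomic_dec_and_test(&nr_active))		   /* fully ordered RMW    */
    wake_up_waiters();				   /* made-up helper       */

  pr_info("active=%d\n", atomic_read(&nr_active)); /* non-RMW load         */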


SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively.
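
Roughly, the generic fallbacks look like the sketch below (simplified; the
instrumented wrappers and per-architecture overrides are omitted):

  static __always_inline int atomic_read(const atomic_t *v)
  {
    return READ_ONCE(v->counter);
  }

  static __always_inline void atomic_set(atomic_t *v, int i)
  {
    WRITE_ONCE(v->counter, i);
  }

  static __always_inline int atomic_read_acquire(const atomic_t *v)
  {
    return smp_load_acquire(&v->counter);
  }

  static __always_inline void atomic_set_release(atomic_t *v, int i)
  {
    smp_store_release(&v->counter, i);
  }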

The one detail to this is that atomic_set{}() should be observable to the RMW
ops. That is:

  C atomic-set

  {
    atomic_set(v, 1);
  }

  P1(atomic_t *v)
  {
    atomic_add_unless(v, 1, 0);
  }

  P2(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from P2 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0						CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
						atomic_set(v, 0);
    if (ret != u)
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().
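
A minimal sketch of that solution (not any architecture's actual code):
routing atomic_set() through atomic_xchg() makes it take the same lock as the
other RMW ops, so the lost update above can no longer happen:

  static inline void sketch_atomic_set(atomic_t *v, int i)
  {
    (void)atomic_xchg(v, i);	/* RMW path: serialized against other RMW ops */
  }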


RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented; a sketch of such a loop is shown below.
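
To make the plain, _return() and fetch_() flavours concrete, a small sketch;
v is an atomic_t assumed to hold 4, and 'new' and 'old' are plain ints used
only for this illustration:

  atomic_add(2, &v);		  /* no return value; v is now 6            */
  new = atomic_add_return(2, &v); /* returns the modified value: new == 8   */
  old = atomic_fetch_add(2, &v);  /* returns the original value: old == 8,
				     v is now 10                             */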

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.
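
The (try_)cmpxchg loop mentioned for the misc operations might look like the
sketch below; it mirrors the style of the generic fallbacks but is not
verbatim kernel code. An LL/SC architecture can typically fold the "not zero"
test into its LL/SC sequence instead of looping around a compare-and-exchange:

  static inline bool sketch_inc_not_zero(atomic_t *v)
  {
    int c = atomic_read(v);

    do {
      if (c == 0)				 /* already zero: refuse     */
        return false;
    } while (!atomic_try_cmpxchg(v, &c, c + 1)); /* on failure 'c' is
						    reloaded and we retry    */
    return true;
  }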


ORDERING (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.
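
Rendered as code; v is an atomic_t and 'old', 'new' and 'ok' are plain local
variables used only for this illustration:

  atomic_inc(&v);			  /* RMW, no return value: unordered */
  old = atomic_fetch_inc(&v);		  /* RMW with return value: fully
					     ordered                         */
  old = atomic_fetch_inc_relaxed(&v);	  /* explicitly relaxed: unordered   */
  ok = atomic_try_cmpxchg(&v, &old, new); /* conditional RMW: fully ordered
					     on success, unordered on
					     failure                         */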

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read_acquire) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set_release) is a RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.
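
For example, an _acquire/_release pair can be used to publish data between
CPUs; 'ready', 'payload', compute() and use() are made-up names for this
sketch:

  /* CPU0 -- publish */
  payload = compute();
  atomic_set_release(&ready, 1);	/* RELEASE: the payload store cannot
					   be reordered after this store     */

  /* CPU1 -- consume */
  if (atomic_read_acquire(&ready))	/* ACQUIRE: the payload load cannot
					   be reordered before this load     */
    use(payload);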

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW ops and can be used to augment/upgrade the ordering
inherent in the atomic op used. These barriers provide a full smp_mb().

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example, our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C strong-acquire

  {
  }

  P1(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P2(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (r0=1 /\ r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
since then:

  P1			P2

			t = LL.acq *y (0)
			t++;
			*x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
			SC *y, t;

is allowed.