.. _rcu_barrier:

RCU and Unloadable Modules
==========================

[Originally published in LWN Jan. 14, 2007: http://lwn.net/Articles/217484/]

RCU updaters sometimes use call_rcu() to initiate an asynchronous wait for
a grace period to elapse. This primitive takes a pointer to an rcu_head
struct placed within the RCU-protected data structure and another pointer
to a function that may be invoked later to free that structure. Code to
delete an element p from a linked list from IRQ context might then be
as follows::

	list_del_rcu(p);
	call_rcu(&p->rcu, p_callback);

Since call_rcu() never blocks, this code can safely be used from within
IRQ context. The function p_callback() might be defined as follows::

	static void p_callback(struct rcu_head *rp)
	{
		struct pstruct *p = container_of(rp, struct pstruct, rcu);

		kfree(p);
	}


Unloading Modules That Use call_rcu()
-------------------------------------

But what if the p_callback() function is defined in an unloadable module?

If we unload the module while some RCU callbacks are pending, the
CPUs executing these callbacks are going to be severely disappointed
when those callbacks are later invoked, as fancifully depicted at
http://lwn.net/images/ns/kernel/rcu-drop.jpg.

We could try placing a synchronize_rcu() in the module-exit code path,
but this is not sufficient. Although synchronize_rcu() does wait for a
grace period to elapse, it does not wait for the callbacks to complete.

One might be tempted to try several back-to-back synchronize_rcu()
calls, but this is still not guaranteed to work. If there is a very
heavy RCU-callback load, then some of the callbacks might be deferred in
order to allow other processing to proceed. For but one example, such
deferral is required in realtime kernels in order to avoid excessive
scheduling latencies.


rcu_barrier()
-------------

This situation can be handled by the rcu_barrier() primitive. Rather
than waiting for a grace period to elapse, rcu_barrier() waits for all
outstanding RCU callbacks to complete. Please note that rcu_barrier()
does **not** imply synchronize_rcu(); in particular, if there are no RCU
callbacks queued anywhere, rcu_barrier() is within its rights to return
immediately, without waiting for anything, let alone a grace period.
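
So a caller that needs both all previously queued callbacks to be
invoked **and** a subsequent grace period must ask for each explicitly,
as in this minimal sketch::

	rcu_barrier();		/* Wait for previously queued callbacks. */
	synchronize_rcu();	/* Separately wait for a grace period. */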

Pseudo-code using rcu_barrier() is as follows (a sketch in C appears
after the list):

1. Prevent any new RCU callbacks from being posted.
2. Execute rcu_barrier().
3. Allow the module to be unloaded.
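
For example, a module-exit function implementing these three steps might
look as follows. This is a minimal sketch rather than code from any real
module: stop_posting and my_exit() are illustrative names, and the
module's updaters are assumed to check the flag before each call to
call_rcu()::

	/* Sketch only: assumes all of this module's updaters test
	 * stop_posting before invoking call_rcu(). */
	static bool stop_posting;

	static void __exit my_exit(void)
	{
		WRITE_ONCE(stop_posting, true);	/* 1. No new callbacks. */
		rcu_barrier();	/* 2. Wait for pending callbacks. */
	}			/* 3. Unload may now proceed. */
	module_exit(my_exit);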

There is also an srcu_barrier() function for SRCU, and you of course
must match the flavor of srcu_barrier() with that of call_srcu().
If your module uses multiple srcu_struct structures, then it must also
use multiple invocations of srcu_barrier() when unloading that module.
For example, if it uses call_rcu(), call_srcu() on srcu_struct_1, and
call_srcu() on srcu_struct_2, then the following three lines of code
will be required when unloading::

	rcu_barrier();
	srcu_barrier(&srcu_struct_1);
	srcu_barrier(&srcu_struct_2);

If latency is of the essence, workqueues could be used to run these
three functions concurrently.
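
For instance, two of the barriers could be pushed onto the system
workqueue while the third runs directly, with the exit path waiting
for all three before returning. This is a hedged sketch assuming the
srcu_struct_1 and srcu_struct_2 from the example above; the helper
names are illustrative::

	/* Sketch only: run the three barriers concurrently. */
	static void srcu_barrier_1_work(struct work_struct *unused)
	{
		srcu_barrier(&srcu_struct_1);
	}

	static void srcu_barrier_2_work(struct work_struct *unused)
	{
		srcu_barrier(&srcu_struct_2);
	}

	static void all_barriers_concurrently(void)
	{
		struct work_struct w1, w2;

		INIT_WORK_ONSTACK(&w1, srcu_barrier_1_work);
		INIT_WORK_ONSTACK(&w2, srcu_barrier_2_work);
		schedule_work(&w1);
		schedule_work(&w2);
		rcu_barrier();		/* Run the third barrier here. */
		flush_work(&w1);	/* Wait for both SRCU barriers. */
		flush_work(&w2);
		destroy_work_on_stack(&w1);
		destroy_work_on_stack(&w2);
	}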

An ancient version of the rcutorture module makes use of rcu_barrier()
in its exit function as follows::

	 1 static void
	 2 rcu_torture_cleanup(void)
	 3 {
	 4   int i;
	 5
	 6   fullstop = 1;
	 7   if (shuffler_task != NULL) {
	 8     VERBOSE_PRINTK_STRING("Stopping rcu_torture_shuffle task");
	 9     kthread_stop(shuffler_task);
	10   }
	11   shuffler_task = NULL;
	12
	13   if (writer_task != NULL) {
	14     VERBOSE_PRINTK_STRING("Stopping rcu_torture_writer task");
	15     kthread_stop(writer_task);
	16   }
	17   writer_task = NULL;
	18
	19   if (reader_tasks != NULL) {
	20     for (i = 0; i < nrealreaders; i++) {
	21       if (reader_tasks[i] != NULL) {
	22         VERBOSE_PRINTK_STRING(
	23           "Stopping rcu_torture_reader task");
	24         kthread_stop(reader_tasks[i]);
	25       }
	26       reader_tasks[i] = NULL;
	27     }
	28     kfree(reader_tasks);
	29     reader_tasks = NULL;
	30   }
	31   rcu_torture_current = NULL;
	32
	33   if (fakewriter_tasks != NULL) {
	34     for (i = 0; i < nfakewriters; i++) {
	35       if (fakewriter_tasks[i] != NULL) {
	36         VERBOSE_PRINTK_STRING(
	37           "Stopping rcu_torture_fakewriter task");
	38         kthread_stop(fakewriter_tasks[i]);
	39       }
	40       fakewriter_tasks[i] = NULL;
	41     }
	42     kfree(fakewriter_tasks);
	43     fakewriter_tasks = NULL;
	44   }
	45
	46   if (stats_task != NULL) {
	47     VERBOSE_PRINTK_STRING("Stopping rcu_torture_stats task");
	48     kthread_stop(stats_task);
	49   }
	50   stats_task = NULL;
	51
	52   /* Wait for all RCU callbacks to fire. */
	53   rcu_barrier();
	54
	55   rcu_torture_stats_print(); /* -After- the stats thread is stopped! */
	56
	57   if (cur_ops->cleanup != NULL)
	58     cur_ops->cleanup();
	59   if (atomic_read(&n_rcu_torture_error))
	60     rcu_torture_print_module_parms("End of test: FAILURE");
	61   else
	62     rcu_torture_print_module_parms("End of test: SUCCESS");
	63 }

Line 6 sets a global variable that prevents any RCU callbacks from
re-posting themselves. This will not be necessary in most cases, since
RCU callbacks rarely include calls to call_rcu(). However, the rcutorture
module is an exception to this rule, and therefore needs to set this
global variable.
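
For illustration, a self-reposting callback gated by such a flag might
look like the following sketch, in which fullstop is the flag set on
line 6 and the remaining names are hypothetical::

	/* Sketch only: a callback that re-posts itself until fullstop
	 * is set, after which rcu_barrier() can wait out the final
	 * invocation. */
	static void my_reposting_cb(struct rcu_head *rhp)
	{
		if (READ_ONCE(fullstop))
			return;	/* Exit in progress, stop re-posting. */
		call_rcu(rhp, my_reposting_cb);
	}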

Lines 7-50 stop all the kernel tasks associated with the rcutorture
module. Therefore, once execution reaches line 53, no more rcutorture
RCU callbacks will be posted. The rcu_barrier() call on line 53 waits
for any pre-existing callbacks to complete.

Lines 55-62 then print status, do operation-specific cleanup, and
return, permitting the module-unload operation to be completed.

.. _rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

:ref:`Answer to Quick Quiz #1 <answer_rcubarrier_quiz_1>`

Your module might have additional complications. For example, if your
module invokes call_rcu() from timers, you will need to first refrain
from posting new timers, cancel (or wait for) all the already-posted
timers, and only then invoke rcu_barrier() to wait for any remaining
RCU callbacks to complete.
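
A hedged sketch of this ordering, in which my_timer is a hypothetical
timer whose handler posts RCU callbacks (timer_shutdown_sync(), added in
v6.2, both cancels the timer and prevents it from being re-armed; older
code would use del_timer_sync() plus a module-private stop flag)::

	static void my_exit(void)
	{
		/* First stop the source of new callbacks... */
		timer_shutdown_sync(&my_timer);
		/* ...and only then wait for already-posted callbacks. */
		rcu_barrier();
	}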

Of course, if your module uses call_rcu(), you will need to invoke
rcu_barrier() before unloading. Similarly, if your module uses
call_srcu(), you will need to invoke srcu_barrier() before unloading,
and on the same srcu_struct structure. If your module uses call_rcu()
**and** call_srcu(), then (as noted above) you will need to invoke
rcu_barrier() **and** srcu_barrier().


Implementing rcu_barrier()
--------------------------

Dipankar Sarma's implementation of rcu_barrier() makes use of the fact
that RCU callbacks are never reordered once queued on one of the per-CPU
queues. His implementation queues an RCU callback on each of the per-CPU
callback queues, and then waits until they have all started executing, at
which point all earlier RCU callbacks are guaranteed to have completed.

The original code for rcu_barrier() was roughly as follows::

	 1 void rcu_barrier(void)
	 2 {
	 3   BUG_ON(in_interrupt());
	 4   /* Take cpucontrol mutex to protect against CPU hotplug */
	 5   mutex_lock(&rcu_barrier_mutex);
	 6   init_completion(&rcu_barrier_completion);
	 7   atomic_set(&rcu_barrier_cpu_count, 1);
	 8   on_each_cpu(rcu_barrier_func, NULL, 0, 1);
	 9   if (atomic_dec_and_test(&rcu_barrier_cpu_count))
	10     complete(&rcu_barrier_completion);
	11   wait_for_completion(&rcu_barrier_completion);
	12   mutex_unlock(&rcu_barrier_mutex);
	13 }

Line 3 verifies that the caller is in process context, and lines 5 and 12
use rcu_barrier_mutex to ensure that only one rcu_barrier() is using the
global completion and counters at a time, which are initialized on lines
6 and 7. Line 8 causes each CPU to invoke rcu_barrier_func(), which is
shown below. Note that the final "1" in on_each_cpu()'s argument list
ensures that all the calls to rcu_barrier_func() will have completed
before on_each_cpu() returns. Line 9 removes the initial count from
rcu_barrier_cpu_count, and if this count is now zero, line 10 finalizes
the completion, which prevents line 11 from blocking. Either way,
line 11 then waits (if needed) for the completion.

.. _rcubarrier_quiz_2:

Quick Quiz #2:
	Why doesn't line 7 initialize rcu_barrier_cpu_count to zero,
	thereby avoiding the need for lines 9 and 10?

:ref:`Answer to Quick Quiz #2 <answer_rcubarrier_quiz_2>`

This code was rewritten in 2008 and several times thereafter, but this
still gives the general idea.

The rcu_barrier_func() runs on each CPU, where it invokes call_rcu()
to post an RCU callback, as follows::

	 1 static void rcu_barrier_func(void *notused)
	 2 {
	 3   int cpu = smp_processor_id();
	 4   struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
	 5   struct rcu_head *head;
	 6
	 7   head = &rdp->barrier;
	 8   atomic_inc(&rcu_barrier_cpu_count);
	 9   call_rcu(head, rcu_barrier_callback);
	10 }

Lines 3 and 4 locate RCU's internal per-CPU rcu_data structure,
which contains the struct rcu_head needed for the later call to
call_rcu(). Line 7 picks up a pointer to this struct rcu_head, and line
8 increments the global counter. This counter will later be decremented
by the callback. Line 9 then registers the rcu_barrier_callback() on
the current CPU's queue.

The rcu_barrier_callback() function simply atomically decrements the
rcu_barrier_cpu_count variable and finalizes the completion when it
reaches zero, as follows::

	static void rcu_barrier_callback(struct rcu_head *notused)
	{
		if (atomic_dec_and_test(&rcu_barrier_cpu_count))
			complete(&rcu_barrier_completion);
	}

.. _rcubarrier_quiz_3:

Quick Quiz #3:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

:ref:`Answer to Quick Quiz #3 <answer_rcubarrier_quiz_3>`

The current rcu_barrier() implementation is more complex, due to the need
to avoid disturbing idle CPUs (especially on battery-powered systems)
and the need to minimally disturb non-idle CPUs in real-time systems.
In addition, a great many optimizations have been applied. However,
the code above illustrates the concepts.


rcu_barrier() Summary
---------------------

The rcu_barrier() primitive is used relatively infrequently, since most
code using RCU is in the core kernel rather than in modules. However, if
you are using RCU from an unloadable module, you need to use rcu_barrier()
so that your module may be safely unloaded.


Answers to Quick Quizzes
------------------------

.. _answer_rcubarrier_quiz_1:

Quick Quiz #1:
	Is there any other situation where rcu_barrier() might
	be required?

Answer:
	Interestingly enough, rcu_barrier() was not originally
	implemented for module unloading. Nikita Danilov was using
	RCU in a filesystem, which resulted in a similar situation at
	filesystem-unmount time. Dipankar Sarma coded up rcu_barrier()
	in response, so that Nikita could invoke it during the
	filesystem-unmount process.

	Much later, yours truly hit the RCU module-unload problem when
	implementing rcutorture, and found that rcu_barrier() solves
	this problem as well.

:ref:`Back to Quick Quiz #1 <rcubarrier_quiz_1>`

.. _answer_rcubarrier_quiz_2:

Quick Quiz #2:
	Why doesn't line 7 initialize rcu_barrier_cpu_count to zero,
	thereby avoiding the need for lines 9 and 10?

Answer:
	Suppose that the on_each_cpu() function shown on line 8 was
	delayed, so that CPU 0's rcu_barrier_func() executed and
	the corresponding grace period elapsed, all before CPU 1's
	rcu_barrier_func() started executing. This would result in
	rcu_barrier_cpu_count being decremented to zero, so that line
	11's wait_for_completion() would return immediately, failing to
	wait for CPU 1's callbacks to be invoked.

	Note that this was not a problem when the rcu_barrier() code
	was first added back in 2005. This is because on_each_cpu()
	disables preemption, which acted as an RCU read-side critical
	section, thus preventing CPU 0's grace period from completing
	until on_each_cpu() had dealt with all of the CPUs. However,
	with the advent of preemptible RCU, rcu_barrier() no longer
	waited on nonpreemptible regions of code in preemptible kernels,
	that being the job of the new rcu_barrier_sched() function.

	However, with the RCU flavor consolidation around v4.20, this
	possibility was once again ruled out, because the consolidated
	RCU once again waits on nonpreemptible regions of code.

	Nevertheless, that extra count might still be a good idea.
	Relying on such accidents of implementation can result in
	later surprise bugs when the implementation changes.

:ref:`Back to Quick Quiz #2 <rcubarrier_quiz_2>`

.. _answer_rcubarrier_quiz_3:

Quick Quiz #3:
	What happens if CPU 0's rcu_barrier_func() executes
	immediately (thus incrementing rcu_barrier_cpu_count to the
	value one), but the other CPUs' rcu_barrier_func() invocations
	are delayed for a full grace period? Couldn't this result in
	rcu_barrier() returning prematurely?

Answer:
	This cannot happen. The reason is that on_each_cpu() has its last
	argument, the wait flag, set to "1". This flag is passed through
	to smp_call_function() and further to smp_call_function_on_cpu(),
	causing the latter to spin until the cross-CPU invocation of
	rcu_barrier_func() has completed. This by itself would prevent
	a grace period from completing on non-CONFIG_PREEMPTION kernels,
	since each CPU must undergo a context switch (or other quiescent
	state) before the grace period can complete. However, this is
	of no use in CONFIG_PREEMPTION kernels.

	Therefore, on_each_cpu() disables preemption across its call
	to smp_call_function() and also across the local call to
	rcu_barrier_func(). Because recent RCU implementations treat
	preemption-disabled regions of code as RCU read-side critical
	sections, this prevents grace periods from completing. This
	means that all CPUs have executed rcu_barrier_func() before
	the first rcu_barrier_callback() can possibly execute, in turn
	preventing rcu_barrier_cpu_count from prematurely reaching zero.

	But if on_each_cpu() ever decides to forgo disabling preemption,
	as might well happen due to real-time latency considerations,
	initializing rcu_barrier_cpu_count to one will save the day.

:ref:`Back to Quick Quiz #3 <rcubarrier_quiz_3>`