What is RCU?  --  "Read, Copy, Update"
======================================

Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

| 1.	What is RCU, Fundamentally?          https://lwn.net/Articles/262464/
| 2.	What is RCU? Part 2: Usage           https://lwn.net/Articles/263130/
| 3.	RCU part 3: the RCU API              https://lwn.net/Articles/264090/
| 4.	The RCU API, 2010 Edition            https://lwn.net/Articles/418853/
| 	2010 Big API Table                   https://lwn.net/Articles/419086/
| 5.	The RCU API, 2014 Edition            https://lwn.net/Articles/609904/
| 	2014 Big API Table                   https://lwn.net/Articles/609973/
| 6.	The RCU API, 2019 Edition            https://lwn.net/Articles/777036/
| 	2019 Big API Table                   https://lwn.net/Articles/777165/

For those preferring video:

| 1.	Unraveling RCU Mysteries: Fundamentals          https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries
| 2.	Unraveling RCU Mysteries: Additional Use Cases  https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries-additional-use-cases

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations.  Although RCU is actually quite simple, making effective use
of it requires you to think differently about your code.  Another part
of the problem is the mistaken assumption that there is "one true way" to
describe and to use RCU.  Instead, experience has shown that different
people must take different paths to arrive at an understanding of RCU,
depending on their experiences and use cases.  This document provides
several different paths, as follows:

:ref:`1. RCU OVERVIEW <1_whatisRCU>`

:ref:`2. WHAT IS RCU'S CORE API? <2_whatisRCU>`

:ref:`3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API? <3_whatisRCU>`

:ref:`4. WHAT IF MY UPDATING THREAD CANNOT BLOCK? <4_whatisRCU>`

:ref:`5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU? <5_whatisRCU>`

:ref:`6. ANALOGY WITH READER-WRITER LOCKING <6_whatisRCU>`

:ref:`7. ANALOGY WITH REFERENCE COUNTING <7_whatisRCU>`

:ref:`8. FULL LIST OF RCU APIs <8_whatisRCU>`

:ref:`9. ANSWERS TO QUICK QUIZZES <9_whatisRCU>`

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Sections 6 and 7.  Section 8 serves as an index to the docbook
API documentation, and Section 9 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)

.. _1_whatisRCU:

1. RCU OVERVIEW
----------------

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.	Remove pointers to a data structure, so that subsequent
	readers cannot gain a reference to it.

b.	Wait for all previous readers to complete their RCU read-side
	critical sections.

c.	At this point, there cannot be any readers who hold references
	to the data structure, so it now may safely be reclaimed
	(e.g., kfree()d).

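As a minimal sketch of these three steps, consider an element being
removed from an RCU-protected linked list.  This is not taken from the
kernel: struct foo and its list field are hypothetical names, and the
update-side lock is assumed to be held by the caller::

	struct foo *p = ...;	/* Element to be removed. */

	list_del_rcu(&p->list);	/* (a) Unpublish: new readers cannot find p. */
	synchronize_rcu();	/* (b) Wait for pre-existing readers to finish. */
	kfree(p);		/* (c) Reclaim: no reader can still hold p. */
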
Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.

.. _2_whatisRCU:

2. WHAT IS RCU'S CORE API?
---------------------------

The core RCU API is quite small:

a.	rcu_read_lock()
b.	rcu_read_unlock()
c.	synchronize_rcu() / call_rcu()
d.	rcu_assign_pointer()
e.	rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the others will be enumerated
later.  See the kernel docbook documentation for more info, or look
directly at the function header comments.

rcu_read_lock()
^^^^^^^^^^^^^^^
	void rcu_read_lock(void);

	This temporal primitive is used by a reader to inform the
	reclaimer that the reader is entering an RCU read-side critical
	section.  It is illegal to block while in an RCU read-side
	critical section, though kernels built with CONFIG_PREEMPT_RCU
	can preempt RCU read-side critical sections.  Any RCU-protected
	data structure accessed during an RCU read-side critical section
	is guaranteed to remain unreclaimed for the full duration of that
	critical section.  Reference counts may be used in conjunction
	with RCU to maintain longer-term references to data structures.

rcu_read_unlock()
^^^^^^^^^^^^^^^^^
	void rcu_read_unlock(void);

	This temporal primitive is used by a reader to inform the
	reclaimer that the reader is exiting an RCU read-side critical
	section.  Note that RCU read-side critical sections may be nested
	and/or overlapping.

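	For example, a reader might bracket its accesses as follows.
	This is a minimal sketch: gbl_foo stands in for the RCU-protected
	pointer used by the fuller examples in Section 3, and
	do_something_with() is a hypothetical non-blocking function::

		rcu_read_lock();		/* Enter read-side critical section. */
		p = rcu_dereference(gbl_foo);	/* Fetch the RCU-protected pointer. */
		do_something_with(p->a);	/* Use it, but do not block. */
		rcu_read_unlock();		/* Exit: p may now be reclaimed. */
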
synchronize_rcu()
^^^^^^^^^^^^^^^^^
	void synchronize_rcu(void);

	This temporal primitive marks the end of updater code and the
	beginning of reclaimer code.  It does this by blocking until
	all pre-existing RCU read-side critical sections on all CPUs
	have completed.  Note that synchronize_rcu() will **not**
	necessarily wait for any subsequent RCU read-side critical
	sections to complete.  For example, consider the following
	sequence of events::

	         CPU 0                  CPU 1                 CPU 2
	     ----------------- ------------------------- ---------------
	 1.  rcu_read_lock()
	 2.                    enters synchronize_rcu()
	 3.                                               rcu_read_lock()
	 4.  rcu_read_unlock()
	 5.                    exits synchronize_rcu()
	 6.                                               rcu_read_unlock()

	To reiterate, synchronize_rcu() waits only for ongoing RCU
	read-side critical sections to complete, not necessarily for
	any that begin after synchronize_rcu() is invoked.

	Of course, synchronize_rcu() does not necessarily return
	**immediately** after the last pre-existing RCU read-side critical
	section completes.  For one thing, there might well be scheduling
	delays.  For another thing, many RCU implementations process
	requests in batches in order to improve efficiencies, which can
	further delay synchronize_rcu().

	Since synchronize_rcu() is the API that must figure out when
	readers are done, its implementation is key to RCU.  For RCU
	to be useful in all but the most read-intensive situations,
	synchronize_rcu()'s overhead must also be quite small.

	The call_rcu() API is an asynchronous callback form of
	synchronize_rcu(), and is described in more detail in a later
	section.  Instead of blocking, it registers a function and
	argument which are invoked after all ongoing RCU read-side
	critical sections have completed.  This callback variant is
	particularly useful in situations where it is illegal to block
	or where update-side performance is critically important.

	However, the call_rcu() API should not be used lightly, as use
	of the synchronize_rcu() API generally results in simpler code.
	In addition, the synchronize_rcu() API has the nice property
	of automatically limiting the update rate should grace periods
	be delayed.  This property results in system resilience in the
	face of denial-of-service attacks.  Code using call_rcu() should
	limit the update rate in order to gain this same sort of
	resilience.  See checklist.rst for some approaches to limiting
	the update rate.

rcu_assign_pointer()
^^^^^^^^^^^^^^^^^^^^
	void rcu_assign_pointer(p, typeof(p) v);

	Yes, rcu_assign_pointer() **is** implemented as a macro, though it
	would be cool to be able to declare a function in this manner.
	(Compiler experts will no doubt disagree.)

	The updater uses this spatial macro to assign a new value to an
	RCU-protected pointer, in order to safely communicate the change
	in value from the updater to the reader.  This is a spatial (as
	opposed to temporal) macro.  It does not evaluate to an rvalue,
	but it does execute any memory-barrier instructions required
	for a given CPU architecture.  Its ordering properties are that
	of a store-release operation.

	Perhaps just as important, it serves to document (1) which
	pointers are protected by RCU and (2) the point at which a
	given structure becomes accessible to other CPUs.  That said,
	rcu_assign_pointer() is most frequently used indirectly, via
	the _rcu list-manipulation primitives such as list_add_rcu().

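	For example, an updater might initialize and then publish a new
	structure as follows.  This sketch borrows the gbl_foo pointer and
	the a/b/c fields of struct foo from the examples in Section 3::

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		p->a = 1;	/* Fully initialize the structure... */
		p->b = 2;
		p->c = 3;
		rcu_assign_pointer(gbl_foo, p);	/* ...then publish it, with
						   store-release ordering. */
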
rcu_dereference()
^^^^^^^^^^^^^^^^^
	typeof(p) rcu_dereference(p);

	Like rcu_assign_pointer(), rcu_dereference() must be implemented
	as a macro.

	The reader uses the spatial rcu_dereference() macro to fetch
	an RCU-protected pointer, which returns a value that may
	then be safely dereferenced.  Note that rcu_dereference()
	does not actually dereference the pointer; instead, it
	protects the pointer for later dereferencing.  It also
	executes any needed memory-barrier instructions for a given
	CPU architecture.  Currently, only Alpha needs memory barriers
	within rcu_dereference() -- on other CPUs, it compiles to a
	volatile load.

	Common coding practice uses rcu_dereference() to copy an
	RCU-protected pointer to a local variable, then dereferences
	this local variable, for example as follows::

		p = rcu_dereference(head.next);
		return p->data;

	However, in this case, one could just as easily combine these
	two into one statement::

		return rcu_dereference(head.next)->data;

	If you are going to be fetching multiple fields from the
	RCU-protected structure, using the local variable is of
	course preferred.  Repeated rcu_dereference() calls look
	ugly, do not guarantee that the same pointer will be returned
	if an update happened while in the critical section, and incur
	unnecessary overhead on Alpha CPUs.

	Note that the value returned by rcu_dereference() is valid
	only within the enclosing RCU read-side critical section [1]_.
	For example, the following is **not** legal::

		rcu_read_lock();
		p = rcu_dereference(head.next);
		rcu_read_unlock();

		x = p->address;	/* BUG!!! */

		rcu_read_lock();
		y = p->data;	/* BUG!!! */
		rcu_read_unlock();

	Holding a reference from one RCU read-side critical section
	to another is just as illegal as holding a reference from
	one lock-based critical section to another!  Similarly,
	using a reference outside of the critical section in which
	it was acquired is just as illegal as doing so with normal
	locking.

	As with rcu_assign_pointer(), an important function of
	rcu_dereference() is to document which pointers are protected by
	RCU, in particular, flagging a pointer that is subject to changing
	at any time, including immediately after the rcu_dereference().
	And, again like rcu_assign_pointer(), rcu_dereference() is
	typically used indirectly, via the _rcu list-manipulation
	primitives, such as list_for_each_entry_rcu() [2]_.

.. [1] The variant rcu_dereference_protected() can be used outside
       of an RCU read-side critical section as long as the usage is
       protected by locks acquired by the update-side code.  This variant
       avoids the lockdep warning that would happen when using (for
       example) rcu_dereference() without rcu_read_lock() protection.
       Using rcu_dereference_protected() also has the advantage
       of permitting compiler optimizations that rcu_dereference()
       must prohibit.  The rcu_dereference_protected() variant takes
       a lockdep expression to indicate which locks must be acquired
       by the caller.  If the indicated protection is not provided,
       a lockdep splat is emitted.  See Design/Requirements/Requirements.rst
       and the API's code comments for more details and example usage.

.. [2] If the list_for_each_entry_rcu() instance might be used by
       update-side code as well as by RCU readers, then an additional
       lockdep expression can be added to its list of arguments.
       For example, given an additional "lock_is_held(&mylock)" argument,
       the RCU lockdep code would complain only if this instance was
       invoked outside of an RCU read-side critical section and without
       the protection of mylock.

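To make these two footnotes concrete, here is a hedged sketch of both
facilities.  It reuses the gbl_foo pointer and foo_mutex lock from the
examples in Section 3; mylist, mylock, the list field, and
do_something_with() are hypothetical names::

	/* Footnote [1]: update-side access, outside of any RCU reader. */
	spin_lock(&foo_mutex);
	p = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
	/* ... modify *p, protected by foo_mutex ... */
	spin_unlock(&foo_mutex);

	/* Footnote [2]: traversal legal under either mylock or RCU. */
	list_for_each_entry_rcu(p, &mylist, list, lockdep_is_held(&mylock))
		do_something_with(p);
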
The following diagram shows how each API communicates among the
reader, updater, and reclaimer. ::

	    rcu_assign_pointer()
	                            +--------+
	    +---------------------->| reader |---------+
	    |                       +--------+         |
	    |                           |              |
	    |                           |              | Protect:
	    |                           |              | rcu_read_lock()
	    |                           |              | rcu_read_unlock()
	    |        rcu_dereference()  |              |
	    +---------+                 |              |
	    | updater |<----------------+              |
	    +---------+                                V
	    |                                    +-----------+
	    +----------------------------------->| reclaimer |
	                                         +-----------+
	      Defer:
	      synchronize_rcu() & call_rcu()

The RCU infrastructure observes the temporal sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.
The rcu_assign_pointer() and rcu_dereference() invocations communicate
spatial changes via stores to and loads from the RCU-protected pointer
in question.

There are at least three flavors of RCU usage in the Linux kernel.  The
diagram above shows the most common one.  On the updater side, the
rcu_assign_pointer(), synchronize_rcu() and call_rcu() primitives used
are the same for all three flavors.  However, for protection (on the
reader side), the primitives used vary depending on the flavor:

a.	rcu_read_lock() / rcu_read_unlock()
	rcu_dereference()

b.	rcu_read_lock_bh() / rcu_read_unlock_bh()
	local_bh_disable() / local_bh_enable()
	rcu_dereference_bh()

c.	rcu_read_lock_sched() / rcu_read_unlock_sched()
	preempt_disable() / preempt_enable()
	local_irq_save() / local_irq_restore()
	hardirq enter / hardirq exit
	NMI enter / NMI exit
	rcu_dereference_sched()

These three flavors are used as follows:

a.	RCU applied to normal data structures.

b.	RCU applied to networking data structures that may be subjected
	to remote denial-of-service attacks.

c.	RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.  The SRCU, RCU-Tasks,
RCU-Tasks-Rude, and RCU-Tasks-Trace flavors have similar relationships
among their assorted primitives.

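For example, a reader protecting a networking data structure per flavor
(b) might look like the following sketch, in which gbl_foo and
do_something_with() are again hypothetical stand-ins; the update side
would be unchanged from the vanilla-RCU examples::

	rcu_read_lock_bh();		/* Also disables softirq. */
	p = rcu_dereference_bh(gbl_foo);
	do_something_with(p->a);
	rcu_read_unlock_bh();
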
.. _3_whatisRCU:

3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
-----------------------------------------------

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.rst, arrayRCU.rst, and
NMI-RCU.rst. ::

	struct foo {
		int a;
		char b;
		long c;
	};
	DEFINE_SPINLOCK(foo_mutex);

	struct foo __rcu *gbl_foo;

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses synchronize_rcu() to ensure that any readers that might
	 * have references to the old structure complete before freeing
	 * the old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		synchronize_rcu();
		kfree(old_fp);
	}

	/*
	 * Return the value of field "a" of the current gbl_foo
	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
	 * to ensure that the structure does not get deleted out
	 * from under us, and use rcu_dereference() to ensure that
	 * we see the initialized version of the structure (important
	 * for DEC Alpha and for people reading the code).
	 */
	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

So, to sum up:

-	Use rcu_read_lock() and rcu_read_unlock() to guard RCU
	read-side critical sections.

-	Within an RCU read-side critical section, use rcu_dereference()
	to dereference RCU-protected pointers.

-	Use some solid design (such as locks or semaphores) to
	keep concurrent updates from interfering with each other.

-	Use rcu_assign_pointer() to update an RCU-protected pointer.
	This primitive protects concurrent readers from the updater,
	**not** concurrent updates from each other!  You therefore still
	need to use locking (or something similar) to keep concurrent
	rcu_assign_pointer() primitives from interfering with each other.

-	Use synchronize_rcu() **after** removing a data element from an
	RCU-protected data structure, but **before** reclaiming/freeing
	the data element, in order to wait for the completion of all
	RCU read-side critical sections that might be referencing that
	data element.

See checklist.rst for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.rst,
arrayRCU.rst, and NMI-RCU.rst.

.. _4_whatisRCU:

4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
--------------------------------------------

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

	void call_rcu(struct rcu_head *head, rcu_callback_t func);

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows::

	struct foo {
		int a;
		char b;
		long c;
		struct rcu_head rcu;
	};

The foo_update_a() function might then be written as follows::

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses call_rcu() to ensure that any readers that might have
	 * references to the old structure complete before freeing the
	 * old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		call_rcu(&old_fp->rcu, foo_reclaim);
	}

The foo_reclaim() function might appear as follows::

	void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		kfree(fp);
	}

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

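For example, given the struct foo shown above, container_of() recovers
a pointer to the enclosing structure from the address of its rcu field;
a minimal sketch::

	struct foo f;
	struct rcu_head *rp = &f.rcu;
	struct foo *fp = container_of(rp, struct foo, rcu);	/* fp == &f */
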
The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

-	Use call_rcu() **after** removing a data element from an
	RCU-protected data structure in order to register a callback
	function that will be invoked after the completion of all RCU
	read-side critical sections that might be referencing that
	data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback::

	kfree_rcu(old_fp, rcu);

If the occasional sleep is permitted, the single-argument form may
be used, omitting the rcu_head structure from struct foo. ::

	kfree_rcu_mightsleep(old_fp);

This variant almost never blocks, but might do so by invoking
synchronize_rcu() in response to memory-allocation failure.

Again, see checklist.rst for additional rules governing the use of RCU.

.. _5_whatisRCU:

5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
------------------------------------------------

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcu/update.c for a
production-quality implementation, and see:

	https://docs.google.com/document/d/1X0lThx8OK0ZgLMqVoXiR4ZrGURHrXK6NyLRbeXe3Xac/edit

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.

5A. "TOY" IMPLEMENTATION #1: LOCKING
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple::

	static DEFINE_RWLOCK(rcu_gp_mutex);

	void rcu_read_lock(void)
	{
		read_lock(&rcu_gp_mutex);
	}

	void rcu_read_unlock(void)
	{
		read_unlock(&rcu_gp_mutex);
	}

	void synchronize_rcu(void)
	{
		write_lock(&rcu_gp_mutex);
		smp_mb__after_spinlock();
		write_unlock(&rcu_gp_mutex);
	}

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]::

	#define rcu_assign_pointer(p, v) \
	({ \
		smp_store_release(&(p), (v)); \
	})

	#define rcu_dereference(p) \
	({ \
		typeof(p) _________p1 = READ_ONCE(p); \
		(_________p1); \
	})

The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then releases it.  This means
that once synchronize_rcu() exits, all RCU read-side critical sections
that were in progress before synchronize_rcu() was called are guaranteed
to have completed -- there is no way that synchronize_rcu() would have
been able to write-acquire the lock otherwise.  The smp_mb__after_spinlock()
promotes synchronize_rcu() to a full memory barrier in compliance with
the "Memory-Barrier Guarantees" listed in:

	Design/Requirements/Requirements.rst

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

.. _quiz_1:

Quick Quiz #1:
		Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  How could this deadlock be avoided?

:ref:`Answers to Quick Quiz <9_whatisRCU>`

5B. "TOY" EXAMPLE #2: CLASSIC RCU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPTION
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are
omitted. ::

	void rcu_read_lock(void) { }

	void rcu_read_unlock(void) { }

	void synchronize_rcu(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			run_on(cpu);
	}

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant **toy**!

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once **all** CPUs have executed a context switch, then **all** preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

.. _quiz_2:

Quick Quiz #2:
		Give an example where Classic RCU's read-side
		overhead is **negative**.

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _quiz_3:

Quick Quiz #3:
		If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		CONFIG_PREEMPT_RT, where normal spinlocks can block???

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _6_whatisRCU:

6. ANALOGY WITH READER-WRITER LOCKING
--------------------------------------

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be. ::

	@@ -5,5 +5,5 @@ struct el {
	 	struct list_head list;
	 	long key;
	 	spinlock_t mutex;
	 	int data;
	 	/* Other data fields */
	@@ -13,7 +13,7 @@
	-	rwlock_t listmutex;
	+	spinlock_t listmutex;
	 	struct el head;
	@@ -17,10 +17,10 @@
	 int search(long key, int *result)
	 {
	 	struct list_head *lp;
	 	struct el *p;

	-	read_lock(&listmutex);
	-	list_for_each_entry(p, head, lp) {
	+	rcu_read_lock();
	+	list_for_each_entry_rcu(p, head, lp) {
	 		if (p->key == key) {
	 			*result = p->data;
	-			read_unlock(&listmutex);
	+			rcu_read_unlock();
	 			return 1;
	 		}
	 	}
	-	read_unlock(&listmutex);
	+	rcu_read_unlock();
	 	return 0;
	 }
	@@ -29,10 +29,11 @@
	 int delete(long key)
	 {
	 	struct el *p;

	-	write_lock(&listmutex);
	+	spin_lock(&listmutex);
	 	list_for_each_entry(p, head, lp) {
	 		if (p->key == key) {
	-			list_del(&p->list);
	-			write_unlock(&listmutex);
	+			list_del_rcu(&p->list);
	+			spin_unlock(&listmutex);
	+			synchronize_rcu();
	 			kfree(p);
	 			return 1;
	 		}
	 	}
	-	write_unlock(&listmutex);
	+	spin_unlock(&listmutex);
	 	return 0;
	 }

Or, for those who prefer a side-by-side listing::

	 1 struct el {                          1 struct el {
	 2   struct list_head list;             2   struct list_head list;
	 3   long key;                          3   long key;
	 4   spinlock_t mutex;                  4   spinlock_t mutex;
	 5   int data;                          5   int data;
	 6   /* Other data fields */            6   /* Other data fields */
	 7 };                                   7 };
	 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
	 9 struct el head;                      9 struct el head;

::

	 1 int search(long key, int *result)    1 int search(long key, int *result)
	 2 {                                    2 {
	 3   struct list_head *lp;              3   struct list_head *lp;
	 4   struct el *p;                      4   struct el *p;
	 5                                      5
	 6   read_lock(&listmutex);             6   rcu_read_lock();
	 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
	 8     if (p->key == key) {             8     if (p->key == key) {
	 9       *result = p->data;             9       *result = p->data;
	10       read_unlock(&listmutex);      10       rcu_read_unlock();
	11       return 1;                     11       return 1;
	12     }                               12     }
	13   }                                 13   }
	14   read_unlock(&listmutex);          14   rcu_read_unlock();
	15   return 0;                         15   return 0;
	16 }                                   16 }

::

	 1 int delete(long key)                 1 int delete(long key)
	 2 {                                    2 {
	 3   struct el *p;                      3   struct el *p;
	 4                                      4
	 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
	 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
	 7     if (p->key == key) {             7     if (p->key == key) {
	 8       list_del(&p->list);            8       list_del_rcu(&p->list);
	 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
	                                       10       synchronize_rcu();
	10       kfree(p);                     11       kfree(p);
	11       return 1;                     12       return 1;
	12     }                               13     }
	13   }                                 14   }
	14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
	15   return 0;                         16   return 0;
	16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().

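For example, if struct el were given a struct rcu_head field named rcu
(an addition to the listings above, assumed here for illustration), the
blocking portion of the RCU-based delete() could be rewritten to use
kfree_rcu() as in the following sketch::

	if (p->key == key) {
		list_del_rcu(&p->list);
		spin_unlock(&listmutex);
		kfree_rcu(p, rcu);	/* Free p only after a grace period. */
		return 1;
	}
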
.. _7_whatisRCU:

7. ANALOGY WITH REFERENCE COUNTING
-----------------------------------

The reader-writer analogy (illustrated by the previous section) is not
always the best way to think about using RCU.  Another helpful analogy
considers RCU an effective reference count on everything which is
protected by RCU.

A reference count typically does not prevent the referenced object's
values from changing, but does prevent changes to type -- particularly the
gross change of type that happens when that object's memory is freed and
re-allocated for some other purpose.  Once a type-safe reference to the
object is obtained, some other mechanism is needed to ensure consistent
access to the data in the object.  This could involve taking a spinlock,
but with RCU the typical approach is to perform reads with SMP-aware
operations such as smp_load_acquire(), to perform updates with atomic
read-modify-write operations, and to provide the necessary ordering.
RCU provides a number of support functions that embed the required
operations and ordering, such as the list_for_each_entry_rcu() macro
used in the previous section.

A more focused view of the reference counting behavior is that,
between rcu_read_lock() and rcu_read_unlock(), any reference taken with
rcu_dereference() on a pointer marked as ``__rcu`` can be treated as
though a reference-count on that object has been temporarily increased.
This prevents the object from changing type.  Exactly what this means
will depend on normal expectations of objects of that type, but it
typically includes that spinlocks can still be safely locked, normal
reference counters can be safely manipulated, and ``__rcu`` pointers
can be safely dereferenced.

Some operations that one might expect to see on an object for
which an RCU reference is held include:

-	Copying out data that is guaranteed to be stable by the object's type.

-	Using kref_get_unless_zero() or similar to get a longer-term
	reference.  This may fail of course.

-	Acquiring a spinlock in the object, and checking if the object still
	is the expected object and if so, manipulating it freely.

The understanding that RCU provides a reference that only prevents a
change of type is particularly visible with objects allocated from a
slab cache marked ``SLAB_TYPESAFE_BY_RCU``.  RCU operations may yield a
reference to an object from such a cache that has been concurrently freed
and the memory reallocated to a completely different object, though of
the same type.  In this case RCU doesn't even protect the identity of the
object from changing, only its type.  So the object found may not be the
one expected, but it will be one where it is safe to take a reference
(and then potentially acquiring a spinlock), allowing subsequent code
to check whether the identity matches expectations.  It is tempting
to simply acquire the spinlock without first taking the reference, but
unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
initialized after each and every call to kmem_cache_alloc(), which renders
reference-free spinlock acquisition completely unsafe.  Therefore, when
using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
(Those willing to use a kmem_cache constructor may also use locking,
including cache-friendly sequence locking.)

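The resulting lookup pattern might look like the following sketch, in
which struct obj, its ref and key fields, obj_lookup(), and obj_release()
are hypothetical names assumed for illustration::

	struct obj *p;

	rcu_read_lock();
	p = obj_lookup(key);	/* RCU-protected, SLAB_TYPESAFE_BY_RCU cache. */
	if (p && !kref_get_unless_zero(&p->ref))
		p = NULL;	/* Object is being freed: treat as not found. */
	if (p && p->key != key) {
		/* Type-safe, but identity changed: not the object we wanted. */
		kref_put(&p->ref, obj_release);
		p = NULL;
	}
	rcu_read_unlock();	/* If non-NULL, p is now held by its counter. */
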
With traditional reference counting -- such as that implemented by the
kref library in Linux -- there is typically code that runs when the last
reference to an object is dropped.  With kref, this is the function
passed to kref_put().  When RCU is being used, such finalization code
must not be run until all ``__rcu`` pointers referencing the object have
been updated, and then a grace period has passed.  Every remaining
globally visible pointer to the object must be considered to be a
potential counted reference, and the finalization code is typically run
using call_rcu() only after all those pointers have been changed.

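As a sketch of this ordering, an updater might first unpublish the
object and only then, after a grace period, drop the reference that the
enclosing data structure itself held.  The struct obj, its list and rcu
fields, and obj_free() are again hypothetical::

	static void obj_release_rcu(struct rcu_head *rhp)
	{
		struct obj *op = container_of(rhp, struct obj, rcu);

		/* Runs the finalizer only if this was the last reference. */
		kref_put(&op->ref, obj_free);
	}

	/* Updater, with the relevant update-side lock held: */
	list_del_rcu(&op->list);		/* No new reader can find op. */
	call_rcu(&op->rcu, obj_release_rcu);	/* Drop the list's reference
						   only after a grace period. */
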
To see how to choose between these two analogies -- of RCU as a
reader-writer lock and RCU as a reference counting system -- it is useful
to reflect on the scale of the thing being protected.  The reader-writer
lock analogy looks at larger multi-part objects such as a linked list
and shows how RCU can facilitate concurrency while elements are added
to, and removed from, the list.  The reference-count analogy looks at
the individual objects and looks at how they can be accessed safely
within whatever whole they are a part of.

.. _8_whatisRCU:

8. FULL LIST OF RCU APIs
-------------------------

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal::

	list_entry_rcu
	list_first_entry_rcu
	list_next_rcu
	list_for_each_entry_rcu
	list_for_each_entry_continue_rcu
	list_for_each_entry_from_rcu
	list_first_or_null_rcu
	list_next_or_null_rcu
	hlist_first_rcu
	hlist_next_rcu
	hlist_pprev_rcu
	hlist_for_each_entry_rcu
	hlist_for_each_entry_rcu_bh
	hlist_for_each_entry_from_rcu
	hlist_for_each_entry_continue_rcu
	hlist_for_each_entry_continue_rcu_bh
	hlist_nulls_first_rcu
	hlist_nulls_for_each_entry_rcu
	hlist_bl_first_rcu
	hlist_bl_for_each_entry_rcu

RCU pointer/list update::

	rcu_assign_pointer
	list_add_rcu
	list_add_tail_rcu
	list_del_rcu
	list_replace_rcu
	hlist_add_behind_rcu
	hlist_add_before_rcu
	hlist_add_head_rcu
	hlist_add_tail_rcu
	hlist_del_rcu
	hlist_del_init_rcu
	hlist_replace_rcu
	list_splice_init_rcu
	list_splice_tail_init_rcu
	hlist_nulls_del_init_rcu
	hlist_nulls_del_rcu
	hlist_nulls_add_head_rcu
	hlist_bl_add_head_rcu
	hlist_bl_del_init_rcu
	hlist_bl_del_rcu
	hlist_bl_set_first_rcu

RCU::

	Critical sections	Grace period		Barrier

	rcu_read_lock		synchronize_net		rcu_barrier
	rcu_read_unlock		synchronize_rcu
	rcu_dereference		synchronize_rcu_expedited
	rcu_read_lock_held	call_rcu
	rcu_dereference_check	kfree_rcu
	rcu_dereference_protected

bh::

	Critical sections	Grace period		Barrier

	rcu_read_lock_bh	call_rcu		rcu_barrier
	rcu_read_unlock_bh	synchronize_rcu
	[local_bh_disable]	synchronize_rcu_expedited
	[and friends]
	rcu_dereference_bh
	rcu_dereference_bh_check
	rcu_dereference_bh_protected
	rcu_read_lock_bh_held

sched::

	Critical sections	Grace period		Barrier

	rcu_read_lock_sched	call_rcu		rcu_barrier
	rcu_read_unlock_sched	synchronize_rcu
	[preempt_disable]	synchronize_rcu_expedited
	[and friends]
	rcu_read_lock_sched_notrace
	rcu_read_unlock_sched_notrace
	rcu_dereference_sched
	rcu_dereference_sched_check
	rcu_dereference_sched_protected
	rcu_read_lock_sched_held

RCU-Tasks::

	Critical sections	Grace period		Barrier

	N/A			call_rcu_tasks		rcu_barrier_tasks
				synchronize_rcu_tasks

RCU-Tasks-Rude::

	Critical sections	Grace period		Barrier

	N/A			call_rcu_tasks_rude	rcu_barrier_tasks_rude
				synchronize_rcu_tasks_rude

RCU-Tasks-Trace::

	Critical sections	Grace period		Barrier

	rcu_read_lock_trace	call_rcu_tasks_trace	rcu_barrier_tasks_trace
	rcu_read_unlock_trace	synchronize_rcu_tasks_trace

SRCU::

	Critical sections	Grace period		Barrier

	srcu_read_lock		call_srcu		srcu_barrier
	srcu_read_unlock	synchronize_srcu
	srcu_dereference	synchronize_srcu_expedited
	srcu_dereference_check
	srcu_read_lock_held

SRCU: Initialization/cleanup::

	DEFINE_SRCU
	DEFINE_STATIC_SRCU
	init_srcu_struct
	cleanup_srcu_struct

All: lockdep-checked RCU utility APIs::

	RCU_LOCKDEP_WARN
	rcu_sleep_check

All: Unchecked RCU-protected pointer access::

	rcu_dereference_raw

All: Unchecked RCU-protected pointer access with dereferencing prohibited::

	rcu_access_pointer

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.	Will readers need to block?  If so, you need SRCU.

b.	Will readers need to block and are you doing tracing, for
	example, ftrace or BPF?  If so, you need RCU-tasks,
	RCU-tasks-rude, and/or RCU-tasks-trace.

c.	What about the -rt patchset?  If readers would need to block in
	a non-rt kernel, you need SRCU.  If readers would block when
	acquiring spinlocks in a -rt kernel, but not in a non-rt kernel,
	SRCU is not necessary.  (The -rt patchset turns spinlocks into
	sleeplocks, hence this distinction.)

d.	Do you need to treat NMI handlers, hardirq handlers,
	and code segments with preemption disabled (whether
	via preempt_disable(), local_irq_save(), local_bh_disable(),
	or some other mechanism) as if they were explicit RCU readers?
	If so, RCU-sched readers are the only choice that will work
	for you, but since about v4.20 you can use the vanilla RCU
	update primitives.

e.	Do you need RCU grace periods to complete even in the face of
	softirq monopolization of one or more of the CPUs?  For example,
	is your code subject to network-based denial-of-service attacks?
	If so, you should disable softirq across your readers, for
	example, by using rcu_read_lock_bh().  Since about v4.20 you
	can use the vanilla RCU update primitives.

f.	Is your workload too update-intensive for normal use of
	RCU, but inappropriate for other synchronization mechanisms?
	If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
	named SLAB_DESTROY_BY_RCU).  But please be careful!

g.	Do you need read-side critical sections that are respected even
	on CPUs that are deep in the idle loop, during entry to or exit
	from user-mode execution, or on an offlined CPU?  If so, SRCU
	and RCU Tasks Trace are the only choices that will work for you,
	with SRCU being strongly preferred in almost all cases.

h.	Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.

.. _9_whatisRCU:

9. ANSWERS TO QUICK QUIZZES
----------------------------

Quick Quiz #1:
		Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  [Referring to the lock-based "toy" RCU
		algorithm.]

Answer:
		Consider the following sequence of events:

		1.	CPU 0 acquires some unrelated lock, call it
			"problematic_lock", disabling irq via
			spin_lock_irqsave().

		2.	CPU 1 enters synchronize_rcu(), write-acquiring
			rcu_gp_mutex.

		3.	CPU 0 enters rcu_read_lock(), but must wait
			because CPU 1 holds rcu_gp_mutex.

		4.	CPU 1 is interrupted, and the irq handler
			attempts to acquire problematic_lock.

		The system is now deadlocked.

		One way to avoid this deadlock is to use an approach like
		that of CONFIG_PREEMPT_RT, where all normal spinlocks
		become blocking locks, and all irq handlers execute in
		the context of special tasks.  In this case, in step 4
		above, the irq handler would block, allowing CPU 1 to
		release rcu_gp_mutex, avoiding the deadlock.

		Even in the absence of deadlock, this RCU implementation
		allows latency to "bleed" from readers to other
		readers through synchronize_rcu().  To see this,
		consider task A in an RCU read-side critical section
		(thus read-holding rcu_gp_mutex), task B blocked
		attempting to write-acquire rcu_gp_mutex, and
		task C blocked in rcu_read_lock() attempting to
		read-acquire rcu_gp_mutex.  Task A's RCU read-side
		latency is holding up task C, albeit indirectly via
		task B.

		Realtime RCU implementations therefore use a counter-based
		approach where tasks in RCU read-side critical sections
		cannot be blocked by tasks executing synchronize_rcu().

:ref:`Back to Quick Quiz #1 <quiz_1>`

Quick Quiz #2:
		Give an example where Classic RCU's read-side
		overhead is **negative**.

Answer:
		Imagine a single-CPU system with a non-CONFIG_PREEMPTION
		kernel where a routing table is used by process-context
		code, but can be updated by irq-context code (for example,
		by an "ICMP REDIRECT" packet).  The usual way of handling
		this would be to have the process-context code disable
		interrupts while searching the routing table.  Use of
		RCU allows such interrupt-disabling to be dispensed with.
		Thus, without RCU, you pay the cost of disabling interrupts,
		and with RCU you don't.

		One can argue that the overhead of RCU in this
		case is negative with respect to the single-CPU
		interrupt-disabling approach.  Others might argue that
		the overhead of RCU is merely zero, and that replacing
		the positive overhead of the interrupt-disabling scheme
		with the zero-overhead RCU scheme does not constitute
		negative overhead.

		In real life, of course, things are more complex.  But
		even the theoretical possibility of negative overhead for
		a synchronization primitive is a bit unexpected.  ;-)

:ref:`Back to Quick Quiz #2 <quiz_2>`

Quick Quiz #3:
		If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		CONFIG_PREEMPT_RT, where normal spinlocks can block???

Answer:
		Just as CONFIG_PREEMPT_RT permits preemption of spinlock
		critical sections, it permits preemption of RCU
		read-side critical sections.  It also permits
		spinlocks blocking while in RCU read-side critical
		sections.

		Why the apparent inconsistency?  Because it is
		possible to use priority boosting to keep the RCU
		grace periods short if need be (for example, if running
		short of memory).  In contrast, if blocking waiting
		for (say) network reception, there is no way to know
		what should be boosted.  Especially given that the
		process we need to boost might well be a human being
		who just went out for a pizza or something.  And although
		a computer-operated cattle prod might arouse serious
		interest, it might also provoke serious objections.
		Besides, how does the computer know what pizza parlor
		the human being went to???

:ref:`Back to Quick Quiz #3 <quiz_3>`

ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.