.. _whatisrcu_doc:

What is RCU? -- "Read, Copy, Update"
======================================

Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

| 1. What is RCU, Fundamentally?   http://lwn.net/Articles/262464/
| 2. What is RCU? Part 2: Usage    http://lwn.net/Articles/263130/
| 3. RCU part 3: the RCU API       http://lwn.net/Articles/264090/
| 4. The RCU API, 2010 Edition     http://lwn.net/Articles/418853/
|    2010 Big API Table            http://lwn.net/Articles/419086/
| 5. The RCU API, 2014 Edition     http://lwn.net/Articles/609904/
|    2014 Big API Table            http://lwn.net/Articles/609973/


What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during the
2.5 development effort, that is optimized for read-mostly situations.
Although RCU is actually quite simple once you understand it, getting
there can sometimes be a challenge.  Part of the problem is that most
of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

:ref:`1. RCU OVERVIEW <1_whatisRCU>`

:ref:`2. WHAT IS RCU'S CORE API? <2_whatisRCU>`

:ref:`3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API? <3_whatisRCU>`

:ref:`4. WHAT IF MY UPDATING THREAD CANNOT BLOCK? <4_whatisRCU>`

:ref:`5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU? <5_whatisRCU>`

:ref:`6. ANALOGY WITH READER-WRITER LOCKING <6_whatisRCU>`

:ref:`7. FULL LIST OF RCU APIs <7_whatisRCU>`

:ref:`8. ANSWERS TO QUICK QUIZZES <8_whatisRCU>`

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)

.. _1_whatisRCU:

1. RCU OVERVIEW
----------------

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
It is safe to run the removal phase concurrently with readers because
the semantics of modern CPUs guarantee that readers will see either the
old or the new version of the data structure rather than a partially
updated reference.  The reclamation phase does the work of reclaiming
(e.g., freeing) the data items removed from the data structure during the
removal phase.  Because reclaiming data items can disrupt any readers
concurrently referencing those data items, the reclamation phase must
not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following
(a minimal code sketch appears after this list):

a. Remove pointers to a data structure, so that subsequent
   readers cannot gain a reference to it.

b. Wait for all previous readers to complete their RCU read-side
   critical sections.

c. At this point, there cannot be any readers who hold references
   to the data structure, so it now may safely be reclaimed
   (e.g., kfree()d).
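
The following minimal sketch shows these three steps in code, assuming
a global RCU-protected pointer gp whose updates are guarded by a
hypothetical gp_lock.  It illustrates the sequence itself, not any
particular piece of kernel code::

    spin_lock(&gp_lock);
    p = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
    rcu_assign_pointer(gp, NULL);  /* (a) remove the pointer */
    spin_unlock(&gp_lock);
    synchronize_rcu();             /* (b) wait for pre-existing readers */
    kfree(p);                      /* (c) now safe to reclaim */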

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization -- in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.

.. _2_whatisRCU:

2. WHAT IS RCU'S CORE API?
---------------------------

The core RCU API is quite small:

a. rcu_read_lock()
b. rcu_read_unlock()
c. synchronize_rcu() / call_rcu()
d. rcu_assign_pointer()
e. rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the remaining members of
the API are enumerated later.  See the kernel docbook documentation
for more info, or look directly at the function header comments.

rcu_read_lock()
^^^^^^^^^^^^^^^
    void rcu_read_lock(void);

    Used by a reader to inform the reclaimer that the reader is
    entering an RCU read-side critical section.  It is illegal
    to block while in an RCU read-side critical section, though
    kernels built with CONFIG_PREEMPT_RCU can preempt RCU
    read-side critical sections.  Any RCU-protected data structure
    accessed during an RCU read-side critical section is guaranteed to
    remain unreclaimed for the full duration of that critical section.
    Reference counts may be used in conjunction with RCU to maintain
    longer-term references to data structures.

rcu_read_unlock()
^^^^^^^^^^^^^^^^^
    void rcu_read_unlock(void);

    Used by a reader to inform the reclaimer that the reader is
    exiting an RCU read-side critical section.  Note that RCU
    read-side critical sections may be nested and/or overlapping.
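
    For example, a minimal reader might look as follows.  This is a
    sketch rather than code from the kernel: gp is an assumed
    RCU-protected pointer and do_something() is a hypothetical helper::

        rcu_read_lock();                /* begin read-side critical section */
        p = rcu_dereference(gp);        /* fetch RCU-protected pointer */
        if (p)
                do_something(p->a);     /* hypothetical field access */
        rcu_read_unlock();              /* done: p must not be used afterwards */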

synchronize_rcu()
^^^^^^^^^^^^^^^^^
    void synchronize_rcu(void);

    Marks the end of updater code and the beginning of reclaimer
    code.  It does this by blocking until all pre-existing RCU
    read-side critical sections on all CPUs have completed.
    Note that synchronize_rcu() will **not** necessarily wait for
    any subsequent RCU read-side critical sections to complete.
    For example, consider the following sequence of events::

               CPU 0                  CPU 1                  CPU 2
           -----------------    ------------------------    ---------------
        1. rcu_read_lock()
        2.                      enters synchronize_rcu()
        3.                                                   rcu_read_lock()
        4. rcu_read_unlock()
        5.                      exits synchronize_rcu()
        6.                                                   rcu_read_unlock()

    To reiterate, synchronize_rcu() waits only for ongoing RCU
    read-side critical sections to complete, not necessarily for
    any that begin after synchronize_rcu() is invoked.

    Of course, synchronize_rcu() does not necessarily return
    **immediately** after the last pre-existing RCU read-side critical
    section completes.  For one thing, there might well be scheduling
    delays.  For another thing, many RCU implementations process
    requests in batches in order to improve efficiencies, which can
    further delay synchronize_rcu().

    Since synchronize_rcu() is the API that must figure out when
    readers are done, its implementation is key to RCU.  For RCU
    to be useful in all but the most read-intensive situations,
    synchronize_rcu()'s overhead must also be quite small.

    The call_rcu() API is a callback form of synchronize_rcu(),
    and is described in more detail in a later section.  Instead of
    blocking, it registers a function and argument which are invoked
    after all ongoing RCU read-side critical sections have completed.
    This callback variant is particularly useful in situations where
    it is illegal to block or where update-side performance is
    critically important.

    However, the call_rcu() API should not be used lightly, as use
    of the synchronize_rcu() API generally results in simpler code.
    In addition, the synchronize_rcu() API has the nice property
    of automatically limiting update rate should grace periods
    be delayed.  This property results in system resilience in the
    face of denial-of-service attacks.  Code using call_rcu() should
    limit update rate in order to gain this same sort of resilience.
    See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()
^^^^^^^^^^^^^^^^^^^^
    void rcu_assign_pointer(p, typeof(p) v);

    Yes, rcu_assign_pointer() **is** implemented as a macro, though it
    would be cool to be able to declare a function in this manner.
    (Compiler experts will no doubt disagree.)

    The updater uses this macro to assign a new value to an
    RCU-protected pointer, in order to safely communicate the change
    in value from the updater to the reader.  This macro does not
    evaluate to an rvalue, but it does execute any memory-barrier
    instructions required for a given CPU architecture.

    Perhaps just as important, it serves to document (1) which
    pointers are protected by RCU and (2) the point at which a
    given structure becomes accessible to other CPUs.  That said,
    rcu_assign_pointer() is most frequently used indirectly, via
    the _rcu list-manipulation primitives such as list_add_rcu().
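
    For example, an updater might publish a newly allocated and fully
    initialized structure as follows (a sketch assuming an
    RCU-protected pointer gp and a hypothetical init_foo() helper)::

        p = kmalloc(sizeof(*p), GFP_KERNEL);
        init_foo(p);                    /* initialize before publication */
        rcu_assign_pointer(gp, p);      /* readers may now see the structure */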

rcu_dereference()
^^^^^^^^^^^^^^^^^
    typeof(p) rcu_dereference(p);

    Like rcu_assign_pointer(), rcu_dereference() must be implemented
    as a macro.

    The reader uses rcu_dereference() to fetch an RCU-protected
    pointer, which returns a value that may then be safely
    dereferenced.  Note that rcu_dereference() does not actually
    dereference the pointer; instead, it protects the pointer for
    later dereferencing.  It also executes any needed memory-barrier
    instructions for a given CPU architecture.  Currently, only Alpha
    needs memory barriers within rcu_dereference() -- on other CPUs,
    it compiles to nothing, not even a compiler directive.

    Common coding practice uses rcu_dereference() to copy an
    RCU-protected pointer to a local variable, then dereferences
    this local variable, for example as follows::

        p = rcu_dereference(head.next);
        return p->data;

    However, in this case, one could just as easily combine these
    into one statement::

        return rcu_dereference(head.next)->data;

    If you are going to be fetching multiple fields from the
    RCU-protected structure, using the local variable is of
    course preferred.  Repeated rcu_dereference() calls look
    ugly, do not guarantee that the same pointer will be returned
    if an update happened while in the critical section, and incur
    unnecessary overhead on Alpha CPUs.

    Note that the value returned by rcu_dereference() is valid
    only within the enclosing RCU read-side critical section [1]_.
    For example, the following is **not** legal::

        rcu_read_lock();
        p = rcu_dereference(head.next);
        rcu_read_unlock();
        x = p->address; /* BUG!!! */
        rcu_read_lock();
        y = p->data;    /* BUG!!! */
        rcu_read_unlock();

    Holding a reference from one RCU read-side critical section
    to another is just as illegal as holding a reference from
    one lock-based critical section to another!  Similarly,
    using a reference outside of the critical section in which
    it was acquired is just as illegal as doing so with normal
    locking.

    As with rcu_assign_pointer(), an important function of
    rcu_dereference() is to document which pointers are protected by
    RCU, in particular, flagging a pointer that is subject to changing
    at any time, including immediately after the rcu_dereference().
    And, again like rcu_assign_pointer(), rcu_dereference() is
    typically used indirectly, via the _rcu list-manipulation
    primitives, such as list_for_each_entry_rcu() [2]_.

.. [1] The variant rcu_dereference_protected() can be used outside
       of an RCU read-side critical section as long as the usage is
       protected by locks acquired by the update-side code.  This variant
       avoids the lockdep warning that would happen when using (for
       example) rcu_dereference() without rcu_read_lock() protection.
       Using rcu_dereference_protected() also has the advantage
       of permitting compiler optimizations that rcu_dereference()
       must prohibit.  The rcu_dereference_protected() variant takes
       a lockdep expression to indicate which locks must be acquired
       by the caller.  If the indicated protection is not provided,
       a lockdep splat is emitted.  See
       Documentation/RCU/Design/Requirements/Requirements.rst and the
       API's code comments for more details and example usage.
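
       For example, update-side code holding the lock that guards an
       RCU-protected pointer gp might use this variant as follows (a
       sketch; my_lock is a hypothetical update-side lock)::

           spin_lock(&my_lock);
           p = rcu_dereference_protected(gp, lockdep_is_held(&my_lock));
           /* ... update *p or replace gp, all under my_lock ... */
           spin_unlock(&my_lock);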

.. [2] If the list_for_each_entry_rcu() instance might be used by
       update-side code as well as by RCU readers, then an additional
       lockdep expression can be added to its list of arguments.
       For example, given an additional "lock_is_held(&mylock)" argument,
       the RCU lockdep code would complain only if this instance was
       invoked outside of an RCU read-side critical section and without
       the protection of mylock.

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.
::


        rcu_assign_pointer()
                                +--------+
        +---------------------->| reader |---------+
        |                       +--------+         |
        |                           |              |
        |                           |              | Protect:
        |                           |              | rcu_read_lock()
        |                           |              | rcu_read_unlock()
        |        rcu_dereference()  |              |
        +---------+                 |              |
        | updater |<----------------+              |
        +---------+                                V
        |                                    +-----------+
        +----------------------------------->| reclaimer |
                                             +-----------+
          Defer:
          synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are at least three flavors of RCU usage in the Linux kernel.  The
diagram above shows the most common one.  On the updater side, the
rcu_assign_pointer(), synchronize_rcu() and call_rcu() primitives used
are the same for all three flavors.  However for protection (on the
reader side), the primitives used vary depending on the flavor:

a. rcu_read_lock() / rcu_read_unlock()
   rcu_dereference()

b. rcu_read_lock_bh() / rcu_read_unlock_bh()
   local_bh_disable() / local_bh_enable()
   rcu_dereference_bh()

c. rcu_read_lock_sched() / rcu_read_unlock_sched()
   preempt_disable() / preempt_enable()
   local_irq_save() / local_irq_restore()
   hardirq enter / hardirq exit
   NMI enter / NMI exit
   rcu_dereference_sched()

These three flavors are used as follows:

a. RCU applied to normal data structures.

b. RCU applied to networking data structures that may be subjected
   to remote denial-of-service attacks.

c. RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon; a brief sketch of
flavor (b) follows.
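
As an illustration of flavor (b), a softirq-context reader traversing a
networking structure might look as follows.  This is a sketch: gp and
process_item() are assumed names rather than kernel code::

    rcu_read_lock_bh();             /* also disables softirq processing */
    p = rcu_dereference_bh(gp);
    if (p)
            process_item(p);        /* hypothetical helper */
    rcu_read_unlock_bh();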

.. _3_whatisRCU:

3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
-----------------------------------------------

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in :ref:`listRCU.rst <list_rcu_doc>`,
:ref:`arrayRCU.rst <array_rcu_doc>`, and :ref:`NMI-RCU.rst <NMI_rcu_doc>`.
::

    struct foo {
        int a;
        char b;
        long c;
    };
    DEFINE_SPINLOCK(foo_mutex);

    struct foo __rcu *gbl_foo;

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses synchronize_rcu() to ensure that any readers that might
     * have references to the old structure complete before freeing
     * the old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        synchronize_rcu();
        kfree(old_fp);
    }

    /*
     * Return the value of field "a" of the current gbl_foo
     * structure.  Use rcu_read_lock() and rcu_read_unlock()
     * to ensure that the structure does not get deleted out
     * from under us, and use rcu_dereference() to ensure that
     * we see the initialized version of the structure (important
     * for DEC Alpha and for people reading the code).
     */
    int foo_get_a(void)
    {
        int retval;

        rcu_read_lock();
        retval = rcu_dereference(gbl_foo)->a;
        rcu_read_unlock();
        return retval;
    }

So, to sum up:

- Use rcu_read_lock() and rcu_read_unlock() to guard RCU
  read-side critical sections.

- Within an RCU read-side critical section, use rcu_dereference()
  to dereference RCU-protected pointers.

- Use some solid scheme (such as locks or semaphores) to
  keep concurrent updates from interfering with each other.

- Use rcu_assign_pointer() to update an RCU-protected pointer.
  This primitive protects concurrent readers from the updater,
  **not** concurrent updates from each other!  You therefore still
  need to use locking (or something similar) to keep concurrent
  rcu_assign_pointer() primitives from interfering with each other.

- Use synchronize_rcu() **after** removing a data element from an
  RCU-protected data structure, but **before** reclaiming/freeing
  the data element, in order to wait for the completion of all
  RCU read-side critical sections that might be referencing that
  data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in :ref:`listRCU.rst
<list_rcu_doc>`, :ref:`arrayRCU.rst <array_rcu_doc>`, and :ref:`NMI-RCU.rst
<NMI_rcu_doc>`.

.. _4_whatisRCU:

4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?
--------------------------------------------

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

    void call_rcu(struct rcu_head *head,
                  void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows::

    struct foo {
        int a;
        char b;
        long c;
        struct rcu_head rcu;
    };

The foo_update_a() function might then be written as follows::

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses call_rcu() to ensure that any readers that might have
     * references to the old structure complete before freeing the
     * old structure.
     */
    void foo_update_a(int new_a)
    {
        struct foo *new_fp;
        struct foo *old_fp;

        new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
        spin_lock(&foo_mutex);
        old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
        *new_fp = *old_fp;
        new_fp->a = new_a;
        rcu_assign_pointer(gbl_foo, new_fp);
        spin_unlock(&foo_mutex);
        call_rcu(&old_fp->rcu, foo_reclaim);
    }

The foo_reclaim() function might appear as follows::

    void foo_reclaim(struct rcu_head *rp)
    {
        struct foo *fp = container_of(rp, struct foo, rcu);

        foo_cleanup(fp->a);

        kfree(fp);
    }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
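
A simplified rendition is shown below; the kernel's actual version adds
type checking, but the core idea is simply to subtract the member's
offset within the struct from the member's address::

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))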

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

- Use call_rcu() **after** removing a data element from an
  RCU-protected data structure in order to register a callback
  function that will be invoked after the completion of all RCU
  read-side critical sections that might be referencing that
  data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback::

    kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.

.. _5_whatisRCU:

5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
------------------------------------------------

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcu/update.c for a
production-quality implementation, and see:

    http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A. "TOY" IMPLEMENTATION #1: LOCKING
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple::

    static DEFINE_RWLOCK(rcu_gp_mutex);

    void rcu_read_lock(void)
    {
        read_lock(&rcu_gp_mutex);
    }

    void rcu_read_unlock(void)
    {
        read_unlock(&rcu_gp_mutex);
    }

    void synchronize_rcu(void)
    {
        write_lock(&rcu_gp_mutex);
        smp_mb__after_spinlock();
        write_unlock(&rcu_gp_mutex);
    }

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]::

    #define rcu_assign_pointer(p, v) \
    ({ \
        smp_store_release(&(p), (v)); \
    })

    #define rcu_dereference(p) \
    ({ \
        typeof(p) _________p1 = READ_ONCE(p); \
        (_________p1); \
    })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then releases it.  This means
that once synchronize_rcu() exits, all RCU read-side critical sections
that were in progress before synchronize_rcu() was called are guaranteed
to have completed -- there is no way that synchronize_rcu() would have
been able to write-acquire the lock otherwise.  The smp_mb__after_spinlock()
promotes synchronize_rcu() to a full memory barrier in compliance with
the "Memory-Barrier Guarantees" listed in:

    Documentation/RCU/Design/Requirements/Requirements.rst

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

.. _quiz_1:

Quick Quiz #1:
    Why is this argument naive?  How could a deadlock
    occur when using this algorithm in a real-world Linux
    kernel?  How could this deadlock be avoided?

:ref:`Answers to Quick Quiz <8_whatisRCU>`

5B. "TOY" EXAMPLE #2: CLASSIC RCU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.
::

    void rcu_read_lock(void) { }

    void rcu_read_unlock(void) { }

    void synchronize_rcu(void)
    {
        int cpu;

        for_each_possible_cpu(cpu)
            run_on(cpu);
    }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant **toy**!
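
For concreteness, run_on() might be sketched in terms of the userspace
sched_setaffinity() interface roughly as follows -- an illustration
only, not kernel code::

    void run_on(int cpu)
    {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            sched_setaffinity(0, sizeof(mask), &mask); /* migrate self to cpu */
    }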

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once **all** CPUs have executed a context switch, then **all** preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

.. _quiz_2:

Quick Quiz #2:
    Give an example where Classic RCU's read-side
    overhead is **negative**.

:ref:`Answers to Quick Quiz <8_whatisRCU>`

.. _quiz_3:

Quick Quiz #3:
    If it is illegal to block in an RCU read-side
    critical section, what the heck do you do in
    PREEMPT_RT, where normal spinlocks can block???

:ref:`Answers to Quick Quiz <8_whatisRCU>`

.. _6_whatisRCU:

6. ANALOGY WITH READER-WRITER LOCKING
--------------------------------------

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.
::

    @@ -5,5 +5,5 @@ struct el {
         int data;
         /* Other data fields */
     };
    -rwlock_t listmutex;
    +spinlock_t listmutex;
     struct el head;

    @@ -13,15 +14,15 @@
        struct list_head *lp;
        struct el *p;

    -   read_lock(&listmutex);
    -   list_for_each_entry(p, head, lp) {
    +   rcu_read_lock();
    +   list_for_each_entry_rcu(p, head, lp) {
            if (p->key == key) {
                *result = p->data;
    -           read_unlock(&listmutex);
    +           rcu_read_unlock();
                return 1;
            }
        }
    -   read_unlock(&listmutex);
    +   rcu_read_unlock();
        return 0;
     }

    @@ -29,15 +30,16 @@
     {
        struct el *p;

    -   write_lock(&listmutex);
    +   spin_lock(&listmutex);
        list_for_each_entry(p, head, lp) {
            if (p->key == key) {
    -           list_del(&p->list);
    -           write_unlock(&listmutex);
    +           list_del_rcu(&p->list);
    +           spin_unlock(&listmutex);
    +           synchronize_rcu();
                kfree(p);
                return 1;
            }
        }
    -   write_unlock(&listmutex);
    +   spin_unlock(&listmutex);
        return 0;
     }

Or, for those who prefer a side-by-side listing::

 1 struct el {                           1 struct el {
 2   struct list_head list;              2   struct list_head list;
 3   long key;                           3   long key;
 4   spinlock_t mutex;                   4   spinlock_t mutex;
 5   int data;                           5   int data;
 6   /* Other data fields */             6   /* Other data fields */
 7 };                                    7 };
 8 rwlock_t listmutex;                   8 spinlock_t listmutex;
 9 struct el head;                       9 struct el head;

::

  1 int search(long key, int *result)     1 int search(long key, int *result)
  2 {                                     2 {
  3   struct list_head *lp;               3   struct list_head *lp;
  4   struct el *p;                       4   struct el *p;
  5                                       5
  6   read_lock(&listmutex);              6   rcu_read_lock();
  7   list_for_each_entry(p, head, lp) {  7   list_for_each_entry_rcu(p, head, lp) {
  8     if (p->key == key) {              8     if (p->key == key) {
  9       *result = p->data;              9       *result = p->data;
 10       read_unlock(&listmutex);       10       rcu_read_unlock();
 11       return 1;                      11       return 1;
 12     }                                12     }
 13   }                                  13   }
 14   read_unlock(&listmutex);           14   rcu_read_unlock();
 15   return 0;                          15   return 0;
 16 }                                    16 }

::

  1 int delete(long key)                  1 int delete(long key)
  2 {                                     2 {
  3   struct el *p;                       3   struct el *p;
  4                                       4
  5   write_lock(&listmutex);             5   spin_lock(&listmutex);
  6   list_for_each_entry(p, head, lp) {  6   list_for_each_entry(p, head, lp) {
  7     if (p->key == key) {              7     if (p->key == key) {
  8       list_del(&p->list);             8       list_del_rcu(&p->list);
  9       write_unlock(&listmutex);       9       spin_unlock(&listmutex);
                                         10       synchronize_rcu();
 10       kfree(p);                      11       kfree(p);
 11       return 1;                      12       return 1;
 12     }                                13     }
 13   }                                  14   }
 14   write_unlock(&listmutex);          15   spin_unlock(&listmutex);
 15   return 0;                          16   return 0;
 16 }                                    17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu(); an example appears below.
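
For example, a non-blocking version of the RCU delete() might use
kfree_rcu() as follows.  This sketch follows the listing above and
assumes that struct el gains a struct rcu_head field named "rcu"::

    int delete(long key)
    {
            struct el *p;

            spin_lock(&listmutex);
            list_for_each_entry(p, head, lp) {
                    if (p->key == key) {
                            list_del_rcu(&p->list);
                            spin_unlock(&listmutex);
                            kfree_rcu(p, rcu);  /* frees p after a grace period */
                            return 1;
                    }
            }
            spin_unlock(&listmutex);
            return 0;
    }
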
Phong Tran | 5e1bc93 | 2019-11-06 20:09:50 +0700 | [diff] [blame] | 874 | .. _7_whatisRCU: |
Paul E. McKenney | dd81eca | 2005-09-10 00:26:24 -0700 | [diff] [blame] | 875 | |
| 876 | 7. FULL LIST OF RCU APIs |
Phong Tran | 5e1bc93 | 2019-11-06 20:09:50 +0700 | [diff] [blame] | 877 | ------------------------- |
Paul E. McKenney | dd81eca | 2005-09-10 00:26:24 -0700 | [diff] [blame] | 878 | |
| 879 | The RCU APIs are documented in docbook-format header comments in the |
| 880 | Linux-kernel source code, but it helps to have a full list of the |
| 881 | APIs, since there does not appear to be a way to categorize them |
| 882 | in docbook. Here is the list, by category. |
| 883 | |
RCU list traversal::

        list_entry_rcu
        list_entry_lockless
        list_first_entry_rcu
        list_next_rcu
        list_for_each_entry_rcu
        list_for_each_entry_continue_rcu
        list_for_each_entry_from_rcu
        list_first_or_null_rcu
        list_next_or_null_rcu
        hlist_first_rcu
        hlist_next_rcu
        hlist_pprev_rcu
        hlist_for_each_entry_rcu
        hlist_for_each_entry_rcu_bh
        hlist_for_each_entry_from_rcu
        hlist_for_each_entry_continue_rcu
        hlist_for_each_entry_continue_rcu_bh
        hlist_nulls_first_rcu
        hlist_nulls_for_each_entry_rcu
        hlist_bl_first_rcu
        hlist_bl_for_each_entry_rcu

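These traversal primitives are used much like their non-RCU
counterparts, but within an RCU read-side critical section. For
example, here is a reader for the list manipulated by the delete()
variants above, a sketch reusing the illustrative struct el::

        int search(long key)
        {
                struct el *p;

                rcu_read_lock();
                list_for_each_entry_rcu(p, head, list) {
                        if (p->key == key) {
                                rcu_read_unlock();
                                return 1;
                        }
                }
                rcu_read_unlock();
                return 0;
        }
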
RCU pointer/list update::

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_add_behind_rcu
        hlist_add_before_rcu
        hlist_add_head_rcu
        hlist_add_tail_rcu
        hlist_del_rcu
        hlist_del_init_rcu
        hlist_replace_rcu
        list_splice_init_rcu
        list_splice_tail_init_rcu
        hlist_nulls_del_init_rcu
        hlist_nulls_del_rcu
        hlist_nulls_add_head_rcu
        hlist_bl_add_head_rcu
        hlist_bl_del_init_rcu
        hlist_bl_del_rcu
        hlist_bl_set_first_rcu

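These update-side primitives supply the ordering (for example, via
rcu_assign_pointer()) that allows readers to traverse the list
concurrently with the update. A minimal insertion sketch, again using
the illustrative struct el and listmutex::

        int insert(long key)
        {
                struct el *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (!p)
                        return -ENOMEM;
                p->key = key;
                spin_lock(&listmutex);          /* Serializes updaters only... */
                list_add_rcu(&p->list, head);   /* ...readers need no lock. */
                spin_unlock(&listmutex);
                return 0;
        }
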
RCU::

        Critical sections          Grace period               Barrier

        rcu_read_lock              synchronize_net            rcu_barrier
        rcu_read_unlock            synchronize_rcu
        rcu_dereference            synchronize_rcu_expedited
        rcu_read_lock_held         call_rcu
        rcu_dereference_check      kfree_rcu
        rcu_dereference_protected

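The members of this family are typically used in the publish/subscribe
pattern described earlier: updaters publish with rcu_assign_pointer()
and then wait with synchronize_rcu() (or defer with call_rcu() or
kfree_rcu()), while readers subscribe with rcu_dereference() inside
rcu_read_lock()/rcu_read_unlock(). A compressed sketch, where gp,
gp_lock, and do_something() are illustrative and struct foo is the one
from the earlier sketch::

        struct foo __rcu *gp;
        DEFINE_SPINLOCK(gp_lock);       /* Serializes updaters. */
        struct foo *p, *old, *new;      /* new points to an updated element. */

        /* Reader. */
        rcu_read_lock();
        p = rcu_dereference(gp);
        if (p)
                do_something(p->a);
        rcu_read_unlock();

        /* Updater. */
        spin_lock(&gp_lock);
        old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
        rcu_assign_pointer(gp, new);
        spin_unlock(&gp_lock);
        synchronize_rcu();              /* Wait for readers still using old. */
        kfree(old);
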
bh::

        Critical sections              Grace period               Barrier

        rcu_read_lock_bh               call_rcu                   rcu_barrier
        rcu_read_unlock_bh             synchronize_rcu
        [local_bh_disable]             synchronize_rcu_expedited
        [and friends]
        rcu_dereference_bh
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held

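The _bh family treats regions with bottom halves (softirqs) disabled
as read-side critical sections. As item d. of the checklist below
notes, disabling softirq across readers, for example with
rcu_read_lock_bh(), also helps grace periods complete when softirq
processing monopolizes one or more CPUs. A reader sketch, with gp and
do_something() as in the previous sketch::

        rcu_read_lock_bh();             /* Also disables softirq locally. */
        p = rcu_dereference_bh(gp);
        if (p)
                do_something(p->a);
        rcu_read_unlock_bh();
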
sched::

        Critical sections                 Grace period               Barrier

        rcu_read_lock_sched               call_rcu                   rcu_barrier
        rcu_read_unlock_sched             synchronize_rcu
        [preempt_disable]                 synchronize_rcu_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
        rcu_dereference_sched
        rcu_dereference_sched_check
        rcu_dereference_sched_protected
        rcu_read_lock_sched_held

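The _sched family treats preemption-disabled regions, hardirq handlers,
and NMI handlers as read-side critical sections (see item c. of the
checklist below). A reader sketch, again with gp and do_something()
as before::

        rcu_read_lock_sched();          /* Also disables preemption. */
        p = rcu_dereference_sched(gp);
        if (p)
                do_something(p->a);
        rcu_read_unlock_sched();
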
SRCU::

        Critical sections         Grace period                Barrier

        srcu_read_lock            call_srcu                   srcu_barrier
        srcu_read_unlock          synchronize_srcu
        srcu_dereference          synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held

SRCU: Initialization/cleanup::

        DEFINE_SRCU
        DEFINE_STATIC_SRCU
        init_srcu_struct
        cleanup_srcu_struct

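SRCU readers are permitted to block; in exchange, each srcu_struct
defines its own domain, and synchronize_srcu() waits only for readers
in that domain. A sketch combining initialization, a reader, and an
updater; my_srcu, gp, idx, old, new, and do_something_that_sleeps()
are illustrative::

        DEFINE_STATIC_SRCU(my_srcu);

        /* Reader: unlike vanilla RCU, may block in the critical section. */
        idx = srcu_read_lock(&my_srcu);
        p = srcu_dereference(gp, &my_srcu);
        if (p)
                do_something_that_sleeps(p);
        srcu_read_unlock(&my_srcu, idx);

        /* Updater: waits only for my_srcu readers. */
        rcu_assign_pointer(gp, new);
        synchronize_srcu(&my_srcu);
        kfree(old);
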
All: lockdep-checked RCU-protected pointer access::

        rcu_access_pointer
        rcu_dereference_raw
        RCU_LOCKDEP_WARN
        rcu_sleep_check
        RCU_NONIDLE

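These round out the lockdep-checked pointer-access machinery. For
example, rcu_access_pointer() fetches the pointer's value without
dereferencing it, and so may be used without rcu_read_lock() when only
a NULL check is needed (a sketch, with gp as before and an
illustrative error return)::

        /* Legal outside a read-side critical section: only the pointer
         * value is examined, the pointed-to object is never touched. */
        if (!rcu_access_pointer(gp))
                return -ENOENT;
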
See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use? The following
list can be helpful:

a.      Will readers need to block? If so, you need SRCU.

b.      What about the -rt patchset? If readers would need to block
        in a non-rt kernel, you need SRCU. If readers would block
        in a -rt kernel, but not in a non-rt kernel, SRCU is not
        necessary. (The -rt patchset turns spinlocks into sleeplocks,
        hence this distinction.)

c.      Do you need to treat NMI handlers, hardirq handlers,
        and code segments with preemption disabled (whether
        via preempt_disable(), local_irq_save(), local_bh_disable(),
        or some other mechanism) as if they were explicit RCU readers?
        If so, RCU-sched is the only choice that will work for you.

d.      Do you need RCU grace periods to complete even in the face
        of softirq monopolization of one or more of the CPUs? For
        example, is your code subject to network-based denial-of-service
        attacks? If so, you should disable softirq across your readers,
        for example, by using rcu_read_lock_bh().

e.      Is your workload too update-intensive for normal use of
        RCU, but inappropriate for other synchronization mechanisms?
        If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
        named SLAB_DESTROY_BY_RCU). But please be careful!

f.      Do you need read-side critical sections that are respected
        even though they are in the middle of the idle loop, during
        user-mode execution, or on an offlined CPU? If so, SRCU is the
        only choice that will work for you.

g.      Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.

.. _8_whatisRCU:

8. ANSWERS TO QUICK QUIZZES
----------------------------

Quick Quiz #1:
        Why is this argument naive? How could a deadlock
        occur when using this algorithm in a real-world Linux
        kernel? [Referring to the lock-based "toy" RCU
        algorithm.]

Answer:
        Consider the following sequence of events:

        1.  CPU 0 acquires some unrelated lock, call it
            "problematic_lock", disabling irq via
            spin_lock_irqsave().

        2.  CPU 1 enters synchronize_rcu(), write-acquiring
            rcu_gp_mutex.

        3.  CPU 0 enters rcu_read_lock(), but must wait
            because CPU 1 holds rcu_gp_mutex.

        4.  CPU 1 is interrupted, and the irq handler
            attempts to acquire problematic_lock.

        The system is now deadlocked.

        One way to avoid this deadlock is to use an approach like
        that of CONFIG_PREEMPT_RT, where all normal spinlocks
        become blocking locks, and all irq handlers execute in
        the context of special tasks. In this case, in step 4
        above, the irq handler would block, allowing CPU 1 to
        release rcu_gp_mutex, avoiding the deadlock.

        Even in the absence of deadlock, this RCU implementation
        allows latency to "bleed" from readers to other
        readers through synchronize_rcu(). To see this,
        consider task A in an RCU read-side critical section
        (thus read-holding rcu_gp_mutex), task B blocked
        attempting to write-acquire rcu_gp_mutex, and
        task C blocked in rcu_read_lock() attempting to
        read-acquire rcu_gp_mutex. Task A's RCU read-side
        latency is holding up task C, albeit indirectly via
        task B.

        Realtime RCU implementations therefore use a counter-based
        approach where tasks in RCU read-side critical sections
        cannot be blocked by tasks executing synchronize_rcu().

:ref:`Back to Quick Quiz #1 <quiz_1>`

Quick Quiz #2:
        Give an example where Classic RCU's read-side
        overhead is **negative**.

Answer:
        Imagine a single-CPU system with a non-CONFIG_PREEMPT
        kernel where a routing table is used by process-context
        code, but can be updated by irq-context code (for example,
        by an "ICMP REDIRECT" packet). The usual way of handling
        this would be to have the process-context code disable
        interrupts while searching the routing table. Use of
        RCU allows such interrupt-disabling to be dispensed with.
        Thus, without RCU, you pay the cost of disabling interrupts,
        and with RCU you don't.

        One can argue that the overhead of RCU in this
        case is negative with respect to the single-CPU
        interrupt-disabling approach. Others might argue that
        the overhead of RCU is merely zero, and that replacing
        the positive overhead of the interrupt-disabling scheme
        with the zero-overhead RCU scheme does not constitute
        negative overhead.

        In real life, of course, things are more complex. But
        even the theoretical possibility of negative overhead for
        a synchronization primitive is a bit unexpected. ;-)

:ref:`Back to Quick Quiz #2 <quiz_2>`

Quick Quiz #3:
        If it is illegal to block in an RCU read-side
        critical section, what the heck do you do in
        PREEMPT_RT, where normal spinlocks can block???

Answer:
        Just as PREEMPT_RT permits preemption of spinlock
        critical sections, it permits preemption of RCU
        read-side critical sections. It also permits
        spinlocks blocking while in RCU read-side critical
        sections.

        Why the apparent inconsistency? Because it is
        possible to use priority boosting to keep the RCU
        grace periods short if need be (for example, if running
        short of memory). In contrast, if blocking waiting
        for (say) network reception, there is no way to know
        what should be boosted. Especially given that the
        process we need to boost might well be a human being
        who just went out for a pizza or something. And although
        a computer-operated cattle prod might arouse serious
        interest, it might also provoke serious objections.
        Besides, how does the computer know what pizza parlor
        the human being went to???

:ref:`Back to Quick Quiz #3 <quiz_3>`

ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.