0001 ======================================================
0002 A Tour Through TREE_RCU's Grace-Period Memory Ordering
0003 ======================================================
0004
0005 August 8, 2017
0006
0007 This article was contributed by Paul E. McKenney
0008
0009 Introduction
0010 ============
0011
0012 This document gives a rough visual overview of how Tree RCU's
0013 grace-period memory ordering guarantee is provided.
0014
0015 What Is Tree RCU's Grace Period Memory Ordering Guarantee?
0016 ==========================================================
0017
0018 RCU grace periods provide extremely strong memory-ordering guarantees
0019 for non-idle non-offline code.
0020 Any code that happens after the end of a given RCU grace period is guaranteed
0021 to see the effects of all accesses prior to the beginning of that grace
0022 period that are within RCU read-side critical sections.
Similarly, any code that happens before the beginning of a given RCU grace
period is guaranteed not to see the effects of any accesses following the
end of that grace period that are within RCU read-side critical sections.
0026
0027 Note well that RCU-sched read-side critical sections include any region
0028 of code for which preemption is disabled.
0029 Given that each individual machine instruction can be thought of as
0030 an extremely small region of preemption-disabled code, one can think of
0031 ``synchronize_rcu()`` as ``smp_mb()`` on steroids.
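
For example, in the following hedged sketch, the preemption-disabled
region acts as an RCU-sched read-side critical section. The
``struct foo`` type, the ``gp`` pointer, and the ``get_data_sched()``
accessor are all hypothetical::

   struct foo {
           int data;
   };
   static struct foo __rcu *gp;  /* RCU-protected pointer (hypothetical). */

   int get_data_sched(void)
   {
           struct foo *p;
           int ret = -1;

           preempt_disable();  /* Begin RCU-sched read-side critical section. */
           p = rcu_dereference_sched(gp);
           if (p)
                   ret = p->data;
           preempt_enable();   /* End RCU-sched read-side critical section. */
           return ret;
   }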
0032
0033 RCU updaters use this guarantee by splitting their updates into
0034 two phases, one of which is executed before the grace period and
0035 the other of which is executed after the grace period.
0036 In the most common use case, phase one removes an element from
0037 a linked RCU-protected data structure, and phase two frees that element.
0038 For this to work, any readers that have witnessed state prior to the
0039 phase-one update (in the common case, removal) must not witness state
0040 following the phase-two update (in the common case, freeing).
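
For example, here is a minimal sketch of this two-phase pattern using
``synchronize_rcu()``, with a hypothetical ``struct foo`` list element
protected by a hypothetical ``foo_lock``::

   struct foo {
           struct list_head list;
           int data;
   };
   static DEFINE_SPINLOCK(foo_lock);  /* Serializes updaters. */

   void remove_foo(struct foo *fp)
   {
           spin_lock(&foo_lock);
           list_del_rcu(&fp->list);  /* Phase one: removal. */
           spin_unlock(&foo_lock);
           synchronize_rcu();        /* Wait for pre-existing readers. */
           kfree(fp);                /* Phase two: reclamation. */
   }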
0041
0042 The RCU implementation provides this guarantee using a network
0043 of lock-based critical sections, memory barriers, and per-CPU
0044 processing, as is described in the following sections.
0045
0046 Tree RCU Grace Period Memory Ordering Building Blocks
0047 =====================================================
0048
0049 The workhorse for RCU's grace-period memory ordering is the
0050 critical section for the ``rcu_node`` structure's
0051 ``->lock``. These critical sections use helper functions for lock
0052 acquisition, including ``raw_spin_lock_rcu_node()``,
0053 ``raw_spin_lock_irq_rcu_node()``, and ``raw_spin_lock_irqsave_rcu_node()``.
0054 Their lock-release counterparts are ``raw_spin_unlock_rcu_node()``,
0055 ``raw_spin_unlock_irq_rcu_node()``, and
0056 ``raw_spin_unlock_irqrestore_rcu_node()``, respectively.
0057 For completeness, a ``raw_spin_trylock_rcu_node()`` is also provided.
0058 The key point is that the lock-acquisition functions, including
0059 ``raw_spin_trylock_rcu_node()``, all invoke ``smp_mb__after_unlock_lock()``
0060 immediately after successful acquisition of the lock.
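
In rough outline, and subject to change across kernel versions, the
acquisition helpers look something like the following sketch (mainline
wraps the ``->lock`` field access in ``ACCESS_PRIVATE()``, omitted here
for clarity)::

   #define raw_spin_lock_rcu_node(p)                                  \
   do {                                                               \
           raw_spin_lock(&(p)->lock);                                 \
           smp_mb__after_unlock_lock(); /* Strengthen the ordering. */ \
   } while (0)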
0061
0062 Therefore, for any given ``rcu_node`` structure, any access
0063 happening before one of the above lock-release functions will be seen
0064 by all CPUs as happening before any access happening after a later
0065 one of the above lock-acquisition functions.
0066 Furthermore, any access happening before one of the
above lock-release functions on any given CPU will be seen by all
0068 CPUs as happening before any access happening after a later one
0069 of the above lock-acquisition functions executing on that same CPU,
0070 even if the lock-release and lock-acquisition functions are operating
0071 on different ``rcu_node`` structures.
0072 Tree RCU uses these two ordering guarantees to form an ordering
0073 network among all CPUs that were in any way involved in the grace
0074 period, including any CPUs that came online or went offline during
0075 the grace period in question.
0076
0077 The following litmus test exhibits the ordering effects of these
0078 lock-acquisition and lock-release functions::
0079
   int x, y, z;

   void task0(void)
   {
           raw_spin_lock_rcu_node(rnp);
           WRITE_ONCE(x, 1);
           r1 = READ_ONCE(y);
           raw_spin_unlock_rcu_node(rnp);
   }

   void task1(void)
   {
           raw_spin_lock_rcu_node(rnp);
           WRITE_ONCE(y, 1);
           r2 = READ_ONCE(z);
           raw_spin_unlock_rcu_node(rnp);
   }

   void task2(void)
   {
           WRITE_ONCE(z, 1);
           smp_mb();
           r3 = READ_ONCE(x);
   }

   WARN_ON(r1 == 0 && r2 == 0 && r3 == 0);
0106
0107 The ``WARN_ON()`` is evaluated at "the end of time",
0108 after all changes have propagated throughout the system.
0109 Without the ``smp_mb__after_unlock_lock()`` provided by the
0110 acquisition functions, this ``WARN_ON()`` could trigger, for example
0111 on PowerPC.
0112 The ``smp_mb__after_unlock_lock()`` invocations prevent this
0113 ``WARN_ON()`` from triggering.
0114
0115 +-----------------------------------------------------------------------+
0116 | **Quick Quiz**: |
0117 +-----------------------------------------------------------------------+
0118 | But the chain of rcu_node-structure lock acquisitions guarantees |
0119 | that new readers will see all of the updater's pre-grace-period |
0120 | accesses and also guarantees that the updater's post-grace-period |
0121 | accesses will see all of the old reader's accesses. So why do we |
0122 | need all of those calls to smp_mb__after_unlock_lock()? |
0123 +-----------------------------------------------------------------------+
0124 | **Answer**: |
0125 +-----------------------------------------------------------------------+
0126 | Because we must provide ordering for RCU's polling grace-period |
0127 | primitives, for example, get_state_synchronize_rcu() and |
0128 | poll_state_synchronize_rcu(). Consider this code:: |
0129 | |
0130 | CPU 0 CPU 1 |
0131 | ---- ---- |
0132 | WRITE_ONCE(X, 1) WRITE_ONCE(Y, 1) |
0133 | g = get_state_synchronize_rcu() smp_mb() |
0134 | while (!poll_state_synchronize_rcu(g)) r1 = READ_ONCE(X) |
0135 | continue; |
0136 | r0 = READ_ONCE(Y) |
0137 | |
0138 | RCU guarantees that the outcome r0 == 0 && r1 == 0 will not |
0139 | happen, even if CPU 1 is in an RCU extended quiescent state |
0140 | (idle or offline) and thus won't interact directly with the RCU |
0141 | core processing at all. |
0142 +-----------------------------------------------------------------------+
0143
0144 This approach must be extended to include idle CPUs, which need
0145 RCU's grace-period memory ordering guarantee to extend to any
0146 RCU read-side critical sections preceding and following the current
0147 idle sojourn.
0148 This case is handled by calls to the strongly ordered
0149 ``atomic_add_return()`` read-modify-write atomic operation that
0150 is invoked within ``rcu_dynticks_eqs_enter()`` at idle-entry
0151 time and within ``rcu_dynticks_eqs_exit()`` at idle-exit time.
0152 The grace-period kthread invokes ``rcu_dynticks_snap()`` and
0153 ``rcu_dynticks_in_eqs_since()`` (both of which invoke
0154 an ``atomic_add_return()`` of zero) to detect idle CPUs.
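
A hedged sketch of this counter scheme follows; the variable name, the
parity encoding, and the function names are stand-ins for the per-CPU
``->dynticks`` machinery, whose details vary across kernel versions::

   static DEFINE_PER_CPU(atomic_t, eqs_count);  /* Hypothetical stand-in. */

   static void eqs_enter(void)  /* In the style of rcu_dynticks_eqs_enter(). */
   {
           /* Fully ordered increment; assume even now means idle. */
           int seq = atomic_add_return(1, this_cpu_ptr(&eqs_count));

           WARN_ON_ONCE(seq & 0x1);
   }

   static int eqs_snap(int cpu)  /* In the style of rcu_dynticks_snap(). */
   {
           /* "Adding" zero reads the counter with full ordering. */
           return atomic_add_return(0, per_cpu_ptr(&eqs_count, cpu));
   }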
0155
0156 +-----------------------------------------------------------------------+
0157 | **Quick Quiz**: |
0158 +-----------------------------------------------------------------------+
0159 | But what about CPUs that remain offline for the entire grace period? |
0160 +-----------------------------------------------------------------------+
0161 | **Answer**: |
0162 +-----------------------------------------------------------------------+
0163 | Such CPUs will be offline at the beginning of the grace period, so |
0164 | the grace period won't expect quiescent states from them. Races |
0165 | between grace-period start and CPU-hotplug operations are mediated |
0166 | by the CPU's leaf ``rcu_node`` structure's ``->lock`` as described |
0167 | above. |
0168 +-----------------------------------------------------------------------+
0169
0170 The approach must be extended to handle one final case, that of waking a
0171 task blocked in ``synchronize_rcu()``. This task might be affinitied to
0172 a CPU that is not yet aware that the grace period has ended, and thus
0173 might not yet be subject to the grace period's memory ordering.
0174 Therefore, there is an ``smp_mb()`` after the return from
0175 ``wait_for_completion()`` in the ``synchronize_rcu()`` code path.
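
In sketch form, the tail end of this code path does something like the
following (variable names approximate those in ``wait_rcu_gp()`` and
vary by version)::

   wait_for_completion(&rcu.completion);  /* Grace period has ended... */
   smp_mb();  /* ...but this CPU might not yet be ordered; fix that. */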
0176
0177 +-----------------------------------------------------------------------+
0178 | **Quick Quiz**: |
0179 +-----------------------------------------------------------------------+
0180 | What? Where??? I don't see any ``smp_mb()`` after the return from |
0181 | ``wait_for_completion()``!!! |
0182 +-----------------------------------------------------------------------+
0183 | **Answer**: |
0184 +-----------------------------------------------------------------------+
0185 | That would be because I spotted the need for that ``smp_mb()`` during |
0186 | the creation of this documentation, and it is therefore unlikely to |
0187 | hit mainline before v4.14. Kudos to Lance Roy, Will Deacon, Peter |
0188 | Zijlstra, and Jonathan Cameron for asking questions that sensitized |
0189 | me to the rather elaborate sequence of events that demonstrate the |
0190 | need for this memory barrier. |
0191 +-----------------------------------------------------------------------+
0192
Tree RCU's grace-period memory-ordering guarantees rely most heavily on
0194 the ``rcu_node`` structure's ``->lock`` field, so much so that it is
0195 necessary to abbreviate this pattern in the diagrams in the next
0196 section. For example, consider the ``rcu_prepare_for_idle()`` function
0197 shown below, which is one of several functions that enforce ordering of
0198 newly arrived RCU callbacks against future grace periods:
0199
0200 ::
0201
0202 1 static void rcu_prepare_for_idle(void)
0203 2 {
0204 3 bool needwake;
0205 4 struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
0206 5 struct rcu_node *rnp;
0207 6 int tne;
0208 7
0209 8 lockdep_assert_irqs_disabled();
0210 9 if (rcu_rdp_is_offloaded(rdp))
0211 10 return;
0212 11
0213 12 /* Handle nohz enablement switches conservatively. */
0214 13 tne = READ_ONCE(tick_nohz_active);
0215 14 if (tne != rdp->tick_nohz_enabled_snap) {
0216 15 if (!rcu_segcblist_empty(&rdp->cblist))
0217 16 invoke_rcu_core(); /* force nohz to see update. */
0218 17 rdp->tick_nohz_enabled_snap = tne;
0219 18 return;
0220 19 }
0221 20 if (!tne)
0222 21 return;
0223 22
0224 23 /*
0225 24 * If we have not yet accelerated this jiffy, accelerate all
0226 25 * callbacks on this CPU.
0227 26 */
0228 27 if (rdp->last_accelerate == jiffies)
0229 28 return;
0230 29 rdp->last_accelerate = jiffies;
0231 30 if (rcu_segcblist_pend_cbs(&rdp->cblist)) {
0232 31 rnp = rdp->mynode;
0233 32 raw_spin_lock_rcu_node(rnp); /* irqs already disabled. */
0234 33 needwake = rcu_accelerate_cbs(rnp, rdp);
0235 34 raw_spin_unlock_rcu_node(rnp); /* irqs remain disabled. */
0236 35 if (needwake)
0237 36 rcu_gp_kthread_wake();
0238 37 }
0239 38 }
0240
0241 But the only part of ``rcu_prepare_for_idle()`` that really matters for
this discussion is lines 32–34. We will therefore abbreviate this
0243 function as follows:
0244
0245 .. kernel-figure:: rcu_node-lock.svg
0246
0247 The box represents the ``rcu_node`` structure's ``->lock`` critical
0248 section, with the double line on top representing the additional
0249 ``smp_mb__after_unlock_lock()``.
0250
0251 Tree RCU Grace Period Memory Ordering Components
0252 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
0253
0254 Tree RCU's grace-period memory-ordering guarantee is provided by a
0255 number of RCU components:
0256
0257 #. `Callback Registry`_
0258 #. `Grace-Period Initialization`_
0259 #. `Self-Reported Quiescent States`_
0260 #. `Dynamic Tick Interface`_
0261 #. `CPU-Hotplug Interface`_
0262 #. `Forcing Quiescent States`_
0263 #. `Grace-Period Cleanup`_
0264 #. `Callback Invocation`_
0265
Each of the following sections looks at the corresponding component in
0267 detail.
0268
0269 Callback Registry
0270 ^^^^^^^^^^^^^^^^^
0271
0272 If RCU's grace-period guarantee is to mean anything at all, any access
0273 that happens before a given invocation of ``call_rcu()`` must also
0274 happen before the corresponding grace period. The implementation of this
0275 portion of RCU's grace period guarantee is shown in the following
0276 figure:
0277
0278 .. kernel-figure:: TreeRCU-callback-registry.svg
0279
0280 Because ``call_rcu()`` normally acts only on CPU-local state, it
0281 provides no ordering guarantees, either for itself or for phase one of
0282 the update (which again will usually be removal of an element from an
0283 RCU-protected data structure). It simply enqueues the ``rcu_head``
0284 structure on a per-CPU list, which cannot become associated with a grace
0285 period until a later call to ``rcu_accelerate_cbs()``, as shown in the
0286 diagram above.
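
Before looking at those code paths, here is a minimal sketch of
asynchronous use of ``call_rcu()``; the ``struct foo`` element and the
``foo_reclaim()`` callback are hypothetical::

   struct foo {
           struct list_head list;
           struct rcu_head rh;
           int data;
   };

   static void foo_reclaim(struct rcu_head *rhp)
   {
           kfree(container_of(rhp, struct foo, rh));  /* Phase two. */
   }

   /* Phase one removes the element; the grace period and phase two
    * then proceed asynchronously. */
   list_del_rcu(&fp->list);
   call_rcu(&fp->rh, foo_reclaim);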
0287
0288 One set of code paths shown on the left invokes ``rcu_accelerate_cbs()``
0289 via ``note_gp_changes()``, either directly from ``call_rcu()`` (if the
0290 current CPU is inundated with queued ``rcu_head`` structures) or more
0291 likely from an ``RCU_SOFTIRQ`` handler. Another code path in the middle
0292 is taken only in kernels built with ``CONFIG_RCU_FAST_NO_HZ=y``, which
0293 invokes ``rcu_accelerate_cbs()`` via ``rcu_prepare_for_idle()``. The
0294 final code path on the right is taken only in kernels built with
0295 ``CONFIG_HOTPLUG_CPU=y``, which invokes ``rcu_accelerate_cbs()`` via
``rcu_advance_cbs()``, ``rcu_migrate_callbacks()``,
0297 ``rcutree_migrate_callbacks()``, and ``takedown_cpu()``, which in turn
0298 is invoked on a surviving CPU after the outgoing CPU has been completely
0299 offlined.
0300
0301 There are a few other code paths within grace-period processing that
0302 opportunistically invoke ``rcu_accelerate_cbs()``. However, either way,
0303 all of the CPU's recently queued ``rcu_head`` structures are associated
with a future grace-period number under the protection of the CPU's leaf
0305 ``rcu_node`` structure's ``->lock``. In all cases, there is full
0306 ordering against any prior critical section for that same ``rcu_node``
0307 structure's ``->lock``, and also full ordering against any of the
0308 current task's or CPU's prior critical sections for any ``rcu_node``
0309 structure's ``->lock``.
0310
0311 The next section will show how this ordering ensures that any accesses
0312 prior to the ``call_rcu()`` (particularly including phase one of the
0313 update) happen before the start of the corresponding grace period.
0314
0315 +-----------------------------------------------------------------------+
0316 | **Quick Quiz**: |
0317 +-----------------------------------------------------------------------+
0318 | But what about ``synchronize_rcu()``? |
0319 +-----------------------------------------------------------------------+
0320 | **Answer**: |
0321 +-----------------------------------------------------------------------+
0322 | The ``synchronize_rcu()`` passes ``call_rcu()`` to ``wait_rcu_gp()``, |
0323 | which invokes it. So either way, it eventually comes down to |
0324 | ``call_rcu()``. |
0325 +-----------------------------------------------------------------------+
0326
0327 Grace-Period Initialization
0328 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
0329
0330 Grace-period initialization is carried out by the grace-period kernel
0331 thread, which makes several passes over the ``rcu_node`` tree within the
0332 ``rcu_gp_init()`` function. This means that showing the full flow of
0333 ordering through the grace-period computation will require duplicating
0334 this tree. If you find this confusing, please note that the state of the
``rcu_node`` tree changes over time, just like Heraclitus's river. However,
0336 to keep the ``rcu_node`` river tractable, the grace-period kernel
0337 thread's traversals are presented in multiple parts, starting in this
0338 section with the various phases of grace-period initialization.
0339
0340 The first ordering-related grace-period initialization action is to
0341 advance the ``rcu_state`` structure's ``->gp_seq`` grace-period-number
0342 counter, as shown below:
0343
0344 .. kernel-figure:: TreeRCU-gp-init-1.svg
0345
0346 The actual increment is carried out using ``smp_store_release()``, which
0347 helps reject false-positive RCU CPU stall detection. Note that only the
0348 root ``rcu_node`` structure is touched.
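
In rough outline, and assuming a bare increment for simplicity
(mainline instead uses the ``rcu_seq_start()`` family of helpers, which
also encode grace-period state in the counter's low-order bits)::

   smp_store_release(&rcu_state.gp_seq, rcu_state.gp_seq + 1);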
0349
0350 The first pass through the ``rcu_node`` tree updates bitmasks based on
0351 CPUs having come online or gone offline since the start of the previous
0352 grace period. In the common case where the number of online CPUs for
0353 this ``rcu_node`` structure has not transitioned to or from zero, this
0354 pass will scan only the leaf ``rcu_node`` structures. However, if the
0355 number of online CPUs for a given leaf ``rcu_node`` structure has
0356 transitioned from zero, ``rcu_init_new_rnp()`` will be invoked for the
0357 first incoming CPU. Similarly, if the number of online CPUs for a given
0358 leaf ``rcu_node`` structure has transitioned to zero,
0359 ``rcu_cleanup_dead_rnp()`` will be invoked for the last outgoing CPU.
0360 The diagram below shows the path of ordering if the leftmost
0361 ``rcu_node`` structure onlines its first CPU and if the next
``rcu_node`` structure has no online CPUs (or, alternatively, if the
0363 leftmost ``rcu_node`` structure offlines its last CPU and if the next
0364 ``rcu_node`` structure has no online CPUs).
0365
0366 .. kernel-figure:: TreeRCU-gp-init-2.svg
0367
0368 The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
0369 breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
0370 to the newly advanced value from the ``rcu_state`` structure, as shown
0371 in the following diagram.
0372
0373 .. kernel-figure:: TreeRCU-gp-init-3.svg
0374
0375 This change will also cause each CPU's next call to
0376 ``__note_gp_changes()`` to notice that a new grace period has started,
0377 as described in the next section. But because the grace-period kthread
0378 started the grace period at the root (with the advancing of the
0379 ``rcu_state`` structure's ``->gp_seq`` field) before setting each leaf
0380 ``rcu_node`` structure's ``->gp_seq`` field, each CPU's observation of
0381 the start of the grace period will happen after the actual start of the
0382 grace period.
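
A hedged sketch of this final pass, in the style of ``rcu_gp_init()``
(the traversal macro's signature varies across versions)::

   rcu_for_each_node_breadth_first(rnp) {
           raw_spin_lock_irq_rcu_node(rnp);  /* Extra smp_mb included. */
           WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
           /* ... plus per-node setup, including __note_gp_changes()
            * if this is the GP kthread's own leaf rcu_node ... */
           raw_spin_unlock_irq_rcu_node(rnp);
   }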
0383
0384 +-----------------------------------------------------------------------+
0385 | **Quick Quiz**: |
0386 +-----------------------------------------------------------------------+
0387 | But what about the CPU that started the grace period? Why wouldn't it |
0388 | see the start of the grace period right when it started that grace |
0389 | period? |
0390 +-----------------------------------------------------------------------+
0391 | **Answer**: |
0392 +-----------------------------------------------------------------------+
| In some deep philosophical and overly anthropomorphized sense, yes,   |
| the CPU starting the grace period is immediately aware of having      |
| done so. However, if we instead assume that RCU is not self-aware,    |
| then even the CPU starting the grace period does not really become    |
| aware of the start of this grace period until its first call to       |
| ``__note_gp_changes()``. On the other hand, this CPU potentially      |
| gets early notification because it invokes ``__note_gp_changes()``    |
| during its last ``rcu_gp_init()`` pass through its leaf ``rcu_node``  |
| structure.                                                             |
0402 +-----------------------------------------------------------------------+
0403
0404 Self-Reported Quiescent States
0405 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0406
0407 When all entities that might block the grace period have reported
0408 quiescent states (or as described in a later section, had quiescent
0409 states reported on their behalf), the grace period can end. Online
0410 non-idle CPUs report their own quiescent states, as shown in the
0411 following diagram:
0412
0413 .. kernel-figure:: TreeRCU-qs.svg
0414
0415 This is for the last CPU to report a quiescent state, which signals the
0416 end of the grace period. Earlier quiescent states would push up the
0417 ``rcu_node`` tree only until they encountered an ``rcu_node`` structure
0418 that is waiting for additional quiescent states. However, ordering is
0419 nevertheless preserved because some later quiescent state will acquire
0420 that ``rcu_node`` structure's ``->lock``.
0421
Any number of events can lead up to a CPU invoking ``note_gp_changes()``
0423 (or alternatively, directly invoking ``__note_gp_changes()``), at which
0424 point that CPU will notice the start of a new grace period while holding
0425 its leaf ``rcu_node`` lock. Therefore, all execution shown in this
0426 diagram happens after the start of the grace period. In addition, this
0427 CPU will consider any RCU read-side critical section that started before
0428 the invocation of ``__note_gp_changes()`` to have started before the
0429 grace period, and thus a critical section that the grace period must
0430 wait on.
0431
0432 +-----------------------------------------------------------------------+
0433 | **Quick Quiz**: |
0434 +-----------------------------------------------------------------------+
| But an RCU read-side critical section might have started after the    |
0436 | beginning of the grace period (the advancing of ``->gp_seq`` from |
0437 | earlier), so why should the grace period wait on such a critical |
0438 | section? |
0439 +-----------------------------------------------------------------------+
0440 | **Answer**: |
0441 +-----------------------------------------------------------------------+
0442 | It is indeed not necessary for the grace period to wait on such a |
0443 | critical section. However, it is permissible to wait on it. And it is |
0444 | furthermore important to wait on it, as this lazy approach is far |
0445 | more scalable than a “big bang” all-at-once grace-period start could |
0446 | possibly be. |
0447 +-----------------------------------------------------------------------+
0448
0449 If the CPU does a context switch, a quiescent state will be noted by
0450 ``rcu_note_context_switch()`` on the left. On the other hand, if the CPU
0451 takes a scheduler-clock interrupt while executing in usermode, a
0452 quiescent state will be noted by ``rcu_sched_clock_irq()`` on the right.
0453 Either way, the passage through a quiescent state will be noted in a
0454 per-CPU variable.
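
A hedged sketch of that per-CPU recording, in the style of ``rcu_qs()``
(tracing and related details omitted)::

   static void rcu_qs(void)
   {
           /* Record this CPU's quiescent state for the current GP. */
           if (__this_cpu_read(rcu_data.cpu_no_qs.b.norm))
                   __this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
   }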
0455
0456 The next time an ``RCU_SOFTIRQ`` handler executes on this CPU (for
0457 example, after the next scheduler-clock interrupt), ``rcu_core()`` will
0458 invoke ``rcu_check_quiescent_state()``, which will notice the recorded
0459 quiescent state, and invoke ``rcu_report_qs_rdp()``. If
0460 ``rcu_report_qs_rdp()`` verifies that the quiescent state really does
apply to the current grace period, it invokes ``rcu_report_qs_rnp()``, which
0462 traverses up the ``rcu_node`` tree as shown at the bottom of the
0463 diagram, clearing bits from each ``rcu_node`` structure's ``->qsmask``
0464 field, and propagating up the tree when the result is zero.
0465
0466 Note that traversal passes upwards out of a given ``rcu_node`` structure
0467 only if the current CPU is reporting the last quiescent state for the
0468 subtree headed by that ``rcu_node`` structure. A key point is that if a
0469 CPU's traversal stops at a given ``rcu_node`` structure, then there will
0470 be a later traversal by another CPU (or perhaps the same one) that
0471 proceeds upwards from that point, and the ``rcu_node`` ``->lock``
0472 guarantees that the first CPU's quiescent state happens before the
0473 remainder of the second CPU's traversal. Applying this line of thought
0474 repeatedly shows that all CPUs' quiescent states happen before the last
0475 CPU traverses through the root ``rcu_node`` structure, the “last CPU”
0476 being the one that clears the last bit in the root ``rcu_node``
0477 structure's ``->qsmask`` field.
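
A hedged sketch of this upward traversal, in the style of
``rcu_report_qs_rnp()``; grace-period checks and wakeups are omitted,
and ``rnp``, ``mask``, and ``flags`` are assumed from context::

   for (;;) {
           rnp->qsmask &= ~mask;  /* Clear our bit in this node. */
           if (rnp->qsmask != 0 || rnp->parent == NULL)
                   break;         /* Still waiting on others, or at root. */
           mask = rnp->grpmask;   /* Our bit in the parent node. */
           raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
           rnp = rnp->parent;
           raw_spin_lock_irqsave_rcu_node(rnp, flags);  /* Orders QSes. */
   }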
0478
0479 Dynamic Tick Interface
0480 ^^^^^^^^^^^^^^^^^^^^^^
0481
0482 Due to energy-efficiency considerations, RCU is forbidden from
0483 disturbing idle CPUs. CPUs are therefore required to notify RCU when
0484 entering or leaving idle state, which they do via fully ordered
0485 value-returning atomic operations on a per-CPU variable. The ordering
0486 effects are as shown below:
0487
0488 .. kernel-figure:: TreeRCU-dyntick.svg
0489
0490 The RCU grace-period kernel thread samples the per-CPU idleness variable
0491 while holding the corresponding CPU's leaf ``rcu_node`` structure's
0492 ``->lock``. This means that any RCU read-side critical sections that
0493 precede the idle period (the oval near the top of the diagram above)
0494 will happen before the end of the current grace period. Similarly, the
0495 beginning of the current grace period will happen before any RCU
0496 read-side critical sections that follow the idle period (the oval near
0497 the bottom of the diagram above).
0498
0499 Plumbing this into the full grace-period execution is described
0500 `below <Forcing Quiescent States_>`__.
0501
0502 CPU-Hotplug Interface
0503 ^^^^^^^^^^^^^^^^^^^^^
0504
0505 RCU is also forbidden from disturbing offline CPUs, which might well be
0506 powered off and removed from the system completely. CPUs are therefore
0507 required to notify RCU of their comings and goings as part of the
0508 corresponding CPU hotplug operations. The ordering effects are shown
0509 below:
0510
0511 .. kernel-figure:: TreeRCU-hotplug.svg
0512
0513 Because CPU hotplug operations are much less frequent than idle
0514 transitions, they are heavier weight, and thus acquire the CPU's leaf
0515 ``rcu_node`` structure's ``->lock`` and update this structure's
0516 ``->qsmaskinitnext``. The RCU grace-period kernel thread samples this
0517 mask to detect CPUs having gone offline since the beginning of this
0518 grace period.
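
A hedged sketch of the CPU-online side of this bookkeeping, in the
style of ``rcu_cpu_starting()`` (the offline side clears the same bit)::

   mask = rdp->grpmask;  /* This CPU's bit in its leaf rcu_node. */
   raw_spin_lock_irqsave_rcu_node(rnp, flags);  /* Orders against GP start. */
   WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);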
0519
0520 Plumbing this into the full grace-period execution is described
0521 `below <Forcing Quiescent States_>`__.
0522
0523 Forcing Quiescent States
0524 ^^^^^^^^^^^^^^^^^^^^^^^^
0525
0526 As noted above, idle and offline CPUs cannot report their own quiescent
0527 states, and therefore the grace-period kernel thread must do the
0528 reporting on their behalf. This process is called “forcing quiescent
0529 states”, it is repeated every few jiffies, and its ordering effects are
0530 shown below:
0531
0532 .. kernel-figure:: TreeRCU-gp-fqs.svg
0533
0534 Each pass of quiescent state forcing is guaranteed to traverse the leaf
0535 ``rcu_node`` structures, and if there are no new quiescent states due to
0536 recently idled and/or offlined CPUs, then only the leaves are traversed.
0537 However, if there is a newly offlined CPU as illustrated on the left or
0538 a newly idled CPU as illustrated on the right, the corresponding
0539 quiescent state will be driven up towards the root. As with
0540 self-reported quiescent states, the upwards driving stops once it
0541 reaches an ``rcu_node`` structure that has quiescent states outstanding
0542 from other CPUs.
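
A hedged sketch of one forcing pass, in the style of ``force_qs_rnp()``,
where ``f()`` stands for the idle/offline detector (for example,
``rcu_implicit_dynticks_qs()``)::

   unsigned long flags;
   int cpu;

   rcu_for_each_leaf_node(rnp) {
           unsigned long mask = 0;

           raw_spin_lock_irqsave_rcu_node(rnp, flags);
           for_each_leaf_node_possible_cpu(rnp, cpu)
                   if (f(per_cpu_ptr(&rcu_data, cpu)))  /* Idle/offline? */
                           mask |= leaf_node_cpu_bit(rnp, cpu);
           if (mask)
                   /* Drives the QSes up the tree; releases rnp->lock. */
                   rcu_report_qs_rnp(mask, rnp, rnp->gp_seq, flags);
           else
                   raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
   }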
0543
0544 +-----------------------------------------------------------------------+
0545 | **Quick Quiz**: |
0546 +-----------------------------------------------------------------------+
0547 | The leftmost drive to root stopped before it reached the root |
0548 | ``rcu_node`` structure, which means that there are still CPUs |
0549 | subordinate to that structure on which the current grace period is |
0550 | waiting. Given that, how is it possible that the rightmost drive to |
0551 | root ended the grace period? |
0552 +-----------------------------------------------------------------------+
0553 | **Answer**: |
0554 +-----------------------------------------------------------------------+
0555 | Good analysis! It is in fact impossible in the absence of bugs in |
0556 | RCU. But this diagram is complex enough as it is, so simplicity |
0557 | overrode accuracy. You can think of it as poetic license, or you can |
0558 | think of it as misdirection that is resolved in the |
0559 | `stitched-together diagram <Putting It All Together_>`__. |
0560 +-----------------------------------------------------------------------+
0561
0562 Grace-Period Cleanup
0563 ^^^^^^^^^^^^^^^^^^^^
0564
0565 Grace-period cleanup first scans the ``rcu_node`` tree breadth-first
0566 advancing all the ``->gp_seq`` fields, then it advances the
0567 ``rcu_state`` structure's ``->gp_seq`` field. The ordering effects are
0568 shown below:
0569
0570 .. kernel-figure:: TreeRCU-gp-cleanup.svg
0571
0572 As indicated by the oval at the bottom of the diagram, once grace-period
0573 cleanup is complete, the next grace period can begin.
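
A hedged sketch of that ordering, in the style of ``rcu_gp_cleanup()``::

   new_gp_seq = rcu_state.gp_seq;
   rcu_seq_end(&new_gp_seq);
   rcu_for_each_node_breadth_first(rnp) {
           raw_spin_lock_irq_rcu_node(rnp);
           WRITE_ONCE(rnp->gp_seq, new_gp_seq);  /* Per-node copies first... */
           raw_spin_unlock_irq_rcu_node(rnp);
   }
   rcu_seq_end(&rcu_state.gp_seq);  /* ...the rcu_state master copy last. */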
0574
0575 +-----------------------------------------------------------------------+
0576 | **Quick Quiz**: |
0577 +-----------------------------------------------------------------------+
0578 | But when precisely does the grace period end? |
0579 +-----------------------------------------------------------------------+
0580 | **Answer**: |
0581 +-----------------------------------------------------------------------+
0582 | There is no useful single point at which the grace period can be said |
0583 | to end. The earliest reasonable candidate is as soon as the last CPU |
0584 | has reported its quiescent state, but it may be some milliseconds |
0585 | before RCU becomes aware of this. The latest reasonable candidate is |
0586 | once the ``rcu_state`` structure's ``->gp_seq`` field has been |
0587 | updated, but it is quite possible that some CPUs have already |
0588 | completed phase two of their updates by that time. In short, if you |
0589 | are going to work with RCU, you need to learn to embrace uncertainty. |
0590 +-----------------------------------------------------------------------+
0591
0592 Callback Invocation
0593 ^^^^^^^^^^^^^^^^^^^
0594
0595 Once a given CPU's leaf ``rcu_node`` structure's ``->gp_seq`` field has
0596 been updated, that CPU can begin invoking its RCU callbacks that were
0597 waiting for this grace period to end. These callbacks are identified by
0598 ``rcu_advance_cbs()``, which is usually invoked by
0599 ``__note_gp_changes()``. As shown in the diagram below, this invocation
0600 can be triggered by the scheduling-clock interrupt
0601 (``rcu_sched_clock_irq()`` on the left) or by idle entry
(``rcu_cleanup_after_idle()`` on the right, but only for kernels built
0603 with ``CONFIG_RCU_FAST_NO_HZ=y``). Either way, ``RCU_SOFTIRQ`` is
0604 raised, which results in ``rcu_do_batch()`` invoking the callbacks,
0605 which in turn allows those callbacks to carry out (either directly or
0606 indirectly via wakeup) the needed phase-two processing for each update.
0607
0608 .. kernel-figure:: TreeRCU-callback-invocation.svg
0609
0610 Please note that callback invocation can also be prompted by any number
0611 of corner-case code paths, for example, when a CPU notes that it has
0612 excessive numbers of callbacks queued. In all cases, the CPU acquires
0613 its leaf ``rcu_node`` structure's ``->lock`` before invoking callbacks,
0614 which preserves the required ordering against the newly completed grace
0615 period.
0616
0617 However, if the callback function communicates to other CPUs, for
0618 example, doing a wakeup, then it is that function's responsibility to
0619 maintain ordering. For example, if the callback function wakes up a task
that runs on some other CPU, proper ordering must be in place in both the
0621 callback function and the task being awakened. To see why this is
0622 important, consider the top half of the `grace-period
0623 cleanup`_ diagram. The callback might be
0624 running on a CPU corresponding to the leftmost leaf ``rcu_node``
0625 structure, and awaken a task that is to run on a CPU corresponding to
0626 the rightmost leaf ``rcu_node`` structure, and the grace-period kernel
0627 thread might not yet have reached the rightmost leaf. In this case, the
0628 grace period's memory ordering might not yet have reached that CPU, so
0629 again the callback function and the awakened task must supply proper
0630 ordering.
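
For example, here is a hedged sketch of one way to supply that
ordering; the ``struct gp_waiter`` type and its fields are
hypothetical::

   struct gp_waiter {
           struct rcu_head rh;
           wait_queue_head_t wq;
           bool done;
   };

   static void gp_waiter_cb(struct rcu_head *rhp)
   {
           struct gp_waiter *wp = container_of(rhp, struct gp_waiter, rh);

           /* The release store orders the callback's prior accesses... */
           smp_store_release(&wp->done, true);
           wake_up(&wp->wq);
   }

   /* ...and the acquire load on the (possibly not-yet-ordered) wakee's
    * CPU completes the handoff: */
   wait_event(w.wq, smp_load_acquire(&w.done));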
0631
0632 Putting It All Together
0633 ~~~~~~~~~~~~~~~~~~~~~~~
0634
0635 A stitched-together diagram is here:
0636
0637 .. kernel-figure:: TreeRCU-gp.svg
0638
0639 Legal Statement
0640 ~~~~~~~~~~~~~~~
0641
0642 This work represents the view of the author and does not necessarily
0643 represent the view of IBM.
0644
0645 Linux is a registered trademark of Linus Torvalds.
0646
0647 Other company, product, and service names may be trademarks or service
0648 marks of others.