0001 ============================
0002 LINUX KERNEL MEMORY BARRIERS
0003 ============================
0004
0005 By: David Howells <dhowells@redhat.com>
0006 Paul E. McKenney <paulmck@linux.ibm.com>
0007 Will Deacon <will.deacon@arm.com>
0008 Peter Zijlstra <peterz@infradead.org>
0009
0010 ==========
0011 DISCLAIMER
0012 ==========
0013
0014 This document is not a specification; it is intentionally (for the sake of
0015 brevity) and unintentionally (due to being human) incomplete. This document is
0016 meant as a guide to using the various memory barriers provided by Linux, but
0017 in case of any doubt (and there are many) please ask. Some doubts may be
0018 resolved by referring to the formal memory consistency model and related
0019 documentation at tools/memory-model/. Nevertheless, even this memory
0020 model should be viewed as the collective opinion of its maintainers rather
0021 than as an infallible oracle.
0022
0023 To repeat, this document is not a specification of what Linux expects from
0024 hardware.
0025
0026 The purpose of this document is twofold:
0027
0028 (1) to specify the minimum functionality that one can rely on for any
0029 particular barrier, and
0030
0031 (2) to provide a guide as to how to use the barriers that are available.
0032
0033 Note that an architecture can provide more than the minimum requirement
0034 for any particular barrier, but if the architecture provides less than
0035 that, that architecture is incorrect.
0036
0037 Note also that it is possible that a barrier may be a no-op for an
0038 architecture because the way that arch works renders an explicit barrier
0039 unnecessary in that case.
0040
0041
0042 ========
0043 CONTENTS
0044 ========
0045
0046 (*) Abstract memory access model.
0047
0048 - Device operations.
0049 - Guarantees.
0050
0051 (*) What are memory barriers?
0052
0053 - Varieties of memory barrier.
0054 - What may not be assumed about memory barriers?
0055 - Data dependency barriers (historical).
0056 - Control dependencies.
0057 - SMP barrier pairing.
0058 - Examples of memory barrier sequences.
0059 - Read memory barriers vs load speculation.
0060 - Multicopy atomicity.
0061
0062 (*) Explicit kernel barriers.
0063
0064 - Compiler barrier.
0065 - CPU memory barriers.
0066
0067 (*) Implicit kernel memory barriers.
0068
0069 - Lock acquisition functions.
0070 - Interrupt disabling functions.
0071 - Sleep and wake-up functions.
0072 - Miscellaneous functions.
0073
0074 (*) Inter-CPU acquiring barrier effects.
0075
0076 - Acquires vs memory accesses.
0077
0078 (*) Where are memory barriers needed?
0079
0080 - Interprocessor interaction.
0081 - Atomic operations.
0082 - Accessing devices.
0083 - Interrupts.
0084
0085 (*) Kernel I/O barrier effects.
0086
0087 (*) Assumed minimum execution ordering model.
0088
0089 (*) The effects of the cpu cache.
0090
0091 - Cache coherency.
0092 - Cache coherency vs DMA.
0093 - Cache coherency vs MMIO.
0094
0095 (*) The things CPUs get up to.
0096
0097 - And then there's the Alpha.
0098 - Virtual Machine Guests.
0099
0100 (*) Example uses.
0101
0102 - Circular buffers.
0103
0104 (*) References.
0105
0106
0107 ============================
0108 ABSTRACT MEMORY ACCESS MODEL
0109 ============================
0110
0111 Consider the following abstract model of the system:
0112
0113 : :
0114 : :
0115 : :
0116 +-------+ : +--------+ : +-------+
0117 | | : | | : | |
0118 | | : | | : | |
0119 | CPU 1 |<----->| Memory |<----->| CPU 2 |
0120 | | : | | : | |
0121 | | : | | : | |
0122 +-------+ : +--------+ : +-------+
0123 ^ : ^ : ^
0124 | : | : |
0125 | : | : |
0126 | : v : |
0127 | : +--------+ : |
0128 | : | | : |
0129 | : | | : |
0130 +---------->| Device |<----------+
0131 : | | :
0132 : | | :
0133 : +--------+ :
0134 : :
0135
0136 Each CPU executes a program that generates memory access operations. In the
0137 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
0138 perform the memory operations in any order it likes, provided program causality
0139 appears to be maintained. Similarly, the compiler may also arrange the
0140 instructions it emits in any order it likes, provided it doesn't affect the
0141 apparent operation of the program.
0142
0143 So in the above diagram, the effects of the memory operations performed by a
0144 CPU are perceived by the rest of the system as the operations cross the
0145 interface between the CPU and rest of the system (the dotted lines).
0146
0147
0148 For example, consider the following sequence of events:
0149
0150 CPU 1 CPU 2
0151 =============== ===============
0152 { A == 1; B == 2 }
0153 A = 3; x = B;
0154 B = 4; y = A;
0155
0156 The set of accesses as seen by the memory system in the middle can be arranged
0157 in 24 different combinations:
0158
0159 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
0160 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
0161 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
0162 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
0163 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
0164 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
0165 STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
0166 STORE B=4, ...
0167 ...
0168
0169 and can thus result in four different combinations of values:
0170
0171 x == 2, y == 1
0172 x == 2, y == 3
0173 x == 4, y == 1
0174 x == 4, y == 3
0175
0176
0177 Furthermore, the stores committed by a CPU to the memory system may not be
0178 perceived by the loads made by another CPU in the same order as the stores were
0179 committed.
0180
0181
0182 As a further example, consider this sequence of events:
0183
0184 CPU 1 CPU 2
0185 =============== ===============
0186 { A == 1, B == 2, C == 3, P == &A, Q == &C }
0187 B = 4; Q = P;
0188 P = &B; D = *Q;
0189
0190 There is an obvious data dependency here, as the value loaded into D depends on
0191 the address retrieved from P by CPU 2. At the end of the sequence, any of the
0192 following results are possible:
0193
0194 (Q == &A) and (D == 1)
0195 (Q == &B) and (D == 2)
0196 (Q == &B) and (D == 4)
0197
Note that CPU 2 will never try to load C into D because the CPU will load P
0199 into Q before issuing the load of *Q.
0200
0201
0202 DEVICE OPERATIONS
0203 -----------------
0204
0205 Some devices present their control interfaces as collections of memory
0206 locations, but the order in which the control registers are accessed is very
0207 important. For instance, imagine an ethernet card with a set of internal
0208 registers that are accessed through an address port register (A) and a data
0209 port register (D). To read internal register 5, the following code might then
0210 be used:
0211
0212 *A = 5;
0213 x = *D;
0214
0215 but this might show up as either of the following two sequences:
0216
0217 STORE *A = 5, x = LOAD *D
0218 x = LOAD *D, STORE *A = 5
0219
the second of which will almost certainly result in a malfunction, since it sets
the address _after_ attempting to read the register.
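
In the Linux kernel, drivers would normally avoid this problem by using the
ordered MMIO accessors covered under "KERNEL I/O BARRIER EFFECTS".  A minimal
sketch, assuming the card's registers have been ioremap()ed and that A_OFFSET
and D_OFFSET are hypothetical offsets of the two port registers:

	void __iomem *regs;		/* obtained from ioremap() */

	writel(5, regs + A_OFFSET);	/* select internal register 5 */
	x = readl(regs + D_OFFSET);	/* ordered after the writel() above */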
0222
0223
0224 GUARANTEES
0225 ----------
0226
0227 There are some minimal guarantees that may be expected of a CPU:
0228
0229 (*) On any given CPU, dependent memory accesses will be issued in order, with
0230 respect to itself. This means that for:
0231
0232 Q = READ_ONCE(P); D = READ_ONCE(*Q);
0233
0234 the CPU will issue the following memory operations:
0235
0236 Q = LOAD P, D = LOAD *Q
0237
0238 and always in that order. However, on DEC Alpha, READ_ONCE() also
0239 emits a memory-barrier instruction, so that a DEC Alpha CPU will
0240 instead issue the following memory operations:
0241
0242 Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER
0243
0244 Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
0245 mischief.
0246
0247 (*) Overlapping loads and stores within a particular CPU will appear to be
0248 ordered within that CPU. This means that for:
0249
0250 a = READ_ONCE(*X); WRITE_ONCE(*X, b);
0251
0252 the CPU will only issue the following sequence of memory operations:
0253
0254 a = LOAD *X, STORE *X = b
0255
0256 And for:
0257
0258 WRITE_ONCE(*X, c); d = READ_ONCE(*X);
0259
0260 the CPU will only issue:
0261
0262 STORE *X = c, d = LOAD *X
0263
0264 (Loads and stores overlap if they are targeted at overlapping pieces of
0265 memory).
0266
0267 And there are a number of things that _must_ or _must_not_ be assumed:
0268
0269 (*) It _must_not_ be assumed that the compiler will do what you want
0270 with memory references that are not protected by READ_ONCE() and
0271 WRITE_ONCE(). Without them, the compiler is within its rights to
0272 do all sorts of "creative" transformations, which are covered in
0273 the COMPILER BARRIER section.
0274
0275 (*) It _must_not_ be assumed that independent loads and stores will be issued
0276 in the order given. This means that for:
0277
0278 X = *A; Y = *B; *D = Z;
0279
0280 we may get any of the following sequences:
0281
0282 X = LOAD *A, Y = LOAD *B, STORE *D = Z
0283 X = LOAD *A, STORE *D = Z, Y = LOAD *B
0284 Y = LOAD *B, X = LOAD *A, STORE *D = Z
0285 Y = LOAD *B, STORE *D = Z, X = LOAD *A
0286 STORE *D = Z, X = LOAD *A, Y = LOAD *B
0287 STORE *D = Z, Y = LOAD *B, X = LOAD *A
0288
0289 (*) It _must_ be assumed that overlapping memory accesses may be merged or
0290 discarded. This means that for:
0291
0292 X = *A; Y = *(A + 4);
0293
0294 we may get any one of the following sequences:
0295
0296 X = LOAD *A; Y = LOAD *(A + 4);
0297 Y = LOAD *(A + 4); X = LOAD *A;
0298 {X, Y} = LOAD {*A, *(A + 4) };
0299
0300 And for:
0301
0302 *A = X; *(A + 4) = Y;
0303
0304 we may get any of:
0305
0306 STORE *A = X; STORE *(A + 4) = Y;
0307 STORE *(A + 4) = Y; STORE *A = X;
0308 STORE {*A, *(A + 4) } = {X, Y};
0309
0310 And there are anti-guarantees:
0311
0312 (*) These guarantees do not apply to bitfields, because compilers often
0313 generate code to modify these using non-atomic read-modify-write
0314 sequences. Do not attempt to use bitfields to synchronize parallel
0315 algorithms.
0316
0317 (*) Even in cases where bitfields are protected by locks, all fields
0318 in a given bitfield must be protected by one lock. If two fields
0319 in a given bitfield are protected by different locks, the compiler's
0320 non-atomic read-modify-write sequences can cause an update to one
0321 field to corrupt the value of an adjacent field.
0322
0323 (*) These guarantees apply only to properly aligned and sized scalar
0324 variables. "Properly sized" currently means variables that are
0325 the same size as "char", "short", "int" and "long". "Properly
0326 aligned" means the natural alignment, thus no constraints for
0327 "char", two-byte alignment for "short", four-byte alignment for
0328 "int", and either four-byte or eight-byte alignment for "long",
0329 on 32-bit and 64-bit systems, respectively. Note that these
0330 guarantees were introduced into the C11 standard, so beware when
0331 using older pre-C11 compilers (for example, gcc 4.6). The portion
0332 of the standard containing this guarantee is Section 3.14, which
0333 defines "memory location" as follows:
0334
0335 memory location
0336 either an object of scalar type, or a maximal sequence
0337 of adjacent bit-fields all having nonzero width
0338
0339 NOTE 1: Two threads of execution can update and access
0340 separate memory locations without interfering with
0341 each other.
0342
0343 NOTE 2: A bit-field and an adjacent non-bit-field member
0344 are in separate memory locations. The same applies
0345 to two bit-fields, if one is declared inside a nested
0346 structure declaration and the other is not, or if the two
0347 are separated by a zero-length bit-field declaration,
0348 or if they are separated by a non-bit-field member
0349 declaration. It is not safe to concurrently update two
0350 bit-fields in the same structure if all members declared
0351 between them are also bit-fields, no matter what the
0352 sizes of those intervening bit-fields happen to be.
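
For example, consider this hypothetical structure:

	struct foo {
		int a : 16;
		int b : 16;
	};

Here 'a' and 'b' form a sequence of adjacent bit-fields and thus share a
single memory location.  A compiler may implement an update to 'a' as a
non-atomic read-modify-write of the whole location, so a concurrent update
to 'b' on another CPU can be lost unless both fields are protected by the
same lock.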
0353
0354
0355 =========================
0356 WHAT ARE MEMORY BARRIERS?
0357 =========================
0358
0359 As can be seen above, independent memory operations are effectively performed
0360 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
0361 What is required is some way of intervening to instruct the compiler and the
0362 CPU to restrict the order.
0363
0364 Memory barriers are such interventions. They impose a perceived partial
0365 ordering over the memory operations on either side of the barrier.
0366
0367 Such enforcement is important because the CPUs and other devices in a system
0368 can use a variety of tricks to improve performance, including reordering,
0369 deferral and combination of memory operations; speculative loads; speculative
0370 branch prediction and various types of caching. Memory barriers are used to
0371 override or suppress these tricks, allowing the code to sanely control the
0372 interaction of multiple CPUs and/or devices.
0373
0374
0375 VARIETIES OF MEMORY BARRIER
0376 ---------------------------
0377
0378 Memory barriers come in four basic varieties:
0379
0380 (1) Write (or store) memory barriers.
0381
0382 A write memory barrier gives a guarantee that all the STORE operations
0383 specified before the barrier will appear to happen before all the STORE
0384 operations specified after the barrier with respect to the other
0385 components of the system.
0386
0387 A write barrier is a partial ordering on stores only; it is not required
0388 to have any effect on loads.
0389
0390 A CPU can be viewed as committing a sequence of store operations to the
0391 memory system as time progresses. All stores _before_ a write barrier
0392 will occur _before_ all the stores after the write barrier.
0393
0394 [!] Note that write barriers should normally be paired with read or data
0395 dependency barriers; see the "SMP barrier pairing" subsection.
0396
0397
0398 (2) Data dependency barriers.
0399
0400 A data dependency barrier is a weaker form of read barrier. In the case
0401 where two loads are performed such that the second depends on the result
0402 of the first (eg: the first load retrieves the address to which the second
0403 load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
0405 obtained by the first load is accessed.
0406
0407 A data dependency barrier is a partial ordering on interdependent loads
0408 only; it is not required to have any effect on stores, independent loads
0409 or overlapping loads.
0410
0411 As mentioned in (1), the other CPUs in the system can be viewed as
0412 committing sequences of stores to the memory system that the CPU being
0413 considered can then perceive. A data dependency barrier issued by the CPU
0414 under consideration guarantees that for any load preceding it, if that
0415 load touches one of a sequence of stores from another CPU, then by the
0416 time the barrier completes, the effects of all the stores prior to that
0417 touched by the load will be perceptible to any loads issued after the data
0418 dependency barrier.
0419
0420 See the "Examples of memory barrier sequences" subsection for diagrams
0421 showing the ordering constraints.
0422
0423 [!] Note that the first load really has to have a _data_ dependency and
0424 not a control dependency. If the address for the second load is dependent
0425 on the first load, but the dependency is through a conditional rather than
0426 actually loading the address itself, then it's a _control_ dependency and
0427 a full read barrier or better is required. See the "Control dependencies"
0428 subsection for more information.
0429
0430 [!] Note that data dependency barriers should normally be paired with
0431 write barriers; see the "SMP barrier pairing" subsection.
0432
0433
0434 (3) Read (or load) memory barriers.
0435
0436 A read barrier is a data dependency barrier plus a guarantee that all the
0437 LOAD operations specified before the barrier will appear to happen before
0438 all the LOAD operations specified after the barrier with respect to the
0439 other components of the system.
0440
0441 A read barrier is a partial ordering on loads only; it is not required to
0442 have any effect on stores.
0443
0444 Read memory barriers imply data dependency barriers, and so can substitute
0445 for them.
0446
0447 [!] Note that read barriers should normally be paired with write barriers;
0448 see the "SMP barrier pairing" subsection.
0449
0450
0451 (4) General memory barriers.
0452
0453 A general memory barrier gives a guarantee that all the LOAD and STORE
0454 operations specified before the barrier will appear to happen before all
0455 the LOAD and STORE operations specified after the barrier with respect to
0456 the other components of the system.
0457
0458 A general memory barrier is a partial ordering over both loads and stores.
0459
0460 General memory barriers imply both read and write memory barriers, and so
0461 can substitute for either.
0462
0463
0464 And a couple of implicit varieties:
0465
0466 (5) ACQUIRE operations.
0467
0468 This acts as a one-way permeable barrier. It guarantees that all memory
0469 operations after the ACQUIRE operation will appear to happen after the
0470 ACQUIRE operation with respect to the other components of the system.
0471 ACQUIRE operations include LOCK operations and both smp_load_acquire()
0472 and smp_cond_load_acquire() operations.
0473
0474 Memory operations that occur before an ACQUIRE operation may appear to
0475 happen after it completes.
0476
0477 An ACQUIRE operation should almost always be paired with a RELEASE
0478 operation.
0479
0480
0481 (6) RELEASE operations.
0482
0483 This also acts as a one-way permeable barrier. It guarantees that all
0484 memory operations before the RELEASE operation will appear to happen
0485 before the RELEASE operation with respect to the other components of the
0486 system. RELEASE operations include UNLOCK operations and
0487 smp_store_release() operations.
0488
0489 Memory operations that occur after a RELEASE operation may appear to
0490 happen before it completes.
0491
0492 The use of ACQUIRE and RELEASE operations generally precludes the need
0493 for other sorts of memory barrier. In addition, a RELEASE+ACQUIRE pair is
0494 -not- guaranteed to act as a full memory barrier. However, after an
0495 ACQUIRE on a given variable, all memory accesses preceding any prior
0496 RELEASE on that same variable are guaranteed to be visible. In other
0497 words, within a given variable's critical section, all accesses of all
0498 previous critical sections for that variable are guaranteed to have
0499 completed.
0500
0501 This means that ACQUIRE acts as a minimal "acquire" operation and
0502 RELEASE acts as a minimal "release" operation.
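
For example, here is a minimal sketch of a flag-based hand-off using
smp_store_release() and smp_load_acquire(); 'data' and 'flag' are assumed
to be shared variables that are initially zero:

	/* CPU 1 */
	WRITE_ONCE(data, 42);
	smp_store_release(&flag, 1);	/* RELEASE: orders the store to data */

	/* CPU 2 */
	while (!smp_load_acquire(&flag))	/* ACQUIRE: orders the load of data */
		cpu_relax();
	r1 = READ_ONCE(data);		/* guaranteed to observe 42 */

Because CPU 2's ACQUIRE reads the value written by CPU 1's RELEASE, all of
CPU 1's memory accesses preceding the RELEASE are visible to CPU 2's
accesses following the ACQUIRE.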
0503
0504 A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
0505 RELEASE variants in addition to fully-ordered and relaxed (no barrier
0506 semantics) definitions. For compound atomics performing both a load and a
0507 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
0508 only to the store portion of the operation.
0509
0510 Memory barriers are only required where there's a possibility of interaction
0511 between two CPUs or between a CPU and a device. If it can be guaranteed that
0512 there won't be any such interaction in any particular piece of code, then
0513 memory barriers are unnecessary in that piece of code.
0514
0515
0516 Note that these are the _minimum_ guarantees. Different architectures may give
0517 more substantial guarantees, but they may _not_ be relied upon outside of arch
0518 specific code.
0519
0520
0521 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
0522 ----------------------------------------------
0523
0524 There are certain things that the Linux kernel memory barriers do not guarantee:
0525
0526 (*) There is no guarantee that any of the memory accesses specified before a
0527 memory barrier will be _complete_ by the completion of a memory barrier
0528 instruction; the barrier can be considered to draw a line in that CPU's
0529 access queue that accesses of the appropriate type may not cross.
0530
0531 (*) There is no guarantee that issuing a memory barrier on one CPU will have
0532 any direct effect on another CPU or any other hardware in the system. The
0533 indirect effect will be the order in which the second CPU sees the effects
0534 of the first CPU's accesses occur, but see the next point:
0535
0536 (*) There is no guarantee that a CPU will see the correct order of effects
0537 from a second CPU's accesses, even _if_ the second CPU uses a memory
0538 barrier, unless the first CPU _also_ uses a matching memory barrier (see
0539 the subsection on "SMP Barrier Pairing").
0540
0541 (*) There is no guarantee that some intervening piece of off-the-CPU
0542 hardware[*] will not reorder the memory accesses. CPU cache coherency
0543 mechanisms should propagate the indirect effects of a memory barrier
0544 between CPUs, but might not do so in order.
0545
0546 [*] For information on bus mastering DMA and coherency please read:
0547
0548 Documentation/driver-api/pci/pci.rst
0549 Documentation/core-api/dma-api-howto.rst
0550 Documentation/core-api/dma-api.rst
0551
0552
0553 DATA DEPENDENCY BARRIERS (HISTORICAL)
0554 -------------------------------------
0555
0556 As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for
0557 DEC Alpha, which means that about the only people who need to pay attention
0558 to this section are those working on DEC Alpha architecture-specific code
0559 and those working on READ_ONCE() itself. For those who need it, and for
0560 those who are interested in the history, here is the story of
0561 data-dependency barriers.
0562
0563 The usage requirements of data dependency barriers are a little subtle, and
0564 it's not always obvious that they're needed. To illustrate, consider the
0565 following sequence of events:
0566
0567 CPU 1 CPU 2
0568 =============== ===============
0569 { A == 1, B == 2, C == 3, P == &A, Q == &C }
0570 B = 4;
0571 <write barrier>
0572 WRITE_ONCE(P, &B);
0573 Q = READ_ONCE(P);
0574 D = *Q;
0575
0576 There's a clear data dependency here, and it would seem that by the end of the
0577 sequence, Q must be either &A or &B, and that:
0578
0579 (Q == &A) implies (D == 1)
0580 (Q == &B) implies (D == 4)
0581
0582 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
0583 leading to the following situation:
0584
0585 (Q == &B) and (D == 2) ????
0586
0587 While this may seem like a failure of coherency or causality maintenance, it
0588 isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
0589 Alpha).
0590
0591 To deal with this, a data dependency barrier or better must be inserted
0592 between the address load and the data load:
0593
0594 CPU 1 CPU 2
0595 =============== ===============
0596 { A == 1, B == 2, C == 3, P == &A, Q == &C }
0597 B = 4;
0598 <write barrier>
0599 WRITE_ONCE(P, &B);
0600 Q = READ_ONCE(P);
0601 <data dependency barrier>
0602 D = *Q;
0603
0604 This enforces the occurrence of one of the two implications, and prevents the
0605 third possibility from arising.
0606
0607
0608 [!] Note that this extremely counterintuitive situation arises most easily on
0609 machines with split caches, so that, for example, one cache bank processes
0610 even-numbered cache lines and the other bank processes odd-numbered cache
0611 lines. The pointer P might be stored in an odd-numbered cache line, and the
0612 variable B might be stored in an even-numbered cache line. Then, if the
0613 even-numbered bank of the reading CPU's cache is extremely busy while the
0614 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
0615 but the old value of the variable B (2).
0616
0617
0618 A data-dependency barrier is not required to order dependent writes
0619 because the CPUs that the Linux kernel supports don't do writes
0620 until they are certain (1) that the write will actually happen, (2)
0621 of the location of the write, and (3) of the value to be written.
0622 But please carefully read the "CONTROL DEPENDENCIES" section and the
0623 Documentation/RCU/rcu_dereference.rst file: The compiler can and does
0624 break dependencies in a great many highly creative ways.
0625
0626 CPU 1 CPU 2
0627 =============== ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
0629 B = 4;
0630 <write barrier>
0631 WRITE_ONCE(P, &B);
0632 Q = READ_ONCE(P);
0633 WRITE_ONCE(*Q, 5);
0634
0635 Therefore, no data-dependency barrier is required to order the read into
0636 Q with the store into *Q. In other words, this outcome is prohibited,
0637 even without a data-dependency barrier:
0638
0639 (Q == &B) && (B == 4)
0640
0641 Please note that this pattern should be rare. After all, the whole point
0642 of dependency ordering is to -prevent- writes to the data structure, along
0643 with the expensive cache misses associated with those writes. This pattern
0644 can be used to record rare error conditions and the like, and the CPUs'
0645 naturally occurring ordering prevents such records from being lost.
0646
0647
0648 Note well that the ordering provided by a data dependency is local to
0649 the CPU containing it. See the section on "Multicopy atomicity" for
0650 more information.
0651
0652
0653 The data dependency barrier is very important to the RCU system,
0654 for example. See rcu_assign_pointer() and rcu_dereference() in
0655 include/linux/rcupdate.h. This permits the current target of an RCU'd
0656 pointer to be replaced with a new modified target, without the replacement
0657 target appearing to be incompletely initialised.
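
A minimal sketch of this pattern, using a hypothetical global pointer 'gp'
pointing to a hypothetical 'struct foo':

	/* Updater */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	p->b = 2;
	rcu_assign_pointer(gp, p);	/* orders initialisation before publication */

	/* Reader */
	rcu_read_lock();
	q = rcu_dereference(gp);	/* provides the needed dependency ordering */
	if (q)
		do_something_with(q->a, q->b);
	rcu_read_unlock();

If the reader's rcu_dereference() sees the newly assigned pointer, it is
guaranteed to also see the initialised values of q->a and q->b.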
0658
0659 See also the subsection on "Cache Coherency" for a more thorough example.
0660
0661
0662 CONTROL DEPENDENCIES
0663 --------------------
0664
0665 Control dependencies can be a bit tricky because current compilers do
0666 not understand them. The purpose of this section is to help you prevent
0667 the compiler's ignorance from breaking your code.
0668
0669 A load-load control dependency requires a full read memory barrier, not
0670 simply a data dependency barrier to make it work correctly. Consider the
0671 following bit of code:
0672
0673 q = READ_ONCE(a);
0674 if (q) {
0675 <data dependency barrier> /* BUG: No data dependency!!! */
0676 p = READ_ONCE(b);
0677 }
0678
0679 This will not have the desired effect because there is no actual data
0680 dependency, but rather a control dependency that the CPU may short-circuit
0681 by attempting to predict the outcome in advance, so that other CPUs see
0682 the load from b as having happened before the load from a. In such a
0683 case what's actually required is:
0684
0685 q = READ_ONCE(a);
0686 if (q) {
0687 <read barrier>
0688 p = READ_ONCE(b);
0689 }
0690
0691 However, stores are not speculated. This means that ordering -is- provided
0692 for load-store control dependencies, as in the following example:
0693
0694 q = READ_ONCE(a);
0695 if (q) {
0696 WRITE_ONCE(b, 1);
0697 }
0698
0699 Control dependencies pair normally with other types of barriers.
0700 That said, please note that neither READ_ONCE() nor WRITE_ONCE()
0701 are optional! Without the READ_ONCE(), the compiler might combine the
0702 load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
0703 the compiler might combine the store to 'b' with other stores to 'b'.
0704 Either can result in highly counterintuitive effects on ordering.
0705
0706 Worse yet, if the compiler is able to prove (say) that the value of
0707 variable 'a' is always non-zero, it would be well within its rights
0708 to optimize the original example by eliminating the "if" statement
0709 as follows:
0710
0711 q = a;
0712 b = 1; /* BUG: Compiler and CPU can both reorder!!! */
0713
0714 So don't leave out the READ_ONCE().
0715
0716 It is tempting to try to enforce ordering on identical stores on both
0717 branches of the "if" statement as follows:
0718
0719 q = READ_ONCE(a);
0720 if (q) {
0721 barrier();
0722 WRITE_ONCE(b, 1);
0723 do_something();
0724 } else {
0725 barrier();
0726 WRITE_ONCE(b, 1);
0727 do_something_else();
0728 }
0729
0730 Unfortunately, current compilers will transform this as follows at high
0731 optimization levels:
0732
0733 q = READ_ONCE(a);
0734 barrier();
0735 WRITE_ONCE(b, 1); /* BUG: No ordering vs. load from a!!! */
0736 if (q) {
0737 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
0738 do_something();
0739 } else {
0740 /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
0741 do_something_else();
0742 }
0743
0744 Now there is no conditional between the load from 'a' and the store to
0745 'b', which means that the CPU is within its rights to reorder them:
0746 The conditional is absolutely required, and must be present in the
0747 assembly code even after all compiler optimizations have been applied.
0748 Therefore, if you need ordering in this example, you need explicit
0749 memory barriers, for example, smp_store_release():
0750
0751 q = READ_ONCE(a);
0752 if (q) {
0753 smp_store_release(&b, 1);
0754 do_something();
0755 } else {
0756 smp_store_release(&b, 1);
0757 do_something_else();
0758 }
0759
0760 In contrast, without explicit memory barriers, two-legged-if control
0761 ordering is guaranteed only when the stores differ, for example:
0762
0763 q = READ_ONCE(a);
0764 if (q) {
0765 WRITE_ONCE(b, 1);
0766 do_something();
0767 } else {
0768 WRITE_ONCE(b, 2);
0769 do_something_else();
0770 }
0771
0772 The initial READ_ONCE() is still required to prevent the compiler from
0773 proving the value of 'a'.
0774
0775 In addition, you need to be careful what you do with the local variable 'q',
0776 otherwise the compiler might be able to guess the value and again remove
0777 the needed conditional. For example:
0778
0779 q = READ_ONCE(a);
0780 if (q % MAX) {
0781 WRITE_ONCE(b, 1);
0782 do_something();
0783 } else {
0784 WRITE_ONCE(b, 2);
0785 do_something_else();
0786 }
0787
0788 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
0789 equal to zero, in which case the compiler is within its rights to
0790 transform the above code into the following:
0791
0792 q = READ_ONCE(a);
0793 WRITE_ONCE(b, 2);
0794 do_something_else();
0795
0796 Given this transformation, the CPU is not required to respect the ordering
0797 between the load from variable 'a' and the store to variable 'b'. It is
0798 tempting to add a barrier(), but this does not help. The conditional
0799 is gone, and the barrier won't bring it back. Therefore, if you are
0800 relying on this ordering, you should make sure that MAX is greater than
0801 one, perhaps as follows:
0802
0803 q = READ_ONCE(a);
0804 BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
0805 if (q % MAX) {
0806 WRITE_ONCE(b, 1);
0807 do_something();
0808 } else {
0809 WRITE_ONCE(b, 2);
0810 do_something_else();
0811 }
0812
0813 Please note once again that the stores to 'b' differ. If they were
0814 identical, as noted earlier, the compiler could pull this store outside
0815 of the 'if' statement.
0816
0817 You must also be careful not to rely too much on boolean short-circuit
0818 evaluation. Consider this example:
0819
0820 q = READ_ONCE(a);
0821 if (q || 1 > 0)
0822 WRITE_ONCE(b, 1);
0823
Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:
0827
0828 q = READ_ONCE(a);
0829 WRITE_ONCE(b, 1);
0830
0831 This example underscores the need to ensure that the compiler cannot
0832 out-guess your code. More generally, although READ_ONCE() does force
0833 the compiler to actually emit code for a given load, it does not force
0834 the compiler to use the results.
0835
In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do not
necessarily apply to code following the if-statement:
0839
0840 q = READ_ONCE(a);
0841 if (q) {
0842 WRITE_ONCE(b, 1);
0843 } else {
0844 WRITE_ONCE(b, 2);
0845 }
0846 WRITE_ONCE(c, 1); /* BUG: No ordering against the read from 'a'. */
0847
0848 It is tempting to argue that there in fact is ordering because the
0849 compiler cannot reorder volatile accesses and also cannot reorder
0850 the writes to 'b' with the condition. Unfortunately for this line
0851 of reasoning, the compiler might compile the two writes to 'b' as
0852 conditional-move instructions, as in this fanciful pseudo-assembly
0853 language:
0854
0855 ld r1,a
0856 cmp r1,$0
0857 cmov,ne r4,$1
0858 cmov,eq r4,$2
0859 st r4,b
0860 st $1,c
0861
0862 A weakly ordered CPU would have no dependency of any sort between the load
0863 from 'a' and the store to 'c'. The control dependencies would extend
0864 only to the pair of cmov instructions and the store depending on them.
0865 In short, control dependencies apply only to the stores in the then-clause
0866 and else-clause of the if-statement in question (including functions
0867 invoked by those two clauses), not to code following that if-statement.
0868
0869
0870 Note well that the ordering provided by a control dependency is local
0871 to the CPU containing it. See the section on "Multicopy atomicity"
0872 for more information.
0873
0874
0875 In summary:
0876
0877 (*) Control dependencies can order prior loads against later stores.
0878 However, they do -not- guarantee any other sort of ordering:
0879 Not prior loads against later loads, nor prior stores against
0880 later anything. If you need these other forms of ordering,
0881 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
0882 later loads, smp_mb().
0883
0884 (*) If both legs of the "if" statement begin with identical stores to
0885 the same variable, then those stores must be ordered, either by
0886 preceding both of them with smp_mb() or by using smp_store_release()
0887 to carry out the stores. Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
0889 because, as shown by the example above, optimizing compilers can
0890 destroy the control dependency while respecting the letter of the
0891 barrier() law.
0892
0893 (*) Control dependencies require at least one run-time conditional
0894 between the prior load and the subsequent store, and this
0895 conditional must involve the prior load. If the compiler is able
0896 to optimize the conditional away, it will have also optimized
0897 away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
0898 can help to preserve the needed conditional.
0899
0900 (*) Control dependencies require that the compiler avoid reordering the
0901 dependency into nonexistence. Careful use of READ_ONCE() or
0902 atomic{,64}_read() can help to preserve your control dependency.
0903 Please see the COMPILER BARRIER section for more information.
0904
0905 (*) Control dependencies apply only to the then-clause and else-clause
0906 of the if-statement containing the control dependency, including
0907 any functions that these two clauses call. Control dependencies
0908 do -not- apply to code following the if-statement containing the
0909 control dependency.
0910
0911 (*) Control dependencies pair normally with other types of barriers.
0912
0913 (*) Control dependencies do -not- provide multicopy atomicity. If you
0914 need all the CPUs to see a given store at the same time, use smp_mb().
0915
0916 (*) Compilers do not understand control dependencies. It is therefore
0917 your job to ensure that they do not break your code.
0918
0919
0920 SMP BARRIER PAIRING
0921 -------------------
0922
0923 When dealing with CPU-CPU interactions, certain types of memory barrier should
0924 always be paired. A lack of appropriate pairing is almost certainly an error.
0925
0926 General barriers pair with each other, though they also pair with most
0927 other types of barriers, albeit without multicopy atomicity. An acquire
0928 barrier pairs with a release barrier, but both may also pair with other
0929 barriers, including of course general barriers. A write barrier pairs
0930 with a data dependency barrier, a control dependency, an acquire barrier,
0931 a release barrier, a read barrier, or a general barrier. Similarly a
0932 read barrier, control dependency, or a data dependency barrier pairs
0933 with a write barrier, an acquire barrier, a release barrier, or a
0934 general barrier:
0935
0936 CPU 1 CPU 2
0937 =============== ===============
0938 WRITE_ONCE(a, 1);
0939 <write barrier>
0940 WRITE_ONCE(b, 2); x = READ_ONCE(b);
0941 <read barrier>
0942 y = READ_ONCE(a);
0943
0944 Or:
0945
0946 CPU 1 CPU 2
0947 =============== ===============================
0948 a = 1;
0949 <write barrier>
0950 WRITE_ONCE(b, &a); x = READ_ONCE(b);
0951 <data dependency barrier>
0952 y = *x;
0953
0954 Or even:
0955
0956 CPU 1 CPU 2
0957 =============== ===============================
0958 r1 = READ_ONCE(y);
0959 <general barrier>
0960 WRITE_ONCE(x, 1); if (r2 = READ_ONCE(x)) {
0961 <implicit control dependency>
0962 WRITE_ONCE(y, 1);
0963 }
0964
0965 assert(r1 == 0 || r2 == 0);
0966
0967 Basically, the read barrier always has to be there, even though it can be of
0968 the "weaker" type.
0969
0970 [!] Note that the stores before the write barrier would normally be expected to
0971 match the loads after the read barrier or the data dependency barrier, and vice
0972 versa:
0973
0974 CPU 1 CPU 2
0975 =================== ===================
0976 WRITE_ONCE(a, 1); }---- --->{ v = READ_ONCE(c);
0977 WRITE_ONCE(b, 2); } \ / { w = READ_ONCE(d);
0978 <write barrier> \ <read barrier>
0979 WRITE_ONCE(c, 3); } / \ { x = READ_ONCE(a);
0980 WRITE_ONCE(d, 4); }---- --->{ y = READ_ONCE(b);
0981
0982
0983 EXAMPLES OF MEMORY BARRIER SEQUENCES
0984 ------------------------------------
0985
0986 Firstly, write barriers act as partial orderings on store operations.
0987 Consider the following sequence of events:
0988
0989 CPU 1
0990 =======================
0991 STORE A = 1
0992 STORE B = 2
0993 STORE C = 3
0994 <write barrier>
0995 STORE D = 4
0996 STORE E = 5
0997
0998 This sequence of events is committed to the memory coherence system in an order
0999 that the rest of the system might perceive as the unordered set of { STORE A,
1000 STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
1001 }:
1002
1003 +-------+ : :
1004 | | +------+
1005 | |------>| C=3 | } /\
1006 | | : +------+ }----- \ -----> Events perceptible to
1007 | | : | A=1 | } \/ the rest of the system
1008 | | : +------+ }
1009 | CPU 1 | : | B=2 | }
1010 | | +------+ }
1011 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
1012 | | +------+ } requires all stores prior to the
1013 | | : | E=5 | } barrier to be committed before
1014 | | : +------+ } further stores may take place
1015 | |------>| D=4 | }
1016 | | +------+
1017 +-------+ : :
1018 |
1019 | Sequence in which stores are committed to the
1020 | memory system by CPU 1
1021 V
1022
1023
1024 Secondly, data dependency barriers act as partial orderings on data-dependent
1025 loads. Consider the following sequence of events:
1026
1027 CPU 1 CPU 2
1028 ======================= =======================
1029 { B = 7; X = 9; Y = 8; C = &Y }
1030 STORE A = 1
1031 STORE B = 2
1032 <write barrier>
1033 STORE C = &B LOAD X
1034 STORE D = 4 LOAD C (gets &B)
1035 LOAD *C (reads B)
1036
1037 Without intervention, CPU 2 may perceive the events on CPU 1 in some
1038 effectively random order, despite the write barrier issued by CPU 1:
1039
1040 +-------+ : : : :
1041 | | +------+ +-------+ | Sequence of update
1042 | |------>| B=2 |----- --->| Y->8 | | of perception on
1043 | | : +------+ \ +-------+ | CPU 2
1044 | CPU 1 | : | A=1 | \ --->| C->&Y | V
1045 | | +------+ | +-------+
1046 | | wwwwwwwwwwwwwwww | : :
1047 | | +------+ | : :
1048 | | : | C=&B |--- | : : +-------+
1049 | | : +------+ \ | +-------+ | |
1050 | |------>| D=4 | ----------->| C->&B |------>| |
1051 | | +------+ | +-------+ | |
1052 +-------+ : : | : : | |
1053 | : : | |
1054 | : : | CPU 2 |
1055 | +-------+ | |
1056 Apparently incorrect ---> | | B->7 |------>| |
1057 perception of B (!) | +-------+ | |
1058 | : : | |
1059 | +-------+ | |
1060 The load of X holds ---> \ | X->9 |------>| |
1061 up the maintenance \ +-------+ | |
1062 of coherence of B ----->| B->2 | +-------+
1063 +-------+
1064 : :
1065
1066
1067 In the above example, CPU 2 perceives that B is 7, despite the load of *C
1068 (which would be B) coming after the LOAD of C.
1069
1070 If, however, a data dependency barrier were to be placed between the load of C
1071 and the load of *C (ie: B) on CPU 2:
1072
1073 CPU 1 CPU 2
1074 ======================= =======================
1075 { B = 7; X = 9; Y = 8; C = &Y }
1076 STORE A = 1
1077 STORE B = 2
1078 <write barrier>
1079 STORE C = &B LOAD X
1080 STORE D = 4 LOAD C (gets &B)
1081 <data dependency barrier>
1082 LOAD *C (reads B)
1083
1084 then the following will occur:
1085
1086 +-------+ : : : :
1087 | | +------+ +-------+
1088 | |------>| B=2 |----- --->| Y->8 |
1089 | | : +------+ \ +-------+
1090 | CPU 1 | : | A=1 | \ --->| C->&Y |
1091 | | +------+ | +-------+
1092 | | wwwwwwwwwwwwwwww | : :
1093 | | +------+ | : :
1094 | | : | C=&B |--- | : : +-------+
1095 | | : +------+ \ | +-------+ | |
1096 | |------>| D=4 | ----------->| C->&B |------>| |
1097 | | +------+ | +-------+ | |
1098 +-------+ : : | : : | |
1099 | : : | |
1100 | : : | CPU 2 |
1101 | +-------+ | |
1102 | | X->9 |------>| |
1103 | +-------+ | |
1104 Makes sure all effects ---> \ ddddddddddddddddd | |
1105 prior to the store of C \ +-------+ | |
1106 are perceptible to ----->| B->2 |------>| |
1107 subsequent loads +-------+ | |
1108 : : +-------+
1109
1110
1111 And thirdly, a read barrier acts as a partial order on loads. Consider the
1112 following sequence of events:
1113
1114 CPU 1 CPU 2
1115 ======================= =======================
1116 { A = 0, B = 9 }
1117 STORE A=1
1118 <write barrier>
1119 STORE B=2
1120 LOAD B
1121 LOAD A
1122
1123 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1124 some effectively random order, despite the write barrier issued by CPU 1:
1125
1126 +-------+ : : : :
1127 | | +------+ +-------+
1128 | |------>| A=1 |------ --->| A->0 |
1129 | | +------+ \ +-------+
1130 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1131 | | +------+ | +-------+
1132 | |------>| B=2 |--- | : :
1133 | | +------+ \ | : : +-------+
1134 +-------+ : : \ | +-------+ | |
1135 ---------->| B->2 |------>| |
1136 | +-------+ | CPU 2 |
1137 | | A->0 |------>| |
1138 | +-------+ | |
1139 | : : +-------+
1140 \ : :
1141 \ +-------+
1142 ---->| A->1 |
1143 +-------+
1144 : :
1145
1146
1147 If, however, a read barrier were to be placed between the load of B and the
1148 load of A on CPU 2:
1149
1150 CPU 1 CPU 2
1151 ======================= =======================
1152 { A = 0, B = 9 }
1153 STORE A=1
1154 <write barrier>
1155 STORE B=2
1156 LOAD B
1157 <read barrier>
1158 LOAD A
1159
1160 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
1161 2:
1162
1163 +-------+ : : : :
1164 | | +------+ +-------+
1165 | |------>| A=1 |------ --->| A->0 |
1166 | | +------+ \ +-------+
1167 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1168 | | +------+ | +-------+
1169 | |------>| B=2 |--- | : :
1170 | | +------+ \ | : : +-------+
1171 +-------+ : : \ | +-------+ | |
1172 ---------->| B->2 |------>| |
1173 | +-------+ | CPU 2 |
1174 | : : | |
1175 | : : | |
1176 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1177 barrier causes all effects \ +-------+ | |
1178 prior to the storage of B ---->| A->1 |------>| |
1179 to be perceptible to CPU 2 +-------+ | |
1180 : : +-------+
1181
1182
1183 To illustrate this more completely, consider what could happen if the code
1184 contained a load of A either side of the read barrier:
1185
1186 CPU 1 CPU 2
1187 ======================= =======================
1188 { A = 0, B = 9 }
1189 STORE A=1
1190 <write barrier>
1191 STORE B=2
1192 LOAD B
1193 LOAD A [first load of A]
1194 <read barrier>
1195 LOAD A [second load of A]
1196
1197 Even though the two loads of A both occur after the load of B, they may both
1198 come up with different values:
1199
1200 +-------+ : : : :
1201 | | +------+ +-------+
1202 | |------>| A=1 |------ --->| A->0 |
1203 | | +------+ \ +-------+
1204 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1205 | | +------+ | +-------+
1206 | |------>| B=2 |--- | : :
1207 | | +------+ \ | : : +-------+
1208 +-------+ : : \ | +-------+ | |
1209 ---------->| B->2 |------>| |
1210 | +-------+ | CPU 2 |
1211 | : : | |
1212 | : : | |
1213 | +-------+ | |
1214 | | A->0 |------>| 1st |
1215 | +-------+ | |
1216 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1217 barrier causes all effects \ +-------+ | |
1218 prior to the storage of B ---->| A->1 |------>| 2nd |
1219 to be perceptible to CPU 2 +-------+ | |
1220 : : +-------+
1221
1222
1223 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1224 before the read barrier completes anyway:
1225
1226 +-------+ : : : :
1227 | | +------+ +-------+
1228 | |------>| A=1 |------ --->| A->0 |
1229 | | +------+ \ +-------+
1230 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1231 | | +------+ | +-------+
1232 | |------>| B=2 |--- | : :
1233 | | +------+ \ | : : +-------+
1234 +-------+ : : \ | +-------+ | |
1235 ---------->| B->2 |------>| |
1236 | +-------+ | CPU 2 |
1237 | : : | |
1238 \ : : | |
1239 \ +-------+ | |
1240 ---->| A->1 |------>| 1st |
1241 +-------+ | |
1242 rrrrrrrrrrrrrrrrr | |
1243 +-------+ | |
1244 | A->1 |------>| 2nd |
1245 +-------+ | |
1246 : : +-------+
1247
1248
1249 The guarantee is that the second load will always come up with A == 1 if the
1250 load of B came up with B == 2. No such guarantee exists for the first load of
1251 A; that may come up with either A == 0 or A == 1.
1252
1253
1254 READ MEMORY BARRIERS VS LOAD SPECULATION
1255 ----------------------------------------
1256
Many CPUs speculate with loads: that is, they see that they will need to
load an item from memory, and they find a time when they're not using the
bus for any other loads, and so do the load in advance - even though they
haven't actually got to that point in the instruction execution flow yet.
This permits the
1261 actual load instruction to potentially complete immediately because the CPU
1262 already has the value to hand.
1263
1264 It may turn out that the CPU didn't actually need the value - perhaps because a
1265 branch circumvented the load - in which case it can discard the value or just
1266 cache it for later use.
1267
1268 Consider:
1269
1270 CPU 1 CPU 2
1271 ======================= =======================
1272 LOAD B
1273 DIVIDE } Divide instructions generally
1274 DIVIDE } take a long time to perform
1275 LOAD A
1276
1277 Which might appear as this:
1278
1279 : : +-------+
1280 +-------+ | |
1281 --->| B->2 |------>| |
1282 +-------+ | CPU 2 |
1283 : :DIVIDE | |
1284 +-------+ | |
1285 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1286 division speculates on the +-------+ ~ | |
1287 LOAD of A : : ~ | |
1288 : :DIVIDE | |
1289 : : ~ | |
1290 Once the divisions are complete --> : : ~-->| |
1291 the CPU can then perform the : : | |
1292 LOAD with immediate effect : : +-------+
1293
1294
1295 Placing a read barrier or a data dependency barrier just before the second
1296 load:
1297
1298 CPU 1 CPU 2
1299 ======================= =======================
1300 LOAD B
1301 DIVIDE
1302 DIVIDE
1303 <read barrier>
1304 LOAD A
1305
1306 will force any value speculatively obtained to be reconsidered to an extent
1307 dependent on the type of barrier used. If there was no change made to the
1308 speculated memory location, then the speculated value will just be used:
1309
1310 : : +-------+
1311 +-------+ | |
1312 --->| B->2 |------>| |
1313 +-------+ | CPU 2 |
1314 : :DIVIDE | |
1315 +-------+ | |
1316 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1317 division speculates on the +-------+ ~ | |
1318 LOAD of A : : ~ | |
1319 : :DIVIDE | |
1320 : : ~ | |
1321 : : ~ | |
1322 rrrrrrrrrrrrrrrr~ | |
1323 : : ~ | |
1324 : : ~-->| |
1325 : : | |
1326 : : +-------+
1327
1328
1329 but if there was an update or an invalidation from another CPU pending, then
1330 the speculation will be cancelled and the value reloaded:
1331
1332 : : +-------+
1333 +-------+ | |
1334 --->| B->2 |------>| |
1335 +-------+ | CPU 2 |
1336 : :DIVIDE | |
1337 +-------+ | |
1338 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1339 division speculates on the +-------+ ~ | |
1340 LOAD of A : : ~ | |
1341 : :DIVIDE | |
1342 : : ~ | |
1343 : : ~ | |
1344 rrrrrrrrrrrrrrrrr | |
1345 +-------+ | |
1346 The speculation is discarded ---> --->| A->1 |------>| |
1347 and an updated value is +-------+ | |
1348 retrieved : : +-------+
1349
1350
1351 MULTICOPY ATOMICITY
1352 --------------------
1353
1354 Multicopy atomicity is a deeply intuitive notion about ordering that is
1355 not always provided by real computer systems, namely that a given store
1356 becomes visible at the same time to all CPUs, or, alternatively, that all
1357 CPUs agree on the order in which all stores become visible. However,
1358 support of full multicopy atomicity would rule out valuable hardware
1359 optimizations, so a weaker form called ``other multicopy atomicity''
1360 instead guarantees only that a given store becomes visible at the same
1361 time to all -other- CPUs. The remainder of this document discusses this
1362 weaker form, but for brevity will call it simply ``multicopy atomicity''.
1363
1364 The following example demonstrates multicopy atomicity:
1365
1366 CPU 1 CPU 2 CPU 3
1367 ======================= ======================= =======================
1368 { X = 0, Y = 0 }
1369 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1370 <general barrier> <read barrier>
1371 STORE Y=r1 LOAD X
1372
1373 Suppose that CPU 2's load from X returns 1, which it then stores to Y,
1374 and CPU 3's load from Y returns 1. This indicates that CPU 1's store
1375 to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
1376 CPU 3's load from Y. In addition, the memory barriers guarantee that
1377 CPU 2 executes its load before its store, and CPU 3 loads from Y before
1378 it loads from X. The question is then "Can CPU 3's load from X return 0?"
1379
1380 Because CPU 3's load from X in some sense comes after CPU 2's load, it
1381 is natural to expect that CPU 3's load from X must therefore return 1.
1382 This expectation follows from multicopy atomicity: if a load executing
1383 on CPU B follows a load from the same variable executing on CPU A (and
1384 CPU A did not originally store the value which it read), then on
1385 multicopy-atomic systems, CPU B's load must return either the same value
1386 that CPU A's load did or some later value. However, the Linux kernel
1387 does not require systems to be multicopy atomic.
1388
1389 The use of a general memory barrier in the example above compensates
1390 for any lack of multicopy atomicity. In the example, if CPU 2's load
1391 from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
1392 from X must indeed also return 1.
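
In kernel C, this example might be sketched as follows, with 'x' and 'y'
shared and initially zero:

	/* CPU 1 */
	WRITE_ONCE(x, 1);

	/* CPU 2 */
	r1 = READ_ONCE(x);
	smp_mb();		/* general barrier */
	WRITE_ONCE(y, r1);

	/* CPU 3 */
	r2 = READ_ONCE(y);
	smp_rmb();		/* read barrier */
	r3 = READ_ONCE(x);

If r1 == 1 and r2 == 1, then r3 is guaranteed to be 1 despite any lack of
multicopy atomicity.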
1393
1394 However, dependencies, read barriers, and write barriers are not always
1395 able to compensate for non-multicopy atomicity. For example, suppose
1396 that CPU 2's general barrier is removed from the above example, leaving
1397 only the data dependency shown below:
1398
1399 CPU 1 CPU 2 CPU 3
1400 ======================= ======================= =======================
1401 { X = 0, Y = 0 }
1402 STORE X=1 r1=LOAD X (reads 1) LOAD Y (reads 1)
1403 <data dependency> <read barrier>
1404 STORE Y=r1 LOAD X (reads 0)
1405
1406 This substitution allows non-multicopy atomicity to run rampant: in
1407 this example, it is perfectly legal for CPU 2's load from X to return 1,
1408 CPU 3's load from Y to return 1, and its load from X to return 0.
1409
1410 The key point is that although CPU 2's data dependency orders its load
1411 and store, it does not guarantee to order CPU 1's store. Thus, if this
1412 example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
1413 store buffer or a level of cache, CPU 2 might have early access to CPU 1's
1414 writes. General barriers are therefore required to ensure that all CPUs
1415 agree on the combined order of multiple accesses.
1416
1417 General barriers can compensate not only for non-multicopy atomicity,
1418 but can also generate additional ordering that can ensure that -all-
1419 CPUs will perceive the same order of -all- operations. In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
1421 which means that only those CPUs on the chain are guaranteed to agree
1422 on the combined order of the accesses. For example, switching to C code
1423 in deference to the ghost of Herman Hollerith:
1424
1425 int u, v, x, y, z;
1426
1427 void cpu0(void)
1428 {
1429 r0 = smp_load_acquire(&x);
1430 WRITE_ONCE(u, 1);
1431 smp_store_release(&y, 1);
1432 }
1433
1434 void cpu1(void)
1435 {
1436 r1 = smp_load_acquire(&y);
1437 r4 = READ_ONCE(v);
1438 r5 = READ_ONCE(u);
1439 smp_store_release(&z, 1);
1440 }
1441
1442 void cpu2(void)
1443 {
1444 r2 = smp_load_acquire(&z);
1445 smp_store_release(&x, 1);
1446 }
1447
1448 void cpu3(void)
1449 {
1450 WRITE_ONCE(v, 1);
1451 smp_mb();
1452 r3 = READ_ONCE(u);
1453 }
1454
1455 Because cpu0(), cpu1(), and cpu2() participate in a chain of
1456 smp_store_release()/smp_load_acquire() pairs, the following outcome
1457 is prohibited:
1458
1459 r0 == 1 && r1 == 1 && r2 == 1
1460
1461 Furthermore, because of the release-acquire relationship between cpu0()
1462 and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1463 outcome is prohibited:
1464
1465 r1 == 1 && r5 == 0
1466
1467 However, the ordering provided by a release-acquire chain is local
1468 to the CPUs participating in that chain and does not apply to cpu3(),
1469 at least aside from stores. Therefore, the following outcome is possible:
1470
1471 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1472
1473 As an aside, the following outcome is also possible:
1474
1475 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1476
1477 Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1478 writes in order, CPUs not involved in the release-acquire chain might
1479 well disagree on the order. This disagreement stems from the fact that
1480 the weak memory-barrier instructions used to implement smp_load_acquire()
1481 and smp_store_release() are not required to order prior stores against
1482 subsequent loads in all cases. This means that cpu3() can see cpu0()'s
1483 store to u as happening -after- cpu1()'s load from v, even though
1484 both cpu0() and cpu1() agree that these two operations occurred in the
1485 intended order.
1486
1487 However, please keep in mind that smp_load_acquire() is not magic.
1488 In particular, it simply reads from its argument with ordering. It does
1489 -not- ensure that any particular value will be read. Therefore, the
1490 following outcome is possible:
1491
1492 r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1493
1494 Note that this outcome can happen even on a mythical sequentially
1495 consistent system where nothing is ever reordered.
1496
1497 To reiterate, if your code requires full ordering of all operations,
1498 use general barriers throughout.
1499
1500
1501 ========================
1502 EXPLICIT KERNEL BARRIERS
1503 ========================
1504
1505 The Linux kernel has a variety of different barriers that act at different
1506 levels:
1507
1508 (*) Compiler barrier.
1509
1510 (*) CPU memory barriers.
1511
1512
1513 COMPILER BARRIER
1514 ----------------
1515
1516 The Linux kernel has an explicit compiler barrier function that prevents the
1517 compiler from moving the memory accesses either side of it to the other side:
1518
1519 barrier();
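
On GCC-compatible compilers, barrier() conventionally expands to nothing
more than an empty asm statement with a "memory" clobber, roughly:

	#define barrier() __asm__ __volatile__("" : : : "memory")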
1520
1521 This is a general barrier -- there are no read-read or write-write
1522 variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
1523 thought of as weak forms of barrier() that affect only the specific
1524 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1525
1526 The barrier() function has the following effects:
1527
1528 (*) Prevents the compiler from reordering accesses following the
1529 barrier() to precede any accesses preceding the barrier().
1530 One example use for this property is to ease communication between
1531 interrupt-handler code and the code that was interrupted.
1532
1533 (*) Within a loop, forces the compiler to load the variables used
1534 in that loop's conditional on each pass through that loop.
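
For example, in this hypothetical polling loop, the barrier() forces the
compiler to reload the shared flag 'need_to_stop' on each pass instead of
hoisting the load out of the loop:

	while (!need_to_stop)
		barrier();	/* need_to_stop must be reloaded */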
1535
1536 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1537 optimizations that, while perfectly safe in single-threaded code, can
1538 be fatal in concurrent code. Here are some examples of these sorts
1539 of optimizations:
1540
1541 (*) The compiler is within its rights to reorder loads and stores
1542 to the same variable, and in some cases, the CPU is within its
1543 rights to reorder loads to the same variable. This means that
1544 the following code:
1545
1546 a[0] = x;
1547 a[1] = x;
1548
1549 Might result in an older value of x stored in a[1] than in a[0].
1550 Prevent both the compiler and the CPU from doing this as follows:
1551
1552 a[0] = READ_ONCE(x);
1553 a[1] = READ_ONCE(x);
1554
1555 In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1556 accesses from multiple CPUs to a single variable.
1557
1558 (*) The compiler is within its rights to merge successive loads from
1559 the same variable. Such merging can cause the compiler to "optimize"
1560 the following code:
1561
1562 while (tmp = a)
1563 do_something_with(tmp);
1564
1565 into the following code, which, although in some sense legitimate
1566 for single-threaded code, is almost certainly not what the developer
1567 intended:
1568
1569 if (tmp = a)
1570 for (;;)
1571 do_something_with(tmp);
1572
1573 Use READ_ONCE() to prevent the compiler from doing this to you:
1574
1575 while (tmp = READ_ONCE(a))
1576 do_something_with(tmp);
1577
1578 (*) The compiler is within its rights to reload a variable, for example,
1579 in cases where high register pressure prevents the compiler from
1580 keeping all data of interest in registers. The compiler might
1581 therefore optimize the variable 'tmp' out of our previous example:
1582
1583 while (tmp = a)
1584 do_something_with(tmp);
1585
1586 This could result in the following code, which is perfectly safe in
1587 single-threaded code, but can be fatal in concurrent code:
1588
1589 while (a)
1590 do_something_with(a);
1591
1592 For example, the optimized version of this code could result in
1593 passing a zero to do_something_with() in the case where the variable
1594 a was modified by some other CPU between the "while" statement and
1595 the call to do_something_with().
1596
1597 Again, use READ_ONCE() to prevent the compiler from doing this:
1598
1599 while (tmp = READ_ONCE(a))
1600 do_something_with(tmp);
1601
1602 Note that if the compiler runs short of registers, it might save
1603 tmp onto the stack. The overhead of this saving and later restoring
1604 is why compilers reload variables. Doing so is perfectly safe for
1605 single-threaded code, so you need to tell the compiler about cases
1606 where it is not safe.
1607
1608 (*) The compiler is within its rights to omit a load entirely if it knows
1609 what the value will be. For example, if the compiler can prove that
1610 the value of variable 'a' is always zero, it can optimize this code:
1611
1612 while (tmp = a)
1613 do_something_with(tmp);
1614
1615 Into this:
1616
1617 do { } while (0);
1618
1619 This transformation is a win for single-threaded code because it
1620 gets rid of a load and a branch. The problem is that the compiler
1621 will carry out its proof assuming that the current CPU is the only
1622 one updating variable 'a'. If variable 'a' is shared, then the
1623 compiler's proof will be erroneous. Use READ_ONCE() to tell the
1624 compiler that it doesn't know as much as it thinks it does:
1625
1626 while (tmp = READ_ONCE(a))
1627 do_something_with(tmp);
1628
1629 But please note that the compiler is also closely watching what you
1630 do with the value after the READ_ONCE(). For example, suppose you
1631 do the following and MAX is a preprocessor macro with the value 1:
1632
1633 while ((tmp = READ_ONCE(a)) % MAX)
1634 do_something_with(tmp);
1635
1636 Then the compiler knows that the result of the "%" operator applied
1637 to MAX will always be zero, again allowing the compiler to optimize
1638 the code into near-nonexistence. (It will still load from the
1639 variable 'a'.)
1640
1641 (*) Similarly, the compiler is within its rights to omit a store entirely
1642 if it knows that the variable already has the value being stored.
1643 Again, the compiler assumes that the current CPU is the only one
1644 storing into the variable, which can cause the compiler to do the
1645 wrong thing for shared variables. For example, suppose you have
1646 the following:
1647
1648 a = 0;
1649 ... Code that does not store to variable a ...
1650 a = 0;
1651
1652 The compiler sees that the value of variable 'a' is already zero, so
1653 it might well omit the second store. This would come as a fatal
1654 surprise if some other CPU might have stored to variable 'a' in the
1655 meantime.
1656
1657 Use WRITE_ONCE() to prevent the compiler from making this sort of
1658 wrong guess:
1659
1660 WRITE_ONCE(a, 0);
1661 ... Code that does not store to variable a ...
1662 WRITE_ONCE(a, 0);
1663
1664 (*) The compiler is within its rights to reorder memory accesses unless
1665 you tell it not to. For example, consider the following interaction
1666 between process-level code and an interrupt handler:
1667
1668 void process_level(void)
1669 {
1670 msg = get_message();
1671 flag = true;
1672 }
1673
1674 void interrupt_handler(void)
1675 {
1676 if (flag)
1677 process_message(msg);
1678 }
1679
1680 There is nothing to prevent the compiler from transforming
process_level() to the following; in fact, this might well be a
1682 win for single-threaded code:
1683
1684 void process_level(void)
1685 {
1686 flag = true;
1687 msg = get_message();
1688 }
1689
If the interrupt occurs between these two statements, then
1691 interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
1692 to prevent this as follows:
1693
1694 void process_level(void)
1695 {
1696 WRITE_ONCE(msg, get_message());
1697 WRITE_ONCE(flag, true);
1698 }
1699
1700 void interrupt_handler(void)
1701 {
1702 if (READ_ONCE(flag))
1703 process_message(READ_ONCE(msg));
1704 }
1705
1706 Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1707 interrupt_handler() are needed if this interrupt handler can itself
1708 be interrupted by something that also accesses 'flag' and 'msg',
1709 for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
1710 and WRITE_ONCE() are not needed in interrupt_handler() other than
1711 for documentation purposes. (Note also that nested interrupts
do not typically occur in modern Linux kernels; in fact, if an
1713 interrupt handler returns with interrupts enabled, you will get a
1714 WARN_ONCE() splat.)
1715
1716 You should assume that the compiler can move READ_ONCE() and
1717 WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1718 barrier(), or similar primitives.
1719
1720 This effect could also be achieved using barrier(), but READ_ONCE()
1721 and WRITE_ONCE() are more selective: With READ_ONCE() and
1722 WRITE_ONCE(), the compiler need only forget the contents of the
1723 indicated memory locations, while with barrier() the compiler must
1724 discard the value of all memory locations that it has currently
1725 cached in any machine registers. Of course, the compiler must also
1726 respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1727 though the CPU of course need not do so.
1728
1729 (*) The compiler is within its rights to invent stores to a variable,
1730 as in the following example:
1731
1732 if (a)
1733 b = a;
1734 else
1735 b = 42;
1736
1737 The compiler might save a branch by optimizing this as follows:
1738
1739 b = 42;
1740 if (a)
1741 b = a;
1742
1743 In single-threaded code, this is not only safe, but also saves
1744 a branch. Unfortunately, in concurrent code, this optimization
1745 could cause some other CPU to see a spurious value of 42 -- even
1746 if variable 'a' was never zero -- when loading variable 'b'.
1747 Use WRITE_ONCE() to prevent this as follows:
1748
1749 if (a)
1750 WRITE_ONCE(b, a);
1751 else
1752 WRITE_ONCE(b, 42);
1753
1754 The compiler can also invent loads. These are usually less
1755 damaging, but they can result in cache-line bouncing and thus in
1756 poor performance and scalability. Use READ_ONCE() to prevent
1757 invented loads.
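
A hypothetical sketch of an invented load (the variables are
illustrative):

	if (flag)
		do_something_with(shared);

Here the compiler might emit the load of 'shared' before the test
of 'flag', for example to hide load latency. If 'flag' is rarely
set, that invented load drags the cache line holding 'shared'
around for no good reason; writing
do_something_with(READ_ONCE(shared)) forbids it.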
1758
1759 (*) For aligned memory locations whose size allows them to be accessed
with a single memory-reference instruction, READ_ONCE() and WRITE_ONCE()
prevent "load tearing" and "store tearing," in which a single large
access is replaced by
1762 multiple smaller accesses. For example, given an architecture having
1763 16-bit store instructions with 7-bit immediate fields, the compiler
1764 might be tempted to use two 16-bit store-immediate instructions to
1765 implement the following 32-bit store:
1766
1767 p = 0x00010002;
1768
1769 Please note that GCC really does use this sort of optimization,
1770 which is not surprising given that it would likely take more
1771 than two instructions to build the constant and then store it.
1772 This optimization can therefore be a win in single-threaded code.
1773 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1774 this optimization in a volatile store. In the absence of such bugs,
1775 use of WRITE_ONCE() prevents store tearing in the following example:
1776
1777 WRITE_ONCE(p, 0x00010002);
1778
1779 Use of packed structures can also result in load and store tearing,
1780 as in this example:
1781
1782 struct __attribute__((__packed__)) foo {
1783 short a;
1784 int b;
1785 short c;
1786 };
1787 struct foo foo1, foo2;
1788 ...
1789
1790 foo2.a = foo1.a;
1791 foo2.b = foo1.b;
1792 foo2.c = foo1.c;
1793
1794 Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1795 volatile markings, the compiler would be well within its rights to
1796 implement these three assignment statements as a pair of 32-bit
1797 loads followed by a pair of 32-bit stores. This would result in
1798 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
1799 and WRITE_ONCE() again prevent tearing in this example:
1800
1801 foo2.a = foo1.a;
1802 WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1803 foo2.c = foo1.c;
1804
1805 All that aside, it is never necessary to use READ_ONCE() and
1806 WRITE_ONCE() on a variable that has been marked volatile. For example,
1807 because 'jiffies' is marked volatile, it is never necessary to
1808 say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.
1811
1812 Please note that these compiler barriers have no direct effect on the CPU,
1813 which may then reorder things however it wishes.
1814
1815
1816 CPU MEMORY BARRIERS
1817 -------------------
1818
The Linux kernel has the following basic CPU memory barriers:
1820
1821 TYPE MANDATORY SMP CONDITIONAL
1822 =============== ======================= ===========================
1823 GENERAL mb() smp_mb()
1824 WRITE wmb() smp_wmb()
1825 READ rmb() smp_rmb()
1826 DATA DEPENDENCY READ_ONCE()
1827
1828
1829 All memory barriers except the data dependency barriers imply a compiler
1830 barrier. Data dependencies do not impose any additional compiler ordering.
1831
1832 Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
1834 the value of b before loading a[b]), however there is no guarantee in
1835 the C specification that the compiler may not speculate the value of b
1836 (eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
1837 tmp = a[b]; ). There is also the problem of a compiler reloading b after
1838 having loaded a[b], thus having a newer copy of b than a[b]. A consensus
1839 has not yet been reached about these problems, however the READ_ONCE()
1840 macro is a good place to start looking.
1841
1842 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1843 systems because it is assumed that a CPU will appear to be self-consistent,
1844 and will order overlapping accesses correctly with respect to itself.
1845 However, see the subsection on "Virtual Machine Guests" below.
1846
1847 [!] Note that SMP memory barriers _must_ be used to control the ordering of
1848 references to shared memory on SMP systems, though the use of locking instead
1849 is sufficient.
1850
1851 Mandatory barriers should not be used to control SMP effects, since mandatory
1852 barriers impose unnecessary overhead on both SMP and UP systems. They may,
1853 however, be used to control MMIO effects on accesses through relaxed memory I/O
1854 windows. These barriers are required even on non-SMP systems as they affect
1855 the order in which memory operations appear to a device by prohibiting both the
1856 compiler and the CPU from reordering them.
1857
1858
1859 There are some more advanced barrier functions:
1860
1861 (*) smp_store_mb(var, value)
1862
1863 This assigns the value to the variable and then inserts a full memory
1864 barrier after it. It isn't guaranteed to insert anything more than a
1865 compiler barrier in a UP compilation.
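
    On SMP builds this is roughly equivalent to the following sketch of
    the generic definition (individual architectures may do better):

	WRITE_ONCE(var, value);
	smp_mb();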
1866
1867
1868 (*) smp_mb__before_atomic();
1869 (*) smp_mb__after_atomic();
1870
1871 These are for use with atomic RMW functions that do not imply memory
barriers, but where the code needs a memory barrier. Examples of atomic
RMW functions that do not imply a memory barrier include add,
subtract, (failed) conditional operations, and the _relaxed functions,
but not atomic_read or atomic_set. A common example where a memory
1876 barrier may be required is when atomic ops are used for reference
1877 counting.
1878
1879 These are also used for atomic RMW bitop functions that do not imply a
1880 memory barrier (such as set_bit and clear_bit).
1881
1882 As an example, consider a piece of code that marks an object as being dead
1883 and then decrements the object's reference count:
1884
1885 obj->dead = 1;
1886 smp_mb__before_atomic();
1887 atomic_dec(&obj->ref_count);
1888
1889 This makes sure that the death mark on the object is perceived to be set
1890 *before* the reference counter is decremented.
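
    A second, hypothetical sketch uses the same helper to release a
    "busy" bit only after prior stores are visible (the names are
    illustrative; a reader would pair this with its own barrier, for
    example test_bit() followed by smp_rmb()):

	obj->data = new_data;
	smp_mb__before_atomic();
	clear_bit(OBJ_BUSY, &obj->flags);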
1891
1892 See Documentation/atomic_{t,bitops}.txt for more information.
1893
1894
1895 (*) dma_wmb();
1896 (*) dma_rmb();
1897 (*) dma_mb();
1898
1899 These are for use with consistent memory to guarantee the ordering
1900 of writes or reads of shared memory accessible to both the CPU and a
1901 DMA capable device.
1902
1903 For example, consider a device driver that shares memory with a device
1904 and uses a descriptor status value to indicate if the descriptor belongs
1905 to the device or the CPU, and a doorbell to notify it when new
1906 descriptors are available:
1907
1908 if (desc->status != DEVICE_OWN) {
1909 /* do not read data until we own descriptor */
1910 dma_rmb();
1911
1912 /* read/modify data */
1913 read_data = desc->data;
1914 desc->data = write_data;
1915
1916 /* flush modifications before status update */
1917 dma_wmb();
1918
1919 /* assign ownership */
1920 desc->status = DEVICE_OWN;
1921
1922 /* notify device of new descriptors */
1923 writel(DESC_NOTIFY, doorbell);
1924 }
1925
The dma_rmb() allows us to guarantee that the device has released ownership
before we read the data from the descriptor, and the dma_wmb() allows
1928 us to guarantee the data is written to the descriptor before the device
1929 can see it now has ownership. The dma_mb() implies both a dma_rmb() and
1930 a dma_wmb(). Note that, when using writel(), a prior wmb() is not needed
1931 to guarantee that the cache coherent memory writes have completed before
1932 writing to the MMIO region. The cheaper writel_relaxed() does not provide
1933 this guarantee and must not be used here.
1934
1935 See the subsection "Kernel I/O barrier effects" for more information on
1936 relaxed I/O accessors and the Documentation/core-api/dma-api.rst file for
1937 more information on consistent memory.
1938
1939 (*) pmem_wmb();
1940
This is for use with persistent memory to ensure that stores for which
modifications are written to persistent storage have reached a platform
durability domain.
1944
For example, after a non-temporal write to a pmem region, we use pmem_wmb()
1946 to ensure that stores have reached a platform durability domain. This ensures
1947 that stores have updated persistent storage before any data access or
1948 data transfer caused by subsequent instructions is initiated. This is
1949 in addition to the ordering done by wmb().
1950
For loads from persistent memory, existing read memory barriers are sufficient
to ensure read ordering.
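
    A hedged sketch of typical usage, assuming 'dst' maps persistent
    memory and that memcpy_flushcache() is available on the
    architecture:

	memcpy_flushcache(dst, src, len);	/* non-temporal stores to pmem */
	pmem_wmb();	/* order them into the durability domain */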
1953
1954 (*) io_stop_wc();
1955
For memory accesses with write-combining attributes (e.g. those returned
by ioremap_wc()), the CPU may wait for prior accesses to be merged with
subsequent ones. io_stop_wc() can be used to prevent the merging of
write-combining memory accesses before this macro with those after it when
such a wait has performance implications.
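
    A minimal sketch (the device registers are illustrative; 'regs' is
    assumed to have been returned by ioremap_wc()):

	writel_relaxed(val0, regs + REG_BATCH_0);
	io_stop_wc();	/* keep this write out of the next merge window */
	writel_relaxed(val1, regs + REG_BATCH_1);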
1961
1962 ===============================
1963 IMPLICIT KERNEL MEMORY BARRIERS
1964 ===============================
1965
Some of the other functions in the Linux kernel imply memory barriers, amongst
1967 which are locking and scheduling functions.
1968
1969 This specification is a _minimum_ guarantee; any particular architecture may
1970 provide more substantial guarantees, but these may not be relied upon outside
1971 of arch specific code.
1972
1973
1974 LOCK ACQUISITION FUNCTIONS
1975 --------------------------
1976
1977 The Linux kernel has a number of locking constructs:
1978
1979 (*) spin locks
1980 (*) R/W spin locks
1981 (*) mutexes
1982 (*) semaphores
1983 (*) R/W semaphores
1984
1985 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1986 for each construct. These operations all imply certain barriers:
1987
1988 (1) ACQUIRE operation implication:
1989
1990 Memory operations issued after the ACQUIRE will be completed after the
1991 ACQUIRE operation has completed.
1992
1993 Memory operations issued before the ACQUIRE may be completed after
1994 the ACQUIRE operation has completed.
1995
1996 (2) RELEASE operation implication:
1997
1998 Memory operations issued before the RELEASE will be completed before the
1999 RELEASE operation has completed.
2000
2001 Memory operations issued after the RELEASE may be completed before the
2002 RELEASE operation has completed.
2003
2004 (3) ACQUIRE vs ACQUIRE implication:
2005
2006 All ACQUIRE operations issued before another ACQUIRE operation will be
2007 completed before that ACQUIRE operation.
2008
2009 (4) ACQUIRE vs RELEASE implication:
2010
2011 All ACQUIRE operations issued before a RELEASE operation will be
2012 completed before the RELEASE operation.
2013
2014 (5) Failed conditional ACQUIRE implication:
2015
2016 Certain locking variants of the ACQUIRE operation may fail, either due to
2017 being unable to get the lock immediately, or due to receiving an unblocked
2018 signal while asleep waiting for the lock to become available. Failed
2019 locks do not imply any sort of barrier.
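
    For example (a sketch; the lock and object are illustrative),
    nothing may be assumed about ordering on the failure path of a
    trylock:

	if (spin_trylock(&obj->lock)) {
		/* ACQUIRE semantics apply on this success path only */
		update(obj);
		spin_unlock(&obj->lock);
	} else {
		/* no barrier here: accesses to 'obj' are unordered
		 * with respect to the current lock holder */
	}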
2020
2021 [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2022 one-way barriers is that the effects of instructions outside of a critical
2023 section may seep into the inside of the critical section.
2024
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the
2027 ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2028 the two accesses can themselves then cross:
2029
2030 *A = a;
2031 ACQUIRE M
2032 RELEASE M
2033 *B = b;
2034
2035 may occur as:
2036
2037 ACQUIRE M, STORE *B, STORE *A, RELEASE M
2038
2039 When the ACQUIRE and RELEASE are a lock acquisition and release,
2040 respectively, this same reordering can occur if the lock's ACQUIRE and
2041 RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock. In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
2044
2045 Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2046 not imply a full memory barrier. Therefore, the CPU's execution of the
2047 critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2048 so that:
2049
2050 *A = a;
2051 RELEASE M
2052 ACQUIRE N
2053 *B = b;
2054
2055 could occur as:
2056
2057 ACQUIRE N, STORE *B, STORE *A, RELEASE M
2058
2059 It might appear that this reordering could introduce a deadlock.
2060 However, this cannot happen because if such a deadlock threatened,
2061 the RELEASE would simply complete, thereby avoiding the deadlock.
2062
2063 Why does this work?
2064
2065 One key point is that we are only talking about the CPU doing
2066 the reordering, not the compiler. If the compiler (or, for
2067 that matter, the developer) switched the operations, deadlock
2068 -could- occur.
2069
2070 But suppose the CPU reordered the operations. In this case,
2071 the unlock precedes the lock in the assembly code. The CPU
2072 simply elected to try executing the later lock operation first.
2073 If there is a deadlock, this lock operation will simply spin (or
2074 try to sleep, but more on that later). The CPU will eventually
2075 execute the unlock operation (which preceded the lock operation
2076 in the assembly code), which will unravel the potential deadlock,
2077 allowing the lock operation to succeed.
2078
2079 But what if the lock is a sleeplock? In that case, the code will
2080 try to enter the scheduler, where it will eventually encounter
2081 a memory barrier, which will force the earlier unlock operation
2082 to complete, again unraveling the deadlock. There might be
2083 a sleep-unlock race, but the locking primitive needs to resolve
2084 such races properly in any case.
2085
2086 Locks and semaphores may not provide any guarantee of ordering on UP compiled
2087 systems, and so cannot be counted on in such a situation to actually achieve
2088 anything at all - especially with respect to I/O accesses - unless combined
2089 with interrupt disabling operations.
2090
2091 See also the section on "Inter-CPU acquiring barrier effects".
2092
2093
2094 As an example, consider the following:
2095
2096 *A = a;
2097 *B = b;
2098 ACQUIRE
2099 *C = c;
2100 *D = d;
2101 RELEASE
2102 *E = e;
2103 *F = f;
2104
2105 The following sequence of events is acceptable:
2106
2107 ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2108
2109 [+] Note that {*F,*A} indicates a combined access.
2110
2111 But none of the following are:
2112
2113 {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
2114 *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
2115 *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
2116 *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E
2117
2118
2119
2120 INTERRUPT DISABLING FUNCTIONS
2121 -----------------------------
2122
2123 Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2124 (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
2127
2128
2129 SLEEP AND WAKE-UP FUNCTIONS
2130 ---------------------------
2131
2132 Sleeping and waking on an event flagged in global data can be viewed as an
2133 interaction between two pieces of data: the task state of the task waiting for
2134 the event and the global data used to indicate the event. To make sure that
2135 these appear to happen in the right order, the primitives to begin the process
2136 of going to sleep, and the primitives to initiate a wake up imply certain
2137 barriers.
2138
2139 Firstly, the sleeper normally follows something like this sequence of events:
2140
2141 for (;;) {
2142 set_current_state(TASK_UNINTERRUPTIBLE);
2143 if (event_indicated)
2144 break;
2145 schedule();
2146 }
2147
2148 A general memory barrier is interpolated automatically by set_current_state()
2149 after it has altered the task state:
2150
2151 CPU 1
2152 ===============================
2153 set_current_state();
2154 smp_store_mb();
2155 STORE current->state
2156 <general barrier>
2157 LOAD event_indicated
2158
2159 set_current_state() may be wrapped by:
2160
2161 prepare_to_wait();
2162 prepare_to_wait_exclusive();
2163
2164 which therefore also imply a general memory barrier after setting the state.
2165 The whole sequence above is available in various canned forms, all of which
2166 interpolate the memory barrier in the right place:
2167
2168 wait_event();
2169 wait_event_interruptible();
2170 wait_event_interruptible_exclusive();
2171 wait_event_interruptible_timeout();
2172 wait_event_killable();
2173 wait_event_timeout();
2174 wait_on_bit();
2175 wait_on_bit_lock();
2176
2177
2178 Secondly, code that performs a wake up normally follows something like this:
2179
2180 event_indicated = 1;
2181 wake_up(&event_wait_queue);
2182
2183 or:
2184
2185 event_indicated = 1;
2186 wake_up_process(event_daemon);
2187
2188 A general memory barrier is executed by wake_up() if it wakes something up.
2189 If it doesn't wake anything up then a memory barrier may or may not be
2190 executed; you must not rely on it. The barrier occurs before the task state
is accessed; in particular, it sits between the STORE to indicate the event
2192 and the STORE to set TASK_RUNNING:
2193
2194 CPU 1 (Sleeper) CPU 2 (Waker)
2195 =============================== ===============================
2196 set_current_state(); STORE event_indicated
2197 smp_store_mb(); wake_up();
2198 STORE current->state ...
2199 <general barrier> <general barrier>
2200 LOAD event_indicated if ((LOAD task->state) & TASK_NORMAL)
2201 STORE task->state
2202
2203 where "task" is the thread being woken up and it equals CPU 1's "current".
2204
2205 To repeat, a general memory barrier is guaranteed to be executed by wake_up()
2206 if something is actually awakened, but otherwise there is no such guarantee.
2207 To see this, consider the following sequence of events, where X and Y are both
2208 initially zero:
2209
2210 CPU 1 CPU 2
2211 =============================== ===============================
2212 X = 1; Y = 1;
2213 smp_mb(); wake_up();
2214 LOAD Y LOAD X
2215
2216 If a wakeup does occur, one (at least) of the two loads must see 1. If, on
2217 the other hand, a wakeup does not occur, both loads might see 0.
2218
2219 wake_up_process() always executes a general memory barrier. The barrier again
2220 occurs before the task state is accessed. In particular, if the wake_up() in
2221 the previous snippet were replaced by a call to wake_up_process() then one of
2222 the two loads would be guaranteed to see 1.
2223
2224 The available waker functions include:
2225
2226 complete();
2227 wake_up();
2228 wake_up_all();
2229 wake_up_bit();
2230 wake_up_interruptible();
2231 wake_up_interruptible_all();
2232 wake_up_interruptible_nr();
2233 wake_up_interruptible_poll();
2234 wake_up_interruptible_sync();
2235 wake_up_interruptible_sync_poll();
2236 wake_up_locked();
2237 wake_up_locked_poll();
2238 wake_up_nr();
2239 wake_up_poll();
2240 wake_up_process();
2241
In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).
2244
2245 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
2246 order multiple stores before the wake-up with respect to loads of those stored
2247 values after the sleeper has called set_current_state(). For instance, if the
2248 sleeper does:
2249
2250 set_current_state(TASK_INTERRUPTIBLE);
2251 if (event_indicated)
2252 break;
2253 __set_current_state(TASK_RUNNING);
2254 do_something(my_data);
2255
2256 and the waker does:
2257
2258 my_data = value;
2259 event_indicated = 1;
2260 wake_up(&event_wait_queue);
2261
2262 there's no guarantee that the change to event_indicated will be perceived by
2263 the sleeper as coming after the change to my_data. In such a circumstance, the
2264 code on both sides must interpolate its own memory barriers between the
2265 separate data accesses. Thus the above sleeper ought to do:
2266
2267 set_current_state(TASK_INTERRUPTIBLE);
2268 if (event_indicated) {
2269 smp_rmb();
2270 do_something(my_data);
2271 }
2272
2273 and the waker should do:
2274
2275 my_data = value;
2276 smp_wmb();
2277 event_indicated = 1;
2278 wake_up(&event_wait_queue);
2279
2280
2281 MISCELLANEOUS FUNCTIONS
2282 -----------------------
2283
2284 Other functions that imply barriers:
2285
2286 (*) schedule() and similar imply full memory barriers.
2287
2288
2289 ===================================
2290 INTER-CPU ACQUIRING BARRIER EFFECTS
2291 ===================================
2292
2293 On SMP systems locking primitives give a more substantial form of barrier: one
2294 that does affect memory access ordering on other CPUs, within the context of
2295 conflict on any particular lock.
2296
2297
2298 ACQUIRES VS MEMORY ACCESSES
2299 ---------------------------
2300
2301 Consider the following: the system has a pair of spinlocks (M) and (Q), and
2302 three CPUs; then should the following sequence of events occur:
2303
2304 CPU 1 CPU 2
2305 =============================== ===============================
2306 WRITE_ONCE(*A, a); WRITE_ONCE(*E, e);
2307 ACQUIRE M ACQUIRE Q
2308 WRITE_ONCE(*B, b); WRITE_ONCE(*F, f);
2309 WRITE_ONCE(*C, c); WRITE_ONCE(*G, g);
2310 RELEASE M RELEASE Q
2311 WRITE_ONCE(*D, d); WRITE_ONCE(*H, h);
2312
2313 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2314 through *H occur in, other than the constraints imposed by the separate locks
2315 on the separate CPUs. It might, for example, see:
2316
2317 *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2318
2319 But it won't see any of:
2320
2321 *B, *C or *D preceding ACQUIRE M
2322 *A, *B or *C following RELEASE M
2323 *F, *G or *H preceding ACQUIRE Q
2324 *E, *F or *G following RELEASE Q
2325
2326
2327 =================================
2328 WHERE ARE MEMORY BARRIERS NEEDED?
2329 =================================
2330
2331 Under normal operation, memory operation reordering is generally not going to
2332 be a problem as a single-threaded linear piece of code will still appear to
2333 work correctly, even if it's in an SMP kernel. There are, however, four
2334 circumstances in which reordering definitely _could_ be a problem:
2335
2336 (*) Interprocessor interaction.
2337
2338 (*) Atomic operations.
2339
2340 (*) Accessing devices.
2341
2342 (*) Interrupts.
2343
2344
2345 INTERPROCESSOR INTERACTION
2346 --------------------------
2347
2348 When there's a system with more than one processor, more than one CPU in the
2349 system may be working on the same data set at the same time. This can cause
2350 synchronisation problems, and the usual way of dealing with them is to use
2351 locks. Locks, however, are quite expensive, and so it may be preferable to
2352 operate without the use of a lock if at all possible. In such a case
2353 operations that affect both CPUs may have to be carefully ordered to prevent
2354 a malfunction.
2355
2356 Consider, for example, the R/W semaphore slow path. Here a waiting process is
2357 queued on the semaphore, by virtue of it having a piece of its stack linked to
2358 the semaphore's list of waiting processes:
2359
2360 struct rw_semaphore {
2361 ...
2362 spinlock_t lock;
2363 struct list_head waiters;
2364 };
2365
2366 struct rwsem_waiter {
2367 struct list_head list;
2368 struct task_struct *task;
2369 };
2370
2371 To wake up a particular waiter, the up_read() or up_write() functions have to:
2372
2373 (1) read the next pointer from this waiter's record to know as to where the
2374 next waiter record is;
2375
2376 (2) read the pointer to the waiter's task structure;
2377
2378 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2379
2380 (4) call wake_up_process() on the task; and
2381
2382 (5) release the reference held on the waiter's task struct.
2383
2384 In other words, it has to perform this sequence of events:
2385
2386 LOAD waiter->list.next;
2387 LOAD waiter->task;
2388 STORE waiter->task;
2389 CALL wakeup
2390 RELEASE task
2391
2392 and if any of these steps occur out of order, then the whole thing may
2393 malfunction.
2394
2395 Once it has queued itself and dropped the semaphore lock, the waiter does not
2396 get the lock again; it instead just waits for its task pointer to be cleared
2397 before proceeding. Since the record is on the waiter's stack, this means that
2398 if the task pointer is cleared _before_ the next pointer in the list is read,
2399 another CPU might start processing the waiter and might clobber the waiter's
2400 stack before the up*() function has a chance to read the next pointer.
2401
2402 Consider then what might happen to the above sequence of events:
2403
2404 CPU 1 CPU 2
2405 =============================== ===============================
2406 down_xxx()
2407 Queue waiter
2408 Sleep
2409 up_yyy()
2410 LOAD waiter->task;
2411 STORE waiter->task;
2412 Woken up by other event
2413 <preempt>
2414 Resume processing
2415 down_xxx() returns
2416 call foo()
2417 foo() clobbers *waiter
2418 </preempt>
2419 LOAD waiter->list.next;
2420 --- OOPS ---
2421
2422 This could be dealt with using the semaphore lock, but then the down_xxx()
2423 function has to needlessly get the spinlock again after being woken up.
2424
2425 The way to deal with this is to insert a general SMP memory barrier:
2426
2427 LOAD waiter->list.next;
2428 LOAD waiter->task;
2429 smp_mb();
2430 STORE waiter->task;
2431 CALL wakeup
2432 RELEASE task
2433
2434 In this case, the barrier makes a guarantee that all memory accesses before the
2435 barrier will appear to happen before all the memory accesses after the barrier
2436 with respect to the other CPUs on the system. It does _not_ guarantee that all
2437 the memory accesses before the barrier will be complete by the time the barrier
2438 instruction itself is complete.
2439
2440 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2441 compiler barrier, thus making sure the compiler emits the instructions in the
2442 right order without actually intervening in the CPU. Since there's only one
2443 CPU, that CPU's dependency ordering logic will take care of everything else.
2444
2445
2446 ATOMIC OPERATIONS
2447 -----------------
2448
2449 While they are technically interprocessor interaction considerations, atomic
2450 operations are noted specially as some of them imply full memory barriers and
2451 some don't, but they're very heavily relied on as a group throughout the
2452 kernel.
2453
2454 See Documentation/atomic_t.txt for more information.
2455
2456
2457 ACCESSING DEVICES
2458 -----------------
2459
2460 Many devices can be memory mapped, and so appear to the CPU as if they're just
2461 a set of memory locations. To control such a device, the driver usually has to
2462 make the right memory accesses in exactly the right order.
2463
2464 However, having a clever CPU or a clever compiler creates a potential problem
2465 in that the carefully sequenced accesses in the driver code won't reach the
2466 device in the requisite order if the CPU or the compiler thinks it is more
2467 efficient to reorder, combine or merge accesses - something that would cause
2468 the device to malfunction.
2469
2470 Inside of the Linux kernel, I/O should be done through the appropriate accessor
2471 routines - such as inb() or writel() - which know how to make such accesses
2472 appropriately sequential. While this, for the most part, renders the explicit
2473 use of memory barriers unnecessary, if the accessor functions are used to refer
2474 to an I/O memory window with relaxed memory access properties, then _mandatory_
2475 memory barriers are required to enforce ordering.
2476
2477 See Documentation/driver-api/device-io.rst for more information.
2478
2479
2480 INTERRUPTS
2481 ----------
2482
2483 A driver may be interrupted by its own interrupt service routine, and thus the
2484 two parts of the driver may interfere with each other's attempts to control or
2485 access the device.
2486
2487 This may be alleviated - at least in part - by disabling local interrupts (a
2488 form of locking), such that the critical operations are all contained within
2489 the interrupt-disabled section in the driver. While the driver's interrupt
2490 routine is executing, the driver's core may not run on the same CPU, and its
2491 interrupt is not permitted to happen again until the current interrupt has been
2492 handled, thus the interrupt handler does not need to lock against that.
2493
However, consider a driver that is talking to an ethernet card that sports an
2495 address register and a data register. If that driver's core talks to the card
2496 under interrupt-disablement and then the driver's interrupt handler is invoked:
2497
2498 LOCAL IRQ DISABLE
2499 writew(ADDR, 3);
2500 writew(DATA, y);
2501 LOCAL IRQ ENABLE
2502 <interrupt>
2503 writew(ADDR, 4);
2504 q = readw(DATA);
2505 </interrupt>
2506
2507 The store to the data register might happen after the second store to the
2508 address register if ordering rules are sufficiently relaxed:
2509
2510 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2511
2512
2513 If ordering rules are relaxed, it must be assumed that accesses done inside an
2514 interrupt disabled section may leak outside of it and may interleave with
2515 accesses performed in an interrupt - and vice versa - unless implicit or
2516 explicit barriers are used.
2517
2518 Normally this won't be a problem because the I/O accesses done inside such
2519 sections will include synchronous load operations on strictly ordered I/O
2520 registers that form implicit I/O barriers.
2521
2522
2523 A similar situation may occur between an interrupt routine and two routines
2524 running on separate CPUs that communicate with each other. If such a case is
2525 likely, then interrupt-disabling locks should be used to guarantee ordering.
2526
2527
2528 ==========================
2529 KERNEL I/O BARRIER EFFECTS
2530 ==========================
2531
2532 Interfacing with peripherals via I/O accesses is deeply architecture and device
2533 specific. Therefore, drivers which are inherently non-portable may rely on
2534 specific behaviours of their target systems in order to achieve synchronization
2535 in the most lightweight manner possible. For drivers intending to be portable
2536 between multiple architectures and bus implementations, the kernel offers a
2537 series of accessor functions that provide various degrees of ordering
2538 guarantees:
2539
2540 (*) readX(), writeX():
2541
2542 The readX() and writeX() MMIO accessors take a pointer to the
2543 peripheral being accessed as an __iomem * parameter. For pointers
2544 mapped with the default I/O attributes (e.g. those returned by
2545 ioremap()), the ordering guarantees are as follows:
2546
2547 1. All readX() and writeX() accesses to the same peripheral are ordered
2548 with respect to each other. This ensures that MMIO register accesses
2549 by the same CPU thread to a particular device will arrive in program
2550 order.
2551
2552 2. A writeX() issued by a CPU thread holding a spinlock is ordered
2553 before a writeX() to the same peripheral from another CPU thread
2554 issued after a later acquisition of the same spinlock. This ensures
2555 that MMIO register writes to a particular device issued while holding
2556 a spinlock will arrive in an order consistent with acquisitions of
2557 the lock.
2558
2559 3. A writeX() by a CPU thread to the peripheral will first wait for the
2560 completion of all prior writes to memory either issued by, or
2561 propagated to, the same thread. This ensures that writes by the CPU
2562 to an outbound DMA buffer allocated by dma_alloc_coherent() will be
2563 visible to a DMA engine when the CPU writes to its MMIO control
2564 register to trigger the transfer.
2565
2566 4. A readX() by a CPU thread from the peripheral will complete before
2567 any subsequent reads from memory by the same thread can begin. This
2568 ensures that reads by the CPU from an incoming DMA buffer allocated
2569 by dma_alloc_coherent() will not see stale data after reading from
2570 the DMA engine's MMIO status register to establish that the DMA
2571 transfer has completed.
2572
2573 5. A readX() by a CPU thread from the peripheral will complete before
2574 any subsequent delay() loop can begin execution on the same thread.
2575 This ensures that two MMIO register writes by the CPU to a peripheral
2576 will arrive at least 1us apart if the first write is immediately read
2577 back with readX() and udelay(1) is called prior to the second
2578 writeX():
2579
2580 writel(42, DEVICE_REGISTER_0); // Arrives at the device...
2581 readl(DEVICE_REGISTER_0);
2582 udelay(1);
2583 writel(42, DEVICE_REGISTER_1); // ...at least 1us before this.
2584
2585 The ordering properties of __iomem pointers obtained with non-default
2586 attributes (e.g. those returned by ioremap_wc()) are specific to the
2587 underlying architecture and therefore the guarantees listed above cannot
2588 generally be relied upon for accesses to these types of mappings.
2589
2590 (*) readX_relaxed(), writeX_relaxed():
2591
2592 These are similar to readX() and writeX(), but provide weaker memory
2593 ordering guarantees. Specifically, they do not guarantee ordering with
2594 respect to locking, normal memory accesses or delay() loops (i.e.
2595 bullets 2-5 above) but they are still guaranteed to be ordered with
2596 respect to other accesses from the same CPU thread to the same
2597 peripheral when operating on __iomem pointers mapped with the default
2598 I/O attributes.
2599
2600 (*) readsX(), writesX():
2601
2602 The readsX() and writesX() MMIO accessors are designed for accessing
2603 register-based, memory-mapped FIFOs residing on peripherals that are not
2604 capable of performing DMA. Consequently, they provide only the ordering
2605 guarantees of readX_relaxed() and writeX_relaxed(), as documented above.
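
     For instance (a sketch in which the device and its register layout
     are assumptions), draining a 32-bit memory-mapped data FIFO:

	readsl(port->membase + FIFO_DATA, buf, words);

	/* Only readX_relaxed()-style ordering is provided, so any
	 * ordering against normal memory accesses must be supplied
	 * separately, e.g. by an explicit barrier. */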
2606
2607 (*) inX(), outX():
2608
2609 The inX() and outX() accessors are intended to access legacy port-mapped
2610 I/O peripherals, which may require special instructions on some
2611 architectures (notably x86). The port number of the peripheral being
2612 accessed is passed as an argument.
2613
2614 Since many CPU architectures ultimately access these peripherals via an
2615 internal virtual memory mapping, the portable ordering guarantees
2616 provided by inX() and outX() are the same as those provided by readX()
2617 and writeX() respectively when accessing a mapping with the default I/O
2618 attributes.
2619
2620 Device drivers may expect outX() to emit a non-posted write transaction
2621 that waits for a completion response from the I/O peripheral before
2622 returning. This is not guaranteed by all architectures and is therefore
2623 not part of the portable ordering semantics.
2624
2625 (*) insX(), outsX():
2626
2627 As above, the insX() and outsX() accessors provide the same ordering
2628 guarantees as readsX() and writesX() respectively when accessing a
2629 mapping with the default I/O attributes.
2630
2631 (*) ioreadX(), iowriteX():
2632
2633 These will perform appropriately for the type of access they're actually
2634 doing, be it inX()/outX() or readX()/writeX().
2635
2636 With the exception of the string accessors (insX(), outsX(), readsX() and
2637 writesX()), all of the above assume that the underlying peripheral is
2638 little-endian and will therefore perform byte-swapping operations on big-endian
2639 architectures.
2640
2641
2642 ========================================
2643 ASSUMED MINIMUM EXECUTION ORDERING MODEL
2644 ========================================
2645
2646 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2647 maintain the appearance of program causality with respect to itself. Some CPUs
2648 (such as i386 or x86_64) are more constrained than others (such as powerpc or
2649 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2650 of arch-specific code.
2651
2652 This means that it must be considered that the CPU will execute its instruction
2653 stream in any order it feels like - or even in parallel - provided that if an
2654 instruction in the stream depends on an earlier instruction, then that
2655 earlier instruction must be sufficiently complete[*] before the later
2656 instruction may proceed; in other words: provided that the appearance of
2657 causality is maintained.
2658
2659 [*] Some instructions have more than one effect - such as changing the
2660 condition codes, changing registers or changing memory - and different
2661 instructions may depend on different effects.
2662
2663 A CPU may also discard any instruction sequence that winds up having no
2664 ultimate effect. For example, if two adjacent instructions both load an
2665 immediate value into the same register, the first may be discarded.
2666
2667
Similarly, it has to be assumed that the compiler might reorder the instruction
2669 stream in any way it sees fit, again provided the appearance of causality is
2670 maintained.
2671
2672
2673 ============================
2674 THE EFFECTS OF THE CPU CACHE
2675 ============================
2676
2677 The way cached memory operations are perceived across the system is affected to
2678 a certain extent by the caches that lie between CPUs and memory, and by the
2679 memory coherence system that maintains the consistency of state in the system.
2680
2681 As far as the way a CPU interacts with another part of the system through the
2682 caches goes, the memory system has to include the CPU's caches, and memory
2683 barriers for the most part act at the interface between the CPU and its cache
2684 (memory barriers logically act on the dotted line in the following diagram):
2685
2686 <--- CPU ---> : <----------- Memory ----------->
2687 :
2688 +--------+ +--------+ : +--------+ +-----------+
2689 | | | | : | | | | +--------+
2690 | CPU | | Memory | : | CPU | | | | |
2691 | Core |--->| Access |----->| Cache |<-->| | | |
2692 | | | Queue | : | | | |--->| Memory |
2693 | | | | : | | | | | |
2694 +--------+ +--------+ : +--------+ | | | |
2695 : | Cache | +--------+
2696 : | Coherency |
2697 : | Mechanism | +--------+
2698 +--------+ +--------+ : +--------+ | | | |
2699 | | | | : | | | | | |
2700 | CPU | | Memory | : | CPU | | |--->| Device |
2701 | Core |--->| Access |----->| Cache |<-->| | | |
2702 | | | Queue | : | | | | | |
2703 | | | | : | | | | +--------+
2704 +--------+ +--------+ : +--------+ +-----------+
2705 :
2706 :
2707
2708 Although any particular load or store may not actually appear outside of the
2709 CPU that issued it since it may have been satisfied within the CPU's own cache,
2710 it will still appear as if the full memory access had taken place as far as the
2711 other CPUs are concerned since the cache coherency mechanisms will migrate the
2712 cacheline over to the accessing CPU and propagate the effects upon conflict.
2713
2714 The CPU core may execute instructions in any order it deems fit, provided the
2715 expected program causality appears to be maintained. Some of the instructions
2716 generate load and store operations which then go into the queue of memory
2717 accesses to be performed. The core may place these in the queue in any order
2718 it wishes, and continue execution until it is forced to wait for an instruction
2719 to complete.
2720
2721 What memory barriers are concerned with is controlling the order in which
2722 accesses cross from the CPU side of things to the memory side of things, and
2723 the order in which the effects are perceived to happen by the other observers
2724 in the system.
2725
2726 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2727 their own loads and stores as if they had happened in program order.
2728
2729 [!] MMIO or other device accesses may bypass the cache system. This depends on
2730 the properties of the memory window through which devices are accessed and/or
2731 the use of any special device communication instructions the CPU may have.
2732
2733
2734 CACHE COHERENCY VS DMA
2735 ----------------------
2736
2737 Not all systems maintain cache coherency with respect to devices doing DMA. In
2738 such cases, a device attempting DMA may obtain stale data from RAM because
2739 dirty cache lines may be resident in the caches of various CPUs, and may not
2740 have been written back to RAM yet. To deal with this, the appropriate part of
2741 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2742 invalidate them as well).
2743
2744 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2745 cache lines being written back to RAM from a CPU's cache after the device has
2746 installed its own data, or cache lines present in the CPU's cache may simply
2747 obscure the fact that RAM has been updated, until at such time as the cacheline
2748 is discarded from the CPU's cache and reloaded. To deal with this, the
2749 appropriate part of the kernel must invalidate the overlapping bits of the
2750 cache on each CPU.
2751
2752 See Documentation/core-api/cachetlb.rst for more information on cache management.
2753
2754
2755 CACHE COHERENCY VS MMIO
2756 -----------------------
2757
2758 Memory mapped I/O usually takes place through memory locations that are part of
2759 a window in the CPU's memory space that has different properties assigned than
2760 the usual RAM directed window.
2761
2762 Amongst these properties is usually the fact that such accesses bypass the
2763 caching entirely and go directly to the device buses. This means MMIO accesses
2764 may, in effect, overtake accesses to cached memory that were emitted earlier.
2765 A memory barrier isn't sufficient in such a case, but rather the cache must be
2766 flushed between the cached memory write and the MMIO access if the two are in
2767 any way dependent.
2768
2769
2770 =========================
2771 THE THINGS CPUS GET UP TO
2772 =========================
2773
2774 A programmer might take it for granted that the CPU will perform memory
2775 operations in exactly the order specified, so that if the CPU is, for example,
2776 given the following piece of code to execute:
2777
2778 a = READ_ONCE(*A);
2779 WRITE_ONCE(*B, b);
2780 c = READ_ONCE(*C);
2781 d = READ_ONCE(*D);
2782 WRITE_ONCE(*E, e);
2783
2784 they would then expect that the CPU will complete the memory operation for each
2785 instruction before moving on to the next one, leading to a definite sequence of
2786 operations as seen by external observers in the system:
2787
2788 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2789
2790
2791 Reality is, of course, much messier. With many CPUs and compilers, the above
2792 assumption doesn't hold because:
2793
2794 (*) loads are more likely to need to be completed immediately to permit
2795 execution progress, whereas stores can often be deferred without a
2796 problem;
2797
2798 (*) loads may be done speculatively, and the result discarded should it prove
2799 to have been unnecessary;
2800
2801 (*) loads may be done speculatively, leading to the result having been fetched
2802 at the wrong time in the expected sequence of events;
2803
2804 (*) the order of the memory accesses may be rearranged to promote better use
2805 of the CPU buses and caches;
2806
2807 (*) loads and stores may be combined to improve performance when talking to
2808 memory or I/O hardware that can do batched accesses of adjacent locations,
2809 thus cutting down on transaction setup costs (memory and PCI devices may
2810 both be able to do this); and
2811
2812 (*) the CPU's data cache may affect the ordering, and while cache-coherency
2813 mechanisms may alleviate this - once the store has actually hit the cache
2814 - there's no guarantee that the coherency management will be propagated in
2815 order to other CPUs.
2816
2817 So what another CPU, say, might actually observe from the above piece of code
2818 is:
2819
2820 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2821
2822 (Where "LOAD {*C,*D}" is a combined load)
2823
2824
2825 However, it is guaranteed that a CPU will be self-consistent: it will see its
2826 _own_ accesses appear to be correctly ordered, without the need for a memory
2827 barrier. For instance with the following code:
2828
2829 U = READ_ONCE(*A);
2830 WRITE_ONCE(*A, V);
2831 WRITE_ONCE(*A, W);
2832 X = READ_ONCE(*A);
2833 WRITE_ONCE(*A, Y);
2834 Z = READ_ONCE(*A);
2835
2836 and assuming no intervention by an external influence, it can be assumed that
2837 the final result will appear to be:
2838
2839 U == the original value of *A
2840 X == W
2841 Z == Y
2842 *A == Y
2843
2844 The code above may cause the CPU to generate the full sequence of memory
2845 accesses:
2846
2847 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2848
2849 in that order, but, without intervention, the sequence may have almost any
2850 combination of elements combined or discarded, provided the program's view
2851 of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE()
2852 are -not- optional in the above example, as there are architectures
2853 where a given CPU might reorder successive loads to the same location.
2854 On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
2855 necessary to prevent this, for example, on Itanium the volatile casts
2856 used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
2857 and st.rel instructions (respectively) that prevent such reordering.
2858
2859 The compiler may also combine, discard or defer elements of the sequence before
2860 the CPU even sees them.
2861
2862 For instance:
2863
2864 *A = V;
2865 *A = W;
2866
2867 may be reduced to:
2868
2869 *A = W;
2870
since, without either a write barrier or a WRITE_ONCE(), it can be
2872 assumed that the effect of the storage of V to *A is lost. Similarly:
2873
2874 *A = Y;
2875 Z = *A;
2876
may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
2878 reduced to:
2879
2880 *A = Y;
2881 Z = Y;
2882
and the LOAD operation never appears outside of the CPU.
2884
2885
2886 AND THEN THERE'S THE ALPHA
2887 --------------------------
2888
2889 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
2890 some versions of the Alpha CPU have a split data cache, permitting them to have
2891 two semantically-related cache lines updated at separate times. This is where
2892 the data dependency barrier really becomes necessary as this synchronises both
2893 caches with the memory coherence system, thus making it seem like pointer
2894 changes vs new data occur in the right order.
2895
2896 The Alpha defines the Linux kernel's memory model, although as of v4.15
2897 the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly
2898 reduced its impact on the memory model.
2899
2900
2901 VIRTUAL MACHINE GUESTS
2902 ----------------------
2903
2904 Guests running within virtual machines might be affected by SMP effects even if
2905 the guest itself is compiled without SMP support. This is an artifact of
2906 interfacing with an SMP host while running an UP kernel. Using mandatory
2907 barriers for this use-case would be possible but is often suboptimal.
2908
2909 To handle this case optimally, low-level virt_mb() etc macros are available.
2910 These have the same effect as smp_mb() etc when SMP is enabled, but generate
2911 identical code for SMP and non-SMP systems. For example, virtual machine guests
2912 should use virt_mb() rather than smp_mb() when synchronizing against a
2913 (possibly SMP) host.
2914
2915 These are equivalent to smp_mb() etc counterparts in all other respects,
2916 in particular, they do not control MMIO effects: to control
2917 MMIO effects, use mandatory barriers.
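
As a hedged sketch, consider a guest publishing a descriptor on a ring
shared with a (possibly SMP) host; the field names are illustrative,
loosely following virtio-style rings:

	ring->desc[idx] = request;		/* fill the descriptor... */
	virt_wmb();				/* ...then publish it */
	WRITE_ONCE(ring->avail_idx, idx + 1);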
2918
2919
2920 ============
2921 EXAMPLE USES
2922 ============
2923
2924 CIRCULAR BUFFERS
2925 ----------------
2926
2927 Memory barriers can be used to implement circular buffering without the need
2928 of a lock to serialise the producer with the consumer. See:
2929
2930 Documentation/core-api/circular-buffers.rst
2931
2932 for details.
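
As a taste of what that document covers, here is a minimal lockless
single-producer/single-consumer sketch (the structure and sizing are
illustrative; the real helpers CIRC_SPACE() and CIRC_CNT() live in
include/linux/circ_buf.h):

	/* producer */
	unsigned long head = buf->head;
	unsigned long tail = READ_ONCE(buf->tail);

	if (CIRC_SPACE(head, tail, SIZE) >= 1) {
		buf->item[head & (SIZE - 1)] = value;
		smp_store_release(&buf->head, head + 1);	/* publish */
	}

	/* consumer */
	unsigned long head = smp_load_acquire(&buf->head);
	unsigned long tail = buf->tail;

	if (CIRC_CNT(head, tail, SIZE) >= 1) {
		value = buf->item[tail & (SIZE - 1)];
		smp_store_release(&buf->tail, tail + 1);	/* consume */
	}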
2933
2934
2935 ==========
2936 REFERENCES
2937 ==========
2938
2939 Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
2940 Digital Press)
2941 Chapter 5.2: Physical Address Space Characteristics
2942 Chapter 5.4: Caches and Write Buffers
2943 Chapter 5.5: Data Sharing
2944 Chapter 5.6: Read/Write Ordering
2945
2946 AMD64 Architecture Programmer's Manual Volume 2: System Programming
2947 Chapter 7.1: Memory-Access Ordering
2948 Chapter 7.4: Buffering and Combining Memory Writes
2949
2950 ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile)
2951 Chapter B2: The AArch64 Application Level Memory Model
2952
2953 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2954 System Programming Guide
2955 Chapter 7.1: Locked Atomic Operations
2956 Chapter 7.2: Memory Ordering
2957 Chapter 7.4: Serializing Instructions
2958
2959 The SPARC Architecture Manual, Version 9
2960 Chapter 8: Memory Models
2961 Appendix D: Formal Specification of the Memory Models
2962 Appendix J: Programming with the Memory Models
2963
2964 Storage in the PowerPC (Stone and Fitzgerald)
2965
2966 UltraSPARC Programmer Reference Manual
2967 Chapter 5: Memory Accesses and Cacheability
2968 Chapter 15: Sparc-V9 Memory Models
2969
2970 UltraSPARC III Cu User's Manual
2971 Chapter 9: Memory Models
2972
2973 UltraSPARC IIIi Processor User's Manual
2974 Chapter 8: Memory Models
2975
2976 UltraSPARC Architecture 2005
2977 Chapter 9: Memory
2978 Appendix D: Formal Specifications of the Memory Models
2979
2980 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
2981 Chapter 8: Memory Models
2982 Appendix F: Caches and Cache Coherency
2983
2984 Solaris Internals, Core Kernel Architecture, p63-68:
2985 Chapter 3.3: Hardware Considerations for Locks and
2986 Synchronization
2987
2988 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
2989 for Kernel Programmers:
2990 Chapter 13: Other Memory Models
2991
2992 Intel Itanium Architecture Software Developer's Manual: Volume 1:
2993 Section 2.6: Speculation
2994 Section 4.4: Memory Access