MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is "normal" as in accesses that do
not use read-modify-write atomic operations.  It also describes how to
document these accesses, both with comments and with special assertions
processed by the Kernel Concurrency Sanitizer (KCSAN).  This discussion
builds on an earlier LWN article [1].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.      Plain C-language accesses (unmarked), for example, "a = b;"

2.      Data-race marking, for example, "data_race(a = b);"

3.      READ_ONCE(), for example, "a = READ_ONCE(b);"
        The various forms of atomic_read() also fit in here.

4.      WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
        The various forms of atomic_set() also fit in here.


These may be used in combination, as shown in this admittedly improbable
example:

        WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [2].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.  It is permissible to combine #2 and #3, for example,
data_race(READ_ONCE(a)), which will both restrict compiler optimizations
and disable KCSAN diagnostics.
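
For example, here is a sketch of the load-fusing hazard discussed in [2],
assuming a hypothetical need_to_stop flag that is set by some other CPU
and a hypothetical do_something_quickly() helper:

        int need_to_stop;

        /*
         * BUGGY: the compiler may fuse these plain loads into a single
         * load hoisted out of the loop, so the loop might never exit.
         */
        while (!need_to_stop)
                do_something_quickly();

        /* Safe: READ_ONCE() forces a fresh load on each iteration. */
        while (!READ_ONCE(need_to_stop))
                do_something_quickly();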

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE(), will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.      Data-racy loads from shared variables whose values are used only
        for diagnostic purposes.

2.      Data-racy reads whose values are checked against marked reload.

3.      Reads whose values feed into error-tolerant heuristics.

4.      Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.
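
For example, here is a minimal sketch of such a diagnostic read, assuming
a hypothetical counter "nfoo_total" whose non-diagnostic accesses are all
lock-protected, sampled from a hypothetical seq_file show function for
/proc-style output:

        int nfoo_total; /* Updated under a lock; sampled racily below. */

        static int foo_stats_show(struct seq_file *m, void *v)
        {
                /* A racy snapshot is good enough for statistics. */
                seq_printf(m, "nfoo_total: %d\n", data_race(nfoo_total));
                return 0;
        }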

If it is necessary to both restrict compiler optimizations and disable
KCSAN diagnostics, use both data_race() and READ_ONCE(), for example,
data_race(READ_ONCE(a)).

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Data-Racy Reads That Are Checked Against Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.

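For example, here is a sketch of that approach, assuming a hypothetical
shared counter "ctr":  the data_race() load merely seeds the loop, and
each failed cmpxchg() supplies the value for the next attempt:

        int ctr;

        void add_to_ctr(int delta)
        {
                int old, newold;

                newold = data_race(ctr); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        /* On failure, cmpxchg() returns the fresh value. */
                        newold = cmpxchg(&ctr, old, old + delta);
                } while (newold != old);
        }
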
In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
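
For example, here is a sketch of such a heuristic, assuming a hypothetical
racily sampled count of busy items that is used to decide whether to defer
cleanup work, and assuming that the caller's loop contains a barrier() or
equivalent that forces the occasional reload:

        int nbusy;              /* Updated under a lock; sampled racily below. */
        int busy_threshold;     /* Tuned via sysfs; see the next section. */

        /* Heuristic only: an occasional bogus value merely mistunes it. */
        bool should_defer_cleanup(void)
        {
                return data_race(nbusy) > data_race(busy_threshold);
        }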

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristics might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
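
For example, continuing the busy_threshold sketch above, the store might
look as follows, with the ->store() handler itself left hypothetical:

        /* Called from a hypothetical sysfs ->store() handler. */
        void set_busy_threshold(int newval)
        {
                data_race(busy_threshold = newval);
        }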

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.      Accesses protected by mutual exclusion, including strict locking
        and sequence locking.

2.      Initialization-time and cleanup-time accesses.  This covers a
        wide variety of situations, including the uniprocessor phase of
        system boot, variables to be used by not-yet-spawned kthreads,
        structures not yet published to reference-counted or RCU-protected
        data structures, and the cleanup side of any of these situations.

3.      Per-CPU variables that are not accessed from other CPUs.

4.      Private per-task variables, including on-stack variables, some
        fields in the task_struct structure, and task-private heap data.

5.      Any other loads for which there is not supposed to be a concurrent
        store to that same variable.

6.      Any other stores for which there should be neither concurrent
        loads nor concurrent stores to that same variable.

        But note that KCSAN makes three explicit exceptions to this rule
        by default, refraining from flagging plain C-language stores:

        a.      No matter what.  You can override this default by building
                with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

        b.      When the store writes the value already contained in
                that variable.  You can override this default by building
                with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

        c.      When one of the stores is in an interrupt handler and
                the other in the interrupted code.  You can override this
                default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.
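
For example, here is a sketch of case #4 above, assuming a hypothetical
task-private structure that only its owning task ever touches, so that
plain accesses are correct and KCSAN remains free to flag any access
that violates this rule:

        struct my_task_scratch {
                int depth;      /* Accessed only by the owning task. */
        };

        int push_level(struct my_task_scratch *mts)
        {
                mts->depth++;           /* Plain read-modify-write. */
                return mts->depth;      /* Plain load. */
        }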


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().
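
For example, a writer that is supposed to be serialized by a lock can
assert its exclusivity as follows.  This is a minimal sketch assuming a
hypothetical foo_lock; the examples below expand on this pattern:

        spin_lock(&foo_lock);
        ASSERT_EXCLUSIVE_WRITER(foo);   /* All writers must hold foo_lock. */
        WRITE_ONCE(foo, newval);
        spin_unlock(&foo_lock);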

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

        int foo;
        DEFINE_RWLOCK(foo_rwlock);

        void update_foo(int newval)
        {
                write_lock(&foo_rwlock);
                foo = newval;
                do_something(newval);
                write_unlock(&foo_rwlock);
        }

        int read_foo(void)
        {
                int ret;

                read_lock(&foo_rwlock);
                do_something_else();
                ret = foo;
                read_unlock(&foo_rwlock);
                return ret;
        }

        void read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", data_race(foo));
        }

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

If it is necessary to suppress compiler optimization and also detect
buggy lockless writes, read_foo_diagnostic() can be updated as follows:

        void read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", data_race(READ_ONCE(foo)));
        }

Alternatively, given that KCSAN is to ignore all accesses in this function,
this function can be marked __no_kcsan and the data_race() can be dropped:

        void __no_kcsan read_foo_diagnostic(void)
        {
                pr_info("Current value of foo: %d\n", READ_ONCE(foo));
        }

However, in order for KCSAN to detect buggy lockless writes, your kernel
must be built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.  If you
need KCSAN to detect such a write even if that write did not change
the value of foo, you also need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.
If you need KCSAN to detect such a write happening in an interrupt handler
running on the same CPU doing the legitimate lock-protected write, you
also need CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these
Kconfig options set properly, KCSAN can be quite helpful, although
it is not necessarily a full replacement for hardware watchpoints.
On the other hand, neither are hardware watchpoints a full replacement
for KCSAN because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

        int foo;
        DEFINE_SPINLOCK(foo_lock);

        void update_foo(int newval)
        {
                spin_lock(&foo_lock);
                WRITE_ONCE(foo, newval);
                ASSERT_EXCLUSIVE_WRITER(foo);
                do_something(newval);
                spin_unlock(&foo_lock);
        }

        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent lockless write.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

        bool global_flag;
        DEFINE_SPINLOCK(global_lock);
        struct foo {
                spinlock_t f_lock;
                int f_data;
        };

        /* All foo structures are in the following array. */
        int nfoo;
        struct foo *foo_array;

        void do_something_locked(struct foo *fp)
        {
                /* This works even if data_race() returns nonsense. */
                if (!data_race(global_flag)) {
                        spin_lock(&fp->f_lock);
                        if (!smp_load_acquire(&global_flag)) {
                                do_something(fp);
                                spin_unlock(&fp->f_lock);
                                return;
                        }
                        spin_unlock(&fp->f_lock);
                }
                spin_lock(&global_lock);
                /* global_lock held, thus global flag cannot be set. */
                spin_lock(&fp->f_lock);
                spin_unlock(&global_lock);
                /*
                 * global_flag might be set here, but begin_global()
                 * will wait for ->f_lock to be released.
                 */
                do_something(fp);
                spin_unlock(&fp->f_lock);
        }

        void begin_global(void)
        {
                int i;

                spin_lock(&global_lock);
                WRITE_ONCE(global_flag, true);
                for (i = 0; i < nfoo; i++) {
                        /*
                         * Wait for pre-existing local locks.  One at
                         * a time to avoid lockdep limitations.
                         */
                        spin_lock(&foo_array[i].f_lock);
                        spin_unlock(&foo_array[i].f_lock);
                }
        }

        void end_global(void)
        {
                smp_store_release(&global_flag, false);
                spin_unlock(&global_lock);
        }

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is now false,
prevents begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

        int foo;

        int update_foo(int newval)
        {
                int ret;

                ret = xchg(&foo, newval);
                do_something(newval);
                return ret;
        }

        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

        int foo;

        void initialize_foo(int initval, int nkthreads)
        {
                int i;

                foo = initval;
                ASSERT_EXCLUSIVE_ACCESS(foo);
                for (i = 0; i < nkthreads; i++)
                        kthread_run(access_foo_concurrently, ...);
        }

        /* Called from access_foo_concurrently(). */
        int update_foo(int newval)
        {
                int ret;

                ret = xchg(&foo, newval);
                do_something(newval);
                return ret;
        }

        /* Also called from access_foo_concurrently(). */
        int read_foo(void)
        {
                do_something_else();
                return READ_ONCE(foo);
        }

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
reads, and the ASSERT_EXCLUSIVE_ACCESS() call further allows KCSAN to
flag buggy concurrent writes, even if:  (1) Those writes are marked or
(2) The kernel was built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

        int foo;

        int update_foo(int newval)
        {
                return xchg(&foo, newval);
        }

        int xor_shift_foo(int shift, int mask)
        {
                int old, new, newold;

                newold = data_race(foo); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        new = (old << shift) ^ mask;
                        newold = cmpxchg(&foo, old, new);
                } while (newold != old);
                return old;
        }

        int read_foo(void)
        {
                return READ_ONCE(foo);
        }

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

        int foo;

        int update_foo(int newval)
        {
                ASSERT_EXCLUSIVE_ACCESS(foo);
                return xchg(&foo, newval);
        }

        int xor_shift_foo(int shift, int mask)
        {
                int old, new, newold;

                newold = data_race(foo); /* Checked by cmpxchg(). */
                do {
                        old = newold;
                        new = (old << shift) ^ mask;
                        ASSERT_EXCLUSIVE_ACCESS(foo);
                        newold = cmpxchg(&foo, old, new);
                } while (newold != old);
                return old;
        }

        int read_foo(void)
        {
                ASSERT_EXCLUSIVE_ACCESS(foo);
                return READ_ONCE(foo);
        }

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.
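
For example, a wrapper macro along the following lines would serve, with
the above ASSERT_EXCLUSIVE_ACCESS() instances replaced by the wrapper
(the build symbol here is purely illustrative):

        #ifdef STRESS_TEST_RACE_COVERAGE /* Hypothetical build symbol. */
        #define STRESS_ASSERT_EXCLUSIVE_ACCESS(x) ASSERT_EXCLUSIVE_ACCESS(x)
        #else
        #define STRESS_ASSERT_EXCLUSIVE_ACCESS(x) do { } while (0)
        #endif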


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/