=======================
Generic Mutex Subsystem
=======================

started by Ingo Molnar <mingo@redhat.com>

updated by Davidlohr Bueso <davidlohr@hp.com>

What are mutexes?
-----------------

In the Linux kernel, mutexes refer to a particular locking primitive
that enforces serialization on shared memory systems, and not only to
the generic term referring to 'mutual exclusion' found in academia
or similar theoretical textbooks. Mutexes are sleeping locks which
behave similarly to binary semaphores, and were introduced in 2006[1]
as an alternative to them. This new data structure provided a number
of advantages, including simpler interfaces, and at that time smaller
code (see Disadvantages).

[1] https://lwn.net/Articles/164802/

Implementation
--------------

Mutexes are represented by 'struct mutex', defined in include/linux/mutex.h
and implemented in kernel/locking/mutex.c. These locks use an atomic variable
(->owner) to keep track of the lock state during its lifetime. The owner field
actually contains a 'struct task_struct *' pointer to the current lock owner,
and is therefore NULL if the lock is not currently owned. Since task_struct
pointers are aligned to at least L1_CACHE_BYTES, the three low bits are used
to store extra state (e.g., whether the waiter list is non-empty). In its most
basic form a mutex also includes a wait-queue and a spinlock that serializes
access to it. Furthermore, CONFIG_MUTEX_SPIN_ON_OWNER=y systems use a spinner
MCS lock (->osq), described below in (ii).
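
For illustration, the ->owner encoding can be sketched as follows. The flag
names and helpers below mirror kernel/locking/mutex.c at the time of writing;
treat the exact values as version-dependent::

   /*
    * The low three bits of ->owner hold extra state; the remaining bits are
    * the owning task pointer (or 0 when unowned).
    */
   #define MUTEX_FLAG_WAITERS      0x01    /* the wait list is non-empty */
   #define MUTEX_FLAG_HANDOFF      0x02    /* unlock should hand the lock to a waiter */
   #define MUTEX_FLAG_PICKUP       0x04    /* a handed-off lock is waiting to be taken */
   #define MUTEX_FLAGS             0x07

   /* Extract the owning task from the combined owner word. */
   static inline struct task_struct *__owner_task(unsigned long owner)
   {
           return (struct task_struct *)(owner & ~MUTEX_FLAGS);
   }

   /* Extract just the state bits. */
   static inline unsigned long __owner_flags(unsigned long owner)
   {
           return owner & MUTEX_FLAGS;
   }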

When acquiring a mutex, there are three possible paths that can be
taken, depending on the state of the lock:

(i) fastpath: tries to atomically acquire the lock by cmpxchg()ing the owner
    with the current task. This only works in the uncontended case (cmpxchg()
    checks against 0UL, so all 3 state bits above have to be 0). If the lock
    is contended it goes to the next possible path. A sketch of this step
    follows the description of the three paths below.

(ii) midpath: aka optimistic spinning, tries to spin for acquisition
     while the lock owner is running and there are no other tasks ready
     to run that have higher priority (need_resched). The rationale is
     that if the lock owner is running, it is likely to release the lock
     soon. The mutex spinners are queued up using an MCS lock so that only
     one spinner can compete for the mutex.

     The MCS lock (proposed by Mellor-Crummey and Scott) is a simple spinlock
     with the desirable properties of being fair and of having each cpu that
     tries to acquire the lock spin on a local variable. It avoids the
     expensive cacheline bouncing that common test-and-set spinlock
     implementations incur. An MCS-like lock is specially tailored for
     optimistic spinning in sleeping-lock implementations. An important
     feature of this customized MCS lock is that spinners are able to exit
     the MCS spinlock queue when they need to reschedule. This further helps
     avoid situations where MCS spinners that need to reschedule would
     continue waiting to spin on the mutex owner, only to go directly to the
     slowpath upon obtaining the MCS lock.

(iii) slowpath: last resort, if the lock is still unable to be acquired,
      the task is added to the wait-queue and sleeps until woken up by the
      unlock path. Under normal circumstances it blocks as TASK_UNINTERRUPTIBLE.
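
For reference, the fastpath in (i) amounts to a single compare-and-exchange on
the owner word. The following is a simplified sketch of that step, assuming
the ->owner encoding shown earlier; the real implementation lives in
__mutex_trylock_fast() and adds architecture- and config-specific details::

   /*
    * Sketch of the fastpath: succeed only when ->owner is 0UL (unowned and
    * all three flag bits clear), installing the current task as owner with
    * acquire ordering. On failure, fall through to the mid/slow paths.
    */
   static __always_inline bool mutex_fastpath_sketch(struct mutex *lock)
   {
           unsigned long curr = (unsigned long)current;
           unsigned long zero = 0UL;

           return atomic_long_try_cmpxchg_acquire(&lock->owner, &zero, curr);
   }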

While formally kernel mutexes are sleepable locks, it is path (ii) that
makes them more practically a hybrid type. By simply not interrupting a
task and busy-waiting for a few cycles instead of immediately sleeping,
the performance of this lock has been seen to significantly improve a
number of workloads. Note that this technique is also used for rw-semaphores.
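
To make the busy-wait in (ii) concrete, the spinning loop has roughly the
shape below. This is a hedged sketch of the logic in mutex_spin_on_owner(),
with the MCS queueing, memory-ordering annotations and cancellation details
omitted; the exact checks vary between kernel versions::

   /*
    * Sketch: keep spinning while the same task still owns the lock, that
    * owner is running on a CPU, and we have not been asked to reschedule.
    * Otherwise give up and fall back to blocking in the slowpath.
    */
   static bool spin_on_owner_sketch(struct mutex *lock, struct task_struct *owner)
   {
           while (__mutex_owner(lock) == owner) {
                   if (!owner->on_cpu || need_resched())
                           return false;
                   cpu_relax();
           }

           /* The owner changed or released the lock; worth retrying. */
           return true;
   }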

Semantics
---------

The mutex subsystem checks and enforces the following rules:

    - Only one task can hold the mutex at a time.
    - Only the owner can unlock the mutex.
    - Multiple unlocks are not permitted.
    - Recursive locking/unlocking is not permitted.
    - A mutex must only be initialized via the API (see below).
    - A task may not exit with a mutex held.
    - Memory areas where held locks reside must not be freed.
    - Held mutexes must not be reinitialized.
    - Mutexes may not be used in hardware or software interrupt
      contexts such as tasklets and timers.

These semantics are fully enforced when CONFIG_DEBUG_MUTEXES is enabled.
In addition, the mutex debugging code also implements a number of other
features that make lock debugging easier and faster:

    - Uses symbolic names of mutexes, whenever they are printed
      in debug output.
    - Point-of-acquire tracking, symbolic lookup of function names,
      list of all locks held in the system, printout of them.
    - Owner tracking.
    - Detects self-recursing locks and prints out all relevant info.
    - Detects multi-task circular deadlocks and prints out all affected
      locks and tasks (and only those tasks).


Interfaces
----------
Statically define the mutex::

   DEFINE_MUTEX(name);

Dynamically initialize the mutex::

   mutex_init(mutex);

Acquire the mutex, uninterruptible::

   void mutex_lock(struct mutex *lock);
   void mutex_lock_nested(struct mutex *lock, unsigned int subclass);
   int  mutex_trylock(struct mutex *lock);

Acquire the mutex, interruptible::

   int mutex_lock_interruptible_nested(struct mutex *lock,
                                       unsigned int subclass);
   int mutex_lock_interruptible(struct mutex *lock);

Acquire the mutex, interruptible, if dec to 0::

   int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
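
atomic_dec_and_mutex_lock() decrements the counter and returns with the mutex
held only when the count reached zero. A common "drop the last reference under
a lock" pattern might look like the hedged sketch below; 'struct cache', its
fields and cache_list_lock are hypothetical names used for illustration::

   #include <linux/atomic.h>
   #include <linux/list.h>
   #include <linux/mutex.h>
   #include <linux/slab.h>

   struct cache {
           atomic_t refcount;
           struct list_head node;          /* linked on a global cache list */
   };

   static DEFINE_MUTEX(cache_list_lock);

   /* Sketch: drop one reference; unlink and free on the last one. */
   static void cache_put(struct cache *c)
   {
           if (!atomic_dec_and_mutex_lock(&c->refcount, &cache_list_lock))
                   return;                 /* not the last reference, mutex not taken */

           /* Last reference: the mutex is held here, so unlinking is safe. */
           list_del(&c->node);
           mutex_unlock(&cache_list_lock);
           kfree(c);
   }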

Unlock the mutex::

   void mutex_unlock(struct mutex *lock);

Test if the mutex is taken::

   int mutex_is_locked(struct mutex *lock);
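
Putting the interfaces together, a minimal usage sketch might look like the
following; the data and function names are hypothetical, and the error value
simply propagates the -EINTR returned by the interruptible lock when a signal
is pending::

   #include <linux/mutex.h>

   static DEFINE_MUTEX(counter_mutex);    /* statically initialized */
   static int shared_counter;             /* protected by counter_mutex */

   /* Sleep uninterruptibly until the mutex is ours. */
   static void counter_inc(void)
   {
           mutex_lock(&counter_mutex);
           shared_counter++;
           mutex_unlock(&counter_mutex);   /* must be released by the same task */
   }

   /* Variant that can be interrupted by a signal. */
   static int counter_inc_interruptible(void)
   {
           int ret = mutex_lock_interruptible(&counter_mutex);

           if (ret)
                   return ret;             /* signal received, lock not held */

           shared_counter++;
           mutex_unlock(&counter_mutex);
           return 0;
   }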

Disadvantages
-------------

Unlike its original design and purpose, 'struct mutex' is among the largest
locks in the kernel. E.g. on x86-64 it is 32 bytes, whereas 'struct semaphore'
is 24 bytes and rw_semaphore is 40 bytes. Larger structure sizes mean more CPU
cache and memory footprint.
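
These sizes also depend on the architecture and on debug options such as
CONFIG_DEBUG_MUTEXES and CONFIG_DEBUG_LOCK_ALLOC; one hedged way to check them
for a given configuration is to print them from any convenient kernel code
path, for example::

   #include <linux/mutex.h>
   #include <linux/printk.h>
   #include <linux/rwsem.h>
   #include <linux/semaphore.h>

   /* Sketch: report the lock sizes for the running configuration. */
   static void report_lock_sizes(void)
   {
           pr_info("struct mutex: %zu bytes\n", sizeof(struct mutex));
           pr_info("struct semaphore: %zu bytes\n", sizeof(struct semaphore));
           pr_info("struct rw_semaphore: %zu bytes\n", sizeof(struct rw_semaphore));
   }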

When to use mutexes
-------------------

Unless the strict semantics of mutexes are unsuitable and/or the critical
region prevents the lock from being shared, always prefer them to any other
locking primitive.