0001 .. _numa_memory_policy:
0002
0003 ==================
0004 NUMA Memory Policy
0005 ==================
0006
0007 What is NUMA Memory Policy?
0008 ============================
0009
0010 In the Linux kernel, "memory policy" determines from which node the kernel will
0011 allocate memory in a NUMA system or in an emulated NUMA system. Linux has
0012 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
0013 The current memory policy support was added to Linux 2.6 around May 2004. This
0014 document attempts to describe the concepts and APIs of the 2.6 memory policy
0015 support.
0016
0017 Memory policies should not be confused with cpusets
0018 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
0019 which is an administrative mechanism for restricting the nodes from which
0020 memory may be allocated by a set of processes. Memory policies are a
0021 programming interface that a NUMA-aware application can take advantage of. When
0022 both cpusets and policies are applied to a task, the restrictions of the cpuset
take priority. See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>`
0024 below for more details.
0025
0026 Memory Policy Concepts
0027 ======================
0028
0029 Scope of Memory Policies
0030 ------------------------
0031
The Linux kernel supports *scopes* of memory policy, described here from
0033 most general to most specific:
0034
0035 System Default Policy
0036 this policy is "hard coded" into the kernel. It is the policy
0037 that governs all page allocations that aren't controlled by
0038 one of the more specific policy scopes discussed below. When
0039 the system is "up and running", the system default policy will
0040 use "local allocation" described below. However, during boot
0041 up, the system default policy will be set to interleave
0042 allocations across all nodes with "sufficient" memory, so as
0043 not to overload the initial boot node with boot-time
0044 allocations.
0045
0046 Task/Process Policy
0047 this is an optional, per-task policy. When defined for a
0048 specific task, this policy controls all page allocations made
0049 by or on behalf of the task that aren't controlled by a more
0050 specific scope. If a task does not define a task policy, then
0051 all page allocations that would have been controlled by the
0052 task policy "fall back" to the System Default Policy.
0053
0054 The task policy applies to the entire address space of a task. Thus,
0055 it is inheritable, and indeed is inherited, across both fork()
0056 [clone() w/o the CLONE_VM flag] and exec*(). This allows a parent task
0057 to establish the task policy for a child task exec()'d from an
0058 executable image that has no awareness of memory policy. See the
0059 :ref:`Memory Policy APIs <memory_policy_apis>` section,
0060 below, for an overview of the system call
0061 that a task may use to set/change its task/process policy.
0062
0063 In a multi-threaded task, task policies apply only to the thread
0064 [Linux kernel task] that installs the policy and any threads
0065 subsequently created by that thread. Any sibling threads existing
0066 at the time a new task policy is installed retain their current
0067 policy.
0068
0069 A task policy applies only to pages allocated after the policy is
0070 installed. Any pages already faulted in by the task when the task
0071 changes its task policy remain where they were allocated based on
0072 the policy at the time they were allocated.
0073
0074 .. _vma_policy:
0075
0076 VMA Policy
0077 A "VMA" or "Virtual Memory Area" refers to a range of a task's
0078 virtual address space. A task may define a specific policy for a range
0079 of its virtual address space. See the
0080 :ref:`Memory Policy APIs <memory_policy_apis>` section,
0081 below, for an overview of the mbind() system call used to set a VMA
0082 policy.
0083
0084 A VMA policy will govern the allocation of pages that back
0085 this region of the address space. Any regions of the task's
0086 address space that don't have an explicit VMA policy will fall
0087 back to the task policy, which may itself fall back to the
0088 System Default Policy.
0089
0090 VMA policies have a few complicating details:
0091
0092 * VMA policy applies ONLY to anonymous pages. These include
0093 pages allocated for anonymous segments, such as the task
0094 stack and heap, and any regions of the address space
0095 mmap()ed with the MAP_ANONYMOUS flag. If a VMA policy is
0096 applied to a file mapping, it will be ignored if the mapping
0097 used the MAP_SHARED flag. If the file mapping used the
0098 MAP_PRIVATE flag, the VMA policy will only be applied when
0099 an anonymous page is allocated on an attempt to write to the
0100 mapping-- i.e., at Copy-On-Write.
0101
0102 * VMA policies are shared between all tasks that share a
0103 virtual address space--a.k.a. threads--independent of when
0104 the policy is installed; and they are inherited across
0105 fork(). However, because VMA policies refer to a specific
0106 region of a task's address space, and because the address
0107 space is discarded and recreated on exec*(), VMA policies
0108 are NOT inheritable across exec(). Thus, only NUMA-aware
0109 applications may use VMA policies.
0110
0111 * A task may install a new VMA policy on a sub-range of a
0112 previously mmap()ed region. When this happens, Linux splits
0113 the existing virtual memory area into 2 or 3 VMAs, each with
          its own policy.
0115
0116 * By default, VMA policy applies only to pages allocated after
0117 the policy is installed. Any pages already faulted into the
0118 VMA range remain where they were allocated based on the
0119 policy at the time they were allocated. However, since
0120 2.6.16, Linux supports page migration via the mbind() system
0121 call, so that page contents can be moved to match a newly
0122 installed policy.
0123
0124 Shared Policy
0125 Conceptually, shared policies apply to "memory objects" mapped
0126 shared into one or more tasks' distinct address spaces. An
0127 application installs shared policies the same way as VMA
0128 policies--using the mbind() system call specifying a range of
0129 virtual addresses that map the shared object. However, unlike
0130 VMA policies, which can be considered to be an attribute of a
0131 range of a task's address space, shared policies apply
0132 directly to the shared object. Thus, all tasks that attach to
0133 the object share the policy, and all pages allocated for the
0134 shared object, by any task, will obey the shared policy.
0135
0136 As of 2.6.22, only shared memory segments, created by shmget() or
0137 mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy. When shared
0138 policy support was added to Linux, the associated data structures were
0139 added to hugetlbfs shmem segments. At the time, hugetlbfs did not
        support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
0141 shmem segments were never "hooked up" to the shared policy support.
0142 Although hugetlbfs segments now support lazy allocation, their support
0143 for shared policy has not been completed.
0144
        As mentioned above in the :ref:`VMA policies <vma_policy>` section,
0146 allocations of page cache pages for regular files mmap()ed
0147 with MAP_SHARED ignore any VMA policy installed on the virtual
0148 address range backed by the shared file mapping. Rather,
0149 shared page cache pages, including pages backing private
0150 mappings that have not yet been written by the task, follow
0151 task policy, if any, else System Default Policy.
0152
0153 The shared policy infrastructure supports different policies on subset
0154 ranges of the shared object. However, Linux still splits the VMA of
0155 the task that installs the policy for each range of distinct policy.
0156 Thus, different tasks that attach to a shared memory segment can have
0157 different VMA configurations mapping that one shared object. This
0158 can be seen by examining the /proc/<pid>/numa_maps of tasks sharing
0159 a shared memory region, when one task has installed shared policy on
0160 one or more ranges of the region.
0161
0162 Components of Memory Policies
0163 -----------------------------
0164
0165 A NUMA memory policy consists of a "mode", optional mode flags, and
0166 an optional set of nodes. The mode determines the behavior of the
0167 policy, the optional mode flags determine the behavior of the mode,
0168 and the optional set of nodes can be viewed as the arguments to the
0169 policy behavior.
0170
0171 Internally, memory policies are implemented by a reference counted
0172 structure, struct mempolicy. Details of this structure will be
0173 discussed in context, below, as required to explain the behavior.
0174
NUMA memory policy supports the following behavioral modes (a brief
example of selecting a mode follows the list):
0176
0177 Default Mode--MPOL_DEFAULT
0178 This mode is only used in the memory policy APIs. Internally,
0179 MPOL_DEFAULT is converted to the NULL memory policy in all
0180 policy scopes. Any existing non-default policy will simply be
0181 removed when MPOL_DEFAULT is specified. As a result,
0182 MPOL_DEFAULT means "fall back to the next most specific policy
0183 scope."
0184
0185 For example, a NULL or default task policy will fall back to the
0186 system default policy. A NULL or default vma policy will fall
0187 back to the task policy.
0188
0189 When specified in one of the memory policy APIs, the Default mode
0190 does not use the optional set of nodes.
0191
0192 It is an error for the set of nodes specified for this policy to
0193 be non-empty.
0194
0195 MPOL_BIND
0196 This mode specifies that memory must come from the set of
0197 nodes specified by the policy. Memory will be allocated from
0198 the node in the set with sufficient free memory that is
0199 closest to the node where the allocation takes place.
0200
0201 MPOL_PREFERRED
0202 This mode specifies that the allocation should be attempted
0203 from the single node specified in the policy. If that
0204 allocation fails, the kernel will search other nodes, in order
0205 of increasing distance from the preferred node based on
0206 information provided by the platform firmware.
0207
0208 Internally, the Preferred policy uses a single node--the
0209 preferred_node member of struct mempolicy. When the internal
0210 mode flag MPOL_F_LOCAL is set, the preferred_node is ignored
0211 and the policy is interpreted as local allocation. "Local"
0212 allocation policy can be viewed as a Preferred policy that
0213 starts at the node containing the cpu where the allocation
0214 takes place.
0215
0216 It is possible for the user to specify that local allocation
0217 is always preferred by passing an empty nodemask with this
0218 mode. If an empty nodemask is passed, the policy cannot use
0219 the MPOL_F_STATIC_NODES or MPOL_F_RELATIVE_NODES flags
0220 described below.
0221
0222 MPOL_INTERLEAVED
0223 This mode specifies that page allocations be interleaved, on a
0224 page granularity, across the nodes specified in the policy.
0225 This mode also behaves slightly differently, based on the
0226 context where it is used:
0227
0228 For allocation of anonymous pages and shared memory pages,
0229 Interleave mode indexes the set of nodes specified by the
0230 policy using the page offset of the faulting address into the
0231 segment [VMA] containing the address modulo the number of
0232 nodes specified by the policy. It then attempts to allocate a
0233 page, starting at the selected node, as if the node had been
0234 specified by a Preferred policy or had been selected by a
0235 local allocation. That is, allocation will follow the per
0236 node zonelist.
0237
0238 For allocation of page cache pages, Interleave mode indexes
0239 the set of nodes specified by the policy using a node counter
0240 maintained per task. This counter wraps around to the lowest
0241 specified node after it reaches the highest specified node.
0242 This will tend to spread the pages out over the nodes
0243 specified by the policy based on the order in which they are
0244 allocated, rather than based on any page offset into an
0245 address range or file. During system boot up, the temporary
0246 interleaved system default policy works in this mode.
0247
0248 MPOL_PREFERRED_MANY
        This mode specifies that the allocation should preferably be
        satisfied from the nodemask specified in the policy. If there is
        memory pressure on all nodes in the nodemask, the allocation
        can fall back to all existing NUMA nodes. This is effectively
        MPOL_PREFERRED allowed for a mask rather than a single node.
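
As a concrete illustration of how a mode and a nodemask combine, the sketch
below installs an Interleave task policy over nodes 0 and 1 using the
set_mempolicy() call described in the
:ref:`Memory Policy APIs <memory_policy_apis>` section below. This is a
minimal, hedged example: it assumes the <numaif.h> header from the
numactl/libnuma userspace package and a system whose cpuset allows at least
nodes 0 and 1::

    /*
     * Minimal sketch (not kernel code): interleave this task's future
     * page allocations across nodes 0 and 1.
     */
    #include <numaif.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned long nodemask = (1UL << 0) | (1UL << 1); /* nodes 0 and 1 */

        /* maxnode is the number of bits of the mask the kernel should read */
        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask, 8 * sizeof(nodemask))) {
            perror("set_mempolicy");
            return 1;
        }

        /* pages faulted in from now on are interleaved across the two nodes */
        char *buf = malloc(32 * 1024 * 1024);
        if (buf)
            memset(buf, 0, 32 * 1024 * 1024);
        free(buf);
        return 0;
    }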
0254
NUMA memory policy supports the following optional mode flags (a brief
example using one of them follows the list):
0256
0257 MPOL_F_STATIC_NODES
0258 This flag specifies that the nodemask passed by
0259 the user should not be remapped if the task or VMA's set of allowed
0260 nodes changes after the memory policy has been defined.
0261
0262 Without this flag, any time a mempolicy is rebound because of a
0263 change in the set of allowed nodes, the preferred nodemask (Preferred
0264 Many), preferred node (Preferred) or nodemask (Bind, Interleave) is
0265 remapped to the new set of allowed nodes. This may result in nodes
0266 being used that were previously undesired.
0267
0268 With this flag, if the user-specified nodes overlap with the
0269 nodes allowed by the task's cpuset, then the memory policy is
0270 applied to their intersection. If the two sets of nodes do not
0271 overlap, the Default policy is used.
0272
0273 For example, consider a task that is attached to a cpuset with
0274 mems 1-3 that sets an Interleave policy over the same set. If
0275 the cpuset's mems change to 3-5, the Interleave will now occur
0276 over nodes 3, 4, and 5. With this flag, however, since only node
0277 3 is allowed from the user's nodemask, the "interleave" only
0278 occurs over that node. If no nodes from the user's nodemask are
0279 now allowed, the Default behavior is used.
0280
0281 MPOL_F_STATIC_NODES cannot be combined with the
0282 MPOL_F_RELATIVE_NODES flag. It also cannot be used for
0283 MPOL_PREFERRED policies that were created with an empty nodemask
0284 (local allocation).
0285
0286 MPOL_F_RELATIVE_NODES
0287 This flag specifies that the nodemask passed
        by the user will be mapped relative to the task's or VMA's set of
        allowed nodes. The kernel stores the user-passed nodemask, and if
        the set of allowed nodes changes, then that original nodemask will
        be remapped relative to the new set of allowed nodes.
0292
0293 Without this flag (and without MPOL_F_STATIC_NODES), anytime a
0294 mempolicy is rebound because of a change in the set of allowed
0295 nodes, the node (Preferred) or nodemask (Bind, Interleave) is
0296 remapped to the new set of allowed nodes. That remap may not
0297 preserve the relative nature of the user's passed nodemask to its
0298 set of allowed nodes upon successive rebinds: a nodemask of
0299 1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
0300 allowed nodes is restored to its original state.
0301
0302 With this flag, the remap is done so that the node numbers from
0303 the user's passed nodemask are relative to the set of allowed
0304 nodes. In other words, if nodes 0, 2, and 4 are set in the user's
0305 nodemask, the policy will be effected over the first (and in the
0306 Bind or Interleave case, the third and fifth) nodes in the set of
0307 allowed nodes. The nodemask passed by the user represents nodes
0308 relative to task or VMA's set of allowed nodes.
0309
0310 If the user's nodemask includes nodes that are outside the range
0311 of the new set of allowed nodes (for example, node 5 is set in
0312 the user's nodemask when the set of allowed nodes is only 0-3),
0313 then the remap wraps around to the beginning of the nodemask and,
0314 if not already set, sets the node in the mempolicy nodemask.
0315
0316 For example, consider a task that is attached to a cpuset with
0317 mems 2-5 that sets an Interleave policy over the same set with
0318 MPOL_F_RELATIVE_NODES. If the cpuset's mems change to 3-7, the
0319 interleave now occurs over nodes 3,5-7. If the cpuset's mems
0320 then change to 0,2-3,5, then the interleave occurs over nodes
0321 0,2-3,5.
0322
0323 Thanks to the consistent remapping, applications preparing
0324 nodemasks to specify memory policies using this flag should
0325 disregard their current, actual cpuset imposed memory placement
0326 and prepare the nodemask as if they were always located on
0327 memory nodes 0 to N-1, where N is the number of memory nodes the
0328 policy is intended to manage. Let the kernel then remap to the
0329 set of memory nodes allowed by the task's cpuset, as that may
0330 change over time.
0331
0332 MPOL_F_RELATIVE_NODES cannot be combined with the
0333 MPOL_F_STATIC_NODES flag. It also cannot be used for
0334 MPOL_PREFERRED policies that were created with an empty nodemask
0335 (local allocation).
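
To make the relative remapping concrete, the hedged sketch below prepares a
nodemask as if the task's allowed nodes were always numbered 0 to N-1, as
recommended above, and lets the kernel fold it onto the cpuset's actual
allowed nodes. It assumes the <numaif.h> header from the numactl/libnuma
package; the fallback definition of MPOL_F_RELATIVE_NODES mirrors the value
in <linux/mempolicy.h> for older headers::

    /*
     * Hedged sketch: interleave over the first two nodes of whatever
     * set of nodes the task's cpuset currently allows.
     */
    #include <numaif.h>
    #include <stdio.h>

    #ifndef MPOL_F_RELATIVE_NODES
    #define MPOL_F_RELATIVE_NODES (1 << 14)  /* from <linux/mempolicy.h> */
    #endif

    int main(void)
    {
        /* prepare the mask as if the allowed nodes were numbered 0..N-1 */
        unsigned long relmask = (1UL << 0) | (1UL << 1);

        if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_RELATIVE_NODES,
                          &relmask, 8 * sizeof(relmask))) {
            perror("set_mempolicy");
            return 1;
        }
        /*
         * The kernel folds bits 0 and 1 onto the first and second nodes
         * the cpuset actually allows, and re-folds them whenever the
         * cpuset's mems change.
         */
        return 0;
    }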
0336
0337 Memory Policy Reference Counting
0338 ================================
0339
0340 To resolve use/free races, struct mempolicy contains an atomic reference
0341 count field. Internal interfaces, mpol_get()/mpol_put() increment and
0342 decrement this reference count, respectively. mpol_put() will only free
0343 the structure back to the mempolicy kmem cache when the reference count
0344 goes to zero.
0345
0346 When a new memory policy is allocated, its reference count is initialized
0347 to '1', representing the reference held by the task that is installing the
0348 new policy. When a pointer to a memory policy structure is stored in another
0349 structure, another reference is added, as the task's reference will be dropped
0350 on completion of the policy installation.
0351
0352 During run-time "usage" of the policy, we attempt to minimize atomic operations
0353 on the reference count, as this can lead to cache lines bouncing between cpus
0354 and NUMA nodes. "Usage" here means one of the following:
0355
0356 1) querying of the policy, either by the task itself [using the get_mempolicy()
0357 API discussed below] or by another task using the /proc/<pid>/numa_maps
0358 interface.
0359
0360 2) examination of the policy to determine the policy mode and associated node
0361 or node lists, if any, for page allocation. This is considered a "hot
0362 path". Note that for MPOL_BIND, the "usage" extends across the entire
   allocation process, which may sleep during page reclamation, because the
0364 BIND policy nodemask is used, by reference, to filter ineligible nodes.
0365
0366 We can avoid taking an extra reference during the usages listed above as
0367 follows:
0368
0369 1) we never need to get/free the system default policy as this is never
0370 changed nor freed, once the system is up and running.
0371
0372 2) for querying the policy, we do not need to take an extra reference on the
0373 target task's task policy nor vma policies because we always acquire the
0374 task's mm's mmap_lock for read during the query. The set_mempolicy() and
0375 mbind() APIs [see below] always acquire the mmap_lock for write when
0376 installing or replacing task or vma policies. Thus, there is no possibility
0377 of a task or thread freeing a policy while another task or thread is
0378 querying it.
0379
0380 3) Page allocation usage of task or vma policy occurs in the fault path where
   we hold the mmap_lock for read. Again, because replacing the task or vma
0382 policy requires that the mmap_lock be held for write, the policy can't be
0383 freed out from under us while we're using it for page allocation.
0384
0385 4) Shared policies require special consideration. One task can replace a
0386 shared memory policy while another task, with a distinct mmap_lock, is
0387 querying or allocating a page based on the policy. To resolve this
0388 potential race, the shared policy infrastructure adds an extra reference
0389 to the shared policy during lookup while holding a spin lock on the shared
0390 policy management structure. This requires that we drop this extra
0391 reference when we're finished "using" the policy. We must drop the
0392 extra reference on shared policies in the same query/allocation paths
0393 used for non-shared policies. For this reason, shared policies are marked
0394 as such, and the extra reference is dropped "conditionally"--i.e., only
0395 for shared policies.
0396
0397 Because of this extra reference counting, and because we must lookup
0398 shared policies in a tree structure under spinlock, shared policies are
0399 more expensive to use in the page allocation path. This is especially
0400 true for shared policies on shared memory regions shared by tasks running
0401 on different NUMA nodes. This extra overhead can be avoided by always
0402 falling back to task or system default policy for shared memory regions,
0403 or by prefaulting the entire shared memory region into memory and locking
0404 it down. However, this might not be appropriate for all applications.
0405
0406 .. _memory_policy_apis:
0407
0408 Memory Policy APIs
0409 ==================
0410
Linux supports 4 system calls for controlling memory policy. These APIs
0412 always affect only the calling task, the calling task's address space, or
0413 some shared object mapped into the calling task's address space.
0414
0415 .. note::
0416 the headers that define these APIs and the parameter data types for
0417 user space applications reside in a package that is not part of the
0418 Linux kernel. The kernel system call interfaces, with the 'sys\_'
0419 prefix, are defined in <linux/syscalls.h>; the mode and flag
0420 definitions are defined in <linux/mempolicy.h>.
0421
0422 Set [Task] Memory Policy::
0423
0424 long set_mempolicy(int mode, const unsigned long *nmask,
0425 unsigned long maxnode);
0426
Sets the calling task's "task/process memory policy" to the mode
0428 specified by the 'mode' argument and the set of nodes defined by
0429 'nmask'. 'nmask' points to a bit mask of node ids containing at least
0430 'maxnode' ids. Optional mode flags may be passed by combining the
0431 'mode' argument with the flag (for example: MPOL_INTERLEAVE |
0432 MPOL_F_STATIC_NODES).
0433
See the set_mempolicy(2) man page for more details.
0435
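For example, a launcher can use set_mempolicy() to establish a task policy
that a NUMA-unaware program will inherit across exec*(), as described in the
Task/Process Policy scope above; this is essentially the pattern that
numactl(8), described below, automates. The sketch is illustrative only and
assumes the <numaif.h> header from the numactl/libnuma package::

    /* Illustrative launcher: bind the task policy to node 0, then exec
     * a NUMA-unaware program, which inherits the policy. */
    #include <numaif.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        unsigned long nodemask = 1UL << 0;       /* node 0 only */

        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 1;
        }
        if (set_mempolicy(MPOL_BIND, &nodemask, 8 * sizeof(nodemask))) {
            perror("set_mempolicy");
            return 1;
        }
        execvp(argv[1], &argv[1]);               /* policy survives exec*() */
        perror("execvp");
        return 1;
    }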
0436
0437 Get [Task] Memory Policy or Related Information::
0438
0439 long get_mempolicy(int *mode,
0440 const unsigned long *nmask, unsigned long maxnode,
0441 void *addr, int flags);
0442
0443 Queries the "task/process memory policy" of the calling task, or the
0444 policy or location of a specified virtual address, depending on the
0445 'flags' argument.
0446
See the get_mempolicy(2) man page for more details.
0448
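Besides returning the task policy, get_mempolicy() answers address-based
queries. The hedged sketch below uses the MPOL_F_NODE and MPOL_F_ADDR flags
documented in get_mempolicy(2) to report which node currently backs a page;
it assumes the <numaif.h> header from the numactl/libnuma package::

    /* Hedged sketch: which node backs a page of an mmap()ed region? */
    #include <numaif.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        int node = -1;
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        p[0] = 1;       /* fault the page in */

        /* MPOL_F_NODE | MPOL_F_ADDR: return the node ID backing 'p' */
        if (get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR)) {
            perror("get_mempolicy");
            return 1;
        }
        printf("page at %p is on node %d\n", (void *)p, node);
        return 0;
    }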
0449
0450 Install VMA/Shared Policy for a Range of Task's Address Space::
0451
0452 long mbind(void *start, unsigned long len, int mode,
0453 const unsigned long *nmask, unsigned long maxnode,
0454 unsigned flags);
0455
mbind() installs the policy specified by (mode, nmask, maxnode) as a
0457 VMA policy for the range of the calling task's address space specified
0458 by the 'start' and 'len' arguments. Additional actions may be
0459 requested via the 'flags' argument.
0460
0461 See the mbind(2) man page for more details.
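
As an illustration, the hedged sketch below installs a Bind VMA policy on an
anonymous mapping so that pages backing the region come from node 0, and
passes MPOL_MF_MOVE to request migration of any pages already faulted in
(the page-migration behavior noted in the VMA Policy discussion above). It
assumes the <numaif.h> header from the numactl/libnuma package::

    /* Hedged sketch: Bind VMA policy on an anonymous mapping. */
    #include <numaif.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        unsigned long nodemask = 1UL << 0;       /* node 0 */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        /* 'start' must be page aligned; mmap() guarantees that here */
        if (mbind(p, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
                  MPOL_MF_MOVE)) {
            perror("mbind");
            return 1;
        }
        return 0;
    }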
0462
Set home node for a Range of Task's Address Space::
0464
0465 long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
0466 unsigned long home_node,
0467 unsigned long flags);
0468
sys_set_mempolicy_home_node() sets the home node for a VMA policy present in
the task's address range. The system call updates the home node only for
existing mempolicy ranges; other address ranges are ignored. The home node is
the NUMA node from which page allocations will preferentially come.
Specifying a home node overrides the default allocation policy of allocating
memory close to the local node of the executing CPU.
0475
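At the time of writing there is no widely packaged userspace wrapper for this
system call, so the hedged sketch below invokes it via syscall(2). It assumes
kernel and libc headers recent enough to define
__NR_set_mempolicy_home_node, and it is only meaningful for a range that
already carries a suitable VMA policy (for example one installed with
mbind())::

    /* Hedged helper sketch: set the home node for a range that already
     * has a VMA policy installed with mbind(). */
    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    static long set_home_node(void *start, unsigned long len,
                              unsigned long home_node)
    {
    #ifdef __NR_set_mempolicy_home_node
        /* the 'flags' argument must currently be zero */
        return syscall(__NR_set_mempolicy_home_node,
                       (unsigned long)start, len, home_node, 0UL);
    #else
        (void)start; (void)len; (void)home_node;
        return -1;      /* headers too old for this sketch */
    #endif
    }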
0476
0477 Memory Policy Command Line Interface
0478 ====================================
0479
0480 Although not strictly part of the Linux implementation of memory policy,
0481 a command line tool, numactl(8), exists that allows one to:
0482
0483 + set the task policy for a specified program via set_mempolicy(2), fork(2) and
0484 exec(2)
0485
0486 + set the shared policy for a shared memory segment via mbind(2)
0487
0488 The numactl(8) tool is packaged with the run-time version of the library
0489 containing the memory policy system call wrappers. Some distributions
0490 package the headers and compile-time libraries in a separate development
0491 package.
0492
0493 .. _mem_pol_and_cpusets:
0494
0495 Memory Policies and cpusets
0496 ===========================
0497
0498 Memory policies work within cpusets as described above. For memory policies
0499 that require a node or set of nodes, the nodes are restricted to the set of
0500 nodes whose memories are allowed by the cpuset constraints. If the nodemask
0501 specified for the policy contains nodes that are not allowed by the cpuset and
0502 MPOL_F_RELATIVE_NODES is not used, the intersection of the set of nodes
0503 specified for the policy and the set of nodes with memory is used. If the
0504 result is the empty set, the policy is considered invalid and cannot be
0505 installed. If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
0506 onto and folded into the task's set of allowed nodes as previously described.
0507
The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory
segments created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED
flags, and any of the tasks install shared policy on the region. In that
case, only nodes whose memories are allowed in both cpusets may be used in
the policies. Obtaining this information requires "stepping outside" the
memory policy APIs to use the cpuset information, and requires that one know
in what cpusets other tasks might be attaching to the shared region.
Furthermore, if the cpusets' allowed memory sets are disjoint, "local"
allocation is the only valid policy.