0001 .. _page_migration:
0002
0003 ==============
0004 Page migration
0005 ==============
0006
0007 Page migration allows moving the physical location of pages between
0008 nodes in a NUMA system while the process is running. This means that the
0009 virtual addresses that the process sees do not change. However, the
0010 system rearranges the physical location of those pages.
0011
0012 Also see :ref:`Heterogeneous Memory Management (HMM) <hmm>`
0013 for migrating pages to or from device private memory.
0014
0015 The main intent of page migration is to reduce the latency of memory accesses
0016 by moving pages near to the processor where the process accessing that memory
0017 is running.
0018
Page migration allows a process to manually relocate its pages onto
specific nodes through the MF_MOVE and MF_MOVE_ALL options while setting
a new memory policy via mbind(). The pages of a process can also be
relocated from another process using the migrate_pages() system call,
which takes two sets of nodes and moves the pages of a process that are
located on the source nodes to the destination nodes.
Page migration functions are provided by the numactl package by Andi Kleen
(a version later than 0.9.3 is required; get it from
https://github.com/numactl/numactl.git). numactl provides libnuma,
which offers an interface similar to that of other NUMA functionality for
page migration. ``cat /proc/<pid>/numa_maps`` allows an easy review of where
the pages of a process are located. See also the numa_maps documentation in
the proc(5) man page.
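As an illustration, migrate_pages() can be invoked directly via syscall(2),
since glibc provides no wrapper (libnuma's numa_migrate_pages() is the usual
higher-level interface). The helper names below are hypothetical, and the
sketch assumes a Linux system where the named nodes exist:

```c
/* Hypothetical helpers: move all pages of a process from one NUMA node
 * to another with the migrate_pages(2) system call. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>

/* Build a nodemask with a single node bit set. */
static unsigned long single_node_mask(int node)
{
	return 1UL << node;
}

/* Returns the number of pages that could not be moved, or -1 on error
 * (e.g. if the destination node is not online). */
static long move_process_pages(pid_t pid, int from_node, int to_node)
{
	unsigned long old_nodes = single_node_mask(from_node);
	unsigned long new_nodes = single_node_mask(to_node);

	/* maxnode is the number of bits in the nodemasks. */
	return syscall(SYS_migrate_pages, pid,
		       8 * sizeof(unsigned long),
		       &old_nodes, &new_nodes);
}
```

A libnuma user would instead call numa_migrate_pages(pid, from, to) with
struct bitmask arguments built by numa_allocate_nodemask() and
numa_bitmask_setbit().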
0032
Manual migration is useful if, for example, the scheduler has relocated
a process to a processor on a distant node. A batch scheduler or an
administrator may detect the situation and move the pages of the process
nearer to the new processor. The kernel itself only provides
manual page migration support. Automatic page migration may be implemented
through user space processes that move pages. A special system call,
move_pages(), allows moving individual pages within a process.
For example, a NUMA profiler may obtain a log showing frequent off-node
accesses and may use the result to move pages to more advantageous
locations.
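As a sketch of how move_pages() can be used, the call below queries (rather
than moves) the node on which a page currently resides, by passing a NULL
nodes array. page_node_of() is a hypothetical helper, not part of any
library:

```c
/* Query which NUMA node the page containing 'addr' is on, using
 * move_pages(2) with a NULL nodes array (a pure status query). */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

/* Returns the node number, or a negative errno-style value
 * (e.g. -ENOENT if the page is not present). */
static int page_node_of(void *addr)
{
	void *pages[1] = { addr };
	int status[1] = { 0 };
	/* pid 0 means the calling process; flags 0 is fine for a query. */
	long ret = syscall(SYS_move_pages, 0, 1UL,
			   pages, (const int *)0, status, 0);

	return ret < 0 ? (int)ret : status[0];
}
```

To actually move the page, one would pass an array of target nodes instead
of NULL and set MPOL_MF_MOVE in the flags argument; the status array then
reports the per-page result of the move.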
0043
0044 Larger installations usually partition the system using cpusets into
0045 sections of nodes. Paul Jackson has equipped cpusets with the ability to
0046 move pages when a task is moved to another cpuset (See
0047 :ref:`CPUSETS <cpusets>`).
Cpusets allow the automation of process locality. If a task is moved to
a new cpuset, then all of its pages are moved with it so that the
performance of the process does not sink dramatically. The pages of
processes in a cpuset are also moved if the allowed memory nodes of the
cpuset are changed.
0053
All of these migration techniques preserve the relative location of pages
within a group of nodes, so a particular memory allocation pattern is
maintained even after a process has been migrated. This is necessary in
order to preserve the memory latencies: processes will run with similar
performance after migration.
0059
Page migration occurs in several steps. First there is a high level
description for those trying to use migrate_pages() from inside the kernel
(for userspace usage see Andi Kleen's numactl package mentioned above),
and then a low level description of how the details work.
0064
0065 In kernel use of migrate_pages()
0066 ================================
0067
0068 1. Remove pages from the LRU.
0069
0070 Lists of pages to be migrated are generated by scanning over
0071 pages and moving them into lists. This is done by
0072 calling isolate_lru_page().
   Calling isolate_lru_page() increases the reference count of the page
   so that it cannot vanish while the page migration occurs.
0075 It also prevents the swapper or other scans from encountering
0076 the page.
0077
0078 2. We need to have a function of type new_page_t that can be
0079 passed to migrate_pages(). This function should figure out
0080 how to allocate the correct new page given the old page.
0081
0082 3. The migrate_pages() function is called which attempts
0083 to do the migration. It will call the function to allocate
0084 the new page for each page that is considered for
0085 moving.
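The three steps above might be sketched as follows. This is illustrative
pseudocode rather than buildable kernel source: the exact migrate_pages()
signature has varied across kernel versions, and error handling is omitted.

```c
/* Sketch only: allocator callback plus isolation and migration of a
 * single page, following the steps above. */

static struct page *new_page_on_node(struct page *page, unsigned long private)
{
	int nid = (int)private;	/* target node chosen by the caller */

	/* Step 2: allocate a suitable new page for the old page. */
	return alloc_pages_node(nid, GFP_HIGHUSER_MOVABLE, 0);
}

static int migrate_one_page(struct page *page, int target_nid)
{
	LIST_HEAD(pagelist);

	/* Step 1: take the page off the LRU and pin it. */
	if (isolate_lru_page(page))
		return -EBUSY;
	list_add_tail(&page->lru, &pagelist);

	/* Step 3: hand the list and the allocator to migrate_pages(). */
	return migrate_pages(&pagelist, new_page_on_node, NULL,
			     (unsigned long)target_nid, MIGRATE_SYNC,
			     MR_SYSCALL);
}
```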
0086
0087 How migrate_pages() works
0088 =========================
0089
migrate_pages() does several passes over its list of pages. A page is moved
if all references to the page are removable at that time. The page has
0092 already been removed from the LRU via isolate_lru_page() and the refcount
0093 is increased so that the page cannot be freed while page migration occurs.
0094
0095 Steps:
0096
0097 1. Lock the page to be migrated.
0098
0099 2. Ensure that writeback is complete.
0100
3. Lock the new page that we want to move to. It is locked so that accesses to
   this (not yet up-to-date) page immediately block while the move is in
   progress.
0103
0104 4. All the page table references to the page are converted to migration
0105 entries. This decreases the mapcount of a page. If the resulting
0106 mapcount is not zero then we do not migrate the page. All user space
0107 processes that attempt to access the page will now wait on the page lock
0108 or wait for the migration page table entry to be removed.
0109
0110 5. The i_pages lock is taken. This will cause all processes trying
0111 to access the page via the mapping to block on the spinlock.
0112
0113 6. The refcount of the page is examined and we back out if references remain.
0114 Otherwise, we know that we are the only one referencing this page.
0115
0116 7. The radix tree is checked and if it does not contain the pointer to this
0117 page then we back out because someone else modified the radix tree.
0118
0119 8. The new page is prepped with some settings from the old page so that
0120 accesses to the new page will discover a page with the correct settings.
0121
0122 9. The radix tree is changed to point to the new page.
0123
0124 10. The reference count of the old page is dropped because the address space
0125 reference is gone. A reference to the new page is established because
0126 the new page is referenced by the address space.
0127
11. The i_pages lock is dropped. With that, lookups in the mapping
    become possible again. Processes will move from spinning on the lock
0130 to sleeping on the locked new page.
0131
0132 12. The page contents are copied to the new page.
0133
0134 13. The remaining page flags are copied to the new page.
0135
0136 14. The old page flags are cleared to indicate that the page does
0137 not provide any information anymore.
0138
0139 15. Queued up writeback on the new page is triggered.
0140
0141 16. If migration entries were inserted into the page table, then replace them
0142 with real ptes. Doing so will enable access for user space processes not
0143 already waiting for the page lock.
0144
0145 17. The page locks are dropped from the old and new page.
0146 Processes waiting on the page lock will redo their page faults
0147 and will reach the new page.
0148
0149 18. The new page is moved to the LRU and can be scanned by the swapper,
0150 etc. again.
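In kernel-function terms, the sequence above corresponds roughly to the
following outline. This is illustrative only, not actual kernel source;
function names, the page/folio split, and locking details vary by kernel
version:

```c
lock_page(oldpage);                      /* step 1                        */
wait_on_page_writeback(oldpage);         /* step 2                        */
lock_page(newpage);                      /* step 3                        */
try_to_migrate(oldpage);                 /* step 4: migration entries     */
xa_lock_irq(&mapping->i_pages);          /* step 5                        */
/* steps 6-7: back out unless we hold the only remaining references and
 * the mapping still points at oldpage                                    */
/* steps 8-10: prep newpage, repoint the mapping, transfer refcounts      */
xa_unlock_irq(&mapping->i_pages);        /* step 11                       */
/* steps 12-14: copy contents and flags, clear the old page's flags       */
/* step 15: trigger any queued writeback on newpage                       */
remove_migration_ptes(oldpage, newpage); /* step 16: restore real ptes    */
unlock_page(newpage);                    /* step 17                       */
unlock_page(oldpage);
putback_lru_page(newpage);               /* step 18                       */
```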
0151
0152 Non-LRU page migration
0153 ======================
0154
Although migration originally aimed at reducing the latency of memory
0156 accesses for NUMA, compaction also uses migration to create high-order
0157 pages. For compaction purposes, it is also useful to be able to move
0158 non-LRU pages, such as zsmalloc and virtio-balloon pages.
0159
0160 If a driver wants to make its pages movable, it should define a struct
0161 movable_operations. It then needs to call __SetPageMovable() on each
0162 page that it may be able to move. This uses the ``page->mapping`` field,
0163 so this field is not available for the driver to use for other purposes.
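A driver-side sketch might look like the following. The ``my_`` names are
placeholders rather than real kernel symbols, and the callback bodies only
indicate what each operation is responsible for:

```c
/* Hypothetical driver sketch: make driver-owned pages movable so that
 * compaction can migrate them. */

static bool my_isolate_page(struct page *page, isolate_mode_t mode)
{
	/* Pin the driver's data so the page cannot be freed or reused
	 * while migration is in progress; return false if busy. */
	return true;
}

static int my_migrate_page(struct page *dst, struct page *src,
			   enum migrate_mode mode)
{
	/* Copy src's contents and the driver's metadata to dst, then
	 * update any driver-internal pointers to reference dst. */
	return 0;	/* MIGRATEPAGE_SUCCESS */
}

static void my_putback_page(struct page *page)
{
	/* Migration failed or was aborted; undo my_isolate_page(). */
}

static const struct movable_operations my_movable_ops = {
	.isolate_page	= my_isolate_page,
	.migrate_page	= my_migrate_page,
	.putback_page	= my_putback_page,
};

/* For each page the driver may be able to move: */
__SetPageMovable(page, &my_movable_ops);
```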
0164
0165 Monitoring Migration
0166 =====================
0167
0168 The following events (counters) can be used to monitor page migration.
0169
0170 1. PGMIGRATE_SUCCESS: Normal page migration success. Each count means that a
0171 page was migrated. If the page was a non-THP and non-hugetlb page, then
0172 this counter is increased by one. If the page was a THP or hugetlb, then
0173 this counter is increased by the number of THP or hugetlb subpages.
0174 For example, migration of a single 2MB THP that has 4KB-size base pages
0175 (subpages) will cause this counter to increase by 512.
0176
0177 2. PGMIGRATE_FAIL: Normal page migration failure. Same counting rules as for
0178 PGMIGRATE_SUCCESS, above: this will be increased by the number of subpages,
0179 if it was a THP or hugetlb.
0180
0181 3. THP_MIGRATION_SUCCESS: A THP was migrated without being split.
0182
4. THP_MIGRATION_FAIL: A THP could not be migrated nor could it be split.
0184
5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
   to be split. After splitting, a migration retry was used for its subpages.
0187
0188 THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
0189 PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
0190 THP_MIGRATION_FAIL and PGMIGRATE_FAIL to increase.
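These counters appear, lower-cased, in ``/proc/vmstat`` (e.g.
pgmigrate_success). The per-subpage accounting above can be expressed as a
one-line computation; the helper name below is only for illustration:

```c
#include <stddef.h>

/* Number of PGMIGRATE_* increments contributed by migrating one unit:
 * a base page contributes 1; a THP or hugetlb page contributes one
 * increment per base-page-sized subpage. */
static long pgmigrate_delta(size_t unit_size, size_t base_page_size)
{
	return (long)(unit_size / base_page_size);
}
```

For example, pgmigrate_delta(2UL << 20, 4096) evaluates to 512, matching
the 2MB THP example above.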
0191
0192 Christoph Lameter, May 8, 2006.
0193 Minchan Kim, Mar 28, 2016.
0194
0195 .. kernel-doc:: include/linux/migrate.h