                Cache and TLB Flushing
                     Under Linux

            David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface, describes
its intended purpose, and states the side effect that is expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, i.e. in terms of what happens on that single
processor.  The SMP cases are a simple extension: just extend the
definition so that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be inefficient; this is in fact
an area where many optimizations are possible.  For example, if it
can be proven that a user address space has never executed on a cpu
(see mm_cpumask()), one need not perform a flush for this address
space on that cpu, as sketched below.

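As a minimal sketch of that optimization (assuming the generic
on_each_cpu_mask() helper; ipi_flush_tlb_mm() and the per-cpu
local_flush_tlb_mm() primitive are hypothetical names), an SMP
flush_tlb_mm() might look like:

        static void ipi_flush_tlb_mm(void *info)       /* hypothetical */
        {
                local_flush_tlb_mm((struct mm_struct *)info);
        }

        void flush_tlb_mm(struct mm_struct *mm)
        {
                /* Only cpus that have ever run 'mm' can hold stale
                 * TLB entries for it, so cross-call only those.
                 */
                on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
        }
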
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.

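For example, a caller changing a kernel (global) mapping follows the
pattern below (a sketch; the exact set_pte() arguments are port
specific):

        /* Update a kernel page table entry, then make sure no
         * cpu still holds the old global translation.
         */
        set_pte(kernel_ptep, new_kernel_pte);
        flush_tlb_all();
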
2) void flush_tlb_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork and exec.

3) void flush_tlb_range(struct vm_area_struct *vma,
                        unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'vma->vm_mm' in the range
        'start' to 'end-1' will be visible to the cpu.  That is, after
        running, there will be no entries in the TLB for 'mm' for
        virtual addresses in the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified.  One common approach is sketched below.

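A hedged sketch of that common approach (FLUSH_RANGE_LIMIT and
local_flush_tlb_page_one() are hypothetical names, not any real
port's symbols):

        void flush_tlb_range(struct vm_area_struct *vma,
                             unsigned long start, unsigned long end)
        {
                unsigned long addr;

                /* Past some size it is cheaper to drop the whole
                 * address space than to walk it page by page.
                 */
                if ((end - start) > FLUSH_RANGE_LIMIT) {
                        flush_tlb_mm(vma->vm_mm);
                        return;
                }

                for (addr = start; addr < end; addr += PAGE_SIZE)
                        local_flush_tlb_page_one(vma->vm_mm, addr);
        }
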
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any previous
        page table modification for address space 'vma->vm_mm' for
        user virtual address 'addr' will be visible to the cpu.  That
        is, after running, there will be no entries in the TLB for
        'vma->vm_mm' for virtual address 'addr'.

        This is used primarily during fault processing.

5) void update_mmu_cache(struct vm_area_struct *vma,
                         unsigned long address, pte_t *ptep)

        At the end of every page fault, this routine is invoked to
        tell the architecture specific code that a translation
        now exists at virtual address "address" for address space
        "vma->vm_mm", in the software page tables.

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations, as
        sketched below.  The sparc64 port currently does this.

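A minimal pre-load sketch, assuming a hypothetical tlb_insert()
primitive that loads one translation into a software-managed TLB:

        void update_mmu_cache(struct vm_area_struct *vma,
                              unsigned long address, pte_t *ptep)
        {
                pte_t pte = *ptep;

                /* Only pre-load translations which are valid. */
                if (pte_present(pte))
                        tlb_insert(vma->vm_mm, address, pte);
        }
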
6) void tlb_migrate_finish(struct mm_struct *mm)

        This interface is called at the end of an explicit
        process migration.  This interface provides a hook
        to allow a platform to update TLB or context-specific
        information for the address space.

        The ia64 sn2 platform is one example of a platform
        that uses this interface.

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(vma, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(vma, start, end);

        3) flush_cache_page(vma, addr, pfn);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during exit and exec.

2) void flush_cache_dup_mm(struct mm_struct *mm)

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during fork.

        This option is separate from flush_cache_mm to allow some
        optimizations for VIPT caches.

3) void flush_cache_range(struct vm_area_struct *vma,
                          unsigned long start, unsigned long end)

        Here we are flushing a specific range of (user) virtual
        addresses from the cache.  After running, there will be no
        entries in the cache for 'vma->vm_mm' for virtual addresses in
        the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized regions from the cache, instead of having the kernel
        call flush_cache_page (see below) for each entry which may be
        modified.

4) void flush_cache_page(struct vm_area_struct *vma,
                         unsigned long addr, unsigned long pfn)

        This time we need to remove a PAGE_SIZE sized range
        from the cache.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction cache' in
        "Harvard" type cache layouts).

        The 'pfn' indicates the physical page frame (shift this value
        left by PAGE_SHIFT to get the physical address) that 'addr'
        translates to.  It is this mapping which should be removed from
        the cache.

        After running, there will be no entries in the cache for
        'vma->vm_mm' for virtual address 'addr' which translates
        to 'pfn'.

        This is used primarily during fault processing.

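The pfn/physical address relationship stated above, as code:

        unsigned long paddr = pfn << PAGE_SHIFT;   /* frame --> physical */
        unsigned long frame = paddr >> PAGE_SHIFT; /* physical --> frame */
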
5) void flush_cache_kmaps(void)

        This routine need only be implemented if the platform utilizes
        highmem.  It will be called right before all of the kmaps
        are invalidated.

        After running, there will be no entries in the cache for
        the kernel virtual address range PKMAP_ADDR(0) to
        PKMAP_ADDR(LAST_PKMAP).

        This routine should be implemented in asm/highmem.h

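A sketch of such an asm/highmem.h implementation, assuming a
hypothetical flush_cache_kernel_range() primitive that flushes a
kernel virtual address range:

        /* Drop every cache line in the kmap window. */
        #define flush_cache_kmaps() \
                flush_cache_kernel_range(PKMAP_ADDR(0), \
                                         PKMAP_ADDR(LAST_PKMAP))
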
6) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

        Here in these two interfaces we are flushing a specific range
        of (kernel) virtual addresses from the cache.  After running,
        there will be no entries in the cache for the kernel address
        space for virtual addresses in the range 'start' to 'end-1'.

        The first of these two routines is invoked after map_vm_area()
        has installed the page table entries.  The second is invoked
        before unmap_kernel_range() deletes the page table entries.

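On a cpu with virtually indexed caches, a simple (if heavy-handed)
implementation is to flush everything; a sketch, assuming the port
already has a flush_cache_all() style primitive:

        #define flush_cache_vmap(start, end)    flush_cache_all()
        #define flush_cache_vunmap(start, end)  flush_cache_all()
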
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

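For example, a port whose virtually indexed D-cache is 16KB might
define, in asm/shmparam.h (a sketch, not any particular port's
actual value):

        /* Shared mappings must be 16KB aligned so that all user
         * aliases of a page land on the same cache "color".
         */
        #define SHMLBA  0x4000
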
NOTE: This does not fix shared mmaps; check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

        These two routines store data in user anonymous or COW
        pages.  They allow a port to efficiently avoid D-cache alias
        issues between userspace and the kernel.

        For example, a port may temporarily map 'from' and 'to' to
        kernel virtual addresses during the copy.  The virtual address
        for these two pages is chosen in such a way that the kernel
        load/store instructions happen to virtual addresses which are
        of the same "color" as the user mapping of the page.  Sparc64,
        for example, uses this technique.

        The 'addr' parameter tells the virtual address where the
        user will ultimately have this page mapped, and the 'page'
        parameter gives a pointer to the struct page of the target.

        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.

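A sketch of the trivial non-aliasing case mentioned above:

        void copy_user_page(void *to, void *from, unsigned long addr,
                            struct page *page)
        {
                /* No alias concerns: a plain page-sized copy. */
                memcpy(to, from, PAGE_SIZE);
        }

        void clear_user_page(void *to, unsigned long addr,
                             struct page *page)
        {
                /* Likewise, a plain page-sized clear. */
                memset(to, 0, PAGE_SIZE);
        }
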
  void flush_dcache_page(struct page *page)

        Any time the kernel writes to a page cache page, _OR_
        the kernel is about to read from a page cache page and
        user space shared/writable mappings of this page potentially
        exist, this routine is called.

        NOTE: This routine need only be called for page cache pages
              which can potentially ever be mapped into the address
              space of a user process.  So for example, VFS layer code
              handling vfs symlinks in the page cache need not call
              this interface at all.

        The phrase "kernel writes to a page cache page" means,
        specifically, that the kernel executes store instructions
        that dirty data in that page at the page->virtual mapping
        of that page.  It is important to flush here to handle
        D-cache aliasing, to make sure these kernel stores are
        visible to user space mappings of that page.

        The corollary case is just as important: if there are users
        which have shared+writable mappings of this file, we must make
        sure that kernel reads of these pages will see the most recent
        stores done by the user.

        If D-cache aliasing is not an issue, this routine may
        simply be defined as a nop on that architecture.

        There is a bit set aside in page->flags (PG_arch_1) as
        "architecture private".  The kernel guarantees that,
        for pagecache pages, it will clear this bit when such
        a page first enters the pagecache.

        This allows these interfaces to be implemented much more
        efficiently.  It allows one to "defer" (perhaps indefinitely)
        the actual flush if there are currently no user processes
        mapping this page.  See sparc64's flush_dcache_page and
        update_mmu_cache implementations for an example of how to go
        about doing this.

        The idea is, first at flush_dcache_page() time, if
        page->mapping->i_mmap is an empty tree, just mark the architecture
        private page flag bit.  Later, in update_mmu_cache(), a check is
        made of this flag bit, and if set the flush is done and the flag
        bit is cleared.

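        A minimal sketch of that deferral scheme (mapping_mapped()
        serves as the "no user mappings" test; the low-level
        __local_flush_dcache_page() is a hypothetical name):

                void flush_dcache_page(struct page *page)
                {
                        struct address_space *mapping = page_mapping(page);

                        if (mapping && !mapping_mapped(mapping)) {
                                /* No user mappings yet: remember the
                                 * page is D-cache dirty and defer.
                                 */
                                set_bit(PG_arch_1, &page->flags);
                                return;
                        }
                        __local_flush_dcache_page(page);
                }

                void update_mmu_cache(struct vm_area_struct *vma,
                                      unsigned long address, pte_t *ptep)
                {
                        pte_t pte = *ptep;
                        struct page *page;

                        if (!pte_present(pte))
                                return;
                        page = pte_page(pte);
                        if (test_and_clear_bit(PG_arch_1, &page->flags))
                                __local_flush_dcache_page(page);
                }
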
        IMPORTANT NOTE: It is often important, if you defer the flush,
                        that the actual flush occurs on the same CPU
                        that did the stores into the page making it
                        dirty.  Again, see sparc64 for examples of how
                        to deal with this.

  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
                         unsigned long user_vaddr,
                         void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
                           unsigned long user_vaddr,
                           void *dst, void *src, int len)
        When the kernel needs to copy arbitrary data in and out
        of arbitrary user pages (e.g. for ptrace()) it will use
        these two routines.

        Any necessary cache flushing or other coherency operations
        that need to occur should happen here.  If the processor's
        instruction cache does not snoop cpu stores, it is very
        likely that you will need to flush the instruction cache
        for copy_to_user_page().

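        A sketch of a typical non-snooping implementation; the
        flush_icache_user_range() call stands in for whatever
        per-port primitive flushes the I-cache for a user page
        (treat the name as hypothetical):

                void copy_to_user_page(struct vm_area_struct *vma,
                                       struct page *page,
                                       unsigned long user_vaddr,
                                       void *dst, void *src, int len)
                {
                        memcpy(dst, src, len);
                        /* New code may be executed from this page. */
                        if (vma->vm_flags & VM_EXEC)
                                flush_icache_user_range(vma, page,
                                                        user_vaddr, len);
                }

                void copy_from_user_page(struct vm_area_struct *vma,
                                         struct page *page,
                                         unsigned long user_vaddr,
                                         void *dst, void *src, int len)
                {
                        /* Reads need no I-cache work. */
                        memcpy(dst, src, len);
                }
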
  void flush_anon_page(struct vm_area_struct *vma, struct page *page,
                       unsigned long vmaddr)
        When the kernel needs to access the contents of an anonymous
        page, it calls this function (currently only
        get_user_pages()).  Note: flush_dcache_page() deliberately
        doesn't work for an anonymous page.  The default
        implementation is a nop (and should remain so for all coherent
        architectures).  For incoherent architectures, it should flush
        the cache of the page at vmaddr.

  void flush_kernel_dcache_page(struct page *page)
        When the kernel needs to modify a user page it has obtained
        with kmap, it calls this function after all modifications are
        complete (but before kunmapping it) to bring the underlying
        page up to date.  It is assumed here that the user has no
        incoherent cached copies (i.e. the original page was obtained
        from a mechanism like get_user_pages()).  The default
        implementation is a nop and should remain so on all coherent
        architectures.  On incoherent architectures, this should flush
        the kernel cache for the page (using page_address(page)).

  void flush_icache_range(unsigned long start, unsigned long end)
        When the kernel stores into addresses that it will execute
        out of (e.g. when loading modules), this function is called.

        If the icache does not snoop stores then this routine will need
        to flush it.

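        Caller-side usage is simply (a sketch; 'addr' and 'len'
        describe the buffer of instructions just written):

                memcpy((void *)addr, insn_buf, len);    /* store code */
                flush_icache_range(addr, addr + len);   /* then flush */
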
  void flush_icache_page(struct vm_area_struct *vma, struct page *page)
        All the functionality of flush_icache_page can be implemented in
        flush_dcache_page and update_mmu_cache.  In the future, the hope
        is to remove this interface completely.

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel.  Such aliases are set up by use of the
vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases.  This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency.  It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  void flush_kernel_vmap_range(void *vaddr, int size)
       flushes the kernel cache for a given virtual address range in
       the vmap area.  This is to make sure that any data the kernel
       modified in the vmap range is made visible to the physical
       page.  The design is to make this area safe to perform I/O on.
       Note that this API does *not* also flush the offset map alias
       of the area.

  void invalidate_kernel_vmap_range(void *vaddr, int size)
       invalidates the cache for a given virtual address range in
       the vmap area.  This prevents the processor from making the
       cache stale by speculatively reading data while the I/O was
       occurring to the physical pages.  It is only necessary for
       data reads into the vmap area.
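
Putting the two together, code doing I/O to a vmap alias brackets the
I/O roughly like this (a sketch; 'pages' and 'nr_pages' come from the
surrounding code, and do_read_io_to_physical_pages() is a stand-in
for the real I/O submission path):

        void *vaddr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
        int size = nr_pages * PAGE_SIZE;

        flush_kernel_vmap_range(vaddr, size);      /* push dirty lines */
        do_read_io_to_physical_pages(pages, nr_pages);
        invalidate_kernel_vmap_range(vaddr, size); /* drop stale lines */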