mm: Allow architectures to request 'old' entries when prefaulting

Commit 5c0a85fad949 ("mm: make faultaround produce old ptes") changed
the "faultaround" behaviour to initialise prefaulted PTEs as 'old',
since this avoids vmscan wrongly assuming that they are hot, despite
having never been explicitly accessed by userspace. The change has been
shown to benefit numerous arm64 micro-architectures (with hardware
access flag) running Android, where both application launch latency and
direct reclaim time are significantly reduced (by 10%+ and ~80%
respectively).

Unfortunately, commit 315d09bf30c2 ("Revert "mm: make faultaround
produce old ptes"") reverted the change due to it being identified as
the cause of a ~6% regression in unixbench on x86. Experiments on a
variety of recent arm64 micro-architectures indicate that unixbench is
not affected by the original commit, which appears to yield a 0-1%
performance improvement.

Since one size does not fit all for the initial state of prefaulted
PTEs, introduce arch_wants_old_prefaulted_pte(), which allows an
architecture to opt-in to 'old' prefaulted PTEs at runtime based on
whatever criteria it may have.
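
The generic fallback and the arm64 opt-in are not part of the hunk below,
so here is a minimal sketch of how the hook could be wired up. The
placement in include/linux/pgtable.h, the arm64 override and the use of
cpu_has_hw_af() are illustrative assumptions, not lifted from this patch:

  /* include/linux/pgtable.h: generic fallback, overridable per arch */
  #ifndef arch_wants_old_prefaulted_pte
  static inline bool arch_wants_old_prefaulted_pte(void)
  {
  	/*
  	 * Transitioning a PTE from 'old' to 'young' can be expensive on
  	 * some architectures, so default to 'young' prefaulted entries.
  	 */
  	return false;
  }
  #endif

  /* arch/arm64/include/asm/pgtable.h (assumed): opt in when the CPU can
   * update the access flag in hardware, so 'old' entries age back in
   * cheaply on first access. */
  #define arch_wants_old_prefaulted_pte	cpu_has_hw_af

The #ifndef pattern keeps every other architecture on today's 'young'
prefaulted-PTE behaviour unless it explicitly overrides the hook.
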
Cc: Jan Kara <jack@suse.cz>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Vinayak Menon <vinmenon@codeaurora.org>
Signed-off-by: Will Deacon <will@kernel.org>

diff --git a/mm/filemap.c b/mm/filemap.c
index c1f2dc8..a6dc979 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3019,6 +3019,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	struct address_space *mapping = file->f_mapping;
 	pgoff_t last_pgoff = start_pgoff;
 	unsigned long address = vmf->address;
+	unsigned long flags = vmf->flags;
 	XA_STATE(xas, &mapping->i_pages, start_pgoff);
 	struct page *head, *page;
 	unsigned int mmap_miss = READ_ONCE(file->f_ra.mmap_miss);
@@ -3051,14 +3052,18 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 		if (!pte_none(*vmf->pte))
 			goto unlock;
 
+		/* We're about to handle the fault */
+		if (vmf->address == address) {
+			vmf->flags &= ~FAULT_FLAG_PREFAULT;
+			ret = VM_FAULT_NOPAGE;
+		} else {
+			vmf->flags |= FAULT_FLAG_PREFAULT;
+		}
+
 		do_set_pte(vmf, page);
 		/* no need to invalidate: a not-present page won't be cached */
 		update_mmu_cache(vma, vmf->address, vmf->pte);
 		unlock_page(head);
-
-		/* The fault is handled */
-		if (vmf->address == address)
-			ret = VM_FAULT_NOPAGE;
 		continue;
 unlock:
 		unlock_page(head);
@@ -3067,6 +3072,7 @@ vm_fault_t filemap_map_pages(struct vm_fault *vmf,
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
 	rcu_read_unlock();
+	vmf->flags = flags;
 	vmf->address = address;
 	WRITE_ONCE(file->f_ra.mmap_miss, mmap_miss);
 	return ret;
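
The consumer side of FAULT_FLAG_PREFAULT is not shown in the hunk above.
What follows is a rough sketch of how do_set_pte() in mm/memory.c might
honour the flag, keeping the two-argument signature used above and
covering the file-backed case only; the flag value and the surrounding
helper calls are illustrative assumptions, not part of this diff:

  /* include/linux/mm.h (sketch): new fault flag marking entries that are
   * installed speculatively by faultaround rather than on demand. */
  #define FAULT_FLAG_PREFAULT	0x400

  /* mm/memory.c (sketch): file-backed path only */
  void do_set_pte(struct vm_fault *vmf, struct page *page)
  {
  	struct vm_area_struct *vma = vmf->vma;
  	bool prefault = vmf->flags & FAULT_FLAG_PREFAULT;
  	bool write = vmf->flags & FAULT_FLAG_WRITE;
  	pte_t entry;

  	flush_icache_page(vma, page);
  	entry = mk_pte(page, vma->vm_page_prot);

  	/* Only prefaulted entries are made 'old', and only when the
  	 * architecture has asked for it. */
  	if (prefault && arch_wants_old_prefaulted_pte())
  		entry = pte_mkold(entry);

  	if (write)
  		entry = maybe_mkwrite(pte_mkdirty(entry), vma);

  	inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
  	page_add_file_rmap(page, false);
  	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
  }

With something along these lines in place, the filemap_map_pages() loop
above clears FAULT_FLAG_PREFAULT only for the address that actually
faulted, so every other entry installed by faultaround starts life 'old'
on architectures that opted in.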