.. _transhuge:

============================
Transparent Hugepage Support
============================

This document describes design principles for Transparent Hugepage (THP)
support and its interaction with other parts of the memory management
system.

Design principles
=================

- "graceful fallback": mm components which don't have transparent hugepage
  knowledge fall back to breaking a huge pmd mapping into a table of ptes
  and, if necessary, splitting a transparent hugepage. Therefore these
  components can continue working on the regular pages or regular pte
  mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated onto hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to keep unmovable pages from fragmenting all the memory, but such a
  tweak is not specific to transparent hugepage support and it's a
  generic feature that applies to all dynamic high order allocations
  in the kernel)

get_user_pages and follow_page
==============================

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users only care about the actual physical
address of the page and its temporary pinning (to be released after the
I/O is complete), so they won't ever notice that the page is huge. But
if any driver is going to poke at the page structure of a tail page
(for example to check page->mapping or other bits that are only relevant
for the head page and not the tail page), it should be updated to check
the head page instead. Taking a reference on any head/tail page
prevents the page from being split by anyone.
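
For illustration, a minimal driver-side sketch (the helper name is
hypothetical; compound_head() and PageAnon() are existing kernel
interfaces)::

    #include <linux/mm.h>
    #include <linux/page-flags.h>

    /*
     * GUP may hand us a THP tail page; fields like ->mapping are only
     * meaningful on the head page, so always inspect the head page.
     */
    static bool my_drv_page_is_file_backed(struct page *page)
    {
            struct page *head = compound_head(page);

            return head->mapping && !PageAnon(head);
    }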

.. note::
   these aren't new constraints to the GUP API, and they match the
   same constraints that apply to hugetlbfs too, so any driver capable
   of handling GUP on hugetlbfs will also work fine on transparent
   hugepage backed mappings.

Graceful fallback
=================

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr), where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where it is
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example. split_huge_page() can
fail if the page is pinned, and you must handle this correctly.
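
A sketch of the caller-side pattern, assuming the caller already holds a
reference on the page (the wrapper name is made up; PageTransHuge(),
lock_page()/unlock_page() and split_huge_page() are the real interfaces)::

    static int my_split_if_needed(struct page *page)
    {
            struct page *head = compound_head(page);
            int ret;

            if (!PageTransHuge(head))
                    return 0;

            /* split_huge_page() requires the page to be locked */
            lock_page(head);
            ret = split_huge_page(head);    /* 0 on success */
            unlock_page(head);

            /*
             * A non-zero return means the page was pinned and is still
             * huge, so the caller must fall back to hugepage handling.
             */
            return ret;
    }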

Example to make mremap.c transparent hugepage aware with a one-liner
change::

    diff --git a/mm/mremap.c b/mm/mremap.c
    --- a/mm/mremap.c
    +++ b/mm/mremap.c
    @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
                    return NULL;

            pmd = pmd_offset(pud, addr);
    +       split_huge_pmd(vma, pmd, addr);
            if (pmd_none_or_clear_bad(pmd))
                    return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_lock in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_lock in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise, you can proceed to process the huge pmd and the
hugepage natively. Once finished, you can drop the page table lock.
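
The same protocol as a minimal sketch (my_process_huge_pmd() and
my_process_ptes() are hypothetical helpers; pmd_trans_huge(), pmd_lock()
and spin_unlock() are the real interfaces)::

    static void my_walk_pmd(struct mm_struct *mm, struct vm_area_struct *vma,
                            pmd_t *pmd, unsigned long addr)
    {
            /* the caller holds mmap_lock in at least read mode */
            if (pmd_trans_huge(*pmd)) {
                    spinlock_t *ptl = pmd_lock(mm, pmd);

                    if (pmd_trans_huge(*pmd)) {
                            /* still huge: handle it natively under the lock */
                            my_process_huge_pmd(vma, pmd, addr);
                            spin_unlock(ptl);
                            return;
                    }
                    /* split_huge_pmd() raced with us: use the pte path */
                    spin_unlock(ptl);
            }
            my_process_ptes(vma, pmd, addr);        /* regular pte processing */
    }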

Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

- get_page()/put_page() and GUP operate on the head page's ->_refcount
  (see the sketch after this list).

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of a sub-page with a PTE entry increments/decrements
  ->_mapcount of the relevant sub-page of the compound page.

- map/unmap of the whole compound page is accounted for in compound_mapcount
  (stored in the first tail page). For file huge pages, we also increment
  ->_mapcount of all sub-pages in order to have race-free detection of the
  last unmap of subpages.
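
A minimal illustration of the first point, ignoring concurrent references
for simplicity (the function name is made up; compound_head(),
page_ref_count(), get_page() and put_page() are the real interfaces)::

    static void my_pin_demo(struct page *page)
    {
            struct page *head = compound_head(page);
            int before = page_ref_count(head);

            get_page(page);         /* page may be a tail page */
            pr_info("head refcount went from %d to %d\n",
                    before, page_ref_count(head));
            put_page(page);
    }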

PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.

For anonymous pages, PageDoubleMap() also indicates that ->_mapcount in all
subpages is offset up by one. This additional reference is required to
get race-free detection of the unmap of subpages when we have them mapped
with both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage mapcount
tracking. The alternative is to alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.

For anonymous pages, we set PG_double_map when a PMD of the page is split
for the first time while the page still has a PMD mapping. The additional
references go away with the last compound_mapcount.

File pages get PG_double_map set on the first map of the page with a PTE,
and it goes away when the page is evicted from the page cache.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
request to split a pinned huge page: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the split_huge_page
caller must have a reference to the head page).

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().
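
For example, a scanner that walks pfns can only take a speculative
reference like this (sketch only; pfn validity checks and error handling
are omitted)::

    struct page *page = pfn_to_page(pfn);

    if (!get_page_unless_zero(page))
            continue;       /* free page or THP tail page: skip it */

    /* ... inspect the page ... */
    put_page(page);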

All tail pages have zero ->_refcount until the atomic_add() performed
during the split. This prevents the scanner from getting a reference to
a tail page up to that point. After the atomic_add() we don't care about
the ->_refcount value. We already know how many references should be
uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind.
It's clear where the references should go after the split: they will stay
on the head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
the pmd can be split at any point and it never fails.

Partial unmap and deferred_split_huge_page()
============================================

Unmapping part of a THP (with munmap() or another way) is not going to free
memory immediately. Instead, we detect that a subpage of the THP is not in
use in page_remove_rmap() and queue the THP for splitting if memory
pressure comes. Splitting will free up unused subpages.
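
A simplified sketch of the idea (the real logic in page_remove_rmap() is
more involved; PageTransCompound(), compound_head() and
deferred_split_huge_page() are existing interfaces)::

    /* a PTE mapping of one subpage has just been removed */
    if (PageTransCompound(page))
            deferred_split_huge_page(compound_head(page));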

Splitting the page right away is not an option due to the locking context
in the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2)
if a THP crosses a VMA boundary.

The function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.