.. _transhuge:

============================
Transparent Hugepage Support
============================

This document describes the design principles of the Transparent Hugepage
(THP) support and its interaction with other parts of memory
management.

Design principles
=================

- "graceful fallback": mm components which don't have transparent hugepage
  knowledge fall back to breaking a huge pmd mapping into a table of ptes
  and, if necessary, splitting a transparent hugepage. Therefore these
  components can continue working on the regular pages or regular pte
  mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated to hugepages
  automatically (with khugepaged)

- it doesn't require memory reservation and in turn it uses hugepages
  whenever possible (the only possible reservation here is kernelcore=
  to avoid unmovable pages fragmenting all the memory but such a tweak
  is not specific to transparent hugepage support and it's a generic
  feature that applies to all dynamic high order allocations in the
  kernel)

get_user_pages and follow_page
==============================

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact that the page is huge. But
if any driver is going to mangle over the page structure of the tail
page (like for checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead. Taking a reference on any head/tail page
would prevent the page from being split by anyone.

.. note::
   these aren't new constraints to the GUP API, and they match the
   same constraints that apply to hugetlbfs too, so any driver capable
   of handling GUP on hugetlbfs will also work fine on transparent
   hugepage backed mappings.

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them.

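A minimal sketch of that usage (locking and error handling are
simplified; vma and addr are assumed to come from the caller, which
must hold mmap_sem)::

	struct page *page;

	/* with FOLL_SPLIT any huge page is split before being returned */
	page = follow_page(vma, addr, FOLL_GET | FOLL_SPLIT);
	if (IS_ERR_OR_NULL(page))
		return;		/* nothing mapped here, or an error */

	/* "page" is now a regular, non-compound page */
	/* ... use the page ... */

	put_page(page);		/* drop the FOLL_GET reference */
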
Graceful fallback
=================

Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one-liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.

If you're not walking pagetables but you run into a physical hugepage
that you can't handle natively in your code, you can split it by
calling split_huge_page(page). This is what the Linux VM does before
it tries to swap out the hugepage, for example. split_huge_page() can
fail if the page is pinned and you must handle this correctly.

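A minimal sketch of such a call (assuming the caller already holds a
reference on the page; split_huge_page() is called with the page
locked and its return value must be checked)::

	lock_page(page);
	/* fails (e.g. -EBUSY) when the page has additional pins */
	if (split_huge_page(page)) {
		/* could not split: keep handling the compound page */
	}
	unlock_page(page);
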
Example to make mremap.c transparent hugepage aware with a one-liner
change::

	diff --git a/mm/mremap.c b/mm/mremap.c
	--- a/mm/mremap.c
	+++ b/mm/mremap.c
	@@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
			return NULL;

		pmd = pmd_offset(pud, addr);
	+	split_huge_pmd(vma, pmd, addr);
		if (pmd_none_or_clear_bad(pmd))
			return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible to be hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure that a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise, you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.

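A minimal sketch of that pattern (variable names are illustrative; mm,
pud and addr are assumed to come from the caller, which holds
mmap_sem)::

	pmd_t *pmd;
	spinlock_t *ptl;

	pmd = pmd_offset(pud, addr);
	if (pmd_trans_huge(*pmd)) {
		ptl = pmd_lock(mm, pmd);
		if (pmd_trans_huge(*pmd)) {
			/* process the huge pmd/hugepage natively */
			spin_unlock(ptl);
			return;
		}
		/* split_huge_pmd() ran in parallel: use the pte path */
		spin_unlock(ptl);
	}
	/* regular pte handling goes here */
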
Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

  - get_page()/put_page() and GUP operate on the head page's ->_refcount.

  - ->_refcount in tail pages is always zero: get_page_unless_zero() never
    succeeds on tail pages.

  - map/unmap of the pages with PTE entries increments/decrements
    ->_mapcount on the relevant sub-page of the compound page.

  - map/unmap of the whole compound page is accounted in compound_mapcount
    (stored in the first tail page). For file huge pages, we also increment
    ->_mapcount of all sub-pages in order to have race-free detection of
    the last unmap of subpages.

PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.

For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
subpages is offset up by one. This additional reference is required to
get race-free detection of unmap of subpages when we have them mapped with
both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage mapcount
tracking. The alternative is to alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.

For anonymous pages, we set PG_double_map when a PMD of the page gets split
for the first time, but the page still has a PMD mapping. The additional
references go away with the last compound_mapcount.

File pages get PG_double_map set on the first map of the page with PTEs and
it goes away when the page gets evicted from the page cache.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries, but we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
requests to split a pinned huge page: it expects the page count to be equal
to the sum of the mapcounts of all sub-pages plus one (the split_huge_page
caller must have a reference to the head page).

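That expectation can be restated in code roughly as follows (an
illustration of the invariant described above for an anonymous THP
whose head page is "head", not the exact check implemented in
mm/huge_memory.c)::

	/*
	 * Any reference beyond the mappings and the caller's own pin
	 * means somebody else holds the page: refuse to split.
	 */
	if (page_count(head) != total_mapcount(head) + 1)
		return -EBUSY;
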
split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After the
atomic_add() we don't care about the ->_refcount value. We already know how
many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind. It's
clear where the reference should go after the split: it will stay on the head
page.

Note that split_huge_pmd() doesn't have any limitation on refcounting:
the pmd can be split at any point and it never fails.

Partial unmap and deferred_split_huge_page()
=============================================

Unmapping part of a THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of a THP is not in use
in page_remove_rmap() and queue the THP for splitting if memory pressure
comes. Splitting will free up unused subpages.

Splitting the page right away is not an option due to the locking context in
the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2) if
a THP crosses a VMA boundary.

The function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.
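
A simplified sketch of the queueing side (the exact placement of this
hook in page_remove_rmap() may differ from what is shown here)::

	/* after dropping a subpage's PTE mapping in page_remove_rmap() */
	if (PageTransCompound(page))
		deferred_split_huge_page(compound_head(page));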