.. _hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on-board memory) into the regular kernel path, with the
cornerstone of this being a specialized struct page for such memory (see
sections 5 to 7 of this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access the program address space coherently
with the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPUs, DSPs, or FPGAs are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
The sixth section presents a new migration helper that allows leveraging the
device DMA engine. Finally, the last section describes how device memory
interacts with memory cgroup (memcg) and rss accounting.

.. contents:: :local:

Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on-board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because the device can only access memory allocated
through a device specific API. This implies that all memory objects in a
program are not equal from the device point of view, which complicates large
programs that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private,
mmap shared) and memory allocated through the device driver API (this still
ends up with an mmap, but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
complex data sets (list, tree, ...) are hard to get right. Duplicating a
complex data set needs to re-map all the pointer relations between each of its
elements. This is error prone and programs get harder to debug because of the
duplicate data sets and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from the CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is
worse in the other direction: the CPU can only access a limited range of the
device memory and cannot perform atomic operations on it. Thus device memory
cannot be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32GBytes/s with PCIE 4.0
and 16 lanes). This is 33 times less than the fastest GPU memory (1 TBytes/s).
The final limitation is latency. Access to main memory from the device has an
order of magnitude higher latency than when the device accesses its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for a shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you
must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations, and
flush, ...). This cannot be done through common code for all devices. Hence,
HMM provides helpers to factor out everything that can be while leaving the
hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of the device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms, and from the
CPU point of view everything looks as if the page had been swapped out to disk.
Using a struct page gives the easiest and cleanest integration with existing mm
mechanisms. Here again, HMM only provides helpers, first to hotplug new
ZONE_DEVICE memory for the device memory and second to perform migration.
Policy decisions of what and when to migrate are left to the device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror a process
address space, keeping both the CPU and device page tables synchronized, but
also leverages device memory by migrating the part of the data set that is
actively being used by the device.


Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
the CPU page table into a device page table; HMM helps keep both synchronized.
A device driver that wants to mirror a process address space must start with
the registration of an hmm_mirror struct::

 int hmm_mirror_register(struct hmm_mirror *mirror,
                         struct mm_struct *mm);
 int hmm_mirror_register_locked(struct hmm_mirror *mirror,
                                struct mm_struct *mm);


The locked variant is to be used when the driver is already holding the
mmap_sem of the mm in write mode. The mirror struct has a set of callbacks that
are used to propagate CPU page table updates::

 struct hmm_mirror_ops {
     /* sync_cpu_device_pagetables() - synchronize page tables
      *
      * @mirror: pointer to struct hmm_mirror
      * @update_type: type of update that occurred to the CPU page table
      * @start: virtual start address of the range to update
      * @end: virtual end address of the range to update
      *
      * This callback ultimately originates from mmu_notifiers when the CPU
      * page table is updated. The device driver must update its page table
      * in response to this callback. The update argument tells what action
      * to perform.
      *
      * The device driver must not return from this callback until the device
      * page tables are completely updated (TLBs flushed, etc.); this is a
      * synchronous call.
      */
     void (*update)(struct hmm_mirror *mirror,
                    enum hmm_update action,
                    unsigned long start,
                    unsigned long end);
 };

The device driver must perform the update action to the range (mark range
read only, or fully unmap, ...). The device must be done with the update before
the driver callback returns.

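For illustration only, here is a minimal sketch of what such a mirror and its
update() callback could look like. The driver structures and helpers
(my_driver_mirror, my_driver_invalidate_range()) are hypothetical and stand in
for whatever a real driver uses to track and invalidate its device page
table::

 struct my_driver_mirror {
     struct hmm_mirror mirror;
     struct mutex update_lock;   /* the driver->update lock used further below */
     /* ... device page table state ... */
 };

 static void my_driver_update(struct hmm_mirror *mirror,
                              enum hmm_update action,
                              unsigned long start,
                              unsigned long end)
 {
     struct my_driver_mirror *dmirror =
         container_of(mirror, struct my_driver_mirror, mirror);

     mutex_lock(&dmirror->update_lock);
     /*
      * Invalidate or downgrade all device mappings covering [start, end)
      * according to action, and wait for the device TLB flush to complete
      * before returning, as this is a synchronous callback.
      */
     my_driver_invalidate_range(dmirror, action, start, end);
     mutex_unlock(&dmirror->update_lock);
 }

 static const struct hmm_mirror_ops my_driver_mirror_ops = {
     .update = my_driver_update,
 };

The driver would then point mirror.ops at this ops table and register the
mirror with hmm_mirror_register() as shown above.
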
When the device driver wants to populate a range of virtual addresses, it can
use either::

 long hmm_range_snapshot(struct hmm_range *range);
 long hmm_range_fault(struct hmm_range *range, bool block);

The first one (hmm_range_snapshot()) will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
The second one does trigger a page fault on missing or read-only entries when
write access is requested. Page faults use the generic mm page fault code path
just like a CPU page fault.

Both functions copy CPU page table entries into their pfns array argument. Each
entry in that array corresponds to an address in the virtual range. HMM
provides a set of flags to help the driver identify special CPU page table
entries.

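As an illustrative sketch (not taken from any real driver), a driver could walk
the filled pfns array like this to build its device page table. The flag and
value encodings are the driver-chosen tables passed in range.flags and
range.values, and my_driver_map_one() is a hypothetical helper that writes one
device page table entry::

 static int my_driver_fill_device_pt(struct my_driver_mirror *dmirror,
                                     struct hmm_range *range)
 {
     unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
     unsigned long i;

     for (i = 0; i < npages; i++) {
         uint64_t entry = range->pfns[i];
         unsigned long addr = range->start + (i << PAGE_SHIFT);

         if (entry == range->values[HMM_PFN_ERROR])
             return -EFAULT;          /* poisoned or otherwise unusable */
         if (!(entry & range->flags[HMM_PFN_VALID]))
             continue;                /* hole or special entry: skip it */

         /* The pfn is stored above pfn_shift; low bits carry the flags. */
         my_driver_map_one(dmirror, addr,
                           pfn_to_page(entry >> range->pfn_shift),
                           !!(entry & range->flags[HMM_PFN_WRITE]));
     }
     return 0;
 }
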
Locking with the update() callback is the most important aspect the driver must
respect in order to keep things properly synchronized. The usage pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      ...

      range.start = ...;
      range.end = ...;
      range.pfns = ...;
      range.flags = ...;
      range.values = ...;
      range.pfn_shift = ...;
      hmm_range_register(&range);

      /*
       * Just wait for range to be valid, safe to ignore return value as we
       * will use the return value of hmm_range_snapshot() below under the
       * mmap_sem to ascertain the validity of the range.
       */
      hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);

 again:
      down_read(&mm->mmap_sem);
      ret = hmm_range_snapshot(&range);
      if (ret) {
          up_read(&mm->mmap_sem);
          if (ret == -EAGAIN) {
              /*
               * No need to check hmm_range_wait_until_valid() return value;
               * on retry we will get a proper error from hmm_range_snapshot().
               */
              hmm_range_wait_until_valid(&range, TIMEOUT_IN_MSEC);
              goto again;
          }
          hmm_range_unregister(&range);
          return ret;
      }
      take_lock(driver->update);
      if (!range.valid) {
          release_lock(driver->update);
          up_read(&mm->mmap_sem);
          goto again;
      }

      // Use pfns array content to update device page table

      hmm_range_unregister(&range);
      release_lock(driver->update);
      up_read(&mm->mmap_sem);
      return 0;
 }
| 259 | |
Ralph Campbell | 76ea470 | 2018-04-10 16:28:11 -0700 | [diff] [blame] | 260 | The driver->update lock is the same lock that the driver takes inside its |
Jérôme Glisse | a3e0d41 | 2019-05-13 17:20:01 -0700 | [diff] [blame^] | 261 | update() callback. That lock must be held before checking the range.valid |
| 262 | field to avoid any race with a concurrent CPU page table update. |
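
The range.flags and range.values assignments elided in the pattern above point
at driver-chosen encoding tables, indexed by the HMM_PFN_* enums, that tell HMM
how to encode each entry in the pfns array. A purely illustrative choice (the
actual bit values are up to each driver) could look like the following, with
range.pfn_shift then set to 5 so the pfn itself is stored above these bits::

 static const uint64_t my_range_flags[HMM_PFN_FLAG_MAX] = {
     [HMM_PFN_VALID]          = 1UL << 0,   /* entry points to a valid page */
     [HMM_PFN_WRITE]          = 1UL << 1,   /* write access is allowed */
     [HMM_PFN_DEVICE_PRIVATE] = 1UL << 2,   /* page is device private memory */
 };

 static const uint64_t my_range_values[HMM_PFN_VALUE_MAX] = {
     [HMM_PFN_ERROR]   = 1UL << 3,          /* entry could not be snapshotted */
     [HMM_PFN_NONE]    = 0,                 /* nothing is mapped there */
     [HMM_PFN_SPECIAL] = 1UL << 4,          /* special CPU mapping, do not use */
 };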

HMM implements all of this on top of the mmu_notifier API because we wanted a
simpler API and also to be able to perform optimizations later on, like doing
concurrent device updates in multi-device scenarios.

HMM also serves to bridge the impedance mismatch between how CPU page table
updates are done (by the CPU writing to the page table and flushing TLBs) and
how devices update their own page tables. Device updates are a multi-step
process. First, appropriate commands are written to a buffer, then this buffer
is scheduled for execution on the device. It is only once the device has
executed the commands in the buffer that the update is done. Creating and
scheduling the update command buffer can happen concurrently for multiple
devices. Waiting for each device to report commands as executed is serialized
(there is no point in doing this concurrently).


Represent and manage device memory from core kernel point of view
==================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

HMM provides a set of helpers to register and hotplug device memory as a new
region needing a struct page. This is offered through a very simple API::

 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
                                   struct device *device,
                                   unsigned long size);
 void hmm_devmem_remove(struct hmm_devmem *devmem);

The hmm_devmem_ops is where most of the important things are::

 struct hmm_devmem_ops {
     void (*free)(struct hmm_devmem *devmem, struct page *page);
     int (*fault)(struct hmm_devmem *devmem,
                  struct vm_area_struct *vma,
                  unsigned long addr,
                  struct page *page,
                  unsigned flags,
                  pmd_t *pmdp);
 };

The first callback (free()) happens when the last reference on a device page is
dropped. This means the device page is now free and no longer used by anyone.
The second callback happens whenever the CPU tries to access a device page,
which it cannot do directly. This second callback must trigger a migration back
to system memory.

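To make the shape of these callbacks concrete, here is a hypothetical sketch of
a driver's hmm_devmem_ops. my_devmem_migrate_back(), my_devmem_free_block() and
the assumption that fault() returns 0 on success or a VM_FAULT_* code on error
are illustrative only; a real driver resolves the CPU fault by migrating the
page back with the migrate_vma() helper described in the next section::

 static int my_devmem_fault(struct hmm_devmem *devmem,
                            struct vm_area_struct *vma,
                            unsigned long addr,
                            struct page *page,
                            unsigned flags,
                            pmd_t *pmdp)
 {
     /*
      * Copy the content of the device page back to a freshly allocated
      * system memory page using the device DMA engine, then let the CPU
      * fault be retried against that new page.
      */
     if (my_devmem_migrate_back(devmem, vma, addr, page))
         return VM_FAULT_SIGBUS;
     return 0;
 }

 static void my_devmem_free(struct hmm_devmem *devmem, struct page *page)
 {
     /* Give the backing device memory block back to the driver allocator. */
     my_devmem_free_block(devmem, page);
 }

 static const struct hmm_devmem_ops my_devmem_ops = {
     .free  = my_devmem_free,
     .fault = my_devmem_fault,
 };
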

Migration to and from device memory
===================================

Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copies from and to device memory. For this we need a new
migration helper::

 int migrate_vma(const struct migrate_vma_ops *ops,
                 struct vm_area_struct *vma,
                 unsigned long start,
                 unsigned long end,
                 unsigned long *src,
                 unsigned long *dst,
                 void *private);

Unlike other migration functions, it works on a range of virtual addresses.
There are two reasons for that. First, device DMA copy has a high setup
overhead cost and thus batching multiple pages is needed, as otherwise the
migration overhead makes the whole exercise pointless. The second reason is
that the migration might be for a range of addresses the device is actively
accessing.

The migrate_vma_ops struct defines two callbacks. The first one
(alloc_and_copy()) controls destination memory allocation and the copy
operation. The second one is there to allow the device driver to perform
cleanup operations after migration::

 struct migrate_vma_ops {
     void (*alloc_and_copy)(struct vm_area_struct *vma,
                            const unsigned long *src,
                            unsigned long *dst,
                            unsigned long start,
                            unsigned long end,
                            void *private);
     void (*finalize_and_map)(struct vm_area_struct *vma,
                              const unsigned long *src,
                              const unsigned long *dst,
                              unsigned long start,
                              unsigned long end,
                              void *private);
 };

It is important to stress that these migration helpers allow for holes in the
virtual address range. Some pages in the range might not be migrated for all
the usual reasons (page is pinned, page is locked, ...). This helper does not
fail but just skips over those pages.

The alloc_and_copy() callback may decide not to migrate all pages in the
range (for reasons under the callback's control). For those, the callback just
has to leave the corresponding dst entries empty.

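As an illustration of these rules, here is a hypothetical sketch of an
alloc_and_copy() callback migrating pages into device memory. The helpers
my_devmem_alloc_page(), my_dma_copy_to_device() and my_dma_wait() are assumed
driver routines; the MIGRATE_PFN_* encoding and the migrate_pfn() and
migrate_pfn_to_page() helpers come from include/linux/migrate.h::

 static void my_alloc_and_copy(struct vm_area_struct *vma,
                               const unsigned long *src,
                               unsigned long *dst,
                               unsigned long start,
                               unsigned long end,
                               void *private)
 {
     struct my_device *mdev = private;
     unsigned long addr, i;

     for (addr = start, i = 0; addr < end; addr += PAGE_SIZE, i++) {
         struct page *spage;
         struct page *dpage;

         /* Skip holes and pages the core mm could not isolate. */
         if (!(src[i] & MIGRATE_PFN_MIGRATE))
             continue;
         spage = migrate_pfn_to_page(src[i]);

         dpage = my_devmem_alloc_page(mdev);
         if (!dpage) {
             dst[i] = 0;    /* leave empty: this page will not be migrated */
             continue;
         }

         /*
          * Queue a DMA copy from the source page (which may be NULL for an
          * empty or zero page) into the freshly allocated device page.
          */
         my_dma_copy_to_device(mdev, spage, dpage);

         lock_page(dpage);
         dst[i] = migrate_pfn(page_to_pfn(dpage)) |
                  MIGRATE_PFN_LOCKED | MIGRATE_PFN_DEVICE;
     }

     /* Wait for all queued DMA copies to complete before returning. */
     my_dma_wait(mdev);
 }

A matching migrate_vma() call would pass this callback in a migrate_vma_ops
along with src and dst arrays sized for the number of pages in [start, end).
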
Finally, the migration of the struct page might fail (for file backed pages)
for various reasons (failure to freeze the reference, or update the page cache,
...). If that happens, then the finalize_and_map() callback can catch any pages
that were not migrated. Note that those pages were still copied to a new page
and thus we wasted bandwidth, but this is considered a rare event and a price
that we are willing to pay to keep the code simple.


Memory cgroup (memcg) and rss accounting
========================================

For now device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if the device
page is used for file backed pages, or shmem if the device page is used for
shared memory). This is a deliberate choice to keep existing applications, that
might start using device memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not free much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for memory cgroups. Device memory pages are
accounted against the same memory cgroup a regular page would be accounted to.
This does simplify migration to and from device memory. This also means that
migration back from device memory to regular memory cannot fail because it
would go above the memory cgroup limit: the pages were already accounted for
when they were migrated to device memory. We might revisit this choice later on
once we get more experience in how device memory is used and its impact on
memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit. Or when the last
reference is dropped in the case of shared memory or file backed memory.