# SPDX-License-Identifier: GPL-2.0-only

menu "Memory Management options"

config SELECT_MEMORY_MODEL
	def_bool y
	depends on ARCH_SELECT_MEMORY_MODEL

choice
	prompt "Memory model"
	depends on SELECT_MEMORY_MODEL
	default SPARSEMEM_MANUAL if ARCH_SPARSEMEM_DEFAULT
	default FLATMEM_MANUAL
	help
	  This option allows you to change some of the ways that
	  Linux manages its memory internally. Most users will
	  only have one option here selected by the architecture
	  configuration. This is normal.

config FLATMEM_MANUAL
	bool "Flat Memory"
	depends on !ARCH_SPARSEMEM_ENABLE || ARCH_FLATMEM_ENABLE
	help
	  This option is best suited for non-NUMA systems with
	  a flat address space. FLATMEM is the most efficient
	  memory model in terms of performance and resource
	  consumption, and it is the best option for smaller systems.

	  For systems that have holes in their physical address
	  spaces and for features like NUMA and memory hotplug,
	  choose "Sparse Memory".

	  If unsure, choose this option (Flat Memory) over any other.

config SPARSEMEM_MANUAL
	bool "Sparse Memory"
	depends on ARCH_SPARSEMEM_ENABLE
	help
	  This will be the only option for some systems, including
	  memory hot-plug systems. This is normal.

	  This option provides efficient support for systems with
	  holes in their physical address space and allows memory
	  hot-plug and hot-remove.

	  If unsure, choose "Flat Memory" over this option.

endchoice

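#
# Illustrative sketch (not part of this file; EXAMPLE_ARCH is a made-up symbol):
# the "Memory model" choice above is driven by per-architecture opt-in symbols.
# An architecture that wants the model to be user-selectable, defaulting to
# SPARSEMEM, would typically select them from its own arch/*/Kconfig, roughly:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select ARCH_SELECT_MEMORY_MODEL
#		select ARCH_SPARSEMEM_ENABLE
#		select ARCH_SPARSEMEM_DEFAULT
#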
config SPARSEMEM
	def_bool y
	depends on (!SELECT_MEMORY_MODEL && ARCH_SPARSEMEM_ENABLE) || SPARSEMEM_MANUAL

config FLATMEM
	def_bool y
	depends on !SPARSEMEM || FLATMEM_MANUAL

#
# SPARSEMEM_EXTREME (which is the default) does some bootmem
# allocations when sparse_init() is called.  If this cannot
# be done on your architecture, select this option.  However,
# statically allocating the mem_section[] array can potentially
# consume vast quantities of .bss, so be careful.
#
# This option will also potentially produce smaller runtime code
# with gcc 3.4 and later.
#
config SPARSEMEM_STATIC
	bool

#
# Architecture platforms which require a two level mem_section in SPARSEMEM
# must select this option. This is usually for architecture platforms with
# an extremely sparse physical address space.
#
config SPARSEMEM_EXTREME
	def_bool y
	depends on SPARSEMEM && !SPARSEMEM_STATIC

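#
# Illustrative sketch (assumption; EXAMPLE_ARCH is a made-up symbol): an
# architecture that cannot perform the early allocation mentioned above would
# force the static variant from its own Kconfig, e.g.:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select SPARSEMEM_STATIC if SPARSEMEM
#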
config SPARSEMEM_VMEMMAP_ENABLE
	bool

config SPARSEMEM_VMEMMAP
	bool "Sparse Memory virtual memmap"
	depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
	default y
	help
	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
	  pfn_to_page and page_to_pfn operations.  This is the most
	  efficient option when sufficient kernel resources are available.

config HAVE_MEMBLOCK_PHYS_MAP
	bool

config HAVE_FAST_GUP
	depends on MMU
	bool

# Don't discard allocated memory used to track "memory" and "reserved" memblocks
# after early boot, so it can still be used to test for validity of memory.
# Also, memblocks are updated with memory hot(un)plug.
config ARCH_KEEP_MEMBLOCK
	bool

# Keep arch NUMA mapping infrastructure post-init.
config NUMA_KEEP_MEMINFO
	bool

config MEMORY_ISOLATION
	bool

# IORESOURCE_SYSTEM_RAM regions in the kernel resource tree that are marked
# IORESOURCE_EXCLUSIVE cannot be mapped to user space, for example, via
# /dev/mem.
config EXCLUSIVE_SYSTEM_RAM
	def_bool y
	depends on !DEVMEM || STRICT_DEVMEM

#
# Only set this on architectures that have completely implemented the memory
# hotplug feature. If you are not sure, don't touch it.
#
config HAVE_BOOTMEM_INFO_NODE
	def_bool n

config ARCH_ENABLE_MEMORY_HOTPLUG
	bool

# eventually, we can have this option just 'select SPARSEMEM'
config MEMORY_HOTPLUG
	bool "Allow for memory hot-add"
	select MEMORY_ISOLATION
	depends on SPARSEMEM
	depends on ARCH_ENABLE_MEMORY_HOTPLUG
	depends on 64BIT
	select NUMA_KEEP_MEMINFO if NUMA

config MEMORY_HOTPLUG_DEFAULT_ONLINE
	bool "Online the newly added memory blocks by default"
	depends on MEMORY_HOTPLUG
	help
	  This option sets the default value of the memory hotplug onlining
	  policy (/sys/devices/system/memory/auto_online_blocks), which
	  determines what happens to newly added memory regions. The policy
	  can always be changed at runtime.
	  See Documentation/admin-guide/mm/memory-hotplug.rst for more information.

	  Say Y here if you want all hot-plugged memory blocks to appear in
	  'online' state by default.
	  Say N here if you want the default policy to keep all hot-plugged
	  memory blocks in 'offline' state.

config ARCH_ENABLE_MEMORY_HOTREMOVE
	bool

config MEMORY_HOTREMOVE
	bool "Allow for memory hot remove"
	select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64)
	depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
	depends on MIGRATION

config MHP_MEMMAP_ON_MEMORY
	def_bool y
	depends on MEMORY_HOTPLUG && SPARSEMEM_VMEMMAP
	depends on ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE

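#
# Illustrative sketch (hypothetical architecture symbol): an architecture
# advertises hotplug support by selecting the ARCH_* gates used above from its
# own arch/*/Kconfig, roughly:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select ARCH_ENABLE_MEMORY_HOTPLUG
#		select ARCH_ENABLE_MEMORY_HOTREMOVE
#		select ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
#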
# Heavily threaded applications may benefit from splitting the mm-wide
# page_table_lock, so that faults on different parts of the user address
# space can be handled with less contention: split it at this NR_CPUS.
# Default to 4 for wider testing, though 8 might be more appropriate.
# ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
# PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
# SPARC32 allocates multiple pte tables within a single page, and therefore
# a per-page lock leads to problems when multiple tables need to be locked
# at the same time (e.g. copy_page_range()).
# DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
#
config SPLIT_PTLOCK_CPUS
	int
	default "999999" if !MMU
	default "999999" if ARM && !CPU_CACHE_VIPT
	default "999999" if PARISC && !PA20
	default "999999" if SPARC32
	default "4"

config ARCH_ENABLE_SPLIT_PMD_PTLOCK
	bool

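#
# Illustrative sketch (assumption; EXAMPLE_ARCH is a made-up symbol): an
# architecture whose PMD page tables can host the split lock opts in from its
# own Kconfig, typically guarded by the configurations it supports, e.g.:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select ARCH_ENABLE_SPLIT_PMD_PTLOCK if 64BIT
#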
#
# support for memory balloon
config MEMORY_BALLOON
	bool

#
# support for memory balloon compaction
config BALLOON_COMPACTION
	bool "Allow for balloon memory compaction/migration"
	def_bool y
	depends on COMPACTION && MEMORY_BALLOON
	help
	  Memory fragmentation introduced by ballooning might significantly
	  reduce the number of 2MB contiguous memory blocks that can be used
	  within a guest, thus imposing performance penalties associated with
	  the reduced number of transparent huge pages that could be used by
	  the guest workload. Allowing compaction and migration of memory
	  pages enlisted as part of memory balloon devices avoids this
	  scenario and helps to improve memory defragmentation.

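#
# Illustrative sketch (hypothetical driver symbol): a balloon driver opts in to
# this infrastructure from its own Kconfig and gets compaction/migration of its
# pages automatically whenever COMPACTION is also enabled, roughly:
#
#	config EXAMPLE_BALLOON_DRIVER
#		tristate "Example balloon driver"
#		select MEMORY_BALLOON
#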
#
# support for memory compaction
config COMPACTION
	bool "Allow for memory compaction"
	def_bool y
	select MIGRATION
	depends on MMU
	help
	  Compaction is the only memory management component to form
	  high order (larger physically contiguous) memory blocks
	  reliably. The page allocator relies on compaction heavily and
	  the lack of the feature can lead to unexpected OOM killer
	  invocations for high order memory requests. You shouldn't
	  disable this option unless there really is a strong reason for
	  it; if you do, we would be really interested to hear about it at
	  linux-mm@kvack.org.

| 226 | # |
Alexander Duyck | 36e66c5 | 2020-04-06 20:04:56 -0700 | [diff] [blame] | 227 | # support for free page reporting |
| 228 | config PAGE_REPORTING |
| 229 | bool "Free page reporting" |
| 230 | def_bool n |
| 231 | help |
| 232 | Free page reporting allows for the incremental acquisition of |
| 233 | free pages from the buddy allocator for the purpose of reporting |
| 234 | those pages to another entity, such as a hypervisor, so that the |
| 235 | memory can be freed within the host for other uses. |
| 236 | |
| 237 | # |
Christoph Lameter | 7cbe34c | 2006-01-08 01:00:49 -0800 | [diff] [blame] | 238 | # support for page migration |
| 239 | # |
| 240 | config MIGRATION |
Christoph Lameter | b20a350 | 2006-03-22 00:09:12 -0800 | [diff] [blame] | 241 | bool "Page migration" |
Christoph Lameter | 6c5240a | 2006-06-23 02:03:37 -0700 | [diff] [blame] | 242 | def_bool y |
Chen Gang | de32a81 | 2013-09-12 15:14:08 -0700 | [diff] [blame] | 243 | depends on (NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION || CMA) && MMU |
Christoph Lameter | b20a350 | 2006-03-22 00:09:12 -0800 | [diff] [blame] | 244 | help |
| 245 | Allows the migration of the physical location of pages of processes |
Mel Gorman | e9e96b3 | 2010-05-24 14:32:21 -0700 | [diff] [blame] | 246 | while the virtual addresses are not changed. This is useful in |
| 247 | two situations. The first is on NUMA systems to put pages nearer |
| 248 | to the processors accessing. The second is when allocating huge |
| 249 | pages as migration can relocate pages to satisfy a huge page |
| 250 | allocation instead of reclaiming. |
Greg Kroah-Hartman | 6550e07 | 2006-06-12 17:11:31 -0700 | [diff] [blame] | 251 | |
config ARCH_ENABLE_HUGEPAGE_MIGRATION
	bool

config ARCH_ENABLE_THP_MIGRATION
	bool

config HUGETLB_PAGE_SIZE_VARIABLE
	def_bool n
	help
	  Allows the pageblock_order value to be dynamic instead of just standard
	  HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
	  on a platform.

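#
# Illustrative sketch (hypothetical architecture symbol): an architecture that
# supports several HugeTLB page sizes would turn the option above on from its
# own Kconfig, e.g.:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select HUGETLB_PAGE_SIZE_VARIABLE if HUGETLB_PAGE
#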
config CONTIG_ALLOC
	def_bool (MEMORY_ISOLATION && COMPACTION) || CMA

config PHYS_ADDR_T_64BIT
	def_bool 64BIT

config BOUNCE
	bool "Enable bounce buffers"
	default y
	depends on BLOCK && MMU && HIGHMEM
	help
	  Enable bounce buffers for devices that cannot access the full range of
	  memory available to the CPU. Enabled by default when HIGHMEM is
	  selected, but you may say n to override this.

config VIRT_TO_BUS
	bool
	help
	  An architecture should select this if it implements the
	  deprecated interface virt_to_bus().  All new architectures
	  should probably not select this.

config MMU_NOTIFIER
	bool
	select SRCU
	select INTERVAL_TREE

config KSM
	bool "Enable KSM for page merging"
	depends on MMU
	select XXHASH
	help
	  Enable Kernel Samepage Merging: KSM periodically scans those areas
	  of an application's address space that an app has advised may be
	  mergeable. When it finds pages of identical content, it replaces
	  the many instances with a single page of that content, thus
	  saving memory until one or another app needs to modify the content.
	  Recommended for use with KVM, or with other duplicative applications.
	  See Documentation/vm/ksm.rst for more information: KSM is inactive
	  until a program has madvised that an area is MADV_MERGEABLE, and
	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).

config DEFAULT_MMAP_MIN_ADDR
	int "Low address space to protect from user allocation"
	depends on MMU
	default 4096
	help
	  This is the portion of low virtual memory which should be protected
	  from userspace allocation. Keeping a user from writing to low pages
	  can help reduce the impact of kernel NULL pointer bugs.

	  For most ia64, ppc64 and x86 users with lots of address space,
	  a value of 65536 is reasonable and should cause no problems.
	  On arm and other archs it should not be higher than 32768.
	  Programs which use vm86 functionality or have some need to map
	  this low address space will need CAP_SYS_RAWIO or disable this
	  protection by setting the value to 0.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_min_addr tunable.

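#
# Illustrative sketch (values taken from the help text above): a 64-bit x86
# defconfig would typically pin this in a configuration fragment as
#
#	CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
#
# while the running system can still be tuned later through
# /proc/sys/vm/mmap_min_addr.
#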
config ARCH_SUPPORTS_MEMORY_FAILURE
	bool

config MEMORY_FAILURE
	depends on MMU
	depends on ARCH_SUPPORTS_MEMORY_FAILURE
	bool "Enable recovery from hardware memory errors"
	select MEMORY_ISOLATION
	select RAS
	help
	  Enables code to recover from some memory failures on systems
	  with MCA recovery. This allows a system to continue running
	  even when some of its memory has uncorrected errors. This requires
	  special hardware support and typically ECC memory.

config HWPOISON_INJECT
	tristate "HWPoison pages injector"
	depends on MEMORY_FAILURE && DEBUG_KERNEL && PROC_FS
	select PROC_PAGE_MONITOR

config NOMMU_INITIAL_TRIM_EXCESS
	int "Turn on mmap() excess space trimming before booting"
	depends on !MMU
	default 1
	help
	  The NOMMU mmap() frequently needs to allocate large contiguous chunks
	  of memory on which to store mappings, but it can only ask the system
	  allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
	  more than it requires. To deal with this, mmap() is able to trim off
	  the excess and return it to the allocator.

	  If trimming is enabled, the excess is trimmed off and returned to the
	  system allocator, which can cause extra fragmentation, particularly
	  if there are a lot of transient processes.

	  If trimming is disabled, the excess is kept, but not used, which for
	  long-term mappings means that the space is wasted.

	  Trimming can be dynamically controlled through a sysctl option
	  (/proc/sys/vm/nr_trim_pages) which specifies the minimum number of
	  excess pages there must be before trimming should occur, or zero if
	  no trimming is to occur.

	  This option specifies the initial value of the sysctl. The default
	  of 1 says that all excess pages should be trimmed.

	  See Documentation/admin-guide/mm/nommu-mmap.rst for more information.

config TRANSPARENT_HUGEPAGE
	bool "Transparent Hugepage Support"
	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT
	select COMPACTION
	select XARRAY_MULTI
	help
	  Transparent Hugepages allows the kernel to use huge pages and
	  huge TLB entries transparently for applications whenever possible.
	  This feature can improve computing performance for certain
	  applications by speeding up page faults during memory
	  allocation, by reducing the number of TLB misses and by speeding
	  up page table walking.

	  If you are memory constrained on an embedded system, you may
	  want to say N.

choice
	prompt "Transparent Hugepage Support sysfs defaults"
	depends on TRANSPARENT_HUGEPAGE
	default TRANSPARENT_HUGEPAGE_ALWAYS
	help
	  Selects the sysfs defaults for Transparent Hugepage Support.

config TRANSPARENT_HUGEPAGE_ALWAYS
	bool "always"
	help
	  Enabling Transparent Hugepage always can increase the
	  memory footprint of applications without a guaranteed
	  benefit, but it will work automatically for all applications.

config TRANSPARENT_HUGEPAGE_MADVISE
	bool "madvise"
	help
	  Enabling Transparent Hugepage madvise will only provide a
	  performance benefit to applications using madvise(MADV_HUGEPAGE),
	  but it won't risk increasing the memory footprint of applications
	  without a guaranteed benefit.
endchoice

config ARCH_WANTS_THP_SWAP
	def_bool n

config THP_SWAP
	def_bool y
	depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP
	help
	  Swap transparent huge pages in one piece, without splitting.
	  XXX: for now, the swap cluster backing a transparent huge page
	  will be split after swapout.

	  For selection by architectures with reasonable THP sizes.

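#
# Illustrative sketch (hypothetical architecture symbol): the THP machinery
# above is gated on per-architecture opt-ins, which an arch/*/Kconfig would
# provide roughly like:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select HAVE_ARCH_TRANSPARENT_HUGEPAGE
#		select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
#		select ARCH_WANTS_THP_SWAP if 64BIT
#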
#
# UP and nommu archs use km based percpu allocator
#
config NEED_PER_CPU_KM
	depends on !SMP || !MMU
	bool
	default y

config NEED_PER_CPU_EMBED_FIRST_CHUNK
	bool

config NEED_PER_CPU_PAGE_FIRST_CHUNK
	bool

config USE_PERCPU_NUMA_NODE_ID
	bool

config HAVE_SETUP_PER_CPU_AREA
	bool

config FRONTSWAP
	bool

config CMA
	bool "Contiguous Memory Allocator"
	depends on MMU
	select MIGRATION
	select MEMORY_ISOLATION
	help
	  This enables the Contiguous Memory Allocator which allows other
	  subsystems to allocate big physically-contiguous blocks of memory.
	  CMA reserves a region of memory and allows only movable pages to
	  be allocated from it. This way, the kernel can use the memory for
	  pagecache, and when a subsystem requests a contiguous area, the
	  allocated pages are migrated away to serve the contiguous request.

	  If unsure, say "n".

config CMA_DEBUG
	bool "CMA debug messages (DEVELOPMENT)"
	depends on DEBUG_KERNEL && CMA
	help
	  Turns on debug messages in CMA.  This produces KERN_DEBUG
	  messages for every CMA call as well as various messages while
	  processing calls such as dma_alloc_from_contiguous().
	  This option does not affect warning and error messages.

config CMA_DEBUGFS
	bool "CMA debugfs interface"
	depends on CMA && DEBUG_FS
	help
	  Turns on the DebugFS interface for CMA.

config CMA_SYSFS
	bool "CMA information through sysfs interface"
	depends on CMA && SYSFS
	help
	  This option exposes some sysfs attributes to get information
	  from CMA.

config CMA_AREAS
	int "Maximum count of the CMA areas"
	depends on CMA
	default 19 if NUMA
	default 7
	help
	  CMA allows creating CMA areas for a particular purpose, mainly
	  used as device-private areas. This parameter sets the maximum
	  number of CMA areas in the system.

	  If unsure, leave the default value "7" in UMA and "19" in NUMA.

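#
# Illustrative sketch: a NUMA defconfig fragment that enables CMA with the
# default number of areas from above would contain
#
#	CONFIG_CMA=y
#	CONFIG_CMA_AREAS=19
#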
config MEM_SOFT_DIRTY
	bool "Track memory changes"
	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
	select PROC_PAGE_MONITOR
	help
	  This option enables tracking of memory changes by introducing a
	  soft-dirty bit on PTEs. This bit is set when someone writes into
	  a page, just like the regular dirty bit, but unlike the latter it
	  can be cleared by hand.

	  See Documentation/admin-guide/mm/soft-dirty.rst for more details.

config ZSWAP
	bool "Compressed cache for swap pages (EXPERIMENTAL)"
	depends on SWAP && CRYPTO=y
	select FRONTSWAP
	select ZPOOL
	help
	  A lightweight compressed cache for swap pages.  It takes
	  pages that are in the process of being swapped out and attempts to
	  compress them into a dynamically allocated RAM-based memory pool.
	  This can result in a significant I/O reduction on the swap device and,
	  in the case where decompressing from RAM is faster than swap device
	  reads, can also improve workload performance.

	  This is marked experimental because it is a new feature (as of
	  v3.11) that interacts heavily with memory reclaim.  While these
	  interactions don't cause any known issues on simple memory setups,
	  they have not been fully explored on the large set of potential
	  configurations and workloads that exist.

choice
	prompt "Compressed cache for swap pages default compressor"
	depends on ZSWAP
	default ZSWAP_COMPRESSOR_DEFAULT_LZO
	help
	  Selects the default compression algorithm for the compressed cache
	  for swap pages.

	  For an overview of what kind of performance can be expected from
	  a particular compression algorithm, please refer to the benchmarks
	  available at the following LWN page:
	  https://lwn.net/Articles/751795/

	  If in doubt, select 'LZO'.

	  The selection made here can be overridden by using the kernel
	  command line 'zswap.compressor=' option.

config ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
	bool "Deflate"
	select CRYPTO_DEFLATE
	help
	  Use the Deflate algorithm as the default compression algorithm.

config ZSWAP_COMPRESSOR_DEFAULT_LZO
	bool "LZO"
	select CRYPTO_LZO
	help
	  Use the LZO algorithm as the default compression algorithm.

config ZSWAP_COMPRESSOR_DEFAULT_842
	bool "842"
	select CRYPTO_842
	help
	  Use the 842 algorithm as the default compression algorithm.

config ZSWAP_COMPRESSOR_DEFAULT_LZ4
	bool "LZ4"
	select CRYPTO_LZ4
	help
	  Use the LZ4 algorithm as the default compression algorithm.

config ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
	bool "LZ4HC"
	select CRYPTO_LZ4HC
	help
	  Use the LZ4HC algorithm as the default compression algorithm.

config ZSWAP_COMPRESSOR_DEFAULT_ZSTD
	bool "zstd"
	select CRYPTO_ZSTD
	help
	  Use the zstd algorithm as the default compression algorithm.
endchoice

config ZSWAP_COMPRESSOR_DEFAULT
	string
	depends on ZSWAP
	default "deflate" if ZSWAP_COMPRESSOR_DEFAULT_DEFLATE
	default "lzo" if ZSWAP_COMPRESSOR_DEFAULT_LZO
	default "842" if ZSWAP_COMPRESSOR_DEFAULT_842
	default "lz4" if ZSWAP_COMPRESSOR_DEFAULT_LZ4
	default "lz4hc" if ZSWAP_COMPRESSOR_DEFAULT_LZ4HC
	default "zstd" if ZSWAP_COMPRESSOR_DEFAULT_ZSTD
	default ""

choice
	prompt "Compressed cache for swap pages default allocator"
	depends on ZSWAP
	default ZSWAP_ZPOOL_DEFAULT_ZBUD
	help
	  Selects the default allocator for the compressed cache for
	  swap pages.
	  The default is 'zbud' for compatibility; however, please do
	  read the description of each of the allocators below before
	  making the right choice.

	  The selection made here can be overridden by using the kernel
	  command line 'zswap.zpool=' option.

config ZSWAP_ZPOOL_DEFAULT_ZBUD
	bool "zbud"
	select ZBUD
	help
	  Use the zbud allocator as the default allocator.

config ZSWAP_ZPOOL_DEFAULT_Z3FOLD
	bool "z3fold"
	select Z3FOLD
	help
	  Use the z3fold allocator as the default allocator.

config ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
	bool "zsmalloc"
	select ZSMALLOC
	help
	  Use the zsmalloc allocator as the default allocator.
endchoice

config ZSWAP_ZPOOL_DEFAULT
	string
	depends on ZSWAP
	default "zbud" if ZSWAP_ZPOOL_DEFAULT_ZBUD
	default "z3fold" if ZSWAP_ZPOOL_DEFAULT_Z3FOLD
	default "zsmalloc" if ZSWAP_ZPOOL_DEFAULT_ZSMALLOC
	default ""

config ZSWAP_DEFAULT_ON
	bool "Enable the compressed cache for swap pages by default"
	depends on ZSWAP
	help
	  If selected, the compressed cache for swap pages will be enabled
	  at boot, otherwise it will be disabled.

	  The selection made here can be overridden by using the kernel
	  command line 'zswap.enabled=' option.

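#
# Illustrative sketch tying the defaults above together: a defconfig fragment
# that enables zswap at boot with zstd compression and the zsmalloc pool would
# contain
#
#	CONFIG_ZSWAP=y
#	CONFIG_ZSWAP_DEFAULT_ON=y
#	CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD=y
#	CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC=y
#
# The same effect can be obtained at boot time with zswap.enabled=1,
# zswap.compressor=zstd and zswap.zpool=zsmalloc on the kernel command line.
#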
config ZPOOL
	tristate "Common API for compressed memory storage"
	help
	  Compressed memory storage API.  This allows using zbud, z3fold
	  or zsmalloc.

config ZBUD
	tristate "Low (Up to 2x) density storage for compressed pages"
	depends on ZPOOL
	help
	  A special purpose allocator for storing compressed pages.
	  It is designed to store up to two compressed pages per physical
	  page.  While this design limits storage density, it has simple and
	  deterministic reclaim properties that make it preferable to a higher
	  density approach when reclaim will be used.

config Z3FOLD
	tristate "Up to 3x density storage for compressed pages"
	depends on ZPOOL
	help
	  A special purpose allocator for storing compressed pages.
	  It is designed to store up to three compressed pages per physical
	  page. It is a ZBUD derivative so the simplicity and determinism are
	  still there.

config ZSMALLOC
	tristate "Memory allocator for compressed pages"
	depends on MMU
	help
	  zsmalloc is a slab-based memory allocator designed to store
	  compressed RAM pages.  zsmalloc uses virtual memory mapping
	  in order to reduce fragmentation.  However, this results in a
	  non-standard allocator interface where a handle, not a pointer, is
	  returned by an alloc().  This handle must be mapped in order to
	  access the allocated space.

config ZSMALLOC_STAT
	bool "Export zsmalloc statistics"
	depends on ZSMALLOC
	select DEBUG_FS
	help
	  This option enables code in zsmalloc to collect various
	  statistics about what's happening in zsmalloc and exports that
	  information to userspace via debugfs.
	  If unsure, say N.

config GENERIC_EARLY_IOREMAP
	bool

config STACK_MAX_DEFAULT_SIZE_MB
	int "Default maximum user stack size for 32-bit processes (MB)"
	default 100
	range 8 2048
	depends on STACK_GROWSUP && (!64BIT || COMPAT)
	help
	  This is the maximum stack size in Megabytes in the VM layout of 32-bit
	  user processes when the stack grows upwards (currently only on parisc
	  arch) when the RLIMIT_STACK hard limit is unlimited.

	  A sane initial value is 100 MB.

config DEFERRED_STRUCT_PAGE_INIT
	bool "Defer initialisation of struct pages to kthreads"
	depends on SPARSEMEM
	depends on !NEED_PER_CPU_KM
	depends on 64BIT
	select PADATA
	help
	  Ordinarily all struct pages are initialised during early boot in a
	  single thread. On very large machines this can take a considerable
	  amount of time. If this option is set, large machines will bring up
	  a subset of memmap at boot and then initialise the rest in parallel.
	  This has a potential performance impact on tasks running early in the
	  lifetime of the system until these kthreads finish the
	  initialisation.

config PAGE_IDLE_FLAG
	bool
	select PAGE_EXTENSION if !64BIT
	help
	  This adds PG_idle and PG_young flags to 'struct page'.  PTE Accessed
	  bit writers can set the state of the bit in the flags so that PTE
	  Accessed bit readers may avoid disturbance.

config IDLE_PAGE_TRACKING
	bool "Enable idle page tracking"
	depends on SYSFS && MMU
	select PAGE_IDLE_FLAG
	help
	  This feature allows estimating the amount of user pages that have
	  not been touched during a given period of time. This information can
	  be useful to tune memory cgroup limits and/or for job placement
	  within a compute cluster.

	  See Documentation/admin-guide/mm/idle_page_tracking.rst for
	  more details.

config ARCH_HAS_CACHE_LINE_SIZE
	bool

config ARCH_HAS_PTE_DEVMAP
	bool

config ARCH_HAS_ZONE_DMA_SET
	bool

config ZONE_DMA
	bool "Support DMA zone" if ARCH_HAS_ZONE_DMA_SET
	default y if ARM64 || X86

config ZONE_DMA32
	bool "Support DMA32 zone" if ARCH_HAS_ZONE_DMA_SET
	depends on !X86_32
	default y if ARM64

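#
# Illustrative sketch (hypothetical architecture symbol): an architecture other
# than the ones listed above makes the DMA zones configurable by selecting the
# gate symbol from its own Kconfig, e.g.:
#
#	config EXAMPLE_ARCH
#		def_bool y
#		select ARCH_HAS_ZONE_DMA_SET
#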
config ZONE_DEVICE
	bool "Device memory (pmem, HMM, etc...) hotplug support"
	depends on MEMORY_HOTPLUG
	depends on MEMORY_HOTREMOVE
	depends on SPARSEMEM_VMEMMAP
	depends on ARCH_HAS_PTE_DEVMAP
	select XARRAY_MULTI
	help
	  Device memory hotplug support allows for establishing pmem,
	  or other device driver discovered memory regions, in the
	  memmap. This allows pfn_to_page() lookups of otherwise
	  "device-physical" addresses, which is needed for using a DAX
	  mapping in an O_DIRECT operation, among other things.

	  If FS_DAX is enabled, then say Y.

config DEV_PAGEMAP_OPS
	bool

#
# Helpers to mirror a range of the CPU page tables of a process into device
# page tables.
#
config HMM_MIRROR
	bool
	depends on MMU

config DEVICE_PRIVATE
	bool "Unaddressable device memory (GPU memory, ...)"
	depends on ZONE_DEVICE
	select DEV_PAGEMAP_OPS
	help
	  Allows creation of struct pages to represent unaddressable device
	  memory; i.e., memory that is only accessible from the device (or
	  group of devices).  You likely also want to select HMM_MIRROR.

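#
# Illustrative sketch (hypothetical driver symbol): a device driver that wants
# to migrate process memory to unaddressable device-private pages would
# typically wire this up in its own Kconfig roughly like:
#
#	config EXAMPLE_GPU_SVM
#		bool "Example GPU shared virtual memory support"
#		depends on DEVICE_PRIVATE
#		select HMM_MIRROR
#		select MMU_NOTIFIER
#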
config VMAP_PFN
	bool

config ARCH_USES_HIGH_VMA_FLAGS
	bool

config ARCH_HAS_PKEYS
	bool

config PERCPU_STATS
	bool "Collect percpu memory statistics"
	help
	  This feature collects and exposes statistics via debugfs. The
	  information includes global and per chunk statistics, which can
	  be used to help understand percpu memory usage.

config GUP_TEST
	bool "Enable infrastructure for get_user_pages()-related unit tests"
	depends on DEBUG_FS
	help
	  Provides /sys/kernel/debug/gup_test, which in turn provides a way
	  to make ioctl calls that can launch kernel-based unit tests for
	  the get_user_pages*() and pin_user_pages*() family of API calls.

	  These tests include benchmark testing of the _fast variants of
	  get_user_pages*() and pin_user_pages*(), as well as smoke tests of
	  the non-_fast variants.

	  There is also a sub-test that allows running dump_page() on any
	  of up to eight pages (selected by command line args) within the
	  range of user-space addresses. These pages are either pinned via
	  pin_user_pages*(), or pinned via get_user_pages*(), as specified
	  by other command line arguments.

	  See tools/testing/selftests/vm/gup_test.c

comment "GUP_TEST needs to have DEBUG_FS enabled"
	depends on !GUP_TEST && !DEBUG_FS

config GUP_GET_PTE_LOW_HIGH
	bool

config READ_ONLY_THP_FOR_FS
	bool "Read-only THP for filesystems (EXPERIMENTAL)"
	depends on TRANSPARENT_HUGEPAGE && SHMEM
	help
	  Allow khugepaged to put read-only file-backed pages in THP.

	  This is marked experimental because it is a new feature. Write
	  support of file THPs will be developed in the next few release
	  cycles.

config ARCH_HAS_PTE_SPECIAL
	bool

#
# Some architectures require a special hugepage directory format in order to
# support multiple hugepage sizes. For example, a4fe3ce76
# "powerpc/mm: Allow more flexible layouts for hugepage pagetables"
# introduced it on powerpc. This allows for more flexible hugepage
# pagetable layouts.
#
config ARCH_HAS_HUGEPD
	bool

config MAPPING_DIRTY_HELPERS
	bool

config KMAP_LOCAL
	bool

config KMAP_LOCAL_NON_LINEAR_PTE_ARRAY
	bool

# struct io_mapping based helper.  Selected by drivers that need it.
config IO_MAPPING
	bool

config SECRETMEM
	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED

config ANON_VMA_NAME
	bool "Anonymous VMA name support"
	depends on PROC_FS && ADVISE_SYSCALLS && MMU
	help
	  Allow naming anonymous virtual memory areas.

	  This feature allows assigning names to virtual memory areas. Assigned
	  names can later be retrieved from /proc/pid/maps and /proc/pid/smaps
	  and help identify individual anonymous memory areas.
	  Assigning a name to an anonymous virtual memory area might prevent
	  that area from being merged with adjacent virtual memory areas due to
	  the difference in their names.

source "mm/damon/Kconfig"

endmenu