# SPDX-License-Identifier: GPL-2.0
#
# General architecture dependent options
#

config CRASH_CORE
	bool

config KEXEC_CORE
	select CRASH_CORE
	bool

config HAVE_IMA_KEXEC
	bool

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the number
	  of counters provided by the hardware. This is realized by
	  switching between events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	  flags may increase the size of the kernel slightly. )

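# The nop-vs-jump patching itself needs kernel infrastructure, but the
# compiler-visible half of the idea can be sketched in plain C with
# __builtin_expect (a userspace sketch, not kernel code; the variable and
# function names are made up for illustration):
#
#	#include <stdio.h>
#
#	/* A condition that is almost always false is annotated so the
#	 * unlikely path is laid out out of line.  The kernel goes further
#	 * and live-patches a nop into a jump at runtime; that part is not
#	 * shown here. */
#	static int tracing_enabled;   /* toggled rarely, like a static key */
#
#	static long process(long x)
#	{
#		if (__builtin_expect(tracing_enabled, 0))   /* "unlikely" hint */
#			fprintf(stderr, "trace: x=%ld\n", x);
#		return x * 2;
#	}
#
#	int main(void)
#	{
#		long sum = 0;
#		for (long i = 0; i < 5; i++)
#			sum += process(i);
#		printf("%ld\n", sum);   /* 0+2+4+6+8 = 20 */
#		return 0;
#	}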
config STATIC_KEYS_SELFTEST
	bool "Static key selftest"
	depends on JUMP_LABEL
	help
	  Boot time self-test of the branch patching code.

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	select TASKS_RCU if PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	depends on ARCH_SUPPORTS_UPROBES
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	  managed by the kernel and kept transparent to the probed
	  application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

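# For architectures without efficient unaligned access, the portable
# workaround is the get_unaligned()/put_unaligned() pattern referenced
# above: copy byte-wise so the compiler never emits a possibly trapping
# unaligned load.  A minimal userspace sketch (the helper name mirrors
# the kernel's, but this is not the kernel implementation):
#
#	#include <stdint.h>
#	#include <string.h>
#	#include <stdio.h>
#
#	static uint64_t get_unaligned_u64(const void *p)
#	{
#		uint64_t v;
#		memcpy(&v, p, sizeof(v));  /* one plain load where legal */
#		return v;
#	}
#
#	int main(void)
#	{
#		unsigned char buf[16] = {0};
#		uint64_t magic = 0x1122334455667788ULL;
#
#		memcpy(buf + 3, &magic, sizeof(magic));  /* misaligned on purpose */
#		printf("0x%llx\n", (unsigned long long)get_unaligned_u64(buf + 3));
#		return 0;
#	}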
config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

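# The benefit can be seen in miniature: __builtin_bswap32() is a real
# GCC/Clang intrinsic, and because the compiler understands it, it can
# fuse the swap with an adjacent load or store (e.g. x86 movbe) in a way
# that opaque inline asm prevents.  A small sketch:
#
#	#include <stdint.h>
#	#include <stdio.h>
#
#	static uint32_t swab32(uint32_t x)
#	{
#		return __builtin_bswap32(x);  /* visible to the optimizer */
#	}
#
#	int main(void)
#	{
#		printf("0x%08x\n", swab32(0x12345678u));  /* 0x78563412 */
#		return 0;
#	}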
config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_FUNCTION_ERROR_INJECTION
	bool

config HAVE_NMI
	bool

#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_HAS_FORTIFY_SOURCE
	bool
	help
	  An architecture should select this when it can successfully
	  build and run with CONFIG_FORTIFY_SOURCE.

# Select if arch has all set_memory_ro/rw/x/nx() functions in asm/cacheflush.h
config ARCH_HAS_SET_MEMORY
	bool

# Select if arch init_task must go in the __init_task_data section
config ARCH_TASK_STRUCT_ON_STACK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

config HAVE_ARCH_THREAD_STRUCT_WHITELIST
	bool
	depends on !ARCH_TASK_STRUCT_ALLOCATOR
	help
	  An architecture should select this to provide hardened usercopy
	  knowledge about what region of the thread_struct should be
	  whitelisted for copying to userspace. Normally this is only the
	  FPU registers. Specifically, arch_thread_struct_whitelist()
	  should be implemented. Without this, the entire thread_struct
	  field in task_struct will be left whitelisted.

# Select if arch has its private alloc_thread_stack() function
config ARCH_THREAD_STACK_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h. For example the kprobes-based event
	  tracer needs this API.

config HAVE_RSEQ
	bool
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	help
	  This symbol should be selected by an architecture if it
	  supports an implementation of restartable sequences.

config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports calculating CPU cycle events
	  to determine how many clock cycles elapsed in a given period.

config HAVE_HARDLOCKUP_DETECTOR_PERF
	bool
	depends on HAVE_PERF_EVENTS_NMI
	help
	  The arch chooses to use the generic perf-NMI-based hardlockup
	  detector. Must define HAVE_PERF_EVENTS_NMI.

config HAVE_NMI_WATCHDOG
	depends on HAVE_NMI
	bool
	help
	  The arch provides a low level NMI watchdog. It provides
	  asm/nmi.h, and defines its own arch_touch_nmi_watchdog().

config HAVE_HARDLOCKUP_DETECTOR_ARCH
	bool
	select HAVE_NMI_WATCHDOG
	help
	  The arch chooses to provide its own hardlockup detector, which is
	  a superset of HAVE_NMI_WATCHDOG. It also conforms to the config
	  interfaces and parameters provided by the hardlockup detector subsystem.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WEAK_RELEASE_ACQUIRE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/userspace-api/seccomp_filter.rst for details.

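# Conceptually, a seccomp filter maps a syscall number (plus arguments)
# to a verdict.  The sketch below simulates that decision logic in plain
# C rather than installing a real classic-BPF filter; the syscall
# numbers are x86-64's and the verdict constants mirror
# SECCOMP_RET_ALLOW/SECCOMP_RET_KILL, but none of this touches the real
# seccomp API:
#
#	#include <stdio.h>
#
#	#define RET_ALLOW 0x7fff0000u   /* stand-in for SECCOMP_RET_ALLOW */
#	#define RET_KILL  0x00000000u   /* stand-in for SECCOMP_RET_KILL  */
#
#	/* Toy policy: permit only read/write/exit, kill everything else. */
#	static unsigned int filter(int nr)
#	{
#		switch (nr) {
#		case 0:   /* read  */
#		case 1:   /* write */
#		case 60:  /* exit  */
#			return RET_ALLOW;
#		default:
#			return RET_KILL;
#		}
#	}
#
#	int main(void)
#	{
#		printf("write  -> %s\n", filter(1)   == RET_ALLOW ? "allow" : "kill");
#		printf("ptrace -> %s\n", filter(101) == RET_ALLOW ? "allow" : "kill");
#		return 0;
#	}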
config HAVE_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_HAS_STACKPROTECTOR_NONE
	def_bool $(cc-option,-fno-stack-protector)

config STACKPROTECTOR
	bool "Stack Protector buffer overflow detection"
	depends on HAVE_STACKPROTECTOR
	depends on $(cc-option,-fstack-protector)
	default y
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

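# The canary scheme can be imitated by hand to see what the compiler
# emits implicitly.  This is a deliberately simplified userspace model
# (fixed guard value, a struct standing in for the stack frame so the
# overflow is well defined); in the real scheme the guard lives in
# __stack_chk_guard and a mismatch calls __stack_chk_fail():
#
#	#include <stdio.h>
#	#include <string.h>
#
#	static const unsigned long guard = 0xdeadbeefUL;
#
#	struct frame {
#		char buf[8];
#		unsigned long canary;  /* compiler places this below the
#					* saved return address */
#	};
#
#	/* Returns 1 if the canary survived the copy, 0 if it was smashed. */
#	static int copy_n(const char *src, size_t n)
#	{
#		struct frame f;
#
#		f.canary = guard;
#		memcpy(&f, src, n);   /* no bounds check: the bug */
#		return f.canary == guard;
#	}
#
#	int main(void)
#	{
#		printf("8-byte copy : %s\n", copy_n("AAAAAAAA", 8) ? "ok" : "smashed");
#		printf("12-byte copy: %s\n", copy_n("AAAAAAAAAAAA", 12) ? "ok" : "smashed");
#		return 0;
#	}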
config STACKPROTECTOR_STRONG
	bool "Strong Stack Protector"
	depends on STACKPROTECTOR
	depends on $(cc-option,-fstack-protector-strong)
	default y
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

config HAVE_ARCH_WITHIN_STACK_FRAMES
	bool
	help
	  An architecture should select this if it can walk the kernel stack
	  frames to determine if an object is part of either the arguments
	  or local variables (i.e. that it excludes saved return addresses,
	  and similar) by implementing an inline arch_within_stack_frames(),
	  which is used by CONFIG_HARDENED_USERCOPY.

config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still need to be protected.

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config ARCH_HAS_SCALED_CPUTIME
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

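# The race the help text warns about is a torn read: on a 32-bit arch a
# 64-bit cputime_t is read as two 32-bit halves, and a writer can slip
# in between them.  The sketch below forces that interleaving
# deterministically in a single thread (names are illustrative, not
# kernel code):
#
#	#include <stdint.h>
#	#include <stdio.h>
#
#	static uint64_t cputime;  /* stands in for a 64-bit cputime_t */
#
#	static uint32_t read_lo(void) { return (uint32_t)cputime; }
#	static uint32_t read_hi(void) { return (uint32_t)(cputime >> 32); }
#
#	/* Reader fetches the low half, the "writer" increments across the
#	 * 32-bit carry boundary, then the reader fetches the high half. */
#	static uint64_t torn_read(void)
#	{
#		uint32_t l, h;
#
#		cputime = 0xffffffffULL;   /* just below a carry */
#		l = read_lo();
#		cputime += 1;              /* update lands between the halves */
#		h = read_hi();
#		return ((uint64_t)h << 32) | l;   /* 0x1ffffffff: never stored */
#	}
#
#	int main(void)
#	{
#		printf("observed 0x%llx, expected 0xffffffff or 0x100000000\n",
#		       (unsigned long long)torn_read());
#		return 0;
#	}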
config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture runs not only the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config HAVE_EXIT_THREAD
	bool
	help
	  An architecture implements exit_thread.

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.
| 594 | |
| 595 | This value can be changed after boot using the |
| 596 | /proc/sys/vm/mmap_rnd_bits tunable. |
| 597 | |
| 598 | config HAVE_ARCH_MMAP_RND_COMPAT_BITS |
| 599 | bool |
| 600 | help |
| 601 | An arch should select this symbol if it supports running applications |
| 602 | in compatibility mode, supports setting a variable number of bits for |
| 603 | use in establishing the base address for mmap allocations, has MMU |
| 604 | enabled and provides values for both: |
| 605 | - ARCH_MMAP_RND_COMPAT_BITS_MIN |
| 606 | - ARCH_MMAP_RND_COMPAT_BITS_MAX |
| 607 | |
| 608 | config ARCH_MMAP_RND_COMPAT_BITS_MIN |
| 609 | int |
| 610 | |
| 611 | config ARCH_MMAP_RND_COMPAT_BITS_MAX |
| 612 | int |
| 613 | |
| 614 | config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT |
| 615 | int |
| 616 | |
| 617 | config ARCH_MMAP_RND_COMPAT_BITS |
| 618 | int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT |
| 619 | range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX |
| 620 | default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT |
| 621 | default ARCH_MMAP_RND_COMPAT_BITS_MIN |
| 622 | depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS |
| 623 | help |
| 624 | This value can be used to select the number of bits to use to |
| 625 | determine the random offset to the base address of vma regions |
| 626 | resulting from mmap allocations for compatible applications. This |
| 627 | value will be bounded by the architecture's minimum and maximum |
| 628 | supported values. |
| 629 | |
| 630 | This value can be changed after boot using the |
| 631 | /proc/sys/vm/mmap_rnd_compat_bits tunable. |
| 632 | |
Dmitry Safonov | 1b028f7 | 2017-03-06 17:17:19 +0300 | [diff] [blame] | 633 | config HAVE_ARCH_COMPAT_MMAP_BASES |
| 634 | bool |
| 635 | help |
| 636 | This allows 64-bit applications to invoke the 32-bit mmap() syscall |
| 637 | and, vice versa, 32-bit applications to call the 64-bit mmap(). |
| 638 | Required for applications doing different bitness syscalls. |
| 639 | |
Josh Triplett | 3033f14a | 2015-06-25 15:01:19 -0700 | [diff] [blame] | 640 | config HAVE_COPY_THREAD_TLS |
| 641 | bool |
| 642 | help |
| 643 | Architecture provides copy_thread_tls to accept the tls argument via |
| 644 | normal C parameter passing, rather than extracting the syscall |
| 645 | argument from pt_regs. |
| 646 | |
Josh Poimboeuf | b9ab5eb | 2016-02-28 22:22:42 -0600 | [diff] [blame] | 647 | config HAVE_STACK_VALIDATION |
| 648 | bool |
| 649 | help |
| 650 | Architecture supports the 'objtool check' host tool command, which |
| 651 | performs compile-time stack metadata validation. |
| 652 | |
Josh Poimboeuf | af085d9 | 2017-02-13 19:42:28 -0600 | [diff] [blame] | 653 | config HAVE_RELIABLE_STACKTRACE |
| 654 | bool |
| 655 | help |
| 656 | Architecture has a save_stack_trace_tsk_reliable() function which |
| 657 | only returns a stack trace if it can guarantee the trace is reliable. |
| 658 | |
George Spelvin | 468a942 | 2016-05-26 22:11:51 -0400 | [diff] [blame] | 659 | config HAVE_ARCH_HASH |
| 660 | bool |
| 661 | default n |
| 662 | help |
| 663 | If this is set, the architecture provides an <asm/hash.h> |
| 664 | file which provides platform-specific implementations of some |
| 665 | functions in <linux/hash.h> or fs/namei.c. |
| 666 | |
William Breathitt Gray | 3a49551 | 2016-05-27 18:08:27 -0400 | [diff] [blame] | 667 | config ISA_BUS_API |
| 668 | def_bool ISA |
| 669 | |
Al Viro | d212504 | 2012-10-23 13:17:59 -0400 | [diff] [blame] | 670 | # |
| 671 | # ABI hall of shame |
| 672 | # |
| 673 | config CLONE_BACKWARDS |
| 674 | bool |
| 675 | help |
| 676 | Architecture has tls passed as the 4th argument of clone(2), |
| 677 | not the 5th one. |
| 678 | |
| 679 | config CLONE_BACKWARDS2 |
| 680 | bool |
| 681 | help |
| 682 | Architecture has the first two arguments of clone(2) swapped. |
| 683 | |
Michal Simek | dfa9771 | 2013-08-13 16:00:53 -0700 | [diff] [blame] | 684 | config CLONE_BACKWARDS3 |
| 685 | bool |
| 686 | help |
| 687 | Architecture has tls passed as the 3rd argument of clone(2), |
| 688 | not the 5th one. |
| 689 | |
Al Viro | eaca6ea | 2012-11-25 23:12:10 -0500 | [diff] [blame] | 690 | config ODD_RT_SIGACTION |
| 691 | bool |
| 692 | help |
| 693 | Architecture has unusual rt_sigaction(2) arguments |
| 694 | |
Al Viro | 0a0e8cd | 2012-12-25 16:04:12 -0500 | [diff] [blame] | 695 | config OLD_SIGSUSPEND |
| 696 | bool |
| 697 | help |
| 698 | Architecture has the old sigsuspend(2) syscall, of the one-argument variety |
| 699 | |
| 700 | config OLD_SIGSUSPEND3 |
| 701 | bool |
| 702 | help |
| 703 | Even weirder antique ABI - three-argument sigsuspend(2) |
| 704 | |
Al Viro | 495dfbf | 2012-12-25 19:09:45 -0500 | [diff] [blame] | 705 | config OLD_SIGACTION |
| 706 | bool |
| 707 | help |
| 708 | Architecture has old sigaction(2) syscall. Nope, not the same |
| 709 | as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2), |
| 710 | but a fairly different variant of sigaction(2), thanks to OSF/1 |
| 711 | compatibility... |
| 712 | |
| 713 | config COMPAT_OLD_SIGACTION |
| 714 | bool |
| 715 | |
Deepa Dinamani | d4703dd | 2018-03-13 21:03:27 -0700 | [diff] [blame] | 716 | config 64BIT_TIME |
| 717 | def_bool ARCH_HAS_64BIT_TIME |
| 718 | help |
| 719 | This should be selected by all architectures that need to support |
| 720 | new system calls with a 64-bit time_t. This is relevant on all 32-bit |
| 721 | architectures, and on 64-bit architectures as part of compat syscall |
| 722 | handling. |
| 723 | |
Deepa Dinamani | 17435e5 | 2018-03-13 21:03:28 -0700 | [diff] [blame] | 724 | config COMPAT_32BIT_TIME |
| 725 | def_bool (!64BIT && 64BIT_TIME) || COMPAT |
| 726 | help |
| 727 | This enables 32-bit time_t support in addition to 64-bit time_t support. |
| 728 | This is relevant on all 32-bit architectures, and on 64-bit architectures |
| 729 | as part of compat syscall handling. |
| 730 | |
Christoph Hellwig | 0d4a619 | 2016-01-20 15:01:22 -0800 | [diff] [blame] | 731 | config ARCH_NO_COHERENT_DMA_MMAP |
| 732 | bool |
| 733 | |
Zhaoxiu Zeng | fff7fb0 | 2016-05-20 17:03:57 -0700 | [diff] [blame] | 734 | config CPU_NO_EFFICIENT_FFS |
| 735 | def_bool n |
| 736 | |
Andy Lutomirski | ba14a19 | 2016-08-11 02:35:21 -0700 | [diff] [blame] | 737 | config HAVE_ARCH_VMAP_STACK |
| 738 | def_bool n |
| 739 | help |
| 740 | An arch should select this symbol if it can support kernel stacks |
| 741 | in vmalloc space. This means: |
| 742 | |
| 743 | - vmalloc space must be large enough to hold many kernel stacks. |
| 744 | This may rule out many 32-bit architectures. |
| 745 | |
| 746 | - Stacks in vmalloc space need to work reliably. For example, if |
| 747 | vmap page tables are created on demand, either this mechanism |
| 748 | needs to work while the stack points to a virtual address with |
| 749 | unpopulated page tables or arch code (switch_to() and switch_mm(), |
| 750 | most likely) needs to ensure that the stack's page table entries |
| 751 | are populated before running on a possibly unpopulated stack. |
| 752 | |
| 753 | - If the stack overflows into a guard page, something reasonable |
| 754 | should happen. The definition of "reasonable" is flexible, but |
| 755 | instantly rebooting without logging anything would be unfriendly. |
| 756 | |
| 757 | config VMAP_STACK |
| 758 | default y |
| 759 | bool "Use a virtually-mapped stack" |
| 760 | depends on HAVE_ARCH_VMAP_STACK && !KASAN |
| 761 | ---help--- |
| 762 | Enable this if you want to use virtually-mapped kernel stacks |
| 763 | with guard pages. This causes kernel stack overflows to be |
| 764 | caught immediately rather than causing difficult-to-diagnose |
| 765 | corruption. |
| 766 | |
| 767 | This is presently incompatible with KASAN because KASAN expects |
| 768 | the stack to map directly to the KASAN shadow map using a formula |
| 769 | that is incorrect if the stack is in vmalloc space. |
| 770 | |
Laura Abbott | ad21fc4 | 2017-02-06 16:31:57 -0800 | [diff] [blame] | 771 | config ARCH_OPTIONAL_KERNEL_RWX |
| 772 | def_bool n |
| 773 | |
| 774 | config ARCH_OPTIONAL_KERNEL_RWX_DEFAULT |
| 775 | def_bool n |
| 776 | |
| 777 | config ARCH_HAS_STRICT_KERNEL_RWX |
| 778 | def_bool n |
| 779 | |
Laura Abbott | 0f5bf6d | 2017-02-06 16:31:58 -0800 | [diff] [blame] | 780 | config STRICT_KERNEL_RWX |
Laura Abbott | ad21fc4 | 2017-02-06 16:31:57 -0800 | [diff] [blame] | 781 | bool "Make kernel text and rodata read-only" if ARCH_OPTIONAL_KERNEL_RWX |
| 782 | depends on ARCH_HAS_STRICT_KERNEL_RWX |
| 783 | default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT |
| 784 | help |
| 785 | If this is set, kernel text and rodata memory will be made read-only, |
| 786 | and non-text memory will be made non-executable. This provides |
| 787 | protection against certain security exploits (e.g. executing the heap |
| 788 | or modifying text). |
| 789 | |
| 790 | These features are considered standard security practice these days. |
| 791 | You should say Y here in almost all cases. |
| 792 | |
| 793 | config ARCH_HAS_STRICT_MODULE_RWX |
| 794 | def_bool n |
| 795 | |
Laura Abbott | 0f5bf6d | 2017-02-06 16:31:58 -0800 | [diff] [blame] | 796 | config STRICT_MODULE_RWX |
Laura Abbott | ad21fc4 | 2017-02-06 16:31:57 -0800 | [diff] [blame] | 797 | bool "Set loadable kernel module data as NX and text as RO" if ARCH_OPTIONAL_KERNEL_RWX |
| 798 | depends on ARCH_HAS_STRICT_MODULE_RWX && MODULES |
| 799 | default !ARCH_OPTIONAL_KERNEL_RWX || ARCH_OPTIONAL_KERNEL_RWX_DEFAULT |
| 800 | help |
| 801 | If this is set, module text and rodata memory will be made read-only, |
| 802 | and non-text memory will be made non-executable. This provides |
| 803 | protection against certain security exploits (e.g. writing to text). |
| 804 | |
Christoph Hellwig | ea8c64a | 2018-01-10 16:21:13 +0100 | [diff] [blame] | 805 | # select if the architecture provides an asm/dma-direct.h header |
| 806 | config ARCH_HAS_PHYS_TO_DMA |
| 807 | bool |
| 808 | |
Kees Cook | 7a46ec0 | 2017-08-15 09:19:24 -0700 | [diff] [blame] | 809 | config ARCH_HAS_REFCOUNT |
| 810 | bool |
| 811 | help |
| 812 | An architecture selects this when it has implemented refcount_t |
| 813 | using open coded assembly primitives that provide an optimized |
| 814 | refcount_t implementation, possibly at the expense of some of the |
| 815 | full refcount state checks done when CONFIG_REFCOUNT_FULL=y. |
| 816 | |
| 817 | The refcount overflow check behavior, however, must be retained. |
| 818 | Catching overflows is the primary security concern for protecting |
| 819 | against bugs in reference counts. |
| 820 | |
Kees Cook | fd25d19f | 2017-06-21 13:00:26 -0700 | [diff] [blame] | 821 | config REFCOUNT_FULL |
| 822 | bool "Perform full reference count validation at the expense of speed" |
| 823 | help |
| 824 | Enabling this switches the refcounting infrastructure from a fast |
| 825 | unchecked atomic_t implementation to a fully state checked |
| 826 | implementation, which can be (slightly) slower but provides protections |
| 827 | against various use-after-free conditions that can be used in |
| 828 | security flaw exploits. |
| 829 | |
Peter Oberparleiter | 2521f2c | 2009-06-17 16:28:08 -0700 | [diff] [blame] | 830 | source "kernel/gcov/Kconfig" |
Masahiro Yamada | 45332b1 | 2018-07-05 15:24:12 +0900 | [diff] [blame^] | 831 | |
| 832 | source "scripts/gcc-plugins/Kconfig" |