# SPDX-License-Identifier: GPL-2.0-only
#
# Architectures that offer a FUNCTION_TRACER implementation should
#  select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FUNCTION_GRAPH_TRACER
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_DYNAMIC_FTRACE_WITH_REGS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	bool

config HAVE_DYNAMIC_FTRACE_WITH_ARGS
	bool
	help
	  If this is set, then the arguments and stack can be found in
	  the pt_regs passed to the function callback's regs parameter
	  by default, even without setting the REGS flag in the ftrace_ops.
	  This allows for use of regs_get_kernel_argument() and
	  kernel_stack_pointer().

config HAVE_FTRACE_MCOUNT_RECORD
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_SYSCALL_TRACEPOINTS
	bool
	help
	  See Documentation/trace/ftrace-design.rst

config HAVE_FENTRY
	bool
	help
	  Arch supports the gcc options -pg with -mfentry

config HAVE_NOP_MCOUNT
	bool
	help
	  Arch supports the gcc options -pg with -mrecord-mcount and -nop-mcount

config HAVE_C_RECORDMCOUNT
	bool
	help
	  The C version of recordmcount is available for this architecture.

config TRACER_MAX_TRACE
	bool

config TRACE_CLOCK
	bool

config RING_BUFFER
	bool
	select TRACE_CLOCK
	select IRQ_WORK

config EVENT_TRACING
	select CONTEXT_SWITCH_TRACER
	select GLOB
	bool

config CONTEXT_SWITCH_TRACER
	bool

config RING_BUFFER_ALLOW_SWAP
	bool
	help
	  Allow the use of ring_buffer_swap_cpu.
	  Adds a very slight overhead to tracing when enabled.

config PREEMPTIRQ_TRACEPOINTS
	bool
	depends on TRACE_PREEMPT_TOGGLE || TRACE_IRQFLAGS
	select TRACING
	default y
	help
	  Create preempt/irq toggle tracepoints if needed, so that other parts
	  of the kernel can use them to generate events or attach hooks to them.

# All tracer options should select GENERIC_TRACER. The options that are
# enabled by all tracers (context switch and event tracer) select TRACING
# instead. This allows those options to appear when no other tracer is
# selected, but to stay hidden when something else selects them. The two
# options GENERIC_TRACER and TRACING are needed to accomplish this hiding
# of the automatic options without creating circular dependencies.

config TRACING
	bool
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER
	select BINARY_PRINTF
	select EVENT_TRACING
	select TRACE_CLOCK

config GENERIC_TRACER
	bool
	select TRACING

#
# Minimum requirements an architecture has to meet for us to
# be able to offer generic tracing facilities:
#
config TRACING_SUPPORT
	bool
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on STACKTRACE_SUPPORT
	default y

if TRACING_SUPPORT

menuconfig FTRACE
	bool "Tracers"
	default y if DEBUG_KERNEL
	help
	  Enable the kernel tracing infrastructure.

if FTRACE

config BOOTTIME_TRACING
	bool "Boot-time Tracing support"
	depends on TRACING
	select BOOT_CONFIG
	help
	  Enable developers to set up the ftrace subsystem via a
	  supplemental kernel command line at boot time, for debugging
	  (tracing) driver initialization and the boot process.
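
	  As a minimal sketch (the event name is only an example; see
	  Documentation/trace/boottime-trace.rst for the actual syntax),
	  a supplemental bootconfig could contain:

	    ftrace.event.sched.sched_switch.enable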
151
Steven Rostedt606576c2008-10-06 19:06:12 -0400152config FUNCTION_TRACER
Steven Rostedt1b29b012008-05-12 21:20:42 +0200153 bool "Kernel Function Tracer"
Steven Rostedt606576c2008-10-06 19:06:12 -0400154 depends on HAVE_FUNCTION_TRACER
Steven Rostedt4d7a0772009-02-18 22:06:18 -0500155 select KALLSYMS
Steven Rostedt5e0a0932009-05-28 15:50:13 -0400156 select GENERIC_TRACER
Steven Rostedt35e8e302008-05-12 21:20:42 +0200157 select CONTEXT_SWITCH_TRACER
Steven Rostedt (VMware)0598e4f2017-04-06 10:28:12 -0400158 select GLOB
Thomas Gleixner01b1d882019-07-26 23:19:38 +0200159 select TASKS_RCU if PREEMPTION
Paul E. McKenneye5a971d2020-04-03 12:10:28 -0700160 select TASKS_RUDE_RCU
Steven Rostedt1b29b012008-05-12 21:20:42 +0200161 help
162 Enable the kernel to trace every kernel function. This is done
163 by using a compiler feature to insert a small, 5-byte No-Operation
Randy Dunlap40892362009-12-21 12:01:17 -0800164 instruction at the beginning of every kernel function, which NOP
Steven Rostedt1b29b012008-05-12 21:20:42 +0200165 sequence is then dynamically patched into a tracer call when
166 tracing is enabled by the administrator. If it's runtime disabled
167 (the bootup default), then the overhead of the instructions is very
168 small and not measurable even in micro-benchmarks.
Steven Rostedt35e8e302008-05-12 21:20:42 +0200169
Frederic Weisbeckerfb526072008-11-25 21:07:04 +0100170config FUNCTION_GRAPH_TRACER
171 bool "Kernel Function Graph Tracer"
172 depends on HAVE_FUNCTION_GRAPH_TRACER
Frederic Weisbecker15e6cb32008-11-11 07:14:25 +0100173 depends on FUNCTION_TRACER
Steven Rostedteb4a0372009-06-18 12:53:21 -0400174 depends on !X86_32 || !CC_OPTIMIZE_FOR_SIZE
Ingo Molnar764f3b92008-12-03 10:33:58 +0100175 default y
Frederic Weisbecker15e6cb32008-11-11 07:14:25 +0100176 help
Frederic Weisbeckerfb526072008-11-25 21:07:04 +0100177 Enable the kernel to trace a function at both its return
178 and its entry.
Matt LaPlante692105b2009-01-26 11:12:25 +0100179 Its first purpose is to trace the duration of functions and
180 draw a call graph for each thread with some information like
Randy Dunlap40892362009-12-21 12:01:17 -0800181 the return value. This is done by setting the current return
Matt LaPlante692105b2009-01-26 11:12:25 +0100182 address on the current task structure into a stack of calls.
Frederic Weisbecker15e6cb32008-11-11 07:14:25 +0100183
Steven Rostedt (VMware)61778cd72020-01-29 16:19:10 -0500184config DYNAMIC_FTRACE
185 bool "enable/disable function tracing dynamically"
186 depends on FUNCTION_TRACER
187 depends on HAVE_DYNAMIC_FTRACE
188 default y
189 help
190 This option will modify all the calls to function tracing
191 dynamically (will patch them out of the binary image and
192 replace them with a No-Op instruction) on boot up. During
193 compile time, a table is made of all the locations that ftrace
194 can function trace, and this table is linked into the kernel
195 image. When this is enabled, functions can be individually
196 enabled, and the functions not enabled will not affect
197 performance of the system.
198
199 See the files in /sys/kernel/debug/tracing:
200 available_filter_functions
201 set_ftrace_filter
202 set_ftrace_notrace
203
204 This way a CONFIG_FUNCTION_TRACER kernel is slightly larger, but
205 otherwise has native performance as long as no tracing is active.
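
	  For example, to restrict the function tracer to a subset of
	  functions (the glob is only illustrative; any name listed in
	  available_filter_functions can be used):

	    echo 'sched_*' > /sys/kernel/debug/tracing/set_ftrace_filter
	    echo function > /sys/kernel/debug/tracing/current_tracer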

config DYNAMIC_FTRACE_WITH_REGS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_REGS

config DYNAMIC_FTRACE_WITH_DIRECT_CALLS
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS

config FUNCTION_PROFILER
	bool "Kernel function profiler"
	depends on FUNCTION_TRACER
	default n
	help
	  This option enables the kernel function profiler. A file is created
	  in debugfs called function_profile_enabled which defaults to zero.
	  When a 1 is echoed into this file, profiling begins, and when a
	  zero is entered, profiling stops. A "functions" file is created in
	  the trace_stat directory; this file shows the list of functions that
	  have been hit and their counters.
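
	  For example (assuming tracefs is mounted at
	  /sys/kernel/debug/tracing):

	    echo 1 > /sys/kernel/debug/tracing/function_profile_enabled
	    ... run the workload of interest ...
	    echo 0 > /sys/kernel/debug/tracing/function_profile_enabled
	    cat /sys/kernel/debug/tracing/trace_stat/function*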

	  If in doubt, say N.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	select FUNCTION_TRACER
	select STACKTRACE
	select KALLSYMS
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /sys/kernel/debug/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping the maximum stack depth and its
	  stack trace saved. If this is configured with DYNAMIC_FTRACE
	  then it will not have any overhead while the stack tracer
	  is disabled.

	  To enable the stack tracer on bootup, pass in 'stacktrace'
	  on the kernel command line.

	  The stack tracer can also be enabled or disabled via the
	  sysctl kernel.stack_tracer_enabled.
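
	  For example, to enable it at run time and read the result:

	    sysctl kernel.stack_tracer_enabled=1
	    cat /sys/kernel/debug/tracing/stack_trace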

	  Say N if unsure.

config TRACE_PREEMPT_TOGGLE
	bool
	help
	  Enables hooks which will be called when preemption is first disabled,
	  and last enabled.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on !ARCH_USES_GETTIMEOFFSET
	select TRACE_IRQFLAGS
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)
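
	  The tracer itself is selected at run time with, e.g.:

	    echo irqsoff > /sys/kernel/debug/tracing/current_tracer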

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on !ARCH_USES_GETTIMEOFFSET
	depends on PREEMPTION
	select GENERIC_TRACER
	select TRACER_MAX_TRACE
	select RING_BUFFER_ALLOW_SWAP
	select TRACER_SNAPSHOT
	select TRACER_SNAPSHOT_PER_CPU_SWAP
	select TRACE_PREEMPT_TOGGLE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be (re-)started at runtime
	  via:

	      echo 0 > /sys/kernel/debug/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)
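
	  The tracer itself is selected at run time with, e.g.:

	    echo preemptoff > /sys/kernel/debug/tracing/current_tracer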

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	select GENERIC_TRACER
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	select TRACER_SNAPSHOT
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config HWLAT_TRACER
	bool "Tracer to detect hardware latencies (like SMIs)"
	select GENERIC_TRACER
	help
	  This tracer, when enabled, will create one or more kernel threads,
	  depending on what the cpumask file is set to, with each thread
	  spinning in a loop looking for interruptions caused by
	  something other than the kernel. For example, if a
	  System Management Interrupt (SMI) takes a noticeable amount of
	  time, this tracer will detect it. This is useful for testing
	  whether a system is reliable for Real Time tasks.

	  Some files are created in the tracing directory when this
	  is enabled:

	    hwlat_detector/width  - time in usecs for how long to spin for
	    hwlat_detector/window - time in usecs between the start of each
				    iteration

	  A kernel thread is created that will spin with interrupts disabled
	  for "width" microseconds in every "window" cycle. It will not spin
	  for "window - width" microseconds, where the system can
	  continue to operate.

	  The output will appear in the trace and trace_pipe files.

	  When the tracer is not running, it has no effect on the system,
	  but when it is running, it can cause the system to be
	  periodically non-responsive. Do not run this tracer on a
	  production system.

	  To enable this tracer, echo "hwlat" into the current_tracer
	  file. Every time a latency is greater than tracing_thresh, it will
	  be recorded into the ring buffer.
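
	  For example (the threshold is in microseconds and only
	  illustrative):

	    echo 10 > /sys/kernel/debug/tracing/tracing_thresh
	    echo hwlat > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe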

config MMIOTRACE
	bool "Memory mapped IO tracing"
	depends on HAVE_MMIOTRACE_SUPPORT && PCI
	select GENERIC_TRACER
	help
	  Mmiotrace traces Memory Mapped I/O access and is meant for
	  debugging and reverse engineering. It is called from the ioremap
	  implementation and works via page faults. Tracing is disabled by
	  default and can be enabled at run-time.
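
	  For example:

	    echo mmiotrace > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe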

	  See Documentation/trace/mmiotrace.rst.
	  If you are not helping to develop drivers, say N.

config ENABLE_DEFAULT_TRACERS
	bool "Trace process context switches and events"
	depends on !GENERIC_TRACER
	select TRACING
	help
	  This tracer hooks into various trace points in the kernel,
	  allowing the user to pick and choose which trace points they
	  want to trace. It also includes the sched_switch tracer plugin.

config FTRACE_SYSCALLS
	bool "Trace syscalls"
	depends on HAVE_SYSCALL_TRACEPOINTS
	select GENERIC_TRACER
	select KALLSYMS
	help
	  Basic tracer to catch the syscall entry and exit events.
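
	  For example, to enable all syscall events at once:

	    echo 1 > /sys/kernel/debug/tracing/events/syscalls/enable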

config TRACER_SNAPSHOT
	bool "Create a snapshot trace buffer"
	select TRACER_MAX_TRACE
	help
	  Allow tracing users to take a snapshot of the current buffer using
	  the ftrace interface, e.g.:

	      echo 1 > /sys/kernel/debug/tracing/snapshot
	      cat snapshot

config TRACER_SNAPSHOT_PER_CPU_SWAP
	bool "Allow snapshot to swap per CPU"
	depends on TRACER_SNAPSHOT
	select RING_BUFFER_ALLOW_SWAP
	help
	  Allow doing a snapshot of a single CPU buffer instead of a
	  full swap (all buffers). If this is set, then the following is
	  allowed:

	      echo 1 > /sys/kernel/debug/tracing/per_cpu/cpu2/snapshot

	  After which, only the tracing buffer for CPU 2 is swapped with
	  the main tracing buffer, and the other CPU buffers remain the same.

	  When this is enabled, it adds a little more overhead to the
	  trace recording, as it needs to add some checks to synchronize
	  recording with swaps. But this does not affect the performance
	  of the overall system. This is enabled by default when the preempt
	  or irq latency tracers are enabled, as those need to swap as well
	  and already add the overhead (plus a lot more).

config TRACE_BRANCH_PROFILING
	bool
	select GENERIC_TRACER

choice
	prompt "Branch Profiling"
	default BRANCH_PROFILE_NONE
	help
	  Branch profiling is a software profiler. It will add hooks
	  into the C conditionals to test which path a branch takes.

	  The likely/unlikely profiler only looks at the conditions that
	  are annotated with a likely or unlikely macro.

	  The "all branch" profiler will profile every if-statement in the
	  kernel. This profiler will also enable the likely/unlikely
	  profiler.

	  Either of the above profilers adds a bit of overhead to the system.
	  If unsure, choose "No branch profiling".

config BRANCH_PROFILE_NONE
	bool "No branch profiling"
	help
	  No branch profiling. Branch profiling adds a bit of overhead.
	  Only enable it if you want to analyse the branching behavior.
	  Otherwise keep it disabled.

config PROFILE_ANNOTATED_BRANCHES
	bool "Trace likely/unlikely profiler"
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all likely and unlikely macros
	  in the kernel. It will display the results in:

	  /sys/kernel/debug/tracing/trace_stat/branch_annotated

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals" if !FORTIFY_SOURCE
	select TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded whether it was a hit or a miss.
	  The results will be displayed in:

	  /sys/kernel/debug/tracing/trace_stat/branch_all

	  This option also enables the likely/unlikely profiler.

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed in much detail.
endchoice

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelies and unlikelies are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config BLK_DEV_IO_TRACE
	bool "Support for tracing block IO actions"
	depends on SYSFS
	depends on BLOCK
	select RELAY
	select DEBUG_FS
	select TRACEPOINTS
	select GENERIC_TRACER
	select STACKTRACE
	help
	  Say Y here if you want to be able to trace the block layer actions
	  on a given queue. Tracing allows you to see any traffic happening
	  on a block device queue. For more information (and the userspace
	  support tools needed), fetch the blktrace tools from:

	  git://git.kernel.dk/blktrace.git

	  Tracing is also possible using the ftrace interface, e.g.:

	    echo 1 > /sys/block/sda/sda1/trace/enable
	    echo blk > /sys/kernel/debug/tracing/current_tracer
	    cat /sys/kernel/debug/tracing/trace_pipe

	  If unsure, say N.

config KPROBE_EVENTS
	depends on KPROBES
	depends on HAVE_REGS_AND_STACK_ACCESS_API
	bool "Enable kprobes-based dynamic events"
	select TRACING
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	default y
	help
	  This allows the user to add tracing events (similar to tracepoints)
	  on the fly via the ftrace interface. See
	  Documentation/trace/kprobetrace.rst for more details.

	  Those events can be inserted wherever kprobes can probe, and record
	  various register and memory values.
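
	  For example (the probed symbol and event name are only
	  illustrative):

	    echo 'p:myprobe do_sys_open' >> /sys/kernel/debug/tracing/kprobe_events
	    echo 1 > /sys/kernel/debug/tracing/events/kprobes/myprobe/enable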

	  This option is also required by the perf-probe subcommand of
	  perf tools. If you want to use perf tools, this option is
	  strongly recommended.

config KPROBE_EVENTS_ON_NOTRACE
	bool "Do NOT protect notrace functions from kprobe events"
	depends on KPROBE_EVENTS
	depends on KPROBES_ON_FTRACE
	default n
	help
	  This is only for the developers who want to debug ftrace itself
	  using kprobe events.

	  If kprobes can use ftrace instead of breakpoints, ftrace-related
	  functions are protected from kprobe events to prevent an infinite
	  recursion or any unexpected execution path which leads to a kernel
	  crash.

	  This option disables such protection and allows you to put kprobe
	  events on ftrace functions for debugging ftrace by itself.
	  Note that this might let you shoot yourself in the foot.

	  If unsure, say N.

config UPROBE_EVENTS
	bool "Enable uprobes-based dynamic events"
	depends on ARCH_SUPPORTS_UPROBES
	depends on MMU
	depends on PERF_EVENTS
	select UPROBES
	select PROBE_EVENTS
	select DYNAMIC_EVENTS
	select TRACING
	default y
	help
	  This allows the user to add tracing events on top of userspace
	  dynamic events (similar to tracepoints) on the fly via the trace
	  events interface. Those events can be inserted wherever uprobes
	  can probe, and record various registers.
	  This option is required if you plan to use the perf-probe
	  subcommand of perf tools on user space applications.
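
	  For example (the binary path, offset, and event name are
	  placeholders; see Documentation/trace/uprobetracer.rst):

	    echo 'p:myuprobe /bin/bash:0x4245c0' >> /sys/kernel/debug/tracing/uprobe_events
	    echo 1 > /sys/kernel/debug/tracing/events/uprobes/myuprobe/enable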

config BPF_EVENTS
	depends on BPF_SYSCALL
	depends on (KPROBE_EVENTS || UPROBE_EVENTS) && PERF_EVENTS
	bool
	default y
	help
	  This allows the user to attach BPF programs to kprobe, uprobe, and
	  tracepoint events.

config DYNAMIC_EVENTS
	def_bool n

config PROBE_EVENTS
	def_bool n

config BPF_KPROBE_OVERRIDE
	bool "Enable BPF programs to override a kprobed function"
	depends on BPF_EVENTS
	depends on FUNCTION_ERROR_INJECTION
	default n
	help
	  Allows BPF to override the execution of a probed function and
	  set a different return value. This is used for error injection.

config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config TRACING_MAP
	bool
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	help
	  tracing_map is a special-purpose lock-free map for tracing,
	  separated out as a stand-alone facility in order to allow it
	  to be shared between multiple tracers. It isn't meant to be
	  generally used outside of that context, and is normally
	  selected by tracers that use it.

config SYNTH_EVENTS
	bool "Synthetic trace events"
	select TRACING
	select DYNAMIC_EVENTS
	default n
	help
	  Synthetic events are user-defined trace events that can be
	  used to combine data from other trace events or in fact any
	  data source. Synthetic events can be generated indirectly
	  via the trace() action of histogram triggers or directly
	  by way of an in-kernel API.
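
	  For example, a synthetic event could be defined from user space
	  with (the event and field names are only an example):

	    echo 'wakeup_latency u64 lat; pid_t pid' >> /sys/kernel/debug/tracing/synthetic_events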

	  See Documentation/trace/events.rst or
	  Documentation/trace/histogram.rst for details and examples.

	  If in doubt, say N.

config HIST_TRIGGERS
	bool "Histogram triggers"
	depends on ARCH_HAVE_NMI_SAFE_CMPXCHG
	select TRACING_MAP
	select TRACING
	select DYNAMIC_EVENTS
	select SYNTH_EVENTS
	default n
	help
	  Hist triggers allow one or more arbitrary trace event fields
	  to be aggregated into hash tables and dumped to stdout by
	  reading a debugfs/tracefs file. They're useful for
	  gathering quick and dirty (though precise) summaries of
	  event activity as an initial guide for further investigation
	  using more advanced tools.

	  Inter-event tracing of quantities such as latencies is also
	  supported using hist triggers under this option.
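
	  For example (the event and key are only illustrative):

	    echo 'hist:keys=common_pid' > /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
	    cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist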

	  See Documentation/trace/histogram.rst.
	  If in doubt, say N.

config TRACE_EVENT_INJECT
	bool "Trace event injection"
	depends on TRACING
	help
	  Allow user-space to inject a specific trace event into the ring
	  buffer. This is mainly used for testing purposes.

	  If unsure, say N.

config TRACEPOINT_BENCHMARK
	bool "Add tracepoint that benchmarks tracepoints"
	help
	  This option creates the tracepoint "benchmark:benchmark_event".
	  When the tracepoint is enabled, it kicks off a kernel thread that
	  goes into an infinite loop (calling cond_resched() to let other tasks
	  run), and calls the tracepoint. Each iteration will record the time
	  it took to write to the tracepoint, and on the next iteration that
	  data will be passed to the tracepoint itself. That is, the tracepoint
	  will report the time it took to do the previous tracepoint.
	  The string written to the tracepoint is a static string of 128 bytes
	  to keep the time the same. The initial string is simply a write of
	  "START". The second string records the cold cache time of the first
	  write which is not added to the rest of the calculations.

	  As it is a tight loop, it benchmarks as hot cache. That's fine because
	  we care most about hot paths that are probably in cache already.

	  An example of the output:

	    START
	    first=3672 [COLD CACHED]
	    last=632 first=3672 max=632 min=632 avg=316 std=446 std^2=199712
	    last=278 first=3672 max=632 min=278 avg=303 std=316 std^2=100337
	    last=277 first=3672 max=632 min=277 avg=296 std=258 std^2=67064
	    last=273 first=3672 max=632 min=273 avg=292 std=224 std^2=50411
	    last=273 first=3672 max=632 min=273 avg=288 std=200 std^2=40389
	    last=281 first=3672 max=632 min=273 avg=287 std=183 std^2=33666


config RING_BUFFER_BENCHMARK
	tristate "Ring buffer benchmark stress tester"
	depends on RING_BUFFER
	help
	  This option creates a test to stress the ring buffer and benchmark it.
	  It creates its own ring buffer such that it will not interfere with
	  any other users of the ring buffer (such as ftrace). It then creates
	  a producer and consumer that will run for 10 seconds and sleep for
	  10 seconds. Each interval it will print out the number of events
	  it recorded and give a rough estimate of how long each iteration took.

	  It does not disable interrupts or raise its priority, so it may be
	  affected by processes that are running.

	  If unsure, say N.

config TRACE_EVAL_MAP_FILE
	bool "Show eval mappings for trace events"
	depends on TRACING
	help
	  The "print fmt" of the trace events will show the enum/sizeof names
	  instead of their values. This can cause problems for user space tools
	  that use this string to parse the raw data, as user space does not
	  know how to convert the string to its value.

	  To fix this, there's a special macro in the kernel that can be used
	  to convert an enum/sizeof into its value. If this macro is used, then
	  the print fmt strings will be converted to their values.

	  If something does not get converted properly, this option can be
	  used to show what enums/sizeofs the kernel tried to convert.

	  This option is for debugging the conversions. A file is created
	  in the tracing directory called "eval_map" that will show the
	  names matched with their values and what trace event system they
	  belong to.

	  Normally, the mapping of the strings to values will be freed after
	  boot up or module load. With this option, they will not be freed, as
	  they are needed for the "eval_map" file. Enabling this option will
	  increase the memory footprint of the running kernel.

	  If unsure, say N.

config FTRACE_RECORD_RECURSION
	bool "Record functions that recurse in function tracing"
	depends on FUNCTION_TRACER
	help
	  All callbacks that attach to function tracing have some sort
	  of protection against recursion. Even though the protection exists,
	  it adds overhead. This option will create a file in the tracefs
	  file system called "recursed_functions" that will list the functions
	  that triggered a recursion.

	  This will add more overhead to cases that have recursion.

	  If unsure, say N.
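
	  The recorded functions can then be read back with, e.g.:

	    cat /sys/kernel/debug/tracing/recursed_functions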

config FTRACE_RECORD_RECURSION_SIZE
	int "Max number of recursed functions to record"
	default 128
	depends on FTRACE_RECORD_RECURSION
	help
	  This defines the limit on the number of functions that can be
	  listed in the "recursed_functions" file, which lists all
	  the functions that caused a recursion to happen.
	  This file can be reset, but the limit cannot be changed
	  at runtime.

config RING_BUFFER_RECORD_RECURSION
	bool "Record functions that recurse in the ring buffer"
	depends on FTRACE_RECORD_RECURSION
	# default y, because it is coupled with FTRACE_RECORD_RECURSION
	default y
	help
	  The ring buffer has its own internal recursion. Although when
	  recursion happens it won't cause harm because of the protection,
	  it does cause an unwanted overhead. Enabling this option will
	  place the location where the recursion was detected into the
	  ftrace "recursed_functions" file.

	  This will add more overhead to cases that have recursion.

config GCOV_PROFILE_FTRACE
	bool "Enable GCOV profiling on ftrace subsystem"
	depends on GCOV_KERNEL
	help
	  Enable GCOV profiling on the ftrace subsystem for checking
	  which functions/lines are tested.

	  If unsure, say N.

	  Note that on a kernel compiled with this config, ftrace will
	  run significantly slower.

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on GENERIC_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

config EVENT_TRACE_STARTUP_TEST
	bool "Run selftest on trace events"
	depends on FTRACE_STARTUP_TEST
	default y
	help
	  This option performs a test on all trace events in the system.
	  It basically just enables each event and runs some code that
	  will trigger events (not necessarily the event it enables).
	  This may take some time to run, as there are a lot of events.

config EVENT_TRACE_TEST_SYSCALLS
	bool "Run selftest on syscall events"
	depends on EVENT_TRACE_STARTUP_TEST
	help
	  This option will also enable testing every syscall event.
	  It only enables each event, runs various loads with the event
	  enabled, and then disables it. This adds a bit more time to the
	  kernel boot up since it runs this on every system call defined.

	  TBD - enable a way to actually call the syscalls as we test their
	  events

config RING_BUFFER_STARTUP_TEST
	bool "Ring buffer startup self test"
	depends on RING_BUFFER
	help
	  Run a simple self test on the ring buffer on boot up. Late in the
	  kernel boot sequence, the test will start, kicking off
	  a thread per CPU. Each thread will write various size events
	  into the ring buffer. Another thread is created to send IPIs
	  to each of the threads, where the IPI handler will also write
	  to the ring buffer, to test/stress the nesting ability.
	  If any anomalies are discovered, a warning will be displayed
	  and all ring buffers will be disabled.

	  The test runs for 10 seconds. This will slow your boot time
	  by at least 10 more seconds.

	  At the end of the test, statistics and more checks are done.
	  It will output the stats of each per-CPU buffer: what
	  was written, the sizes, what was read, what was lost, and
	  other similar details.

	  If unsure, say N

config MMIOTRACE_TEST
	tristate "Test module for mmiotrace"
	depends on MMIOTRACE && m
	help
	  This is a dumb module for testing mmiotrace. It is very dangerous
	  as it will write garbage to IO memory starting at a given address.
	  However, it should be safe to use on e.g. an unused portion of VRAM.

	  Say N, unless you absolutely know what you are doing.

config PREEMPTIRQ_DELAY_TEST
	tristate "Test module to create a preempt / IRQ disable delay thread to test latency tracers"
	depends on m
	help
	  Select this option to build a test module that can help test latency
	  tracers by executing a preempt or irq disable section with a user
	  configurable delay. The module busy waits for the duration of the
	  critical section.

	  For example, the following invocation generates a burst of three
	  irq-disabled critical sections for 500us:
	  modprobe preemptirq_delay_test test_mode=irq delay=500 burst_size=3

	  If unsure, say N

config SYNTH_EVENT_GEN_TEST
	tristate "Test module for in-kernel synthetic event generation"
	depends on SYNTH_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel synthetic event definition and
	  generation.

	  To test, insert the module, and then check the trace buffer
	  for the generated sample events.

	  If unsure, say N.

config KPROBE_EVENT_GEN_TEST
	tristate "Test module for in-kernel kprobe event generation"
	depends on KPROBE_EVENTS
	help
	  This option creates a test module to check the base
	  functionality of in-kernel kprobe event definition.

	  To test, insert the module, and then check the trace buffer
	  for the generated kprobe events.

	  If unsure, say N.

config HIST_TRIGGERS_DEBUG
	bool "Hist trigger debug support"
	depends on HIST_TRIGGERS
	help
	  Add a "hist_debug" file for each event, which when read will
	  dump out a bunch of internal details about the hist triggers
	  defined on that event.

	  The hist_debug file serves a couple of purposes:

	    - Helps developers verify that nothing is broken.

	    - Provides educational information to support the details
	      of the hist trigger internals as described by
	      Documentation/trace/histogram-design.rst.

	  The hist_debug output only covers the data structures
	  related to the histogram definitions themselves and doesn't
	  display the internals of map buckets or variable values of
	  running histograms.

	  If unsure, say N.

endif # FTRACE

endif # TRACING_SUPPORT