Performance Counters for Linux
------------------------------

Performance counters are special hardware registers available on most
modern CPUs. These registers count the number of certain types of
hardware events, such as instructions executed, cache misses suffered,
or branches mispredicted, without slowing down the kernel or
applications. These registers can also trigger interrupts when a
threshold number of events has passed, and can thus be used to
profile the code that runs on that CPU.

The Linux Performance Counter subsystem provides an abstraction of
these hardware capabilities. It provides per-task and per-CPU
counters and counter groups, and it provides event capabilities on
top of those. It provides "virtual" 64-bit counters, regardless of
the width of the underlying hardware counters.

Performance counters are accessed via special file descriptors.
There's one file descriptor per virtual counter used.

The special file descriptor is opened via the sys_perf_event_open()
system call:

    int sys_perf_event_open(struct perf_event_attr *hw_event_uptr,
                            pid_t pid, int cpu, int group_fd,
                            unsigned long flags);

The syscall returns the new fd. The fd can be used via the normal
VFS system calls: read() can be used to read the counter, fcntl()
can be used to set the blocking mode, etc.

Multiple counters can be kept open at a time, and the counters
can be poll()ed.
34
Tim Blechmann0b413e42009-12-27 14:43:06 +010035When creating a new counter fd, 'perf_event_attr' is:
Ingo Molnare7bc62b2008-12-04 20:13:45 +010036
Tim Blechmann0b413e42009-12-27 14:43:06 +010037struct perf_event_attr {
Peter Zijlstrae5791a82009-05-01 12:23:19 +020038 /*
39 * The MSB of the config word signifies if the rest contains cpu
40 * specific (raw) counter configuration data, if unset, the next
41 * 7 bits are an event type and the rest of the bits are the event
42 * identifier.
43 */
44 __u64 config;
Ingo Molnar447557a2008-12-11 20:40:18 +010045
Peter Zijlstrae5791a82009-05-01 12:23:19 +020046 __u64 irq_period;
47 __u32 record_type;
48 __u32 read_format;
Ingo Molnar447557a2008-12-11 20:40:18 +010049
Peter Zijlstrae5791a82009-05-01 12:23:19 +020050 __u64 disabled : 1, /* off by default */
Peter Zijlstrae5791a82009-05-01 12:23:19 +020051 inherit : 1, /* children inherit it */
52 pinned : 1, /* must always be on PMU */
53 exclusive : 1, /* only group on PMU */
54 exclude_user : 1, /* don't count user */
55 exclude_kernel : 1, /* ditto kernel */
56 exclude_hv : 1, /* ditto hypervisor */
57 exclude_idle : 1, /* don't count when idle */
58 mmap : 1, /* include mmap data */
59 munmap : 1, /* include munmap data */
60 comm : 1, /* include comm data */
Ingo Molnar447557a2008-12-11 20:40:18 +010061
Peter Zijlstrae5791a82009-05-01 12:23:19 +020062 __reserved_1 : 52;
Paul Mackerrasf66c6b22009-03-23 10:29:36 +110063
Peter Zijlstrae5791a82009-05-01 12:23:19 +020064 __u32 extra_config_len;
65 __u32 wakeup_events; /* wakeup every n events */
Paul Mackerrasf66c6b22009-03-23 10:29:36 +110066
Peter Zijlstrae5791a82009-05-01 12:23:19 +020067 __u64 __reserved_2;
68 __u64 __reserved_3;
Ingo Molnar447557a2008-12-11 20:40:18 +010069};

The 'config' field specifies what the counter should count. It
is divided into 3 bit-fields:

raw_type: 1 bit   (most significant bit)        0x8000_0000_0000_0000
type:     7 bits  (next most significant)       0x7f00_0000_0000_0000
event_id: 56 bits (least significant)           0x00ff_ffff_ffff_ffff

If 'raw_type' is 1, then the counter will count a hardware event
specified by the remaining 63 bits of 'config'. The encoding is
machine-specific.

If 'raw_type' is 0, then the 'type' field says what kind of counter
this is, with the following encoding:

enum perf_type_id {
        PERF_TYPE_HARDWARE      = 0,
        PERF_TYPE_SOFTWARE      = 1,
        PERF_TYPE_TRACEPOINT    = 2,
};

A counter of PERF_TYPE_HARDWARE will count the hardware event
specified by 'event_id':

/*
 * Generalized performance counter event types, used by the
 * hw_event.event_id parameter of the sys_perf_event_open() syscall:
 */
enum perf_hw_id {
        /*
         * Common hardware events, generalized by the kernel:
         */
        PERF_COUNT_HW_CPU_CYCLES              = 0,
        PERF_COUNT_HW_INSTRUCTIONS            = 1,
        PERF_COUNT_HW_CACHE_REFERENCES        = 2,
        PERF_COUNT_HW_CACHE_MISSES            = 3,
        PERF_COUNT_HW_BRANCH_INSTRUCTIONS     = 4,
        PERF_COUNT_HW_BRANCH_MISSES           = 5,
        PERF_COUNT_HW_BUS_CYCLES              = 6,
        PERF_COUNT_HW_STALLED_CYCLES_FRONTEND = 7,
        PERF_COUNT_HW_STALLED_CYCLES_BACKEND  = 8,
        PERF_COUNT_HW_REF_CPU_CYCLES          = 9,
};

These are standardized types of events that work relatively uniformly
on all CPUs that implement Performance Counters support under Linux,
although there may be variations (e.g., different CPUs might count
cache references and misses at different levels of the cache
hierarchy). If a CPU is not able to count the selected event, then
the system call will return -EINVAL.

More hardware event types are supported as well, but they are
CPU-specific and accessed as raw events. For example, to count
"External bus cycles while bus lock signal asserted" events on Intel
Core CPUs, pass in a 0x4064 event_id value and set hw_event.raw_type
to 1.

A counter of type PERF_TYPE_SOFTWARE will count one of the available
software events, selected by 'event_id':

/*
 * Special "software" counters provided by the kernel, even if the
 * hardware does not support performance counters. These counters
 * measure various physical and sw events of the kernel (and allow
 * the profiling of them as well):
 */
enum perf_sw_ids {
        PERF_COUNT_SW_CPU_CLOCK        = 0,
        PERF_COUNT_SW_TASK_CLOCK       = 1,
        PERF_COUNT_SW_PAGE_FAULTS      = 2,
        PERF_COUNT_SW_CONTEXT_SWITCHES = 3,
        PERF_COUNT_SW_CPU_MIGRATIONS   = 4,
        PERF_COUNT_SW_PAGE_FAULTS_MIN  = 5,
        PERF_COUNT_SW_PAGE_FAULTS_MAJ  = 6,
        PERF_COUNT_SW_ALIGNMENT_FAULTS = 7,
        PERF_COUNT_SW_EMULATION_FAULTS = 8,
};

Counters of the type PERF_TYPE_TRACEPOINT are available when the
ftrace event tracer is available, and event_id values can be obtained
from /debug/tracing/events/*/*/id.


Counters come in two flavours: counting counters and sampling
counters. A "counting" counter is one that is used for counting the
number of events that occur, and is characterised by having
irq_period = 0.

A read() on a counter returns the current value of the counter and
possible additional values as specified by 'read_format'; each value
is a u64 (8 bytes) in size.

/*
 * Bits that can be set in hw_event.read_format to request that
 * reads on the counter should return the indicated quantities,
 * in increasing order of bit value, after the counter value.
 */
enum perf_event_read_format {
        PERF_FORMAT_TOTAL_TIME_ENABLED = 1,
        PERF_FORMAT_TOTAL_TIME_RUNNING = 2,
};

Using these additional values one can establish the overcommit ratio
for a particular counter, allowing one to take the round-robin
scheduling effect into account.


A "sampling" counter is one that is set up to generate an interrupt
every N events, where N is given by 'irq_period'. A sampling counter
has irq_period > 0. The record_type controls what data is recorded
on each interrupt:

/*
 * Bits that can be set in hw_event.record_type to request information
 * in the overflow packets.
 */
enum perf_event_record_format {
        PERF_RECORD_IP        = 1U << 0,
        PERF_RECORD_TID       = 1U << 1,
        PERF_RECORD_TIME      = 1U << 2,
        PERF_RECORD_ADDR      = 1U << 3,
        PERF_RECORD_GROUP     = 1U << 4,
        PERF_RECORD_CALLCHAIN = 1U << 5,
};

Such (and other) events will be recorded in a ring-buffer, which is
available to user-space using mmap() (see below).

The 'disabled' bit specifies whether the counter starts out disabled
or enabled. If it is initially disabled, it can be enabled by ioctl
or prctl (see below).

The 'inherit' bit, if set, specifies that this counter should count
events on descendant tasks as well as on the task specified. This
only applies to new descendants, not to any existing descendants at
the time the counter is created (nor to any new descendants of
existing descendants).

The 'pinned' bit, if set, specifies that the counter should always be
on the CPU if at all possible. It only applies to hardware counters
and only to group leaders. If a pinned counter cannot be put onto the
CPU (e.g. because there are not enough hardware counters or because
of a conflict with some other event), then the counter goes into an
'error' state, where reads return end-of-file (i.e. read() returns 0)
until the counter is subsequently enabled or disabled.

The 'exclusive' bit, if set, specifies that when this counter's group
is on the CPU, it should be the only group using the CPU's counters.
In the future, this will allow sophisticated monitoring programs to
supply extra configuration information via 'extra_config_len' to
exploit advanced features of the CPU's Performance Monitor Unit (PMU)
that are not otherwise accessible and that might disrupt other
hardware counters.

The 'exclude_user', 'exclude_kernel' and 'exclude_hv' bits provide a
way to request that counting of events be restricted to times when
the CPU is in user, kernel and/or hypervisor mode.

Furthermore, the 'exclude_host' and 'exclude_guest' bits provide a
way to request counting of events restricted to guest and host
contexts when using Linux as the hypervisor.

The 'mmap' and 'munmap' bits allow recording of PROT_EXEC mmap/munmap
operations. These can be used to relate userspace IP addresses to
actual code, even after the mapping (or even the whole process) is
gone; these events are recorded in the ring-buffer (see below).

The 'comm' bit allows tracking of process comm data on process
creation. This too is recorded in the ring-buffer (see below).

The 'pid' parameter to the sys_perf_event_open() system call allows
the counter to be specific to a task:

 pid == 0:  the counter is attached to the current task.

 pid > 0:   the counter is attached to a specific task (if the
            current task has sufficient privilege to do so)

 pid < 0:   all tasks are counted (per cpu counters)

The 'cpu' parameter allows a counter to be made specific to a CPU:

 cpu >= 0:  the counter is restricted to a specific CPU
 cpu == -1: the counter counts on all CPUs

(Note: the combination of 'pid == -1' and 'cpu == -1' is not valid.)

A 'pid > 0' and 'cpu == -1' counter is a per-task counter that counts
events of that task and 'follows' that task to whatever CPU the task
gets scheduled to. Per-task counters can be created by any user, for
their own tasks.

A 'pid == -1' and 'cpu == x' counter is a per-CPU counter that counts
all events on CPU-x. Per-CPU counters need CAP_PERFMON or
CAP_SYS_ADMIN privilege.

The 'flags' parameter is currently unused and must be zero.

The 'group_fd' parameter allows counter "groups" to be set up. A
counter group has one counter which is the group "leader". The leader
is created first, with group_fd = -1 in the sys_perf_event_open call
that creates it. The rest of the group members are created
subsequently, with group_fd giving the fd of the group leader.
(A single counter on its own is created with group_fd = -1 and is
considered to be a group with only 1 member.)

A counter group is scheduled onto the CPU as a unit, that is, it will
only be put onto the CPU if all of the counters in the group can be
put onto the CPU. This means that the values of the member counters
can be meaningfully compared, added, divided (to get ratios), etc.,
with each other, since they have counted events for the same set of
executed instructions.


As stated above, asynchronous events, like counter overflow or
PROT_EXEC mmap tracking, are logged into a ring-buffer. This
ring-buffer is created and accessed through mmap().

The mmap size should be 1+2^n pages, where the first page is a
meta-data page (struct perf_event_mmap_page) that contains various
bits of information such as where the ring-buffer head is.

/*
 * Structure of the page that can be mapped via mmap
 */
struct perf_event_mmap_page {
        __u32   version;        /* version number of this structure */
        __u32   compat_version; /* lowest version this is compat with */

        /*
         * Bits needed to read the hw counters in user-space.
         *
         *   u32 seq;
         *   s64 count;
         *
         *   do {
         *     seq = pc->lock;
         *
         *     barrier()
         *     if (pc->index) {
         *       count = pmc_read(pc->index - 1);
         *       count += pc->offset;
         *     } else
         *       goto regular_read;
         *
         *     barrier();
         *   } while (pc->lock != seq);
         *
         * NOTE: for obvious reason this only works on self-monitoring
         *       processes.
         */
        __u32   lock;           /* seqlock for synchronization */
        __u32   index;          /* hardware counter identifier */
        __s64   offset;         /* add to hardware counter value */

        /*
         * Control data for the mmap() data buffer.
         *
         * User-space reading this value should issue an rmb(), on SMP
         * capable platforms, after reading this value -- see
         * perf_event_wakeup().
         */
        __u32   data_head;      /* head in the data section */
};

NOTE: the hw-counter userspace bits are arch specific and are
      currently only implemented on powerpc.

The following 2^n pages are the ring-buffer, which contains events of
the form:

#define PERF_RECORD_MISC_KERNEL         (1 << 0)
#define PERF_RECORD_MISC_USER           (1 << 1)
#define PERF_RECORD_MISC_OVERFLOW       (1 << 2)

struct perf_event_header {
        __u32   type;
        __u16   misc;
        __u16   size;
};

enum perf_event_type {

        /*
         * The MMAP events record the PROT_EXEC mappings so that we
         * can correlate userspace IPs to code. They have the
         * following structure:
         *
         * struct {
         *      struct perf_event_header header;
         *
         *      u32 pid, tid;
         *      u64 addr;
         *      u64 len;
         *      u64 pgoff;
         *      char filename[];
         * };
         */
        PERF_RECORD_MMAP        = 1,
        PERF_RECORD_MUNMAP      = 2,

        /*
         * struct {
         *      struct perf_event_header header;
         *
         *      u32 pid, tid;
         *      char comm[];
         * };
         */
        PERF_RECORD_COMM        = 3,

        /*
         * When header.misc & PERF_RECORD_MISC_OVERFLOW the event_type
         * field will be PERF_RECORD_*
         *
         * struct {
         *      struct perf_event_header header;
         *
         *      { u64 ip;       }       && PERF_RECORD_IP
         *      { u32 pid, tid; }       && PERF_RECORD_TID
         *      { u64 time;     }       && PERF_RECORD_TIME
         *      { u64 addr;     }       && PERF_RECORD_ADDR
         *
         *      { u64 nr;
         *        { u64 event, val; } cnt[nr]; } && PERF_RECORD_GROUP
         *
         *      { u16 nr,
         *            hv,
         *            kernel,
         *            user;
         *        u64 ips[nr]; }        && PERF_RECORD_CALLCHAIN
         * };
         */
};

NOTE: PERF_RECORD_CALLCHAIN is arch specific and currently only
      implemented on x86.

Notification of new events is possible through
poll()/select()/epoll() and fcntl() managing signals.

Normally a notification is generated for every page filled; however,
one can additionally set perf_event_attr.wakeup_events to generate
one every so many counter overflow events.

Future work will include a splice() interface to the ring-buffer.


Counters can be enabled and disabled in two ways: via ioctl and via
prctl. When a counter is disabled, it doesn't count or generate
events but does continue to exist and maintain its count value.

An individual counter can be enabled with

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

or disabled with

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

For a counter group, pass PERF_IOC_FLAG_GROUP as the third argument.
Enabling or disabling the leader of a group enables or disables the
whole group; that is, while the group leader is disabled, none of the
counters in the group will count. Enabling or disabling a member of a
group other than the leader only affects that counter; disabling a
non-leader stops that counter from counting but doesn't affect any
other counter.

Additionally, non-inherited overflow counters can use

        ioctl(fd, PERF_EVENT_IOC_REFRESH, nr);

to enable a counter for 'nr' events, after which it gets disabled
again.

A process can enable or disable all the counter groups that are
attached to it, using prctl:

        prctl(PR_TASK_PERF_EVENTS_ENABLE);

        prctl(PR_TASK_PERF_EVENTS_DISABLE);

This applies to all counters on the current process, whether created
by this process or by another, and doesn't affect any counters that
this process has created on other processes. It only enables or
disables the group leaders, not any other members in the groups.


Arch requirements
-----------------

If your architecture does not have hardware performance metrics, you
can still use the generic software counters based on hrtimers for
sampling.

So to start with, in order to add HAVE_PERF_EVENTS to your Kconfig,
you will need at least this:
        - asm/perf_event.h - a basic stub will suffice at first
        - support for atomic64 types (and associated helper functions)

If your architecture does have hardware capabilities, you can
override the weak stub hw_perf_event_init() to register hardware
counters.

Architectures that have d-cache aliasing issues, such as Sparc and
ARM, should select PERF_USE_VMALLOC in order to avoid these for perf
mmap().