tools/perf/design.txt

Performance Counters for Linux
------------------------------

Performance counters are special hardware registers available on most modern
CPUs. These registers count the number of certain types of hardware events,
such as instructions executed, cache misses suffered, or branches
mis-predicted - without slowing down the kernel or applications. These
registers can also trigger interrupts when a threshold number of events
has passed - and can thus be used to profile the code that runs on that CPU.

The Linux Performance Counter subsystem provides an abstraction of these
hardware capabilities. It provides per-task and per-CPU counters, counter
groups, and it provides event capabilities on top of those. It
provides "virtual" 64-bit counters, regardless of the width of the
underlying hardware counters.

Performance counters are accessed via special file descriptors.
There's one file descriptor per virtual counter used.

The special file descriptor is opened via the sys_perf_event_open()
system call:

   int sys_perf_event_open(struct perf_event_attr *hw_event_uptr,
			   pid_t pid, int cpu, int group_fd,
			   unsigned long flags);

The syscall returns the new fd. The fd can be used via the normal
VFS system calls: read() can be used to read the counter, fcntl()
can be used to set the blocking mode, etc.

Multiple counters can be kept open at a time, and the counters
can be poll()ed.

When creating a new counter fd, 'perf_event_attr' is:

struct perf_event_attr {
	/*
	 * The MSB of the config word signifies if the rest contains cpu
	 * specific (raw) counter configuration data; if unset, the next
	 * 7 bits are an event type and the rest of the bits are the event
	 * identifier.
	 */
	__u64			config;

	__u64			irq_period;
	__u32			record_type;
	__u32			read_format;

	__u64			disabled       :  1, /* off by default        */
				inherit        :  1, /* children inherit it   */
				pinned         :  1, /* must always be on PMU */
				exclusive      :  1, /* only group on PMU     */
				exclude_user   :  1, /* don't count user      */
				exclude_kernel :  1, /* ditto kernel          */
				exclude_hv     :  1, /* ditto hypervisor      */
				exclude_idle   :  1, /* don't count when idle */
				mmap           :  1, /* include mmap data     */
				munmap         :  1, /* include munmap data   */
				comm           :  1, /* include comm data     */

				__reserved_1   : 53; /* 11 flag bits + 53 = 64 */

	__u32			extra_config_len;
	__u32			wakeup_events;	/* wakeup every n events */

	__u64			__reserved_2;
	__u64			__reserved_3;
};

The 'config' field specifies what the counter should count. It
is divided into 3 bit-fields:

raw_type: 1 bit   (most significant bit)	0x8000_0000_0000_0000
type:	  7 bits  (next most significant)	0x7f00_0000_0000_0000
event_id: 56 bits (least significant)		0x00ff_ffff_ffff_ffff

If 'raw_type' is 1, then the counter will count a hardware event
specified by the remaining 63 bits of 'config'. The encoding is
machine-specific.

If 'raw_type' is 0, then the 'type' field says what kind of counter
this is, with the following encoding:

enum perf_type_id {
	PERF_TYPE_HARDWARE		= 0,
	PERF_TYPE_SOFTWARE		= 1,
	PERF_TYPE_TRACEPOINT		= 2,
};

A counter of PERF_TYPE_HARDWARE will count the hardware event
specified by 'event_id':

/*
 * Generalized performance counter event types, used by the hw_event.event_id
 * parameter of the sys_perf_event_open() syscall:
 */
enum perf_hw_id {
	/*
	 * Common hardware events, generalized by the kernel:
	 */
	PERF_COUNT_HW_CPU_CYCLES		= 0,
	PERF_COUNT_HW_INSTRUCTIONS		= 1,
	PERF_COUNT_HW_CACHE_REFERENCES		= 2,
	PERF_COUNT_HW_CACHE_MISSES		= 3,
	PERF_COUNT_HW_BRANCH_INSTRUCTIONS	= 4,
	PERF_COUNT_HW_BRANCH_MISSES		= 5,
	PERF_COUNT_HW_BUS_CYCLES		= 6,
};

These are standardized types of events that work relatively uniformly
on all CPUs that implement Performance Counters support under Linux,
although there may be variations (e.g., different CPUs might count
cache references and misses at different levels of the cache hierarchy).
If a CPU is not able to count the selected event, then the system call
will return -EINVAL.

More hw_event_types are supported as well, but they are CPU-specific
and accessed as raw events. For example, to count "External bus
cycles while bus lock signal asserted" events on Intel Core CPUs, pass
in a 0x4064 event_id value and set hw_event.raw_type to 1.

A counter of type PERF_TYPE_SOFTWARE will count one of the available
software events, selected by 'event_id':

/*
 * Special "software" counters provided by the kernel, even if the hardware
 * does not support performance counters. These counters measure various
 * physical and sw events of the kernel (and allow the profiling of them as
 * well):
 */
enum perf_sw_ids {
	PERF_COUNT_SW_CPU_CLOCK		= 0,
	PERF_COUNT_SW_TASK_CLOCK	= 1,
	PERF_COUNT_SW_PAGE_FAULTS	= 2,
	PERF_COUNT_SW_CONTEXT_SWITCHES	= 3,
	PERF_COUNT_SW_CPU_MIGRATIONS	= 4,
	PERF_COUNT_SW_PAGE_FAULTS_MIN	= 5,
	PERF_COUNT_SW_PAGE_FAULTS_MAJ	= 6,
	PERF_COUNT_SW_ALIGNMENT_FAULTS	= 7,
	PERF_COUNT_SW_EMULATION_FAULTS	= 8,
};

Counters of the type PERF_TYPE_TRACEPOINT are available when the ftrace event
tracer is available, and event_id values can be obtained from
/debug/tracing/events/*/*/id


Counters come in two flavours: counting counters and sampling
counters. A "counting" counter is one that is used for counting the
number of events that occur, and is characterised by having
irq_period = 0.


A read() on a counter returns the current value of the counter and possibly
additional values as specified by 'read_format'; each value is a u64 (8 bytes)
in size.

/*
 * Bits that can be set in hw_event.read_format to request that
 * reads on the counter should return the indicated quantities,
 * in increasing order of bit value, after the counter value.
 */
enum perf_event_read_format {
	PERF_FORMAT_TOTAL_TIME_ENABLED	= 1,
	PERF_FORMAT_TOTAL_TIME_RUNNING	= 2,
};

Using these additional values one can establish the overcommit ratio for a
particular counter, allowing one to take the round-robin scheduling effect
into account.


A "sampling" counter is one that is set up to generate an interrupt
every N events, where N is given by 'irq_period'. A sampling counter
has irq_period > 0. The record_type controls what data is recorded on each
interrupt:

/*
 * Bits that can be set in hw_event.record_type to request information
 * in the overflow packets.
 */
enum perf_event_record_format {
	PERF_RECORD_IP		= 1U << 0,
	PERF_RECORD_TID		= 1U << 1,
	PERF_RECORD_TIME	= 1U << 2,
	PERF_RECORD_ADDR	= 1U << 3,
	PERF_RECORD_GROUP	= 1U << 4,
	PERF_RECORD_CALLCHAIN	= 1U << 5,
};

Such (and other) events will be recorded in a ring-buffer, which is
available to user-space using mmap() (see below).

The 'disabled' bit specifies whether the counter starts out disabled
or enabled. If it is initially disabled, it can be enabled by ioctl
or prctl (see below).

The 'inherit' bit, if set, specifies that this counter should count
events on descendant tasks as well as the task specified. This only
applies to new descendants, not to any existing descendants at the
time the counter is created (nor to any new descendants of existing
descendants).

The 'pinned' bit, if set, specifies that the counter should always be
on the CPU if at all possible. It only applies to hardware counters
and only to group leaders. If a pinned counter cannot be put onto the
CPU (e.g. because there are not enough hardware counters or because of
a conflict with some other event), then the counter goes into an
'error' state, where reads return end-of-file (i.e. read() returns 0)
until the counter is subsequently enabled or disabled.

The 'exclusive' bit, if set, specifies that when this counter's group
is on the CPU, it should be the only group using the CPU's counters.
In the future, this will allow sophisticated monitoring programs to supply
extra configuration information via 'extra_config_len' to exploit
advanced features of the CPU's Performance Monitor Unit (PMU) that are
not otherwise accessible and that might disrupt other hardware
counters.

The 'exclude_user', 'exclude_kernel' and 'exclude_hv' bits provide a
way to request that counting of events be restricted to times when the
CPU is in user, kernel and/or hypervisor mode.

The 'mmap' and 'munmap' bits allow recording of PROT_EXEC mmap/munmap
operations; these can be used to relate userspace IP addresses to actual
code, even after the mapping (or even the whole process) is gone.
These events are recorded in the ring-buffer (see below).

The 'comm' bit allows tracking of process comm data on process creation.
This too is recorded in the ring-buffer (see below).

The 'pid' parameter to the sys_perf_event_open() system call allows the
counter to be specific to a task:

 pid == 0: if the pid parameter is zero, the counter is attached to the
 current task.

 pid > 0: the counter is attached to a specific task (if the current task
 has sufficient privilege to do so)

 pid < 0: all tasks are counted (per cpu counters)

The 'cpu' parameter allows a counter to be made specific to a CPU:

 cpu >= 0: the counter is restricted to a specific CPU
 cpu == -1: the counter counts on all CPUs

(Note: the combination of 'pid == -1' and 'cpu == -1' is not valid.)

A 'pid > 0' and 'cpu == -1' counter is a per-task counter that counts
events of that task and 'follows' that task to whatever CPU the task
gets scheduled to. Per-task counters can be created by any user, for
their own tasks.

A 'pid == -1' and 'cpu == x' counter is a per-CPU counter that counts
all events on CPU-x. Per-CPU counters need CAP_SYS_ADMIN privilege.

The 'flags' parameter is currently unused and must be zero.

The 'group_fd' parameter allows counter "groups" to be set up. A
counter group has one counter which is the group "leader". The leader
is created first, with group_fd = -1 in the sys_perf_event_open call
that creates it. The rest of the group members are created
subsequently, with group_fd giving the fd of the group leader.
(A single counter on its own is created with group_fd = -1 and is
considered to be a group with only 1 member.)

A counter group is scheduled onto the CPU as a unit, that is, it will
only be put onto the CPU if all of the counters in the group can be
put onto the CPU. This means that the values of the member counters
can be meaningfully compared, added, divided (to get ratios), etc.,
with each other, since they have counted events for the same set of
executed instructions.


As stated, asynchronous events, like counter overflow or PROT_EXEC mmap
tracking, are logged into a ring-buffer. This ring-buffer is created and
accessed through mmap().

The mmap size should be 1+2^n pages, where the first page is a meta-data page
(struct perf_event_mmap_page) that contains various bits of information such
as where the ring-buffer head is.

/*
 * Structure of the page that can be mapped via mmap
 */
struct perf_event_mmap_page {
	__u32	version;		/* version number of this structure */
	__u32	compat_version;		/* lowest version this is compat with */

	/*
	 * Bits needed to read the hw counters in user-space.
	 *
	 *   u32 seq;
	 *   s64 count;
	 *
	 *   do {
	 *     seq = pc->lock;
	 *
	 *     barrier()
	 *     if (pc->index) {
	 *       count = pmc_read(pc->index - 1);
	 *       count += pc->offset;
	 *     } else
	 *       goto regular_read;
	 *
	 *     barrier();
	 *   } while (pc->lock != seq);
	 *
	 * NOTE: for obvious reason this only works on self-monitoring
	 *       processes.
	 */
	__u32	lock;			/* seqlock for synchronization */
	__u32	index;			/* hardware counter identifier */
	__s64	offset;			/* add to hardware counter value */

	/*
	 * Control data for the mmap() data buffer.
	 *
	 * User-space reading this value should issue an rmb(), on SMP capable
	 * platforms, after reading this value -- see perf_event_wakeup().
	 */
	__u32	data_head;		/* head in the data section */
};

NOTE: the hw-counter userspace bits are arch specific and are currently only
      implemented on powerpc.

The following 2^n pages are the ring-buffer which contains events of the form:

#define PERF_RECORD_MISC_KERNEL		(1 << 0)
#define PERF_RECORD_MISC_USER		(1 << 1)
#define PERF_RECORD_MISC_OVERFLOW	(1 << 2)

struct perf_event_header {
	__u32	type;
	__u16	misc;
	__u16	size;
};

enum perf_event_type {

	/*
	 * The MMAP events record the PROT_EXEC mappings so that we can
	 * correlate userspace IPs to code. They have the following structure:
	 *
	 * struct {
	 *	struct perf_event_header	header;
	 *
	 *	u32				pid, tid;
	 *	u64				addr;
	 *	u64				len;
	 *	u64				pgoff;
	 *	char				filename[];
	 * };
	 */
	PERF_RECORD_MMAP		= 1,
	PERF_RECORD_MUNMAP		= 2,

	/*
	 * struct {
	 *	struct perf_event_header	header;
	 *
	 *	u32				pid, tid;
	 *	char				comm[];
	 * };
	 */
	PERF_RECORD_COMM		= 3,

	/*
	 * When header.misc & PERF_RECORD_MISC_OVERFLOW the event_type field
	 * will be PERF_RECORD_*
	 *
	 * struct {
	 *	struct perf_event_header	header;
	 *
	 *	{ u64		ip;	  } && PERF_RECORD_IP
	 *	{ u32		pid, tid; } && PERF_RECORD_TID
	 *	{ u64		time;	  } && PERF_RECORD_TIME
	 *	{ u64		addr;	  } && PERF_RECORD_ADDR
	 *
	 *	{ u64		nr;
	 *	  { u64 event, val; } cnt[nr]; } && PERF_RECORD_GROUP
	 *
	 *	{ u16		nr,
	 *			hv,
	 *			kernel,
	 *			user;
	 *	  u64		ips[nr];  } && PERF_RECORD_CALLCHAIN
	 * };
	 */
};

NOTE: PERF_RECORD_CALLCHAIN is arch specific and currently only implemented
      on x86.

Notification of new events is possible through poll()/select()/epoll() and
fcntl() managing signals.

Normally a notification is generated for every page filled; however, one can
additionally set perf_event_attr.wakeup_events to generate one every
so many counter overflow events.

Future work will include a splice() interface to the ring-buffer.


Counters can be enabled and disabled in two ways: via ioctl and via
prctl. When a counter is disabled, it doesn't count or generate
events but does continue to exist and maintain its count value.

An individual counter can be enabled with

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

or disabled with

	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

For a counter group, pass PERF_IOC_FLAG_GROUP as the third argument.
Enabling or disabling the leader of a group enables or disables the
whole group; that is, while the group leader is disabled, none of the
counters in the group will count. Enabling or disabling a member of a
group other than the leader only affects that counter - disabling a
non-leader stops that counter from counting but doesn't affect any
other counter.

Additionally, non-inherited overflow counters can use

	ioctl(fd, PERF_EVENT_IOC_REFRESH, nr);

to enable a counter for 'nr' events, after which it gets disabled again.

A process can enable or disable all the counter groups that are
attached to it, using prctl:

	prctl(PR_TASK_PERF_EVENTS_ENABLE);

	prctl(PR_TASK_PERF_EVENTS_DISABLE);

This applies to all counters on the current process, whether created
by this process or by another, and doesn't affect any counters that
this process has created on other processes. It only enables or
disables the group leaders, not any other members in the groups.


Arch requirements
-----------------

If your architecture does not have hardware performance metrics, you can
still use the generic software counters based on hrtimers for sampling.

So to start with, in order to add HAVE_PERF_EVENTS to your Kconfig, you
will need at least this:
	- asm/perf_event.h - a basic stub will suffice at first
	- support for atomic64 types (and associated helper functions)

If your architecture does have hardware capabilities, you can override the
weak stub hw_perf_event_init() to register hardware counters.

Architectures that have d-cache aliasing issues, such as Sparc and ARM,
should select PERF_USE_VMALLOC in order to avoid these for perf mmap().