================
Event Histograms
================

Documentation written by Tom Zanussi

1. Introduction
===============

  Histogram triggers are special event triggers that can be used to
  aggregate trace event data into histograms. For information on
  trace events and event triggers, see Documentation/trace/events.rst.


2. Histogram Trigger Command
============================

  A histogram trigger command is an event trigger command that
  aggregates event hits into a hash table keyed on one or more trace
  event format fields (or stacktrace) and a set of running totals
  derived from one or more trace event format fields and/or event
  counts (hitcount).

  The format of a hist trigger is as follows::

    hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
      [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
      [:clear][:name=histname1][:nohitcount][:<handler>.<action>] [if <filter>]

  When a matching event is hit, an entry is added to a hash table
  using the key(s) and value(s) named. Keys and values correspond to
  fields in the event's format description. Values must correspond to
  numeric fields - on an event hit, the value(s) will be added to a
  sum kept for that field. The special string 'hitcount' can be used
  in place of an explicit value field - this is simply a count of
  event hits. If 'values' isn't specified, an implicit 'hitcount'
  value will be automatically created and used as the only value.
  Keys can be any field, or the special string 'common_stacktrace',
  which will use the event's kernel stacktrace as the key. The
  keywords 'keys' or 'key' can be used to specify keys, and the
  keywords 'values', 'vals', or 'val' can be used to specify values.
  Compound keys consisting of up to three fields can be specified by
  the 'keys' keyword. Hashing a compound key produces a unique entry
  in the table for each unique combination of component keys, and can
  be useful for providing more fine-grained summaries of event data.
  Additionally, sort keys consisting of up to two fields can be
  specified by the 'sort' keyword. If more than one field is
  specified, the result will be a 'sort within a sort': the first key
  is taken to be the primary sort key and the second the secondary
  key. If a hist trigger is given a name using the 'name' parameter,
  its histogram data will be shared with other triggers of the same
  name, and trigger hits will update this common data. Only triggers
  with 'compatible' fields can be combined in this way; triggers are
  'compatible' if the fields named in the trigger share the same
  number and type of fields and those fields also have the same names.
  Note that any two events always share the compatible 'hitcount' and
  'common_stacktrace' fields and can therefore be combined using those
  fields, however pointless that may be.

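  The aggregation described above can be modeled in miniature. The
  sketch below is illustrative Python, not kernel code; the event
  dicts and their field names are hypothetical stand-ins for real
  trace events:

```python
from collections import defaultdict

# Hypothetical kmalloc-style events, as if parsed from the trace buffer.
events = [
    {"call_site": 0xffffffff8125847d, "bytes_req": 32},
    {"call_site": 0xffffffff811e2a1b, "bytes_req": 64},
    {"call_site": 0xffffffff8125847d, "bytes_req": 16},
]

# Model of 'hist:keys=call_site:vals=bytes_req': a hash table keyed on
# the key field(s), keeping a sum per value field plus the implicit
# 'hitcount' running total that every entry gets.
table = defaultdict(lambda: {"hitcount": 0, "bytes_req": 0})
for event in events:
    entry = table[(event["call_site"],)]  # a compound key would add fields
    entry["hitcount"] += 1                # implicit count of event hits
    entry["bytes_req"] += event["bytes_req"]

# Display ascending by hitcount, the default sort key.
for key, totals in sorted(table.items(), key=lambda kv: kv[1]["hitcount"]):
    print(hex(key[0]), totals)
```

  Note how the two hits on the same call_site collapse into a single
  entry whose sums accumulate, which is the whole point of keying the
  table on the event fields.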
  'hist' triggers add a 'hist' file to each event's subdirectory.
  Reading the 'hist' file for the event will dump the hash table in
  its entirety to stdout. If there are multiple hist triggers
  attached to an event, there will be a table for each trigger in the
  output. The table displayed for a named trigger will be the same as
  any other instance having the same name. Each printed hash table
  entry is a simple list of the keys and values comprising the entry;
  keys are printed first and are delineated by curly braces, and are
  followed by the set of value fields for the entry. By default,
  numeric fields are displayed as base-10 integers. This can be
  modified by appending any of the following modifiers to the field
  name:

  ============= =================================================
  .hex          display a number as a hex value
  .sym          display an address as a symbol
  .sym-offset   display an address as a symbol and offset
  .syscall      display a syscall id as a system call name
  .execname     display a common_pid as a program name
  .log2         display log2 value rather than raw number
  .buckets=size display grouping of values rather than raw number
  .usecs        display a common_timestamp in microseconds
  .percent      display a number as a percentage value
  .graph        display a bar-graph of a value
  .stacktrace   display as a stacktrace (must be a long[] type)
  ============= =================================================

  Note that in general the semantics of a given field aren't
  interpreted when applying a modifier to it, but there are some
  restrictions to be aware of in this regard:

  - only the 'hex' modifier can be used for values (because values
    are essentially sums, and the other modifiers don't make sense
    in that context).
  - the 'execname' modifier can only be used on a 'common_pid'. The
    reason for this is that the execname is simply the 'comm' value
    saved for the 'current' process when an event was triggered,
    which is the same as the common_pid value saved by the event
    tracing code. Trying to apply that comm value to other pid
    values wouldn't be correct; events that care about the comm
    typically save pid-specific comm fields in the event itself.

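  The value-grouping modifiers are easiest to see with concrete
  numbers. Below is a minimal illustrative sketch (plain Python, not
  kernel code) of what '.log2' and '.buckets=size' do to a displayed
  key, assuming floor semantics for the log2 case:

```python
import math

def log2_group(value):
    # '.log2': display the integer (floor) log2 of the value, so e.g.
    # all sizes in 1024..2047 collapse into the same '10' bucket.
    return int(math.log2(value)) if value > 0 else 0

def linear_bucket(value, size):
    # '.buckets=size': group values into ranges 'size' wide, displayed
    # as the start of each range.
    return value - value % size

print(log2_group(1500))        # 10
print(linear_bucket(100, 32))  # 96
```

  Either modifier trades precision for far fewer hash-table entries,
  which is usually what you want when summarizing allocation sizes or
  latencies.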
  A typical usage scenario would be the following to enable a hist
  trigger, read its current contents, and then turn it off::

    # echo 'hist:keys=skbaddr.hex:vals=len' > \
      /sys/kernel/tracing/events/net/netif_rx/trigger

    # cat /sys/kernel/tracing/events/net/netif_rx/hist

    # echo '!hist:keys=skbaddr.hex:vals=len' > \
      /sys/kernel/tracing/events/net/netif_rx/trigger

  The trigger file itself can be read to show the details of the
  currently attached hist trigger. This information is also displayed
  at the top of the 'hist' file when read.

  By default, the size of the hash table is 2048 entries. The 'size'
  parameter can be used to specify more or fewer than that. The units
  are in terms of hashtable entries - if a run uses more entries than
  specified, the results will show the number of 'drops', the number
  of hits that were ignored. The size should be a power of 2 between
  128 and 131072 (any non-power-of-2 number specified will be rounded
  up).

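  The rounding behavior can be sketched as follows (illustrative
  Python; the assumption that the requested size is clamped to the
  documented range before rounding is mine, not a statement of the
  exact kernel code path):

```python
def hist_table_size(requested):
    # Keep the size within the documented [128, 131072] range...
    n = max(128, min(requested, 131072))
    # ...and round any non-power-of-2 up to the next power of 2.
    return 1 << (n - 1).bit_length()

print(hist_table_size(2048))  # 2048 (already a power of 2)
print(hist_table_size(5000))  # 8192 (rounded up)
```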
  The 'sort' parameter can be used to specify a value field to sort
  on. The default if unspecified is 'hitcount' and the default sort
  order is 'ascending'. To sort in the opposite direction, append
  '.descending' to the sort key.

  The 'pause' parameter can be used to pause an existing hist trigger
  or to start a hist trigger but not log any events until told to do
  so. 'continue' or 'cont' can be used to start or restart a paused
  hist trigger.

  The 'clear' parameter will clear the contents of a running hist
  trigger and leave its current paused/active state unchanged.

  Note that the 'pause', 'cont', and 'clear' parameters should be
  applied using the 'append' shell operator ('>>') if applied to an
  existing trigger, rather than via the '>' operator, which will cause
  the trigger to be removed through truncation.

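  The reason '>' removes an existing trigger is ordinary shell
  redirection semantics: '>' opens the file for truncating writes
  while '>>' opens it for appending, just like the 'w' and 'a' file
  modes in the illustrative Python sketch below (which uses a scratch
  file, not a real trigger file):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "trigger")

# '>' behaves like mode 'w': the file is truncated first, so an
# existing trigger command would be wiped out.
with open(path, "w") as f:
    f.write("hist:keys=call_site\n")

# '>>' behaves like mode 'a': the write lands after existing content,
# leaving the previously attached trigger in place.
with open(path, "a") as f:
    f.write("hist:keys=call_site:pause\n")

with open(path) as f:
    lines = f.read().splitlines()
print(lines)  # both commands survive
```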
  The 'nohitcount' (or NOHC) parameter will suppress display of
  raw hitcount in the histogram. This option requires at least one
  value field which is not a 'raw hitcount'. For example,
  'hist:...:vals=hitcount:nohitcount' is rejected, but
  'hist:...:vals=hitcount.percent:nohitcount' is OK.

- enable_hist/disable_hist

  The enable_hist and disable_hist triggers can be used to have one
  event conditionally start and stop another event's already-attached
  hist trigger. Any number of enable_hist and disable_hist triggers
  can be attached to a given event, allowing that event to kick off
  and stop aggregations on a host of other events.

  The format is very similar to the enable/disable_event triggers::

    enable_hist:<system>:<event>[:count]
    disable_hist:<system>:<event>[:count]

  Instead of enabling or disabling the tracing of the target event
  into the trace buffer as the enable/disable_event triggers do, the
  enable/disable_hist triggers enable or disable the aggregation of
  the target event into a hash table.

  A typical usage scenario for the enable_hist/disable_hist triggers
  would be to first set up a paused hist trigger on some event,
  followed by an enable_hist/disable_hist pair that turns the hist
  aggregation on and off when conditions of interest are hit::

    # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
      /sys/kernel/tracing/events/net/netif_receive_skb/trigger

    # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
      /sys/kernel/tracing/events/sched/sched_process_exec/trigger

    # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
      /sys/kernel/tracing/events/sched/sched_process_exit/trigger

  The above sets up an initially paused hist trigger which is unpaused
  and starts aggregating events when a given program is executed, and
  which stops aggregating when the process exits and the hist trigger
  is paused again.

  The examples below provide a more concrete illustration of the
  concepts and typical usage patterns discussed above.

'special' event fields
----------------------

  There are a number of 'special event fields' available for use as
  keys or values in a hist trigger. These look like and behave as if
  they were actual event fields, but aren't really part of the event's
  field definition or format file. They are however available for any
  event, and can be used anywhere an actual event field could be.
  They are:

  ====================== ==== =======================================
  common_timestamp       u64  timestamp (from ring buffer) associated
                              with the event, in nanoseconds. May be
                              modified by .usecs to have timestamps
                              interpreted as microseconds.
  common_cpu             int  the cpu on which the event occurred.
  ====================== ==== =======================================

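  The '.usecs' modifier is a simple unit conversion of the nanosecond
  timestamp; a one-line illustrative sketch (assuming integer
  division, with sub-microsecond precision dropped):

```python
common_timestamp = 1_234_567_890  # ns, as recorded in the ring buffer
usecs = common_timestamp // 1000  # what '.usecs' would display
print(usecs)
```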
Extended error information
--------------------------

  For some error conditions encountered when invoking a hist trigger
  command, extended error information is available via the
  tracing/error_log file. See Error Conditions in
  :file:`Documentation/trace/ftrace.rst` for details.

6.2 'hist' trigger examples
---------------------------

  The first set of examples creates aggregations using the kmalloc
  event. The fields that can be used for the hist trigger are listed
  in the kmalloc event's format file::

    # cat /sys/kernel/tracing/events/kmem/kmalloc/format
    name: kmalloc
    ID: 374
    format:
        field:unsigned short common_type; offset:0; size:2; signed:0;
        field:unsigned char common_flags; offset:2; size:1; signed:0;
        field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
        field:int common_pid; offset:4; size:4; signed:1;

        field:unsigned long call_site; offset:8; size:8; signed:0;
        field:const void * ptr; offset:16; size:8; signed:0;
        field:size_t bytes_req; offset:24; size:8; signed:0;
        field:size_t bytes_alloc; offset:32; size:8; signed:0;
        field:gfp_t gfp_flags; offset:40; size:4; signed:0;

  We'll start by creating a hist trigger that generates a simple table
  that lists the total number of bytes requested for each function in
  the kernel that made one or more calls to kmalloc::

    # echo 'hist:key=call_site:val=bytes_req.buckets=32' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

  This tells the tracing system to create a 'hist' trigger using the
  call_site field of the kmalloc event as the key for the table, which
  just means that each unique call_site address will have an entry
  created for it in the table. The 'val=bytes_req' parameter tells
  the hist trigger that for each unique entry (call_site) in the
  table, it should keep a running total of the number of bytes
  requested by that call_site.

  We'll let it run for a while and then dump the contents of the
  'hist' file in the kmalloc event's subdirectory (for readability, a
  number of entries have been omitted)::

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]

    { call_site: 18446744072106379007 } hitcount: 1 bytes_req: 176
    { call_site: 18446744071579557049 } hitcount: 1 bytes_req: 1024
    { call_site: 18446744071580608289 } hitcount: 1 bytes_req: 16384
    { call_site: 18446744071581827654 } hitcount: 1 bytes_req: 24
    { call_site: 18446744071580700980 } hitcount: 1 bytes_req: 8
    { call_site: 18446744071579359876 } hitcount: 1 bytes_req: 152
    { call_site: 18446744071580795365 } hitcount: 3 bytes_req: 144
    { call_site: 18446744071581303129 } hitcount: 3 bytes_req: 144
    { call_site: 18446744071580713234 } hitcount: 4 bytes_req: 2560
    { call_site: 18446744071580933750 } hitcount: 4 bytes_req: 736
    .
    .
    .
    { call_site: 18446744072106047046 } hitcount: 69 bytes_req: 5576
    { call_site: 18446744071582116407 } hitcount: 73 bytes_req: 2336
    { call_site: 18446744072106054684 } hitcount: 136 bytes_req: 140504
    { call_site: 18446744072106224230 } hitcount: 136 bytes_req: 19584
    { call_site: 18446744072106078074 } hitcount: 153 bytes_req: 2448
    { call_site: 18446744072106062406 } hitcount: 153 bytes_req: 36720
    { call_site: 18446744071582507929 } hitcount: 153 bytes_req: 37088
    { call_site: 18446744072102520590 } hitcount: 273 bytes_req: 10920
    { call_site: 18446744071582143559 } hitcount: 358 bytes_req: 716
    { call_site: 18446744072106465852 } hitcount: 417 bytes_req: 56712
    { call_site: 18446744072102523378 } hitcount: 485 bytes_req: 27160
    { call_site: 18446744072099568646 } hitcount: 1676 bytes_req: 33520

    Totals:
        Hits: 4610
        Entries: 45
        Dropped: 0

  The output displays a line for each entry, beginning with the key
  specified in the trigger, followed by the value(s) also specified in
  the trigger. At the beginning of the output is a line that displays
  the trigger info, which can also be displayed by reading the
  'trigger' file::

    # cat /sys/kernel/tracing/events/kmem/kmalloc/trigger
    hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]

  At the end of the output are a few lines that display the overall
  totals for the run. The 'Hits' field shows the total number of
  times the event trigger was hit, the 'Entries' field shows the total
  number of used entries in the hash table, and the 'Dropped' field
  shows the number of hits that were dropped because the number of
  used entries for the run exceeded the maximum number of entries
  allowed for the table (normally 0; if not, it's a hint that you may
  want to increase the size of the table using the 'size' parameter).

  Notice in the above output that there's an extra field, 'hitcount',
  which wasn't specified in the trigger. Also notice that in the
  trigger info output, there's a parameter, 'sort=hitcount', which
  wasn't specified in the trigger either. The reason for that is that
  every trigger implicitly keeps a count of the total number of hits
  attributed to a given entry, called the 'hitcount'. That hitcount
  information is explicitly displayed in the output, and in the
  absence of a user-specified sort parameter, is used as the default
  sort field.

  The value 'hitcount' can be used in place of an explicit value in
  the 'values' parameter if you don't really need to have any
  particular field summed and are mainly interested in hit
  frequencies.

  To turn the hist trigger off, simply call up the trigger in the
  command history and re-execute it with a '!' prepended::

    # echo '!hist:key=call_site:val=bytes_req' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

  Finally, notice that the call_site as displayed in the output above
  isn't really very useful. It's an address, but normally addresses
  are displayed in hex. To have a numeric field displayed as a hex
  value, simply append '.hex' to the field name in the trigger::

    # echo 'hist:key=call_site.hex:val=bytes_req' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site.hex:vals=bytes_req:sort=hitcount:size=2048 [active]

    { call_site: ffffffffa026b291 } hitcount: 1 bytes_req: 433
    { call_site: ffffffffa07186ff } hitcount: 1 bytes_req: 176
    { call_site: ffffffff811ae721 } hitcount: 1 bytes_req: 16384
    { call_site: ffffffff811c5134 } hitcount: 1 bytes_req: 8
    { call_site: ffffffffa04a9ebb } hitcount: 1 bytes_req: 511
    { call_site: ffffffff8122e0a6 } hitcount: 1 bytes_req: 12
    { call_site: ffffffff8107da84 } hitcount: 1 bytes_req: 152
    { call_site: ffffffff812d8246 } hitcount: 1 bytes_req: 24
    { call_site: ffffffff811dc1e5 } hitcount: 3 bytes_req: 144
    { call_site: ffffffffa02515e8 } hitcount: 3 bytes_req: 648
    { call_site: ffffffff81258159 } hitcount: 3 bytes_req: 144
    { call_site: ffffffff811c80f4 } hitcount: 4 bytes_req: 544
    .
    .
    .
    { call_site: ffffffffa06c7646 } hitcount: 106 bytes_req: 8024
    { call_site: ffffffffa06cb246 } hitcount: 132 bytes_req: 31680
    { call_site: ffffffffa06cef7a } hitcount: 132 bytes_req: 2112
    { call_site: ffffffff8137e399 } hitcount: 132 bytes_req: 23232
    { call_site: ffffffffa06c941c } hitcount: 185 bytes_req: 171360
    { call_site: ffffffffa06f2a66 } hitcount: 185 bytes_req: 26640
    { call_site: ffffffffa036a70e } hitcount: 265 bytes_req: 10600
    { call_site: ffffffff81325447 } hitcount: 292 bytes_req: 584
    { call_site: ffffffffa072da3c } hitcount: 446 bytes_req: 60656
    { call_site: ffffffffa036b1f2 } hitcount: 526 bytes_req: 29456
    { call_site: ffffffffa0099c06 } hitcount: 1780 bytes_req: 35600

    Totals:
        Hits: 4775
        Entries: 46
        Dropped: 0

  Even that's only marginally more useful - while hex values do look
  more like addresses, what users are typically more interested in
  when looking at text addresses are the corresponding symbols
  instead. To have an address displayed as a symbolic value, simply
  append '.sym' or '.sym-offset' to the field name in the trigger::

    # echo 'hist:key=call_site.sym:val=bytes_req' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=hitcount:size=2048 [active]

    { call_site: [ffffffff810adcb9] syslog_print_all } hitcount: 1 bytes_req: 1024
    { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
    { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
    { call_site: [ffffffff8154acbe] usb_alloc_urb } hitcount: 1 bytes_req: 192
    { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
    { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
    { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
    { call_site: [ffffffff811febd5] fsnotify_alloc_group } hitcount: 2 bytes_req: 528
    { call_site: [ffffffff81440f58] __tty_buffer_request_room } hitcount: 2 bytes_req: 2624
    { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 2 bytes_req: 96
    { call_site: [ffffffffa05e19af] ieee80211_start_tx_ba_session [mac80211] } hitcount: 2 bytes_req: 464
    { call_site: [ffffffff81672406] tcp_get_metrics } hitcount: 2 bytes_req: 304
    { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
    { call_site: [ffffffff81089b05] sched_create_group } hitcount: 2 bytes_req: 1424
    .
    .
    .
    { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1185 bytes_req: 123240
    { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 1185 bytes_req: 104280
    { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 1402 bytes_req: 190672
    { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 1518 bytes_req: 146208
    { call_site: [ffffffffa029070e] drm_vma_node_allow [drm] } hitcount: 1746 bytes_req: 69840
    { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 2021 bytes_req: 792312
    { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 2592 bytes_req: 145152
    { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2629 bytes_req: 378576
    { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2629 bytes_req: 3783248
    { call_site: [ffffffff81325607] apparmor_file_alloc_security } hitcount: 5192 bytes_req: 10384
    { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 5529 bytes_req: 110584
    { call_site: [ffffffff8131ebf7] aa_alloc_task_context } hitcount: 21943 bytes_req: 702176
    { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 55759 bytes_req: 5074265

    Totals:
        Hits: 109928
        Entries: 71
        Dropped: 0

  Because the default sort key above is 'hitcount', the above shows
  the list of call_sites by increasing hitcount, so that at the bottom
  we see the functions that made the most kmalloc calls during the
  run. If instead we wanted to see the top kmalloc callers in
  terms of the number of bytes requested rather than the number of
  calls, and we wanted the top caller to appear at the top, we can use
  the 'sort' parameter, along with the 'descending' modifier::

    # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]

    { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2186 bytes_req: 3397464
    { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1790 bytes_req: 712176
    { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 8132 bytes_req: 513135
    { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 106 bytes_req: 440128
    { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2186 bytes_req: 314784
    { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 2174 bytes_req: 208992
    { call_site: [ffffffff811ae8e1] __kmalloc } hitcount: 8 bytes_req: 131072
    { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 859 bytes_req: 116824
    { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 1834 bytes_req: 102704
    { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 972 bytes_req: 101088
    { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 972 bytes_req: 85536
    { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 3333 bytes_req: 66664
    { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 209 bytes_req: 61632
    .
    .
    .
    { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
    { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
    { call_site: [ffffffff812d8406] copy_semundo } hitcount: 2 bytes_req: 48
    { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 1 bytes_req: 48
    { call_site: [ffffffffa027121a] drm_getmagic [drm] } hitcount: 1 bytes_req: 48
    { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
    { call_site: [ffffffff811c52f4] bprm_change_interp } hitcount: 2 bytes_req: 16
    { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
    { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
    { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7

    Totals:
        Hits: 32133
        Entries: 81
        Dropped: 0

  To display the offset and size information in addition to the symbol
  name, just use 'sym-offset' instead::

    # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site.sym-offset:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]

    { call_site: [ffffffffa046041c] i915_gem_execbuffer2+0x6c/0x2c0 [i915] } hitcount: 4569 bytes_req: 3163720
    { call_site: [ffffffffa0489a66] intel_ring_begin+0xc6/0x1f0 [i915] } hitcount: 4569 bytes_req: 657936
    { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23+0x694/0x1020 [i915] } hitcount: 1519 bytes_req: 472936
    { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23+0x516/0x1020 [i915] } hitcount: 3050 bytes_req: 211832
    { call_site: [ffffffff811e2a1b] seq_buf_alloc+0x1b/0x50 } hitcount: 34 bytes_req: 148384
    { call_site: [ffffffffa04a580c] intel_crtc_page_flip+0xbc/0x870 [i915] } hitcount: 1385 bytes_req: 144040
    { call_site: [ffffffff811ae8e1] __kmalloc+0x191/0x1b0 } hitcount: 8 bytes_req: 131072
    { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl+0x282/0x360 [drm] } hitcount: 1385 bytes_req: 121880
    { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc+0x32/0x100 [drm] } hitcount: 1848 bytes_req: 103488
    { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state+0x2c/0xa0 [i915] } hitcount: 461 bytes_req: 62696
    { call_site: [ffffffffa029070e] drm_vma_node_allow+0x2e/0xd0 [drm] } hitcount: 1541 bytes_req: 61640
    { call_site: [ffffffff815f8d7b] sk_prot_alloc+0xcb/0x1b0 } hitcount: 57 bytes_req: 57456
    .
    .
    .
    { call_site: [ffffffff8109524a] alloc_fair_sched_group+0x5a/0x1a0 } hitcount: 2 bytes_req: 128
    { call_site: [ffffffffa027b921] drm_vm_open_locked+0x31/0xa0 [drm] } hitcount: 3 bytes_req: 96
    { call_site: [ffffffff8122e266] proc_self_follow_link+0x76/0xb0 } hitcount: 8 bytes_req: 96
    { call_site: [ffffffff81213e80] load_elf_binary+0x240/0x1650 } hitcount: 3 bytes_req: 84
    { call_site: [ffffffff8154bc62] usb_control_msg+0x42/0x110 } hitcount: 1 bytes_req: 8
    { call_site: [ffffffffa00bf6fe] hidraw_send_report+0x7e/0x1a0 [hid] } hitcount: 1 bytes_req: 7
    { call_site: [ffffffffa00bf1ca] hidraw_report_event+0x8a/0x120 [hid] } hitcount: 1 bytes_req: 7

    Totals:
        Hits: 26098
        Entries: 64
        Dropped: 0

  We can also add multiple fields to the 'values' parameter. For
  example, we might want to see the total number of bytes allocated
  alongside bytes requested, and display the result sorted by bytes
  allocated in descending order::

    # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=call_site.sym:vals=bytes_req,bytes_alloc:sort=bytes_alloc.descending:size=2048 [active]

    { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 7403 bytes_req: 4084360 bytes_alloc: 5958016
    { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 541 bytes_req: 2213968 bytes_alloc: 2228224
    { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 7404 bytes_req: 1066176 bytes_alloc: 1421568
    { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1565 bytes_req: 557368 bytes_alloc: 1037760
    { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 9557 bytes_req: 595778 bytes_alloc: 695744
    { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 5839 bytes_req: 430680 bytes_alloc: 470400
    { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 2388 bytes_req: 324768 bytes_alloc: 458496
    { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 3911 bytes_req: 219016 bytes_alloc: 250304
    { call_site: [ffffffff815f8d7b] sk_prot_alloc } hitcount: 235 bytes_req: 236880 bytes_alloc: 240640
    { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 557 bytes_req: 169024 bytes_alloc: 221760
    { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 9378 bytes_req: 187548 bytes_alloc: 206312
    { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1519 bytes_req: 157976 bytes_alloc: 194432
    .
    .
    .
    { call_site: [ffffffff8109bd3b] sched_autogroup_create_attach } hitcount: 2 bytes_req: 144 bytes_alloc: 192
    { call_site: [ffffffff81097ee8] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
    { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
    { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
    { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
    { call_site: [ffffffff81213e80] load_elf_binary } hitcount: 3 bytes_req: 84 bytes_alloc: 96
    { call_site: [ffffffff81079a2e] kthread_create_on_node } hitcount: 1 bytes_req: 56 bytes_alloc: 64
    { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
    { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8 bytes_alloc: 8
    { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8

    Totals:
        Hits: 66598
        Entries: 65
        Dropped: 0

  Finally, to finish off our kmalloc example, instead of simply having
  the hist trigger display symbolic call_sites, we can have the hist
  trigger additionally display the complete set of kernel stack traces
  that led to each call_site. To do that, we simply use the special
  value 'common_stacktrace' for the key parameter::

    # echo 'hist:keys=common_stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
      /sys/kernel/tracing/events/kmem/kmalloc/trigger

  The above trigger will use the kernel stack trace in effect when an
  event is triggered as the key for the hash table. This allows the
  enumeration of every kernel callpath that led up to a particular
  event, along with a running total of any of the event fields for
  that event. Here we tally bytes requested and bytes allocated for
  every callpath in the system that led up to a kmalloc (in this case
  every callpath to a kmalloc for a kernel compile)::

    # cat /sys/kernel/tracing/events/kmem/kmalloc/hist
    # trigger info: hist:keys=common_stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]

4b512860 566 { common_stacktrace:
b8df4a36
TZ
567 __kmalloc_track_caller+0x10b/0x1a0
568 kmemdup+0x20/0x50
569 hidraw_report_event+0x8a/0x120 [hid]
570 hid_report_raw_event+0x3ea/0x440 [hid]
571 hid_input_report+0x112/0x190 [hid]
572 hid_irq_in+0xc2/0x260 [usbhid]
573 __usb_hcd_giveback_urb+0x72/0x120
574 usb_giveback_urb_bh+0x9e/0xe0
575 tasklet_hi_action+0xf8/0x100
576 __do_softirq+0x114/0x2c0
577 irq_exit+0xa5/0xb0
578 do_IRQ+0x5a/0xf0
579 ret_from_intr+0x0/0x30
580 cpuidle_enter+0x17/0x20
581 cpu_startup_entry+0x315/0x3e0
582 rest_init+0x7c/0x80
583 } hitcount: 3 bytes_req: 21 bytes_alloc: 24
4b512860 584 { common_stacktrace:
b8df4a36
TZ
585 __kmalloc_track_caller+0x10b/0x1a0
586 kmemdup+0x20/0x50
587 hidraw_report_event+0x8a/0x120 [hid]
588 hid_report_raw_event+0x3ea/0x440 [hid]
589 hid_input_report+0x112/0x190 [hid]
590 hid_irq_in+0xc2/0x260 [usbhid]
591 __usb_hcd_giveback_urb+0x72/0x120
592 usb_giveback_urb_bh+0x9e/0xe0
593 tasklet_hi_action+0xf8/0x100
594 __do_softirq+0x114/0x2c0
595 irq_exit+0xa5/0xb0
596 do_IRQ+0x5a/0xf0
597 ret_from_intr+0x0/0x30
598 } hitcount: 3 bytes_req: 21 bytes_alloc: 24
4b512860 599 { common_stacktrace:
b8df4a36
TZ
600 kmem_cache_alloc_trace+0xeb/0x150
601 aa_alloc_task_context+0x27/0x40
602 apparmor_cred_prepare+0x1f/0x50
603 security_prepare_creds+0x16/0x20
604 prepare_creds+0xdf/0x1a0
605 SyS_capset+0xb5/0x200
606 system_call_fastpath+0x12/0x6a
607 } hitcount: 1 bytes_req: 32 bytes_alloc: 32
608 .
609 .
610 .
4b512860 611 { common_stacktrace:
b8df4a36
TZ
612 __kmalloc+0x11b/0x1b0
613 i915_gem_execbuffer2+0x6c/0x2c0 [i915]
614 drm_ioctl+0x349/0x670 [drm]
615 do_vfs_ioctl+0x2f0/0x4f0
616 SyS_ioctl+0x81/0xa0
617 system_call_fastpath+0x12/0x6a
618 } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
4b512860 619 { common_stacktrace:
b8df4a36
TZ
620 __kmalloc+0x11b/0x1b0
621 load_elf_phdrs+0x76/0xa0
622 load_elf_binary+0x102/0x1650
623 search_binary_handler+0x97/0x1d0
624 do_execveat_common.isra.34+0x551/0x6e0
625 SyS_execve+0x3a/0x50
626 return_from_execve+0x0/0x23
627 } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
4b512860 628 { common_stacktrace:
b8df4a36
TZ
629 kmem_cache_alloc_trace+0xeb/0x150
630 apparmor_file_alloc_security+0x27/0x40
631 security_file_alloc+0x16/0x20
632 get_empty_filp+0x93/0x1c0
633 path_openat+0x31/0x5f0
634 do_filp_open+0x3a/0x90
635 do_sys_open+0x128/0x220
636 SyS_open+0x1e/0x20
637 system_call_fastpath+0x12/0x6a
638 } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
4b512860 639 { common_stacktrace:
b8df4a36
TZ
640 __kmalloc+0x11b/0x1b0
641 seq_buf_alloc+0x1b/0x50
642 seq_read+0x2cc/0x370
643 proc_reg_read+0x3d/0x80
644 __vfs_read+0x28/0xe0
645 vfs_read+0x86/0x140
646 SyS_read+0x46/0xb0
647 system_call_fastpath+0x12/0x6a
648 } hitcount: 19133 bytes_req: 78368768 bytes_alloc: 78368768
649
650 Totals:
651 Hits: 6085872
652 Entries: 253
653 Dropped: 0
654
  If you key a hist trigger on common_pid, in order for example to
  gather and display sorted totals for each process, you can use the
  special .execname modifier to display the executable names for the
  processes in the table rather than raw pids. The example below
  keeps a per-process sum of total bytes read::

    # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
          /sys/kernel/tracing/events/syscalls/sys_enter_read/trigger

    # cat /sys/kernel/tracing/events/syscalls/sys_enter_read/hist
    # trigger info: hist:keys=common_pid.execname:vals=count:sort=count.descending:size=2048 [active]

    { common_pid: gnome-terminal [ 3196] } hitcount: 280 count: 1093512
    { common_pid: Xorg [ 1309] } hitcount: 525 count: 256640
    { common_pid: compiz [ 2889] } hitcount: 59 count: 254400
    { common_pid: bash [ 8710] } hitcount: 3 count: 66369
    { common_pid: dbus-daemon-lau [ 8703] } hitcount: 49 count: 47739
    { common_pid: irqbalance [ 1252] } hitcount: 27 count: 27648
    { common_pid: 01ifupdown [ 8705] } hitcount: 3 count: 17216
    { common_pid: dbus-daemon [ 772] } hitcount: 10 count: 12396
    { common_pid: Socket Thread [ 8342] } hitcount: 11 count: 11264
    { common_pid: nm-dhcp-client. [ 8701] } hitcount: 6 count: 7424
    { common_pid: gmain [ 1315] } hitcount: 18 count: 6336
    .
    .
    .
    { common_pid: postgres [ 1892] } hitcount: 2 count: 32
    { common_pid: postgres [ 1891] } hitcount: 2 count: 32
    { common_pid: gmain [ 8704] } hitcount: 2 count: 32
    { common_pid: upstart-dbus-br [ 2740] } hitcount: 21 count: 21
    { common_pid: nm-dispatcher.a [ 8696] } hitcount: 1 count: 16
    { common_pid: indicator-datet [ 2904] } hitcount: 1 count: 16
    { common_pid: gdbus [ 2998] } hitcount: 1 count: 16
    { common_pid: rtkit-daemon [ 2052] } hitcount: 1 count: 8
    { common_pid: init [ 1] } hitcount: 2 count: 2

    Totals:
        Hits: 2116
        Entries: 51
        Dropped: 0

  Similarly, if you key a hist trigger on syscall id, for example to
  gather and display a list of systemwide syscall hits, you can use
  the special .syscall modifier to display the syscall names rather
  than raw ids. The example below keeps a running total of syscall
  counts for the system during the run::

    # echo 'hist:key=id.syscall:val=hitcount' > \
          /sys/kernel/tracing/events/raw_syscalls/sys_enter/trigger

    # cat /sys/kernel/tracing/events/raw_syscalls/sys_enter/hist
    # trigger info: hist:keys=id.syscall:vals=hitcount:sort=hitcount:size=2048 [active]

    { id: sys_fsync [ 74] } hitcount: 1
    { id: sys_newuname [ 63] } hitcount: 1
    { id: sys_prctl [157] } hitcount: 1
    { id: sys_statfs [137] } hitcount: 1
    { id: sys_symlink [ 88] } hitcount: 1
    { id: sys_sendmmsg [307] } hitcount: 1
    { id: sys_semctl [ 66] } hitcount: 1
    { id: sys_readlink [ 89] } hitcount: 3
    { id: sys_bind [ 49] } hitcount: 3
    { id: sys_getsockname [ 51] } hitcount: 3
    { id: sys_unlink [ 87] } hitcount: 3
    { id: sys_rename [ 82] } hitcount: 4
    { id: unknown_syscall [ 58] } hitcount: 4
    { id: sys_connect [ 42] } hitcount: 4
    { id: sys_getpid [ 39] } hitcount: 4
    .
    .
    .
    { id: sys_rt_sigprocmask [ 14] } hitcount: 952
    { id: sys_futex [202] } hitcount: 1534
    { id: sys_write [ 1] } hitcount: 2689
    { id: sys_setitimer [ 38] } hitcount: 2797
    { id: sys_read [ 0] } hitcount: 3202
    { id: sys_select [ 23] } hitcount: 3773
    { id: sys_writev [ 20] } hitcount: 4531
    { id: sys_poll [ 7] } hitcount: 8314
    { id: sys_recvmsg [ 47] } hitcount: 13738
    { id: sys_ioctl [ 16] } hitcount: 21843

    Totals:
        Hits: 67612
        Entries: 72
        Dropped: 0

  The syscall counts above provide a rough overall picture of system
  call activity on the system; we can see for example that the most
  popular system call on this system was the 'sys_ioctl' system call.

  We can use 'compound' keys to refine that number and provide some
  further insight as to which processes exactly contribute to the
  overall ioctl count.

  The command below keeps a hitcount for every unique combination of
  system call id and pid - the end result is essentially a table
  that keeps a per-pid sum of system call hits. The results are
  sorted using the system call id as the primary key, and the
  hitcount sum as the secondary key::

    # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
          /sys/kernel/tracing/events/raw_syscalls/sys_enter/trigger

    # cat /sys/kernel/tracing/events/raw_syscalls/sys_enter/hist
    # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 [active]

    { id: sys_read [ 0], common_pid: rtkit-daemon [ 1877] } hitcount: 1
    { id: sys_read [ 0], common_pid: gdbus [ 2976] } hitcount: 1
    { id: sys_read [ 0], common_pid: console-kit-dae [ 3400] } hitcount: 1
    { id: sys_read [ 0], common_pid: postgres [ 1865] } hitcount: 1
    { id: sys_read [ 0], common_pid: deja-dup-monito [ 3543] } hitcount: 2
    { id: sys_read [ 0], common_pid: NetworkManager [ 890] } hitcount: 2
    { id: sys_read [ 0], common_pid: evolution-calen [ 3048] } hitcount: 2
    { id: sys_read [ 0], common_pid: postgres [ 1864] } hitcount: 2
    { id: sys_read [ 0], common_pid: nm-applet [ 3022] } hitcount: 2
    { id: sys_read [ 0], common_pid: whoopsie [ 1212] } hitcount: 2
    .
    .
    .
    { id: sys_ioctl [ 16], common_pid: bash [ 8479] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: bash [ 3472] } hitcount: 12
    { id: sys_ioctl [ 16], common_pid: gnome-terminal [ 3199] } hitcount: 16
    { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 1808
    { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 5580
    .
    .
    .
    { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2690] } hitcount: 3
    { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2688] } hitcount: 16
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 975] } hitcount: 2
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 3204] } hitcount: 4
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 2888] } hitcount: 4
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 3003] } hitcount: 4
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 2873] } hitcount: 4
    { id: sys_inotify_add_watch [254], common_pid: gmain [ 3196] } hitcount: 6
    { id: sys_openat [257], common_pid: java [ 2623] } hitcount: 2
    { id: sys_eventfd2 [290], common_pid: ibus-ui-gtk3 [ 2760] } hitcount: 4
    { id: sys_eventfd2 [290], common_pid: compiz [ 2994] } hitcount: 6

    Totals:
        Hits: 31536
        Entries: 323
        Dropped: 0

  The above list does give us a breakdown of the ioctl syscall by
  pid, but it also gives us quite a bit more than that, which we
  don't really care about at the moment. Since we know the syscall
  id for sys_ioctl (16, displayed next to the sys_ioctl name), we
  can use that to filter out all the other syscalls::

    # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
          /sys/kernel/tracing/events/raw_syscalls/sys_enter/trigger

    # cat /sys/kernel/tracing/events/raw_syscalls/sys_enter/hist
    # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 if id == 16 [active]

    { id: sys_ioctl [ 16], common_pid: gmain [ 2769] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: evolution-addre [ 8571] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: gmain [ 3003] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: gmain [ 2781] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: gmain [ 2829] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: bash [ 8726] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: bash [ 8508] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: gmain [ 2970] } hitcount: 1
    { id: sys_ioctl [ 16], common_pid: gmain [ 2768] } hitcount: 1
    .
    .
    .
    { id: sys_ioctl [ 16], common_pid: pool [ 8559] } hitcount: 45
    { id: sys_ioctl [ 16], common_pid: pool [ 8555] } hitcount: 48
    { id: sys_ioctl [ 16], common_pid: pool [ 8551] } hitcount: 48
    { id: sys_ioctl [ 16], common_pid: avahi-daemon [ 896] } hitcount: 66
    { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 26674
    { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 73443

    Totals:
        Hits: 101162
        Entries: 103
        Dropped: 0

  The above output shows that 'compiz' and 'Xorg' are far and away
  the heaviest ioctl callers (which might lead to questions about
  whether they really need to be making all those calls and to
  possible avenues for further investigation.)

  The compound key examples used a key and a sum value (hitcount) to
  sort the output, but we can just as easily use two keys instead.
  Here's an example where we use a compound key composed of the
  common_pid and size event fields. Sorting with pid as the primary
  key and 'size' as the secondary key allows us to display an
  ordered summary of the recvfrom sizes, with counts, received by
  each process::

    # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
          /sys/kernel/tracing/events/syscalls/sys_enter_recvfrom/trigger

    # cat /sys/kernel/tracing/events/syscalls/sys_enter_recvfrom/hist
    # trigger info: hist:keys=common_pid.execname,size:vals=hitcount:sort=common_pid.execname,size:size=2048 [active]

    { common_pid: smbd [ 784], size: 4 } hitcount: 1
    { common_pid: dnsmasq [ 1412], size: 4096 } hitcount: 672
    { common_pid: postgres [ 1796], size: 1000 } hitcount: 6
    { common_pid: postgres [ 1867], size: 1000 } hitcount: 10
    { common_pid: bamfdaemon [ 2787], size: 28 } hitcount: 2
    { common_pid: bamfdaemon [ 2787], size: 14360 } hitcount: 1
    { common_pid: compiz [ 2994], size: 8 } hitcount: 1
    { common_pid: compiz [ 2994], size: 20 } hitcount: 11
    { common_pid: gnome-terminal [ 3199], size: 4 } hitcount: 2
    { common_pid: firefox [ 8817], size: 4 } hitcount: 1
    { common_pid: firefox [ 8817], size: 8 } hitcount: 5
    { common_pid: firefox [ 8817], size: 588 } hitcount: 2
    { common_pid: firefox [ 8817], size: 628 } hitcount: 1
    { common_pid: firefox [ 8817], size: 6944 } hitcount: 1
    { common_pid: firefox [ 8817], size: 408880 } hitcount: 2
    { common_pid: firefox [ 8822], size: 8 } hitcount: 2
    { common_pid: firefox [ 8822], size: 160 } hitcount: 2
    { common_pid: firefox [ 8822], size: 320 } hitcount: 2
    { common_pid: firefox [ 8822], size: 352 } hitcount: 1
    .
    .
    .
    { common_pid: pool [ 8923], size: 1960 } hitcount: 10
    { common_pid: pool [ 8923], size: 2048 } hitcount: 10
    { common_pid: pool [ 8924], size: 1960 } hitcount: 10
    { common_pid: pool [ 8924], size: 2048 } hitcount: 10
    { common_pid: pool [ 8928], size: 1964 } hitcount: 4
    { common_pid: pool [ 8928], size: 1965 } hitcount: 2
    { common_pid: pool [ 8928], size: 2048 } hitcount: 6
    { common_pid: pool [ 8929], size: 1982 } hitcount: 1
    { common_pid: pool [ 8929], size: 2048 } hitcount: 1

    Totals:
        Hits: 2016
        Entries: 224
        Dropped: 0

  The above example also illustrates the fact that although a compound
  key is treated as a single entity for hashing purposes, the sub-keys
  it's composed of can be accessed independently.

  The next example uses a string field as the hash key and
  demonstrates how you can manually pause and continue a hist trigger.
  In this example, we'll aggregate fork counts and don't expect a
  large number of entries in the hash table, so we'll drop it to a
  much smaller number, say 256::

    # echo 'hist:key=child_comm:val=hitcount:size=256' > \
          /sys/kernel/tracing/events/sched/sched_process_fork/trigger

    # cat /sys/kernel/tracing/events/sched/sched_process_fork/hist
    # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]

    { child_comm: dconf worker } hitcount: 1
    { child_comm: ibus-daemon } hitcount: 1
    { child_comm: whoopsie } hitcount: 1
    { child_comm: smbd } hitcount: 1
    { child_comm: gdbus } hitcount: 1
    { child_comm: kthreadd } hitcount: 1
    { child_comm: dconf worker } hitcount: 1
    { child_comm: evolution-alarm } hitcount: 2
    { child_comm: Socket Thread } hitcount: 2
    { child_comm: postgres } hitcount: 2
    { child_comm: bash } hitcount: 3
    { child_comm: compiz } hitcount: 3
    { child_comm: evolution-sourc } hitcount: 4
    { child_comm: dhclient } hitcount: 4
    { child_comm: pool } hitcount: 5
    { child_comm: nm-dispatcher.a } hitcount: 8
    { child_comm: firefox } hitcount: 8
    { child_comm: dbus-daemon } hitcount: 8
    { child_comm: glib-pacrunner } hitcount: 10
    { child_comm: evolution } hitcount: 23

    Totals:
        Hits: 89
        Entries: 20
        Dropped: 0

  If we want to pause the hist trigger, we can simply append :pause to
  the command that started the trigger. Notice that the trigger info
  displays as [paused]::

    # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
          /sys/kernel/tracing/events/sched/sched_process_fork/trigger

    # cat /sys/kernel/tracing/events/sched/sched_process_fork/hist
    # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [paused]

    { child_comm: dconf worker } hitcount: 1
    { child_comm: kthreadd } hitcount: 1
    { child_comm: dconf worker } hitcount: 1
    { child_comm: gdbus } hitcount: 1
    { child_comm: ibus-daemon } hitcount: 1
    { child_comm: Socket Thread } hitcount: 2
    { child_comm: evolution-alarm } hitcount: 2
    { child_comm: smbd } hitcount: 2
    { child_comm: bash } hitcount: 3
    { child_comm: whoopsie } hitcount: 3
    { child_comm: compiz } hitcount: 3
    { child_comm: evolution-sourc } hitcount: 4
    { child_comm: pool } hitcount: 5
    { child_comm: postgres } hitcount: 6
    { child_comm: firefox } hitcount: 8
    { child_comm: dhclient } hitcount: 10
    { child_comm: emacs } hitcount: 12
    { child_comm: dbus-daemon } hitcount: 20
    { child_comm: nm-dispatcher.a } hitcount: 20
    { child_comm: evolution } hitcount: 35
    { child_comm: glib-pacrunner } hitcount: 59

    Totals:
        Hits: 199
        Entries: 21
        Dropped: 0

  To manually continue having the trigger aggregate events, append
  :cont instead. Notice that the trigger info displays as [active]
  again, and the data has changed::

    # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
          /sys/kernel/tracing/events/sched/sched_process_fork/trigger

    # cat /sys/kernel/tracing/events/sched/sched_process_fork/hist
    # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]

    { child_comm: dconf worker } hitcount: 1
    { child_comm: dconf worker } hitcount: 1
    { child_comm: kthreadd } hitcount: 1
    { child_comm: gdbus } hitcount: 1
    { child_comm: ibus-daemon } hitcount: 1
    { child_comm: Socket Thread } hitcount: 2
    { child_comm: evolution-alarm } hitcount: 2
    { child_comm: smbd } hitcount: 2
    { child_comm: whoopsie } hitcount: 3
    { child_comm: compiz } hitcount: 3
    { child_comm: evolution-sourc } hitcount: 4
    { child_comm: bash } hitcount: 5
    { child_comm: pool } hitcount: 5
    { child_comm: postgres } hitcount: 6
    { child_comm: firefox } hitcount: 8
    { child_comm: dhclient } hitcount: 11
    { child_comm: emacs } hitcount: 12
    { child_comm: dbus-daemon } hitcount: 22
    { child_comm: nm-dispatcher.a } hitcount: 22
    { child_comm: evolution } hitcount: 35
    { child_comm: glib-pacrunner } hitcount: 59

    Totals:
        Hits: 206
        Entries: 21
        Dropped: 0

  The previous example showed how to start and stop a hist trigger by
  appending 'pause' and 'continue' to the hist trigger command. A
  hist trigger can also be started in a paused state by initially
  starting the trigger with ':pause' appended. This allows you to
  start the trigger only when you're ready to start collecting data
  and not before. For example, you could start the trigger in a
  paused state, then unpause it and do something you want to measure,
  then pause the trigger again when done.

  Of course, doing this manually can be difficult and error-prone, but
  it is possible to automatically start and stop a hist trigger based
  on some condition, via the enable_hist and disable_hist triggers.

  For example, suppose we wanted to take a look at the relative
  weights in terms of skb length for each callpath that leads to a
  netif_receive_skb event when downloading a decent-sized file using
  wget.

  First we set up an initially paused stacktrace trigger on the
  netif_receive_skb event::

    # echo 'hist:key=common_stacktrace:vals=len:pause' > \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger

  Next, we set up an 'enable_hist' trigger on the sched_process_exec
  event, with an 'if filename==/usr/bin/wget' filter. The effect of
  this new trigger is that it will 'unpause' the hist trigger we just
  set up on netif_receive_skb if and only if it sees a
  sched_process_exec event with a filename of '/usr/bin/wget'. When
  that happens, all netif_receive_skb events are aggregated into a
  hash table keyed on stacktrace::

    # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
          /sys/kernel/tracing/events/sched/sched_process_exec/trigger

  The aggregation continues until the netif_receive_skb hist trigger
  is paused again, which is what the following disable_hist event does
  by creating a similar setup on the sched_process_exit event, using
  the filter 'comm==wget'::

    # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
          /sys/kernel/tracing/events/sched/sched_process_exit/trigger

  The overall effect is that netif_receive_skb events are aggregated
  into the hash table for only the duration of the wget. Executing a
  wget command and then listing the 'hist' file will display the
  output generated by the wget command::

    $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz

    # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
    # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]

    { common_stacktrace:
         __netif_receive_skb_core+0x46d/0x990
         __netif_receive_skb+0x18/0x60
         netif_receive_skb_internal+0x23/0x90
         napi_gro_receive+0xc8/0x100
         ieee80211_deliver_skb+0xd6/0x270 [mac80211]
         ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
         ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
         ieee80211_rx+0x31d/0x900 [mac80211]
         iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
         iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
         iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
         irq_thread_fn+0x20/0x50
         irq_thread+0x11f/0x150
         kthread+0xd2/0xf0
         ret_from_fork+0x42/0x70
    } hitcount: 85 len: 28884
    { common_stacktrace:
         __netif_receive_skb_core+0x46d/0x990
         __netif_receive_skb+0x18/0x60
         netif_receive_skb_internal+0x23/0x90
         napi_gro_complete+0xa4/0xe0
         dev_gro_receive+0x23a/0x360
         napi_gro_receive+0x30/0x100
         ieee80211_deliver_skb+0xd6/0x270 [mac80211]
         ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
         ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
         ieee80211_rx+0x31d/0x900 [mac80211]
         iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
         iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
         iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
         irq_thread_fn+0x20/0x50
         irq_thread+0x11f/0x150
         kthread+0xd2/0xf0
    } hitcount: 98 len: 664329
    { common_stacktrace:
         __netif_receive_skb_core+0x46d/0x990
         __netif_receive_skb+0x18/0x60
         process_backlog+0xa8/0x150
         net_rx_action+0x15d/0x340
         __do_softirq+0x114/0x2c0
         do_softirq_own_stack+0x1c/0x30
         do_softirq+0x65/0x70
         __local_bh_enable_ip+0xb5/0xc0
         ip_finish_output+0x1f4/0x840
         ip_output+0x6b/0xc0
         ip_local_out_sk+0x31/0x40
         ip_send_skb+0x1a/0x50
         udp_send_skb+0x173/0x2a0
         udp_sendmsg+0x2bf/0x9f0
         inet_sendmsg+0x64/0xa0
         sock_sendmsg+0x3d/0x50
    } hitcount: 115 len: 13030
    { common_stacktrace:
         __netif_receive_skb_core+0x46d/0x990
         __netif_receive_skb+0x18/0x60
         netif_receive_skb_internal+0x23/0x90
         napi_gro_complete+0xa4/0xe0
         napi_gro_flush+0x6d/0x90
         iwl_pcie_irq_handler+0x92a/0x12f0 [iwlwifi]
         irq_thread_fn+0x20/0x50
         irq_thread+0x11f/0x150
         kthread+0xd2/0xf0
         ret_from_fork+0x42/0x70
    } hitcount: 934 len: 5512212

    Totals:
        Hits: 1232
        Entries: 4
        Dropped: 0

  The above shows all the netif_receive_skb callpaths and their total
  lengths for the duration of the wget command.

  The 'clear' hist trigger param can be used to clear the hash table.
  Suppose we wanted to try another run of the previous example but
  this time also wanted to see the complete list of events that went
  into the histogram. In order to avoid having to set everything up
  again, we can just clear the histogram first::

    # echo 'hist:key=common_stacktrace:vals=len:clear' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger

  Just to verify that it is in fact cleared, here's what we now see in
  the hist file::

    # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist
    # trigger info: hist:keys=common_stacktrace:vals=len:sort=hitcount:size=2048 [paused]

    Totals:
        Hits: 0
        Entries: 0
        Dropped: 0

  Since we want to see the detailed list of every netif_receive_skb
  event occurring during the new run, which are in fact the same
  events being aggregated into the hash table, we add some additional
  'enable_event' events to the triggering sched_process_exec and
  sched_process_exit events as such::

    # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
          /sys/kernel/tracing/events/sched/sched_process_exec/trigger

    # echo 'disable_event:net:netif_receive_skb if comm==wget' > \
          /sys/kernel/tracing/events/sched/sched_process_exit/trigger

  If you read the trigger files for the sched_process_exec and
  sched_process_exit triggers, you should see two triggers for each:
  one enabling/disabling the hist aggregation and the other
  enabling/disabling the logging of events::

    # cat /sys/kernel/tracing/events/sched/sched_process_exec/trigger
    enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
    enable_hist:net:netif_receive_skb:unlimited if filename==/usr/bin/wget

    # cat /sys/kernel/tracing/events/sched/sched_process_exit/trigger
    enable_event:net:netif_receive_skb:unlimited if comm==wget
    disable_hist:net:netif_receive_skb:unlimited if comm==wget

  In other words, whenever either of the sched_process_exec or
  sched_process_exit events is hit and matches 'wget', it enables or
  disables both the histogram and the event log, and what you end up
  with is a hash table and set of events just covering the specified
  duration. Run the wget command again::

    $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz

  Displaying the 'hist' file should show something similar to what you
  saw in the last run, but this time you should also see the
  individual events in the trace file::

    # cat /sys/kernel/tracing/trace

    # tracer: nop
    #
    # entries-in-buffer/entries-written: 183/1426   #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
                wget-15108 [000] ..s1 31769.606929: netif_receive_skb: dev=lo skbaddr=ffff88009c353100 len=60
                wget-15108 [000] ..s1 31769.606999: netif_receive_skb: dev=lo skbaddr=ffff88009c353200 len=60
             dnsmasq-1382  [000] ..s1 31769.677652: netif_receive_skb: dev=lo skbaddr=ffff88009c352b00 len=130
             dnsmasq-1382  [000] ..s1 31769.685917: netif_receive_skb: dev=lo skbaddr=ffff88009c352200 len=138
    ##### CPU 2 buffer started ####
      irq/29-iwlwifi-559   [002] ..s. 31772.031529: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433d00 len=2948
      irq/29-iwlwifi-559   [002] ..s. 31772.031572: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432200 len=1500
      irq/29-iwlwifi-559   [002] ..s. 31772.032196: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433100 len=2948
      irq/29-iwlwifi-559   [002] ..s. 31772.032761: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433000 len=2948
      irq/29-iwlwifi-559   [002] ..s. 31772.033220: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432e00 len=1500
    .
    .
    .

  The following example demonstrates how multiple hist triggers can be
  attached to a given event. This capability can be useful for
  creating a set of different summaries derived from the same set of
  events, or for comparing the effects of different filters, among
  other things::

    # echo 'hist:keys=skbaddr.hex:vals=len if len < 0' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger
    # echo 'hist:keys=skbaddr.hex:vals=len if len > 4096' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger
    # echo 'hist:keys=skbaddr.hex:vals=len if len == 256' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger
    # echo 'hist:keys=skbaddr.hex:vals=len' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger
    # echo 'hist:keys=len:vals=common_preempt_count' >> \
          /sys/kernel/tracing/events/net/netif_receive_skb/trigger

  The above set of commands creates four triggers differing only in
  their filters, along with a completely different though fairly
  nonsensical trigger. Note that in order to append multiple hist
  triggers to the same file, you should use the '>>' operator to
  append them ('>' will also add the new hist trigger, but will remove
  any existing hist triggers beforehand).

1247 Displaying the contents of the 'hist' file for the event shows the
ea272257 1248 contents of all five histograms::
b8df4a36 1249
2abfcd29 1250 # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist

  # event histogram
  #
  # trigger info: hist:keys=len:vals=hitcount,common_preempt_count:sort=hitcount:size=2048 [active]
  #

  { len: 176 } hitcount: 1 common_preempt_count: 0
  { len: 223 } hitcount: 1 common_preempt_count: 0
  { len: 4854 } hitcount: 1 common_preempt_count: 0
  { len: 395 } hitcount: 1 common_preempt_count: 0
  { len: 177 } hitcount: 1 common_preempt_count: 0
  { len: 446 } hitcount: 1 common_preempt_count: 0
  { len: 1601 } hitcount: 1 common_preempt_count: 0
  .
  .
  .
  { len: 1280 } hitcount: 66 common_preempt_count: 0
  { len: 116 } hitcount: 81 common_preempt_count: 40
  { len: 708 } hitcount: 112 common_preempt_count: 0
  { len: 46 } hitcount: 221 common_preempt_count: 0
  { len: 1264 } hitcount: 458 common_preempt_count: 0

  Totals:
      Hits: 1428
      Entries: 147
      Dropped: 0


  # event histogram
  #
  # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
  #

  { skbaddr: ffff8800baee5e00 } hitcount: 1 len: 130
  { skbaddr: ffff88005f3d5600 } hitcount: 1 len: 1280
  { skbaddr: ffff88005f3d4900 } hitcount: 1 len: 1280
  { skbaddr: ffff88009fed6300 } hitcount: 1 len: 115
  { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 115
  { skbaddr: ffff88008cdb1900 } hitcount: 1 len: 46
  { skbaddr: ffff880064b5ef00 } hitcount: 1 len: 118
  { skbaddr: ffff880044e3c700 } hitcount: 1 len: 60
  { skbaddr: ffff880100065900 } hitcount: 1 len: 46
  { skbaddr: ffff8800d46bd500 } hitcount: 1 len: 116
  { skbaddr: ffff88005f3d5f00 } hitcount: 1 len: 1280
  { skbaddr: ffff880100064700 } hitcount: 1 len: 365
  { skbaddr: ffff8800badb6f00 } hitcount: 1 len: 60
  .
  .
  .
  { skbaddr: ffff88009fe0be00 } hitcount: 27 len: 24677
  { skbaddr: ffff88009fe0a400 } hitcount: 27 len: 23052
  { skbaddr: ffff88009fe0b700 } hitcount: 31 len: 25589
  { skbaddr: ffff88009fe0b600 } hitcount: 32 len: 27326
  { skbaddr: ffff88006a462800 } hitcount: 68 len: 71678
  { skbaddr: ffff88006a463700 } hitcount: 70 len: 72678
  { skbaddr: ffff88006a462b00 } hitcount: 71 len: 77589
  { skbaddr: ffff88006a463600 } hitcount: 73 len: 71307
  { skbaddr: ffff88006a462200 } hitcount: 81 len: 81032

  Totals:
      Hits: 1451
      Entries: 318
      Dropped: 0


  # event histogram
  #
  # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len == 256 [active]
  #


  Totals:
      Hits: 0
      Entries: 0
      Dropped: 0


  # event histogram
  #
  # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len > 4096 [active]
  #

  { skbaddr: ffff88009fd2c300 } hitcount: 1 len: 7212
  { skbaddr: ffff8800d2bcce00 } hitcount: 1 len: 7212
  { skbaddr: ffff8800d2bcd700 } hitcount: 1 len: 7212
  { skbaddr: ffff8800d2bcda00 } hitcount: 1 len: 21492
  { skbaddr: ffff8800ae2e2d00 } hitcount: 1 len: 7212
  { skbaddr: ffff8800d2bcdb00 } hitcount: 1 len: 7212
  { skbaddr: ffff88006a4df500 } hitcount: 1 len: 4854
  { skbaddr: ffff88008ce47b00 } hitcount: 1 len: 18636
  { skbaddr: ffff8800ae2e2200 } hitcount: 1 len: 12924
  { skbaddr: ffff88005f3e1000 } hitcount: 1 len: 4356
  { skbaddr: ffff8800d2bcdc00 } hitcount: 2 len: 24420
  { skbaddr: ffff8800d2bcc200 } hitcount: 2 len: 12996

  Totals:
      Hits: 14
      Entries: 12
      Dropped: 0


  # event histogram
  #
  # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len < 0 [active]
  #


  Totals:
      Hits: 0
      Entries: 0
      Dropped: 0

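  Output in this format is line-oriented and easy to post-process.  As
  a purely illustrative sketch (the parse_hist() helper below is an
  invention of this example, not part of the kernel interface), the
  key and value pairs can be pulled out with a couple of regular
  expressions:

```python
import re

def parse_hist(text):
    """Parse '{ key: val, ... } name: N name: N ...' lines from a 'hist' file."""
    entries = []
    for m in re.finditer(r'\{\s*(.*?)\s*\}\s*(.+)', text):
        # Keys are 'name: value' pairs inside the braces.
        key = dict(p.split(': ', 1) for p in m.group(1).split(', '))
        # Values are 'name: integer' pairs after the closing brace.
        names = re.findall(r'(\w+):', m.group(2))
        counts = [int(n) for n in re.findall(r':\s*(\d+)', m.group(2))]
        entries.append((key, dict(zip(names, counts))))
    return entries

sample = """\
{ len: 176 } hitcount: 1 common_preempt_count: 0
{ len: 116 } hitcount: 81 common_preempt_count: 40
"""
entries = parse_hist(sample)
```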
  Named triggers can be used to have triggers share a common set of
  histogram data.  This capability is mostly useful for combining the
  output of events generated by tracepoints contained inside inline
  functions, but names can be used in a hist trigger on any event.
  For example, these two triggers when hit will update the same 'len'
  field in the shared 'foo' histogram data::

  # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
         /sys/kernel/tracing/events/net/netif_receive_skb/trigger
  # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
         /sys/kernel/tracing/events/net/netif_rx/trigger

  You can see that they're updating common histogram data by reading
  each event's hist files at the same time::

  # cat /sys/kernel/tracing/events/net/netif_receive_skb/hist;
    cat /sys/kernel/tracing/events/net/netif_rx/hist

  # event histogram
  #
  # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
  #

  { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
  { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
  { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
  { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
  { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
  { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
  { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
  { skbaddr: ffff880064505000 } hitcount: 1 len: 46
  { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
  { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
  { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
  { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
  { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
  { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
  { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
  { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
  { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
  { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
  { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
  { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
  { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
  { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
  { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
  { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
  { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
  { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
  { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
  { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
  { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
  { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
  { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
  { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
  { skbaddr: ffff880064504400 } hitcount: 4 len: 184
  { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
  { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
  { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
  { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
  { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276

  Totals:
      Hits: 81
      Entries: 42
      Dropped: 0
  # event histogram
  #
  # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
  #

  { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
  { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
  { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
  { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
  { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
  { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
  { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
  { skbaddr: ffff880064505000 } hitcount: 1 len: 46
  { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
  { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
  { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
  { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
  { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
  { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
  { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
  { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
  { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
  { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
  { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
  { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
  { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
  { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
  { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
  { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
  { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
  { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
  { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
  { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
  { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
  { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
  { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
  { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
  { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
  { skbaddr: ffff880064504400 } hitcount: 4 len: 184
  { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
  { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
  { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
  { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
  { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276

  Totals:
      Hits: 81
      Entries: 42
      Dropped: 0

  And here's an example that shows how to combine histogram data from
  any two events even if they don't share any 'compatible' fields
  other than 'hitcount' and 'common_stacktrace'.  These commands create
  a couple of triggers named 'bar' using those fields::

  # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
         /sys/kernel/tracing/events/sched/sched_process_fork/trigger
  # echo 'hist:name=bar:key=common_stacktrace:val=hitcount' > \
         /sys/kernel/tracing/events/net/netif_rx/trigger

  And displaying the output of either shows some interesting if
  somewhat confusing output::

  # cat /sys/kernel/tracing/events/sched/sched_process_fork/hist
  # cat /sys/kernel/tracing/events/net/netif_rx/hist

  # event histogram
  #
  # trigger info: hist:name=bar:keys=common_stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
  #

  { common_stacktrace:
         kernel_clone+0x18e/0x330
         kernel_thread+0x29/0x30
         kthreadd+0x154/0x1b0
         ret_from_fork+0x3f/0x70
  } hitcount: 1
  { common_stacktrace:
         netif_rx_internal+0xb2/0xd0
         netif_rx_ni+0x20/0x70
         dev_loopback_xmit+0xaa/0xd0
         ip_mc_output+0x126/0x240
         ip_local_out_sk+0x31/0x40
         igmp_send_report+0x1e9/0x230
         igmp_timer_expire+0xe9/0x120
         call_timer_fn+0x39/0xf0
         run_timer_softirq+0x1e1/0x290
         __do_softirq+0xfd/0x290
         irq_exit+0x98/0xb0
         smp_apic_timer_interrupt+0x4a/0x60
         apic_timer_interrupt+0x6d/0x80
         cpuidle_enter+0x17/0x20
         call_cpuidle+0x3b/0x60
         cpu_startup_entry+0x22d/0x310
  } hitcount: 1
  { common_stacktrace:
         netif_rx_internal+0xb2/0xd0
         netif_rx_ni+0x20/0x70
         dev_loopback_xmit+0xaa/0xd0
         ip_mc_output+0x17f/0x240
         ip_local_out_sk+0x31/0x40
         ip_send_skb+0x1a/0x50
         udp_send_skb+0x13e/0x270
         udp_sendmsg+0x2bf/0x980
         inet_sendmsg+0x67/0xa0
         sock_sendmsg+0x38/0x50
         SYSC_sendto+0xef/0x170
         SyS_sendto+0xe/0x10
         entry_SYSCALL_64_fastpath+0x12/0x6a
  } hitcount: 2
  { common_stacktrace:
         netif_rx_internal+0xb2/0xd0
         netif_rx+0x1c/0x60
         loopback_xmit+0x6c/0xb0
         dev_hard_start_xmit+0x219/0x3a0
         __dev_queue_xmit+0x415/0x4f0
         dev_queue_xmit_sk+0x13/0x20
         ip_finish_output2+0x237/0x340
         ip_finish_output+0x113/0x1d0
         ip_output+0x66/0xc0
         ip_local_out_sk+0x31/0x40
         ip_send_skb+0x1a/0x50
         udp_send_skb+0x16d/0x270
         udp_sendmsg+0x2bf/0x980
         inet_sendmsg+0x67/0xa0
         sock_sendmsg+0x38/0x50
         ___sys_sendmsg+0x14e/0x270
  } hitcount: 76
  { common_stacktrace:
         netif_rx_internal+0xb2/0xd0
         netif_rx+0x1c/0x60
         loopback_xmit+0x6c/0xb0
         dev_hard_start_xmit+0x219/0x3a0
         __dev_queue_xmit+0x415/0x4f0
         dev_queue_xmit_sk+0x13/0x20
         ip_finish_output2+0x237/0x340
         ip_finish_output+0x113/0x1d0
         ip_output+0x66/0xc0
         ip_local_out_sk+0x31/0x40
         ip_send_skb+0x1a/0x50
         udp_send_skb+0x16d/0x270
         udp_sendmsg+0x2bf/0x980
         inet_sendmsg+0x67/0xa0
         sock_sendmsg+0x38/0x50
         ___sys_sendmsg+0x269/0x270
  } hitcount: 77
  { common_stacktrace:
         netif_rx_internal+0xb2/0xd0
         netif_rx+0x1c/0x60
         loopback_xmit+0x6c/0xb0
         dev_hard_start_xmit+0x219/0x3a0
         __dev_queue_xmit+0x415/0x4f0
         dev_queue_xmit_sk+0x13/0x20
         ip_finish_output2+0x237/0x340
         ip_finish_output+0x113/0x1d0
         ip_output+0x66/0xc0
         ip_local_out_sk+0x31/0x40
         ip_send_skb+0x1a/0x50
         udp_send_skb+0x16d/0x270
         udp_sendmsg+0x2bf/0x980
         inet_sendmsg+0x67/0xa0
         sock_sendmsg+0x38/0x50
         SYSC_sendto+0xef/0x170
  } hitcount: 88
  { common_stacktrace:
         kernel_clone+0x18e/0x330
         SyS_clone+0x19/0x20
         entry_SYSCALL_64_fastpath+0x12/0x6a
  } hitcount: 244

  Totals:
      Hits: 489
      Entries: 7
      Dropped: 0

2.2 Inter-event hist triggers
-----------------------------

Inter-event hist triggers are hist triggers that combine values from
one or more other events and create a histogram using that data.  Data
from an inter-event histogram can in turn become the source for
further combined histograms, thus providing a chain of related
histograms, which is important for some applications.

The most important example of an inter-event quantity that can be used
in this manner is latency, which is simply a difference in timestamps
between two events.  Although latency is the most important
inter-event quantity, note that because the support is completely
general across the trace event subsystem, any event field can be used
in an inter-event quantity.

An example of a histogram that combines data from other histograms
into a useful chain would be a 'wakeupswitch latency' histogram that
combines a 'wakeup latency' histogram and a 'switch latency'
histogram.

Normally, a hist trigger specification consists of a (possibly
compound) key along with one or more numeric values, which are
continually updated sums associated with that key.  A histogram
specification in this case consists of individual key and value
specifications that refer to trace event fields associated with a
single event type.

The inter-event hist trigger extension allows fields from multiple
events to be referenced and combined into a multi-event histogram
specification.  In support of this overall goal, a few enabling
features have been added to the hist trigger support:

  - In order to compute an inter-event quantity, a value from one
    event needs to be saved and then referenced from another event.
    This requires the introduction of support for histogram
    'variables'.

  - The computation of inter-event quantities and their combination
    require some minimal amount of support for applying simple
    expressions to variables (+ and -).

  - A histogram consisting of inter-event quantities isn't logically a
    histogram on either event (so having the 'hist' file for either
    event host the histogram output doesn't really make sense).  To
    address the idea that the histogram is associated with a
    combination of events, support is added allowing the creation of
    'synthetic' events that are events derived from other events.
    These synthetic events are full-fledged events just like any other
    and can be used as such, as for instance to create the
    'combination' histograms mentioned previously.

  - A set of 'actions' can be associated with histogram entries -
    these can be used to generate the previously mentioned synthetic
    events, but can also be used for other purposes, such as for
    example saving context when a 'max' latency has been hit.

  - Trace events don't have a 'timestamp' associated with them, but
    there is an implicit timestamp saved along with an event in the
    underlying ftrace ring buffer.  This timestamp is now exposed as a
    synthetic field named 'common_timestamp' which can be used in
    histograms as if it were any other event field; it isn't an actual
    field in the trace format but rather is a synthesized value that
    nonetheless can be used as if it were an actual field.  By default
    it is in units of nanoseconds; appending '.usecs' to a
    common_timestamp field changes the units to microseconds.
A note on inter-event timestamps: If common_timestamp is used in a
histogram, the trace buffer is automatically switched over to using
absolute timestamps and the "global" trace clock, in order to avoid
bogus timestamp differences with other clocks that aren't coherent
across CPUs.  This can be overridden by specifying one of the other
trace clocks instead, using the "clock=XXX" hist trigger attribute,
where XXX is any of the clocks listed in the tracing/trace_clock
pseudo-file.

These features are described in more detail in the following sections.

2.2.1 Histogram Variables
-------------------------

Variables are simply named locations used for saving and retrieving
values between matching events.  A 'matching' event is defined as an
event that has a matching key - if a variable is saved for a histogram
entry corresponding to that key, any subsequent event with a matching
key can access that variable.

A variable's value is normally available to any subsequent event until
it is set to something else by a subsequent event.  The one exception
to that rule is that any variable used in an expression is essentially
'read-once' - once it's used by an expression in a subsequent event,
it's reset to its 'unset' state, which means it can't be used again
unless it's set again.  This ensures not only that an event doesn't
use an uninitialized variable in a calculation, but that that variable
is used only once and not for any unrelated subsequent match.

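The 'read-once' rule can be pictured with a toy model (plain Python;
the HistVars class and its methods are invented for illustration and
correspond to nothing in the kernel):

```python
class HistVars:
    """Toy model of per-key histogram variables with 'read-once' semantics."""
    UNSET = object()   # marker for a variable in its 'unset' state

    def __init__(self):
        self.table = {}            # (key, varname) -> value

    def save(self, key, name, value):
        # e.g. 'ts0=common_timestamp' on an event with a given key
        self.table[(key, name)] = value

    def use_in_expr(self, key, name):
        # Using a variable in an expression resets it to 'unset', so a
        # later, unrelated match cannot silently reuse a stale value.
        value = self.table.get((key, name), HistVars.UNSET)
        self.table[(key, name)] = HistVars.UNSET
        return value

v = HistVars()
v.save(1234, 'ts0', 100)
first = v.use_in_expr(1234, 'ts0')     # reads the saved value
second = v.use_in_expr(1234, 'ts0')    # variable is now unset
```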
The basic syntax for saving a variable is to simply prefix a unique
variable name not corresponding to any keyword along with an '=' sign
to any event field.

Either keys or values can be saved and retrieved in this way.  This
creates a variable named 'ts0' for a histogram entry with the key
'next_pid'::

  # echo 'hist:keys=next_pid:vals=$ts0:ts0=common_timestamp ...' >> \
        event/trigger

The ts0 variable can be accessed by any subsequent event having the
same pid as 'next_pid'.

Variable references are formed by prepending the variable name with
the '$' sign.  Thus for example, the ts0 variable above would be
referenced as '$ts0' in expressions.

Because 'vals=' is used, the common_timestamp variable value above
will also be summed as a normal histogram value would (though for a
timestamp it makes little sense).

The below shows that a key value can also be saved in the same way::

  # echo 'hist:timer_pid=common_pid:key=timer_pid ...' >> event/trigger

If a variable isn't a key variable or prefixed with 'vals=', the
associated event field will be saved in a variable but won't be summed
as a value::

  # echo 'hist:keys=next_pid:ts1=common_timestamp ...' >> event/trigger

Multiple variables can be assigned at the same time.  The below would
result in both ts0 and b being created as variables, with both
common_timestamp and field1 additionally being summed as values::

  # echo 'hist:keys=pid:vals=$ts0,$b:ts0=common_timestamp,b=field1 ...' >> \
        event/trigger

Note that variable assignments can appear either preceding or
following their use.  The command below behaves identically to the
command above::

  # echo 'hist:keys=pid:ts0=common_timestamp,b=field1:vals=$ts0,$b ...' >> \
        event/trigger

Any number of variables not bound to a 'vals=' prefix can also be
assigned by simply separating them with colons.  Below is the same
thing but without the values being summed in the histogram::

  # echo 'hist:keys=pid:ts0=common_timestamp:b=field1 ...' >> event/trigger

Variables set as above can be referenced and used in expressions on
another event.

For example, here's how a latency can be calculated::

  # echo 'hist:keys=pid,prio:ts0=common_timestamp ...' >> event1/trigger
  # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0 ...' >> event2/trigger

In the first line above, the event's timestamp is saved into the
variable ts0.  In the next line, ts0 is subtracted from the second
event's timestamp to produce the latency, which is then assigned to
yet another variable, 'wakeup_lat'.  The hist trigger below in turn
makes use of the wakeup_lat variable to compute a combined latency
using the same key and variable from yet another event::

  # echo 'hist:key=pid:wakeupswitch_lat=$wakeup_lat+$switchtime_lat ...' >> event3/trigger

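The two-event latency pattern above amounts to the following sketch
(illustrative Python only; the pids and timestamps are made up):

```python
ts0 = {}   # per-pid saved timestamp, standing in for 'ts0=common_timestamp'

def on_event1(pid, common_timestamp):
    # 'hist:keys=pid:ts0=common_timestamp' - save the first timestamp
    ts0[pid] = common_timestamp

def on_event2(next_pid, common_timestamp):
    # 'hist:keys=next_pid:wakeup_lat=common_timestamp-$ts0' - the saved
    # value is consumed (read-once) to produce the latency
    return common_timestamp - ts0.pop(next_pid)

on_event1(42, 1_000_000)
wakeup_lat = on_event2(42, 1_000_750)   # 750 time units
```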
Expressions support the use of addition, subtraction, multiplication and
division operators (+-\*/).

Note if division by zero cannot be detected at parse time (i.e. the
divisor is not a constant), the result will be -1.
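That divide-by-zero convention can be mimicked as follows (an
illustrative sketch, not kernel code):

```python
def hist_div(a, b):
    """Integer division as a hist expression resolves it:
    a runtime divide-by-zero yields -1 instead of an error."""
    return a // b if b != 0 else -1

hist_div(1_000_000, 1000)   # 1000
hist_div(5, 0)              # -1
```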

Numeric constants can also be used directly in an expression::

  # echo 'hist:keys=next_pid:timestamp_secs=common_timestamp/1000000 ...' >> event/trigger

or assigned to a variable and referenced in a subsequent expression::

  # echo 'hist:keys=next_pid:us_per_sec=1000000 ...' >> event/trigger
  # echo 'hist:keys=next_pid:timestamp_secs=common_timestamp/$us_per_sec ...' >> event/trigger

Variables can even hold stacktraces, which are useful with synthetic events.

2.2.2 Synthetic Events
----------------------

Synthetic events are user-defined events generated from hist trigger
variables or fields associated with one or more other events.  Their
purpose is to provide a mechanism for displaying data spanning
multiple events consistent with the existing and already familiar
usage for normal events.

To define a synthetic event, the user writes a simple specification
consisting of the name of the new event along with one or more
variables and their types, which can be any valid field type,
separated by semicolons, to the tracing/synthetic_events file.

See synth_field_size() for available types.

If field_name contains [n], the field is considered to be a static array.

If field_name contains [] (no subscript), the field is considered to
be a dynamic array, which will only take as much space in the event as
is required to hold the array.

A string field can be specified using either the static notation::

  char name[32];

Or the dynamic::

  char name[];

The size limit for either is 256.

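The array-notation rules above can be summarized in a quick sketch
(the regex and the classify_field() helper are inventions of this
example, not the kernel's parser, and only cover the name[n]/name[]
notation described here):

```python
import re

def classify_field(spec):
    """Classify a synthetic-event field spec as a scalar,
    static array (name[n]) or dynamic array (name[])."""
    m = re.match(r'\s*([\w ]+?)\s+(\w+)(\[(\d*)\])?\s*;?\s*$', spec)
    if not m:
        raise ValueError('unrecognized field spec: %r' % spec)
    if m.group(3) is None:
        return 'scalar'
    # '[]' (empty subscript) means dynamic, '[n]' means static
    return 'dynamic array' if m.group(4) == '' else 'static array'

classify_field('u64 lat')        # 'scalar'
classify_field('char name[32]')  # 'static array'
classify_field('char name[]')    # 'dynamic array'
```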
For instance, the following creates a new event named 'wakeup_latency'
with 3 fields: lat, pid, and prio.  Each of those fields is simply a
variable reference to a variable on another event::

  # echo 'wakeup_latency \
          u64 lat; \
          pid_t pid; \
          int prio' >> \
          /sys/kernel/tracing/synthetic_events

Reading the tracing/synthetic_events file lists all the currently
defined synthetic events, in this case the event defined above::

  # cat /sys/kernel/tracing/synthetic_events
    wakeup_latency u64 lat; pid_t pid; int prio

An existing synthetic event definition can be removed by prepending
the command that defined it with a '!'::

  # echo '!wakeup_latency u64 lat pid_t pid int prio' >> \
    /sys/kernel/tracing/synthetic_events

At this point, there isn't yet an actual 'wakeup_latency' event
instantiated in the event subsystem - for this to happen, a 'hist
trigger action' needs to be instantiated and bound to actual fields
and variables defined on other events (see Section 2.2.3 below on
how that is done using the hist trigger 'onmatch' action).  Once that
is done, the 'wakeup_latency' synthetic event instance is created.

The new event is created under the tracing/events/synthetic/ directory
and looks and behaves just like any other event::

  # ls /sys/kernel/tracing/events/synthetic/wakeup_latency
        enable  filter  format  hist  id  trigger

A histogram can now be defined for the new synthetic event::

  # echo 'hist:keys=pid,prio,lat.log2:sort=lat' >> \
        /sys/kernel/tracing/events/synthetic/wakeup_latency/trigger

The above shows the latency "lat" in a power of 2 grouping.

Like any other event, once a histogram is enabled for the event, the
output can be displayed by reading the event's 'hist' file::

  # cat /sys/kernel/tracing/events/synthetic/wakeup_latency/hist

  # event histogram
  #
  # trigger info: hist:keys=pid,prio,lat.log2:vals=hitcount:sort=lat.log2:size=2048 [active]
  #

  { pid: 2035, prio: 9, lat: ~ 2^2 } hitcount: 43
  { pid: 2034, prio: 9, lat: ~ 2^2 } hitcount: 60
  { pid: 2029, prio: 9, lat: ~ 2^2 } hitcount: 965
  { pid: 2034, prio: 120, lat: ~ 2^2 } hitcount: 9
  { pid: 2033, prio: 120, lat: ~ 2^2 } hitcount: 5
  { pid: 2030, prio: 9, lat: ~ 2^2 } hitcount: 335
  { pid: 2030, prio: 120, lat: ~ 2^2 } hitcount: 10
  { pid: 2032, prio: 120, lat: ~ 2^2 } hitcount: 1
  { pid: 2035, prio: 120, lat: ~ 2^2 } hitcount: 2
  { pid: 2031, prio: 9, lat: ~ 2^2 } hitcount: 176
  { pid: 2028, prio: 120, lat: ~ 2^2 } hitcount: 15
  { pid: 2033, prio: 9, lat: ~ 2^2 } hitcount: 91
  { pid: 2032, prio: 9, lat: ~ 2^2 } hitcount: 125
  { pid: 2029, prio: 120, lat: ~ 2^2 } hitcount: 4
  { pid: 2031, prio: 120, lat: ~ 2^2 } hitcount: 3
  { pid: 2029, prio: 120, lat: ~ 2^3 } hitcount: 2
  { pid: 2035, prio: 9, lat: ~ 2^3 } hitcount: 41
  { pid: 2030, prio: 120, lat: ~ 2^3 } hitcount: 1
  { pid: 2032, prio: 9, lat: ~ 2^3 } hitcount: 32
  { pid: 2031, prio: 9, lat: ~ 2^3 } hitcount: 44
  { pid: 2034, prio: 9, lat: ~ 2^3 } hitcount: 40
  { pid: 2030, prio: 9, lat: ~ 2^3 } hitcount: 29
  { pid: 2033, prio: 9, lat: ~ 2^3 } hitcount: 31
  { pid: 2029, prio: 9, lat: ~ 2^3 } hitcount: 31
  { pid: 2028, prio: 120, lat: ~ 2^3 } hitcount: 18
  { pid: 2031, prio: 120, lat: ~ 2^3 } hitcount: 2
  { pid: 2028, prio: 120, lat: ~ 2^4 } hitcount: 1
  { pid: 2029, prio: 9, lat: ~ 2^4 } hitcount: 4
  { pid: 2031, prio: 120, lat: ~ 2^7 } hitcount: 1
  { pid: 2032, prio: 120, lat: ~ 2^7 } hitcount: 1

  Totals:
      Hits: 2122
      Entries: 30
      Dropped: 0


The latency values can also be grouped linearly by a given size using
the ".buckets" modifier, which takes a size (in this case groups of 10)::

  # echo 'hist:keys=pid,prio,lat.buckets=10:sort=lat' >> \
        /sys/kernel/tracing/events/synthetic/wakeup_latency/trigger

  # event histogram
  #
  # trigger info: hist:keys=pid,prio,lat.buckets=10:vals=hitcount:sort=lat.buckets=10:size=2048 [active]
  #

  { pid: 2067, prio: 9, lat: ~ 0-9 } hitcount: 220
  { pid: 2068, prio: 9, lat: ~ 0-9 } hitcount: 157
  { pid: 2070, prio: 9, lat: ~ 0-9 } hitcount: 100
  { pid: 2067, prio: 120, lat: ~ 0-9 } hitcount: 6
  { pid: 2065, prio: 120, lat: ~ 0-9 } hitcount: 2
  { pid: 2066, prio: 120, lat: ~ 0-9 } hitcount: 2
  { pid: 2069, prio: 9, lat: ~ 0-9 } hitcount: 122
  { pid: 2069, prio: 120, lat: ~ 0-9 } hitcount: 8
  { pid: 2070, prio: 120, lat: ~ 0-9 } hitcount: 1
  { pid: 2068, prio: 120, lat: ~ 0-9 } hitcount: 7
  { pid: 2066, prio: 9, lat: ~ 0-9 } hitcount: 365
  { pid: 2064, prio: 120, lat: ~ 0-9 } hitcount: 35
  { pid: 2065, prio: 9, lat: ~ 0-9 } hitcount: 998
  { pid: 2071, prio: 9, lat: ~ 0-9 } hitcount: 85
  { pid: 2065, prio: 9, lat: ~ 10-19 } hitcount: 2
  { pid: 2064, prio: 120, lat: ~ 10-19 } hitcount: 2

  Totals:
      Hits: 2112
      Entries: 16
      Dropped: 0

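The two grouping modifiers map a raw value to its displayed bucket
roughly like this (a sketch of the labeling only, not the kernel
implementation):

```python
def log2_bucket(val):
    """'.log2' groups a positive value by its power-of-two order,
    displayed as '~ 2^N' (N is the floor of log2(val))."""
    return '~ 2^%d' % (val.bit_length() - 1)

def linear_bucket(val, size):
    """'.buckets=size' groups values linearly, displayed as '~ lo-hi'."""
    lo = (val // size) * size
    return '~ %d-%d' % (lo, lo + size - 1)

log2_bucket(5)          # values 4..7 all land in the 2^2 bucket
linear_bucket(14, 10)   # values 10..19 land in the 10-19 bucket
```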
To save stacktraces, create a synthetic event with a field of type
"unsigned long[]" or even just "long[]".  For example, to see how long
a task is blocked in an uninterruptible state::

  # cd /sys/kernel/tracing
  # echo 's:block_lat pid_t pid; u64 delta; unsigned long[] stack;' > dynamic_events
  # echo 'hist:keys=next_pid:ts=common_timestamp.usecs,st=common_stacktrace if prev_state == 2' >> events/sched/sched_switch/trigger
  # echo 'hist:keys=prev_pid:delta=common_timestamp.usecs-$ts,s=$st:onmax($delta).trace(block_lat,prev_pid,$delta,$s)' >> events/sched/sched_switch/trigger
  # echo 1 > events/synthetic/block_lat/enable
  # cat trace

  # tracer: nop
  #
  # entries-in-buffer/entries-written: 2/2   #P:8
  #
  #                                _-----=> irqs-off/BH-disabled
  #                               / _----=> need-resched
  #                              | / _---=> hardirq/softirq
  #                              || / _--=> preempt-depth
  #                              ||| / _-=> migrate-disable
  #                              |||| /     delay
  #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
  #              | |         |   |||||     |         |
          <idle>-0       [005] d..4.   521.164922: block_lat: pid=0 delta=8322 stack=STACK:
  => __schedule+0x448/0x7b0
  => schedule+0x5a/0xb0
  => io_schedule+0x42/0x70
  => bit_wait_io+0xd/0x60
  => __wait_on_bit+0x4b/0x140
  => out_of_line_wait_on_bit+0x91/0xb0
  => jbd2_journal_commit_transaction+0x1679/0x1a70
  => kjournald2+0xa9/0x280
  => kthread+0xe9/0x110
  => ret_from_fork+0x2c/0x50

           <...>-2       [004] d..4.   525.184257: block_lat: pid=2 delta=76 stack=STACK:
  => __schedule+0x448/0x7b0
  => schedule+0x5a/0xb0
  => schedule_timeout+0x11a/0x150
  => wait_for_completion_killable+0x144/0x1f0
  => __kthread_create_on_node+0xe7/0x1e0
  => kthread_create_on_node+0x51/0x70
  => create_worker+0xcc/0x1a0
  => worker_thread+0x2ad/0x380
  => kthread+0xe9/0x110
  => ret_from_fork+0x2c/0x50

a2ff84a5
BS
1993A synthetic event that has a stacktrace field may use it as a key in
1994histogram::
88238513 1995
d8f0ae3e 1996 # echo 'hist:keys=delta.buckets=100,stack.stacktrace:sort=delta' > events/synthetic/block_lat/trigger
88238513
SRG
1997 # cat events/synthetic/block_lat/hist
1998
1999 # event histogram
2000 #
d8f0ae3e 2001 # trigger info: hist:keys=delta.buckets=100,stack.stacktrace:vals=hitcount:sort=delta.buckets=100:size=2048 [active]
88238513 2002 #
d8f0ae3e
TZ
  { delta: ~ 0-99, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         io_schedule+0x46/0x80
         bit_wait_io+0x11/0x80
         __wait_on_bit+0x4e/0x120
         out_of_line_wait_on_bit+0x8d/0xb0
         __wait_on_buffer+0x33/0x40
         jbd2_journal_commit_transaction+0x155a/0x19b0
         kjournald2+0xab/0x270
         kthread+0xfa/0x130
         ret_from_fork+0x29/0x50
  } hitcount: 1
  { delta: ~ 0-99, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         io_schedule+0x46/0x80
         rq_qos_wait+0xd0/0x170
         wbt_wait+0x9e/0xf0
         __rq_qos_throttle+0x25/0x40
         blk_mq_submit_bio+0x2c3/0x5b0
         __submit_bio+0xff/0x190
         submit_bio_noacct_nocheck+0x25b/0x2b0
         submit_bio_noacct+0x20b/0x600
         submit_bio+0x28/0x90
         ext4_bio_write_page+0x1e0/0x8c0
         mpage_submit_page+0x60/0x80
         mpage_process_page_bufs+0x16c/0x180
         mpage_prepare_extent_to_map+0x23f/0x530
  } hitcount: 1
  { delta: ~ 0-99, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         schedule_hrtimeout_range_clock+0x97/0x110
         schedule_hrtimeout_range+0x13/0x20
         usleep_range_state+0x65/0x90
         __intel_wait_for_register+0x1c1/0x230 [i915]
         intel_psr_wait_for_idle_locked+0x171/0x2a0 [i915]
         intel_pipe_update_start+0x169/0x360 [i915]
         intel_update_crtc+0x112/0x490 [i915]
         skl_commit_modeset_enables+0x199/0x600 [i915]
         intel_atomic_commit_tail+0x7c4/0x1080 [i915]
         intel_atomic_commit_work+0x12/0x20 [i915]
         process_one_work+0x21c/0x3f0
         worker_thread+0x50/0x3e0
         kthread+0xfa/0x130
  } hitcount: 3
  { delta: ~ 0-99, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         schedule_timeout+0x11e/0x160
         __wait_for_common+0x8f/0x190
         wait_for_completion+0x24/0x30
         __flush_work.isra.0+0x1cc/0x360
         flush_work+0xe/0x20
         drm_mode_rmfb+0x18b/0x1d0 [drm]
         drm_mode_rmfb_ioctl+0x10/0x20 [drm]
         drm_ioctl_kernel+0xb8/0x150 [drm]
         drm_ioctl+0x243/0x560 [drm]
         __x64_sys_ioctl+0x92/0xd0
         do_syscall_64+0x59/0x90
         entry_SYSCALL_64_after_hwframe+0x72/0xdc
  } hitcount: 1
  { delta: ~ 0-99, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         schedule_timeout+0x87/0x160
         __wait_for_common+0x8f/0x190
         wait_for_completion_timeout+0x1d/0x30
         drm_atomic_helper_wait_for_flip_done+0x57/0x90 [drm_kms_helper]
         intel_atomic_commit_tail+0x8ce/0x1080 [i915]
         intel_atomic_commit_work+0x12/0x20 [i915]
         process_one_work+0x21c/0x3f0
         worker_thread+0x50/0x3e0
         kthread+0xfa/0x130
         ret_from_fork+0x29/0x50
  } hitcount: 1
  { delta: ~ 100-199, stack.stacktrace __schedule+0xa19/0x1520
         schedule+0x6b/0x110
         schedule_hrtimeout_range_clock+0x97/0x110
         schedule_hrtimeout_range+0x13/0x20
         usleep_range_state+0x65/0x90
         pci_set_low_power_state+0x17f/0x1f0
         pci_set_power_state+0x49/0x250
         pci_finish_runtime_suspend+0x4a/0x90
         pci_pm_runtime_suspend+0xcb/0x1b0
         __rpm_callback+0x48/0x120
         rpm_callback+0x67/0x70
         rpm_suspend+0x167/0x780
         rpm_idle+0x25a/0x380
         pm_runtime_work+0x93/0xc0
         process_one_work+0x21c/0x3f0
  } hitcount: 1

  Totals:
      Hits: 10
      Entries: 7
      Dropped: 0

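The '.buckets=100' key modifier used above groups the numeric 'delta' key
into fixed-size ranges, displayed as '~ 0-99', '~ 100-199', and so on. The
bucketing arithmetic can be sketched in Python (a user-space illustration
only, not kernel code; 'bucket_label' is a hypothetical helper name):

```python
def bucket_label(value, size):
    """Group a numeric key into a fixed-size range, the way the
    '.buckets=N' key modifier labels hist entries ('~ lo-hi')."""
    lo = (value // size) * size
    return "~ %d-%d" % (lo, lo + size - 1)

print(bucket_label(76, 100))    # the delta=76 event above lands in "~ 0-99"
```

Events with deltas of 76 and 8322 would thus land in separate buckets,
which is exactly how the histogram above separates its entries.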
2.2.3 Hist trigger 'handlers' and 'actions'
-------------------------------------------

A hist trigger 'action' is a function that's executed (in most cases
conditionally) whenever a histogram entry is added or updated.

When a histogram entry is added or updated, a hist trigger 'handler'
is what decides whether the corresponding action is actually invoked
or not.

Hist trigger handlers and actions are paired together in the general
form::

  <handler>.<action>

To specify a handler.action pair for a given event, simply specify
that handler.action pair between colons in the hist trigger
specification.

In theory, any handler can be combined with any action, but in
practice, not every handler.action combination is currently supported;
if a given handler.action combination isn't supported, the hist
trigger will fail with -EINVAL.

If no handler.action pair is explicitly specified, the default behavior
is, as it always has been, simply to update the set of values associated
with an entry.  Some applications, however, may want to perform additional
actions at that point, such as generating another event, or comparing and
saving a maximum.

The supported handlers and actions are listed below, and each is
described in more detail in the following paragraphs, in the context
of descriptions of some common and useful handler.action combinations.

The available handlers are:

  - onmatch(matching.event)  - invoke action on any addition or update
  - onmax(var)               - invoke action if var exceeds current max
  - onchange(var)            - invoke action if var changes

The available actions are:

  - trace(<synthetic_event_name>,param list)  - generate synthetic event
  - save(field,...)                           - save current event fields
  - snapshot()                                - snapshot the trace buffer

The following commonly-used handler.action pairs are available:

  - onmatch(matching.event).trace(<synthetic_event_name>,param list)

    The 'onmatch(matching.event).trace(<synthetic_event_name>,param
    list)' hist trigger action is invoked whenever an event matches
    and the histogram entry would be added or updated.  It causes the
    named synthetic event to be generated with the values given in the
    'param list'.  The result is the generation of a synthetic event
    that consists of the values contained in those variables at the
    time the invoking event was hit.  For example, if the synthetic
    event name is 'wakeup_latency', a wakeup_latency event is
    generated using onmatch(event).trace(wakeup_latency,arg1,arg2).

    There is also an equivalent alternative form available for
    generating synthetic events.  In this form, the synthetic event
    name is used as if it were a function name.  For example, using
    the 'wakeup_latency' synthetic event name again, the
    wakeup_latency event would be generated by invoking it as if it
    were a function call, with the event field values passed in as
    arguments: onmatch(event).wakeup_latency(arg1,arg2).  The syntax
    for this form is::

      onmatch(matching.event).<synthetic_event_name>(param list)

    In either case, the 'param list' consists of one or more
    parameters which may be either variables or fields defined on
    either the 'matching.event' or the target event.  The variables or
    fields specified in the param list may be either fully-qualified
    or unqualified.  If a variable is specified as unqualified, it
    must be unique between the two events.  A field name used as a
    param can be unqualified if it refers to the target event, but
    must be fully qualified if it refers to the matching event.  A
    fully-qualified name is of the form 'system.event_name.$var_name'
    or 'system.event_name.field'.

    The 'matching.event' specification is simply the fully qualified
    event name of the event that matches the target event for the
    onmatch() functionality, in the form 'system.event_name'.  Histogram
    keys of both events are compared to find if events match.  In case
    multiple histogram keys are used, they all must match in the specified
    order.

    Finally, the number and type of variables/fields in the 'param
    list' must match the number and types of the fields in the
    synthetic event being generated.

    As an example the below defines a simple synthetic event and uses
    a variable defined on the sched_wakeup_new event as a parameter
    when invoking the synthetic event.  Here we define the synthetic
    event::

      # echo 'wakeup_new_test pid_t pid' >> \
        /sys/kernel/tracing/synthetic_events

      # cat /sys/kernel/tracing/synthetic_events
        wakeup_new_test pid_t pid

    The following hist trigger both defines the missing testpid
    variable and specifies an onmatch() action that generates a
    wakeup_new_test synthetic event whenever a sched_wakeup_new event
    occurs, which because of the 'if comm == "cyclictest"' filter only
    happens when the executable is cyclictest::

      # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
              wakeup_new_test($testpid) if comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_wakeup_new/trigger

    Or, equivalently, using the 'trace' keyword syntax::

      # echo 'hist:keys=$testpid:testpid=pid:onmatch(sched.sched_wakeup_new).\
              trace(wakeup_new_test,$testpid) if comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_wakeup_new/trigger

    Creating and displaying a histogram based on those events is now
    just a matter of using the fields and new synthetic event in the
    tracing/events/synthetic directory, as usual::

      # echo 'hist:keys=pid:sort=pid' >> \
        /sys/kernel/tracing/events/synthetic/wakeup_new_test/trigger

    Running 'cyclictest' should cause wakeup_new events to generate
    wakeup_new_test synthetic events which should result in histogram
    output in the wakeup_new_test event's hist file::

      # cat /sys/kernel/tracing/events/synthetic/wakeup_new_test/hist

    A more typical usage would be to use two events to calculate a
    latency.  The following example uses a set of hist triggers to
    produce a 'wakeup_latency' histogram.

    First, we define a 'wakeup_latency' synthetic event::

      # echo 'wakeup_latency u64 lat; pid_t pid; int prio' >> \
        /sys/kernel/tracing/synthetic_events

    Next, we specify that whenever we see a sched_waking event for a
    cyclictest thread, save the timestamp in a 'ts0' variable::

      # echo 'hist:keys=$saved_pid:saved_pid=pid:ts0=common_timestamp.usecs \
              if comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_waking/trigger

    Then, when the corresponding thread is actually scheduled onto the
    CPU by a sched_switch event (saved_pid matches next_pid), calculate
    the latency and use that along with another variable and an event field
    to generate a wakeup_latency synthetic event::

      # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:\
              onmatch(sched.sched_waking).wakeup_latency($wakeup_lat,\
              $saved_pid,next_prio) if next_comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_switch/trigger

    We also need to create a histogram on the wakeup_latency synthetic
    event in order to aggregate the generated synthetic event data::

      # echo 'hist:keys=pid,prio,lat:sort=pid,lat' >> \
        /sys/kernel/tracing/events/synthetic/wakeup_latency/trigger

    Finally, once we've run cyclictest to actually generate some
    events, we can see the output by looking at the wakeup_latency
    synthetic event's hist file::

      # cat /sys/kernel/tracing/events/synthetic/wakeup_latency/hist

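    The cooperation between the two triggers above - one saving a
    per-pid timestamp, the other computing the difference and generating
    the synthetic event when the keys match - can be sketched as a small
    user-space simulation (illustrative Python, not kernel code; the
    function and variable names are hypothetical):

```python
# Per-key variable storage, analogous to the hist trigger's ts0 variable,
# and a list standing in for generated wakeup_latency synthetic events.
ts0 = {}
events = []

def on_waking(pid, t_usecs):
    # 'ts0=common_timestamp.usecs', keyed on the waking pid
    ts0[pid] = t_usecs

def on_switch(next_pid, t_usecs, next_prio):
    # onmatch(sched.sched_waking): fires only when the keys match
    if next_pid in ts0:
        wakeup_lat = t_usecs - ts0[next_pid]
        events.append(("wakeup_latency", wakeup_lat, next_pid, next_prio))

on_waking(2103, 1000)
on_switch(2103, 1027, 19)   # generates ("wakeup_latency", 27, 2103, 19)
```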
  - onmax(var).save(field,...)

    The 'onmax(var).save(field,...)' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    exceeds the current maximum contained in that variable.

    The end result is that the trace event fields specified as the
    onmax.save() params will be saved if 'var' exceeds the current
    maximum for that hist trigger entry.  This allows context from the
    event that exhibited the new maximum to be saved for later
    reference.  When the histogram is displayed, additional fields
    displaying the saved values will be printed.

    As an example the below defines a couple of hist triggers, one for
    sched_waking and another for sched_switch, keyed on pid.  Whenever
    a sched_waking occurs, the timestamp is saved in the entry
    corresponding to the current pid, and when the scheduler switches
    back to that pid, the timestamp difference is calculated.  If the
    resulting latency, stored in wakeup_lat, exceeds the current
    maximum latency, the values specified in the save() fields are
    recorded::

      # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
              if comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_waking/trigger

      # echo 'hist:keys=next_pid:\
              wakeup_lat=common_timestamp.usecs-$ts0:\
              onmax($wakeup_lat).save(next_comm,prev_pid,prev_prio,prev_comm) \
              if next_comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_switch/trigger

    When the histogram is displayed, the max value and the saved
    values corresponding to the max are displayed following the rest
    of the fields::

      # cat /sys/kernel/tracing/events/sched/sched_switch/hist

        { next_pid: 2255 } hitcount: 239
          common_timestamp-ts0: 0
          max: 27
          next_comm: cyclictest
          prev_pid: 0  prev_prio: 120  prev_comm: swapper/1

        { next_pid: 2256 } hitcount: 2355
          common_timestamp-ts0: 0
          max: 49  next_comm: cyclictest
          prev_pid: 0  prev_prio: 120  prev_comm: swapper/0

        Totals:
            Hits: 12970
            Entries: 2
            Dropped: 0

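    The onmax($wakeup_lat).save(...) behavior amounts to a per-entry
    compare-and-save, which can be sketched in Python (an illustrative
    user-space model, not kernel code; names are hypothetical):

```python
# One dict per histogram entry: a running max plus the fields captured
# at the moment that max was set.
entries = {}

def onmax_save(key, var, saved_fields):
    """Update the per-entry max; capture the listed fields only when
    'var' exceeds the entry's current maximum."""
    ent = entries.setdefault(key, {"max": None, "saved": None})
    if ent["max"] is None or var > ent["max"]:
        ent["max"] = var
        ent["saved"] = dict(saved_fields)

onmax_save(2255, 27, {"next_comm": "cyclictest", "prev_pid": 0})
onmax_save(2255, 12, {"next_comm": "cyclictest", "prev_pid": 7})  # not a new max
```

After the two calls, the entry still holds max 27 and the fields saved
with it, mirroring how each bucket above reports the context of its
maximum-latency event.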
  - onmax(var).snapshot()

    The 'onmax(var).snapshot()' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    exceeds the current maximum contained in that variable.

    The end result is that a global snapshot of the trace buffer will
    be saved in the tracing/snapshot file if 'var' exceeds the current
    maximum for any hist trigger entry.

    Note that in this case the maximum is a global maximum for the
    current trace instance, which is the maximum across all buckets of
    the histogram.  The key of the specific trace event that caused
    the global maximum and the global maximum itself are displayed,
    along with a message stating that a snapshot has been taken and
    where to find it.  The user can use the key information displayed
    to locate the corresponding bucket in the histogram for even more
    detail.

    As an example the below defines a couple of hist triggers, one for
    sched_waking and another for sched_switch, keyed on pid.  Whenever
    a sched_waking event occurs, the timestamp is saved in the entry
    corresponding to the current pid, and when the scheduler switches
    back to that pid, the timestamp difference is calculated.  If the
    resulting latency, stored in wakeup_lat, exceeds the current
    maximum latency, a snapshot is taken.  As part of the setup, all
    the scheduler events are also enabled, which are the events that
    will show up in the snapshot when it is taken at some point::

      # echo 1 > /sys/kernel/tracing/events/sched/enable

      # echo 'hist:keys=pid:ts0=common_timestamp.usecs \
              if comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_waking/trigger

      # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0: \
              onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio, \
              prev_comm):onmax($wakeup_lat).snapshot() \
              if next_comm=="cyclictest"' >> \
        /sys/kernel/tracing/events/sched/sched_switch/trigger

    When the histogram is displayed, for each bucket the max value
    and the saved values corresponding to the max are displayed
    following the rest of the fields.

    If a snapshot was taken, there is also a message indicating that,
    along with the value and event that triggered the global maximum::

      # cat /sys/kernel/tracing/events/sched/sched_switch/hist
        { next_pid: 2101 } hitcount: 200
          max: 52  next_prio: 120  next_comm: cyclictest \
          prev_pid: 0  prev_prio: 120  prev_comm: swapper/6

        { next_pid: 2103 } hitcount: 1326
          max: 572  next_prio: 19  next_comm: cyclictest \
          prev_pid: 0  prev_prio: 120  prev_comm: swapper/1

        { next_pid: 2102 } hitcount: 1982 \
          max: 74  next_prio: 19  next_comm: cyclictest \
          prev_pid: 0  prev_prio: 120  prev_comm: swapper/5

        Snapshot taken (see tracing/snapshot).  Details:
            triggering value { onmax($wakeup_lat) }: 572 \
            triggered by event with key: { next_pid: 2103 }

        Totals:
            Hits: 3508
            Entries: 3
            Dropped: 0

    In the above case, the event that triggered the global maximum has
    the key with next_pid == 2103.  If you look at the bucket that has
    2103 as the key, you'll find the additional values save()'d along
    with the local maximum for that bucket, which should be the same
    as the global maximum (since that was the same value that
    triggered the global snapshot).

    And finally, looking at the snapshot data should show at or near
    the end the event that triggered the snapshot (in this case you
    can verify the timestamps between the sched_waking and
    sched_switch events, which should match the time displayed in the
    global maximum)::

      # cat /sys/kernel/tracing/snapshot

      <...>-2103   [005] d..3   309.873125: sched_switch: prev_comm=cyclictest prev_pid=2103 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
      <idle>-0     [005] d.h3   309.873611: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
      <idle>-0     [005] dNh4   309.873613: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
      <idle>-0     [005] d..3   309.873616: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
      <...>-2102   [005] d..3   309.873625: sched_switch: prev_comm=cyclictest prev_pid=2102 prev_prio=19 prev_state=D ==> next_comm=swapper/5 next_pid=0 next_prio=120
      <idle>-0     [005] d.h3   309.874624: sched_waking: comm=cyclictest pid=2102 prio=19 target_cpu=005
      <idle>-0     [005] dNh4   309.874626: sched_wakeup: comm=cyclictest pid=2102 prio=19 target_cpu=005
      <idle>-0     [005] dNh3   309.874628: sched_waking: comm=cyclictest pid=2103 prio=19 target_cpu=005
      <idle>-0     [005] dNh4   309.874630: sched_wakeup: comm=cyclictest pid=2103 prio=19 target_cpu=005
      <idle>-0     [005] d..3   309.874633: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2102 next_prio=19
      <idle>-0     [004] d.h3   309.874757: sched_waking: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
      <idle>-0     [004] dNh4   309.874762: sched_wakeup: comm=gnome-terminal- pid=1699 prio=120 target_cpu=004
      <idle>-0     [004] d..3   309.874766: sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=gnome-terminal- next_pid=1699 next_prio=120
      gnome-terminal--1699 [004] d.h2   309.874941: sched_stat_runtime: comm=gnome-terminal- pid=1699 runtime=180706 [ns] vruntime=1126870572 [ns]
      <idle>-0     [003] d.s4   309.874956: sched_waking: comm=rcu_sched pid=9 prio=120 target_cpu=007
      <idle>-0     [003] d.s5   309.874960: sched_wake_idle_without_ipi: cpu=7
      <idle>-0     [003] d.s5   309.874961: sched_wakeup: comm=rcu_sched pid=9 prio=120 target_cpu=007
      <idle>-0     [007] d..3   309.874963: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=rcu_sched next_pid=9 next_prio=120
      rcu_sched-9  [007] d..3   309.874973: sched_stat_runtime: comm=rcu_sched pid=9 runtime=13646 [ns] vruntime=22531430286 [ns]
      rcu_sched-9  [007] d..3   309.874978: sched_switch: prev_comm=rcu_sched prev_pid=9 prev_prio=120 prev_state=R+ ==> next_comm=swapper/7 next_pid=0 next_prio=120
      <...>-2102   [005] d..4   309.874994: sched_migrate_task: comm=cyclictest pid=2103 prio=19 orig_cpu=5 dest_cpu=1
      <...>-2102   [005] d..4   309.875185: sched_wake_idle_without_ipi: cpu=1
      <idle>-0     [001] d..3   309.875200: sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=S ==> next_comm=cyclictest next_pid=2103 next_prio=19

  - onchange(var).save(field,...)

    The 'onchange(var).save(field,...)' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    changes.

    The end result is that the trace event fields specified as the
    onchange.save() params will be saved if 'var' changes for that
    hist trigger entry.  This allows context from the event that
    changed the value to be saved for later reference.  When the
    histogram is displayed, additional fields displaying the saved
    values will be printed.

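    The onchange(var) handler differs from onmax only in its firing
    condition: the action runs whenever the tracked value differs from
    the one previously recorded for that entry.  A minimal Python
    sketch of that condition (an illustration with hypothetical names,
    not kernel code; here the first value seen is treated as a change):

```python
# Last value seen per histogram entry, plus a counter standing in for
# the number of times the paired action (save/snapshot) was invoked.
last = {}
fired = 0

def onchange(key, value):
    """Invoke the action only when 'value' differs from the previous
    value recorded for this histogram entry."""
    global fired
    if key not in last or last[key] != value:
        last[key] = value
        fired += 1      # stand-in for save()/snapshot()

onchange(80, 10)
onchange(80, 10)   # unchanged: action not invoked
onchange(80, 12)
```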
  - onchange(var).snapshot()

    The 'onchange(var).snapshot()' hist trigger action is invoked
    whenever the value of 'var' associated with a histogram entry
    changes.

    The end result is that a global snapshot of the trace buffer will
    be saved in the tracing/snapshot file if 'var' changes for any
    hist trigger entry.

    Note that in this case the changed value is a global variable
    associated with the current trace instance.  The key of the specific
    trace event that caused the value to change and the global value
    itself are displayed, along with a message stating that a snapshot
    has been taken and where to find it.  The user can use the key
    information displayed to locate the corresponding bucket in the
    histogram for even more detail.

    As an example the below defines a hist trigger on the tcp_probe
    event, keyed on dport.  Whenever a tcp_probe event occurs, the
    cwnd field is checked against the current value stored in the
    $cwnd variable.  If the value has changed, a snapshot is taken.
    As part of the setup, all the scheduler and tcp events are also
    enabled, which are the events that will show up in the snapshot
    when it is taken at some point::

      # echo 1 > /sys/kernel/tracing/events/sched/enable
      # echo 1 > /sys/kernel/tracing/events/tcp/enable

      # echo 'hist:keys=dport:cwnd=snd_cwnd: \
              onchange($cwnd).save(snd_wnd,srtt,rcv_wnd): \
              onchange($cwnd).snapshot()' >> \
        /sys/kernel/tracing/events/tcp/tcp_probe/trigger

    When the histogram is displayed, for each bucket the tracked value
    and the saved values corresponding to that value are displayed
    following the rest of the fields.

    If a snapshot was taken, there is also a message indicating that,
    along with the value and event that triggered the snapshot::

      # cat /sys/kernel/tracing/events/tcp/tcp_probe/hist

        { dport: 1521 } hitcount: 8
          changed: 10  snd_wnd: 35456  srtt: 154262  rcv_wnd: 42112

        { dport: 80 } hitcount: 23
          changed: 10  snd_wnd: 28960  srtt: 19604  rcv_wnd: 29312

        { dport: 9001 } hitcount: 172
          changed: 10  snd_wnd: 48384  srtt: 260444  rcv_wnd: 55168

        { dport: 443 } hitcount: 211
          changed: 10  snd_wnd: 26960  srtt: 17379  rcv_wnd: 28800

        Snapshot taken (see tracing/snapshot).  Details:

            triggering value { onchange($cwnd) }: 10
            triggered by event with key: { dport: 80 }

        Totals:
            Hits: 414
            Entries: 4
            Dropped: 0

    In the above case, the event that triggered the snapshot has the
    key with dport == 80.  If you look at the bucket that has 80 as
    the key, you'll find the additional values save()'d along with the
    changed value for that bucket, which should be the same as the
    global changed value (since that was the same value that triggered
    the global snapshot).

    And finally, looking at the snapshot data should show at or near
    the end the event that triggered the snapshot::

      # cat /sys/kernel/tracing/snapshot

      gnome-shell-1261  [006] dN.3  49.823113: sched_stat_runtime: comm=gnome-shell pid=1261 runtime=49347 [ns] vruntime=1835730389 [ns]
      kworker/u16:4-773 [003] d..3  49.823114: sched_switch: prev_comm=kworker/u16:4 prev_pid=773 prev_prio=120 prev_state=R+ ==> next_comm=kworker/3:2 next_pid=135 next_prio=120
      gnome-shell-1261  [006] d..3  49.823114: sched_switch: prev_comm=gnome-shell prev_pid=1261 prev_prio=120 prev_state=R+ ==> next_comm=kworker/6:2 next_pid=387 next_prio=120
      kworker/3:2-135   [003] d..3  49.823118: sched_stat_runtime: comm=kworker/3:2 pid=135 runtime=5339 [ns] vruntime=17815800388 [ns]
      kworker/6:2-387   [006] d..3  49.823120: sched_stat_runtime: comm=kworker/6:2 pid=387 runtime=9594 [ns] vruntime=14589605367 [ns]
      kworker/6:2-387   [006] d..3  49.823122: sched_switch: prev_comm=kworker/6:2 prev_pid=387 prev_prio=120 prev_state=R+ ==> next_comm=gnome-shell next_pid=1261 next_prio=120
      kworker/3:2-135   [003] d..3  49.823123: sched_switch: prev_comm=kworker/3:2 prev_pid=135 prev_prio=120 prev_state=T ==> next_comm=swapper/3 next_pid=0 next_prio=120
      <idle>-0          [004] ..s7  49.823798: tcp_probe: src=10.0.0.10:54326 dest=23.215.104.193:80 mark=0x0 length=32 snd_nxt=0xe3ae2ff5 snd_una=0xe3ae2ecd snd_cwnd=10 ssthresh=2147483647 snd_wnd=28960 srtt=19604 rcv_wnd=29312

3. User space creating a trigger
--------------------------------

Writing into /sys/kernel/tracing/trace_marker writes into the ftrace
ring buffer.  This can also act like an event, by attaching a trigger via
the trigger file located in /sys/kernel/tracing/events/ftrace/print/

Modifying cyclictest to write into the trace_marker file before it sleeps
and after it wakes up, something like this::

  static void traceputs(char *str)
  {
          /* tracemark_fd is the trace_marker file descriptor */
          if (tracemark_fd < 0)
                  return;
          /* write the tracemark message */
          write(tracemark_fd, str, strlen(str));
  }
d3439f9d 2547
ea272257 2548And later add something like::
d3439f9d
SRV
2549
2550 traceputs("start");
2551 clock_nanosleep(...);
2552 traceputs("end");
2553
ea272257 2554We can make a histogram from this::
d3439f9d
SRV
2555
2556 # cd /sys/kernel/tracing
2557 # echo 'latency u64 lat' > synthetic_events
2558 # echo 'hist:keys=common_pid:ts0=common_timestamp.usecs if buf == "start"' > events/ftrace/print/trigger
2559 # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(ftrace.print).latency($lat) if buf == "end"' >> events/ftrace/print/trigger
2560 # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger
2561
2562The above created a synthetic event called "latency" and two histograms
2563against the trace_marker, one gets triggered when "start" is written into the
2564trace_marker file and the other when "end" is written. If the pids match, then
2565it will call the "latency" synthetic event with the calculated latency as its
2566parameter. Finally, a histogram is added to the latency synthetic event to
2567record the calculated latency along with the pid.
2568
ea272257 2569Now running cyclictest with::
d3439f9d
SRV
2570
2571 # ./cyclictest -p80 -d0 -i250 -n -a -t --tracemark -b 1000
2572
2573 -p80 : run threads at priority 80
2574 -d0 : have all threads run at the same interval
2575 -i250 : start the interval at 250 microseconds (all threads will do this)
2576 -n : sleep with nanosleep
2577 -a : affine all threads to a separate CPU
2578 -t : one thread per available CPU
2579 --tracemark : enable trace mark writing
2580 -b 1000 : stop if any latency is greater than 1000 microseconds
2581
2582Note, the -b 1000 is used just to make --tracemark available.
2583
ea272257 2584Then we can see the histogram created by this with::
d3439f9d
SRV
2585
2586 # cat events/synthetic/latency/hist
ea272257
MCC
2587 # event histogram
2588 #
2589 # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
2590 #
2591
2592 { lat: 107, common_pid: 2039 } hitcount: 1
2593 { lat: 122, common_pid: 2041 } hitcount: 1
2594 { lat: 166, common_pid: 2039 } hitcount: 1
2595 { lat: 174, common_pid: 2039 } hitcount: 1
2596 { lat: 194, common_pid: 2041 } hitcount: 1
2597 { lat: 196, common_pid: 2036 } hitcount: 1
2598 { lat: 197, common_pid: 2038 } hitcount: 1
2599 { lat: 198, common_pid: 2039 } hitcount: 1
2600 { lat: 199, common_pid: 2039 } hitcount: 1
2601 { lat: 200, common_pid: 2041 } hitcount: 1
2602 { lat: 201, common_pid: 2039 } hitcount: 2
2603 { lat: 202, common_pid: 2038 } hitcount: 1
2604 { lat: 202, common_pid: 2043 } hitcount: 1
2605 { lat: 203, common_pid: 2039 } hitcount: 1
2606 { lat: 203, common_pid: 2036 } hitcount: 1
2607 { lat: 203, common_pid: 2041 } hitcount: 1
2608 { lat: 206, common_pid: 2038 } hitcount: 2
2609 { lat: 207, common_pid: 2039 } hitcount: 1
2610 { lat: 207, common_pid: 2036 } hitcount: 1
2611 { lat: 208, common_pid: 2040 } hitcount: 1
2612 { lat: 209, common_pid: 2043 } hitcount: 1
2613 { lat: 210, common_pid: 2039 } hitcount: 1
2614 { lat: 211, common_pid: 2039 } hitcount: 4
2615 { lat: 212, common_pid: 2043 } hitcount: 1
2616 { lat: 212, common_pid: 2039 } hitcount: 2
2617 { lat: 213, common_pid: 2039 } hitcount: 1
2618 { lat: 214, common_pid: 2038 } hitcount: 1
2619 { lat: 214, common_pid: 2039 } hitcount: 2
2620 { lat: 214, common_pid: 2042 } hitcount: 1
2621 { lat: 215, common_pid: 2039 } hitcount: 1
2622 { lat: 217, common_pid: 2036 } hitcount: 1
2623 { lat: 217, common_pid: 2040 } hitcount: 1
2624 { lat: 217, common_pid: 2039 } hitcount: 1
2625 { lat: 218, common_pid: 2039 } hitcount: 6
2626 { lat: 219, common_pid: 2039 } hitcount: 9
2627 { lat: 220, common_pid: 2039 } hitcount: 11
2628 { lat: 221, common_pid: 2039 } hitcount: 5
2629 { lat: 221, common_pid: 2042 } hitcount: 1
2630 { lat: 222, common_pid: 2039 } hitcount: 7
2631 { lat: 223, common_pid: 2036 } hitcount: 1
2632 { lat: 223, common_pid: 2039 } hitcount: 3
2633 { lat: 224, common_pid: 2039 } hitcount: 4
2634 { lat: 224, common_pid: 2037 } hitcount: 1
2635 { lat: 224, common_pid: 2036 } hitcount: 2
2636 { lat: 225, common_pid: 2039 } hitcount: 5
2637 { lat: 225, common_pid: 2042 } hitcount: 1
2638 { lat: 226, common_pid: 2039 } hitcount: 7
2639 { lat: 226, common_pid: 2036 } hitcount: 4
2640 { lat: 227, common_pid: 2039 } hitcount: 6
2641 { lat: 227, common_pid: 2036 } hitcount: 12
2642 { lat: 227, common_pid: 2043 } hitcount: 1
2643 { lat: 228, common_pid: 2039 } hitcount: 7
2644 { lat: 228, common_pid: 2036 } hitcount: 14
2645 { lat: 229, common_pid: 2039 } hitcount: 9
2646 { lat: 229, common_pid: 2036 } hitcount: 8
2647 { lat: 229, common_pid: 2038 } hitcount: 1
2648 { lat: 230, common_pid: 2039 } hitcount: 11
2649 { lat: 230, common_pid: 2036 } hitcount: 6
2650 { lat: 230, common_pid: 2043 } hitcount: 1
2651 { lat: 230, common_pid: 2042 } hitcount: 2
2652 { lat: 231, common_pid: 2041 } hitcount: 1
2653 { lat: 231, common_pid: 2036 } hitcount: 6
2654 { lat: 231, common_pid: 2043 } hitcount: 1
2655 { lat: 231, common_pid: 2039 } hitcount: 8
2656 { lat: 232, common_pid: 2037 } hitcount: 1
2657 { lat: 232, common_pid: 2039 } hitcount: 6
2658 { lat: 232, common_pid: 2040 } hitcount: 2
2659 { lat: 232, common_pid: 2036 } hitcount: 5
2660 { lat: 232, common_pid: 2043 } hitcount: 1
2661 { lat: 233, common_pid: 2036 } hitcount: 5
2662 { lat: 233, common_pid: 2039 } hitcount: 11
2663 { lat: 234, common_pid: 2039 } hitcount: 4
2664 { lat: 234, common_pid: 2038 } hitcount: 2
2665 { lat: 234, common_pid: 2043 } hitcount: 2
2666 { lat: 234, common_pid: 2036 } hitcount: 11
2667 { lat: 234, common_pid: 2040 } hitcount: 1
2668 { lat: 235, common_pid: 2037 } hitcount: 2
2669 { lat: 235, common_pid: 2036 } hitcount: 8
2670 { lat: 235, common_pid: 2043 } hitcount: 2
2671 { lat: 235, common_pid: 2039 } hitcount: 5
2672 { lat: 235, common_pid: 2042 } hitcount: 2
2673 { lat: 235, common_pid: 2040 } hitcount: 4
2674 { lat: 235, common_pid: 2041 } hitcount: 1
2675 { lat: 236, common_pid: 2036 } hitcount: 7
2676 { lat: 236, common_pid: 2037 } hitcount: 1
2677 { lat: 236, common_pid: 2041 } hitcount: 5
2678 { lat: 236, common_pid: 2039 } hitcount: 3
2679 { lat: 236, common_pid: 2043 } hitcount: 9
2680 { lat: 236, common_pid: 2040 } hitcount: 7
2681 { lat: 237, common_pid: 2037 } hitcount: 1
2682 { lat: 237, common_pid: 2040 } hitcount: 1
2683 { lat: 237, common_pid: 2036 } hitcount: 9
2684 { lat: 237, common_pid: 2039 } hitcount: 3
2685 { lat: 237, common_pid: 2043 } hitcount: 8
2686 { lat: 237, common_pid: 2042 } hitcount: 2
2687 { lat: 237, common_pid: 2041 } hitcount: 2
2688 { lat: 238, common_pid: 2043 } hitcount: 10
2689 { lat: 238, common_pid: 2040 } hitcount: 1
2690 { lat: 238, common_pid: 2037 } hitcount: 9
2691 { lat: 238, common_pid: 2038 } hitcount: 1
2692 { lat: 238, common_pid: 2039 } hitcount: 1
2693 { lat: 238, common_pid: 2042 } hitcount: 3
2694 { lat: 238, common_pid: 2036 } hitcount: 7
2695 { lat: 239, common_pid: 2041 } hitcount: 1
2696 { lat: 239, common_pid: 2043 } hitcount: 11
2697 { lat: 239, common_pid: 2037 } hitcount: 11
2698 { lat: 239, common_pid: 2038 } hitcount: 6
2699 { lat: 239, common_pid: 2036 } hitcount: 7
2700 { lat: 239, common_pid: 2040 } hitcount: 1
2701 { lat: 239, common_pid: 2042 } hitcount: 9
2702 { lat: 240, common_pid: 2037 } hitcount: 29
2703 { lat: 240, common_pid: 2043 } hitcount: 15
2704 { lat: 240, common_pid: 2040 } hitcount: 44
2705 { lat: 240, common_pid: 2039 } hitcount: 1
2706 { lat: 240, common_pid: 2041 } hitcount: 2
2707 { lat: 240, common_pid: 2038 } hitcount: 1
2708 { lat: 240, common_pid: 2036 } hitcount: 10
2709 { lat: 240, common_pid: 2042 } hitcount: 13
2710 { lat: 241, common_pid: 2036 } hitcount: 21
2711 { lat: 241, common_pid: 2041 } hitcount: 36
2712 { lat: 241, common_pid: 2037 } hitcount: 34
2713 { lat: 241, common_pid: 2042 } hitcount: 14
2714 { lat: 241, common_pid: 2040 } hitcount: 94
2715 { lat: 241, common_pid: 2039 } hitcount: 12
2716 { lat: 241, common_pid: 2038 } hitcount: 2
2717 { lat: 241, common_pid: 2043 } hitcount: 28
2718 { lat: 242, common_pid: 2040 } hitcount: 109
2719 { lat: 242, common_pid: 2041 } hitcount: 506
2720 { lat: 242, common_pid: 2039 } hitcount: 155
2721 { lat: 242, common_pid: 2042 } hitcount: 21
2722 { lat: 242, common_pid: 2037 } hitcount: 52
2723 { lat: 242, common_pid: 2043 } hitcount: 21
2724 { lat: 242, common_pid: 2036 } hitcount: 16
2725 { lat: 242, common_pid: 2038 } hitcount: 156
2726 { lat: 243, common_pid: 2037 } hitcount: 46
2727 { lat: 243, common_pid: 2039 } hitcount: 40
2728 { lat: 243, common_pid: 2042 } hitcount: 119
2729 { lat: 243, common_pid: 2041 } hitcount: 611
2730 { lat: 243, common_pid: 2036 } hitcount: 69
2731 { lat: 243, common_pid: 2038 } hitcount: 784
2732 { lat: 243, common_pid: 2040 } hitcount: 323
2733 { lat: 243, common_pid: 2043 } hitcount: 14
2734 { lat: 244, common_pid: 2043 } hitcount: 35
2735 { lat: 244, common_pid: 2042 } hitcount: 305
2736 { lat: 244, common_pid: 2039 } hitcount: 8
2737 { lat: 244, common_pid: 2040 } hitcount: 4515
2738 { lat: 244, common_pid: 2038 } hitcount: 371
2739 { lat: 244, common_pid: 2037 } hitcount: 31
2740 { lat: 244, common_pid: 2036 } hitcount: 114
2741 { lat: 244, common_pid: 2041 } hitcount: 3396
2742 { lat: 245, common_pid: 2036 } hitcount: 700
2743 { lat: 245, common_pid: 2041 } hitcount: 2772
2744 { lat: 245, common_pid: 2037 } hitcount: 268
2745 { lat: 245, common_pid: 2039 } hitcount: 472
2746 { lat: 245, common_pid: 2038 } hitcount: 2758
2747 { lat: 245, common_pid: 2042 } hitcount: 3833
2748 { lat: 245, common_pid: 2040 } hitcount: 3105
2749 { lat: 245, common_pid: 2043 } hitcount: 645
2750 { lat: 246, common_pid: 2038 } hitcount: 3451
2751 { lat: 246, common_pid: 2041 } hitcount: 142
2752 { lat: 246, common_pid: 2037 } hitcount: 5101
2753 { lat: 246, common_pid: 2040 } hitcount: 68
2754 { lat: 246, common_pid: 2043 } hitcount: 5099
2755 { lat: 246, common_pid: 2039 } hitcount: 5608
2756 { lat: 246, common_pid: 2042 } hitcount: 3723
2757 { lat: 246, common_pid: 2036 } hitcount: 4738
2758 { lat: 247, common_pid: 2042 } hitcount: 312
2759 { lat: 247, common_pid: 2043 } hitcount: 2385
2760 { lat: 247, common_pid: 2041 } hitcount: 452
2761 { lat: 247, common_pid: 2038 } hitcount: 792
2762 { lat: 247, common_pid: 2040 } hitcount: 78
2763 { lat: 247, common_pid: 2036 } hitcount: 2375
2764 { lat: 247, common_pid: 2039 } hitcount: 1834
2765 { lat: 247, common_pid: 2037 } hitcount: 2655
2766 { lat: 248, common_pid: 2037 } hitcount: 36
2767 { lat: 248, common_pid: 2042 } hitcount: 11
2768 { lat: 248, common_pid: 2038 } hitcount: 122
2769 { lat: 248, common_pid: 2036 } hitcount: 135
2770 { lat: 248, common_pid: 2039 } hitcount: 26
2771 { lat: 248, common_pid: 2041 } hitcount: 503
2772 { lat: 248, common_pid: 2043 } hitcount: 66
2773 { lat: 248, common_pid: 2040 } hitcount: 46
2774 { lat: 249, common_pid: 2037 } hitcount: 29
2775 { lat: 249, common_pid: 2038 } hitcount: 1
2776 { lat: 249, common_pid: 2043 } hitcount: 29
2777 { lat: 249, common_pid: 2039 } hitcount: 8
2778 { lat: 249, common_pid: 2042 } hitcount: 56
2779 { lat: 249, common_pid: 2040 } hitcount: 27
2780 { lat: 249, common_pid: 2041 } hitcount: 11
2781 { lat: 249, common_pid: 2036 } hitcount: 27
2782 { lat: 250, common_pid: 2038 } hitcount: 1
2783 { lat: 250, common_pid: 2036 } hitcount: 30
2784 { lat: 250, common_pid: 2040 } hitcount: 19
2785 { lat: 250, common_pid: 2043 } hitcount: 22
2786 { lat: 250, common_pid: 2042 } hitcount: 20
2787 { lat: 250, common_pid: 2041 } hitcount: 1
2788 { lat: 250, common_pid: 2039 } hitcount: 6
2789 { lat: 250, common_pid: 2037 } hitcount: 48
2790 { lat: 251, common_pid: 2037 } hitcount: 43
2791 { lat: 251, common_pid: 2039 } hitcount: 1
2792 { lat: 251, common_pid: 2036 } hitcount: 12
2793 { lat: 251, common_pid: 2042 } hitcount: 2
2794 { lat: 251, common_pid: 2041 } hitcount: 1
2795 { lat: 251, common_pid: 2043 } hitcount: 15
2796 { lat: 251, common_pid: 2040 } hitcount: 3
2797 { lat: 252, common_pid: 2040 } hitcount: 1
2798 { lat: 252, common_pid: 2036 } hitcount: 12
2799 { lat: 252, common_pid: 2037 } hitcount: 21
2800 { lat: 252, common_pid: 2043 } hitcount: 14
2801 { lat: 253, common_pid: 2037 } hitcount: 21
2802 { lat: 253, common_pid: 2039 } hitcount: 2
2803 { lat: 253, common_pid: 2036 } hitcount: 9
2804 { lat: 253, common_pid: 2043 } hitcount: 6
2805 { lat: 253, common_pid: 2040 } hitcount: 1
2806 { lat: 254, common_pid: 2036 } hitcount: 8
2807 { lat: 254, common_pid: 2043 } hitcount: 3
2808 { lat: 254, common_pid: 2041 } hitcount: 1
2809 { lat: 254, common_pid: 2042 } hitcount: 1
2810 { lat: 254, common_pid: 2039 } hitcount: 1
2811 { lat: 254, common_pid: 2037 } hitcount: 12
2812 { lat: 255, common_pid: 2043 } hitcount: 1
2813 { lat: 255, common_pid: 2037 } hitcount: 2
2814 { lat: 255, common_pid: 2036 } hitcount: 2
2815 { lat: 255, common_pid: 2039 } hitcount: 8
2816 { lat: 256, common_pid: 2043 } hitcount: 1
2817 { lat: 256, common_pid: 2036 } hitcount: 4
2818 { lat: 256, common_pid: 2039 } hitcount: 6
2819 { lat: 257, common_pid: 2039 } hitcount: 5
2820 { lat: 257, common_pid: 2036 } hitcount: 4
2821 { lat: 258, common_pid: 2039 } hitcount: 5
2822 { lat: 258, common_pid: 2036 } hitcount: 2
2823 { lat: 259, common_pid: 2036 } hitcount: 7
2824 { lat: 259, common_pid: 2039 } hitcount: 7
2825 { lat: 260, common_pid: 2036 } hitcount: 8
2826 { lat: 260, common_pid: 2039 } hitcount: 6
2827 { lat: 261, common_pid: 2036 } hitcount: 5
2828 { lat: 261, common_pid: 2039 } hitcount: 7
2829 { lat: 262, common_pid: 2039 } hitcount: 5
2830 { lat: 262, common_pid: 2036 } hitcount: 5
2831 { lat: 263, common_pid: 2039 } hitcount: 7
2832 { lat: 263, common_pid: 2036 } hitcount: 7
2833 { lat: 264, common_pid: 2039 } hitcount: 9
2834 { lat: 264, common_pid: 2036 } hitcount: 9
2835 { lat: 265, common_pid: 2036 } hitcount: 5
2836 { lat: 265, common_pid: 2039 } hitcount: 1
2837 { lat: 266, common_pid: 2036 } hitcount: 1
2838 { lat: 266, common_pid: 2039 } hitcount: 3
2839 { lat: 267, common_pid: 2036 } hitcount: 1
2840 { lat: 267, common_pid: 2039 } hitcount: 3
2841 { lat: 268, common_pid: 2036 } hitcount: 1
2842 { lat: 268, common_pid: 2039 } hitcount: 6
2843 { lat: 269, common_pid: 2036 } hitcount: 1
2844 { lat: 269, common_pid: 2043 } hitcount: 1
2845 { lat: 269, common_pid: 2039 } hitcount: 2
2846 { lat: 270, common_pid: 2040 } hitcount: 1
2847 { lat: 270, common_pid: 2039 } hitcount: 6
2848 { lat: 271, common_pid: 2041 } hitcount: 1
2849 { lat: 271, common_pid: 2039 } hitcount: 5
2850 { lat: 272, common_pid: 2039 } hitcount: 10
2851 { lat: 273, common_pid: 2039 } hitcount: 8
2852 { lat: 274, common_pid: 2039 } hitcount: 2
2853 { lat: 275, common_pid: 2039 } hitcount: 1
2854 { lat: 276, common_pid: 2039 } hitcount: 2
2855 { lat: 276, common_pid: 2037 } hitcount: 1
2856 { lat: 276, common_pid: 2038 } hitcount: 1
2857 { lat: 277, common_pid: 2039 } hitcount: 1
2858 { lat: 277, common_pid: 2042 } hitcount: 1
2859 { lat: 278, common_pid: 2039 } hitcount: 1
2860 { lat: 279, common_pid: 2039 } hitcount: 4
2861 { lat: 279, common_pid: 2043 } hitcount: 1
2862 { lat: 280, common_pid: 2039 } hitcount: 3
2863 { lat: 283, common_pid: 2036 } hitcount: 2
2864 { lat: 284, common_pid: 2039 } hitcount: 1
2865 { lat: 284, common_pid: 2043 } hitcount: 1
2866 { lat: 288, common_pid: 2039 } hitcount: 1
2867 { lat: 289, common_pid: 2039 } hitcount: 1
2868 { lat: 300, common_pid: 2039 } hitcount: 1
2869 { lat: 384, common_pid: 2039 } hitcount: 1
2870
2871 Totals:
2872 Hits: 67625
2873 Entries: 278
2874 Dropped: 0

Note, the writes happen around the sleep, so ideally they will all be
about 250 microseconds. If you are wondering how there are several that
are under 250 microseconds, that is because of the way cyclictest works:
if one iteration comes in late, the next one will set the timer to wake
up in less than 250 microseconds. That is, if an iteration came in 50
microseconds late, the next wake up will be at 200 microseconds.

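The compensation described above is just a subtraction of the lateness from the next sleep interval. A minimal sketch of that behavior, where the function name and parameters are illustrative rather than cyclictest's actual code:

```python
def next_timer(interval_us, wakeup_us, expected_us):
    """Return the length of the next sleep, shortened by any lateness.

    If the previous wakeup landed `wakeup_us - expected_us` microseconds
    after the expected time, the next timer is reduced by that amount so
    the iterations stay aligned to the 250 microsecond cadence.
    """
    lateness = max(0, wakeup_us - expected_us)
    return interval_us - lateness

# An iteration that woke 50 microseconds late gets a 200 microsecond
# timer for the next iteration.
print(next_timer(250, 1050, 1000))  # -> 200
```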
But this could easily be done in userspace. To make this even more
interesting, we can mix events that happened in the kernel with
trace_marker writes in the same histogram::

  # cd /sys/kernel/tracing
  # echo 'latency u64 lat' > synthetic_events
  # echo 'hist:keys=pid:ts0=common_timestamp.usecs' > events/sched/sched_waking/trigger
  # echo 'hist:keys=common_pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_waking).latency($lat) if buf == "end"' > events/ftrace/print/trigger
  # echo 'hist:keys=lat,common_pid:sort=lat' > events/synthetic/latency/trigger

The difference this time is that instead of using the trace_marker to start
the latency, the sched_waking event is used, matching the common_pid for the
trace_marker write with the pid that is being woken by sched_waking.

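Conceptually, the two hist triggers act like a per-pid timestamp table: sched_waking records ts0 keyed on the woken pid, and the later trace_marker write for the same pid computes the delta and feeds it into the synthetic event's histogram. A rough userspace analogue of that bookkeeping (hypothetical event callbacks, not a trace parser):

```python
ts0 = {}            # pid -> wakeup timestamp in usecs (the ts0 variable)
latency_hist = {}   # (lat, pid) -> hitcount (the synthetic event histogram)

def on_sched_waking(pid, ts_usecs):
    # mirrors: hist:keys=pid:ts0=common_timestamp.usecs
    ts0[pid] = ts_usecs

def on_trace_marker(pid, ts_usecs, buf):
    # mirrors: lat=common_timestamp.usecs-$ts0 ... if buf == "end"
    if buf != "end" or pid not in ts0:
        return
    lat = ts_usecs - ts0.pop(pid)
    key = (lat, pid)
    latency_hist[key] = latency_hist.get(key, 0) + 1

on_sched_waking(2302, 1000)
on_trace_marker(2302, 1008, "end")
print(latency_hist)  # -> {(8, 2302): 1}
```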
After running cyclictest again with the same parameters, we now have::

  # cat events/synthetic/latency/hist
  # event histogram
  #
  # trigger info: hist:keys=lat,common_pid:vals=hitcount:sort=lat:size=2048 [active]
  #

  { lat: 7, common_pid: 2302 } hitcount: 640
  { lat: 7, common_pid: 2299 } hitcount: 42
  { lat: 7, common_pid: 2303 } hitcount: 18
  { lat: 7, common_pid: 2305 } hitcount: 166
  { lat: 7, common_pid: 2306 } hitcount: 1
  { lat: 7, common_pid: 2301 } hitcount: 91
  { lat: 7, common_pid: 2300 } hitcount: 17
  { lat: 8, common_pid: 2303 } hitcount: 8296
  { lat: 8, common_pid: 2304 } hitcount: 6864
  { lat: 8, common_pid: 2305 } hitcount: 9464
  { lat: 8, common_pid: 2301 } hitcount: 9213
  { lat: 8, common_pid: 2306 } hitcount: 6246
  { lat: 8, common_pid: 2302 } hitcount: 8797
  { lat: 8, common_pid: 2299 } hitcount: 8771
  { lat: 8, common_pid: 2300 } hitcount: 8119
  { lat: 9, common_pid: 2305 } hitcount: 1519
  { lat: 9, common_pid: 2299 } hitcount: 2346
  { lat: 9, common_pid: 2303 } hitcount: 2841
  { lat: 9, common_pid: 2301 } hitcount: 1846
  { lat: 9, common_pid: 2304 } hitcount: 3861
  { lat: 9, common_pid: 2302 } hitcount: 1210
  { lat: 9, common_pid: 2300 } hitcount: 2762
  { lat: 9, common_pid: 2306 } hitcount: 4247
  { lat: 10, common_pid: 2299 } hitcount: 16
  { lat: 10, common_pid: 2306 } hitcount: 333
  { lat: 10, common_pid: 2303 } hitcount: 16
  { lat: 10, common_pid: 2304 } hitcount: 168
  { lat: 10, common_pid: 2302 } hitcount: 240
  { lat: 10, common_pid: 2301 } hitcount: 28
  { lat: 10, common_pid: 2300 } hitcount: 95
  { lat: 10, common_pid: 2305 } hitcount: 18
  { lat: 11, common_pid: 2303 } hitcount: 5
  { lat: 11, common_pid: 2305 } hitcount: 8
  { lat: 11, common_pid: 2306 } hitcount: 221
  { lat: 11, common_pid: 2302 } hitcount: 76
  { lat: 11, common_pid: 2304 } hitcount: 26
  { lat: 11, common_pid: 2300 } hitcount: 125
  { lat: 11, common_pid: 2299 } hitcount: 2
  { lat: 12, common_pid: 2305 } hitcount: 3
  { lat: 12, common_pid: 2300 } hitcount: 6
  { lat: 12, common_pid: 2306 } hitcount: 90
  { lat: 12, common_pid: 2302 } hitcount: 4
  { lat: 12, common_pid: 2303 } hitcount: 1
  { lat: 12, common_pid: 2304 } hitcount: 122
  { lat: 13, common_pid: 2300 } hitcount: 12
  { lat: 13, common_pid: 2301 } hitcount: 1
  { lat: 13, common_pid: 2306 } hitcount: 32
  { lat: 13, common_pid: 2302 } hitcount: 5
  { lat: 13, common_pid: 2305 } hitcount: 1
  { lat: 13, common_pid: 2303 } hitcount: 1
  { lat: 13, common_pid: 2304 } hitcount: 61
  { lat: 14, common_pid: 2303 } hitcount: 4
  { lat: 14, common_pid: 2306 } hitcount: 5
  { lat: 14, common_pid: 2305 } hitcount: 4
  { lat: 14, common_pid: 2304 } hitcount: 62
  { lat: 14, common_pid: 2302 } hitcount: 19
  { lat: 14, common_pid: 2300 } hitcount: 33
  { lat: 14, common_pid: 2299 } hitcount: 1
  { lat: 14, common_pid: 2301 } hitcount: 4
  { lat: 15, common_pid: 2305 } hitcount: 1
  { lat: 15, common_pid: 2302 } hitcount: 25
  { lat: 15, common_pid: 2300 } hitcount: 11
  { lat: 15, common_pid: 2299 } hitcount: 5
  { lat: 15, common_pid: 2301 } hitcount: 1
  { lat: 15, common_pid: 2304 } hitcount: 8
  { lat: 15, common_pid: 2303 } hitcount: 1
  { lat: 15, common_pid: 2306 } hitcount: 6
  { lat: 16, common_pid: 2302 } hitcount: 31
  { lat: 16, common_pid: 2306 } hitcount: 3
  { lat: 16, common_pid: 2300 } hitcount: 5
  { lat: 17, common_pid: 2302 } hitcount: 6
  { lat: 17, common_pid: 2303 } hitcount: 1
  { lat: 18, common_pid: 2304 } hitcount: 1
  { lat: 18, common_pid: 2302 } hitcount: 8
  { lat: 18, common_pid: 2299 } hitcount: 1
  { lat: 18, common_pid: 2301 } hitcount: 1
  { lat: 19, common_pid: 2303 } hitcount: 4
  { lat: 19, common_pid: 2304 } hitcount: 5
  { lat: 19, common_pid: 2302 } hitcount: 4
  { lat: 19, common_pid: 2299 } hitcount: 3
  { lat: 19, common_pid: 2306 } hitcount: 1
  { lat: 19, common_pid: 2300 } hitcount: 4
  { lat: 19, common_pid: 2305 } hitcount: 5
  { lat: 20, common_pid: 2299 } hitcount: 2
  { lat: 20, common_pid: 2302 } hitcount: 3
  { lat: 20, common_pid: 2305 } hitcount: 1
  { lat: 20, common_pid: 2300 } hitcount: 2
  { lat: 20, common_pid: 2301 } hitcount: 2
  { lat: 20, common_pid: 2303 } hitcount: 3
  { lat: 21, common_pid: 2305 } hitcount: 1
  { lat: 21, common_pid: 2299 } hitcount: 5
  { lat: 21, common_pid: 2303 } hitcount: 4
  { lat: 21, common_pid: 2302 } hitcount: 7
  { lat: 21, common_pid: 2300 } hitcount: 1
  { lat: 21, common_pid: 2301 } hitcount: 5
  { lat: 21, common_pid: 2304 } hitcount: 2
  { lat: 22, common_pid: 2302 } hitcount: 5
  { lat: 22, common_pid: 2303 } hitcount: 1
  { lat: 22, common_pid: 2306 } hitcount: 3
  { lat: 22, common_pid: 2301 } hitcount: 2
  { lat: 22, common_pid: 2300 } hitcount: 1
  { lat: 22, common_pid: 2299 } hitcount: 1
  { lat: 22, common_pid: 2305 } hitcount: 1
  { lat: 22, common_pid: 2304 } hitcount: 1
  { lat: 23, common_pid: 2299 } hitcount: 1
  { lat: 23, common_pid: 2306 } hitcount: 2
  { lat: 23, common_pid: 2302 } hitcount: 6
  { lat: 24, common_pid: 2302 } hitcount: 3
  { lat: 24, common_pid: 2300 } hitcount: 1
  { lat: 24, common_pid: 2306 } hitcount: 2
  { lat: 24, common_pid: 2305 } hitcount: 1
  { lat: 24, common_pid: 2299 } hitcount: 1
  { lat: 25, common_pid: 2300 } hitcount: 1
  { lat: 25, common_pid: 2302 } hitcount: 4
  { lat: 26, common_pid: 2302 } hitcount: 2
  { lat: 27, common_pid: 2305 } hitcount: 1
  { lat: 27, common_pid: 2300 } hitcount: 1
  { lat: 27, common_pid: 2302 } hitcount: 3
  { lat: 28, common_pid: 2306 } hitcount: 1
  { lat: 28, common_pid: 2302 } hitcount: 4
  { lat: 29, common_pid: 2302 } hitcount: 1
  { lat: 29, common_pid: 2300 } hitcount: 2
  { lat: 29, common_pid: 2306 } hitcount: 1
  { lat: 29, common_pid: 2304 } hitcount: 1
  { lat: 30, common_pid: 2302 } hitcount: 4
  { lat: 31, common_pid: 2302 } hitcount: 6
  { lat: 32, common_pid: 2302 } hitcount: 1
  { lat: 33, common_pid: 2299 } hitcount: 1
  { lat: 33, common_pid: 2302 } hitcount: 3
  { lat: 34, common_pid: 2302 } hitcount: 2
  { lat: 35, common_pid: 2302 } hitcount: 1
  { lat: 35, common_pid: 2304 } hitcount: 1
  { lat: 36, common_pid: 2302 } hitcount: 4
  { lat: 37, common_pid: 2302 } hitcount: 6
  { lat: 38, common_pid: 2302 } hitcount: 2
  { lat: 39, common_pid: 2302 } hitcount: 2
  { lat: 39, common_pid: 2304 } hitcount: 1
  { lat: 40, common_pid: 2304 } hitcount: 2
  { lat: 40, common_pid: 2302 } hitcount: 5
  { lat: 41, common_pid: 2304 } hitcount: 1
  { lat: 41, common_pid: 2302 } hitcount: 8
  { lat: 42, common_pid: 2302 } hitcount: 6
  { lat: 42, common_pid: 2304 } hitcount: 1
  { lat: 43, common_pid: 2302 } hitcount: 3
  { lat: 43, common_pid: 2304 } hitcount: 4
  { lat: 44, common_pid: 2302 } hitcount: 6
  { lat: 45, common_pid: 2302 } hitcount: 5
  { lat: 46, common_pid: 2302 } hitcount: 5
  { lat: 47, common_pid: 2302 } hitcount: 7
  { lat: 48, common_pid: 2301 } hitcount: 1
  { lat: 48, common_pid: 2302 } hitcount: 9
  { lat: 49, common_pid: 2302 } hitcount: 3
  { lat: 50, common_pid: 2302 } hitcount: 1
  { lat: 50, common_pid: 2301 } hitcount: 1
  { lat: 51, common_pid: 2302 } hitcount: 2
  { lat: 51, common_pid: 2301 } hitcount: 1
  { lat: 61, common_pid: 2302 } hitcount: 1
  { lat: 110, common_pid: 2302 } hitcount: 1

  Totals:
      Hits: 89565
      Entries: 158
      Dropped: 0

This doesn't give us any information about how late cyclictest may have
woken up, but it does show us a nice histogram of how long it took from
the time that cyclictest was woken to the time it made it into user space.
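Since the hist file is plain text with a regular entry format, deriving summary statistics from it in userspace is straightforward. A small sketch that parses entries in the layout shown above and computes the hitcount-weighted mean latency (the regular expression assumes that exact `{ lat: ..., common_pid: ... } hitcount: ...` layout):

```python
import re

# Matches lines like: { lat: 8, common_pid: 2302 } hitcount: 8797
LINE_RE = re.compile(r"\{ lat:\s*(\d+), common_pid:\s*(\d+) \} hitcount:\s*(\d+)")

def weighted_mean_lat(hist_text):
    """Hitcount-weighted mean of the 'lat' key over all entries."""
    total_hits = 0
    total_lat = 0
    for match in LINE_RE.finditer(hist_text):
        lat, _pid, hits = map(int, match.groups())
        total_hits += hits
        total_lat += lat * hits
    return total_lat / total_hits

sample = """
  { lat: 7, common_pid: 2302 } hitcount: 640
  { lat: 8, common_pid: 2302 } hitcount: 8797
"""
print(weighted_mean_lat(sample))  # roughly 7.93
```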