#
# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
#

config USER_STACKTRACE_SUPPORT
	bool

config NOP_TRACER
	bool

config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_GRAPH_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.

config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool

config HAVE_HW_BRANCH_TRACER
	bool

config TRACER_MAX_TRACE
	bool

config RING_BUFFER
	bool

config TRACING
	bool
	select DEBUG_FS
	select RING_BUFFER
	select STACKTRACE if STACKTRACE_SUPPORT
	select TRACEPOINTS
	select NOP_TRACER

menu "Tracers"

config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FRAME_POINTER
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function. That NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it is disabled at
	  runtime (the bootup default), the overhead of the instructions is
	  very small and not measurable even in micro-benchmarks.

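The runtime enable/disable that the help text above describes happens through the debugfs tracing files. A minimal sketch of that workflow, assuming debugfs is mounted at /debugfs as the other help texts in this file do (the mount point and file names are assumptions, and the snippet guards against them being absent):

```shell
# Sketch: toggling the function tracer at runtime via debugfs.
# The /debugfs mount point is an assumption taken from the help texts above;
# the snippet is a no-op on systems where the files are not present.
TRACING=/debugfs/tracing
if [ -w "$TRACING/current_tracer" ]; then
    echo function > "$TRACING/current_tracer"   # patch the NOPs into tracer calls
    head -n 20 "$TRACING/trace"                 # look at what was captured
    echo nop > "$TRACING/current_tracer"        # back to the no-op tracer
    status="enabled-and-disabled"
else
    status="tracing-files-unavailable"
fi
echo "$status"
```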
config FUNCTION_GRAPH_TRACER
	bool "Kernel Function Graph Tracer"
	depends on HAVE_FUNCTION_GRAPH_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at both its entry and
	  its return.
	  Its primary purpose is to trace the duration of functions and
	  draw a call graph for each thread, with some information such
	  as the return value.
	  This is done by saving the current return address in a stack
	  of calls on the current task structure.

config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	default n
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACE_IRQFLAGS
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the preempt-off timing option can be
	  used together or separately.)

config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	default n
	depends on GENERIC_TIME
	depends on PREEMPT
	depends on DEBUG_KERNEL
	select TRACING
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started
	  via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase with this option
	  enabled. This option and the irqs-off timing option can be
	  used together or separately.)

config SYSPROF_TRACER
	bool "Sysprof Tracer"
	depends on X86
	select TRACING
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.

config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.

config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	select TRACING
	select MARKERS
	help
	  This tracer hooks into the context switch path and records
	  every switching of tasks.

config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select TRACING
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls and traces key events and the
	  identity of tasks that can cause boot delays, such as
	  context-switches.

	  Its aim is to be parsed by the /scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a
	  visual representation of the delays during initcalls - but the
	  raw /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer
	  is selected, because the self-tests are an initcall as well and
	  would invalidate the boot trace.)

config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	select TRACING
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.

	  Say N if unsure.

config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  reached in the kernel is recorded, whether it hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. It should only be enabled when the system
	  is to be analyzed.

	  Say N if unsure.

config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likely and unlikely branches are not being traced.

config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.

	  Say N if unsure.

config POWER_TRACER
	bool "Trace power consumption behavior"
	depends on DEBUG_KERNEL
	depends on X86
	select TRACING
	help
	  This tracer helps developers analyze and optimize the kernel's
	  power management decisions, specifically the C-state and
	  P-state behavior.

config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	select STACKTRACE
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.

	  Say N if unsure.

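The stack_trace file mentioned in the help text above is read-only once the stack tracer is built in. A hedged usage sketch (the /debugfs mount point is an assumption carried over from the other help texts, and the read is guarded):

```shell
# Sketch: reading the maximum stack footprint recorded by STACK_TRACER.
# The mount point is an assumption; the snippet degrades gracefully.
STACK_FILE=/debugfs/tracing/stack_trace
if [ -r "$STACK_FILE" ]; then
    cat "$STACK_FILE"       # deepest stack seen so far, frame by frame
else
    echo "stack_trace not readable (STACK_TRACER off or debugfs not mounted)"
fi
```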
config BTS_TRACER
	depends on HAVE_HW_BRANCH_TRACER
	bool "Trace branches"
	select TRACING
	help
	  This tracer records all branches on the system in a circular
	  buffer, giving access to the last N branches for each CPU.

config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	default y
	help
	  This option will modify all the calls to ftrace dynamically
	  (it will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing is
	  active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (which stops all CPUs)
	  and modifies the code to jump over the call to ftrace.

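The "table created to dynamically enable them again" is what lets the administrator trace only a subset of functions rather than patching every call site back in. A hedged sketch of that usage, assuming a set_ftrace_filter file under the same /debugfs/tracing directory (the file name and mount point are assumptions not stated in this file, and the snippet guards against their absence):

```shell
# Sketch: with DYNAMIC_FTRACE, restrict tracing to selected functions.
# set_ftrace_filter and the /debugfs mount point are assumptions; the
# snippet is a no-op where they do not exist.
TRACING=/debugfs/tracing
if [ -w "$TRACING/set_ftrace_filter" ]; then
    echo 'schedule' > "$TRACING/set_ftrace_filter"  # only trace schedule()
    echo function   > "$TRACING/current_tracer"
    head -n 10 "$TRACING/trace"
    echo > "$TRACING/set_ftrace_filter"             # clear the filter again
else
    echo "set_ftrace_filter not available; is DYNAMIC_FTRACE enabled?"
fi
```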
config FTRACE_MCOUNT_RECORD
	def_bool y
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool

config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On
	  bootup, a series of tests is run to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.

endmenu