# Architectures that offer a FUNCTION_TRACER implementation should
# select HAVE_FUNCTION_TRACER:
config HAVE_FUNCTION_TRACER
	bool

config HAVE_FUNCTION_RET_TRACER
	bool

config HAVE_FUNCTION_TRACE_MCOUNT_TEST
	bool
	help
	  This gets selected when the arch tests the function_trace_stop
	  variable at the mcount call site. Otherwise, this variable
	  is tested by the called function.
config HAVE_DYNAMIC_FTRACE
	bool

config HAVE_FTRACE_MCOUNT_RECORD
	bool
config TRACER_MAX_TRACE
	bool

config TRACING
	bool
	select STACKTRACE if STACKTRACE_SUPPORT
config FUNCTION_TRACER
	bool "Kernel Function Tracer"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  Enable the kernel to trace every kernel function. This is done
	  by using a compiler feature to insert a small, 5-byte No-Operation
	  instruction at the beginning of every kernel function; this NOP
	  sequence is then dynamically patched into a tracer call when
	  tracing is enabled by the administrator. If it's runtime disabled
	  (the bootup default), the overhead of the instructions is very
	  small and not measurable even in micro-benchmarks.
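A kernel built with FUNCTION_TRACER is driven at runtime through the tracing debugfs files. A minimal usage sketch, assuming debugfs is mounted at /debugfs as the help texts in this file do (the mount point varies by system):

```shell
# Select the function tracer and peek at a window of the live trace.
echo function > /debugfs/tracing/current_tracer
head -20 /debugfs/tracing/trace
# Restore the no-op tracer to stop the function-call hooks.
echo nop > /debugfs/tracing/current_tracer
```

Writing `nop` back is the conventional way to disable tracing again while leaving the option compiled in.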
config FUNCTION_RET_TRACER
	bool "Kernel Function Return Tracer"
	depends on HAVE_FUNCTION_RET_TRACER
	depends on FUNCTION_TRACER
	help
	  Enable the kernel to trace a function at its return.
	  Its first purpose is to trace the duration of functions.
	  This is done by storing the current return address in the
	  thread info structure of the current task.
config IRQSOFF_TRACER
	bool "Interrupts-off Latency Tracer"
	depends on TRACE_IRQFLAGS_SUPPORT
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in irqs-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option
	  is enabled. This option and the preempt-off timing option can be
	  used together or separately.)
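A typical session with the maximum-latency search can be sketched as follows; paths again assume debugfs mounted at /debugfs:

```shell
# Enable the irqs-off latency tracer.
echo irqsoff > /debugfs/tracing/current_tracer
# Reset the recorded maximum so a fresh search begins.
echo 0 > /debugfs/tracing/tracing_max_latency
# Later: read the worst irqs-off latency seen so far, in microseconds,
# and the trace of the offending section from the trace file.
cat /debugfs/tracing/tracing_max_latency
```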
config PREEMPT_TRACER
	bool "Preemption-off Latency Tracer"
	depends on GENERIC_TIME
	depends on DEBUG_KERNEL
	select TRACER_MAX_TRACE
	help
	  This option measures the time spent in preemption-off critical
	  sections, with microsecond accuracy.

	  The default measurement method is a maximum search, which is
	  disabled by default and can be runtime (re-)started via:

	      echo 0 > /debugfs/tracing/tracing_max_latency

	  (Note that kernel size and overhead increase when this option
	  is enabled. This option and the irqs-off timing option can be
	  used together or separately.)
config SYSPROF_TRACER
	bool "Sysprof Tracer"
	help
	  This tracer provides the trace needed by the 'Sysprof' userspace
	  tool.
config SCHED_TRACER
	bool "Scheduling Latency Tracer"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	select TRACER_MAX_TRACE
	help
	  This tracer tracks the latency of the highest priority task
	  to be scheduled in, starting from the point it has woken up.
config CONTEXT_SWITCH_TRACER
	bool "Trace process context switches"
	depends on DEBUG_KERNEL
	help
	  This tracer hooks into the context switch and records
	  all switching of tasks.
config BOOT_TRACER
	bool "Trace boot initcalls"
	depends on DEBUG_KERNEL
	select CONTEXT_SWITCH_TRACER
	help
	  This tracer helps developers optimize boot times: it records
	  the timings of the initcalls and traces key events and the identity
	  of tasks that can cause boot delays, such as context-switches.

	  Its aim is to be parsed by the scripts/bootgraph.pl tool to
	  produce pretty graphics about boot inefficiencies, giving a visual
	  representation of the delays during initcalls - but the raw
	  /debug/tracing/trace text output is readable too.

	  (Note that tracing self tests can't be enabled if this tracer is
	  selected, because the self-tests are an initcall as well and that
	  would invalidate the boot trace.)
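The bootgraph.pl workflow mentioned above can be sketched as follows. The exact invocation may vary between kernel versions; the trace path is the one the help text uses, and the output filename is only illustrative:

```shell
# After booting with the boot tracer enabled, save the raw trace...
cat /debug/tracing/trace > /tmp/boot.trace
# ...and render it as an SVG boot graph with the in-tree script.
perl scripts/bootgraph.pl < /tmp/boot.trace > boot.svg
```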
config TRACE_BRANCH_PROFILING
	bool "Trace likely/unlikely profiler"
	depends on DEBUG_KERNEL
	help
	  This tracer profiles all the likely and unlikely macros
	  in the kernel. It will display the results in:

	  /debugfs/tracing/profile_annotated_branch

	  Note: this will add a significant overhead; only turn this
	  on if you need to profile the system's use of these macros.
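The profiler output named above is a plain-text table and can simply be read from debugfs; a minimal sketch, assuming debugfs mounted at /debugfs:

```shell
# Show the per-call-site hit/miss statistics for the annotated
# likely()/unlikely() branches (first lines are the column headers).
head -20 /debugfs/tracing/profile_annotated_branch
```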
config PROFILE_ALL_BRANCHES
	bool "Profile all if conditionals"
	depends on TRACE_BRANCH_PROFILING
	help
	  This tracer profiles all branch conditions. Every if ()
	  taken in the kernel is recorded, whether it was hit or missed.
	  The results will be displayed in:

	  /debugfs/tracing/profile_branch

	  This configuration, when enabled, will impose a great overhead
	  on the system. This should only be enabled when the system
	  is to be analyzed.
config TRACING_BRANCHES
	bool
	help
	  Selected by tracers that will trace the likely and unlikely
	  conditions. This prevents the tracers themselves from being
	  profiled. Profiling the tracing infrastructure can only happen
	  when the likelys and unlikelys are not being traced.
config BRANCH_TRACER
	bool "Trace likely/unlikely instances"
	depends on TRACE_BRANCH_PROFILING
	select TRACING_BRANCHES
	help
	  This traces the events of likely and unlikely condition
	  calls in the kernel. The difference between this and the
	  "Trace likely/unlikely profiler" is that this is not a
	  histogram of the callers, but actually places the calling
	  events into a running trace buffer to see when and where the
	  events happened, as well as their results.
config STACK_TRACER
	bool "Trace max stack"
	depends on HAVE_FUNCTION_TRACER
	depends on DEBUG_KERNEL
	select FUNCTION_TRACER
	help
	  This special tracer records the maximum stack footprint of the
	  kernel and displays it in /debugfs/tracing/stack_trace.

	  This tracer works by hooking into every function call that the
	  kernel executes, and keeping a maximum stack depth value and
	  stack-trace saved. Because this logic has to execute in every
	  kernel function, all the time, this option can slow down the
	  kernel measurably and is generally intended for kernel
	  developers only.
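The recorded worst case can be inspected at any time once the tracer is active; a minimal sketch, again assuming debugfs at /debugfs:

```shell
# Show the deepest stack usage observed so far, along with the
# functions that were on the stack when the maximum was recorded.
cat /debugfs/tracing/stack_trace
```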
config DYNAMIC_FTRACE
	bool "enable/disable ftrace tracepoints dynamically"
	depends on FUNCTION_TRACER
	depends on HAVE_DYNAMIC_FTRACE
	depends on DEBUG_KERNEL
	help
	  This option will modify all the calls to ftrace dynamically
	  (will patch them out of the binary image and replace them
	  with a No-Op instruction) as they are called. A table is
	  created to dynamically enable them again.

	  This way a CONFIG_FUNCTION_TRACER kernel is slightly larger,
	  but otherwise has native performance as long as no tracing
	  is active.

	  The changes to the code are done by a kernel thread that
	  wakes up once a second and checks to see if any ftrace calls
	  were made. If so, it runs stop_machine (stops all CPUs)
	  and modifies the code to jump over the call to ftrace.
config FTRACE_MCOUNT_RECORD
	depends on DYNAMIC_FTRACE
	depends on HAVE_FTRACE_MCOUNT_RECORD

config FTRACE_SELFTEST
	bool
config FTRACE_STARTUP_TEST
	bool "Perform a startup test on ftrace"
	depends on TRACING && DEBUG_KERNEL && !BOOT_TRACER
	select FTRACE_SELFTEST
	help
	  This option performs a series of startup tests on ftrace. On bootup,
	  a series of tests are made to verify that the tracer is
	  functioning properly. It will do tests on all the configured
	  tracers of ftrace.