#
# General architecture dependent options
#

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the hardware
	  provides counters for. This is realized by switching between
	  events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

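# The help text above mentions register_kprobe(); as a rough illustration
# (a minimal sketch, not taken from the kernel tree, with an arbitrary
# example symbol), a module-based probe might look like this in C:
#
#	#include <linux/module.h>
#	#include <linux/kprobes.h>
#
#	static int pre_handler(struct kprobe *p, struct pt_regs *regs)
#	{
#		pr_info("hit probe at %p\n", p->addr);
#		return 0;	/* let the probed instruction run */
#	}
#
#	/* "do_fork" is only an example; any kallsyms-visible symbol works */
#	static struct kprobe kp = {
#		.symbol_name = "do_fork",
#		.pre_handler = pre_handler,
#	};
#
#	static int __init probe_init(void)  { return register_kprobe(&kp); }
#	static void __exit probe_exit(void) { unregister_kprobe(&kp); }
#	module_init(probe_init);
#	module_exit(probe_exit);
#	MODULE_LICENSE("GPL");
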
config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM, has such
	  branches and includes support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

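# As a rough sketch of what this option enables (illustrative only, using
# the static_key API of this era; exact interfaces vary by kernel version),
# a jump-label-optimized branch looks roughly like:
#
#	#include <linux/jump_label.h>
#	#include <linux/printk.h>
#
#	static struct static_key my_key = STATIC_KEY_INIT_FALSE;
#
#	void hot_path(void)
#	{
#		if (static_key_false(&my_key))	/* compiles to a nop by default */
#			pr_info("rare path taken\n");
#	}
#
#	void enable_feature(void)
#	{
#		static_key_slow_inc(&my_key);	/* patches the nop into a jump */
#	}
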
config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If the function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	def_bool n
	select PERCPU_RWSEM
	help
	  Uprobes are the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish non-intrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. they trap on
	  unaligned access and require fixing it up in the exception
	  handler).

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

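# For reference, the accessors mentioned above are used roughly like this
# (a minimal sketch; the buffer layout and offset are made up for
# illustration):
#
#	#include <linux/types.h>
#	#include <asm/unaligned.h>
#
#	u32 read_len(const u8 *pkt)
#	{
#		/* pkt + 2 may not be 4-byte aligned, so do not dereference a
#		 * u32 pointer directly on arches without efficient unaligned
#		 * access; let the helper do the right thing per arch. */
#		return get_unaligned((const u32 *)(pkt + 2));
#	}
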
config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

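# As a small illustration (plain C, not kernel code): with this option the
# swab helpers map to the compiler builtin rather than the arch's inline
# assembler, so a byteswap like the one below can be fused with an adjacent
# load or store where the instruction set allows it.
#
#	#include <stdint.h>
#
#	uint32_t swap_example(uint32_t x)
#	{
#		/* replaces a hand-written __arch_swab32()-style sequence */
#		return __builtin_bswap32(x);
#	}
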
config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a CPU is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI_WATCHDOG
	bool
#
# An arch should select this if it provides all these things:
#
#	task_pt_regs()		in asm/processor.h or asm/ptrace.h
#	arch_has_single_step()	if there is hardware single-step support
#	arch_has_block_step()	if there is hardware block-step support
#	asm/syscall.h		supplying asm-generic/syscall.h interface
#	linux/regset.h		user_regset interfaces
#	CORE_DUMP_USE_REGSET	#define'd in linux/elf.h
#	TIF_SYSCALL_TRACE	calls tracehook_report_syscall_{entry,exit}
#	TIF_NOTIFY_RESUME	calls tracehook_notify_resume()
#	signal delivery		calls tracehook_signal_handler()
#
config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_ATTRS
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

# Select if arch init_task initializer is different from init/init_task.c
config ARCH_INIT_TASK
	bool

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR
	bool

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR
	bool

# Select if arch wants to size task_struct dynamically via arch_task_struct_size:
config ARCH_WANTS_DYNAMIC_TASK_STRUCT
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example, the kprobes-based event tracer needs this API.

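# The API referred to above consists of helpers such as the following
# (a sketch of typical usage, assuming an arch that implements them; the
# register name string is arch-specific and used here only as an example):
#
#	#include <linux/kernel.h>
#	#include <asm/ptrace.h>
#
#	void dump_some_state(struct pt_regs *regs)
#	{
#		int off = regs_query_register_offset("sp");
#
#		if (off >= 0)
#			pr_info("sp = %lx\n", regs_get_register(regs, off));
#		pr_info("stack[0] = %lx\n", regs_get_kernel_stack_nth(regs, 0));
#	}
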
config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, while others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also has support for calculating CPU cycle events
	  to determine how many clock cycles elapsed in a given period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  a bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer, which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double-word aligned and that
	  e.g. the SLUB allocator can perform double-word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

	  For best performance, an arch should use seccomp_phase1 and
	  seccomp_phase2 directly. It should call seccomp_phase1 for all
	  syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
	  need to be called from a ptrace-safe context. It must then
	  call seccomp_phase2 if seccomp_phase1 returns anything other
	  than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.

	  As an additional optimization, an arch may provide seccomp_data
	  directly to seccomp_phase1; this avoids multiple calls
	  to the syscall_xyz helpers for every syscall.

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

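# For context, a task installs such a filter from userspace roughly like
# this (an illustrative sketch of a filter that allows everything; a real
# policy would return SECCOMP_RET_KILL or SECCOMP_RET_ERRNO for disallowed
# syscalls):
#
#	#include <stddef.h>
#	#include <sys/prctl.h>
#	#include <linux/filter.h>
#	#include <linux/seccomp.h>
#
#	static struct sock_filter filter[] = {
#		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
#	};
#	static struct sock_fprog prog = {
#		.len = sizeof(filter) / sizeof(filter[0]),
#		.filter = filter,
#	};
#
#	int install_filter(void)
#	{
#		if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
#			return -1;
#		return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
#	}
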
config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack-based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable the "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - a local variable's address is used as part of the right-hand side
	    of an assignment or function argument
	  - a local variable is an array (or union containing an array),
	    regardless of array type or length
	  - the function uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

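# Roughly speaking (illustrative plain C, not kernel code), the compiler
# instruments a function like the one below so that a canary is stored
# below the return address on entry and checked before returning;
# -fstack-protector covers functions with character arrays like this one,
# while -fstack-protector-strong also covers the extra cases listed above.
#
#	#include <string.h>
#
#	void greet(const char *name)
#	{
#		char buf[64];		/* on-stack buffer, gets a canary */
#
#		strcpy(buf, name);	/* potential overflow the canary catches */
#	}
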
config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide the kernel/user boundary probes necessary for subsystems
	  that need them, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still needs to be protected.

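# Schematically, the wrapping described above uses the context tracking
# helpers like this (a simplified sketch with made-up function names, not
# any arch's actual entry code):
#
#	#include <linux/context_tracking.h>
#
#	void my_syscall_slowpath_entry(void)
#	{
#		user_exit();	/* tell RCU/context tracking we left userspace */
#		/* ... handle the syscall ... */
#	}
#
#	void my_return_to_user_slowpath(void)
#	{
#		/* ... */
#		user_enter();	/* back to an RCU extended quiescent state */
#	}
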
config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	bool
	default y if 64BIT
	help
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch-specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	bool
	help
	  The architecture executes not only the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS
	int
	default 2

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

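# The expected shapes of those functions are roughly (see the generic
# headers for the authoritative declarations):
#
#	unsigned long arch_mmap_rnd(void);
#	unsigned long arch_randomize_brk(struct mm_struct *mm);
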
config HAVE_COPY_THREAD_TLS
	bool
	help
	  Architecture provides copy_thread_tls to accept the tls argument via
	  normal C parameter passing, rather than extracting the syscall
	  argument from pt_regs.

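# For reference, the variant with the explicit tls parameter has roughly
# this shape (see kernel/fork.c for the authoritative prototype):
#
#	int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
#			    unsigned long arg, struct task_struct *p,
#			    unsigned long tls);
#
# whereas a legacy copy_thread() has to dig the tls value out of the
# child's pt_regs itself.
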
#
# ABI hall of shame
#
config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has the old sigsuspend(2) syscall, of the one-argument
	  variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has the old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but a fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

source "kernel/gcov/Kconfig"