From: Linus Torvalds
Date: Mon, 23 May 2016 02:40:39 +0000 (-0700)
Subject: Merge tag 'trace-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt...
X-Git-Tag: v4.7-rc1~79
X-Git-Url: https://git.kernel.dk/?a=commitdiff_plain;h=7639dad93a5564579987abded4ec05e3db13659d;hp=-c;p=linux-2.6-block.git

Merge tag 'trace-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull more tracing updates from Steven Rostedt:
 "Three more changes.

  - I forgot that I had another selftest to stress test the ftrace
    instance creation.  It was actually supposed to go into the 4.6
    merge window, but I never committed it.  I almost forgot about it
    again, but noticed it was missing from your tree.

  - Soumya PN sent me a clean-up patch to not disable interrupts when
    taking the tasklist_lock for read, as it's unnecessary because that
    lock is never taken for write in irq context.

  - Newer gcc's can cause the jump in the function_graph code to the
    global ftrace_stub label to be a short jump instead of a long one.
    As that jump is dynamically converted to jump to the trace code to
    do function graph tracing, and that conversion expects a long jump,
    it can corrupt the ftrace_stub itself (it's directly after that
    call).  One way to prevent gcc from using a short jump is to
    declare ftrace_stub as a weak function, which we do here to keep
    gcc from optimizing too much"

* tag 'trace-v4.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  ftrace/x86: Set ftrace_stub to weak to prevent gcc from using short jumps to it
  ftrace: Don't disable irqs when taking the tasklist_lock read_lock
  ftracetest: Add instance created, delete, read and enable event test

---

7639dad93a5564579987abded4ec05e3db13659d
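The weak-symbol trick described above is easier to see in a minimal
user-space sketch.  Everything below (demo_stub, the use of
__attribute__((weak)) in C) is illustrative only; the real change, per
the shortlog, is to the x86 ftrace code itself.  The principle is the
same: a weak definition may be replaced by a strong one at link time,
so the toolchain cannot resolve the reference locally and therefore
cannot shorten, inline, or fold the branch to it.

/* Compile with: gcc -O2 -o weakdemo weakdemo.c */
#include <stdio.h>

/* Hypothetical stand-in for ftrace_stub: weak, so a strong definition
 * in another object file could replace it at link time.  noinline
 * keeps the call visible even at -O2. */
void __attribute__((weak, noinline)) demo_stub(void)
{
        /* overridable default: do nothing */
}

int main(void)
{
        /* gcc must emit a real, relocatable call here; it cannot
         * assume it knows the final target of a weak symbol. */
        demo_stub();
        printf("called the (possibly overridden) weak stub\n");
        return 0;
}

In the kernel case the full-length jump matters because ftrace patches
that jump at runtime; a two-byte short encoding would leave the patch
spilling into the bytes of ftrace_stub itself, which sits immediately
after the jump.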
diff --combined kernel/trace/ftrace.c
index a6c8252d7776,a6804823a058..900dbb1efff2
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@@ -1530,19 -1530,7 +1530,19 @@@ static int ftrace_cmp_recs(const void *
          return 0;
  }
  
 -static unsigned long ftrace_location_range(unsigned long start, unsigned long end)
 +/**
 + * ftrace_location_range - return the first address of a traced location
 + *      if it touches the given ip range
 + * @start: start of range to search.
 + * @end: end of range to search (inclusive). @end points to the last byte
 + *      to check.
 + *
 + * Returns rec->ip if the related ftrace location is at least partly within
 + * the given address range.  That is, the first address of the instruction
 + * that is either a NOP or call to the function tracer.  It checks the
 + * ftrace internal tables to determine if the address belongs or not.
 + */
 +unsigned long ftrace_location_range(unsigned long start, unsigned long end)
  {
          struct ftrace_page *pg;
          struct dyn_ftrace *rec;
@@@ -3456,23 -3444,11 +3456,23 @@@ struct ftrace_glob
          int type;
  };
  
 +/*
 + * If symbols in an architecture don't correspond exactly to the user-visible
 + * name of what they represent, it is possible to define this function to
 + * perform the necessary adjustments.
 + */
 +char * __weak arch_ftrace_match_adjust(char *str, const char *search)
 +{
 +        return str;
 +}
 +
  static int ftrace_match(char *str, struct ftrace_glob *g)
  {
          int matched = 0;
          int slen;
  
 +        str = arch_ftrace_match_adjust(str, g->search);
 +
          switch (g->type) {
          case MATCH_FULL:
                  if (strcmp(str, g->search) == 0)
@@@ -5737,7 -5713,6 +5737,6 @@@ static int alloc_retstack_tasklist(stru
  {
          int i;
          int ret = 0;
-         unsigned long flags;
          int start = 0, end = FTRACE_RETSTACK_ALLOC_SIZE;
          struct task_struct *g, *t;
  
@@@ -5753,7 -5728,7 +5752,7 @@@
                  }
          }
  
-         read_lock_irqsave(&tasklist_lock, flags);
+         read_lock(&tasklist_lock);
          do_each_thread(g, t) {
                  if (start == end) {
                          ret = -EAGAIN;
@@@ -5771,7 -5746,7 +5770,7 @@@
          } while_each_thread(g, t);
  unlock:
-         read_unlock_irqrestore(&tasklist_lock, flags);
+         read_unlock(&tasklist_lock);
  free:
          for (i = start; i < end; i++)
                  kfree(ret_stack_list[i]);
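The arch_ftrace_match_adjust() hook added in the second hunk is a
__weak default that returns str unchanged.  As a sketch of the kind of
override an architecture might supply (modeled on powerpc64 ABIv1,
where function entry symbols carry a leading dot, e.g. ".sched_clock"
for "sched_clock"; treat the details as an assumption, not a quote of
the upstream powerpc code):

/*
 * Hypothetical per-arch override of the __weak arch_ftrace_match_adjust()
 * default above; modeled on powerpc64 ABIv1 dot symbols.
 */
char *arch_ftrace_match_adjust(char *str, const char *search)
{
        /*
         * The symbol table entry is ".func" but users filter on "func";
         * skip the dot unless the user's pattern starts with one too.
         */
        if (str[0] == '.' && search[0] != '.')
                return str + 1;

        return str;
}

With an override like this in place, ftrace_match() compares the name
the user actually typed, so writing "func" into set_ftrace_filter
behaves the same on such architectures as it does on x86.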