perf/x86: Fix PEBS threshold initialization
author Jiri Olsa <jolsa@kernel.org>
Thu, 18 Aug 2016 09:09:52 +0000 (11:09 +0200)
committer Ingo Molnar <mingo@kernel.org>
Thu, 18 Aug 2016 09:58:02 +0000 (11:58 +0200)
The latest PEBS rework could skip initialization of
ds->pebs_interrupt_threshold for single-event PEBS
threshold events.

Make sure the PEBS threshold always gets initialized.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 09e61b4f7849 ("perf/x86/intel: Rework the large PEBS setup code")
Link: http://lkml.kernel.org/r/1471511392-29875-1-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/events/intel/ds.c

index 248023f54c87fd3d2c8dc5508cd1e4179770a3fb..e0288d555367e83f925b697cc29312d8c1152dcc 100644
@@ -834,14 +834,24 @@ static inline void pebs_update_threshold(struct cpu_hw_events *cpuc)
 static void
 pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc, struct pmu *pmu)
 {
+       /*
+        * Make sure we get updated with the first PEBS
+        * event. It will trigger also during removal, but
+        * that does not hurt:
+        */
+       bool update = cpuc->n_pebs == 1;
+
        if (needed_cb != pebs_needs_sched_cb(cpuc)) {
                if (!needed_cb)
                        perf_sched_cb_inc(pmu);
                else
                        perf_sched_cb_dec(pmu);
 
-               pebs_update_threshold(cpuc);
+               update = true;
        }
+
+       if (update)
+               pebs_update_threshold(cpuc);
 }
 
 void intel_pmu_pebs_add(struct perf_event *event)
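
For illustration only, here is a minimal user-space sketch of the fixed
control flow (not the actual kernel code): the struct fields and the
pebs_needs_sched_cb()/pebs_update_threshold() bodies are simplified
stand-ins, and the perf_sched_cb_inc()/dec() bookkeeping is omitted. It
shows that the threshold update now also fires when the first PEBS event
is added, even if the scheduling-callback requirement does not change.

/* Sketch of the fixed pebs_update_state() logic -- stand-in types/stubs. */
#include <stdbool.h>
#include <stdio.h>

struct cpu_hw_events {
	int n_pebs;		/* number of PEBS events scheduled in */
	int n_large_pebs;	/* events eligible for the large PEBS buffer */
};

/* Stub: decides whether a context-switch callback is needed. */
static bool pebs_needs_sched_cb(struct cpu_hw_events *cpuc)
{
	return cpuc->n_pebs && (cpuc->n_pebs == cpuc->n_large_pebs);
}

/* Stub: stands in for reprogramming ds->pebs_interrupt_threshold. */
static void pebs_update_threshold(struct cpu_hw_events *cpuc)
{
	printf("threshold reprogrammed (n_pebs=%d)\n", cpuc->n_pebs);
}

static void pebs_update_state(bool needed_cb, struct cpu_hw_events *cpuc)
{
	/* Always refresh when the first PEBS event shows up ... */
	bool update = cpuc->n_pebs == 1;

	/* ... and whenever the sched-callback requirement flips. */
	if (needed_cb != pebs_needs_sched_cb(cpuc))
		update = true;

	if (update)
		pebs_update_threshold(cpuc);
}

int main(void)
{
	struct cpu_hw_events cpuc = { 0 };
	bool needed_cb;

	/*
	 * Add a single, non-large-PEBS event: needed_cb does not change,
	 * yet the threshold is still initialized thanks to the fix.
	 */
	needed_cb = pebs_needs_sched_cb(&cpuc);
	cpuc.n_pebs++;
	pebs_update_state(needed_cb, &cpuc);
	return 0;
}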