perf_counter: Fix throttling lock-up
author	Ingo Molnar <mingo@elte.hu>
	Wed, 3 Jun 2009 20:19:36 +0000 (22:19 +0200)
committer	Ingo Molnar <mingo@elte.hu>
	Wed, 3 Jun 2009 21:39:51 +0000 (23:39 +0200)
The throttling logic is broken and we can lock up with too-small
hw sampling intervals.

Make the throttling code more robust: disable the counter even
if we already disabled it, in case a racing sched-in re-enabled
it (see the sketch below).

( Also clean up whitespace damage I noticed while reading
  various pieces of code related to throttling. )
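
To make the race concrete, here is a minimal user-space model of the
throttling state machine. This is an illustrative sketch, not kernel
code: only MAX_INTERRUPTS, HZ, the sysctl_perf_counter_limit
comparison and the branch structure mirror the patch, while
struct hw_state, overflow() and the main() driver are hypothetical
stand-ins:

  #include <stdio.h>

  #define MAX_INTERRUPTS	(~0UL)
  #define HZ			1000UL

  static unsigned long sysctl_perf_counter_limit = 100000; /* irqs/sec cap */

  struct hw_state {
  	unsigned long	interrupts;
  	int		enabled;
  };

  /* Returns 1 when the caller must (re-)disable the counter. */
  static int overflow(struct hw_state *hw, int throttle)
  {
  	int ret = 0;

  	if (!throttle) {
  		hw->interrupts++;
  	} else {
  		if (hw->interrupts != MAX_INTERRUPTS) {
  			hw->interrupts++;
  			if (HZ * hw->interrupts > sysctl_perf_counter_limit) {
  				hw->interrupts = MAX_INTERRUPTS;
  				ret = 1;
  			}
  		} else {
  			/*
  			 * Already throttled: request a disable again, in
  			 * case a sched-in raced with us and re-enabled
  			 * the counter. The old 'else if' form never
  			 * reached this point.
  			 */
  			ret = 1;
  		}
  	}

  	return ret;
  }

  int main(void)
  {
  	struct hw_state hw = { .interrupts = 0, .enabled = 1 };
  	int i;

  	for (i = 0; i < 200; i++) {
  		if (overflow(&hw, 1))
  			hw.enabled = 0;
  		if (i == 100)	/* simulated racing sched-in */
  			hw.enabled = 1;
  	}

  	printf("enabled=%d (0: the re-disable won the race)\n", hw.enabled);

  	return 0;
  }

With the previous 'else if (counter->hw.interrupts != MAX_INTERRUPTS)'
form, the already-throttled branch was unreachable: a counter that a
concurrent sched-in re-enabled was never disabled again and kept
interrupting at the too-small sampling interval - the lock-up this
patch fixes.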

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
arch/x86/kernel/cpu/perf_counter.c
kernel/perf_counter.c

index 12cc05ed9f4886953e9f2b37661a1b6ec3c121fa..8f53f3a7da29e23fad6eb7f7896cf0d58bcf9f62 100644
@@ -91,7 +91,7 @@ static u64 intel_pmu_raw_event(u64 event)
 #define CORE_EVNTSEL_INV_MASK          0x00800000ULL
 #define CORE_EVNTSEL_COUNTER_MASK      0xFF000000ULL
 
-#define CORE_EVNTSEL_MASK              \
+#define CORE_EVNTSEL_MASK              \
        (CORE_EVNTSEL_EVENT_MASK |      \
         CORE_EVNTSEL_UNIT_MASK  |      \
         CORE_EVNTSEL_EDGE_MASK  |      \
index ab4455447f84411d475f2ad852b604aa26d7f212..0bb03f15a5b67c6c594c93196d8260ca09cbd8e1 100644
@@ -2822,11 +2822,20 @@ int perf_counter_overflow(struct perf_counter *counter,
 
        if (!throttle) {
                counter->hw.interrupts++;
-       } else if (counter->hw.interrupts != MAX_INTERRUPTS) {
-               counter->hw.interrupts++;
-               if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
-                       counter->hw.interrupts = MAX_INTERRUPTS;
-                       perf_log_throttle(counter, 0);
+       } else {
+               if (counter->hw.interrupts != MAX_INTERRUPTS) {
+                       counter->hw.interrupts++;
+                       if (HZ*counter->hw.interrupts > (u64)sysctl_perf_counter_limit) {
+                               counter->hw.interrupts = MAX_INTERRUPTS;
+                               perf_log_throttle(counter, 0);
+                               ret = 1;
+                       }
+               } else {
+                       /*
+                        * Keep re-disabling counters even though on the previous
+                        * pass we disabled it - just in case we raced with a
+                        * sched-in and the counter got enabled again:
+                        */
                        ret = 1;
                }
        }