perf: Reimplement frequency driven sampling
author    Peter Zijlstra <a.p.zijlstra@chello.nl>
          Tue, 26 Jan 2010 17:50:16 +0000 (18:50 +0100)
committer Ingo Molnar <mingo@elte.hu>
          Wed, 27 Jan 2010 07:39:33 +0000 (08:39 +0100)
commit    abd50713944c8ea9e0af5b7bffa0aacae21cc91a
tree      c75a352aa13821a41791877f25d2f048568827b0
parent    ef12a141306c90336a3a10d40213ecd98624d274
perf: Reimplement frequency driven sampling

There was a bug in the old period code that caused intel_pmu_enable_all()
or native_write_msr_safe() to show up quite high in the profiles.

Staring at that code made my head hurt, so I rewrote it in a
hopefully simpler fashion. It's now fully symmetric between tick-driven
and overflow-driven adjustments, and uses less data to boot.
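A rough sketch of what a symmetric adjustment can look like: both the
tick path and the overflow path feed the observed event rate into one
common helper that nudges the sample period toward the target. The
helper name and the /8 damping factor below are illustrative
assumptions, not necessarily the exact patch:

```c
#include <stdint.h>

/* Sketch: nudge the current sample period toward the period implied
 * by the observed event rate. Callable identically from the tick and
 * from the overflow handler, which is what makes it symmetric. */
static uint64_t adjust_period(uint64_t sample_period, uint64_t target_period)
{
    int64_t delta = (int64_t)(target_period - sample_period);

    /* damp the correction so a single noisy measurement cannot
     * swing the period wildly (assumed low-pass factor of 8) */
    delta = (delta + 7) / 8;

    if ((int64_t)sample_period + delta < 1)
        return 1;               /* never let the period collapse to 0 */

    return sample_period + delta;
}
```

Because the same damping applies in both directions, neither path can
over-correct relative to the other.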

The only complication is that it basically wants to do a u128 division.
The code approximates that in a rather simple truncate-until-it-fits
fashion, taking care to balance the terms while truncating.
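The idea can be sketched like this (a simplification under assumed
names, not the exact kernel code): the wanted period is
count * 10^9 / (nsec * sample_freq), and both products can overflow
64 bits, so we halve one factor on each side until both products fit.
Dropping bits from dividend and divisor in lockstep keeps the quotient
approximately unchanged:

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Position of the highest set bit, 1-based; 0 for v == 0
 * (a stand-in for the kernel's fls64()). */
static int fls64_(uint64_t v)
{
    int n = 0;
    while (v) { n++; v >>= 1; }
    return n;
}

/* Sketch: period = (count * NSEC_PER_SEC) / (nsec * freq),
 * approximated without 128-bit arithmetic. */
static uint64_t calc_period(uint64_t nsec, uint64_t count, uint64_t freq)
{
    uint64_t sec = NSEC_PER_SEC;

    /* truncate until both products fit in 64 bits */
    while (fls64_(count) + fls64_(sec) > 64 ||
           fls64_(nsec) + fls64_(freq) > 64) {
        /* halve the larger factor of each product, so the terms
         * stay balanced and as little precision as possible is lost */
        if (count > sec) count >>= 1; else sec >>= 1;
        if (nsec > freq) nsec >>= 1; else freq >>= 1;
    }

    uint64_t dividend = count * sec;
    uint64_t divisor  = nsec * freq;

    return divisor ? dividend / divisor : 0;
}
```

For example, 10^6 events observed over one second with a 1000 Hz
target frequency yields a period of 1000 events per sample.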

This version does not generate that sampling artefact.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
include/linux/perf_event.h
kernel/perf_event.c