sched: Change cfs_rq load avg to unsigned long
author		Alex Shi <alex.shi@intel.com>
		Thu, 20 Jun 2013 02:18:53 +0000 (10:18 +0800)
committer	Ingo Molnar <mingo@kernel.org>
		Thu, 27 Jun 2013 08:07:38 +0000 (10:07 +0200)
commit		72a4cf20cb71a327c636c7042fdacc25abffc87c
tree		55679cadceb7ddf931f0c56a65e8eb031acd769d
parent		a003a25b227d59ded9197ced109517f037d01c27
sched: Change cfs_rq load avg to unsigned long

Since the 'u64 runnable_load_avg, blocked_load_avg' members of struct
cfs_rq are never larger than the 'unsigned long' cfs_rq->load.weight,
we don't need u64 variables to hold them; 'unsigned long' is more
efficient and convenient.
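
For illustration, a minimal sketch of the core type change in
kernel/sched/sched.h (field names are from this commit; all other
members of struct cfs_rq are elided):

	struct cfs_rq {
		/* ... other members elided ... */

		/*
		 * Per-entity load-tracking sums. Both are bounded by
		 * cfs_rq->load.weight, itself an unsigned long, so the
		 * previous 'u64 runnable_load_avg, blocked_load_avg;'
		 * was wider than necessary.
		 */
		unsigned long runnable_load_avg, blocked_load_avg;
	};

On 32-bit targets 'unsigned long' is 32 bits, so the change also avoids
64-bit arithmetic in the load-tracking hot paths.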

Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Paul Turner <pjt@google.com>
Tested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1371694737-29336-10-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/debug.c
kernel/sched/fair.c
kernel/sched/sched.h
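
The files above carry the follow-on adjustments; for example, the debug
output that dumps these fields moves from the 64-bit to the long
conversion specifier. A sketch of the adjusted prints in
kernel/sched/debug.c (exact context in the tree may differ):

	SEQ_printf(m, "  .%-30s: %ld\n", "runnable_load_avg",
			cfs_rq->runnable_load_avg);
	SEQ_printf(m, "  .%-30s: %ld\n", "blocked_load_avg",
			cfs_rq->blocked_load_avg);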