sched/rt: Make update_curr_rt() more accurate
authorWen Yang <wen.yang99@zte.com.cn>
Mon, 5 Feb 2018 03:18:41 +0000 (11:18 +0800)
committerIngo Molnar <mingo@kernel.org>
Tue, 6 Feb 2018 09:20:34 +0000 (10:20 +0100)
rq->clock_task may be updated between the two calls to
rq_clock_task() in update_curr_rt(). Calling rq_clock_task() only
once makes the accounting more accurate and avoids a redundant read,
following the pattern already used in update_curr().

Signed-off-by: Wen Yang <wen.yang99@zte.com.cn>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Jiang Biao <jiang.biao2@zte.com.cn>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: zhong.weidong@zte.com.cn
Link: http://lkml.kernel.org/r/1517800721-42092-1-git-send-email-wen.yang99@zte.com.cn
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/rt.c

index 89a086ed2b16896525708981963b88515571fc0f..663b2355a3aa772d8bcc8c90b55a3e0e0e3a6e17 100644 (file)
@@ -950,12 +950,13 @@ static void update_curr_rt(struct rq *rq)
 {
        struct task_struct *curr = rq->curr;
        struct sched_rt_entity *rt_se = &curr->rt;
+       u64 now = rq_clock_task(rq);
        u64 delta_exec;
 
        if (curr->sched_class != &rt_sched_class)
                return;
 
-       delta_exec = rq_clock_task(rq) - curr->se.exec_start;
+       delta_exec = now - curr->se.exec_start;
        if (unlikely((s64)delta_exec <= 0))
                return;
 
@@ -968,7 +969,7 @@ static void update_curr_rt(struct rq *rq)
        curr->se.sum_exec_runtime += delta_exec;
        account_group_exec_runtime(curr, delta_exec);
 
-       curr->se.exec_start = rq_clock_task(rq);
+       curr->se.exec_start = now;
        cgroup_account_cputime(curr, delta_exec);
 
        sched_rt_avg_update(rq, delta_exec);