sched/balancing: Rename trigger_load_balance() => sched_balance_trigger()
author Ingo Molnar <mingo@kernel.org>
Fri, 8 Mar 2024 11:18:09 +0000 (12:18 +0100)
committer Ingo Molnar <mingo@kernel.org>
Tue, 12 Mar 2024 10:59:59 +0000 (11:59 +0100)
Standardize scheduler load-balancing function names on the
sched_balance_() prefix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Shrikanth Hegde <sshegde@linux.ibm.com>
Link: https://lore.kernel.org/r/20240308111819.1101550-4-mingo@kernel.org
Documentation/scheduler/sched-domains.rst
Documentation/translations/zh_CN/scheduler/sched-domains.rst
kernel/sched/core.c
kernel/sched/fair.c
kernel/sched/sched.h

index 541d6c617971bdae0d8cf74ccf87e84064ec73b3..c7ea05f4107bc072903005d0d9bed230932eb2a4 100644 (file)
@@ -31,7 +31,7 @@ is treated as one entity. The load of a group is defined as the sum of the
 load of each of its member CPUs, and only when the load of a group becomes
 out of balance are tasks moved between groups.
 
-In kernel/sched/core.c, trigger_load_balance() is run periodically on each CPU
+In kernel/sched/core.c, sched_balance_trigger() is run periodically on each CPU
 through sched_tick(). It raises a softirq after the next regularly scheduled
 rebalancing event for the current runqueue has arrived. The actual load
 balancing workhorse, sched_balance_softirq()->rebalance_domains(), is then run
index fa0c0bcc6ba54580f842433833468f3e1bf33dca..1a8587a971f9ee918bb17e7d955e69f9b236915a 100644 (file)
@@ -34,7 +34,7 @@ CPU共享。任意两个组的CPU掩码的交集不一定为空,如果是这
 调度域中的负载均衡发生在调度组中。也就是说,每个组被视为一个实体。组的负载被定义为它
 管辖的每个CPU的负载之和。仅当组的负载不均衡后,任务才在组之间发生迁移。
 
-在kernel/sched/core.c中,trigger_load_balance()在每个CPU上通过sched_tick()
+在kernel/sched/core.c中,sched_balance_trigger()在每个CPU上通过sched_tick()
 周期执行。在当前运行队列下一个定期调度再平衡事件到达后,它引发一个软中断。负载均衡真正
 的工作由sched_balance_softirq()->rebalance_domains()完成,在软中断上下文中执行
 (SCHED_SOFTIRQ)。
index 71b7a08a6502885f8b0e5b02ba055ef5b396a4b0..929fce69f555e804e6403c49d9830e3f501ec30d 100644 (file)
@@ -5700,7 +5700,7 @@ void sched_tick(void)
 
 #ifdef CONFIG_SMP
        rq->idle_balance = idle_cpu(cpu);
-       trigger_load_balance(rq);
+       sched_balance_trigger(rq);
 #endif
 }
 
index 953f39deb68e50bed1088664cba49cb7aa814a4a..e377b675920a43cab8c753466455ec213fb57a3b 100644 (file)
@@ -12438,7 +12438,7 @@ static __latent_entropy void sched_balance_softirq(struct softirq_action *h)
 /*
  * Trigger the SCHED_SOFTIRQ if it is time to do periodic load balancing.
  */
-void trigger_load_balance(struct rq *rq)
+void sched_balance_trigger(struct rq *rq)
 {
        /*
         * Don't need to rebalance while attached to NULL domain or
index d2242679239ec5ad49152400350882d7d7b9819c..5b0ddb0e60170ea860dfbd6249e46f1294985a8c 100644 (file)
@@ -2397,7 +2397,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 
 extern void update_group_capacity(struct sched_domain *sd, int cpu);
 
-extern void trigger_load_balance(struct rq *rq);
+extern void sched_balance_trigger(struct rq *rq);
 
 extern void set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx);