tracing: Avoid possible softlockup in tracing_iter_reset()
author     Zheng Yejian <zhengyejian@huaweicloud.com>
           Tue, 27 Aug 2024 12:46:54 +0000 (20:46 +0800)
committer  Steven Rostedt (Google) <rostedt@goodmis.org>
           Thu, 5 Sep 2024 14:18:48 +0000 (10:18 -0400)
In __tracing_open(), when a max-latency tracer is active on a CPU,
the start time of that CPU's buffer is updated, and event entries with
timestamps earlier than the buffer's start time are then skipped
(see tracing_iter_reset()).

A softlockup can occur if the kernel is non-preemptible and too many
entries are skipped in the loop that resets each cpu buffer, so add
cond_resched() to avoid it.

Cc: stable@vger.kernel.org
Fixes: 2f26ebd549b9a ("tracing: use timestamp to determine start of latency traces")
Link: https://lore.kernel.org/20240827124654.3817443-1-zhengyejian@huaweicloud.com
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Zheng Yejian <zhengyejian@huaweicloud.com>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
kernel/trace/trace.c

index ebe7ce2f5f4a50f9402da8978e89b7db66592079..edf6bc817aa123011597c2ce267edada99db46d1 100644 (file)
@@ -3958,6 +3958,8 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
                        break;
                entries++;
                ring_buffer_iter_advance(buf_iter);
+               /* This could be a big loop */
+               cond_resched();
        }
 
        per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;