locking/lockdep: Update comment
author Yuyang Du <duyuyang@gmail.com>
Mon, 6 May 2019 08:19:27 +0000 (16:19 +0800)
committer Ingo Molnar <mingo@kernel.org>
Mon, 3 Jun 2019 09:55:44 +0000 (11:55 +0200)
Remove a leftover comment and, while at it, add more explanatory
comments. Such a trivial patch!

Signed-off-by: Yuyang Du <duyuyang@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bvanassche@acm.org
Cc: frederic@kernel.org
Cc: ming.lei@redhat.com
Cc: will.deacon@arm.com
Link: https://lkml.kernel.org/r/20190506081939.74287-12-duyuyang@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/locking/lockdep.c

index 6cf14c84eb6d7aa3699b5497f90812e50758f031..a9799f9ed093ca5282125ce24c26d17718596224 100644
@@ -2811,10 +2811,16 @@ static int validate_chain(struct task_struct *curr,
                 * - is softirq-safe, if this lock is hardirq-unsafe
                 *
                 * And check whether the new lock's dependency graph
-                * could lead back to the previous lock.
+                * could lead back to the previous lock:
                 *
-                * any of these scenarios could lead to a deadlock. If
-                * All validations
+                * - within the current held-lock stack
+                * - across our accumulated lock dependency records
+                *
+                * any of these scenarios could lead to a deadlock.
+                */
+               /*
+                * The simple case: does the current hold the same lock
+                * already?
                 */
                int ret = check_deadlock(curr, hlock, hlock->read);
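
Below is a minimal userspace sketch (not part of the patch) of the "simple
case" the new comment describes: the current task tries to acquire a lock it
already holds. It uses a POSIX error-checking mutex as a stand-in for a
kernel lock, so the second acquisition is reported as EDEADLK instead of
hanging, analogous in spirit to check_deadlock() flagging the recursive
acquisition at validation time.

#define _XOPEN_SOURCE 700

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_mutex_t m;

	/* ERRORCHECK mutexes detect re-acquisition by the same thread. */
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&m, &attr);

	pthread_mutex_lock(&m);

	/* Second acquisition by the same thread: the AA deadlock case. */
	if (pthread_mutex_lock(&m) == EDEADLK)
		printf("self-deadlock detected (AA recursion)\n");

	pthread_mutex_unlock(&m);
	pthread_mutex_destroy(&m);
	pthread_mutexattr_destroy(&attr);
	return 0;
}

Build with e.g. "gcc -pthread". This is only an illustrative analogy; lockdep
additionally checks the irq-safe/irq-unsafe inversions and the dependency
graph cycles listed in the comment above, which have no such simple
userspace counterpart.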