.. SPDX-License-Identifier: GPL-2.0

==========================
RCU Torture Test Operation
==========================


CONFIG_RCU_TORTURE_TEST
=======================

The CONFIG_RCU_TORTURE_TEST config option is available for all RCU
implementations. It creates an rcutorture kernel module that can
be loaded to run a torture test. The test periodically outputs
status messages via printk(), which can be examined via the dmesg
command (perhaps grepping for "torture"). The test is started
when the module is loaded, and stops when the module is unloaded.

Module parameters are prefixed by "rcutorture." in
Documentation/admin-guide/kernel-parameters.txt.
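
For example, on a kernel built with CONFIG_RCU_TORTURE_TEST=m, a few of
the module parameters shown in the statistics output below might be set
at load time along the following lines (the parameter values here are
purely illustrative)::

  modprobe rcutorture nreaders=16 stat_interval=30 test_boost=1

When set on the kernel command line instead, the same parameters take
the "rcutorture." prefix, for example "rcutorture.stat_interval=30".
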

Output
======

The statistics output is as follows::

  rcu-torture:--- Start of test: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4
  rcu-torture: rtc: (null) ver: 155441 tfle: 0 rta: 155441 rtaf: 8884 rtf: 155440 rtmbe: 0 rtbe: 0 rtbke: 0 rtbre: 0 rtbf: 0 rtb: 0 nt: 3055767
  rcu-torture: Reader Pipe: 727860534 34213 0 0 0 0 0 0 0 0 0
  rcu-torture: Reader Batch: 727877838 17003 0 0 0 0 0 0 0 0 0
  rcu-torture: Free-Block Circulation: 155440 155440 155440 155440 155440 155440 155440 155440 155440 155440 0
  rcu-torture:--- End of test: SUCCESS: nreaders=16 nfakewriters=4 stat_interval=30 verbose=0 test_no_idle_hz=1 shuffle_interval=3 stutter=5 irqreader=1 fqs_duration=0 fqs_holdoff=0 fqs_stutter=3 test_boost=1/0 test_boost_interval=7 test_boost_duration=4

The command "dmesg | grep torture:" will extract this information on
most systems. On more esoteric configurations, it may be necessary to
use other commands to access the output of the printk()s used by
the RCU torture test. The printk()s use KERN_ALERT, so they should
be evident. ;-)

The first and last lines show the rcutorture module parameters, and the
last line shows either "SUCCESS" or "FAILURE", based on rcutorture's
automatic determination as to whether RCU operated correctly.

The entries are as follows:

* "rtc": The hexadecimal address of the structure currently visible
  to readers.

* "ver": The number of times since boot that the RCU writer task
  has changed the structure visible to readers.

* "tfle": If non-zero, indicates that the "torture freelist"
  containing structures to be placed into the "rtc" area is empty.
  This condition is important, since it can fool you into thinking
  that RCU is working when it is not. :-/

* "rta": Number of structures allocated from the torture freelist.

* "rtaf": Number of allocations from the torture freelist that have
  failed due to the list being empty. It is not unusual for this
  to be non-zero, but it is bad for it to be a large fraction of
  the value indicated by "rta".

* "rtf": Number of frees into the torture freelist.

* "rtmbe": A non-zero value indicates that rcutorture believes that
  rcu_assign_pointer() and rcu_dereference() are not working
  correctly. This value should be zero.

* "rtbe": A non-zero value indicates that one of the rcu_barrier()
  family of functions is not working correctly.

* "rtbke": rcutorture was unable to create the real-time kthreads
  used to force RCU priority inversion. This value should be zero.

* "rtbre": Although rcutorture successfully created the kthreads
  used to force RCU priority inversion, it was unable to set them
  to the real-time priority level of 1. This value should be zero.

* "rtbf": The number of times that RCU priority boosting failed
  to resolve RCU priority inversion.

* "rtb": The number of times that rcutorture attempted to force
  an RCU priority inversion condition. If you are testing RCU
  priority boosting via the "test_boost" module parameter, this
  value should be non-zero.

* "nt": The number of times rcutorture ran RCU read-side code from
  within a timer handler. This value should be non-zero only
  if you specified the "irqreader" module parameter.

* "Reader Pipe": Histogram of "ages" of structures seen by readers.
  If any entries past the first two are non-zero, RCU is broken.
  And rcutorture prints the error flag string "!!!" to make sure
  you notice. The age of a newly allocated structure is zero,
  it becomes one when removed from reader visibility, and is
  incremented once per grace period subsequently -- and is freed
  after passing through (RCU_TORTURE_PIPE_LEN-2) grace periods.

  The output displayed above was taken from a correctly working
  RCU. If you want to see what it looks like when broken, break
  it yourself. ;-)

* "Reader Batch": Another histogram of "ages" of structures seen
  by readers, but in terms of counter flips (or batches) rather
  than in terms of grace periods. The legal number of non-zero
  entries is again two. The reason for this separate view is that
  it is sometimes easier to get the third entry to show up in the
  "Reader Batch" list than in the "Reader Pipe" list.

* "Free-Block Circulation": Shows the number of torture structures
  that have reached a given point in the pipeline. The first element
  should closely correspond to the number of structures allocated,
  the second to the number that have been removed from reader view,
  and all but the last remaining to the corresponding number of
  passes through a grace period. The last entry should be zero,
  as it is only incremented if a torture structure's counter
  somehow gets incremented farther than it should.

Different implementations of RCU can provide implementation-specific
additional information. For example, Tree SRCU provides the following
additional line::

  srcud-torture: Tree SRCU per-CPU(idx=0): 0(35,-21) 1(-4,24) 2(1,1) 3(-26,20) 4(28,-47) 5(-9,4) 6(-10,14) 7(-14,11) T(1,6)

This line shows the per-CPU counter state, in this case for Tree SRCU
using a dynamically allocated srcu_struct (hence "srcud-" rather than
"srcu-"). The numbers in parentheses are the values of the "old" and
"current" counters for the corresponding CPU. The "idx" value maps the
"old" and "current" values to the underlying array, and is useful for
debugging. The final "T" entry contains the totals of the counters.

Usage on Specific Kernel Builds
===============================

It is sometimes desirable to torture RCU on a specific kernel build,
for example, when preparing to put that kernel build into production.
In that case, the kernel should be built with CONFIG_RCU_TORTURE_TEST=m
so that the test can be started using modprobe and terminated using rmmod.

For example, the following script may be used to torture RCU::

  #!/bin/sh

  modprobe rcutorture
  sleep 3600
  rmmod rcutorture
  dmesg | grep torture:

The output can be manually inspected for the error flag of "!!!".
One could of course create a more elaborate script that automatically
checked for such errors. The "rmmod" command forces a "SUCCESS",
"FAILURE", or "RCU_HOTPLUG" indication to be printk()ed. The first
two are self-explanatory, while the last indicates that while there
were no RCU failures, CPU-hotplug problems were detected.
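
One such script might look like the following sketch, which keys on the
"!!!" error flag and on the "End of test: SUCCESS" line shown in the
sample output above (the checks shown are illustrative rather than
exhaustive)::

  #!/bin/sh

  modprobe rcutorture
  sleep 3600
  rmmod rcutorture
  # The error flag indicates that rcutorture detected a problem.
  if dmesg | grep torture: | grep -q '!!!'
  then
        echo rcutorture error flag detected
        exit 1
  fi
  # rmmod forces the SUCCESS/FAILURE/RCU_HOTPLUG indication to be printed.
  if dmesg | grep torture: | grep -q 'End of test: SUCCESS'
  then
        echo rcutorture reports SUCCESS
  else
        echo rcutorture did not report SUCCESS
        exit 1
  fi
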


Usage on Mainline Kernels
=========================

When using rcutorture to test changes to RCU itself, it is often
necessary to build a number of kernels in order to test that change
across a broad range of combinations of the relevant Kconfig options
and of the relevant kernel boot parameters. In this situation, use
of modprobe and rmmod can be quite time-consuming and error-prone.

Therefore, the tools/testing/selftests/rcutorture/bin/kvm.sh
script is available for mainline testing for x86, arm64, and
powerpc. By default, it will run the series of tests specified by
tools/testing/selftests/rcutorture/configs/rcu/CFLIST, with each test
running for 30 minutes within a guest OS using a minimal userspace
supplied by an automatically generated initrd. After the tests are
complete, the resulting build products and console output are analyzed
for errors and the results of the runs are summarized.
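
For example, a plain default run, typically invoked from the top level
of a kernel source tree, might look as follows, with the output captured
for later inspection ("kvm.out" is just a placeholder file name)::

  tools/testing/selftests/rcutorture/bin/kvm.sh 2>&1 | tee kvm.out
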

On larger systems, rcutorture testing can be accelerated by passing the
--cpus argument to kvm.sh. For example, on a 64-CPU system, "--cpus 43"
would use up to 43 CPUs to run tests concurrently, which as of v5.4 would
complete all the scenarios in two batches, reducing the time to complete
from about eight hours to about one hour (not counting the time to build
the sixteen kernels). The "--dryrun sched" argument will not run tests,
but rather tell you how the tests would be scheduled into batches. This
can be useful when working out how many CPUs to specify in the --cpus
argument.
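
For example, the batching that would result from the "--cpus 43" choice
above can be previewed without running anything::

  kvm.sh --cpus 43 --dryrun sched
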

Not all changes require that all scenarios be run. For example, a change
to Tree SRCU might run only the SRCU-N and SRCU-P scenarios using the
--configs argument to kvm.sh as follows: "--configs 'SRCU-N SRCU-P'".
Large systems can run multiple copies of the full set of scenarios,
for example, a system with 448 hardware threads can run five instances
of the full set concurrently. To make this happen::

  kvm.sh --cpus 448 --configs '5*CFLIST'

Alternatively, such a system can run 56 concurrent instances of a single
eight-CPU scenario::

  kvm.sh --cpus 448 --configs '56*TREE04'

Or 28 concurrent instances of each of two eight-CPU scenarios::

  kvm.sh --cpus 448 --configs '28*TREE03 28*TREE04'

Of course, each concurrent instance will use memory, which can be
limited using the --memory argument, which defaults to 512M. Small
values for memory may require disabling the callback-flooding tests
using the --bootargs parameter discussed below.

Sometimes additional debugging is useful, and in such cases the --kconfig
parameter to kvm.sh may be used, for example, ``--kconfig 'CONFIG_RCU_EQS_DEBUG=y'``.
In addition, there are the --gdb, --kasan, and --kcsan parameters.
Note that --gdb limits you to one scenario per kvm.sh run and requires
that you have another window open from which to run ``gdb`` as instructed
by the script.
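
For example, one plausible debugging-oriented run of a single scenario
might combine these parameters as follows (the particular combination
shown here is illustrative only)::

  kvm.sh --configs TREE03 --kconfig 'CONFIG_RCU_EQS_DEBUG=y' --kcsan
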

Kernel boot arguments can also be supplied, for example, to control
rcutorture's module parameters. For example, to test a change to RCU's
CPU stall-warning code, use "--bootargs 'rcutorture.stall_cpu=30'".
This will of course result in the scripting reporting a failure, namely
the resulting RCU CPU stall warning. As noted above, reducing memory may
require disabling rcutorture's callback-flooding tests::

  kvm.sh --cpus 448 --configs '56*TREE04' --memory 128M \
          --bootargs 'rcutorture.fwd_progress=0'

Sometimes all that is needed is a full set of kernel builds. This is
what the --buildonly parameter does.
9671f30e 227
0c208a79
PM
228The --duration parameter can override the default run time of 30 minutes.
229For example, ``--duration 2d`` would run for two days, ``--duration 3h``
230would run for three hours, ``--duration 5m`` would run for five minutes,
231and ``--duration 45s`` would run for 45 seconds. This last can be useful
232for tracking down rare boot-time failures.
233
234Finally, the --trust-make parameter allows each kernel build to reuse what
235it can from the previous kernel build. Please note that without the
236--trust-make parameter, your tags files may be demolished.
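
For example, a quick iterative run of a single scenario might combine
several of the above parameters along these lines (again, the specific
values are illustrative)::

  kvm.sh --configs TREE04 --duration 5m --trust-make
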

There are additional, more arcane arguments that are documented in the
source code of the kvm.sh script.

If a run contains failures, the number of buildtime and runtime failures
is listed at the end of the kvm.sh output, which you really should redirect
to a file. The build products and console output of each run are kept in
tools/testing/selftests/rcutorture/res in timestamped directories. A
given directory can be supplied to kvm-find-errors.sh in order to have
it cycle you through summaries of errors and full error logs. For example::

  tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
          tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23

However, it is often more convenient to access the files directly.
Files pertaining to all scenarios in a run reside in the top-level
directory (2020.01.20-15.54.23 in the example above), while per-scenario
files reside in a subdirectory named after the scenario (for example,
"TREE04"). If a given scenario ran more than once (as in "--configs
'56*TREE04'" above), the directories corresponding to the second and
subsequent runs of that scenario include a sequence number, for example,
"TREE04.2", "TREE04.3", and so on.
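
As an illustrative sketch (not an exhaustive listing), the layout of a
run's results directory might therefore look like this::

  tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23/
    ...          # run-wide files, described below
    TREE04/      # first instance of the TREE04 scenario
    TREE04.2/    # second instance of the same scenario
    TREE04.3/    # third instance of the same scenario
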

The most frequently used file in the top-level directory is testid.txt.
If the test ran in a git repository, then this file contains the commit
that was tested and any uncommitted changes in diff format.

The most frequently used files in each per-scenario-run directory are:

.config:
  This file contains the Kconfig options.

Make.out:
  This contains build output for a specific scenario.

console.log:
  This contains the console output for a specific scenario.
  This file may be examined once the kernel has booted, but
  it might not exist if the build failed.

vmlinux:
  This contains the kernel, which can be useful with tools like
  objdump and gdb.
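
For example, assuming the TREE04 scenario directory from the run shown
above, the kernel might be examined along the following lines::

  cd tools/testing/selftests/rcutorture/res/2020.01.20-15.54.23/TREE04
  objdump -d vmlinux | less
  gdb vmlinux
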

A number of additional files are available, but are less frequently used.
Many are intended for debugging of rcutorture itself or of its scripting.

As of v5.4, a successful run with the default set of scenarios produces
the following summary at the end of the run on a 12-CPU system::

  SRCU-N ------- 804233 GPs (148.932/s) [srcu: g10008272 f0x0 ]
  SRCU-P ------- 202320 GPs (37.4667/s) [srcud: g1809476 f0x0 ]
  SRCU-t ------- 1122086 GPs (207.794/s) [srcu: g0 f0x0 ]
  SRCU-u ------- 1111285 GPs (205.794/s) [srcud: g1 f0x0 ]
  TASKS01 ------- 19666 GPs (3.64185/s) [tasks: g0 f0x0 ]
  TASKS02 ------- 20541 GPs (3.80389/s) [tasks: g0 f0x0 ]
  TASKS03 ------- 19416 GPs (3.59556/s) [tasks: g0 f0x0 ]
  TINY01 ------- 836134 GPs (154.84/s) [rcu: g0 f0x0 ] n_max_cbs: 34198
  TINY02 ------- 850371 GPs (157.476/s) [rcu: g0 f0x0 ] n_max_cbs: 2631
  TREE01 ------- 162625 GPs (30.1157/s) [rcu: g1124169 f0x0 ]
  TREE02 ------- 333003 GPs (61.6672/s) [rcu: g2647753 f0x0 ] n_max_cbs: 35844
  TREE03 ------- 306623 GPs (56.782/s) [rcu: g2975325 f0x0 ] n_max_cbs: 1496497
  CPU count limited from 16 to 12
  TREE04 ------- 246149 GPs (45.5831/s) [rcu: g1695737 f0x0 ] n_max_cbs: 434961
  TREE05 ------- 314603 GPs (58.2598/s) [rcu: g2257741 f0x2 ] n_max_cbs: 193997
  TREE07 ------- 167347 GPs (30.9902/s) [rcu: g1079021 f0x0 ] n_max_cbs: 478732
  CPU count limited from 16 to 12
  TREE09 ------- 752238 GPs (139.303/s) [rcu: g13075057 f0x0 ] n_max_cbs: 99011


Repeated Runs
=============

Suppose that you are chasing down a rare boot-time failure. Although you
could use kvm.sh, doing so will rebuild the kernel on each run. If you
need (say) 1,000 runs to have confidence that you have fixed the bug,
these pointless rebuilds can become extremely annoying.

This is why kvm-again.sh exists.

Suppose that a previous kvm.sh run left its output in this directory::

  tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

Then this run can be re-run without rebuilding as follows::

  kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28

A few of the original run's kvm.sh parameters may be overridden, perhaps
most notably --duration and --bootargs. For example::

  kvm-again.sh tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28 \
          --duration 45s

would re-run the previous test, but for only 45 seconds, thus facilitating
tracking down the aforementioned rare boot-time failure.


Distributed Runs
================

Although kvm.sh is quite useful, its testing is confined to a single
system. It is not all that hard to use your favorite framework to cause
(say) 5 instances of kvm.sh to run on your 5 systems, but this will very
likely unnecessarily rebuild kernels. In addition, manually distributing
the desired rcutorture scenarios across the available systems can be
painstaking and error-prone.

And this is why the kvm-remote.sh script exists.

If the following command works::

  ssh system0 date

and if it also works for system1, system2, system3, system4, and system5,
and all of these systems have 64 CPUs, you can type::

  kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
          --cpus 64 --duration 8h --configs "5*CFLIST"

This will build each default scenario's kernel on the local system, then
spread each of five instances of each scenario over the systems listed,
running each scenario for eight hours. At the end of the runs, the
results will be gathered, recorded, and printed. Most of the parameters
that kvm.sh will accept can be passed to kvm-remote.sh, but the list of
systems must come first.

The kvm.sh ``--dryrun scenarios`` argument is useful for working out
how many scenarios may be run in one batch across a group of systems.
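
For example, a quick way to preview how the run shown above would be
batched might be the following (a sketch reusing that run's CPU count
and scenario list)::

  kvm.sh --cpus 64 --dryrun scenarios --configs '5*CFLIST'
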

You can also re-run a previous remote run in a manner similar to kvm.sh::

  kvm-remote.sh "system0 system1 system2 system3 system4 system5" \
          tools/testing/selftests/rcutorture/res/2022.11.03-11.26.28-remote \
          --duration 24h

In this case, most of the kvm-again.sh parameters may be supplied following
the pathname of the old run-results directory.