				Block IO Controller
				===================
Overview
========
cgroup subsys "blkio" implements the block IO controller. There is a need
for various kinds of IO control policies (like proportional BW, max BW)
both at leaf nodes as well as at intermediate nodes in a storage hierarchy.
The plan is to use the same cgroup-based management interface for the blkio
controller and, based on user options, switch IO policies in the background.

Currently two IO control policies are implemented. The first is a
proportional-weight, time-based division of disk policy. It is implemented
in CFQ; hence this policy takes effect only on leaf nodes when CFQ is in
use. The second is a throttling policy, which can be used to specify an
upper IO rate limit on devices. This policy is implemented in the generic
block layer and can be used on leaf nodes as well as on higher-level
logical devices like device mapper.

HOWTO
=====
Proportional Weight division of bandwidth
-----------------------------------------
You can do a very simple test by running two dd threads in two different
cgroups. Here is what you can do.

- Enable Block IO controller
    CONFIG_BLK_CGROUP=y

- Enable group scheduling in CFQ
    CONFIG_CFQ_GROUP_IOSCHED=y

- Compile and boot into the kernel and mount the IO controller (blkio); see
  cgroups.txt, Why are cgroups needed?.

    mount -t tmpfs cgroup_root /sys/fs/cgroup
    mkdir /sys/fs/cgroup/blkio
    mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Create two cgroups
    mkdir -p /sys/fs/cgroup/blkio/test1/ /sys/fs/cgroup/blkio/test2

- Set weights of group test1 and test2
    echo 1000 > /sys/fs/cgroup/blkio/test1/blkio.weight
    echo 500 > /sys/fs/cgroup/blkio/test2/blkio.weight

- Create two files of the same size (say 512MB each) on the same disk
  (file1, file2) and launch two dd threads in different cgroups to read
  those files.

    sync
    echo 3 > /proc/sys/vm/drop_caches

    dd if=/mnt/sdb/zerofile1 of=/dev/null &
    echo $! > /sys/fs/cgroup/blkio/test1/tasks
    cat /sys/fs/cgroup/blkio/test1/tasks

    dd if=/mnt/sdb/zerofile2 of=/dev/null &
    echo $! > /sys/fs/cgroup/blkio/test2/tasks
    cat /sys/fs/cgroup/blkio/test2/tasks

- At a macro level, the first dd should finish first. To get more precise
  data, keep looking (with the help of a script; a minimal sketch follows
  below) at the blkio.time and blkio.sectors files of both the test1 and
  test2 groups. These tell how much disk time (in milliseconds) each group
  got and how many sectors each group dispatched to the disk. We provide
  fairness in terms of disk time, so ideally the blkio.time values of the
  cgroups should be in proportion to their weights.

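  A minimal watcher sketch for this purpose, assuming the blkio hierarchy
  is mounted at /sys/fs/cgroup/blkio as in the steps above (the one-second
  interval is arbitrary):

    # Poll per-group disk time and sector counts once per second.
    while true; do
        for g in test1 test2; do
            echo "== $g =="
            # Each file reports: <major>:<minor> <value>
            cat /sys/fs/cgroup/blkio/$g/blkio.time
            cat /sys/fs/cgroup/blkio/$g/blkio.sectors
        done
        sleep 1
    done
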
Throttling/Upper Limit policy
-----------------------------
- Enable Block IO controller
    CONFIG_BLK_CGROUP=y

- Enable throttling in block layer
    CONFIG_BLK_DEV_THROTTLING=y

- Mount blkio controller (see cgroups.txt, Why are cgroups needed?)
    mount -t cgroup -o blkio none /sys/fs/cgroup/blkio

- Specify a bandwidth rate on a particular device for the root group. The
  format for the policy is "<major>:<minor> <bytes_per_second>".

    echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

  The above will put a limit of 1MB/second on reads happening for the root
  group on the device having major/minor number 8:16.

- Run dd to read a file and see if the rate is throttled to 1MB/s or not.

    # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 iflag=direct
    1024+0 records in
    1024+0 records out
    4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s

  Limits for writes can be set using the blkio.throttle.write_bps_device
  file.

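  As a hedged illustration (the device number is reused from above; the
  2MB/s value is arbitrary), a write limit is set the same way:

    echo "8:16 2097152" > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
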
Hierarchical Cgroups
====================

Both CFQ and throttling implement hierarchy support; however,
throttling's hierarchy support is enabled iff "sane_behavior" is
enabled from the cgroup side, which currently is a development option
and not publicly available.

Suppose somebody creates a hierarchy as follows (the commands to create
it are sketched at the end of this section):

                        root
                        /  \
                   test1    test2
                     |
                   test3

CFQ by default, and throttling with "sane_behavior", will handle the
hierarchy correctly. For details on CFQ hierarchy support, refer to
Documentation/block/cfq-iosched.txt. For throttling, all limits apply
to the whole subtree while all statistics are local to the IOs
directly generated by tasks in that cgroup.

Throttling without "sane_behavior" enabled from the cgroup side will
effectively treat all groups as being at the same level, as if the
hierarchy looked like the following:

                        pivot
                     /  /  \  \
                 root test1 test2 test3

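A minimal sketch of creating the first hierarchy above, assuming the blkio
controller is mounted at /sys/fs/cgroup/blkio as in the HOWTO:

    mkdir /sys/fs/cgroup/blkio/test1
    mkdir /sys/fs/cgroup/blkio/test2
    mkdir /sys/fs/cgroup/blkio/test1/test3
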
Various user visible config options
===================================
CONFIG_BLK_CGROUP
    - Block IO controller.

CONFIG_DEBUG_BLK_CGROUP
    - Debug help. Right now some additional stats files show up in the
      cgroup if this option is enabled.

CONFIG_CFQ_GROUP_IOSCHED
    - Enables group scheduling in CFQ. Currently only 1 level of group
      creation is allowed.

CONFIG_BLK_DEV_THROTTLING
    - Enable block device throttling support in block layer.

Details of cgroup files
=======================
Proportional weight policy files
--------------------------------
- blkio.weight
    - Specifies the per-cgroup weight. This is the default weight of the
      group on all devices until and unless overridden by a per-device
      rule (see blkio.weight_device).
      Currently the allowed range of weights is from 10 to 1000.

- blkio.weight_device
    - One can specify per-cgroup, per-device rules using this interface.
      These rules override the default value of the group weight as
      specified by blkio.weight.

      Following is the format.

      # echo dev_maj:dev_minor weight > blkio.weight_device
      Configure weight=300 on /dev/sdb (8:16) in this cgroup
      # echo 8:16 300 > blkio.weight_device
      # cat blkio.weight_device
      dev     weight
      8:16    300

      Configure weight=500 on /dev/sda (8:0) in this cgroup
      # echo 8:0 500 > blkio.weight_device
      # cat blkio.weight_device
      dev     weight
      8:0     500
      8:16    300

      Remove specific weight for /dev/sda in this cgroup
      # echo 8:0 0 > blkio.weight_device
      # cat blkio.weight_device
      dev     weight
      8:16    300

- blkio.leaf_weight[_device]
    - Equivalents of blkio.weight[_device] for the purpose of
      deciding how much weight tasks in the given cgroup have while
      competing with the cgroup's child cgroups. For details,
      please refer to Documentation/block/cfq-iosched.txt.

- blkio.time
    - Disk time allocated to the cgroup per device, in milliseconds. The
      first two fields specify the major and minor number of the device,
      and the third field specifies the disk time allocated to the group
      in milliseconds.

- blkio.sectors
    - Number of sectors transferred to/from disk by the group. The first
      two fields specify the major and minor number of the device, and
      the third field specifies the number of sectors transferred by the
      group to/from the device.

- blkio.io_service_bytes
    - Number of bytes transferred to/from the disk by the group. These
      are further divided by the type of operation - read or write, sync
      or async. The first two fields specify the major and minor number
      of the device, the third field specifies the operation type and
      the fourth field specifies the number of bytes.

- blkio.io_serviced
    - Number of IOs (bio) issued to the disk by the group. These
      are further divided by the type of operation - read or write, sync
      or async. The first two fields specify the major and minor number
      of the device, the third field specifies the operation type and
      the fourth field specifies the number of IOs.

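  As an illustrative sketch of the shape these per-device stat files take
  (the device number 8:16 and the byte counts below are made up, not real
  output):

    # cat blkio.io_service_bytes
    8:16 Read 1310720
    8:16 Write 0
    8:16 Sync 1310720
    8:16 Async 0
    8:16 Total 1310720
    Total 1310720
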
- blkio.io_service_time
    - Total amount of time between request dispatch and request
      completion for the IOs done by this cgroup. This is in nanoseconds
      to make it meaningful for flash devices too. For devices with a
      queue depth of 1, this time represents the actual service time.
      When queue_depth > 1, that is no longer true, as requests may be
      served out of order. This may cause the service time for a given IO
      to include the service time of multiple IOs when served out of
      order, which may result in total io_service_time > actual time
      elapsed. This time is further divided by the type of operation -
      read or write, sync or async. The first two fields specify the
      major and minor number of the device, the third field specifies the
      operation type and the fourth field specifies the io_service_time
      in ns.

- blkio.io_wait_time
    - Total amount of time the IOs for this cgroup spent waiting in the
      scheduler queues for service. This can be greater than the total
      time elapsed, since it is the cumulative io_wait_time of all IOs.
      It is not a measure of total time the cgroup spent waiting but
      rather a measure of the wait_time of its individual IOs. For
      devices with queue_depth > 1, this metric does not include the time
      spent waiting between dispatch to the device and actual service
      (there might be a time lag here due to re-ordering of requests by
      the device). This is in nanoseconds to make it meaningful for flash
      devices too. This time is further divided by the type of operation
      - read or write, sync or async. The first two fields specify the
      major and minor number of the device, the third field specifies the
      operation type and the fourth field specifies the io_wait_time in
      ns.

- blkio.io_merged
    - Total number of bios/requests merged into requests belonging to
      this cgroup. This is further divided by the type of operation -
      read or write, sync or async.

- blkio.io_queued
    - Total number of requests queued up at any given instant for this
      cgroup. This is further divided by the type of operation - read or
      write, sync or async.

- blkio.avg_queue_size
    - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
      The average queue size for this cgroup over the entire time of the
      cgroup's existence. Queue size samples are taken each time one of
      the queues of this cgroup gets a timeslice.

- blkio.group_wait_time
    - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
      This is the amount of time the cgroup had to wait since it became
      busy (i.e., went from 0 to 1 request queued) to get a timeslice for
      one of its queues. This is different from io_wait_time, which is
      the cumulative total of the amount of time spent by each IO in that
      cgroup waiting in the scheduler queue. This is in nanoseconds. If
      this is read when the cgroup is in a waiting (for timeslice) state,
      the stat will only report the group_wait_time accumulated till the
      last time it got a timeslice and will not include the current
      delta.

- blkio.empty_time
    - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
      This is the amount of time a cgroup spends without any pending
      requests when not being served, i.e., it does not include any time
      spent idling for one of the queues of the cgroup. This is in
      nanoseconds. If this is read when the cgroup is in an empty state,
      the stat will only report the empty_time accumulated till the last
      time it had a pending request and will not include the current
      delta.

- blkio.idle_time
    - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
      This is the amount of time spent by the IO scheduler idling for a
      given cgroup in anticipation of a better request than the existing
      ones from other queues/cgroups. This is in nanoseconds. If this is
      read when the cgroup is in an idling state, the stat will only
      report the idle_time accumulated till the last idle period and will
      not include the current delta.

- blkio.dequeue
    - Debugging aid, only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
      gives statistics about how many times a group was dequeued
      from the service tree of the device. The first two fields specify
      the major and minor number of the device, and the third field
      specifies the number of times the group was dequeued from that
      particular device.

- blkio.*_recursive
    - Recursive versions of various stats. These files show the
      same information as their non-recursive counterparts but
      include stats from all the descendant cgroups.

Throttling/Upper limit policy files
-----------------------------------
- blkio.throttle.read_bps_device
    - Specifies the upper limit on the READ rate from the device. The IO
      rate is specified in bytes per second. Rules are per device.
      Following is the format.

      echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.read_bps_device

- blkio.throttle.write_bps_device
    - Specifies the upper limit on the WRITE rate to the device. The IO
      rate is specified in bytes per second. Rules are per device.
      Following is the format.

      echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.throttle.write_bps_device

- blkio.throttle.read_iops_device
    - Specifies the upper limit on the READ rate from the device. The IO
      rate is specified in IOs per second. Rules are per device.
      Following is the format.

      echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.read_iops_device

- blkio.throttle.write_iops_device
    - Specifies the upper limit on the WRITE rate to the device. The IO
      rate is specified in IOs per second. Rules are per device.
      Following is the format.

      echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.throttle.write_iops_device

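For example (a sketch; the 8:16 device and the 100 IOPS value are
illustrative only), to cap reads on a device at 100 IOs per second for the
root group:

    echo "8:16 100" > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
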
Note: If both BW and IOPS rules are specified for a device, then IO is
      subject to both constraints.

- blkio.throttle.io_serviced
    - Number of IOs (bio) issued to the disk by the group. These
      are further divided by the type of operation - read or write, sync
      or async. The first two fields specify the major and minor number
      of the device, the third field specifies the operation type and
      the fourth field specifies the number of IOs.

- blkio.throttle.io_service_bytes
    - Number of bytes transferred to/from the disk by the group. These
      are further divided by the type of operation - read or write, sync
      or async. The first two fields specify the major and minor number
      of the device, the third field specifies the operation type and
      the fourth field specifies the number of bytes.

Common files among various policies
-----------------------------------
- blkio.reset_stats
    - Writing an int to this file will result in resetting all the stats
      for that cgroup.

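  For example (any integer works; 0 here is purely illustrative):

    echo 0 > /sys/fs/cgroup/blkio/test1/blkio.reset_stats
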
CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/slice_idle
------------------------------------------
On faster hardware CFQ can be slow, especially with sequential workloads.
This happens because CFQ idles on a single queue, and a single queue might
not drive deep enough request queue depths to keep the storage busy. In
such scenarios one can try setting slice_idle=0; that switches CFQ to IOPS
(IO operations per second) mode on NCQ-supporting hardware.

That means CFQ will not idle between cfq queues of a cfq group and hence
will be able to drive higher queue depths and achieve better throughput.
It also means that CFQ provides fairness among groups in terms of IOPS and
not in terms of disk time.

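For example (the disk name sdb is illustrative):

    echo 0 > /sys/block/sdb/queue/iosched/slice_idle
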
/sys/block/<disk>/queue/iosched/group_idle
------------------------------------------
If one disables idling on individual cfq queues and cfq service trees by
setting slice_idle=0, group_idle kicks in. That means CFQ will still idle
on the group in an attempt to provide fairness among groups.

By default group_idle is the same as slice_idle and does not do anything
if slice_idle is enabled.

One can experience an overall throughput drop after creating multiple
groups and putting applications in them which are not driving enough IO
to keep the disk busy. In that case set group_idle=0, and CFQ will not
idle on individual groups and throughput should improve.

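For example (again with an illustrative disk name):

    echo 0 > /sys/block/sdb/queue/iosched/group_idle
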
Writeback
=========

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

On traditional cgroup hierarchies, relationships between different
controllers cannot be established, making it impossible for writeback
to operate while accounting for cgroup resource restrictions; all
writeback IOs are attributed to the root cgroup.

If both the blkio and memory controllers are used on the v2 hierarchy
and the filesystem supports cgroup writeback, writeback operations
correctly follow the resource restrictions imposed by both the memory
and blkio controllers.

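A hedged sketch of such a setup, assuming a kernel where the v2 hierarchy
is mounted as the cgroup2 filesystem type and the blkio controller appears
under its v2 name "io":

    # Mount the unified (v2) hierarchy.
    mount -t cgroup2 none /sys/fs/cgroup
    # Enable the io and memory controllers for child groups.
    echo "+io +memory" > /sys/fs/cgroup/cgroup.subtree_control
    # Writeback of pages dirtied by tasks in this group now follows both
    # controllers' restrictions (given filesystem support).
    mkdir /sys/fs/cgroup/wbgrp
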
Writeback examines both system-wide and per-cgroup dirty memory status
and enforces the more restrictive of the two. Also, writeback control
parameters which are absolute values - vm.dirty_bytes and
vm.dirty_background_bytes - are distributed across cgroups according
to their current writeback bandwidth.

There's a peculiarity stemming from the discrepancy in ownership
granularity between the memory controller and writeback. While the
memory controller tracks ownership per page, writeback operates on a
per-inode basis. cgroup writeback bridges the gap by tracking
ownership by inode but migrating ownership if too many foreign pages,
pages which don't match the current inode ownership, have been
encountered while writing back the inode.

This is a conscious design choice, as writeback operations are
inherently tied to inodes, making strictly following page ownership
complicated and inefficient. The only use case which suffers from
this compromise is multiple cgroups concurrently dirtying disjoint
regions of the same inode, which is an unlikely use case and has been
decided to be unsupported. Note that as the memory controller assigns
page ownership on first use and doesn't update it until the page is
released, even if cgroup writeback strictly followed page ownership,
multiple cgroups dirtying overlapping areas wouldn't work as expected.
In general, write-sharing an inode across multiple cgroups is not well
supported.

Filesystem support for cgroup writeback
---------------------------------------

A filesystem can make writeback IOs cgroup-aware by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

* wbc_init_bio(@wbc, @bio)

  Should be called for each bio carrying writeback data and associates
  the bio with the inode's owner cgroup. Can be called anytime
  between bio allocation and submission.

* wbc_account_io(@wbc, @page, @bytes)

  Should be called for each data segment being written out. While
  this function doesn't care exactly when it's called during the
  writeback session, it's easiest and most natural to call it as
  data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting MS_CGROUPWB in ->s_flags. This allows for
selective disabling of cgroup writeback support, which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and, if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() or using bio_associate_blkcg()
directly.