.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2.  It describes all userland-visible aspects
of cgroup including core and specific controller behaviors.  All
future changes must be reflected in this document.  Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized.  The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers".  When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes.  A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup.  All threads of a process belong to the
same cgroup.  On creation, all processes are put in the cgroup that
the parent process belongs to at the time.  A process can be migrated
to another cgroup.  Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup.  All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup.  When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further.  The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp").  All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies.  This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy.  Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use.  It is recommended to decide
the hierarchies and controller associations before starting using the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible.  To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

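For example, the following command line fragments (option values as
documented in Documentation/admin-guide/kernel-parameters.txt; how the
line is edited depends on the bootloader in use) deny controllers to
v1:

```shell
# Kernel command line fragments (a sketch; set via the bootloader):
#
#   cgroup_no_v1=memory,cpu    disable only the named controllers in v1
#   cgroup_no_v1=all           disable every v1 controller
```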
cgroup v2 currently supports the following mount options.

  nsdelegate

        Consider cgroup namespaces as delegation boundaries.  This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace.  The mount option is
        ignored on non-init namespace mounts.  Please refer to the
        Delegation section for details.

  memory_localevents

        Only populate memory.events with data for the current cgroup,
        and not any subtrees.  This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace.  The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot

        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups.  This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees.  This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure.  Each cgroup has a read-writable interface file
"cgroup.procs".  When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line.  The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file.  Only one process can be migrated
on a single write(2) call.  If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

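As a sketch of the interface shape, the snippet below mimics the write
with a scratch directory, since operating on a real hierarchy
(typically mounted at /sys/fs/cgroup) requires root; on real cgroupfs
the kernel creates "cgroup.procs" itself and also drops the PID from
the source cgroup as part of the migration.

```shell
# Mock of process migration using a plain directory.  On a real
# hierarchy, DEST would be a cgroup directory and the kernel would
# interpret the write as a migration request.
DEST=$(mktemp -d)/child
mkdir -p "$DEST"
: > "$DEST/cgroup.procs"         # pre-exists on real cgroupfs
echo "$$" > "$DEST/cgroup.procs" # any thread's PID moves the process
cat "$DEST/cgroup.procs"         # the writer's own PID
```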
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation.  After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory.  Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership.  If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy.  The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes.  By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread.  The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup.  The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy.  The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded.  Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file.  The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again.  To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children.  The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state.  Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains.  C can't be used until it is turned into a
threaded cgroup.  "cgroup.type" file will report "domain (invalid)" in
these cases.  Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup.  Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs".  While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree.  When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants.  All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups.  Each
threaded controller defines how such competitions are handled.


[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it.  Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1.  poll and [id]notify
events are triggered when the value changes.  This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited.  The populated state updates and
notifications are recursive.  Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0.  After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.

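As a sketch, the field can be picked out of "cgroup.events" with
ordinary text tools; the snippet below operates on a mocked-up copy of
the file, since reading a real one requires a mounted cgroup2
hierarchy.

```shell
# Extract the "populated" field from a cgroup.events file.  A mock
# copy with sample contents is used here; a real one lives in the
# cgroup's directory and is rewritten by the kernel on state changes.
EV=$(mktemp)
printf 'populated 1\n' > "$EV"            # sample contents
awk '$1 == "populated" { print $2 }' "$EV"
```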

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default.  Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled.  When multiple operations are specified as above, either they
all succeed or they all fail.  If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy.  The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B.  As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups.  In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D.  Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D.  This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent.  This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file.  A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own.  In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves.  This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction.  Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers.  How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control".  This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup.  To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.

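That sequence can be sketched as follows, mocked with plain files
since a real hierarchy requires root; on real cgroupfs each write to
the leaf's "cgroup.procs" also removes the PID from the parent
automatically, and the PIDs below are made up.

```shell
# 1. create a leaf, 2. move every process into it, 3. only then may a
#    controller be enabled in the parent's cgroup.subtree_control.
CG=$(mktemp -d)                           # stand-in for a cgroup dir
mkdir "$CG/leaf"
printf '101\n102\n' > "$CG/cgroup.procs"  # pretend member PIDs
: > "$CG/leaf/cgroup.procs"
while read -r pid; do
    echo "$pid" >> "$CG/leaf/cgroup.procs"   # step 2: move each process
done < "$CG/cgroup.procs"
echo "+memory" > "$CG/cgroup.subtree_control"  # step 3: now permitted
cat "$CG/leaf/cgroup.procs"
```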

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways.  First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them.  For the first method, this is
achieved by not granting access to these files.  For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types.  Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent.  The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.

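A minimal sketch of the first method, mocked with a scratch directory
and the current user standing in for the delegatee; a real delegation
would chown the cgroup's directory and organization files under
/sys/fs/cgroup to the less privileged user.

```shell
# Grant the delegatee the directory plus the files used to organize
# processes and children; resource control files stay with the parent.
CG=$(mktemp -d)/dlg
mkdir -p "$CG"
touch "$CG/cgroup.procs" "$CG/cgroup.threads" "$CG/cgroup.subtree_control"
chown "$(id -un)" "$CG" \
      "$CG/cgroup.procs" "$CG/cgroup.threads" "$CG/cgroup.subtree_control"
ls "$CG"
```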

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs".  U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration.  If either
is not reachable, the migration is rejected with -ENOENT.

6c292092 | 543 | |
633b11be MCC |
544 | Guidelines |
545 | ---------- | |
6c292092 | 546 | |
633b11be MCC |
547 | Organize Once and Control |
548 | ~~~~~~~~~~~~~~~~~~~~~~~~~ | |
6c292092 TH |
549 | |
550 | Migrating a process across cgroups is a relatively expensive operation | |
551 | and stateful resources such as memory are not moved together with the | |
552 | process. This is an explicit design decision as there often exist | |
553 | inherent trade-offs between migration and various hot paths in terms | |
554 | of synchronization cost. | |
555 | ||
556 | As such, migrating processes across cgroups frequently as a means to | |
557 | apply different resource restrictions is discouraged. A workload | |
558 | should be assigned to a cgroup according to the system's logical and | |
559 | resource structure once on start-up. Dynamic adjustments to resource | |
560 | distribution can be made by changing controller configuration through | |
561 | the interface files. | |
562 | ||
563 | ||
633b11be MCC |
564 | Avoid Name Collisions |
565 | ~~~~~~~~~~~~~~~~~~~~~ | |
6c292092 TH |
566 | |
567 | Interface files for a cgroup and its children cgroups occupy the same | |
568 | directory and it is possible to create children cgroups which collide | |
569 | with interface files. | |
570 | ||
571 | All cgroup core interface files are prefixed with "cgroup." and each | |
572 | controller's interface files are prefixed with the controller name and | |
573 | a dot. A controller's name is composed of lower case alphabets and | |
574 | '_'s but never begins with an '_' so it can be used as the prefix | |
575 | character for collision avoidance. Also, interface file names won't | |
576 | start or end with terms which are often used in categorizing workloads | |
577 | such as job, service, slice, unit or workload. | |
578 | ||
579 | cgroup doesn't do anything to prevent name collisions and it's the | |
580 | user's responsibility to avoid them. | |
581 | ||

Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.

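As a sketch of the arithmetic with made-up weights, three active
siblings weighted 100, 200 and 100 split the parent's resource
25% / 50% / 25%:

```shell
# Each active child receives weight/sum of the parent's resource.
weights="100 200 100"
sum=0
for w in $weights; do sum=$((sum + w)); done
for w in $weights; do echo "$((100 * w / sum))%"; done
```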

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is a
noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

629 | ||
Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is a
no-op.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.

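Proportional distribution of an over-committed protection can be sketched as follows (a simplified model; the kernel's actual effective-protection calculation for memory.min/memory.low is more involved):

```python
def effective_protection(parent_available, children):
    """children maps name -> (configured_protection, usage).
    If the children together ask for more protection than the
    parent can provide, each child gets a share proportional to
    its usage below its own configured protection."""
    asked = {n: min(prot, usage) for n, (prot, usage) in children.items()}
    total = sum(asked.values())
    if total <= parent_available:
        return asked
    return {n: parent_available * a / total for n, a in asked.items()}
```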
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which grants
no resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.

633b11be MCC |
671 | Interface Files |
672 | =============== | |
6c292092 | 673 | |
633b11be MCC |
674 | Format |
675 | ------ | |
6c292092 TH |
676 | |
677 | All interface files should be in one of the following formats whenever | |
633b11be | 678 | possible:: |
6c292092 TH |
679 | |
680 | New-line separated values | |
681 | (when only one value can be written at once) | |
682 | ||
683 | VAL0\n | |
684 | VAL1\n | |
685 | ... | |
686 | ||
687 | Space separated values | |
688 | (when read-only or multiple values can be written at once) | |
689 | ||
690 | VAL0 VAL1 ...\n | |
691 | ||
692 | Flat keyed | |
693 | ||
694 | KEY0 VAL0\n | |
695 | KEY1 VAL1\n | |
696 | ... | |
697 | ||
698 | Nested keyed | |
699 | ||
700 | KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01... | |
701 | KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11... | |
702 | ... | |
703 | ||
704 | For a writable file, the format for writing should generally match | |
705 | reading; however, controllers may allow omitting later fields or | |
706 | implement restricted shortcuts for most common use cases. | |
707 | ||
708 | For both flat and nested keyed files, only the values for a single key | |
709 | can be written at a time. For nested keyed files, the sub key pairs | |
710 | may be specified in any order and not all pairs have to be specified. | |
711 | ||
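For illustration, the flat keyed and nested keyed formats can be parsed with a short sketch like the following (not an official parser; whitespace handling is simplified):

```python
def parse_flat_keyed(text):
    """One 'KEY VAL' pair per line -> dict."""
    return dict(line.split(None, 1) for line in text.splitlines())

def parse_nested_keyed(text):
    """One 'KEY SUB_KEY0=VAL00 SUB_KEY1=VAL01 ...' per line ->
    dict of dicts.  Sub-key pairs may appear in any order."""
    result = {}
    for line in text.splitlines():
        key, *pairs = line.split()
        result[key] = dict(p.split("=", 1) for p in pairs)
    return result
```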
712 | ||
633b11be MCC |
713 | Conventions |
714 | ----------- | |
6c292092 TH |
715 | |
716 | - Settings for a single feature should be contained in a single file. | |
717 | ||
718 | - The root cgroup should be exempt from resource control and thus | |
936f2a70 | 719 | shouldn't have resource control interface files. |
6c292092 | 720 | |
a5e112e6 TH |
721 | - The default time unit is microseconds. If a different unit is ever |
722 | used, an explicit unit suffix must be present. | |
723 | ||
- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.
726 | ||
6c292092 TH |
727 | - If a controller implements weight based resource distribution, its |
728 | interface file should be named "weight" and have the range [1, | |
729 | 10000] with 100 as the default. The values are chosen to allow | |
730 | enough and symmetric bias in both directions while keeping it | |
731 | intuitive (the default is 100%). | |
732 | ||
733 | - If a controller implements an absolute resource guarantee and/or | |
734 | limit, the interface files should be named "min" and "max" | |
735 | respectively. If a controller implements best effort resource | |
736 | guarantee and/or limit, the interface files should be named "low" | |
737 | and "high" respectively. | |
738 | ||
739 | In the above four control files, the special token "max" should be | |
740 | used to represent upward infinity for both reading and writing. | |
741 | ||
742 | - If a setting has a configurable default value and keyed specific | |
743 | overrides, the default entry should be keyed with "default" and | |
744 | appear as the first entry in the file. | |
745 | ||
746 | The default value can be updated by writing either "default $VAL" or | |
747 | "$VAL". | |
748 | ||
749 | When writing to update a specific override, "default" can be used as | |
750 | the value to indicate removal of the override. Override entries | |
751 | with "default" as the value must not appear when read. | |
752 | ||
753 | For example, a setting which is keyed by major:minor device numbers | |
633b11be | 754 | with integer values may look like the following:: |
6c292092 TH |
755 | |
756 | # cat cgroup-example-interface-file | |
757 | default 150 | |
758 | 8:0 300 | |
759 | ||
633b11be | 760 | The default value can be updated by:: |
6c292092 TH |
761 | |
762 | # echo 125 > cgroup-example-interface-file | |
763 | ||
633b11be | 764 | or:: |
6c292092 TH |
765 | |
766 | # echo "default 125" > cgroup-example-interface-file | |
767 | ||
633b11be | 768 | An override can be set by:: |
6c292092 TH |
769 | |
770 | # echo "8:16 170" > cgroup-example-interface-file | |
771 | ||
633b11be | 772 | and cleared by:: |
6c292092 TH |
773 | |
774 | # echo "8:0 default" > cgroup-example-interface-file | |
775 | # cat cgroup-example-interface-file | |
776 | default 125 | |
777 | 8:16 170 | |
778 | ||
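The default-plus-override write semantics shown above can be modeled as a sketch (the state dict stands in for the hypothetical cgroup-example-interface-file):

```python
def write_entry(state, line):
    """state: {'default': VAL, key: VAL, ...}.  A bare value or an
    explicit 'default VAL' updates the default; writing 'default'
    as a key's value removes that override."""
    parts = line.split()
    if len(parts) == 1:
        state["default"] = parts[0]
    else:
        key, val = parts
        if val == "default":
            state.pop(key, None)
        else:
            state[key] = val
    return state

state = {"default": "150", "8:0": "300"}
write_entry(state, "125")          # update the default
write_entry(state, "8:16 170")     # set an override
write_entry(state, "8:0 default")  # clear an override
```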
- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.
783 | ||
784 | ||
633b11be MCC |
785 | Core Interface Files |
786 | -------------------- | |
6c292092 TH |
787 | |
788 | All cgroup core files are prefixed with "cgroup." | |
789 | ||
8cfd8147 | 790 | cgroup.type |
8cfd8147 TH |
791 | A read-write single value file which exists on non-root |
792 | cgroups. | |
793 | ||
794 | When read, it indicates the current type of the cgroup, which | |
795 | can be one of the following values. | |
796 | ||
797 | - "domain" : A normal valid domain cgroup. | |
798 | ||
799 | - "domain threaded" : A threaded domain cgroup which is | |
800 | serving as the root of a threaded subtree. | |
801 | ||
802 | - "domain invalid" : A cgroup which is in an invalid state. | |
803 | It can't be populated or have controllers enabled. It may | |
804 | be allowed to become a threaded cgroup. | |
805 | ||
806 | - "threaded" : A threaded cgroup which is a member of a | |
807 | threaded subtree. | |
808 | ||
809 | A cgroup can be turned into a threaded cgroup by writing | |
810 | "threaded" to this file. | |
811 | ||
6c292092 | 812 | cgroup.procs |
6c292092 TH |
813 | A read-write new-line separated values file which exists on |
814 | all cgroups. | |
815 | ||
816 | When read, it lists the PIDs of all processes which belong to | |
817 | the cgroup one-per-line. The PIDs are not ordered and the | |
818 | same PID may show up more than once if the process got moved | |
819 | to another cgroup and then back or the PID got recycled while | |
820 | reading. | |
821 | ||
822 | A PID can be written to migrate the process associated with | |
823 | the PID to the cgroup. The writer should match all of the | |
824 | following conditions. | |
825 | ||
6c292092 | 826 | - It must have write access to the "cgroup.procs" file. |
8cfd8147 TH |
827 | |
828 | - It must have write access to the "cgroup.procs" file of the | |
829 | common ancestor of the source and destination cgroups. | |
830 | ||
831 | When delegating a sub-hierarchy, write access to this file | |
832 | should be granted along with the containing directory. | |
833 | ||
834 | In a threaded cgroup, reading this file fails with EOPNOTSUPP | |
835 | as all the processes belong to the thread root. Writing is | |
836 | supported and moves every thread of the process to the cgroup. | |
837 | ||
838 | cgroup.threads | |
839 | A read-write new-line separated values file which exists on | |
840 | all cgroups. | |
841 | ||
842 | When read, it lists the TIDs of all threads which belong to | |
843 | the cgroup one-per-line. The TIDs are not ordered and the | |
844 | same TID may show up more than once if the thread got moved to | |
845 | another cgroup and then back or the TID got recycled while | |
846 | reading. | |
847 | ||
848 | A TID can be written to migrate the thread associated with the | |
849 | TID to the cgroup. The writer should match all of the | |
850 | following conditions. | |
851 | ||
852 | - It must have write access to the "cgroup.threads" file. | |
853 | ||
854 | - The cgroup that the thread is currently in must be in the | |
855 | same resource domain as the destination cgroup. | |
6c292092 TH |
856 | |
857 | - It must have write access to the "cgroup.procs" file of the | |
858 | common ancestor of the source and destination cgroups. | |
859 | ||
860 | When delegating a sub-hierarchy, write access to this file | |
861 | should be granted along with the containing directory. | |
862 | ||
863 | cgroup.controllers | |
6c292092 TH |
864 | A read-only space separated values file which exists on all |
865 | cgroups. | |
866 | ||
867 | It shows space separated list of all controllers available to | |
868 | the cgroup. The controllers are not ordered. | |
869 | ||
870 | cgroup.subtree_control | |
6c292092 TH |
871 | A read-write space separated values file which exists on all |
872 | cgroups. Starts out empty. | |
873 | ||
874 | When read, it shows space separated list of the controllers | |
875 | which are enabled to control resource distribution from the | |
876 | cgroup to its children. | |
877 | ||
878 | Space separated list of controllers prefixed with '+' or '-' | |
879 | can be written to enable or disable controllers. A controller | |
880 | name prefixed with '+' enables the controller and '-' | |
881 | disables. If a controller appears more than once on the list, | |
882 | the last one is effective. When multiple enable and disable | |
883 | operations are specified, either all succeed or all fail. | |
884 | ||
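The enable/disable write semantics can be sketched as a model (simplified; the kernel also validates each controller against the cgroup's "cgroup.controllers" and applies the whole write atomically):

```python
def apply_subtree_control(enabled, command, available):
    """enabled: set of currently enabled controllers.
    command: e.g. '+cpu +memory -io'.  If a controller appears
    more than once, the last occurrence wins; if any named
    controller is unavailable, the whole write fails."""
    ops = {}
    for token in command.split():
        sign, name = token[0], token[1:]
        if sign not in "+-" or name not in available:
            raise ValueError(command)
        ops[name] = (sign == "+")
    result = set(enabled)
    for name, enable in ops.items():
        (result.add if enable else result.discard)(name)
    return result
```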
885 | cgroup.events | |
6c292092 TH |
886 | A read-only flat-keyed file which exists on non-root cgroups. |
887 | The following entries are defined. Unless specified | |
888 | otherwise, a value change in this file generates a file | |
889 | modified event. | |
890 | ||
891 | populated | |
6c292092 TH |
892 | 1 if the cgroup or its descendants contains any live |
893 | processes; otherwise, 0. | |
afe471ea RG |
894 | frozen |
895 | 1 if the cgroup is frozen; otherwise, 0. | |
6c292092 | 896 | |
cgroup.max.descendants
A read-write single value file. The default is "max".

Maximum allowed number of descendant cgroups.
If the actual number of descendants is equal or larger,
an attempt to create a new cgroup in the hierarchy will fail.

cgroup.max.depth
A read-write single value file. The default is "max".

Maximum allowed descent depth below the current cgroup.
If the actual descent depth is equal or larger,
an attempt to create a new child cgroup will fail.
910 | ||
ec39225c RG |
911 | cgroup.stat |
912 | A read-only flat-keyed file with the following entries: | |
913 | ||
914 | nr_descendants | |
915 | Total number of visible descendant cgroups. | |
916 | ||
917 | nr_dying_descendants | |
Total number of dying descendant cgroups. A cgroup becomes
dying after being deleted by a user. The cgroup will remain
in the dying state for some undefined time (which can depend
on system load) before being completely destroyed.

A process can't enter a dying cgroup under any circumstances,
and a dying cgroup can't be revived.

A dying cgroup can consume system resources not exceeding
limits which were active at the moment of cgroup deletion.
928 | ||
afe471ea RG |
929 | cgroup.freeze |
930 | A read-write single value file which exists on non-root cgroups. | |
931 | Allowed values are "0" and "1". The default is "0". | |
932 | ||
933 | Writing "1" to the file causes freezing of the cgroup and all | |
934 | descendant cgroups. This means that all belonging processes will | |
be stopped and will not run until the cgroup is explicitly
unfrozen. Freezing of the cgroup may take some time; when this action
937 | is completed, the "frozen" value in the cgroup.events control file | |
938 | will be updated to "1" and the corresponding notification will be | |
939 | issued. | |
940 | ||
941 | A cgroup can be frozen either by its own settings, or by settings | |
of any ancestor cgroups. If any ancestor cgroup is frozen, the
943 | cgroup will remain frozen. | |
944 | ||
945 | Processes in the frozen cgroup can be killed by a fatal signal. | |
946 | They also can enter and leave a frozen cgroup: either by an explicit | |
947 | move by a user, or if freezing of the cgroup races with fork(). | |
948 | If a process is moved to a frozen cgroup, it stops. If a process is | |
949 | moved out of a frozen cgroup, it becomes running. | |
950 | ||
The frozen status of a cgroup doesn't affect any cgroup tree operations:
952 | it's possible to delete a frozen (and empty) cgroup, as well as | |
953 | create new sub-cgroups. | |
6c292092 | 954 | |
633b11be MCC |
955 | Controllers |
956 | =========== | |
6c292092 | 957 | |
e5ba9ea6 KK |
958 | .. _cgroup-v2-cpu: |
959 | ||
633b11be MCC |
960 | CPU |
961 | --- | |
6c292092 | 962 | |
6c292092 TH |
The "cpu" controller regulates distribution of CPU cycles. This
964 | controller implements weight and absolute bandwidth limit models for | |
965 | normal scheduling policy and absolute bandwidth allocation model for | |
966 | realtime scheduling policy. | |
967 | ||
2480c093 PB |
In all the above models, cycles distribution is defined only on a temporal
basis and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.
974 | ||
c2f31b79 TH |
975 | WARNING: cgroup2 doesn't yet support control of realtime processes and |
976 | the cpu controller can only be enabled when all RT processes are in | |
977 | the root cgroup. Be aware that system management software may already | |
978 | have placed RT processes into nonroot cgroups during the system boot | |
979 | process, and these processes may need to be moved to the root cgroup | |
980 | before the cpu controller can be enabled. | |
981 | ||
6c292092 | 982 | |
633b11be MCC |
983 | CPU Interface Files |
984 | ~~~~~~~~~~~~~~~~~~~ | |
6c292092 TH |
985 | |
986 | All time durations are in microseconds. | |
987 | ||
988 | cpu.stat | |
936f2a70 | 989 | A read-only flat-keyed file. |
d41bf8c9 | 990 | This file exists whether the controller is enabled or not. |
6c292092 | 991 | |
d41bf8c9 | 992 | It always reports the following three stats: |
6c292092 | 993 | |
633b11be MCC |
994 | - usage_usec |
995 | - user_usec | |
996 | - system_usec | |
d41bf8c9 TH |
997 | |
998 | and the following three when the controller is enabled: | |
999 | ||
633b11be MCC |
1000 | - nr_periods |
1001 | - nr_throttled | |
1002 | - throttled_usec | |
6c292092 TH |
1003 | |
1004 | cpu.weight | |
6c292092 TH |
1005 | A read-write single value file which exists on non-root |
1006 | cgroups. The default is "100". | |
1007 | ||
1008 | The weight in the range [1, 10000]. | |
1009 | ||
0d593634 TH |
1010 | cpu.weight.nice |
1011 | A read-write single value file which exists on non-root | |
1012 | cgroups. The default is "0". | |
1013 | ||
1014 | The nice value is in the range [-20, 19]. | |
1015 | ||
1016 | This interface file is an alternative interface for | |
1017 | "cpu.weight" and allows reading and setting weight using the | |
1018 | same values used by nice(2). Because the range is smaller and | |
1019 | granularity is coarser for the nice values, the read value is | |
1020 | the closest approximation of the current weight. | |
1021 | ||
6c292092 | 1022 | cpu.max |
6c292092 TH |
1023 | A read-write two value file which exists on non-root cgroups. |
1024 | The default is "max 100000". | |
1025 | ||
633b11be | 1026 | The maximum bandwidth limit. It's in the following format:: |
6c292092 TH |
1027 | |
1028 | $MAX $PERIOD | |
1029 | ||
which indicates that the group may consume up to $MAX in each
$PERIOD duration. "max" for $MAX indicates no limit. If only
one number is written, $MAX is updated.
1033 | ||
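For example, the CPU fraction implied by a "cpu.max" setting can be computed with a small sketch (parse_cpu_max is a hypothetical helper, not a kernel interface):

```python
def parse_cpu_max(value):
    """Parse '$MAX $PERIOD' into a CPU fraction, where 'max'
    means unlimited.  E.g. '50000 100000' allows half a CPU."""
    max_part, period = value.split()
    if max_part == "max":
        return float("inf")
    return int(max_part) / int(period)
```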
2ce7135a JW |
1034 | cpu.pressure |
1035 | A read-only nested-key file which exists on non-root cgroups. | |
1036 | ||
1037 | Shows pressure stall information for CPU. See | |
373e8ffa | 1038 | :ref:`Documentation/accounting/psi.rst <psi>` for details. |
2ce7135a | 1039 | |
2480c093 PB |
1040 | cpu.uclamp.min |
1041 | A read-write single value file which exists on non-root cgroups. | |
1042 | The default is "0", i.e. no utilization boosting. | |
1043 | ||
1044 | The requested minimum utilization (protection) as a percentage | |
1045 | rational number, e.g. 12.34 for 12.34%. | |
1046 | ||
1047 | This interface allows reading and setting minimum utilization clamp | |
1048 | values similar to the sched_setattr(2). This minimum utilization | |
1049 | value is used to clamp the task specific minimum utilization clamp. | |
1050 | ||
1051 | The requested minimum utilization (protection) is always capped by | |
1052 | the current value for the maximum utilization (limit), i.e. | |
1053 | `cpu.uclamp.max`. | |
1054 | ||
1055 | cpu.uclamp.max | |
1056 | A read-write single value file which exists on non-root cgroups. | |
The default is "max", i.e. no utilization capping.
1058 | ||
1059 | The requested maximum utilization (limit) as a percentage rational | |
1060 | number, e.g. 98.76 for 98.76%. | |
1061 | ||
1062 | This interface allows reading and setting maximum utilization clamp | |
1063 | values similar to the sched_setattr(2). This maximum utilization | |
1064 | value is used to clamp the task specific maximum utilization clamp. | |
1065 | ||
1066 | ||
6c292092 | 1067 | |
633b11be MCC |
1068 | Memory |
1069 | ------ | |
6c292092 TH |
1070 | |
1071 | The "memory" controller regulates distribution of memory. Memory is | |
1072 | stateful and implements both limit and protection models. Due to the | |
1073 | intertwining between memory usage and reclaim pressure and the | |
1074 | stateful nature of memory, the distribution model is relatively | |
1075 | complex. | |
1076 | ||
1077 | While not completely water-tight, all major memory usages by a given | |
1078 | cgroup are tracked so that the total memory consumption can be | |
1079 | accounted and controlled to a reasonable extent. Currently, the | |
1080 | following types of memory usages are tracked. | |
1081 | ||
1082 | - Userland memory - page cache and anonymous memory. | |
1083 | ||
1084 | - Kernel data structures such as dentries and inodes. | |
1085 | ||
1086 | - TCP socket buffers. | |
1087 | ||
1088 | The above list may expand in the future for better coverage. | |
1089 | ||
1090 | ||
633b11be MCC |
1091 | Memory Interface Files |
1092 | ~~~~~~~~~~~~~~~~~~~~~~ | |
6c292092 TH |
1093 | |
1094 | All memory amounts are in bytes. If a value which is not aligned to | |
1095 | PAGE_SIZE is written, the value may be rounded up to the closest | |
1096 | PAGE_SIZE multiple when read back. | |
1097 | ||
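The rounding behavior can be modeled as follows (a PAGE_SIZE of 4096 is assumed for illustration; the real value is architecture-dependent):

```python
PAGE_SIZE = 4096  # illustrative; architecture-dependent in reality

def round_up_to_page(nbytes):
    """Round a byte value up to the closest PAGE_SIZE multiple,
    as an unaligned write to a memory.* file may be reported
    when read back."""
    return -(-nbytes // PAGE_SIZE) * PAGE_SIZE
```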
1098 | memory.current | |
6c292092 TH |
1099 | A read-only single value file which exists on non-root |
1100 | cgroups. | |
1101 | ||
1102 | The total amount of memory currently being used by the cgroup | |
1103 | and its descendants. | |
1104 | ||
bf8d5d52 RG |
1105 | memory.min |
1106 | A read-write single value file which exists on non-root | |
1107 | cgroups. The default is "0". | |
1108 | ||
1109 | Hard memory protection. If the memory usage of a cgroup | |
1110 | is within its effective min boundary, the cgroup's memory | |
1111 | won't be reclaimed under any conditions. If there is no | |
unprotected reclaimable memory available, the OOM killer
9783aa99 CD |
1113 | is invoked. Above the effective min boundary (or |
1114 | effective low boundary if it is higher), pages are reclaimed | |
1115 | proportionally to the overage, reducing reclaim pressure for | |
1116 | smaller overages. | |
bf8d5d52 | 1117 | |
d0c3bacb | 1118 | Effective min boundary is limited by memory.min values of |
bf8d5d52 RG |
1119 | all ancestor cgroups. If there is memory.min overcommitment |
1120 | (child cgroup or cgroups are requiring more protected memory | |
1121 | than parent will allow), then each child cgroup will get | |
1122 | the part of parent's protection proportional to its | |
1123 | actual memory usage below memory.min. | |
1124 | ||
1125 | Putting more memory than generally available under this | |
1126 | protection is discouraged and may lead to constant OOMs. | |
1127 | ||
1128 | If a memory cgroup is not populated with processes, | |
1129 | its memory.min is ignored. | |
1130 | ||
6c292092 | 1131 | memory.low |
6c292092 TH |
1132 | A read-write single value file which exists on non-root |
1133 | cgroups. The default is "0". | |
1134 | ||
7854207f RG |
1135 | Best-effort memory protection. If the memory usage of a |
1136 | cgroup is within its effective low boundary, the cgroup's | |
6ee0fac1 JH |
1137 | memory won't be reclaimed unless there is no reclaimable |
1138 | memory available in unprotected cgroups. | |
822bbba0 | 1139 | Above the effective low boundary (or |
9783aa99 CD |
1140 | effective min boundary if it is higher), pages are reclaimed |
1141 | proportionally to the overage, reducing reclaim pressure for | |
1142 | smaller overages. | |
7854207f RG |
1143 | |
1144 | Effective low boundary is limited by memory.low values of | |
1145 | all ancestor cgroups. If there is memory.low overcommitment | |
bf8d5d52 | 1146 | (child cgroup or cgroups are requiring more protected memory |
7854207f | 1147 | than parent will allow), then each child cgroup will get |
bf8d5d52 | 1148 | the part of parent's protection proportional to its |
7854207f | 1149 | actual memory usage below memory.low. |
6c292092 TH |
1150 | |
1151 | Putting more memory than generally available under this | |
1152 | protection is discouraged. | |
1153 | ||
1154 | memory.high | |
6c292092 TH |
1155 | A read-write single value file which exists on non-root |
1156 | cgroups. The default is "max". | |
1157 | ||
1158 | Memory usage throttle limit. This is the main mechanism to | |
1159 | control memory usage of a cgroup. If a cgroup's usage goes | |
1160 | over the high boundary, the processes of the cgroup are | |
1161 | throttled and put under heavy reclaim pressure. | |
1162 | ||
1163 | Going over the high limit never invokes the OOM killer and | |
1164 | under extreme conditions the limit may be breached. | |
1165 | ||
1166 | memory.max | |
6c292092 TH |
1167 | A read-write single value file which exists on non-root |
1168 | cgroups. The default is "max". | |
1169 | ||
1170 | Memory usage hard limit. This is the final protection | |
1171 | mechanism. If a cgroup's memory usage reaches this limit and | |
1172 | can't be reduced, the OOM killer is invoked in the cgroup. | |
1173 | Under certain circumstances, the usage may go over the limit | |
1174 | temporarily. | |
1175 | ||
db33ec37 KK |
In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.

Some kinds of allocations don't invoke the OOM killer. The caller
could retry them differently, return -ENOMEM to userspace, or
silently ignore them in cases like disk readahead.
1182 | ||
6c292092 TH |
1183 | This is the ultimate protection mechanism. As long as the |
1184 | high limit is used and monitored properly, this limit's | |
1185 | utility is limited to providing the final safety net. | |
1186 | ||
3d8b38eb RG |
1187 | memory.oom.group |
1188 | A read-write single value file which exists on non-root | |
1189 | cgroups. The default value is "0". | |
1190 | ||
1191 | Determines whether the cgroup should be treated as | |
1192 | an indivisible workload by the OOM killer. If set, | |
1193 | all tasks belonging to the cgroup or to its descendants | |
1194 | (if the memory cgroup is not a leaf cgroup) are killed | |
1195 | together or not at all. This can be used to avoid | |
1196 | partial kills to guarantee workload integrity. | |
1197 | ||
1198 | Tasks with the OOM protection (oom_score_adj set to -1000) | |
1199 | are treated as an exception and are never killed. | |
1200 | ||
1201 | If the OOM killer is invoked in a cgroup, it's not going | |
to kill any tasks outside of this cgroup, regardless of the
memory.oom.group values of ancestor cgroups.
1204 | ||
6c292092 | 1205 | memory.events |
6c292092 TH |
1206 | A read-only flat-keyed file which exists on non-root cgroups. |
1207 | The following entries are defined. Unless specified | |
1208 | otherwise, a value change in this file generates a file | |
1209 | modified event. | |
1210 | ||
1e577f97 SB |
1211 | Note that all fields in this file are hierarchical and the |
1212 | file modified event can be generated due to an event down the | |
hierarchy. For the local events at the cgroup level, see
memory.events.local.
1215 | ||
6c292092 | 1216 | low |
6c292092 TH |
1217 | The number of times the cgroup is reclaimed due to |
1218 | high memory pressure even though its usage is under | |
1219 | the low boundary. This usually indicates that the low | |
1220 | boundary is over-committed. | |
1221 | ||
1222 | high | |
6c292092 TH |
1223 | The number of times processes of the cgroup are |
1224 | throttled and routed to perform direct memory reclaim | |
1225 | because the high memory boundary was exceeded. For a | |
1226 | cgroup whose memory usage is capped by the high limit | |
1227 | rather than global memory pressure, this event's | |
1228 | occurrences are expected. | |
1229 | ||
1230 | max | |
6c292092 TH |
1231 | The number of times the cgroup's memory usage was |
1232 | about to go over the max boundary. If direct reclaim | |
8e675f7a | 1233 | fails to bring it down, the cgroup goes to OOM state. |
6c292092 TH |
1234 | |
1235 | oom | |
8e675f7a KK |
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.
1238 | ||
7a1adfdd RG |
1239 | This event is not raised if the OOM killer is not |
1240 | considered as an option, e.g. for failed high-order | |
db33ec37 | 1241 | allocations or if caller asked to not retry attempts. |
7a1adfdd | 1242 | |
8e675f7a | 1243 | oom_kill |
8e675f7a KK |
1244 | The number of processes belonging to this cgroup |
1245 | killed by any kind of OOM killer. | |
6c292092 | 1246 | |
1e577f97 SB |
1247 | memory.events.local |
1248 | Similar to memory.events but the fields in the file are local | |
1249 | to the cgroup i.e. not hierarchical. The file modified event | |
1250 | generated on this file reflects only the local events. | |
1251 | ||
587d9f72 | 1252 | memory.stat |
587d9f72 JW |
1253 | A read-only flat-keyed file which exists on non-root cgroups. |
1254 | ||
1255 | This breaks down the cgroup's memory footprint into different | |
1256 | types of memory, type-specific details, and other information | |
1257 | on the state and past events of the memory management system. | |
1258 | ||
1259 | All memory amounts are in bytes. | |
1260 | ||
1261 | The entries are ordered to be human readable, and new entries | |
1262 | can show up in the middle. Don't rely on items remaining in a | |
1263 | fixed position; use the keys to look up specific values! | |
1264 | ||
a21e7bb3 KK |
Entries marked with the 'npn' (non-per-node) tag have no
per-node counter and do not show up in memory.numa_stat.
5f9a4f4a | 1268 | |
587d9f72 | 1269 | anon |
587d9f72 JW |
1270 | Amount of memory used in anonymous mappings such as |
1271 | brk(), sbrk(), and mmap(MAP_ANONYMOUS) | |
1272 | ||
1273 | file | |
587d9f72 JW |
1274 | Amount of memory used to cache filesystem data, |
1275 | including tmpfs and shared memory. | |
1276 | ||
12580e4b | 1277 | kernel_stack |
12580e4b VD |
1278 | Amount of memory allocated to kernel stacks. |
1279 | ||
f0c0c115 SB |
1280 | pagetables |
1281 | Amount of memory allocated for page tables. | |
1282 | ||
a21e7bb3 | 1283 | percpu (npn) |
772616b0 RG |
1284 | Amount of memory used for storing per-cpu kernel |
1285 | data structures. | |
1286 | ||
a21e7bb3 | 1287 | sock (npn) |
4758e198 JW |
1288 | Amount of memory used in network transmission buffers |
1289 | ||
9a4caf1e | 1290 | shmem |
9a4caf1e JW |
1291 | Amount of cached filesystem data that is swap-backed, |
1292 | such as tmpfs, shm segments, shared anonymous mmap()s | |
1293 | ||
587d9f72 | 1294 | file_mapped |
587d9f72 JW |
1295 | Amount of cached filesystem data mapped with mmap() |
1296 | ||
1297 | file_dirty | |
587d9f72 JW |
1298 | Amount of cached filesystem data that was modified but |
1299 | not yet written back to disk | |
1300 | ||
1301 | file_writeback | |
587d9f72 JW |
1302 | Amount of cached filesystem data that was modified and |
1303 | is currently being written back to disk | |
1304 | ||
1ff9e6e1 CD |
1305 | anon_thp |
1306 | Amount of memory used in anonymous mappings backed by | |
1307 | transparent hugepages | |
b8eddff8 JW |
1308 | |
1309 | file_thp | |
1310 | Amount of cached filesystem data backed by transparent | |
1311 | hugepages | |
1312 | ||
1313 | shmem_thp | |
1314 | Amount of shm, tmpfs, shared anonymous mmap()s backed by | |
1315 | transparent hugepages | |
1ff9e6e1 | 1316 | |
633b11be | 1317 | inactive_anon, active_anon, inactive_file, active_file, unevictable |
587d9f72 JW |
1318 | Amount of memory, swap-backed and filesystem-backed, |
1319 | on the internal memory management lists used by the | |
1603c8d1 CD |
1320 | page reclaim algorithm. |
1321 | ||
As these represent internal list state (e.g. shmem pages are on anon
1323 | memory management lists), inactive_foo + active_foo may not be equal to | |
1324 | the value for the foo counter, since the foo counter is type-based, not | |
1325 | list-based. | |
587d9f72 | 1326 | |
27ee57c9 | 1327 | slab_reclaimable |
27ee57c9 VD |
1328 | Part of "slab" that might be reclaimed, such as |
1329 | dentries and inodes. | |
1330 | ||
1331 | slab_unreclaimable | |
27ee57c9 VD |
1332 | Part of "slab" that cannot be reclaimed on memory |
1333 | pressure. | |
1334 | ||
a21e7bb3 | 1335 | slab (npn) |
5f9a4f4a MS |
1336 | Amount of memory used for storing in-kernel data |
1337 | structures. | |
587d9f72 | 1338 | |
8d3fe09d MS |
1339 | workingset_refault_anon |
1340 | Number of refaults of previously evicted anonymous pages. | |
b340959e | 1341 | |
8d3fe09d MS |
1342 | workingset_refault_file |
1343 | Number of refaults of previously evicted file pages. | |
b340959e | 1344 | |
8d3fe09d MS |
1345 | workingset_activate_anon |
1346 | Number of refaulted anonymous pages that were immediately | |
1347 | activated. | |
1348 | ||
1349 | workingset_activate_file | |
1350 | Number of refaulted file pages that were immediately activated. | |
1351 | ||
1352 | workingset_restore_anon | |
1353 | Number of restored anonymous pages which have been detected as | |
1354 | an active workingset before they got reclaimed. | |
1355 | ||
1356 | workingset_restore_file | |
1357 | Number of restored file pages which have been detected as an | |
1358 | active workingset before they got reclaimed. | |
a6f5576b | 1359 | |
b340959e | 1360 | workingset_nodereclaim |
b340959e RG |
1361 | Number of times a shadow node has been reclaimed |
1362 | ||
a21e7bb3 | 1363 | pgfault (npn) |
5f9a4f4a MS |
1364 | Total number of page faults incurred |
1365 | ||
a21e7bb3 | 1366 | pgmajfault (npn) |
5f9a4f4a MS |
1367 | Number of major page faults incurred |
1368 | ||
a21e7bb3 | 1369 | pgrefill (npn) |
2262185c RG |
1370 | Amount of scanned pages (in an active LRU list) |
1371 | ||
a21e7bb3 | 1372 | pgscan (npn) |
2262185c RG |
1373 | Amount of scanned pages (in an inactive LRU list) |
1374 | ||
a21e7bb3 | 1375 | pgsteal (npn) |
2262185c RG |
1376 | Amount of reclaimed pages |
1377 | ||
a21e7bb3 | 1378 | pgactivate (npn) |
2262185c RG |
1379 | Amount of pages moved to the active LRU list |
1380 | ||
a21e7bb3 | 1381 | pgdeactivate (npn) |
03189e8e | 1382 | Amount of pages moved to the inactive LRU list |
2262185c | 1383 | |
a21e7bb3 | 1384 | pglazyfree (npn) |
2262185c RG |
1385 | Amount of pages postponed to be freed under memory pressure |
1386 | ||
a21e7bb3 | 1387 | pglazyfreed (npn) |
2262185c RG |
1388 | Amount of reclaimed lazyfree pages |
1389 | ||
a21e7bb3 | 1390 | thp_fault_alloc (npn) |
1ff9e6e1 | 1391 | Number of transparent hugepages which were allocated to satisfy |
2a8bef32 YS |
1392 | a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE |
1393 | is not set. | |
1ff9e6e1 | 1394 | |
a21e7bb3 | 1395 | thp_collapse_alloc (npn) |
1ff9e6e1 CD |
1396 | Number of transparent hugepages which were allocated to allow |
1397 | collapsing an existing range of pages. This counter is not | |
1398 | present when CONFIG_TRANSPARENT_HUGEPAGE is not set. | |
1399 | ||
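The flat-keyed counters above can be combined into derived metrics. A minimal
sketch of parsing such a file with awk follows; the counter values are
fabricated for illustration:

```shell
# Extract named counters from memory.stat-style "key value" lines.
# Sample values below are made up for illustration.
stat='anon 1073741824
anon_thp 536870912
workingset_refault_file 120'

get_key() {
    printf '%s\n' "$stat" | awk -v k="$1" '$1 == k { print $2 }'
}

anon=$(get_key anon)
anon_thp=$(get_key anon_thp)
# Fraction of anonymous memory backed by transparent hugepages, in percent.
thp_pct=$(( anon_thp * 100 / anon ))
echo "$thp_pct"    # prints 50 for the sample values
```

Matching on the key rather than the line position follows the conventions
section: new keys may appear in the middle of the file.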
memory.numa_stat
  A read-only nested-keyed file which exists on non-root cgroups.

  This breaks down the cgroup's memory footprint into different
  types of memory, type-specific details, and other information
  per node on the state of the memory management system.

  This is useful for providing visibility into the NUMA locality
  information within a memcg since the pages are allowed to be
  allocated from any physical node. One use case is evaluating
  application performance by combining this information with the
  application's CPU allocation.

  All memory amounts are in bytes.

  The output format of memory.numa_stat is::

    type N0=<bytes in node 0> N1=<bytes in node 1> ...

  The entries are ordered to be human readable, and new entries
  can show up in the middle. Don't rely on items remaining in a
  fixed position; use the keys to look up specific values!

  The entries correspond to those in memory.stat.

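A per-node value can be pulled out of such a nested-keyed line by splitting on
the "N<node>=" keys. A small sketch, with a fabricated sample line:

```shell
# Look up the value for one node in a memory.numa_stat-style line.
# The sample line is fabricated for illustration.
line='anon N0=6291456 N1=2097152'

node_bytes() {  # node_bytes <line> <node key, e.g. N1>
    printf '%s\n' "$1" | tr ' ' '\n' | awk -F= -v k="$2" '$1 == k { print $2 }'
}

node_bytes "$line" N1    # prints 2097152
```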
memory.swap.current
  A read-only single value file which exists on non-root
  cgroups.

  The total amount of swap currently being used by the cgroup
  and its descendants.

memory.swap.high
  A read-write single value file which exists on non-root
  cgroups. The default is "max".

  Swap usage throttle limit. If a cgroup's swap usage exceeds
  this limit, all its further allocations will be throttled to
  allow userspace to implement custom out-of-memory procedures.

  This limit marks a point of no return for the cgroup. It is NOT
  designed to manage the amount of swapping a workload does
  during regular operation. Compare to memory.swap.max, which
  prohibits swapping past a set amount, but lets the cgroup
  continue unimpeded as long as other memory can be reclaimed.

  Healthy workloads are not expected to reach this limit.

memory.swap.max
  A read-write single value file which exists on non-root
  cgroups. The default is "max".

  Swap usage hard limit. If a cgroup's swap usage reaches this
  limit, anonymous memory of the cgroup will not be swapped out.

memory.swap.events
  A read-only flat-keyed file which exists on non-root cgroups.
  The following entries are defined. Unless specified
  otherwise, a value change in this file generates a file
  modified event.

  high
    The number of times the cgroup's swap usage was over
    the high threshold.

  max
    The number of times the cgroup's swap usage was about
    to go over the max boundary and swap allocation
    failed.

  fail
    The number of times swap allocation failed either
    because of running out of swap system-wide or max
    limit.

  When reduced under the current usage, the existing swap
  entries are reclaimed gradually and the swap usage may stay
  higher than the limit for an extended period of time. This
  reduces the impact on the workload and memory management.

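Since the event counters only ever increase, a monitoring agent typically
diffs two snapshots of the file. A sketch with fabricated snapshots:

```shell
# Detect new "max" events between two reads of a memory.swap.events-style
# file. Both snapshots are fabricated for illustration.
before='high 0
max 3
fail 4'
after='high 0
max 5
fail 6'

events() { printf '%s\n' "$1" | awk -v k="$2" '$1 == k { print $2 }'; }

new_max=$(( $(events "$after" max) - $(events "$before" max) ))
echo "$new_max"    # prints 2
```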
memory.pressure
  A read-only nested-key file which exists on non-root cgroups.

  Shows pressure stall information for memory. See
  :ref:`Documentation/accounting/psi.rst <psi>` for details.


Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

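The decision such an agent makes can be as simple as comparing memory.current
against memory.high. A minimal sketch of that check, operating on values
passed in rather than on a live cgroup (the byte values are illustrative):

```shell
# Decide whether a cgroup has breached its high limit.
check_cgroup() {  # check_cgroup <memory.current bytes> <memory.high bytes>
    if [ "$1" -gt "$2" ]; then
        echo throttled
    else
        echo ok
    fi
}

check_cgroup 900000000 1073741824     # prints ok
check_cgroup 2000000000 1073741824    # prints throttled
```

A real agent would read the two files from the cgroup's directory and react,
e.g. by raising the limit or terminating the workload.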
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also operate as
performant with a small amount of memory. A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; "memory.pressure" provides such a measure.


Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released. Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is indeterminate; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.


IO
--

The "io" controller regulates the distribution of IO resources. This
controller implements both weight based and absolute bandwidth or IOPS
limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.


IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
  A read-only nested-keyed file.

  Lines are keyed by $MAJ:$MIN device numbers and not ordered.
  The following nested keys are defined.

  ======  =====================
  rbytes  Bytes read
  wbytes  Bytes written
  rios    Number of read IOs
  wios    Number of write IOs
  dbytes  Bytes discarded
  dios    Number of discard IOs
  ======  =====================

  An example read output follows::

    8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
    8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

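Because the nested keys are named, aggregating across devices is a matter of
splitting each "key=value" field. A sketch that totals rbytes over the example
output above:

```shell
# Total rbytes across all devices in io.stat-style output.
iostat='8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021'

total=$(printf '%s\n' "$iostat" | awk '{
    for (i = 2; i <= NF; i++) {      # field 1 is the $MAJ:$MIN key
        split($i, kv, "=")
        if (kv[1] == "rbytes") sum += kv[2]
    }
} END { print sum }')
echo "$total"    # prints 91889664 for the example lines
```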
io.cost.qos
  A read-write nested-keyed file which exists only on the root
  cgroup.

  This file configures the Quality of Service of the IO cost
  model based controller (CONFIG_BLK_CGROUP_IOCOST) which
  currently implements "io.weight" proportional control. Lines
  are keyed by $MAJ:$MIN device numbers and not ordered. The
  line for a given device is populated on the first write for
  the device on "io.cost.qos" or "io.cost.model". The following
  nested keys are defined.

  ======  =====================================
  enable  Weight-based control enable
  ctrl    "auto" or "user"
  rpct    Read latency percentile [0, 100]
  rlat    Read latency threshold
  wpct    Write latency percentile [0, 100]
  wlat    Write latency threshold
  min     Minimum scaling percentage [1, 10000]
  max     Maximum scaling percentage [1, 10000]
  ======  =====================================

  The controller is disabled by default and can be enabled by
  setting "enable" to 1. "rpct" and "wpct" parameters default
  to zero and the controller uses internal device saturation
  state to adjust the overall IO rate between "min" and "max".

  When a better control quality is needed, latency QoS
  parameters can be configured. For example::

    8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0

  shows that on sdb, the controller is enabled, will consider
  the device saturated if the 95th percentile of read completion
  latencies is above 75ms or that of write completion latencies is
  above 150ms, and will adjust the overall IO issue rate between
  50% and 150% accordingly.

  The lower the saturation point, the better the latency QoS at
  the cost of aggregate bandwidth. The narrower the allowed
  adjustment range between "min" and "max", the more closely the
  IO behavior conforms to the cost model. Note that the IO issue
  base rate may be far off from 100% and setting "min" and "max"
  blindly can lead to a significant loss of device capacity or
  control quality. "min" and "max" are useful for regulating
  devices which show wide temporary behavior changes - e.g. an
  SSD which accepts writes at the line speed for a while and
  then completely stalls for multiple seconds.

  When "ctrl" is "auto", the parameters are controlled by the
  kernel and may change automatically. Setting "ctrl" to "user"
  or setting any of the percentile and latency parameters puts
  it into "user" mode and disables the automatic changes. The
  automatic mode can be restored by setting "ctrl" to "auto".

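Putting the above together, enabling iocost with explicit latency targets is a
single write to the root cgroup. The device number, latency values, and mount
point below are illustrative, not recommendations:

```shell
# Enable iocost control for device 8:16 with explicit latency QoS targets.
# Requires CONFIG_BLK_CGROUP_IOCOST and write access to the root cgroup;
# the values and the /sys/fs/cgroup mount point are illustrative.
echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000" \
    > /sys/fs/cgroup/io.cost.qos
```

Writing the latency parameters implicitly switches "ctrl" to "user", as
described above.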
io.cost.model
  A read-write nested-keyed file which exists only on the root
  cgroup.

  This file configures the cost model of the IO cost model based
  controller (CONFIG_BLK_CGROUP_IOCOST) which currently
  implements "io.weight" proportional control. Lines are keyed
  by $MAJ:$MIN device numbers and not ordered. The line for a
  given device is populated on the first write for the device on
  "io.cost.qos" or "io.cost.model". The following nested keys
  are defined.

  =====  ================================
  ctrl   "auto" or "user"
  model  The cost model in use - "linear"
  =====  ================================

  When "ctrl" is "auto", the kernel may change all parameters
  dynamically. When "ctrl" is set to "user" or any other
  parameters are written to, "ctrl" becomes "user" and the
  automatic changes are disabled.

  When "model" is "linear", the following model parameters are
  defined.

  =============  ========================================
  [r|w]bps       The maximum sequential IO throughput
  [r|w]seqiops   The maximum 4k sequential IOs per second
  [r|w]randiops  The maximum 4k random IOs per second
  =============  ========================================

  From the above, the builtin linear model determines the base
  costs of a sequential and random IO and the cost coefficient
  for the IO size. While simple, this model can cover most
  common device classes acceptably.

  The IO cost model isn't expected to be accurate in an absolute
  sense and is scaled to the device behavior dynamically.

  If needed, tools/cgroup/iocost_coef_gen.py can be used to
  generate device-specific coefficients.

io.weight
  A read-write flat-keyed file which exists on non-root cgroups.
  The default is "default 100".

  The first line is the default weight applied to devices
  without specific override. The rest are overrides keyed by
  $MAJ:$MIN device numbers and not ordered. The weights are in
  the range [1, 10000] and specify the relative amount of IO
  time the cgroup can use in relation to its siblings.

  The default weight can be updated by writing either "default
  $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
  "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".

  An example read output follows::

    default 100
    8:16 200
    8:0 50

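Weights are proportional among contending siblings: a cgroup's share of IO
time on a device is its weight divided by the sum of the active siblings'
weights. A sketch of that arithmetic, assuming two hypothetical sibling
cgroups with weights 200 and 100 contending on the same device:

```shell
# Proportional IO time share between two contending siblings, in percent
# (integer arithmetic, rounded down). The weights are illustrative.
share_pct() {  # share_pct <my weight> <sibling weight>
    echo $(( $1 * 100 / ($1 + $2) ))
}

share_pct 200 100    # prints 66 (about 2/3 of the IO time)
```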
io.max
  A read-write nested-keyed file which exists on non-root
  cgroups.

  BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
  device numbers and not ordered. The following nested keys are
  defined.

  =====  ==================================
  rbps   Max read bytes per second
  wbps   Max write bytes per second
  riops  Max read IO operations per second
  wiops  Max write IO operations per second
  =====  ==================================

  When writing, any number of nested key-value pairs can be
  specified in any order. "max" can be specified as the value
  to remove a specific limit. If the same key is specified
  multiple times, the outcome is undefined.

  BPS and IOPS are measured in each IO direction and IOs are
  delayed if the limit is reached. Temporary bursts are allowed.

  Setting read limit at 2M BPS and write at 120 IOPS for 8:16::

    echo "8:16 rbps=2097152 wiops=120" > io.max

  Reading returns the following::

    8:16 rbps=2097152 wbps=max riops=max wiops=120

  Write IOPS limit can be removed by writing the following::

    echo "8:16 wiops=max" > io.max

  Reading now returns the following::

    8:16 rbps=2097152 wbps=max riops=max wiops=max

io.pressure
  A read-only nested-key file which exists on non-root cgroups.

  Shows pressure stall information for IO. See
  :ref:`Documentation/accounting/psi.rst <psi>` for details.


Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

The io controller, in conjunction with the memory controller,
implements control of page cache writeback IOs. The memory controller
defines the memory domain that dirty memory ratio is calculated and
maintained for and the io controller defines the io domain which
writes out dirty pages for the memory domain. Both system-wide and
per-cgroup dirty memory states are examined and the more restrictive
of the two is enforced.

cgroup writeback requires explicit support from the underlying
filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
attributed to the root cgroup.

There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page, while writeback is tracked per inode. For the purpose of
writeback, an inode is assigned to a cgroup and all IO requests to
write dirty pages from the inode are attributed to that cgroup.

As cgroup ownership for memory is tracked per page, there can be pages
which are associated with different cgroups than the one the inode is
associated with. These are called foreign pages. The writeback
constantly keeps track of foreign pages and, if a particular foreign
cgroup becomes the majority over a certain period of time, switches
the ownership of the inode to that cgroup.

While this model is enough for most use cases where a given inode is
mostly dirtied by a single cgroup even when the main writing cgroup
changes over time, use cases where multiple cgroups write to a single
inode simultaneously are not supported well. In such circumstances, a
significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.

The sysctl knobs which affect writeback behavior are applied to cgroup
writeback as follows.

  vm.dirty_background_ratio, vm.dirty_ratio
    These ratios apply the same to cgroup writeback with the
    amount of available memory capped by limits imposed by the
    memory controller and system-wide clean memory.

  vm.dirty_background_bytes, vm.dirty_bytes
    For cgroup writeback, this is calculated into a ratio against
    total available memory and applied the same way as
    vm.dirty[_background]_ratio.

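The bytes-to-ratio conversion described above can be sketched numerically.
This is a simplified illustration of the idea, not the kernel's exact
arithmetic, and all values are made up:

```shell
# vm.dirty_bytes is converted into a ratio of total available memory,
# and that ratio is then applied against the cgroup's own available
# memory. Integer percent arithmetic; values are illustrative.
total_mem=$(( 16 * 1024 * 1024 * 1024 ))   # 16 GiB system
dirty_bytes=$(( 1024 * 1024 * 1024 ))      # vm.dirty_bytes = 1 GiB
cgroup_mem=$(( 4 * 1024 * 1024 * 1024 ))   # cgroup's available memory

ratio_pct=$(( dirty_bytes * 100 / total_mem ))   # ~6% of total memory
cgroup_dirty=$(( cgroup_mem * ratio_pct / 100 )) # cgroup's dirty threshold
echo "$ratio_pct $cgroup_dirty"
```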

IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload.

The limits are only applied at the peer level in the hierarchy. This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

        [root]
      /    |    \
     A     B     C
    / \    |
   D   F   G


So the ideal way to configure this is to set io.latency in groups A, B, and C.
Generally you do not want to set a value lower than the latency your device
supports. Experiment to find the value that works best for your workload.
Start at higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation. Use the avg_lat value as a basis for
your real setting, setting it 10-15% higher than the value in io.stat.
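The headroom calculation above is simple arithmetic. A sketch using 12%
headroom; the avg_lat value, cgroup name, and device number are illustrative:

```shell
# Derive an io.latency target from an observed avg_lat with 10-15%
# headroom (12% here). The observed value is illustrative.
avg_lat_usec=2400
target=$(( avg_lat_usec * 112 / 100 ))
echo "$target"    # prints 2688

# Applying it would then look like (hypothetical cgroup path):
#   echo "8:16 target=$target" > /sys/fs/cgroup/A/io.latency
```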

How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency is work conserving; so as long as everybody is meeting their latency
target the controller doesn't do anything. Once a group starts missing its
target it begins throttling any peer group that has a higher target than itself.
This throttling takes 2 forms:

- Queue depth throttling. This is the number of outstanding IOs a group is
  allowed to have. We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.

- Artificial delay induction. There are certain types of IO that cannot be
  throttled without possibly adversely affecting higher priority groups. This
  includes swapping and metadata IO. These types of IO are allowed to occur
  normally; however, they are "charged" to the originating group. If the
  originating group is being throttled you will see the use_delay and delay
  fields in io.stat increase. The delay value is how many microseconds are
  being added to any process that runs in this group. Because this number can
  grow quite large if there is a lot of swapping or metadata IO occurring, we
  limit the individual delay events to 1 second at a time.

Once the victimized group starts meeting its latency target again it will start
unthrottling any peer groups that were throttled previously. If the victimized
group simply stops doing IO the global counter will unthrottle appropriately.

IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~

io.latency
  This takes a similar format as the other controllers.

    "MAJOR:MINOR target=<target time in microseconds>"

io.stat
  If the controller is enabled you will see extra stats in io.stat in
  addition to the normal ones.

  depth
    This is the current queue depth for the group.

  avg_lat
    This is an exponential moving average with a decay rate of 1/exp
    bound by the sampling interval. The decay rate interval can be
    calculated by multiplying the win value in io.stat by the
    corresponding number of samples based on the win value.

  win
    The sampling window size in milliseconds. This is the minimum
    duration of time between evaluation events. Windows only elapse
    with IO activity. Idle periods extend the most recent window.


PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.

The number of tasks in a cgroup can be exhausted in ways which other
controllers cannot prevent, thus warranting its own controller. For
example, a fork bomb is likely to exhaust the number of tasks before
hitting memory restrictions.

Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.


PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
  A read-write single value file which exists on non-root
  cgroups. The default is "max".

  Hard limit of number of processes.

pids.current
  A read-only single value file which exists on all cgroups.

  The number of processes currently in the cgroup and its
  descendants.

Organisational operations are not blocked by cgroup policies, so it is
possible to have pids.current > pids.max. This can be done by either
setting the limit to be smaller than pids.current, or attaching enough
processes to the cgroup such that pids.current is larger than
pids.max. However, it is not possible to violate a cgroup PID policy
through fork() or clone(). These will return -EAGAIN if the creation
of a new process would cause a cgroup policy to be violated.

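A monitoring script comparing the two files needs to handle the "max"
sentinel that pids.max may contain. A small sketch, operating on sample
values rather than a live cgroup:

```shell
# Check whether a cgroup is at its PID limit, handling the "max"
# sentinel in pids.max. Sample values are illustrative.
pids_at_limit() {  # pids_at_limit <pids.current> <pids.max>
    [ "$2" = "max" ] && { echo no; return; }
    if [ "$1" -ge "$2" ]; then echo yes; else echo no; fi
}

pids_at_limit 42 max     # prints no
pids_at_limit 128 128    # prints yes
```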

Cpuset
------

The "cpuset" controller provides a mechanism for constraining
the CPU and memory node placement of tasks to only the resources
specified in the cpuset interface files in a task's current cgroup.
This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
memory placement to reduce cross-node memory access and contention
can improve overall system performance.

The "cpuset" controller is hierarchical. That means the controller
cannot use CPUs or memory nodes not allowed in its parent.


Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~

cpuset.cpus
  A read-write multiple values file which exists on non-root
  cpuset-enabled cgroups.

  It lists the requested CPUs to be used by tasks within this
  cgroup. The actual list of CPUs to be granted, however, is
  subject to constraints imposed by its parent and can differ
  from the requested CPUs.

  The CPU numbers are comma-separated numbers or ranges.
  For example::

    # cat cpuset.cpus
    0-4,6,8-10

  An empty value indicates that the cgroup is using the same
  setting as the nearest cgroup ancestor with a non-empty
  "cpuset.cpus" or all the available CPUs if none is found.

  The value of "cpuset.cpus" stays constant until the next update
  and won't be affected by any CPU hotplug events.

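The comma-separated range format can be expanded into individual CPU numbers
with standard tools. A sketch using the example value from above:

```shell
# Expand a cpuset-style list ("0-4,6,8-10") into one CPU number per line.
expand_cpulist() {
    printf '%s\n' "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        # A bare number has no "hi" part; treat it as a one-element range.
        seq "$lo" "${hi:-$lo}"
    done
}

expand_cpulist "0-4,6,8-10" | wc -l    # prints 9 (CPUs 0,1,2,3,4,6,8,9,10)
```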
cpuset.cpus.effective
  A read-only multiple values file which exists on all
  cpuset-enabled cgroups.

  It lists the onlined CPUs that are actually granted to this
  cgroup by its parent. These CPUs are allowed to be used by
  tasks within the current cgroup.

  If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
  all the CPUs from the parent cgroup that can be available to
  be used by this cgroup. Otherwise, it should be a subset of
  "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
  can be granted. In this case, it will be treated just like an
  empty "cpuset.cpus".

  Its value will be affected by CPU hotplug events.

cpuset.mems
  A read-write multiple values file which exists on non-root
  cpuset-enabled cgroups.

  It lists the requested memory nodes to be used by tasks within
  this cgroup. The actual list of memory nodes granted, however,
  is subject to constraints imposed by its parent and can differ
  from the requested memory nodes.

  The memory node numbers are comma-separated numbers or ranges.
  For example::

    # cat cpuset.mems
    0-1,3

  An empty value indicates that the cgroup is using the same
  setting as the nearest cgroup ancestor with a non-empty
  "cpuset.mems" or all the available memory nodes if none
  is found.

  The value of "cpuset.mems" stays constant until the next update
  and won't be affected by any memory node hotplug events.

cpuset.mems.effective
  A read-only multiple values file which exists on all
  cpuset-enabled cgroups.

  It lists the onlined memory nodes that are actually granted to
  this cgroup by its parent. These memory nodes are allowed to
  be used by tasks within the current cgroup.

  If "cpuset.mems" is empty, it shows all the memory nodes from the
  parent cgroup that will be available to be used by this cgroup.
  Otherwise, it should be a subset of "cpuset.mems" unless none of
  the memory nodes listed in "cpuset.mems" can be granted. In this
  case, it will be treated just like an empty "cpuset.mems".

  Its value will be affected by memory node hotplug events.

cpuset.cpus.partition
  A read-write single value file which exists on non-root
  cpuset-enabled cgroups. This flag is owned by the parent cgroup
  and is not delegatable.

  It accepts only the following input values when written to.

    "root"   - a partition root
    "member" - a non-root member of a partition

  When set to be a partition root, the current cgroup is the
  root of a new partition or scheduling domain that comprises
  itself and all its descendants except those that are separate
  partition roots themselves and their descendants. The root
  cgroup is always a partition root.

  There are constraints on where a partition root can be set.
  It can only be set in a cgroup if all the following conditions
  are true.

  1) The "cpuset.cpus" is not empty and the list of CPUs is
     exclusive, i.e. they are not shared by any of its siblings.
  2) The parent cgroup is a partition root.
  3) The "cpuset.cpus" is also a proper subset of the parent's
     "cpuset.cpus.effective".
  4) There are no child cgroups with cpuset enabled. This is for
     eliminating corner cases that have to be handled if such a
     condition is allowed.

  Setting it to partition root will take the CPUs away from the
  effective CPUs of the parent cgroup. Once it is set, this
  file cannot be reverted back to "member" if there are any child
  cgroups with cpuset enabled.

  A parent partition cannot distribute all its CPUs to its
  child partitions. There must be at least one CPU left in the
  parent partition.

2039 | Once becoming a partition root, changes to "cpuset.cpus" is | |
2040 | generally allowed as long as the first condition above is true, | |
2041 | the change will not take away all the CPUs from the parent | |
2042 | partition and the new "cpuset.cpus" value is a superset of its | |
2043 | children's "cpuset.cpus" values. | |
2044 | ||
2045 | Sometimes, external factors like changes to ancestors' | |
2046 | "cpuset.cpus" or cpu hotplug can cause the state of the partition | |
2047 | root to change. On read, the "cpuset.sched.partition" file | |
2048 | can show the following values. | |
2049 | ||
2050 | "member" Non-root member of a partition | |
2051 | "root" Partition root | |
2052 | "root invalid" Invalid partition root | |
2053 | ||
2054 | It is a partition root if the first 2 partition root conditions | |
2055 | above are true and at least one CPU from "cpuset.cpus" is | |
2056 | granted by the parent cgroup. | |
2057 | ||
2058 | A partition root can become invalid if none of CPUs requested | |
2059 | in "cpuset.cpus" can be granted by the parent cgroup or the | |
2060 | parent cgroup is no longer a partition root itself. In this | |
2061 | case, it is not a real partition even though the restriction | |
2062 | of the first partition root condition above will still apply. | |
2063 | The cpu affinity of all the tasks in the cgroup will then be | |
2064 | associated with CPUs in the nearest ancestor partition. | |
2065 | ||
2066 | An invalid partition root can be transitioned back to a | |
2067 | real partition root if at least one of the requested CPUs | |
2068 | can now be granted by its parent. In this case, the cpu | |
2069 | affinity of all the tasks in the formerly invalid partition | |
2070 | will be associated to the CPUs of the newly formed partition. | |
2071 | Changing the partition state of an invalid partition root to | |
2072 | "member" is always allowed even if child cpusets are present. | |
2073 | ||
4ec22e9c | 2074 | |
4ad5a321 RG |
2075 | Device controller |
2076 | ----------------- | |
2077 | ||
2078 | Device controller manages access to device files. It includes both | |
2079 | creation of new device files (using mknod), and access to the | |
2080 | existing device files. | |
2081 | ||
2082 | Cgroup v2 device controller has no interface files and is implemented | |
2083 | on top of cgroup BPF. To control access to device files, a user may | |
2084 | create bpf programs of the BPF_CGROUP_DEVICE type and attach them | |
2085 | to cgroups. On an attempt to access a device file, corresponding | |
2086 | BPF programs will be executed, and depending on the return value | |
2087 | the attempt will succeed or fail with -EPERM. | |
2088 | ||
2089 | A BPF_CGROUP_DEVICE program takes a pointer to the bpf_cgroup_dev_ctx | |
2090 | structure, which describes the device access attempt: access type | |
2091 | (mknod/read/write) and device (type, major and minor numbers). | |
2092 | If the program returns 0, the attempt fails with -EPERM, otherwise | |
2093 | it succeeds. | |
2094 | ||
2095 | An example of BPF_CGROUP_DEVICE program may be found in the kernel | |
2096 | source tree in the tools/testing/selftests/bpf/dev_cgroup.c file. | |
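
The allow/deny decision such a program makes can be modeled in
userspace.  The sketch below is illustrative only: the allowlist
contents and the function name are invented for this example, and it
models the return-value convention (0 means -EPERM) rather than actual
BPF bytecode.

```python
# Hypothetical userspace model of the decision a BPF_CGROUP_DEVICE
# program implements.  The parameters mirror the bpf_cgroup_dev_ctx
# fields: access type (mknod/read/write) and device type/major/minor.

# Each allowlist entry: (device_type, major, minor, allowed_accesses)
ALLOWLIST = [
    ("char", 1, 3, {"read", "write"}),  # e.g. a read-write char device
    ("char", 1, 8, {"read"}),           # e.g. a read-only char device
]

def device_access_allowed(dev_type, major, minor, access):
    """Return True (program would return 1, access succeeds) if the
    access matches an allowlist entry, False (program would return 0,
    i.e. the attempt fails with -EPERM) otherwise."""
    for d_type, d_major, d_minor, allowed in ALLOWLIST:
        if (d_type, d_major, d_minor) == (dev_type, major, minor):
            return access in allowed
    return False
```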


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

rdma.max
  A read-write nested-keyed file that exists for all the cgroups
  except root that describes the current configured resource limit
  for an RDMA/IB device.

  Lines are keyed by device name and are not ordered.
  Each line contains a space-separated resource name and its configured
  limit that can be distributed.

  The following nested keys are defined.

    ==========  =============================
    hca_handle  Maximum number of HCA Handles
    hca_object  Maximum number of HCA Objects
    ==========  =============================

  An example for mlx4 and ocrdma devices follows::

    mlx4_0 hca_handle=2 hca_object=2000
    ocrdma1 hca_handle=3 hca_object=max
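
  A limit is configured by writing a device line in the same format;
  for example (the device name is illustrative)::

    # echo "mlx4_0 hca_handle=3 hca_object=max" > rdma.max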

rdma.current
  A read-only file that describes current resource usage.
  It exists for all the cgroups except root.

  An example for mlx4 and ocrdma devices follows::

    mlx4_0 hca_handle=1 hca_object=20
    ocrdma1 hca_handle=1 hca_object=23

HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group
and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

hugetlb.<hugepagesize>.current
  Show current usage for "hugepagesize" hugetlb.  It exists for all
  the cgroups except root.

hugetlb.<hugepagesize>.max
  Set/show the hard limit of "hugepagesize" hugetlb usage.
  The default value is "max".  It exists for all the cgroups except root.
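
  For example, to cap the 2MB huge page usage of a cgroup at 1GiB
  (the cgroup path here is illustrative)::

    # echo 1G > /sys/fs/cgroup/example/hugetlb.2MB.max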

hugetlb.<hugepagesize>.events
  A read-only flat-keyed file which exists on non-root cgroups.

  max
    The number of allocation failures due to the HugeTLB limit

hugetlb.<hugepagesize>.events.local
  Similar to hugetlb.<hugepagesize>.events but the fields in the file
  are local to the cgroup, i.e. not hierarchical.  The file modified event
  generated on this file reflects only the local events.

Misc
----

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of the
root cgroup.  This child cgroup's weight is dependent on its thread's
nice level.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
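
The scaling can be sketched as follows; the dictionary below holds only
a few representative sched_prio_to_weight entries and the helper name
is invented for illustration.

```python
# A few representative nice-level weights from the kernel's
# sched_prio_to_weight table (kernel/sched/core.c), keyed by nice level.
SCHED_PRIO_TO_WEIGHT = {-20: 88761, -10: 9548, 0: 1024, 10: 110, 19: 15}

def scaled_weight(nice):
    """Scale a nice-level weight so the neutral nice-0 entry maps to
    100 (the default cgroup weight) instead of 1024."""
    return SCHED_PRIO_TO_WEIGHT[nice] * 100 / 1024

# A nice-0 thread thus behaves like a child cgroup with the default
# weight of 100; positive nice levels map to proportionally smaller
# weights, negative nice levels to larger ones.
```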


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data that
is undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside a cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root.  The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary.  cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

wbc_init_bio(@wbc, @bio)
  Should be called for each bio carrying writeback data and
  associates the bio with the inode's owner cgroup and the
  corresponding request queue.  This must be called after
  a queue (device) has been associated with the bio and
  before submission.

wbc_account_cgroup_owner(@wbc, @page, @bytes)
  Should be called for each data segment being written out.
  While this function doesn't care exactly when it's called
  during the writeback session, it's the easiest and most
  natural to call it as data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags.  This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup.  Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion.  There is no one easy solution
for the problem.  Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use the "cgroup.controllers"
  file at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue was exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but more
importantly the support for multiple hierarchies restricted how cgroup
could be used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of the proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel ended up
exposing and being locked into constructs inadvertently.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies.  One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event.  The event delivery wasn't
recursive or delegatable.  The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism further
complicating the interface.

Controller interfaces were problematic too.  An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup.  Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers.  When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured.  Configuration knobs for the same type of
control used widely differing naming schemes and formats.  Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


633b11be MCC |
2582 | Controller Issues and Remedies |
2583 | ------------------------------ | |
6c292092 | 2584 | |
633b11be MCC |
2585 | Memory |
2586 | ~~~~~~ | |
6c292092 TH |
2587 | |
2588 | The original lower boundary, the soft limit, is defined as a limit | |
2589 | that is per default unset. As a result, the set of cgroups that | |
2590 | global reclaim prefers is opt-in, rather than opt-out. The costs for | |
2591 | optimizing these mostly negative lookups are so high that the | |
2592 | implementation, despite its enormous size, does not even provide the | |
2593 | basic desirable behavior. First off, the soft limit has no | |
2594 | hierarchical meaning. All configured groups are organized in a global | |
2595 | rbtree and treated like equal peers, regardless where they are located | |
2596 | in the hierarchy. This makes subtree delegation impossible. Second, | |
2597 | the soft limit reclaim pass is so aggressive that it not just | |
2598 | introduces high allocation latencies into the system, but also impacts | |
2599 | system performance due to overreclaim, to the point where the feature | |
2600 | becomes self-defeating. | |
2601 | ||
2602 | The memory.low boundary on the other hand is a top-down allocated | |
9783aa99 CD |
2603 | reserve. A cgroup enjoys reclaim protection when it's within its |
2604 | effective low, which makes delegation of subtrees possible. It also | |
2605 | enjoys having reclaim pressure proportional to its overage when | |
2606 | above its effective low. | |
6c292092 TH |
2607 | |
2608 | The original high boundary, the hard limit, is defined as a strict | |
2609 | limit that can not budge, even if the OOM killer has to be called. | |
2610 | But this generally goes against the goal of making the most out of the | |
2611 | available memory. The memory consumption of workloads varies during | |
2612 | runtime, and that requires users to overcommit. But doing that with a | |
2613 | strict upper limit requires either a fairly accurate prediction of the | |
2614 | working set size or adding slack to the limit. Since working set size | |
2615 | estimation is hard and error prone, and getting it wrong results in | |
2616 | OOM kills, most users tend to err on the side of a looser limit and | |
2617 | end up wasting precious resources. | |
2618 | ||
The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.
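
That tuning loop can be sketched as a shell session. The cgroup path
and the 600M value are hypothetical, and the snippet falls back to a
scratch directory with faked interface files so the commands can be
demonstrated without a mounted cgroup2 hierarchy:

```shell
# Hypothetical delegated cgroup; on a real system point CG at it.
CG=${CG:-/sys/fs/cgroup/workload}
# Demo fallback: fake the interface files if no cgroup2 is mounted.
[ -d "$CG" ] || { CG=$(mktemp -d); printf 'low 0\nhigh 0\nmax 0\noom 0\noom_kill 0\n' > "$CG/memory.events"; }

# Start conservatively; reclaim is forced whenever usage exceeds 600M.
echo 600M > "$CG/memory.high"

# The "high" counter in memory.events shows how often the boundary was
# hit; a climbing count plus degraded performance means it is too tight.
high_events=$(awk '$1 == "high" { print $2 }' "$CG/memory.events")
echo "high boundary hit ${high_events} times"
```

Raising memory.high in small steps until the event count stops
climbing converges on the minimal acceptable footprint described
above.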

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
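
The difference can be sketched as follows. The cgroup path is
hypothetical, and the scratch-directory fallback merely stands in for
the real interface file so the write can be demonstrated:

```shell
# Hypothetical cgroup; on a real system point CG at a delegated group.
CG=${CG:-/sys/fs/cgroup/workload}
[ -d "$CG" ] || CG=$(mktemp -d)    # demo fallback without cgroup2

# Unlike v1's limit_in_bytes, this write does not fail when usage is
# already above 512M: the limit is installed first, blocking new
# charges, and the kernel then reclaims - and OOM kills as a last
# resort - until the group fits under it.
echo 512M > "$CG/memory.max"
cat "$CG/memory.max"
```

On v1, the equivalent write to memory.limit_in_bytes could simply
fail while concurrent charges raced it, as described above.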

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.