.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory

Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.

Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.
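
The magic number is simply the ASCII string "cgrp" encoded as a 32-bit
value, which can be sanity-checked from a shell. A small sketch (the
stat(1) line and the /sys/fs/cgroup path are conventional
illustrations, not requirements):

```shell
# 0x63677270 is the ASCII bytes 'c' 'g' 'r' 'p' (octal 143 147 162 160):
printf '\143\147\162\160\n'
# prints: cgrp
#
# On a live system, the filesystem type at the mount point can be
# confirmed with stat(1), which reports "cgroup2fs" for cgroup v2:
#   stat -fc %T /sys/fs/cgroup
```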

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
on the hierarchies and controller associations before starting to use
the controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
    Consider cgroup namespaces as delegation boundaries. This
    option is system wide and can only be set on mount or modified
    through remount from the init namespace. The mount option is
    ignored on non-init namespace mounts. Please refer to the
    Delegation section for details.

  favordynmods
    Reduce the latencies of dynamic cgroup modifications such as
    task migrations and controller on/offs at the cost of making
    hot path operations such as forks and exits more expensive.
    The static usage pattern of creating a cgroup, enabling
    controllers, and then seeding it with CLONE_INTO_CGROUP is
    not affected by this option.

  memory_localevents
    Only populate memory.events with data for the current cgroup,
    and not any subtrees. This is legacy behaviour; the default
    behaviour without this option is to include subtree counts.
    This option is system wide and can only be set on mount or
    modified through remount from the init namespace. The mount
    option is ignored on non-init namespace mounts.

  memory_recursiveprot
    Recursively apply memory.min and memory.low protection to
    entire subtrees, without requiring explicit downward
    propagation into leaf cgroups. This allows protecting entire
    subtrees from one another, while retaining free competition
    within those subtrees. This should have been the default
    behavior but is a mount option to avoid regressing setups
    relying on the original semantics (e.g. specifying bogusly
    high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
    Count HugeTLB memory usage towards the cgroup's overall
    memory usage for the memory controller (for the purpose of
    statistics reporting and memory protection). This is a new
    behavior that could regress existing setups, so it must be
    explicitly opted in with this mount option.

    A few caveats to keep in mind:

    * There is no HugeTLB pool management involved in the memory
      controller. The pre-allocated pool does not belong to anyone.
      Specifically, when a new HugeTLB folio is allocated to
      the pool, it is not accounted for from the perspective of the
      memory controller. It is only charged to a cgroup when it is
      actually used (e.g. at page fault time). Host memory
      overcommit management has to consider this when configuring
      hard limits. In general, HugeTLB pool management should be
      done via other mechanisms (such as the HugeTLB controller).
    * Failure to charge a HugeTLB folio to the memory controller
      results in SIGBUS. This could happen even if the HugeTLB pool
      still has pages available (but the cgroup limit is hit and
      reclaim attempt fails).
    * Charging HugeTLB memory towards the memory controller affects
      memory protection and reclaim dynamics. Any userspace tuning
      (e.g. of low and min limits) needs to take this into account.
    * HugeTLB pages utilized while this option is not selected
      will not be tracked by the memory controller (even if cgroup
      v2 is remounted later on).

  pids_localevents
    The option restores v1-like behavior of the pids.events:max
    field, that is, only local (inside cgroup proper) fork failures
    are counted. Without this option, pids.events:max represents any
    pids.max enforcement across the cgroup's subtree.
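
Several options can be combined at mount time. A sketch (the mount
point and the particular option selection are illustrative, not a
recommendation)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup

or persistently, as an fstab line::

  none /sys/fs/cgroup cgroup2 nsdelegate,memory_recursiveprot 0 0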


Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)

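
Since the v2 entry always carries hierarchy ID 0 and an empty
controller list, the path can be extracted with standard tools. A
minimal sketch (assumes typical cgroup names without embedded colons;
not part of the kernel interface itself):

```shell
# Extract the cgroup v2 path of the current shell: the v2 entry in
# /proc/self/cgroup is the one whose hierarchy ID (first field) is 0
# and whose controller list (second field) is empty.
awk -F: '$1 == "0" { print $3 }' /proc/self/cgroup
```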


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0. After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
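
As an illustrative sketch, the notification can be consumed by waiting
for a modify event on "cgroup.events". The example below uses
inotifywait from the inotify-tools package; the tool and the cgroup
path are assumptions, not part of the kernel interface::

  # Block until cgroup.events changes, then inspect the populated state.
  inotifywait -e modify /sys/fs/cgroup/workload/cgroup.events
  grep populated /sys/fs/cgroup/workload/cgroup.events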


Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.

479 | ||
633b11be MCC |
480 | Top-down Constraint |
481 | ~~~~~~~~~~~~~~~~~~~ | |
6c292092 TH |
482 | |
483 | Resources are distributed top-down and a cgroup can further distribute | |
484 | a resource only if the resource has been distributed to it from the | |
485 | parent. This means that all non-root "cgroup.subtree_control" files | |
486 | can only contain controllers which are enabled in the parent's | |
487 | "cgroup.subtree_control" file. A controller can be enabled only if | |
488 | the parent has the controller enabled and a controller can't be | |
489 | disabled if one or more children have it enabled. | |
490 | ||
491 | ||
633b11be MCC |
492 | No Internal Process Constraint |
493 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
6c292092 | 494 | |
Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
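
The create-children-then-transfer sequence can be sketched as follows
(illustrative only - the paths are hypothetical, and the commands need
write access to a mounted cgroup v2 hierarchy)::

  # cd /sys/fs/cgroup/workload          # populated cgroup to subdivide
  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+cpu +memory" > cgroup.subtree_control   # now permitted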


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by means of
at least mount namespacing, and the kernel rejects writes to all files
on a namespace root from inside the cgroup namespace, except for those
files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).
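
The first delegation method can be sketched with chown(1); the cgroup
path and user name here are purely illustrative::

  # mkdir /sys/fs/cgroup/delegated
  # chown app-user /sys/fs/cgroup/delegated \
        /sys/fs/cgroup/delegated/cgroup.procs \
        /sys/fs/cgroup/delegated/cgroup.threads \
        /sys/fs/cgroup/delegated/cgroup.subtree_control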

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.
636 | ||
637 | ||
633b11be MCC |
Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases.  This section
describes major schemes in use along with their expected behaviors.

Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum.  As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving.  Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100.  This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.

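The weight scheme above can be sketched as follows; ``distribute`` is a
hypothetical helper, not kernel code, and only illustrates how inactive
children drop out of the sum, which is what makes the model
work-conserving:

```python
def distribute(parent_amount, weights, active):
    """Split parent_amount among the currently active children in
    proportion to their configured weights.  Children which cannot
    use the resource right now are excluded from the sum entirely,
    so their share flows to the active siblings."""
    total = sum(weights[child] for child in active)
    return {child: parent_amount * weights[child] / total
            for child in active}
```

With the default weight of 100 everywhere, active children simply split
the parent's resource evenly.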
.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

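A minimal sketch of the limit model (hypothetical helper): a charge
succeeds only while it keeps the child at or under its own limit, and
because each limit is checked independently, the limits of siblings may
freely sum past the parent's capacity:

```python
def may_charge(request, usage, limit):
    """Return True if the cgroup may consume `request` more units.
    "max" is the noop default: no limit at all.  Over-commit is
    never checked here - siblings' limits may exceed the parent."""
    if limit == "max":
        return True
    return usage + request <= limit
```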
.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels.  Protections can be hard guarantees or best effort
soft boundaries.  Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.

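The over-commit behavior can be sketched roughly as follows
(hypothetical helper, not the kernel's algorithm): each child's claim
is its usage capped by its configured protection, and when the claims
together exceed what the parent has available, they are scaled down
proportionally:

```python
def effective_protections(parent_avail, children):
    """children maps name -> (usage, configured_protection).
    A child can only claim protection for memory it actually
    uses; under over-commit, claims are scaled to fit what the
    parent has available."""
    claims = {name: min(usage, prot)
              for name, (usage, prot) in children.items()}
    total = sum(claims.values())
    if total <= parent_avail:
        return claims
    return {name: parent_avail * claim / total
            for name, claim in claims.items()}
```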
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource.  Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected.  Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.

Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time.  For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.

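The two keyed formats are simple to parse.  The sketch below
(hypothetical helpers) reads a flat keyed file such as cpu.stat and a
nested keyed file such as io.stat:

```python
def parse_flat_keyed(text):
    """Parse "KEY VAL" lines, e.g. the contents of cpu.stat."""
    result = {}
    for line in text.splitlines():
        key, val = line.split()
        result[key] = int(val)
    return result

def parse_nested_keyed(text):
    """Parse "KEY SUB_KEY=VAL ..." lines, e.g. io.stat.  Sub key
    pairs may appear in any order, so a dict per key is natural."""
    result = {}
    for line in text.splitlines():
        key, *pairs = line.split()
        result[key] = dict(pair.split("=", 1) for pair in pairs)
    return result
```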
Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds.  If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default.  The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively.  If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override.  Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

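The lookup rule for such a file can be sketched as (hypothetical
helper): a key's effective value is its override when one is present,
otherwise the "default" entry:

```python
def effective_value(entries, key):
    """entries is the parsed file, e.g. {"default": 125, "8:16": 170}.
    Overrides whose value is "default" never appear when read, so a
    plain dict lookup with a fallback models the semantics."""
    return entries.get(key, entries["default"])
```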
- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

cgroup.type
  A read-write single value file which exists on non-root
  cgroups.

  When read, it indicates the current type of the cgroup, which
  can be one of the following values.

  - "domain" : A normal valid domain cgroup.

  - "domain threaded" : A threaded domain cgroup which is
    serving as the root of a threaded subtree.

  - "domain invalid" : A cgroup which is in an invalid state.
    It can't be populated or have controllers enabled.  It may
    be allowed to become a threaded cgroup.

  - "threaded" : A threaded cgroup which is a member of a
    threaded subtree.

  A cgroup can be turned into a threaded cgroup by writing
  "threaded" to this file.

cgroup.procs
  A read-write new-line separated values file which exists on
  all cgroups.

  When read, it lists the PIDs of all processes which belong to
  the cgroup one-per-line.  The PIDs are not ordered and the
  same PID may show up more than once if the process got moved
  to another cgroup and then back or the PID got recycled while
  reading.

  A PID can be written to migrate the process associated with
  the PID to the cgroup.  The writer should match all of the
  following conditions.

  - It must have write access to the "cgroup.procs" file.

  - It must have write access to the "cgroup.procs" file of the
    common ancestor of the source and destination cgroups.

  When delegating a sub-hierarchy, write access to this file
  should be granted along with the containing directory.

  In a threaded cgroup, reading this file fails with EOPNOTSUPP
  as all the processes belong to the thread root.  Writing is
  supported and moves every thread of the process to the cgroup.

cgroup.threads
  A read-write new-line separated values file which exists on
  all cgroups.

  When read, it lists the TIDs of all threads which belong to
  the cgroup one-per-line.  The TIDs are not ordered and the
  same TID may show up more than once if the thread got moved to
  another cgroup and then back or the TID got recycled while
  reading.

  A TID can be written to migrate the thread associated with the
  TID to the cgroup.  The writer should match all of the
  following conditions.

  - It must have write access to the "cgroup.threads" file.

  - The cgroup that the thread is currently in must be in the
    same resource domain as the destination cgroup.

  - It must have write access to the "cgroup.procs" file of the
    common ancestor of the source and destination cgroups.

  When delegating a sub-hierarchy, write access to this file
  should be granted along with the containing directory.

cgroup.controllers
  A read-only space separated values file which exists on all
  cgroups.

  It shows space separated list of all controllers available to
  the cgroup.  The controllers are not ordered.

cgroup.subtree_control
  A read-write space separated values file which exists on all
  cgroups.  Starts out empty.

  When read, it shows space separated list of the controllers
  which are enabled to control resource distribution from the
  cgroup to its children.

  Space separated list of controllers prefixed with '+' or '-'
  can be written to enable or disable controllers.  A controller
  name prefixed with '+' enables the controller and '-'
  disables.  If a controller appears more than once on the list,
  the last one is effective.  When multiple enable and disable
  operations are specified, either all succeed or all fail.

cgroup.events
  A read-only flat-keyed file which exists on non-root cgroups.
  The following entries are defined.  Unless specified
  otherwise, a value change in this file generates a file
  modified event.

    populated
      1 if the cgroup or its descendants contains any live
      processes; otherwise, 0.
    frozen
      1 if the cgroup is frozen; otherwise, 0.

cgroup.max.descendants
  A read-write single value file.  The default is "max".

  Maximum allowed number of descendant cgroups.
  If the actual number of descendants is equal or larger,
  an attempt to create a new cgroup in the hierarchy will fail.

cgroup.max.depth
  A read-write single value file.  The default is "max".

  Maximum allowed descent depth below the current cgroup.
  If the actual descent depth is equal or larger,
  an attempt to create a new child cgroup will fail.

cgroup.stat
  A read-only flat-keyed file with the following entries:

    nr_descendants
      Total number of visible descendant cgroups.

    nr_dying_descendants
      Total number of dying descendant cgroups.  A cgroup becomes
      dying after being deleted by a user.  The cgroup will remain
      in dying state for some undefined time (which can depend
      on system load) before being completely destroyed.

      A process can't enter a dying cgroup under any circumstances,
      and a dying cgroup can't revive.

      A dying cgroup can consume system resources not exceeding
      limits, which were active at the moment of cgroup deletion.

    nr_subsys_<cgroup_subsys>
      Total number of live cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

    nr_dying_subsys_<cgroup_subsys>
      Total number of dying cgroup subsystems (e.g. memory
      cgroup) at and beneath the current cgroup.

cgroup.freeze
  A read-write single value file which exists on non-root cgroups.
  Allowed values are "0" and "1".  The default is "0".

  Writing "1" to the file causes freezing of the cgroup and all
  descendant cgroups.  This means that all belonging processes will
  be stopped and will not run until the cgroup is explicitly
  unfrozen.  Freezing of the cgroup may take some time; when this
  action is completed, the "frozen" value in the cgroup.events
  control file will be updated to "1" and the corresponding
  notification will be issued.

  A cgroup can be frozen either by its own settings, or by settings
  of any ancestor cgroups.  If any of the ancestor cgroups is frozen,
  the cgroup will remain frozen.

  Processes in the frozen cgroup can be killed by a fatal signal.
  They also can enter and leave a frozen cgroup: either by an explicit
  move by a user, or if freezing of the cgroup races with fork().
  If a process is moved to a frozen cgroup, it stops.  If a process is
  moved out of a frozen cgroup, it becomes running.

  The frozen status of a cgroup doesn't affect any cgroup tree
  operations: it's possible to delete a frozen (and empty) cgroup,
  as well as create new sub-cgroups.

cgroup.kill
  A write-only single value file which exists in non-root cgroups.
  The only allowed value is "1".

  Writing "1" to the file causes the cgroup and all descendant cgroups
  to be killed.  This means that all processes located in the affected
  cgroup tree will be killed via SIGKILL.

  Killing a cgroup tree will deal with concurrent forks appropriately
  and is protected against migrations.

  In a threaded cgroup, writing this file fails with EOPNOTSUPP as
  killing cgroups is a process directed operation, i.e. it affects
  the whole thread-group.

cgroup.pressure
  A read-write single value file.  Allowed values are "0" and "1".
  The default is "1".

  Writing "0" to the file will disable the cgroup PSI accounting.
  Writing "1" to the file will re-enable the cgroup PSI accounting.

  This control attribute is not hierarchical, so disabling or enabling
  PSI accounting in a cgroup does not affect PSI accounting in its
  descendants, and enablement does not need to be passed down from the
  root via ancestors.

  The reason this control attribute exists is that PSI accounts stalls
  for each cgroup separately and aggregates it at each level of the
  hierarchy.  This may cause non-negligible overhead for some workloads
  deep in the hierarchy, in which case this control attribute can be
  used to disable PSI accounting in the non-leaf cgroups.

irq.pressure
  A read-write nested-keyed file.

  Shows pressure stall information for IRQ/SOFTIRQ.  See
  :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles.  This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth) control of
realtime processes.  For a kernel built with the CONFIG_RT_GROUP_SCHED option
enabled for group scheduling of realtime processes, the cpu controller can only
be enabled when all RT processes are in the root cgroup.  Be aware that system
management software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be moved to the
root cgroup before the cpu controller can be enabled with a
CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply and some of
the interface files either affect realtime processes or account for them.  See
the following section for details.  Only the cpu controller is affected by
CONFIG_RT_GROUP_SCHED.  Other controllers can be used for the resource control
of realtime processes irrespective of CONFIG_RT_GROUP_SCHED.

CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler.  From the point of view of
the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a BPF
  scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a BPF
scheduler, check out :ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories
will be referred to.  All time durations are in microseconds.

cpu.stat
  A read-only flat-keyed file.
  This file exists whether the controller is enabled or not.

  It always reports the following three stats, which account for all the
  processes in the cgroup:

  - usage_usec
  - user_usec
  - system_usec

  and the following five when the controller is enabled, which account for
  only the processes under the fair-class scheduler:

  - nr_periods
  - nr_throttled
  - throttled_usec
  - nr_bursts
  - burst_usec

cpu.weight
  A read-write single value file which exists on non-root
  cgroups.  The default is "100".

  For non idle groups (cpu.idle = 0), the weight is in the
  range [1, 10000].

  If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
  then the weight will show as a 0.

  This file affects only processes under the fair-class scheduler and a BPF
  scheduler with the ``cgroup_set_weight`` callback depending on what the
  callback actually does.

cpu.weight.nice
  A read-write single value file which exists on non-root
  cgroups.  The default is "0".

  The nice value is in the range [-20, 19].

  This interface file is an alternative interface for
  "cpu.weight" and allows reading and setting weight using the
  same values used by nice(2).  Because the range is smaller and
  granularity is coarser for the nice values, the read value is
  the closest approximation of the current weight.

  This file affects only processes under the fair-class scheduler and a BPF
  scheduler with the ``cgroup_set_weight`` callback depending on what the
  callback actually does.

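The kernel converts nice values through a fixed priority-to-weight
table.  As a rule of thumb only (an approximation, not the exact
table), each nice step changes the weight by about 1.25x around the
default of 100:

```python
def nice_to_weight(nice):
    """Approximate cpu.weight for a nice value in [-20, 19]:
    nice 0 maps to the default weight 100 and every step scales
    by roughly 1.25x.  The kernel uses a fixed lookup table, so
    actual values differ slightly."""
    if not -20 <= nice <= 19:
        raise ValueError("nice must be in [-20, 19]")
    return round(100 * 1.25 ** (-nice))
```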
cpu.max
  A read-write two value file which exists on non-root cgroups.
  The default is "max 100000".

  The maximum bandwidth limit.  It's in the following format::

    $MAX $PERIOD

  which indicates that the group may consume up to $MAX in each
  $PERIOD duration.  "max" for $MAX indicates no limit.  If only
  one number is written, $MAX is updated.

  This file affects only processes under the fair-class scheduler.

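Interpreting a cpu.max value is straightforward.  The sketch below
(hypothetical helper) turns it into the fraction of a single CPU the
cgroup may consume per period:

```python
def cpu_quota_fraction(cpu_max):
    """Parse cpu.max contents such as "50000 100000" or
    "max 100000".  Returns None for "max" (no limit); values
    above 1.0 mean more than one full CPU's worth of bandwidth."""
    quota, period = cpu_max.split()
    if quota == "max":
        return None
    return int(quota) / int(period)
```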
cpu.max.burst
  A read-write single value file which exists on non-root
  cgroups.  The default is "0".

  The burst is in the range [0, $MAX].

  This file affects only processes under the fair-class scheduler.

cpu.pressure
  A read-write nested-keyed file.

  Shows pressure stall information for CPU.  See
  :ref:`Documentation/accounting/psi.rst <psi>` for details.

  This file accounts for all the processes in the cgroup.

cpu.uclamp.min
  A read-write single value file which exists on non-root cgroups.
  The default is "0", i.e. no utilization boosting.

  The requested minimum utilization (protection) as a percentage
  rational number, e.g. 12.34 for 12.34%.

  This interface allows reading and setting minimum utilization clamp
  values similar to sched_setattr(2).  This minimum utilization
  value is used to clamp the task specific minimum utilization clamp,
  including those of realtime processes.

  The requested minimum utilization (protection) is always capped by
  the current value for the maximum utilization (limit), i.e.
  `cpu.uclamp.max`.

  This file affects all the processes in the cgroup.

cpu.uclamp.max
  A read-write single value file which exists on non-root cgroups.
  The default is "max", i.e. no utilization capping.

  The requested maximum utilization (limit) as a percentage rational
  number, e.g. 98.76 for 98.76%.

  This interface allows reading and setting maximum utilization clamp
  values similar to sched_setattr(2).  This maximum utilization
  value is used to clamp the task specific maximum utilization clamp,
  including those of realtime processes.

  This file affects all the processes in the cgroup.

cpu.idle
  A read-write single value file which exists on non-root cgroups.
  The default is 0.

  This is the cgroup analog of the per-task SCHED_IDLE sched policy.
  Setting this value to a 1 will make the scheduling policy of the
  cgroup SCHED_IDLE.  The threads inside the cgroup will retain their
  own relative priorities, but the cgroup itself will be treated as
  very low priority relative to its peers.

  This file affects only processes under the fair-class scheduler.


Memory
------

The "memory" controller regulates distribution of memory.  Memory is
stateful and implements both limit and protection models.  Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent.  Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.

Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes.  If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

memory.current
  A read-only single value file which exists on non-root
  cgroups.

  The total amount of memory currently being used by the cgroup
  and its descendants.

memory.min
  A read-write single value file which exists on non-root
  cgroups.  The default is "0".

  Hard memory protection.  If the memory usage of a cgroup
  is within its effective min boundary, the cgroup's memory
  won't be reclaimed under any conditions.  If there is no
  unprotected reclaimable memory available, the OOM killer
  is invoked.  Above the effective min boundary (or
  effective low boundary if it is higher), pages are reclaimed
  proportionally to the overage, reducing reclaim pressure for
  smaller overages.

  The effective min boundary is limited by the memory.min values of
  all ancestor cgroups.  If there is memory.min overcommitment
  (child cgroup or cgroups are requiring more protected memory
  than the parent will allow), then each child cgroup will get
  the part of the parent's protection proportional to its
  actual memory usage below memory.min.

  Putting more memory than generally available under this
  protection is discouraged and may lead to constant OOMs.

  If a memory cgroup is not populated with processes,
  its memory.min is ignored.

memory.low
  A read-write single value file which exists on non-root
  cgroups.  The default is "0".

  Best-effort memory protection.  If the memory usage of a
  cgroup is within its effective low boundary, the cgroup's
  memory won't be reclaimed unless there is no reclaimable
  memory available in unprotected cgroups.
  Above the effective low boundary (or
  effective min boundary if it is higher), pages are reclaimed
  proportionally to the overage, reducing reclaim pressure for
  smaller overages.

  The effective low boundary is limited by the memory.low values of
  all ancestor cgroups.  If there is memory.low overcommitment
  (child cgroup or cgroups are requiring more protected memory
  than the parent will allow), then each child cgroup will get
  the part of the parent's protection proportional to its
  actual memory usage below memory.low.

  Putting more memory than generally available under this
  protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage throttle limit.  If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached.  The high
        limit should be used in scenarios where an external process
        monitors the limited cgroup to alleviate heavy reclaim
        pressure.

        If memory.high is opened with O_NONBLOCK, then the synchronous
        reclaim is bypassed.  This is useful for admin processes that
        need to dynamically adjust the job's memory limits without
        expending their own CPU resources on memory reclamation.  The
        job will trigger the reclaim and/or get throttled on its
        next charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests or
        busy-hitting its memory to slow down reclaim.

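A minimal sketch of the O_NONBLOCK write described above (plain Python;
the helper name and cgroup path are illustrative, not part of the kernel
interface):

```python
import os

def write_nonblocking(path, value):
    # Open with O_NONBLOCK so the write does not perform synchronous
    # reclaim on our behalf; the workload pays for reclaim on its
    # next charge attempt instead.
    fd = os.open(path, os.O_WRONLY | os.O_NONBLOCK)
    try:
        os.write(fd, value.encode())
    finally:
        os.close(fd)

# Example (path is illustrative):
# write_nonblocking("/sys/fs/cgroup/job/memory.high", "1G")
```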
  memory.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Memory usage hard limit.  This is the main mechanism to limit
        memory usage of a cgroup.  If a cgroup's memory usage reaches
        this limit and can't be reduced, the OOM killer is invoked in
        the cgroup.  Under certain circumstances, the usage may go
        over the limit temporarily.

        In the default configuration, regular 0-order allocations
        always succeed unless the OOM killer chooses the current task
        as a victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore them in cases like disk readahead.

        If memory.max is opened with O_NONBLOCK, then the synchronous
        reclaim and oom-kill are bypassed.  This is useful for admin
        processes that need to dynamically adjust the job's memory limits
        without expending their own CPU resources on memory reclamation.
        The job will trigger the reclaim and/or oom-kill on its next
        charge request.

        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests or
        busy-hitting its memory to slow down reclaim.

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the
        target cgroup.

        Example::

          echo "1G" > memory.reclaim

        Please note that the kernel can over- or under-reclaim from
        the target cgroup.  If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this
        interface) is not meant to indicate memory pressure on the
        memory cgroup.  Therefore socket memory balancing triggered by
        the memory reclaim normally is not exercised in this case.
        This means that the networking layer will not adapt based on
        reclaim induced by memory.reclaim.

        The following nested keys are defined.

          ==========  ================================
          swappiness  Swappiness value to reclaim with
          ==========  ================================

        Specifying a swappiness value instructs the kernel to perform
        the reclaim with that swappiness value.  Note that this has the
        same semantics as vm.swappiness applied to memcg reclaim with
        all the existing limitations and potential future extensions.

        The valid range for swappiness is [0-200, max]; setting
        swappiness=max exclusively reclaims anonymous memory.

  memory.peak
        A read-write single value file which exists on non-root cgroups.

        The max memory usage recorded for the cgroup and its descendants since
        either the creation of the cgroup or the most recent reset for that FD.

        A write of any non-empty string to this file resets it to the
        current memory usage for subsequent reads through the same
        file descriptor.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups.  The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer.  If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all.  This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined.  Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy.  For the local events at the cgroup level see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary.  This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded.  For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary.  If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked not to retry attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

          oom_group_kill
                The number of times a group OOM has occurred.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup, i.e. not hierarchical.  The file modified event
        generated on this file reflects only the local events.

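The flat-keyed format is easy to consume programmatically.  Since the
counters only ever increase, a monitoring agent can diff two reads to
count new events, as in this sketch (plain Python; the helper names and
cgroup path are illustrative):

```python
def parse_flat_keyed(text):
    """Parse a flat-keyed cgroup file ("<key> <value>" per line)."""
    return {key: int(value) for key, value in
            (line.split() for line in text.splitlines() if line)}

def new_oom_kills(before, after):
    # Counters in memory.events are monotonically increasing, so the
    # delta between two reads is the number of new events.
    return (parse_flat_keyed(after)["oom_kill"]
            - parse_flat_keyed(before)["oom_kill"])

# Usage against a real cgroup (path illustrative):
# before = open("/sys/fs/cgroup/job/memory.events").read()
# ... later ...
# after = open("/sys/fs/cgroup/job/memory.events").read()
# new_oom_kills(before, after)
```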
  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        Entries marked with the 'npn' (non-per-node) tag have no
        per-node counter and do not show up in memory.numa_stat.

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS).  Note that
                some kernel configurations might account complete larger
                allocations (e.g., THP) if only some, but not all the
                memory of such an allocation is mapped anymore.

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel (npn)
                Amount of total kernel memory, including
                (kernel_stack, pagetables, percpu, vmalloc, slab) in
                addition to other kernel memory use cases.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          pagetables
                Amount of memory allocated for page tables.

          sec_pagetables
                Amount of memory allocated for secondary page tables;
                this currently includes KVM mmu allocations on x86
                and arm64 and IOMMU page tables.

          percpu (npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock (npn)
                Amount of memory used in network transmission buffers.

          vmalloc (npn)
                Amount of memory used for vmap backed memory.

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, and shared anonymous mmap()s.

          zswap
                Amount of memory consumed by the zswap compression backend.

          zswapped
                Amount of application memory swapped out to zswap.

          file_mapped
                Amount of cached filesystem data mapped with mmap().
                Note that some kernel configurations might account
                complete larger allocations (e.g., THP) if only some,
                but not all the memory of such an allocation is mapped.

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk.

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk.

          swapcached
                Amount of swap cached in memory.  The swapcache is
                accounted against both memory and swap usage.

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages.

          file_thp
                Amount of cached filesystem data backed by transparent
                hugepages.

          shmem_thp
                Amount of shm, tmpfs, and shared anonymous mmap()s backed
                by transparent hugepages.

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (e.g. shmem pages
                are on anon memory management lists), inactive_foo +
                active_foo may not be equal to the value for the foo
                counter, since the foo counter is type-based, not
                list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab (npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were immediately
                activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which were detected as
                an active workingset before they got reclaimed.

          workingset_restore_file
                Number of restored file pages which were detected as an
                active workingset before they got reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed.

          pswpin (npn)
                Number of pages swapped into memory.

          pswpout (npn)
                Number of pages swapped out of memory.

          pgscan (npn)
                Amount of scanned pages (in an inactive LRU list).

          pgsteal (npn)
                Amount of reclaimed pages.

          pgscan_kswapd (npn)
                Amount of pages scanned by kswapd (in an inactive LRU list).

          pgscan_direct (npn)
                Amount of pages scanned directly (in an inactive LRU list).

          pgscan_khugepaged (npn)
                Amount of pages scanned by khugepaged (in an inactive
                LRU list).

          pgscan_proactive (npn)
                Amount of pages scanned proactively (in an inactive
                LRU list).

          pgsteal_kswapd (npn)
                Amount of pages reclaimed by kswapd.

          pgsteal_direct (npn)
                Amount of pages reclaimed directly.

          pgsteal_khugepaged (npn)
                Amount of pages reclaimed by khugepaged.

          pgsteal_proactive (npn)
                Amount of pages reclaimed proactively.

          pgfault (npn)
                Total number of page faults incurred.

          pgmajfault (npn)
                Number of major page faults incurred.

          pgrefill (npn)
                Amount of scanned pages (in an active LRU list).

          pgactivate (npn)
                Amount of pages moved to the active LRU list.

          pgdeactivate (npn)
                Amount of pages moved to the inactive LRU list.

          pglazyfree (npn)
                Amount of pages postponed to be freed under memory
                pressure.

          pglazyfreed (npn)
                Amount of reclaimed lazyfree pages.

          swpin_zero
                Number of pages swapped into memory and filled with zero,
                where I/O was optimized out because the page content was
                detected to be zero during swapout.

          swpout_zero
                Number of zero-filled pages swapped out with I/O skipped
                due to the content being detected as zero.

          zswpin
                Number of pages moved into memory from zswap.

          zswpout
                Number of pages moved out of memory to zswap.

          zswpwb
                Number of pages written from zswap to swap.

          thp_fault_alloc (npn)
                Number of transparent hugepages which were allocated to
                satisfy a page fault.  This counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc (npn)
                Number of transparent hugepages which were allocated to
                allow collapsing an existing range of pages.  This counter
                is not present when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_swpout (npn)
                Number of transparent hugepages which were swapped out in
                one piece without splitting.

          thp_swpout_fallback (npn)
                Number of transparent hugepages which were split before
                swapout, usually because of failure to allocate some
                contiguous swap space for the huge page.

          numa_pages_migrated (npn)
                Number of pages migrated by NUMA balancing.

          numa_pte_updates (npn)
                Number of pages whose page table entries were modified by
                NUMA balancing to produce NUMA hinting faults on access.

          numa_hint_faults (npn)
                Number of NUMA hinting faults.

          pgdemote_kswapd
                Number of pages demoted by kswapd.

          pgdemote_direct
                Number of pages demoted directly.

          pgdemote_khugepaged
                Number of pages demoted by khugepaged.

          pgdemote_proactive
                Number of pages demoted proactively.

          hugetlb
                Amount of memory used by hugetlb pages.  This metric only
                shows up if hugetlb usage is accounted for in
                memory.current (i.e. the cgroup is mounted with the
                memory_hugetlb_accounting option).

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node.  One use case is evaluating
        application performance by combining this information with the
        application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle.  Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        The entries are defined as in memory.stat.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Swap usage throttle limit.  If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup.  It is NOT
        designed to manage the amount of swapping a workload does
        during regular operation.  Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

1799 | ||
e0e0b412 | 1800 | memory.swap.peak |
c6f53ed8 DF |
1801 | A read-write single value file which exists on non-root cgroups. |
1802 | ||
1803 | The max swap usage recorded for the cgroup and its descendants since | |
1804 | the creation of the cgroup or the most recent reset for that FD. | |
e0e0b412 | 1805 | |
c6f53ed8 DF |
1806 | A write of any non-empty string to this file resets it to the |
1807 | current memory usage for subsequent reads through the same | |
1808 | file descriptor. | |
e0e0b412 | 1809 | |
3e24b19d | 1810 | memory.swap.max |
3e24b19d VD |
1811 | A read-write single value file which exists on non-root |
1812 | cgroups. The default is "max". | |
1813 | ||
1814 | Swap usage hard limit. If a cgroup's swap usage reaches this | |
2877cbe6 | 1815 | limit, anonymous memory of the cgroup will not be swapped out. |
3e24b19d | 1816 | |
f3a53a3a TH |
1817 | memory.swap.events |
1818 | A read-only flat-keyed file which exists on non-root cgroups. | |
1819 | The following entries are defined. Unless specified | |
1820 | otherwise, a value change in this file generates a file | |
1821 | modified event. | |
1822 | ||
4b82ab4f JK |
1823 | high |
1824 | The number of times the cgroup's swap usage was over | |
1825 | the high threshold. | |
1826 | ||
f3a53a3a TH |
1827 | max |
1828 | The number of times the cgroup's swap usage was about | |
1829 | to go over the max boundary and swap allocation | |
1830 | failed. | |
1831 | ||
1832 | fail | |
1833 | The number of times swap allocation failed either | |
1834 | because of running out of swap system-wide or max | |
1835 | limit. | |
1836 | ||
be09102b TH |
1837 | When reduced under the current usage, the existing swap |
1838 | entries are reclaimed gradually and the swap usage may stay | |
1839 | higher than the limit for an extended period of time. This | |
1840 | reduces the impact on the workload and memory management. | |
1841 | ||
  memory.zswap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory consumed by the zswap compression
        backend.

  memory.zswap.max
        A read-write single value file which exists on non-root
        cgroups.  The default is "max".

        Zswap usage hard limit.  If a cgroup's zswap pool reaches this
        limit, it will refuse to take any more stores before existing
        entries fault back in or are written out to disk.

  memory.zswap.writeback
        A read-write single value file.  The default value is "1".
        Note that this setting is hierarchical, i.e. the writeback
        would be implicitly disabled for child cgroups if the upper
        hierarchy does so.

        When this is set to 0, all swapping attempts to swapping devices
        are disabled.  This includes both zswap writebacks, and swapping
        due to zswap store failures.  If the zswap store failures are
        recurring (e.g. if the pages are incompressible), users can
        observe reclaim inefficiency after disabling writeback (because
        the same pages might be rejected again and again).

        Note that this is subtly different from setting memory.swap.max
        to 0, as it still allows for pages to be written to the zswap
        pool.  This setting has no effect if zswap is disabled, and
        swapping is allowed unless memory.swap.max is set to 0.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory.  See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

Usage Guidelines
~~~~~~~~~~~~~~~~

"memory.high" is the main mechanism to control memory usage.
Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
usage is a viable strategy.

Because breach of the high limit doesn't trigger the OOM killer but
throttles the offending cgroup, a management agent has ample
opportunities to monitor and take appropriate actions such as granting
more memory or terminating the workload.

Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory.  For example, a workload which writes data received from
network to a file can use all available memory but can also operate as
performant with a small amount of memory.  A measure of memory
pressure - how much the workload is being impacted due to lack of
memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.

Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
charged to the cgroup until the area is released.  Migrating a process
to a different cgroup doesn't move the memory usages that it
instantiated while in the previous cgroup to the new cgroup.

A memory area may be used by processes belonging to different cgroups.
Which cgroup the area will be charged to is non-deterministic; however,
over time, the memory area is likely to end up in a cgroup which has
enough memory allowance to avoid high reclaim pressure.

If a cgroup sweeps a considerable amount of memory which is expected
to be accessed repeatedly by other cgroups, it may make sense to use
POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
belonging to the affected files to ensure correct memory ownership.

1925 | IO |
1926 | -- | |
6c292092 TH |
1927 | |
1928 | The "io" controller regulates the distribution of IO resources. This | |
1929 | controller implements both weight based and absolute bandwidth or IOPS | |
1930 | limit distribution; however, weight based distribution is available | |
1931 | only if cfq-iosched is in use and neither scheme is available for | |
1932 | blk-mq devices. | |
1933 | ||
1934 | ||
IO Interface Files
~~~~~~~~~~~~~~~~~~

  io.stat
        A read-only nested-keyed file.

        Lines are keyed by $MAJ:$MIN device numbers and not ordered.
        The following nested keys are defined.

          ======  =====================
          rbytes  Bytes read
          wbytes  Bytes written
          rios    Number of read IOs
          wios    Number of write IOs
          dbytes  Bytes discarded
          dios    Number of discard IOs
          ======  =====================

        An example read output follows::

          8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
          8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

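Since lines are keyed rather than ordered, consumers should look values
up by device and key.  A minimal parsing sketch (plain Python, for
illustration only):

```python
def parse_io_stat(text):
    # Lines are "MAJ:MIN key=value ..."; return {dev: {key: int}}.
    # Values are looked up by key, never by position, since new keys
    # can appear and the lines are not ordered.
    stats = {}
    for line in text.splitlines():
        if not line:
            continue
        dev, *pairs = line.split()
        stats[dev] = {k: int(v) for k, v in
                      (p.split("=") for p in pairs)}
    return stats

# Usage against a real cgroup (path illustrative):
# stats = parse_io_stat(open("/sys/fs/cgroup/job/io.stat").read())
# stats["8:16"]["wbytes"]
```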
7caa4715 | 1958 | io.cost.qos |
c4c6b86a | 1959 | A read-write nested-keyed file which exists only on the root |
7caa4715 TH |
1960 | cgroup. |
1961 | ||
1962 | This file configures the Quality of Service of the IO cost | |
1963 | model based controller (CONFIG_BLK_CGROUP_IOCOST) which | |
1964 | currently implements "io.weight" proportional control. Lines | |
1965 | are keyed by $MAJ:$MIN device numbers and not ordered. The | |
1966 | line for a given device is populated on the first write for | |
1967 | the device on "io.cost.qos" or "io.cost.model". The following | |
1968 | nested keys are defined. | |
1969 | ||
1970 | ====== ===================================== | |
1971 | enable Weight-based control enable | |
1972 | ctrl "auto" or "user" | |
1973 | rpct Read latency percentile [0, 100] | |
1974 | rlat Read latency threshold | |
1975 | wpct Write latency percentile [0, 100] | |
1976 | wlat Write latency threshold | |
1977 | min Minimum scaling percentage [1, 10000] | |
1978 | max Maximum scaling percentage [1, 10000] | |
1979 | ====== ===================================== | |
1980 | ||
1981 | The controller is disabled by default and can be enabled by | |
1982 | setting "enable" to 1. "rpct" and "wpct" parameters default | |
1983 | to zero and the controller uses internal device saturation | |
1984 | state to adjust the overall IO rate between "min" and "max". | |
1985 | ||
1986 | When a better control quality is needed, latency QoS | |
1987 | parameters can be configured. For example:: | |
1988 | ||
1989 | 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0 | |
1990 | ||
1991 | shows that on sdb, the controller is enabled, will consider | |
1992 | the device saturated if the 95th percentile of read completion | |
1993 | latencies is above 75ms or write 150ms, and adjust the overall | |
1994 | IO issue rate between 50% and 150% accordingly. | |
1995 | ||
1996 | The lower the saturation point, the better the latency QoS at | |
the cost of aggregate bandwidth.  The narrower the allowed
adjustment range between "min" and "max", the more closely the
IO behavior conforms to the cost model.  Note that the IO issue
base rate may be far off from 100% and setting "min" and "max"
blindly can lead to a significant loss of device capacity or
control quality.  "min" and "max" are useful for regulating
devices which show wide temporary behavior changes - e.g. an
SSD which accepts writes at line speed for a while and
then completely stalls for multiple seconds.
2006 | ||
2007 | When "ctrl" is "auto", the parameters are controlled by the | |
2008 | kernel and may change automatically. Setting "ctrl" to "user" | |
2009 | or setting any of the percentile and latency parameters puts | |
2010 | it into "user" mode and disables the automatic changes. The | |
2011 | automatic mode can be restored by setting "ctrl" to "auto". | |
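
A hypothetical configuration session might look like the following
sketch (it assumes sdb is 8:16 and that the cgroup hierarchy is
mounted at /sys/fs/cgroup):

```shell
# enable weight-based control on sdb with explicit latency targets;
# writing any percentile or latency parameter also switches "ctrl"
# to "user"
echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000" \
        > /sys/fs/cgroup/io.cost.qos
# hand parameter selection back to the kernel
echo "8:16 ctrl=auto" > /sys/fs/cgroup/io.cost.qos
```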
2012 | ||
2013 | io.cost.model | |
c4c6b86a | 2014 | A read-write nested-keyed file which exists only on the root |
7caa4715 TH |
2015 | cgroup. |
2016 | ||
2017 | This file configures the cost model of the IO cost model based | |
2018 | controller (CONFIG_BLK_CGROUP_IOCOST) which currently | |
2019 | implements "io.weight" proportional control. Lines are keyed | |
2020 | by $MAJ:$MIN device numbers and not ordered. The line for a | |
2021 | given device is populated on the first write for the device on | |
2022 | "io.cost.qos" or "io.cost.model". The following nested keys | |
2023 | are defined. | |
2024 | ||
2025 | ===== ================================ | |
2026 | ctrl "auto" or "user" | |
2027 | model The cost model in use - "linear" | |
2028 | ===== ================================ | |
2029 | ||
2030 | When "ctrl" is "auto", the kernel may change all parameters | |
2031 | dynamically. When "ctrl" is set to "user" or any other | |
parameters are written to, "ctrl" becomes "user" and the
2033 | automatic changes are disabled. | |
2034 | ||
2035 | When "model" is "linear", the following model parameters are | |
2036 | defined. | |
2037 | ||
2038 | ============= ======================================== | |
2039 | [r|w]bps The maximum sequential IO throughput | |
2040 | [r|w]seqiops The maximum 4k sequential IOs per second | |
2041 | [r|w]randiops The maximum 4k random IOs per second | |
2042 | ============= ======================================== | |
2043 | ||
2044 | From the above, the builtin linear model determines the base | |
2045 | costs of a sequential and random IO and the cost coefficient | |
2046 | for the IO size. While simple, this model can cover most | |
2047 | common device classes acceptably. | |
2048 | ||
The IO cost model isn't expected to be accurate in an absolute
sense and is scaled to the device behavior dynamically.
2051 | ||
8504dea7 TH |
2052 | If needed, tools/cgroup/iocost_coef_gen.py can be used to |
2053 | generate device-specific coefficients. | |
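
Coefficients can then be applied with a single write, for example
(illustrative device numbers and values):

```shell
echo "8:16 ctrl=user model=linear rbps=2706339840 rseqiops=89698 rrandiops=110036 wbps=1063126016 wseqiops=135560 wrandiops=130734" > /sys/fs/cgroup/io.cost.model
```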
2054 | ||
6c292092 | 2055 | io.weight |
6c292092 TH |
2056 | A read-write flat-keyed file which exists on non-root cgroups. |
2057 | The default is "default 100". | |
2058 | ||
2059 | The first line is the default weight applied to devices | |
2060 | without specific override. The rest are overrides keyed by | |
2061 | $MAJ:$MIN device numbers and not ordered. The weights are in | |
the range [1, 10000] and specify the relative amount of IO time
2063 | the cgroup can use in relation to its siblings. | |
2064 | ||
2065 | The default weight can be updated by writing either "default | |
2066 | $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing | |
2067 | "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default". | |
2068 | ||
633b11be | 2069 | An example read output follows:: |
6c292092 TH |
2070 | |
2071 | default 100 | |
2072 | 8:16 200 | |
2073 | 8:0 50 | |
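
Writes use the same keys; a hypothetical session producing output
like the above might be:

```shell
echo "default 100" > io.weight        # set the default weight
echo "8:16 200" > io.weight           # override for one device
echo "8:0 50" > io.weight             # override for another
echo "8:0 default" > io.weight        # drop the 8:0 override again
```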
2074 | ||
2075 | io.max | |
6c292092 TH |
2076 | A read-write nested-keyed file which exists on non-root |
2077 | cgroups. | |
2078 | ||
2079 | BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN | |
2080 | device numbers and not ordered. The following nested keys are | |
2081 | defined. | |
2082 | ||
633b11be | 2083 | ===== ================================== |
6c292092 TH |
2084 | rbps Max read bytes per second |
2085 | wbps Max write bytes per second | |
2086 | riops Max read IO operations per second | |
2087 | wiops Max write IO operations per second | |
633b11be | 2088 | ===== ================================== |
6c292092 TH |
2089 | |
2090 | When writing, any number of nested key-value pairs can be | |
2091 | specified in any order. "max" can be specified as the value | |
2092 | to remove a specific limit. If the same key is specified | |
2093 | multiple times, the outcome is undefined. | |
2094 | ||
2095 | BPS and IOPS are measured in each IO direction and IOs are | |
2096 | delayed if limit is reached. Temporary bursts are allowed. | |
2097 | ||
633b11be | 2098 | Setting read limit at 2M BPS and write at 120 IOPS for 8:16:: |
6c292092 TH |
2099 | |
2100 | echo "8:16 rbps=2097152 wiops=120" > io.max | |
2101 | ||
633b11be | 2102 | Reading returns the following:: |
6c292092 TH |
2103 | |
2104 | 8:16 rbps=2097152 wbps=max riops=max wiops=120 | |
2105 | ||
633b11be | 2106 | Write IOPS limit can be removed by writing the following:: |
6c292092 TH |
2107 | |
2108 | echo "8:16 wiops=max" > io.max | |
2109 | ||
633b11be | 2110 | Reading now returns the following:: |
6c292092 TH |
2111 | |
2112 | 8:16 rbps=2097152 wbps=max riops=max wiops=max | |
2113 | ||
2ce7135a | 2114 | io.pressure |
74bdd45c | 2115 | A read-only nested-keyed file. |
2ce7135a JW |
2116 | |
2117 | Shows pressure stall information for IO. See | |
373e8ffa | 2118 | :ref:`Documentation/accounting/psi.rst <psi>` for details. |
2ce7135a | 2119 | |
6c292092 | 2120 | |
633b11be MCC |
2121 | Writeback |
2122 | ~~~~~~~~~ | |
6c292092 TH |
2123 | |
2124 | Page cache is dirtied through buffered writes and shared mmaps and | |
2125 | written asynchronously to the backing filesystem by the writeback | |
2126 | mechanism. Writeback sits between the memory and IO domains and | |
2127 | regulates the proportion of dirty memory by balancing dirtying and | |
2128 | write IOs. | |
2129 | ||
2130 | The io controller, in conjunction with the memory controller, | |
2131 | implements control of page cache writeback IOs. The memory controller | |
2132 | defines the memory domain that dirty memory ratio is calculated and | |
2133 | maintained for and the io controller defines the io domain which | |
2134 | writes out dirty pages for the memory domain. Both system-wide and | |
2135 | per-cgroup dirty memory states are examined and the more restrictive | |
2136 | of the two is enforced. | |
2137 | ||
2138 | cgroup writeback requires explicit support from the underlying | |
1b932b7d ES |
2139 | filesystem. Currently, cgroup writeback is implemented on ext2, ext4, |
2140 | btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are | |
2141 | attributed to the root cgroup. | |
6c292092 TH |
2142 | |
There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked.  Memory is tracked per
page while writeback is tracked per inode.  For the purpose of writeback, an
2146 | inode is assigned to a cgroup and all IO requests to write dirty pages | |
2147 | from the inode are attributed to that cgroup. | |
2148 | ||
2149 | As cgroup ownership for memory is tracked per page, there can be pages | |
2150 | which are associated with different cgroups than the one the inode is | |
2151 | associated with. These are called foreign pages. The writeback | |
2152 | constantly keeps track of foreign pages and, if a particular foreign | |
2153 | cgroup becomes the majority over a certain period of time, switches | |
2154 | the ownership of the inode to that cgroup. | |
2155 | ||
2156 | While this model is enough for most use cases where a given inode is | |
2157 | mostly dirtied by a single cgroup even when the main writing cgroup | |
2158 | changes over time, use cases where multiple cgroups write to a single | |
2159 | inode simultaneously are not supported well. In such circumstances, a | |
2160 | significant portion of IOs are likely to be attributed incorrectly. | |
As the memory controller assigns page ownership on the first use and
2162 | doesn't update it until the page is released, even if writeback | |
2163 | strictly follows page ownership, multiple cgroups dirtying overlapping | |
2164 | areas wouldn't work as expected. It's recommended to avoid such usage | |
2165 | patterns. | |
2166 | ||
2167 | The sysctl knobs which affect writeback behavior are applied to cgroup | |
2168 | writeback as follows. | |
2169 | ||
633b11be | 2170 | vm.dirty_background_ratio, vm.dirty_ratio |
6c292092 TH |
2171 | These ratios apply the same to cgroup writeback with the |
2172 | amount of available memory capped by limits imposed by the | |
2173 | memory controller and system-wide clean memory. | |
2174 | ||
633b11be | 2175 | vm.dirty_background_bytes, vm.dirty_bytes |
6c292092 TH |
For cgroup writeback, these are converted into a ratio against
the total available memory and applied the same way as
vm.dirty[_background]_ratio.
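
How these knobs interact can be sketched as follows (illustrative
Python, not kernel code; the function and variable names are invented
for the example):

```python
def dirty_threshold(avail_bytes, dirty_ratio=None, dirty_bytes=None,
                    total_avail_bytes=None):
    """Illustrative sketch of how a dirty threshold is derived.

    avail_bytes is the available memory of the domain being evaluated
    (system-wide, or capped by the memory controller for a cgroup).
    vm.dirty_bytes, if set, is first converted into a ratio against
    the total available memory.
    """
    if dirty_bytes is not None:
        ratio = dirty_bytes / total_avail_bytes   # bytes -> ratio
    else:
        ratio = dirty_ratio / 100.0               # vm.dirty_ratio is a percentage
    return avail_bytes * ratio

def enforced_threshold(sys_avail, cg_avail, dirty_ratio):
    # The more restrictive of the system-wide and per-cgroup states wins.
    return min(dirty_threshold(sys_avail, dirty_ratio=dirty_ratio),
               dirty_threshold(cg_avail, dirty_ratio=dirty_ratio))
```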
2179 | ||
2180 | ||
b351f0c7 JB |
2181 | IO Latency |
2182 | ~~~~~~~~~~ | |
2183 | ||
2184 | This is a cgroup v2 controller for IO workload protection. You provide a group | |
2185 | with a latency target, and if the average latency exceeds that target the | |
2186 | controller will throttle any peers that have a lower latency target than the | |
2187 | protected workload. | |
2188 | ||
2189 | The limits are only applied at the peer level in the hierarchy. This means that | |
2190 | in the diagram below, only groups A, B, and C will influence each other, and | |
34b43446 | 2191 | groups D and F will influence each other. Group G will influence nobody:: |
b351f0c7 JB |
2192 | |
2193 | [root] | |
2194 | / | \ | |
2195 | A B C | |
2196 | / \ | | |
2197 | D F G | |
2198 | ||
2199 | ||
2200 | So the ideal way to configure this is to set io.latency in groups A, B, and C. | |
2201 | Generally you do not want to set a value lower than the latency your device | |
2202 | supports. Experiment to find the value that works best for your workload. | |
Start higher than the expected latency for your device and watch the
avg_lat value in io.stat for your workload group to get an idea of the
latency you see during normal operation.  Use that avg_lat value as a
basis for your real setting, setting it 10-15% higher than the value
in io.stat.
b351f0c7 JB |
2207 | |
2208 | How IO Latency Throttling Works | |
2209 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
2210 | ||
2211 | io.latency is work conserving; so as long as everybody is meeting their latency | |
2212 | target the controller doesn't do anything. Once a group starts missing its | |
2213 | target it begins throttling any peer group that has a higher target than itself. | |
2214 | This throttling takes 2 forms: | |
2215 | ||
- Queue depth throttling.  This is the number of outstanding IOs a group is
  allowed to have.  We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
2219 | ||
2220 | - Artificial delay induction. There are certain types of IO that cannot be | |
2221 | throttled without possibly adversely affecting higher priority groups. This | |
2222 | includes swapping and metadata IO. These types of IO are allowed to occur | |
2223 | normally, however they are "charged" to the originating group. If the | |
2224 | originating group is being throttled you will see the use_delay and delay | |
fields in io.stat increase.  The delay value is the number of microseconds
being added to any process that runs in this group.  Because this number can
grow quite large if there is a lot of swapping or metadata IO occurring, we
limit the individual delay events to 1 second at a time.
2229 | ||
2230 | Once the victimized group starts meeting its latency target again it will start | |
2231 | unthrottling any peer groups that were throttled previously. If the victimized | |
2232 | group simply stops doing IO the global counter will unthrottle appropriately. | |
2233 | ||
2234 | IO Latency Interface Files | |
2235 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ | |
2236 | ||
2237 | io.latency | |
2238 | This takes a similar format as the other controllers. | |
2239 | ||
a477b94d | 2240 | "MAJOR:MINOR target=<target time in microseconds>" |
b351f0c7 JB |
2241 | |
2242 | io.stat | |
2243 | If the controller is enabled you will see extra stats in io.stat in | |
2244 | addition to the normal ones. | |
2245 | ||
2246 | depth | |
2247 | This is the current queue depth for the group. | |
2248 | ||
2249 | avg_lat | |
c480bcf9 DZF |
2250 | This is an exponential moving average with a decay rate of 1/exp |
2251 | bound by the sampling interval. The decay rate interval can be | |
2252 | calculated by multiplying the win value in io.stat by the | |
2253 | corresponding number of samples based on the win value. | |
2254 | ||
2255 | win | |
2256 | The sampling window size in milliseconds. This is the minimum | |
2257 | duration of time between evaluation events. Windows only elapse | |
2258 | with IO activity. Idle periods extend the most recent window. | |
b351f0c7 | 2259 | |
556910e3 BVA |
2260 | IO Priority |
2261 | ~~~~~~~~~~~ | |
2262 | ||
2263 | A single attribute controls the behavior of the I/O priority cgroup policy, | |
c1081a7b | 2264 | namely the io.prio.class attribute. The following values are accepted for |
556910e3 BVA |
2265 | that attribute: |
2266 | ||
2267 | no-change | |
2268 | Do not modify the I/O priority class. | |
2269 | ||
ddf63516 HT |
2270 | promote-to-rt |
2271 | For requests that have a non-RT I/O priority class, change it into RT. | |
2272 | Also change the priority level of these requests to 4. Do not modify | |
2273 | the I/O priority of requests that have priority class RT. | |
556910e3 BVA |
2274 | |
2275 | restrict-to-be | |
2276 | For requests that do not have an I/O priority class or that have I/O | |
ddf63516 HT |
2277 | priority class RT, change it into BE. Also change the priority level |
2278 | of these requests to 0. Do not modify the I/O priority class of | |
2279 | requests that have priority class IDLE. | |
556910e3 BVA |
2280 | |
2281 | idle | |
2282 | Change the I/O priority class of all requests into IDLE, the lowest | |
2283 | I/O priority class. | |
2284 | ||
ddf63516 HT |
2285 | none-to-rt |
2286 | Deprecated. Just an alias for promote-to-rt. | |
2287 | ||
556910e3 BVA |
2288 | The following numerical values are associated with the I/O priority policies: |
2289 | ||
ddf63516 HT |
2290 | +----------------+---+ |
2291 | | no-change | 0 | | |
2292 | +----------------+---+ | |
c1081a7b | 2293 | | promote-to-rt | 1 | |
ddf63516 | 2294 | +----------------+---+ |
c1081a7b TY |
2295 | | restrict-to-be | 2 | |
2296 | +----------------+---+ | |
2297 | | idle | 3 | | |
ddf63516 | 2298 | +----------------+---+ |
556910e3 BVA |
2299 | |
2300 | The numerical value that corresponds to each I/O priority class is as follows: | |
2301 | ||
2302 | +-------------------------------+---+ | |
2303 | | IOPRIO_CLASS_NONE | 0 | | |
2304 | +-------------------------------+---+ | |
2305 | | IOPRIO_CLASS_RT (real-time) | 1 | | |
2306 | +-------------------------------+---+ | |
2307 | | IOPRIO_CLASS_BE (best effort) | 2 | | |
2308 | +-------------------------------+---+ | |
2309 | | IOPRIO_CLASS_IDLE | 3 | | |
2310 | +-------------------------------+---+ | |
2311 | ||
2312 | The algorithm to set the I/O priority class for a request is as follows: | |
2313 | ||
ddf63516 HT |
2314 | - If I/O priority class policy is promote-to-rt, change the request I/O |
2315 | priority class to IOPRIO_CLASS_RT and change the request I/O priority | |
2316 | level to 4. | |
c1081a7b | 2317 | - If I/O priority class policy is not promote-to-rt, translate the I/O priority |
ddf63516 HT |
2318 | class policy into a number, then change the request I/O priority class |
2319 | into the maximum of the I/O priority class policy number and the numerical | |
2320 | I/O priority class. | |
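
The algorithm above can be sketched as follows (illustrative Python,
not kernel code):

```python
IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE = range(4)

# Numerical values associated with the I/O priority policies.
POLICY_NUM = {"no-change": 0, "promote-to-rt": 1, "restrict-to-be": 2, "idle": 3}

def request_ioprio_class(policy, ioprio_class):
    """Return the I/O priority class a request ends up with."""
    if policy == "promote-to-rt":
        # All non-RT classes are changed to RT (with priority level 4);
        # requests already at RT keep their class.
        return IOPRIO_CLASS_RT
    # Otherwise the request gets the maximum of the policy number and
    # its own numerical I/O priority class.
    return max(POLICY_NUM[policy], ioprio_class)
```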
556910e3 | 2321 | |
633b11be MCC |
2322 | PID |
2323 | --- | |
20c56e59 HR |
2324 | |
2325 | The process number controller is used to allow a cgroup to stop any | |
2326 | new tasks from being fork()'d or clone()'d after a specified limit is | |
2327 | reached. | |
2328 | ||
2329 | The number of tasks in a cgroup can be exhausted in ways which other | |
2330 | controllers cannot prevent, thus warranting its own controller. For | |
2331 | example, a fork bomb is likely to exhaust the number of tasks before | |
2332 | hitting memory restrictions. | |
2333 | ||
2334 | Note that PIDs used in this controller refer to TIDs, process IDs as | |
2335 | used by the kernel. | |
2336 | ||
2337 | ||
633b11be MCC |
2338 | PID Interface Files |
2339 | ~~~~~~~~~~~~~~~~~~~ | |
20c56e59 HR |
2340 | |
2341 | pids.max | |
312eb712 TK |
2342 | A read-write single value file which exists on non-root |
2343 | cgroups. The default is "max". | |
20c56e59 | 2344 | |
312eb712 | 2345 | Hard limit of number of processes. |
20c56e59 HR |
2346 | |
2347 | pids.current | |
c9169291 | 2348 | A read-only single value file which exists on non-root cgroups. |
20c56e59 | 2349 | |
312eb712 TK |
2350 | The number of processes currently in the cgroup and its |
2351 | descendants. | |
20c56e59 | 2352 | |
c9169291 XJ |
2353 | pids.peak |
2354 | A read-only single value file which exists on non-root cgroups. | |
2355 | ||
2356 | The maximum value that the number of processes in the cgroup and its | |
2357 | descendants has ever reached. | |
2358 | ||
2359 | pids.events | |
73e75e6f MK |
2360 | A read-only flat-keyed file which exists on non-root cgroups. Unless |
2361 | specified otherwise, a value change in this file generates a file | |
2362 | modified event. The following entries are defined. | |
c9169291 XJ |
2363 | |
2364 | max | |
385a635c | 2365 | The number of times the cgroup's total number of processes hit the pids.max |
73e75e6f | 2366 | limit (see also pids_localevents). |
c9169291 | 2367 | |
3f26a885 MK |
2368 | pids.events.local |
2369 | Similar to pids.events but the fields in the file are local | |
2370 | to the cgroup i.e. not hierarchical. The file modified event | |
2371 | generated on this file reflects only the local events. | |
2372 | ||
20c56e59 HR |
2373 | Organisational operations are not blocked by cgroup policies, so it is |
2374 | possible to have pids.current > pids.max. This can be done by either | |
2375 | setting the limit to be smaller than pids.current, or attaching enough | |
2376 | processes to the cgroup such that pids.current is larger than | |
2377 | pids.max. However, it is not possible to violate a cgroup PID policy | |
2378 | through fork() or clone(). These will return -EAGAIN if the creation | |
2379 | of a new process would cause a cgroup policy to be violated. | |
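
A minimal sketch (hypothetical cgroup name; "pids" must be enabled in
the parent's "cgroup.subtree_control"):

```shell
mkdir /sys/fs/cgroup/workers
echo 16 > /sys/fs/cgroup/workers/pids.max
echo $$ > /sys/fs/cgroup/workers/cgroup.procs
# once 16 tasks exist in the subtree, further fork()/clone()
# calls from inside it fail with EAGAIN
cat /sys/fs/cgroup/workers/pids.current
```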
2380 | ||
2381 | ||
4ec22e9c WL |
2382 | Cpuset |
2383 | ------ | |
2384 | ||
2385 | The "cpuset" controller provides a mechanism for constraining | |
2386 | the CPU and memory node placement of tasks to only the resources | |
2387 | specified in the cpuset interface files in a task's current cgroup. | |
2388 | This is especially valuable on large NUMA systems where placing jobs | |
on properly sized subsets of the system with careful processor and
2390 | memory placement to reduce cross-node memory access and contention | |
2391 | can improve overall system performance. | |
2392 | ||
2393 | The "cpuset" controller is hierarchical. That means the controller | |
2394 | cannot use CPUs or memory nodes not allowed in its parent. | |
2395 | ||
2396 | ||
2397 | Cpuset Interface Files | |
2398 | ~~~~~~~~~~~~~~~~~~~~~~ | |
2399 | ||
2400 | cpuset.cpus | |
2401 | A read-write multiple values file which exists on non-root | |
2402 | cpuset-enabled cgroups. | |
2403 | ||
2404 | It lists the requested CPUs to be used by tasks within this | |
2405 | cgroup. The actual list of CPUs to be granted, however, is | |
2406 | subjected to constraints imposed by its parent and can differ | |
2407 | from the requested CPUs. | |
2408 | ||
2409 | The CPU numbers are comma-separated numbers or ranges. | |
f3431ba7 | 2410 | For example:: |
4ec22e9c WL |
2411 | |
2412 | # cat cpuset.cpus | |
2413 | 0-4,6,8-10 | |
2414 | ||
2415 | An empty value indicates that the cgroup is using the same | |
2416 | setting as the nearest cgroup ancestor with a non-empty | |
2417 | "cpuset.cpus" or all the available CPUs if none is found. | |
2418 | ||
2419 | The value of "cpuset.cpus" stays constant until the next update | |
2420 | and won't be affected by any CPU hotplug events. | |
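
The range list format can be expanded as in this illustrative sketch
(not kernel code):

```python
def parse_cpulist(value):
    """Expand a cpuset range list such as "0-4,6,8-10" into CPU numbers."""
    cpus = set()
    for part in value.strip().split(","):
        if not part:
            continue                 # empty value: no explicit CPUs listed
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))  # ranges are inclusive
        else:
            cpus.add(int(part))
    return sorted(cpus)
```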
2421 | ||
2422 | cpuset.cpus.effective | |
5776cecc | 2423 | A read-only multiple values file which exists on all |
4ec22e9c WL |
2424 | cpuset-enabled cgroups. |
2425 | ||
2426 | It lists the onlined CPUs that are actually granted to this | |
2427 | cgroup by its parent. These CPUs are allowed to be used by | |
2428 | tasks within the current cgroup. | |
2429 | ||
2430 | If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows | |
2431 | all the CPUs from the parent cgroup that can be available to | |
2432 | be used by this cgroup. Otherwise, it should be a subset of | |
2433 | "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus" | |
2434 | can be granted. In this case, it will be treated just like an | |
2435 | empty "cpuset.cpus". | |
2436 | ||
2437 | Its value will be affected by CPU hotplug events. | |
2438 | ||
2439 | cpuset.mems | |
2440 | A read-write multiple values file which exists on non-root | |
2441 | cpuset-enabled cgroups. | |
2442 | ||
2443 | It lists the requested memory nodes to be used by tasks within | |
2444 | this cgroup. The actual list of memory nodes granted, however, | |
2445 | is subjected to constraints imposed by its parent and can differ | |
2446 | from the requested memory nodes. | |
2447 | ||
2448 | The memory node numbers are comma-separated numbers or ranges. | |
f3431ba7 | 2449 | For example:: |
4ec22e9c WL |
2450 | |
2451 | # cat cpuset.mems | |
2452 | 0-1,3 | |
2453 | ||
2454 | An empty value indicates that the cgroup is using the same | |
2455 | setting as the nearest cgroup ancestor with a non-empty | |
2456 | "cpuset.mems" or all the available memory nodes if none | |
2457 | is found. | |
2458 | ||
2459 | The value of "cpuset.mems" stays constant until the next update | |
2460 | and won't be affected by any memory nodes hotplug events. | |
2461 | ||
ee9707e8 WL |
2462 | Setting a non-empty value to "cpuset.mems" causes memory of |
2463 | tasks within the cgroup to be migrated to the designated nodes if | |
2464 | they are currently using memory outside of the designated nodes. | |
2465 | ||
2466 | There is a cost for this memory migration. The migration | |
2467 | may not be complete and some memory pages may be left behind. | |
2468 | So it is recommended that "cpuset.mems" should be set properly | |
2469 | before spawning new tasks into the cpuset. Even if there is | |
2470 | a need to change "cpuset.mems" with active tasks, it shouldn't | |
2471 | be done frequently. | |
2472 | ||
4ec22e9c | 2473 | cpuset.mems.effective |
5776cecc | 2474 | A read-only multiple values file which exists on all |
4ec22e9c WL |
2475 | cpuset-enabled cgroups. |
2476 | ||
2477 | It lists the onlined memory nodes that are actually granted to | |
2478 | this cgroup by its parent. These memory nodes are allowed to | |
2479 | be used by tasks within the current cgroup. | |
2480 | ||
2481 | If "cpuset.mems" is empty, it shows all the memory nodes from the | |
2482 | parent cgroup that will be available to be used by this cgroup. | |
2483 | Otherwise, it should be a subset of "cpuset.mems" unless none of | |
2484 | the memory nodes listed in "cpuset.mems" can be granted. In this | |
2485 | case, it will be treated just like an empty "cpuset.mems". | |
2486 | ||
2487 | Its value will be affected by memory nodes hotplug events. | |
2488 | ||
efdf7532 WL |
2489 | cpuset.cpus.exclusive |
2490 | A read-write multiple values file which exists on non-root | |
2491 | cpuset-enabled cgroups. | |
2492 | ||
2493 | It lists all the exclusive CPUs that are allowed to be used | |
2494 | to create a new cpuset partition. Its value is not used | |
2495 | unless the cgroup becomes a valid partition root. See the | |
2496 | "cpuset.cpus.partition" section below for a description of what | |
2497 | a cpuset partition is. | |
2498 | ||
2499 | When the cgroup becomes a partition root, the actual exclusive | |
2500 | CPUs that are allocated to that partition are listed in | |
2501 | "cpuset.cpus.exclusive.effective" which may be different | |
2502 | from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive" | |
2503 | has previously been set, "cpuset.cpus.exclusive.effective" | |
2504 | is always a subset of it. | |
2505 | ||
2506 | Users can manually set it to a value that is different from | |
fe8cd273 WL |
2507 | "cpuset.cpus". One constraint in setting it is that the list of |
2508 | CPUs must be exclusive with respect to "cpuset.cpus.exclusive" | |
2509 | of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup | |
isn't set, its "cpuset.cpus" value, if set, cannot be a subset
of it, so that at least one CPU remains available when the
exclusive CPUs are taken away.
efdf7532 WL |
2513 | |
2514 | For a parent cgroup, any one of its exclusive CPUs can only | |
2515 | be distributed to at most one of its child cgroups. Having an | |
2516 | exclusive CPU appearing in two or more of its child cgroups is | |
2517 | not allowed (the exclusivity rule). A value that violates the | |
2518 | exclusivity rule will be rejected with a write error. | |
2519 | ||
2520 | The root cgroup is a partition root and all its available CPUs | |
2521 | are in its exclusive CPU set. | |
2522 | ||
2523 | cpuset.cpus.exclusive.effective | |
2524 | A read-only multiple values file which exists on all non-root | |
2525 | cpuset-enabled cgroups. | |
2526 | ||
2527 | This file shows the effective set of exclusive CPUs that | |
737bb142 WL |
2528 | can be used to create a partition root. The content |
2529 | of this file will always be a subset of its parent's | |
efdf7532 WL |
2530 | "cpuset.cpus.exclusive.effective" if its parent is not the root |
2531 | cgroup. It will also be a subset of "cpuset.cpus.exclusive" | |
if it is set.  If "cpuset.cpus.exclusive" is not set, it is
treated as having an implicit value of "cpuset.cpus" in the
formation of a local partition.
2535 | ||
877c737d WL |
2536 | cpuset.cpus.isolated |
2537 | A read-only and root cgroup only multiple values file. | |
2538 | ||
2539 | This file shows the set of all isolated CPUs used in existing | |
2540 | isolated partitions. It will be empty if no isolated partition | |
2541 | is created. | |
2542 | ||
b1e3aeb1 | 2543 | cpuset.cpus.partition |
90e92f2d WL |
2544 | A read-write single value file which exists on non-root |
2545 | cpuset-enabled cgroups. This flag is owned by the parent cgroup | |
2546 | and is not delegatable. | |
2547 | ||
8a32d0fe | 2548 | It accepts only the following input values when written to. |
90e92f2d | 2549 | |
8cbfdc24 WL |
2550 | ========== ===================================== |
2551 | "member" Non-root member of a partition | |
2552 | "root" Partition root | |
2553 | "isolated" Partition root without load balancing | |
2554 | ========== ===================================== | |
2555 | ||
efdf7532 WL |
2556 | A cpuset partition is a collection of cpuset-enabled cgroups with |
2557 | a partition root at the top of the hierarchy and its descendants | |
2558 | except those that are separate partition roots themselves and | |
2559 | their descendants. A partition has exclusive access to the | |
2560 | set of exclusive CPUs allocated to it. Other cgroups outside | |
2561 | of that partition cannot use any CPUs in that set. | |
2562 | ||
2563 | There are two types of partitions - local and remote. A local | |
2564 | partition is one whose parent cgroup is also a valid partition | |
2565 | root. A remote partition is one whose parent cgroup is not a | |
2566 | valid partition root itself. Writing to "cpuset.cpus.exclusive" | |
2567 | is optional for the creation of a local partition as its | |
2568 | "cpuset.cpus.exclusive" file will assume an implicit value that | |
2569 | is the same as "cpuset.cpus" if it is not set. Writing the | |
2570 | proper "cpuset.cpus.exclusive" values down the cgroup hierarchy | |
2571 | before the target partition root is mandatory for the creation | |
2572 | of a remote partition. | |
2573 | ||
2574 | Currently, a remote partition cannot be created under a local | |
2575 | partition. All the ancestors of a remote partition root except | |
2576 | the root cgroup cannot be a partition root. | |
2577 | ||
2578 | The root cgroup is always a partition root and its state cannot | |
2579 | be changed. All other non-root cgroups start out as "member". | |
8cbfdc24 WL |
2580 | |
2581 | When set to "root", the current cgroup is the root of a new | |
efdf7532 WL |
2582 | partition or scheduling domain. The set of exclusive CPUs is |
2583 | determined by the value of its "cpuset.cpus.exclusive.effective". | |
8cbfdc24 | 2584 | |
72c6303a WL |
2585 | When set to "isolated", the CPUs in that partition will be in |
2586 | an isolated state without any load balancing from the scheduler | |
2587 | and excluded from the unbound workqueues. Tasks placed in such | |
2588 | a partition with multiple CPUs should be carefully distributed | |
2589 | and bound to each of the individual CPUs for optimal performance. | |
8cbfdc24 | 2590 | |
8cbfdc24 WL |
2591 | A partition root ("root" or "isolated") can be in one of the |
2592 | two possible states - valid or invalid. An invalid partition | |
2593 | root is in a degraded state where some state information may | |
2594 | be retained, but behaves more like a "member". | |
2595 | ||
2596 | All possible state transitions among "member", "root" and | |
2597 | "isolated" are allowed. | |
2598 | ||
2599 | On read, the "cpuset.cpus.partition" file can show the following | |
2600 | values. | |
2601 | ||
2602 | ============================= ===================================== | |
2603 | "member" Non-root member of a partition | |
2604 | "root" Partition root | |
2605 | "isolated" Partition root without load balancing | |
2606 | "root invalid (<reason>)" Invalid partition root | |
2607 | "isolated invalid (<reason>)" Invalid isolated partition root | |
2608 | ============================= ===================================== | |
2609 | ||
In the case of an invalid partition root, a descriptive string
explaining why the partition is invalid is included within parentheses.
2612 | ||
efdf7532 | 2613 | For a local partition root to be valid, the following conditions |
8cbfdc24 WL |
2614 | must be met. |
2615 | ||
efdf7532 WL |
2616 | 1) The parent cgroup is a valid partition root. |
2617 | 2) The "cpuset.cpus.exclusive.effective" file cannot be empty, | |
2618 | though it may contain offline CPUs. | |
2619 | 3) The "cpuset.cpus.effective" cannot be empty unless there is | |
8cbfdc24 WL |
2620 | no task associated with this partition. |
2621 | ||
efdf7532 WL |
2622 | For a remote partition root to be valid, all the above conditions |
2623 | except the first one must be met. | |
8cbfdc24 | 2624 | |
efdf7532 WL |
2625 | External events like hotplug or changes to "cpuset.cpus" or |
2626 | "cpuset.cpus.exclusive" can cause a valid partition root to | |
2627 | become invalid and vice versa. Note that a task cannot be | |
2628 | moved to a cgroup with empty "cpuset.cpus.effective". | |
8cbfdc24 WL |
2629 | |
2630 | A valid non-root parent partition may distribute out all its CPUs | |
efdf7532 WL |
2631 | to its child local partitions when there is no task associated |
2632 | with it. | |
8cbfdc24 | 2633 | |
efdf7532 WL |
Care must be taken when changing a valid partition root to "member",
as all its child local partitions, if present, will become
invalid, causing disruption to tasks running in those child
2637 | partitions. These inactivated partitions could be recovered if | |
2638 | their parent is switched back to a partition root with a proper | |
efdf7532 | 2639 | value in "cpuset.cpus" or "cpuset.cpus.exclusive". |
8cbfdc24 WL |
2640 | |
2641 | Poll and inotify events are triggered whenever the state of | |
2642 | "cpuset.cpus.partition" changes. That includes changes caused | |
2643 | by write to "cpuset.cpus.partition", cpu hotplug or other | |
2644 | changes that modify the validity status of the partition. | |
2645 | This will allow user space agents to monitor unexpected changes | |
2646 | to "cpuset.cpus.partition" without the need to do continuous | |
2647 | polling. | |
90e92f2d | 2648 | |
efdf7532 WL |
2649 | A user can pre-configure certain CPUs to an isolated state |
2650 | with load balancing disabled at boot time with the "isolcpus" | |
2651 | kernel boot command line option. If those CPUs are to be put | |
2652 | into a partition, they have to be used in an isolated partition. | |
2653 | ||


Device controller
-----------------

The device controller manages access to device files.  It includes
both creation of new device files (using mknod) and access to
existing device files.

The cgroup v2 device controller has no interface files and is
implemented on top of cgroup BPF.  To control access to device files,
a user may create BPF programs of type BPF_PROG_TYPE_CGROUP_DEVICE
and attach them to cgroups with the BPF_CGROUP_DEVICE flag.  On an
attempt to access a device file, the corresponding BPF programs will
be executed, and depending on the return value the attempt will
succeed or fail with -EPERM.

A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
bpf_cgroup_dev_ctx structure, which describes the device access
attempt: access type (mknod/read/write) and device (type, major and
minor numbers).  If the program returns 0, the attempt fails with
-EPERM, otherwise it succeeds.

An example of a BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source
tree.


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
    A read-write nested-keyed file that exists for all cgroups
    except root.  It describes the currently configured resource
    limit for an RDMA/IB device.

    Lines are keyed by device name and are not ordered.
    Each line contains a space-separated resource name and its
    configured limit that can be distributed.

    The following nested keys are defined.

      ========== =============================
      hca_handle Maximum number of HCA Handles
      hca_object Maximum number of HCA Objects
      ========== =============================

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=2 hca_object=2000
      ocrdma1 hca_handle=3 hca_object=max
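The nested-keyed format above is one line per device with `key=value`
pairs.  A hypothetical user space parser (function name and return
shape are illustrative, not part of any kernel or library API) could
read it as:

```python
def parse_nested_keyed(text):
    """Parse nested-keyed cgroup files such as rdma.max or rdma.current.

    Each line is '<device> key=value ...'; the value 'max' means
    unlimited.  Returns {device: {key: int or 'max'}}.
    """
    result = {}
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        device, pairs = fields[0], fields[1:]
        result[device] = {}
        for pair in pairs:
            key, _, value = pair.partition('=')
            result[device][key] = value if value == 'max' else int(value)
    return result
```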

  rdma.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except root.

    An example for mlx4 and ocrdma devices follows::

      mlx4_0 hca_handle=1 hca_object=20
      ocrdma1 hca_handle=1 hca_object=23


DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions.  Because each memory region may have its own
page size, which does not have to be equal to the system page size,
the units are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
    A read-write nested-keyed file that exists for all cgroups
    except root.  It describes the currently configured resource
    limit for a region.

    An example for xe follows::

      drm/0000:03:00.0/vram0 1073741824
      drm/0000:03:00.0/stolen max

    The semantics are the same as for the memory cgroup controller,
    and are calculated in the same way.

  dmem.capacity
    A read-only file that describes maximum region capacity.
    It only exists on the root cgroup.  Not all memory can be
    allocated by cgroups, as the kernel reserves some for
    internal use.

    An example for xe follows::

      drm/0000:03:00.0/vram0 8514437120
      drm/0000:03:00.0/stolen 67108864

  dmem.current
    A read-only file that describes current resource usage.
    It exists for all cgroups except root.

    An example for xe follows::

      drm/0000:03:00.0/vram0 12550144
      drm/0000:03:00.0/stolen 8650752


HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control
group and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
    Shows current usage for "hugepagesize" hugetlb.  It exists for
    all cgroups except root.

  hugetlb.<hugepagesize>.max
    Set/show the hard limit of "hugepagesize" hugetlb usage.
    The default value is "max".  It exists for all cgroups except
    root.

  hugetlb.<hugepagesize>.events
    A read-only flat-keyed file which exists on non-root cgroups.

    max
      The number of allocation failures due to the HugeTLB limit

  hugetlb.<hugepagesize>.events.local
    Similar to hugetlb.<hugepagesize>.events but the fields in the
    file are local to the cgroup, i.e. not hierarchical.  The file
    modified event generated on this file reflects only the local
    events.

  hugetlb.<hugepagesize>.numa_stat
    Similar to memory.numa_stat, it shows the numa information of
    the hugetlb pages of <hugepagesize> in this cgroup.  Only
    active in-use hugetlb pages are included.  The per-node values
    are in bytes.


Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like
the other cgroup resources.  The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{}
in the include/linux/misc_cgroup.h file and the corresponding name
via misc_res_name[] in the kernel/cgroup/misc.c file.  The provider
of the resource must set its capacity prior to using the resource by
calling misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using the
charge and uncharge APIs.  All of the APIs to interact with the misc
controller are in include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The Miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup.  It
    shows miscellaneous scalar resources available on the platform
    along with their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups.  It shows
    the current usage of the resources in the cgroup and its
    children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups.  It shows the
    historical maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in the non-root cgroups.
    Allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    Limit can be set by::

      # echo res_a 1 > misc.max

    Limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.
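Because limits are not bounded by capacity, a monitoring agent that
wants actual headroom has to derive it from misc.capacity and
misc.current itself.  A sketch of that calculation (helper names are
illustrative; both files use the flat-keyed '<res> <value>' format):

```python
def misc_headroom(capacity_text, current_text):
    """Compute remaining headroom per misc resource.

    Usage is clamped to capacity so that headroom never goes
    negative even while usage transiently exceeds it.
    """
    def parse(text):
        return {k: int(v) for k, v in
                (line.split() for line in text.splitlines() if line.strip())}

    capacity = parse(capacity_text)
    current = parse(current_text)
    return {res: cap - min(current.get(res, 0), cap)
            for res, cap in capacity.items()}
```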

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups.
    The following entries are defined.  Unless specified otherwise,
    a value change in this file generates a file modified event.
    All fields in this file are hierarchical.

    max
      The number of times the cgroup's resource usage was
      about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to
    the cgroup, i.e. not hierarchical.  The file modified event
    generated on this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it
is used first, and stays charged to that cgroup until that resource
is freed.  Migrating a process to a different cgroup does not move
the charge to the destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path.  The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of
the root cgroup.  This child cgroup's weight is dependent on its
thread's nice level.

For details of this mapping see the sched_prio_to_weight array in
the kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
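The scaling described above can be sketched as follows.  The table
holds the sched_prio_to_weight values as found in recent
kernel/sched/core.c; the helper name and rounding choice are
illustrative assumptions, not kernel code:

```python
# nice -20 (index 0) through nice 19 (index 39), five entries per row.
SCHED_PRIO_TO_WEIGHT = [
    88761, 71755, 56483, 46273, 36291,
    29154, 23254, 18705, 14949, 11916,
     9548,  7620,  6100,  4904,  3906,
     3121,  2501,  1991,  1586,  1277,
     1024,   820,   655,   526,   423,
      335,   272,   215,   172,   137,
      110,    87,    70,    56,    45,
       36,    29,    23,    18,    15,
]

def nice_to_cgroup_weight(nice):
    """Scale a thread's nice level to a cgroup-style weight where
    the neutral value (nice 0, raw weight 1024) maps to 100."""
    raw = SCHED_PRIO_TO_WEIGHT[nice + 20]
    return round(raw * 100 / 1024)
```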


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.
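To see what this means for bandwidth shares, the implicit node can
simply be added to the set of sibling weights.  A small illustrative
calculation (not kernel code; the function name is made up):

```python
def root_io_shares(child_weights, implicit_weight=200):
    """Fraction of IO going to each explicit child of the root cgroup
    and to root processes, which compete as an implicit leaf child
    with weight 200."""
    total = sum(child_weights) + implicit_weight
    return [w / total for w in child_weights], implicit_weight / total
```

For example, with two explicit children of weight 100 and 300, root
processes as a group receive 200/600, i.e. one third, of the IO.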


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts.  The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace.  The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root.  The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process.  In a container setup where
a set of cgroups and namespaces are intended to isolate processes the
"/proc/$PID/cgroup" file may leak potential system-level information
to the isolated processes.  For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes.  cgroup namespace
can be used to restrict visibility of this path.  For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads).  This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it.  When the last usage goes away, the cgroup
namespace is destroyed.  The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running.  For example, if a process in
the /batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root.  For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown.  For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
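The displayed path can be reproduced from the caller's cgroupns root
and the target's absolute cgroup with an ordinary relative-path
computation.  The following sketch is an interpretation of the
examples above, not kernel code, and the function name is made up:

```python
import posixpath

def cgroupns_view(caller_root, target_cgroup):
    """Path shown in /proc/<pid>/cgroup for target_cgroup when read
    from a cgroup namespace rooted at caller_root; per the convention
    above, the result always begins with '/'."""
    if target_cgroup == caller_root:
        return "/"
    return "/" + posixpath.relpath(target_cgroup, caller_root)
```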


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups.  For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged.  A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace.  It is expected that someone moves the attaching
process under the target cgroup namespace root.


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root
as the filesystem root.  The process needs CAP_SYS_ADMIN against its
user and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
3071 | ||
633b11be MCC |
3072 | Information on Kernel Programming |
3073 | ================================= | |
6c292092 TH |
3074 | |
3075 | This section contains kernel programming information in the areas | |
3076 | where interacting with cgroup is necessary. cgroup core and | |
3077 | controllers are not covered. | |
3078 | ||
3079 | ||
633b11be MCC |
3080 | Filesystem Support for Writeback |
3081 | -------------------------------- | |
6c292092 TH |
3082 | |
3083 | A filesystem can support cgroup writeback by updating | |
6b0dfabb | 3084 | address_space_operations->writepages() to annotate bio's using the |
6c292092 TH |
3085 | following two functions. |
3086 | ||
3087 | wbc_init_bio(@wbc, @bio) | |
6c292092 | 3088 | Should be called for each bio carrying writeback data and |
fd42df30 DZ |
3089 | associates the bio with the inode's owner cgroup and the |
3090 | corresponding request queue. This must be called after | |
3091 | a queue (device) has been associated with the bio and | |
3092 | before submission. | |
6c292092 | 3093 | |
30dac24e | 3094 | wbc_account_cgroup_owner(@wbc, @folio, @bytes) |
6c292092 TH |
3095 | Should be called for each data segment being written out. |
3096 | While this function doesn't care exactly when it's called | |
3097 | during the writeback session, it's the easiest and most | |
3098 | natural to call it as data segments are added to a bio. | |
3099 | ||
3100 | With writeback bio's annotated, cgroup support can be enabled per | |
3101 | super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for | |
3102 | selective disabling of cgroup writeback support which is helpful when | |
3103 | certain filesystem features, e.g. journaled data mode, are | |
3104 | incompatible. | |
3105 | ||
3106 | wbc_init_bio() binds the specified bio to its cgroup. Depending on | |
3107 | the configuration, the bio may be executed at a lower priority and if | |
3108 | the writeback session is holding shared resources, e.g. a journal | |
3109 | entry, may lead to priority inversion. There is no one easy solution | |
3110 | for the problem. Filesystems can try to work around specific problem | |
fd42df30 | 3111 | cases by skipping wbc_init_bio() and using bio_associate_blkg() |
6c292092 TH |
3112 | directly. |


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2.  Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers.  While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one.  The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated.  Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy.  It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy.  Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy.  This often meant that userland ended up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length.  The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies.  This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary.  What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller.  In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers.  For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity.  cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them.  This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way.  For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it.  This is not only extremely clunky
and unusual but also inherently racy.  There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem.  cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details.  These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel.  Userland ended up with
misbehaving and poorly abstracted interfaces while the kernel
inadvertently exposed and became locked into these constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources.  This was nasty as two
different types of entities competed and there was no obvious way to
settle it.  Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights.  This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues.  The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads.  The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed.  While this allowed equivalent
control over internal threads, it came with serious drawbacks.  It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined.  There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
3269 | ||
3270 | ||
633b11be MCC |
3271 | Other Interface Issues |
3272 | ---------------------- | |
6c292092 TH |
3273 | |
3274 | cgroup v1 grew without oversight and developed a large number of | |
3275 | idiosyncrasies and inconsistencies. One issue on the cgroup core side | |
3276 | was how an empty cgroup was notified - a userland helper binary was | |
3277 | forked and executed for each event. The event delivery wasn't | |
3278 | recursive or delegatable. The limitations of the mechanism also led | |
3279 | to in-kernel event delivery filtering mechanism further complicating | |
3280 | the interface. | |

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is by default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not only
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.
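For illustration, such a top-down reserve could be configured along a
delegated subtree like this (the cgroup paths and values here are
hypothetical)::

  # echo 200M > /sys/fs/cgroup/parent/memory.low
  # echo 150M > /sys/fs/cgroup/parent/delegated/memory.low

A child's effective low is capped by the protection its ancestors
pass down, so a delegatee cannot claim more than it was granted.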

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.
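That tuning loop might look as follows (the cgroup path and the value
are hypothetical); the "high" counter in memory.events shows how often
the boundary was breached and can guide the next adjustment::

  # echo 800M > /sys/fs/cgroup/workload/memory.high
  # grep high /sys/fs/cgroup/workload/memory.events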

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
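For example (hypothetical path and value), the new limit can simply be
written even while usage is above it; the kernel then reclaims, and
OOM kills as a last resort, until the limit holds::

  # echo 500M > /sys/fs/cgroup/workload/memory.max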

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.
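
As a sketch (hypothetical path and values), v2 exposes swap as its own
resource next to memory instead of a combined memsw counter::

  # echo 1G > /sys/fs/cgroup/workload/memory.max
  # echo 512M > /sys/fs/cgroup/workload/memory.swap.max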