.. _cgroup-v2:

================
Control Group v2
================

:Author: Tejun Heo <tj@kernel.org>
10 This is the authoritative documentation on the design, interface and
11 conventions of cgroup v2. It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
13 future changes must be reflected in this document. Documentation for
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
23 2-2. Organizing Processes and Threads
26 2-3. [Un]populated Notification
27 2-4. Controlling Controllers
28 2-4-1. Enabling and Disabling
29 2-4-2. Top-down Constraint
30 2-4-3. No Internal Process Constraint
32 2-5-1. Model of Delegation
33 2-5-2. Delegation Containment
35 2-6-1. Organize Once and Control
36 2-6-2. Avoid Name Collisions
37 3. Resource Distribution Models
45 4-3. Core Interface Files
48 5-1-1. CPU Interface Files
50 5-2-1. Memory Interface Files
51 5-2-2. Usage Guidelines
52 5-2-3. Memory Ownership
54 5-3-1. IO Interface Files
57 5-3-3-1. How IO Latency Throttling Works
58 5-3-3-2. IO Latency Interface Files
61 5-4-1. PID Interface Files
5-5-1. Cpuset Interface Files
66 5-7-1. RDMA Interface Files
5-9-1. HugeTLB Interface Files
5-10-1. Miscellaneous cgroup Interface Files
5-10-2. Migration and Ownership
75 5-N. Non-normative information
76 5-N-1. CPU controller root cgroup process behaviour
77 5-N-2. IO controller root cgroup process behaviour
80 6-2. The Root and Views
81 6-3. Migration and setns(2)
82 6-4. Interaction with Other Namespaces
83 P. Information on Kernel Programming
84 P-1. Filesystem Support for Writeback
85 D. Deprecated v1 Core Features
86 R. Issues with v1 and Rationales for v2
87 R-1. Multiple Hierarchies
88 R-2. Thread Granularity
89 R-3. Competition Between Inner Nodes and Threads
90 R-4. Other Interface Issues
91 R-5. Controller Issues and Remedies
Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
102 singular form is used to designate the whole feature and also as a
103 qualifier as in "cgroup controllers". When explicitly referring to
104 multiple individual control groups, the plural form "cgroups" is used.
What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.
114 cgroup is largely composed of two parts - the core and controllers.
115 cgroup core is primarily responsible for hierarchically organizing
116 processes. A cgroup controller is usually responsible for
117 distributing a specific type of system resource along the hierarchy
118 although there are utility controllers which serve purposes other than
119 resource distribution.
121 cgroups form a tree structure and every process in the system belongs
122 to one and only one cgroup. All threads of a process belong to the
123 same cgroup. On creation, all processes are put in the cgroup that
124 the parent process belongs to at the time. A process can be migrated
125 to another cgroup. Migration of a process doesn't affect already
126 existing descendant processes.
128 Following certain structural constraints, controllers may be enabled or
129 disabled selectively on a cgroup. All controller behaviors are
130 hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
132 sub-hierarchy of the cgroup. When a controller is enabled on a nested
133 cgroup, it always restricts the resource distribution further. The
134 restrictions set closer to the root in the hierarchy can not be
135 overridden from further away.
Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
145 hierarchy can be mounted with the following mount command::
147 # mount -t cgroup2 none $MOUNT_POINT
149 cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
150 controllers which support v2 and are not bound to a v1 hierarchy are
151 automatically bound to the v2 hierarchy and show up at the root.
152 Controllers which are not in active use in the v2 hierarchy can be
153 bound to other hierarchies. This allows mixing v2 hierarchy with the
154 legacy v1 multiple hierarchies in a fully backward compatible way.
156 A controller can be moved across hierarchies only after the controller
157 is no longer referenced in its current hierarchy. Because per-cgroup
158 controller states are destroyed asynchronously and controllers may
159 have lingering references, a controller may not show up immediately on
160 the v2 hierarchy after the final umount of the previous hierarchy.
161 Similarly, a controller should be fully disabled to be moved out of
162 the unified hierarchy and it may take some time for the disabled
163 controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled as well.
167 While useful for development and manual configurations, moving
168 controllers dynamically between the v2 and other hierarchies is
169 strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
171 controllers after system boot.
173 During transition to v2, system management software might still
174 automount the v1 cgroup filesystem and so hijack all controllers
175 during boot, before manual intervention is possible. To make testing
176 and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.
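For example, booting with the following kernel command line keeps all
controllers out of v1 so that they are all available in v2 (a minimal
illustration; a comma-separated list of controller names is also
accepted)::

  cgroup_no_v1=all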
179 cgroup v2 currently supports the following mount options.
nsdelegate
Consider cgroup namespaces as delegation boundaries. This
183 option is system wide and can only be set on mount or modified
184 through remount from the init namespace. The mount option is
185 ignored on non-init namespace mounts. Please refer to the
186 Delegation section for details.
favordynmods
Reduce the latencies of dynamic cgroup modifications such as
190 task migrations and controller on/offs at the cost of making
191 hot path operations such as forks and exits more expensive.
192 The static usage pattern of creating a cgroup, enabling
193 controllers, and then seeding it with CLONE_INTO_CGROUP is
194 not affected by this option.
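As an illustrative sketch, the option is passed like any other cgroup2
mount option::

  # mount -t cgroup2 -o favordynmods none $MOUNT_POINT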
memory_localevents
Only populate memory.events with data for the current cgroup,
and not any subtrees. This is legacy behavior; the default
behavior without this option is to include subtree counts.
200 This option is system wide and can only be set on mount or
201 modified through remount from the init namespace. The mount
202 option is ignored on non-init namespace mounts.
memory_recursiveprot
Recursively apply memory.min and memory.low protection to
206 entire subtrees, without requiring explicit downward
207 propagation into leaf cgroups. This allows protecting entire
208 subtrees from one another, while retaining free competition
209 within those subtrees. This should have been the default
210 behavior but is a mount-option to avoid regressing setups
211 relying on the original semantics (e.g. specifying bogusly
212 high 'bypass' protection values at higher tree levels).
214 memory_hugetlb_accounting
215 Count HugeTLB memory usage towards the cgroup's overall
216 memory usage for the memory controller (for the purpose of
statistics reporting and memory protection). This is a new
218 behavior that could regress existing setups, so it must be
219 explicitly opted in with this mount option.
221 A few caveats to keep in mind:
223 * There is no HugeTLB pool management involved in the memory
224 controller. The pre-allocated pool does not belong to anyone.
225 Specifically, when a new HugeTLB folio is allocated to
226 the pool, it is not accounted for from the perspective of the
227 memory controller. It is only charged to a cgroup when it is
actually used (e.g. at page fault time). Host memory
229 overcommit management has to consider this when configuring
230 hard limits. In general, HugeTLB pool management should be
231 done via other mechanisms (such as the HugeTLB controller).
232 * Failure to charge a HugeTLB folio to the memory controller
233 results in SIGBUS. This could happen even if the HugeTLB pool
234 still has pages available (but the cgroup limit is hit and
235 reclaim attempt fails).
236 * Charging HugeTLB memory towards the memory controller affects
237 memory protection and reclaim dynamics. Any userspace tuning
(of low, min limits, for example) needs to take this into account.
239 * HugeTLB pages utilized while this option is not selected
240 will not be tracked by the memory controller (even if cgroup
241 v2 is remounted later on).
pids_localevents
The option restores v1-like behavior of pids.events:max, that is,
only local (inside cgroup proper) fork failures are counted. Without
this option pids.events:max represents any pids.max enforcement across
the cgroup's subtree.
Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~
257 Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME
262 A given cgroup may have multiple child cgroups forming a tree
263 structure. Each cgroup has a read-writable interface file
264 "cgroup.procs". When read, it lists the PIDs of all processes which
265 belong to the cgroup one-per-line. The PIDs are not ordered and the
266 same PID may show up more than once if the process got moved to
267 another cgroup and then back or the PID got recycled while reading.
269 A process can be migrated into a cgroup by writing its PID to the
270 target cgroup's "cgroup.procs" file. Only one process can be migrated
271 on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
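For example, assuming a hypothetical target cgroup at
"/sys/fs/cgroup/target", a process can be migrated with::

  # echo $PID > /sys/fs/cgroup/target/cgroup.procs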
275 When a process forks a child process, the new process is born into the
276 cgroup that the forking process belongs to at the time of the
277 operation. After exit, a process stays associated with the cgroup
278 that it belonged to at the time of exit until it's reaped; however, a
279 zombie process does not appear in "cgroup.procs" and thus can't be
280 moved to another cgroup.
282 A cgroup which doesn't have any children or live processes can be
283 destroyed by removing the directory. Note that a cgroup which doesn't
284 have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME
289 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
290 cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::
294 # cat /proc/842/cgroup
296 0::/test-cgroup/test-cgroup-nested
298 If the process becomes a zombie and the cgroup it was associated with
299 is removed subsequently, " (deleted)" is appended to the path::
301 # cat /proc/842/cgroup
303 0::/test-cgroup/test-cgroup-nested (deleted)
Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
310 support use cases requiring hierarchical resource distribution across
311 the threads of a group of processes. By default, all threads of a
312 process belong to the same cgroup, which also serves as the resource
313 domain to host resource consumptions which are not specific to a
314 process or thread. The thread mode allows threads to be spread across
315 a subtree while still maintaining the common resource domain for them.
317 Controllers which support thread mode are called threaded controllers.
318 The ones which don't are called domain controllers.
320 Marking a cgroup threaded makes it join the resource domain of its
321 parent as a threaded cgroup. The parent may be another threaded
322 cgroup whose resource domain is further up in the hierarchy. The root
323 of a threaded subtree, that is, the nearest ancestor which is not
324 threaded, is called threaded domain or thread root interchangeably and
325 serves as the resource domain for the entire subtree.
327 Inside a threaded subtree, threads of a process can be put in
328 different cgroups and are not subject to the no internal process
329 constraint - threaded controllers can be enabled on non-leaf cgroups
330 whether they have threads in them or not.
332 As the threaded domain cgroup hosts all the domain resource
333 consumptions of the subtree, it is considered to have internal
334 resource consumptions whether there are processes in it or not and
335 can't have populated child cgroups which aren't threaded. Because the
336 root cgroup is not subject to no internal process constraint, it can
337 serve both as a threaded domain and a parent to domain cgroups.
339 The current operation mode or type of the cgroup is shown in the
340 "cgroup.type" file which indicates whether the cgroup is a normal
341 domain, a domain which is serving as the domain of a threaded subtree,
342 or a threaded cgroup.
344 On creation, a cgroup is always a domain cgroup and can be made
345 threaded by writing "threaded" to the "cgroup.type" file. The
346 operation is single direction::
348 # echo threaded > cgroup.type
350 Once threaded, the cgroup can't be made a domain again. To enable the
351 thread mode, the following conditions must be met.
- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.
356 - When the parent is an unthreaded domain, it must not have any domain
357 controllers enabled or populated domain children. The root is
358 exempt from this requirement.
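A minimal sketch of entering thread mode, assuming the current
directory is a non-root domain cgroup satisfying the above conditions
and "workers" is a hypothetical child; $TID is a thread already in
this resource domain::

  # mkdir workers
  # echo threaded > workers/cgroup.type
  # cat cgroup.type
  domain threaded
  # echo $TID > workers/cgroup.threads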
360 Topology-wise, a cgroup can be in an invalid state. Please consider
361 the following topology::
363 A (threaded domain) - B (threaded) - C (domain, just created)
365 C is created as a domain but isn't connected to a parent which can
366 host child domains. C can't be used until it is turned into a
367 threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
368 these cases. Operations which fail due to invalid topology use
369 EOPNOTSUPP as the errno.
371 A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
373 "cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
377 When read, "cgroup.threads" contains the list of the thread IDs of all
378 threads in the cgroup. Except that the operations are per-thread
379 instead of per-process, "cgroup.threads" has the same format and
380 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
381 written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
385 The threaded domain cgroup serves as the resource domain for the whole
386 subtree, and, while the threads can be scattered across the subtree,
387 all the processes are considered to be in the threaded domain cgroup.
388 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
389 processes in the subtree and is not readable in the subtree proper.
390 However, "cgroup.procs" can be written to from anywhere in the subtree
391 to migrate all threads of the matching process to the cgroup.
393 Only threaded controllers can be enabled in a threaded subtree. When
394 a threaded controller is enabled inside a threaded subtree, it only
395 accounts for and controls resource consumptions associated with the
396 threads in the cgroup and its descendants. All consumptions which
397 aren't tied to a specific thread belong to the threaded domain cgroup.
399 Because a threaded subtree is exempt from no internal process
400 constraint, a threaded controller must be able to handle competition
401 between threads in a non-leaf cgroup and its child cgroups. Each
402 threaded controller defines how such competitions are handled.
404 Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids
412 [Un]populated Notification
413 --------------------------
415 Each non-root cgroup has a "cgroup.events" file which contains
416 "populated" field indicating whether the cgroup's sub-hierarchy has
417 live processes in it. Its value is 0 if there is no live process in
418 the cgroup and its descendants; otherwise, 1. poll and [id]notify
419 events are triggered when the value changes. This can be used, for
420 example, to start a clean-up operation after all processes of a given
421 sub-hierarchy have exited. The populated state updates and
422 notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)
429 A, B and C's "populated" fields would be 1 while D's 0. After the one
430 process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
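For example, a clean-up service could watch for the file modified
events with inotify; a sketch using the inotifywait utility from
inotify-tools (an assumption - any inotify or poll based client
works)::

  # inotifywait -m -e modify /sys/fs/cgroup/A/cgroup.events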
435 Controlling Controllers
436 -----------------------
438 Enabling and Disabling
439 ~~~~~~~~~~~~~~~~~~~~~~
441 Each cgroup has a "cgroup.controllers" file which lists all
442 controllers available for the cgroup to enable::
# cat cgroup.controllers
cpu io memory
447 No controller is enabled by default. Controllers can be enabled and
448 disabled by writing to the "cgroup.subtree_control" file::
450 # echo "+cpu +memory -io" > cgroup.subtree_control
452 Only controllers which are listed in "cgroup.controllers" can be
453 enabled. When multiple operations are specified as above, either they
454 all succeed or fail. If multiple operations on the same controller
455 are specified, the last one is effective.
457 Enabling a controller in a cgroup indicates that the distribution of
458 the target resource across its immediate children will be controlled.
459 Consider the following sub-hierarchy. The enabled controllers are
460 listed in parentheses::
A(cpu,memory) - B(memory) - C()
                          \ D()
465 As A has "cpu" and "memory" enabled, A will control the distribution
466 of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
468 cycles but their division of memory available to B will be controlled.
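A sketch of how the above configuration could be set up, assuming A
and its children already exist under the current directory::

  # echo "+cpu +memory" > A/cgroup.subtree_control
  # echo "+memory" > A/B/cgroup.subtree_control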
470 As a controller regulates the distribution of the target resource to
471 the cgroup's children, enabling it creates the controller's interface
472 files in the child cgroups. In the above example, enabling "cpu" on B
473 would create the "cpu." prefixed controller interface files in C and
474 D. Likewise, disabling "memory" from B would remove the "memory."
475 prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
484 a resource only if the resource has been distributed to it from the
485 parent. This means that all non-root "cgroup.subtree_control" files
486 can only contain controllers which are enabled in the parent's
487 "cgroup.subtree_control" file. A controller can be enabled only if
488 the parent has the controller enabled and a controller can't be
489 disabled if one or more children have it enabled.
492 No Internal Process Constraint
493 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
495 Non-root cgroups can distribute domain resources to their children
496 only when they don't have any processes of their own. In other words,
497 only domain cgroups which don't contain any processes can have domain
498 controllers enabled in their "cgroup.subtree_control" files.
500 This guarantees that, when a domain controller is looking at the part
501 of the hierarchy which has it enabled, processes are always only on
502 the leaves. This rules out situations where child cgroups compete
503 against internal processes of the parent.
505 The root cgroup is exempt from this restriction. Root contains
506 processes and anonymous resource consumption which can't be associated
507 with any other cgroups and requires special treatment from most
508 controllers. How resource consumption in the root cgroup is governed
509 is up to each controller (for more information on this topic please
510 refer to the Non-normative information section in the Controllers
513 Note that the restriction doesn't get in the way if there is no
514 enabled controller in the cgroup's "cgroup.subtree_control". This is
515 important as otherwise it wouldn't be possible to create children of a
516 populated cgroup. To control resource distribution of a cgroup, the
517 cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
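A minimal sketch of this pattern from inside a populated cgroup, using
a hypothetical child named "leaf" (the transfer loop is racy against
forks and exits; real management software needs to handle failures)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control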
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
529 user by granting write access of the directory and its "cgroup.procs",
530 "cgroup.threads" and "cgroup.subtree_control" files to the user.
531 Second, if the "nsdelegate" mount option is set, automatically to a
532 cgroup namespace on namespace creation.
534 Because the resource control interface files in a given directory
535 control the distribution of the parent's resources, the delegatee
536 shouldn't be allowed to write to them. For the first method, this is
537 achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by means
539 of at least mount namespacing, and the kernel rejects writes to all
540 files on a namespace root from inside the cgroup namespace, except for
541 those files listed in "/sys/kernel/cgroup/delegate" (including
542 "cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).
544 The end results are equivalent for both delegation types. Once
545 delegated, the user can build sub-hierarchy under the directory,
546 organize processes inside it as it sees fit and further distribute the
547 resources it received from the parent. The limits and other settings
548 of all resource controllers are hierarchical and regardless of what
549 happens in the delegated sub-hierarchy, nothing can escape the
550 resource restrictions imposed by the parent.
552 Currently, cgroup doesn't impose any restrictions on the number of
553 cgroups in or nesting depth of a delegated sub-hierarchy; however,
554 this may be limited explicitly in the future.
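For the first delegation method, a sketch of handing the hypothetical
cgroup "/sys/fs/cgroup/delegated" to a hypothetical user "u1" could
look like::

  # chown u1 /sys/fs/cgroup/delegated
  # chown u1 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u1 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u1 /sys/fs/cgroup/delegated/cgroup.subtree_control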
557 Delegation Containment
558 ~~~~~~~~~~~~~~~~~~~~~~
560 A delegated sub-hierarchy is contained in the sense that processes
561 can't be moved into or out of the sub-hierarchy by the delegatee.
563 For delegations to a less privileged user, this is achieved by
564 requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.
568 - The writer must have write access to the "cgroup.procs" file.
570 - The writer must have write access to the "cgroup.procs" file of the
571 common ancestor of the source and destination cgroups.
573 The above two constraints ensure that while a delegatee may migrate
574 processes around freely in the delegated sub-hierarchy it can't pull
575 in from or push out to outside the sub-hierarchy.
As an example, let's assume cgroups C0 and C1 have been delegated to
578 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
579 all processes under C0 and C1 belong to U0::
~~~~~~~~~~~~~ - C0 - C00
~ cgroup    ~      \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10
586 Let's also say U0 wants to write the PID of a process which is
587 currently in C10 into "C00/cgroup.procs". U0 has write access to the
588 file; however, the common ancestor of the source cgroup C10 and the
589 destination cgroup C00 is above the points of delegation and U0 would
590 not have write access to its "cgroup.procs" files and thus the write
591 will be denied with -EACCES.
593 For delegations to namespaces, containment is achieved by requiring
594 that both the source and destination cgroups are reachable from the
595 namespace of the process which is attempting the migration. If either
596 is not reachable, the migration is rejected with -ENOENT.
Guidelines
----------

Organize Once and Control
603 ~~~~~~~~~~~~~~~~~~~~~~~~~
605 Migrating a process across cgroups is a relatively expensive operation
606 and stateful resources such as memory are not moved together with the
607 process. This is an explicit design decision as there often exist
608 inherent trade-offs between migration and various hot paths in terms
609 of synchronization cost.
611 As such, migrating processes across cgroups frequently as a means to
612 apply different resource restrictions is discouraged. A workload
613 should be assigned to a cgroup according to the system's logical and
614 resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
619 Avoid Name Collisions
620 ~~~~~~~~~~~~~~~~~~~~~
Interface files for a cgroup and its child cgroups occupy the same
directory and it is possible to create child cgroups which collide
624 with interface files.
626 All cgroup core interface files are prefixed with "cgroup." and each
627 controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
629 '_'s but never begins with an '_' so it can be used as the prefix
630 character for collision avoidance. Also, interface file names won't
631 start or end with terms which are often used in categorizing workloads
632 such as job, service, slice, unit or workload.
634 cgroup doesn't do anything to prevent name collisions and it's the
635 user's responsibility to avoid them.
638 Resource Distribution Models
639 ============================
641 cgroup controllers implement several resource distribution schemes
642 depending on the resource type and expected use cases. This section
643 describes major schemes in use along with their expected behaviors.
Weights
-------

A parent's resource is distributed by adding up the weights of all
650 active children and giving each the fraction matching the ratio of its
651 weight against the sum. As only children which can make use of the
652 resource at the moment participate in the distribution, this is
653 work-conserving. Due to the dynamic nature, this model is usually
654 used for stateless resources.
656 All weights are in the range [1, 10000] with the default at 100. This
657 allows symmetric multiplicative biases in both directions at fine
658 enough granularity while staying in the intuitive range.
660 As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
664 "cpu.weight" proportionally distributes CPU cycles to active children
665 and is an example of this type.
668 .. _cgroupv2-limits-distributor:
Limits
------

A child can only consume up to the configured amount of the resource.
674 Limits can be over-committed - the sum of the limits of children can
675 exceed the amount of resource available to the parent.
Limits are in the range [0, max] and default to "max", which is noop.
679 As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
683 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
684 on an IO device and is an example of this type.
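For example, using the nested keys documented for "io.max" in the IO
controller section, a read bandwidth and a write IOPS limit could be
set on the device 8:16 with::

  # echo "8:16 rbps=2097152 wiops=120" > io.max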
686 .. _cgroupv2-protections-distributor:
Protections
-----------

A cgroup is protected up to the configured amount of the resource
692 as long as the usages of all its ancestors are under their
693 protected levels. Protections can be hard guarantees or best effort
694 soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop (e.g. if the protection is best effort).
701 As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.
705 "memory.low" implements best-effort memory protection and is an
706 example of this type.
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
713 resource. Allocations can't be over-committed - the sum of the
714 allocations of children can not exceed the amount of resource
715 available to the parent.
Allocations are in the range [0, max] and default to 0, which is no
resource at all.
720 As allocations can't be over-committed, some configuration
721 combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.
"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::
  New-line separated values
  (when only one value can be written at once)

      VAL0\n
      VAL1\n
      ...

  Space separated values
  (when read-only or multiple values can be written at once)

      VAL0 VAL1 ...\n

  Flat keyed

      KEY0 VAL0\n
      KEY1 VAL1\n
      ...

  Nested keyed

      KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
      KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
      ...
762 For a writable file, the format for writing should generally match
763 reading; however, controllers may allow omitting later fields or
764 implement restricted shortcuts for most common use cases.
766 For both flat and nested keyed files, only the values for a single key
767 can be written at a time. For nested keyed files, the sub key pairs
768 may be specified in any order and not all pairs have to be specified.
Conventions
-----------

- Settings for a single feature should be contained in a single file.
776 - The root cgroup should be exempt from resource control and thus
777 shouldn't have resource control interface files.
779 - The default time unit is microseconds. If a different unit is ever
780 used, an explicit unit suffix must be present.
782 - A parts-per quantity should use a percentage decimal with at least
two-digit fractional part - e.g. 13.40.
785 - If a controller implements weight based resource distribution, its
786 interface file should be named "weight" and have the range [1,
787 10000] with 100 as the default. The values are chosen to allow
788 enough and symmetric bias in both directions while keeping it
789 intuitive (the default is 100%).
791 - If a controller implements an absolute resource guarantee and/or
792 limit, the interface files should be named "min" and "max"
793 respectively. If a controller implements best effort resource
794 guarantee and/or limit, the interface files should be named "low"
795 and "high" respectively.
797 In the above four control files, the special token "max" should be
798 used to represent upward infinity for both reading and writing.
800 - If a setting has a configurable default value and keyed specific
801 overrides, the default entry should be keyed with "default" and
802 appear as the first entry in the file.
The default value can be updated by writing either "default $VAL" or
"$VAL".
807 When writing to update a specific override, "default" can be used as
808 the value to indicate removal of the override. Override entries
809 with "default" as the value must not appear when read.
811 For example, a setting which is keyed by major:minor device numbers
812 with integer values may look like the following::
# cat cgroup-example-interface-file
default 150
8:0 300
818 The default value can be updated by::
# echo 125 > cgroup-example-interface-file

or the following::
824 # echo "default 125" > cgroup-example-interface-file
826 An override can be set by::
# echo "8:16 170" > cgroup-example-interface-file

and cleared by::
832 # echo "8:0 default" > cgroup-example-interface-file
# cat cgroup-example-interface-file
default 125
8:16 170
837 - For events which are not very high frequency, an interface file
838 "events" should be created which lists event key value pairs.
839 Whenever a notifiable event happens, file modified event should be
840 generated on the file.
Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

cgroup.type
A read-write single value file which exists on non-root
cgroups.
852 When read, it indicates the current type of the cgroup, which
853 can be one of the following values.
855 - "domain" : A normal valid domain cgroup.
857 - "domain threaded" : A threaded domain cgroup which is
858 serving as the root of a threaded subtree.
860 - "domain invalid" : A cgroup which is in an invalid state.
861 It can't be populated or have controllers enabled. It may
862 be allowed to become a threaded cgroup.
- "threaded" : A threaded cgroup which is a member of a
  threaded subtree.
867 A cgroup can be turned into a threaded cgroup by writing
"threaded" to this file.

cgroup.procs
A read-write new-line separated values file which exists on
all cgroups.
874 When read, it lists the PIDs of all processes which belong to
875 the cgroup one-per-line. The PIDs are not ordered and the
876 same PID may show up more than once if the process got moved
to another cgroup and then back or the PID got recycled while
reading.
880 A PID can be written to migrate the process associated with
881 the PID to the cgroup. The writer should match all of the
882 following conditions.
884 - It must have write access to the "cgroup.procs" file.
886 - It must have write access to the "cgroup.procs" file of the
887 common ancestor of the source and destination cgroups.
889 When delegating a sub-hierarchy, write access to this file
890 should be granted along with the containing directory.
892 In a threaded cgroup, reading this file fails with EOPNOTSUPP
893 as all the processes belong to the thread root. Writing is
supported and moves every thread of the process to the cgroup.

cgroup.threads
A read-write new-line separated values file which exists on
all cgroups.
900 When read, it lists the TIDs of all threads which belong to
901 the cgroup one-per-line. The TIDs are not ordered and the
902 same TID may show up more than once if the thread got moved to
another cgroup and then back or the TID got recycled while
reading.
906 A TID can be written to migrate the thread associated with the
907 TID to the cgroup. The writer should match all of the
908 following conditions.
910 - It must have write access to the "cgroup.threads" file.
912 - The cgroup that the thread is currently in must be in the
913 same resource domain as the destination cgroup.
915 - It must have write access to the "cgroup.procs" file of the
916 common ancestor of the source and destination cgroups.
918 When delegating a sub-hierarchy, write access to this file
should be granted along with the containing directory.

cgroup.controllers
A read-only space separated values file which exists on all
cgroups.
925 It shows space separated list of all controllers available to
926 the cgroup. The controllers are not ordered.
928 cgroup.subtree_control
929 A read-write space separated values file which exists on all
930 cgroups. Starts out empty.
932 When read, it shows space separated list of the controllers
933 which are enabled to control resource distribution from the
934 cgroup to its children.
936 Space separated list of controllers prefixed with '+' or '-'
937 can be written to enable or disable controllers. A controller
938 name prefixed with '+' enables the controller and '-'
939 disables. If a controller appears more than once on the list,
940 the last one is effective. When multiple enable and disable
operations are specified, either all succeed or all fail.

cgroup.events
A read-only flat-keyed file which exists on non-root cgroups.
945 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

populated
1 if the cgroup or its descendants contain any live
processes; otherwise, 0.

frozen
1 if the cgroup is frozen; otherwise, 0.
955 cgroup.max.descendants
A read-write single value file. The default is "max".

Maximum allowed number of descendant cgroups.
If the actual number of descendants is equal to or larger,
an attempt to create a new cgroup in the hierarchy will fail.
cgroup.max.depth
A read-write single value file. The default is "max".

Maximum allowed descent depth below the current cgroup.
If the actual descent depth is equal to or larger,
967 an attempt to create a new child cgroup will fail.
cgroup.stat
A read-only flat-keyed file with the following entries:

nr_descendants
Total number of visible descendant cgroups.

nr_dying_descendants
Total number of dying descendant cgroups. A cgroup becomes
977 dying after being deleted by a user. The cgroup will remain
in a dying state for some undefined time (which can depend
979 on system load) before being completely destroyed.
A process can't enter a dying cgroup under any circumstances, and
982 a dying cgroup can't revive.
A dying cgroup can consume system resources not exceeding the
limits which were active at the moment of cgroup deletion.
987 nr_subsys_<cgroup_subsys>
Total number of live cgroup subsystems (e.g. memory
989 cgroup) at and beneath the current cgroup.
991 nr_dying_subsys_<cgroup_subsys>
992 Total number of dying cgroup subsystems (e.g. memory
993 cgroup) at and beneath the current cgroup.
cgroup.freeze
A read-write single value file which exists on non-root cgroups.
997 Allowed values are "0" and "1". The default is "0".
999 Writing "1" to the file causes freezing of the cgroup and all
1000 descendant cgroups. This means that all belonging processes will
be stopped and will not run until the cgroup is explicitly
1002 unfrozen. Freezing of the cgroup may take some time; when this action
1003 is completed, the "frozen" value in the cgroup.events control file
will be updated to "1" and the corresponding notification will be
issued.
1007 A cgroup can be frozen either by its own settings, or by settings
of any ancestor cgroups. If any ancestor cgroup is frozen, the
1009 cgroup will remain frozen.
1011 Processes in the frozen cgroup can be killed by a fatal signal.
1012 They also can enter and leave a frozen cgroup: either by an explicit
1013 move by a user, or if freezing of the cgroup races with fork().
1014 If a process is moved to a frozen cgroup, it stops. If a process is
1015 moved out of a frozen cgroup, it becomes running.
1017 Frozen status of a cgroup doesn't affect any cgroup tree operations:
1018 it's possible to delete a frozen (and empty) cgroup, as well as
1019 create new sub-cgroups.
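A short sketch of freezing a cgroup; once the state change completes,
"frozen 1" shows up in "cgroup.events"::

  # echo 1 > cgroup.freeze
  # cat cgroup.events
  populated 1
  frozen 1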
cgroup.kill
A write-only single value file which exists in non-root cgroups.
1023 The only allowed value is "1".
1025 Writing "1" to the file causes the cgroup and all descendant cgroups to
1026 be killed. This means that all processes located in the affected cgroup
1027 tree will be killed via SIGKILL.
1029 Killing a cgroup tree will deal with concurrent forks appropriately and
1030 is protected against migrations.
1032 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
1033 killing cgroups is a process directed operation, i.e. it affects
1034 the whole thread-group.
cgroup.pressure
A read-write single value file whose allowed values are "0" and "1".
The default is "1".
1040 Writing "0" to the file will disable the cgroup PSI accounting.
1041 Writing "1" to the file will re-enable the cgroup PSI accounting.
This control attribute is not hierarchical, so disabling or enabling
PSI accounting in a cgroup does not affect PSI accounting in
descendants, and enablement does not need to be passed down from the
root via ancestors.
1047 The reason this control attribute exists is that PSI accounts stalls for
1048 each cgroup separately and aggregates it at each level of the hierarchy.
This may cause non-negligible overhead for some workloads under a
deep hierarchy, in which case this control attribute can
1051 be used to disable PSI accounting in the non-leaf cgroups.
irq.pressure
A read-write nested-keyed file.
1056 Shows pressure stall information for IRQ/SOFTIRQ. See
1057 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Controllers
===========

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
1068 controller implements weight and absolute bandwidth limit models for
1069 normal scheduling policy and absolute bandwidth allocation model for
1070 realtime scheduling policy.
In all the above models, cycles distribution is defined only on a temporal
basis and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
1075 cpufreq governor about the minimum desired frequency which should always be
1076 provided by a CPU, as well as the maximum desired frequency, which should not
1077 be exceeded by a CPU.
1079 WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth) control of
1080 realtime processes. For a kernel built with the CONFIG_RT_GROUP_SCHED option
1081 enabled for group scheduling of realtime processes, the cpu controller can only
1082 be enabled when all RT processes are in the root cgroup. Be aware that system
1083 management software may already have placed RT processes into non-root cgroups
1084 during the system boot process, and these processes may need to be moved to the
1085 root cgroup before the cpu controller can be enabled with a
1086 CONFIG_RT_GROUP_SCHED enabled kernel.
1088 With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply and some of
1089 the interface files either affect realtime processes or account for them. See
1090 the following section for details. Only the cpu controller is affected by
1091 CONFIG_RT_GROUP_SCHED. Other controllers can be used for the resource control of
1092 realtime processes irrespective of CONFIG_RT_GROUP_SCHED.
CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its scheduling
1099 policy and the underlying scheduler. From the point of view of the cpu controller,
1100 processes can be categorized as follows:
1102 * Processes under the fair-class scheduler
1103 * Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
1104 * Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a BPF scheduler
1105 without the ``cgroup_set_weight`` callback
1107 For details on when a process is under the fair-class scheduler or a BPF scheduler,
1108 check out :ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.
1110 For each of the following interface files, the above categories
1111 will be referred to. All time durations are in microseconds.
cpu.stat
A read-only flat-keyed file.
1115 This file exists whether the controller is enabled or not.
1117 It always reports the following three stats, which account for all the
processes in the cgroup:

- usage_usec
- user_usec
- system_usec
1124 and the following five when the controller is enabled, which account for
only the processes under the fair-class scheduler:

- nr_periods
- nr_throttled
- throttled_usec
- nr_bursts
- burst_usec

cpu.weight
1134 A read-write single value file which exists on non-root
1135 cgroups. The default is "100".
For non-idle groups (cpu.idle = 0), the weight is in the
range [1, 10000].
1140 If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
1141 then the weight will show as a 0.
1143 This file affects only processes under the fair-class scheduler and a BPF
1144 scheduler with the ``cgroup_set_weight`` callback depending on what the
1145 callback actually does.
cpu.weight.nice
A read-write single value file which exists on non-root
1149 cgroups. The default is "0".
1151 The nice value is in the range [-20, 19].
1153 This interface file is an alternative interface for
1154 "cpu.weight" and allows reading and setting weight using the
1155 same values used by nice(2). Because the range is smaller and
1156 granularity is coarser for the nice values, the read value is
1157 the closest approximation of the current weight.
1159 This file affects only processes under the fair-class scheduler and a BPF
1160 scheduler with the ``cgroup_set_weight`` callback depending on what the
1161 callback actually does.
cpu.max
A read-write two value file which exists on non-root cgroups.
1165 The default is "max 100000".
The maximum bandwidth limit. It's in the following format::

  $MAX $PERIOD
1171 which indicates that the group may consume up to $MAX in each
1172 $PERIOD duration. "max" for $MAX indicates no limit. If only
1173 one number is written, $MAX is updated.
1175 This file affects only processes under the fair-class scheduler.
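For example, a sketch limiting the cgroup to half a CPU, i.e. 50ms of
runtime for every 100ms period::

  # echo "50000 100000" > cpu.max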
cpu.max.burst
A read-write single value file which exists on non-root
1179 cgroups. The default is "0".
1181 The burst in the range [0, $MAX].
1183 This file affects only processes under the fair-class scheduler.
cpu.pressure
A read-write nested-keyed file.
1188 Shows pressure stall information for CPU. See
1189 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1191 This file accounts for all the processes in the cgroup.
cpu.uclamp.min
A read-write single value file which exists on non-root cgroups.
1195 The default is "0", i.e. no utilization boosting.
1197 The requested minimum utilization (protection) as a percentage
1198 rational number, e.g. 12.34 for 12.34%.
1200 This interface allows reading and setting minimum utilization clamp
1201 values similar to the sched_setattr(2). This minimum utilization
1202 value is used to clamp the task specific minimum utilization clamp,
1203 including those of realtime processes.
1205 The requested minimum utilization (protection) is always capped by
the current value for the maximum utilization (limit), i.e.
`cpu.uclamp.max`.
1209 This file affects all the processes in the cgroup.
cpu.uclamp.max
A read-write single value file which exists on non-root cgroups.
The default is "max", i.e. no utilization capping.
1215 The requested maximum utilization (limit) as a percentage rational
1216 number, e.g. 98.76 for 98.76%.
1218 This interface allows reading and setting maximum utilization clamp
1219 values similar to the sched_setattr(2). This maximum utilization
1220 value is used to clamp the task specific maximum utilization clamp,
1221 including those of realtime processes.
1223 This file affects all the processes in the cgroup.
cpu.idle
A read-write single value file which exists on non-root cgroups.
The default is 0.

This is the cgroup analog of the per-task SCHED_IDLE sched policy.
Setting this value to 1 will make the scheduling policy of the
1231 cgroup SCHED_IDLE. The threads inside the cgroup will retain their
1232 own relative priorities, but the cgroup itself will be treated as
1233 very low priority relative to its peers.
1235 This file affects only processes under the fair-class scheduler.
Memory
------

The "memory" controller regulates distribution of memory. Memory is
1241 stateful and implements both limit and protection models. Due to the
1242 intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.
1246 While not completely water-tight, all major memory usages by a given
1247 cgroup are tracked so that the total memory consumption can be
1248 accounted and controlled to a reasonable extent. Currently, the
1249 following types of memory usages are tracked.
1251 - Userland memory - page cache and anonymous memory.
1253 - Kernel data structures such as dentries and inodes.
1255 - TCP socket buffers.
1257 The above list may expand in the future for better coverage.
1260 Memory Interface Files
1261 ~~~~~~~~~~~~~~~~~~~~~~
1263 All memory amounts are in bytes. If a value which is not aligned to
1264 PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

memory.current
A read-only single value file which exists on non-root
cgroups.
1271 The total amount of memory currently being used by the cgroup
1272 and its descendants.
memory.min
A read-write single value file which exists on non-root
1276 cgroups. The default is "0".
1278 Hard memory protection. If the memory usage of a cgroup
1279 is within its effective min boundary, the cgroup's memory
1280 won't be reclaimed under any conditions. If there is no
unprotected reclaimable memory available, the OOM killer
1282 is invoked. Above the effective min boundary (or
1283 effective low boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1287 Effective min boundary is limited by memory.min values of
1288 all ancestor cgroups. If there is memory.min overcommitment
1289 (child cgroup or cgroups are requiring more protected memory
1290 than parent will allow), then each child cgroup will get
1291 the part of parent's protection proportional to its
1292 actual memory usage below memory.min.
1294 Putting more memory than generally available under this
1295 protection is discouraged and may lead to constant OOMs.
1297 If a memory cgroup is not populated with processes,
1298 its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
1302 cgroups. The default is "0".
1304 Best-effort memory protection. If the memory usage of a
1305 cgroup is within its effective low boundary, the cgroup's
1306 memory won't be reclaimed unless there is no reclaimable
1307 memory available in unprotected cgroups.
1308 Above the effective low boundary (or
1309 effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1313 Effective low boundary is limited by memory.low values of
1314 all ancestor cgroups. If there is memory.low overcommitment
1315 (child cgroup or cgroups are requiring more protected memory
1316 than parent will allow), then each child cgroup will get
1317 the part of parent's protection proportional to its
1318 actual memory usage below memory.low.
1320 Putting more memory than generally available under this
1321 protection is discouraged.
memory.high
A read-write single value file which exists on non-root
1325 cgroups. The default is "max".
1327 Memory usage throttle limit. If a cgroup's usage goes
1328 over the high boundary, the processes of the cgroup are
1329 throttled and put under heavy reclaim pressure.
1331 Going over the high limit never invokes the OOM killer and
1332 under extreme conditions the limit may be breached. The high
1333 limit should be used in scenarios where an external process
monitors the limited cgroup to alleviate heavy reclaim
pressure.
1337 If memory.high is opened with O_NONBLOCK then the synchronous
1338 reclaim is bypassed. This is useful for admin processes that
1339 need to dynamically adjust the job's memory limits without
1340 expending their own CPU resources on memory reclamation. The
1341 job will trigger the reclaim and/or get throttled on its
1342 next charge request.
1344 Please note that with O_NONBLOCK, there is a chance that the
target memory cgroup may take an indefinite amount of time to
1346 reduce usage below the limit due to delayed charge request or
1347 busy-hitting its memory to slow down reclaim.
memory.max
A read-write single value file which exists on non-root
1351 cgroups. The default is "max".
1353 Memory usage hard limit. This is the main mechanism to limit
1354 memory usage of a cgroup. If a cgroup's memory usage reaches
1355 this limit and can't be reduced, the OOM killer is invoked in
1356 the cgroup. Under certain circumstances, the usage may go
1357 over the limit temporarily.
In the default configuration, regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.
1362 Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return -ENOMEM to
userspace, or silently ignore them in cases like disk readahead.
1366 If memory.max is opened with O_NONBLOCK, then the synchronous
1367 reclaim and oom-kill are bypassed. This is useful for admin
1368 processes that need to dynamically adjust the job's memory limits
1369 without expending their own CPU resources on memory reclamation.
The job will trigger the reclaim and/or oom-kill on its next
charge request.
1373 Please note that with O_NONBLOCK, there is a chance that the
target memory cgroup may take an indefinite amount of time to
1375 reduce usage below the limit due to delayed charge request or
1376 busy-hitting its memory to slow down reclaim.
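One possible pattern is to set "memory.high" somewhat below
"memory.max" so that reclaim pressure kicks in before the OOM killer;
a sketch with purely illustrative sizes::

  # echo 8G > memory.high
  # echo 10G > memory.max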
memory.reclaim
A write-only nested-keyed file which exists for all cgroups.
This is a simple interface to trigger memory reclaim in the
target cgroup.

Example::

  echo "1G" > memory.reclaim
1388 Please note that the kernel can over or under reclaim from
the target cgroup. If fewer bytes are reclaimed than the
1390 specified amount, -EAGAIN is returned.
1392 Please note that the proactive reclaim (triggered by this
1393 interface) is not meant to indicate memory pressure on the
1394 memory cgroup. Therefore socket memory balancing triggered by
1395 the memory reclaim normally is not exercised in this case.
1396 This means that the networking layer will not adapt based on
1397 reclaim induced by memory.reclaim.
1399 The following nested keys are defined.
1401 ========== ================================
1402 swappiness Swappiness value to reclaim with
1403 ========== ================================
1405 Specifying a swappiness value instructs the kernel to perform
1406 the reclaim with that swappiness value. Note that this has the
1407 same semantics as vm.swappiness applied to memcg reclaim with
1408 all the existing limitations and potential future extensions.
The valid range for swappiness is [0-200, max]; setting
swappiness=max exclusively reclaims anonymous memory.
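For example, a sketch that proactively reclaims 512M while avoiding
anonymous memory::

  # echo "512M swappiness=0" > memory.reclaim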
memory.peak
A read-write single value file which exists on non-root cgroups.
1416 The max memory usage recorded for the cgroup and its descendants since
1417 either the creation of the cgroup or the most recent reset for that FD.
1419 A write of any non-empty string to this file resets it to the
current memory usage for subsequent reads through the same
file descriptor.

memory.oom.group
A read-write single value file which exists on non-root
1425 cgroups. The default value is "0".
1427 Determines whether the cgroup should be treated as
1428 an indivisible workload by the OOM killer. If set,
1429 all tasks belonging to the cgroup or to its descendants
1430 (if the memory cgroup is not a leaf cgroup) are killed
1431 together or not at all. This can be used to avoid
1432 partial kills to guarantee workload integrity.
1434 Tasks with the OOM protection (oom_score_adj set to -1000)
1435 are treated as an exception and are never killed.
1437 If the OOM killer is invoked in a cgroup, it's not going
to kill any tasks outside of this cgroup, regardless of
memory.oom.group values of ancestor cgroups.

memory.events
1442 A read-only flat-keyed file which exists on non-root cgroups.
1443 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
1447 Note that all fields in this file are hierarchical and the
1448 file modified event can be generated due to an event down the
1449 hierarchy. For the local events at the cgroup level see
memory.events.local.

low
The number of times the cgroup is reclaimed due to
1454 high memory pressure even though its usage is under
1455 the low boundary. This usually indicates that the low
boundary is over-committed.

high
The number of times processes of the cgroup are
1460 throttled and routed to perform direct memory reclaim
1461 because the high memory boundary was exceeded. For a
1462 cgroup whose memory usage is capped by the high limit
1463 rather than global memory pressure, this event's
occurrences are expected.

max
The number of times the cgroup's memory usage was
1468 about to go over the max boundary. If direct reclaim
fails to bring it down, the cgroup goes to OOM state.

oom
The number of times the cgroup's memory usage
reached the limit and allocation was about to fail.
1475 This event is not raised if the OOM killer is not
1476 considered as an option, e.g. for failed high-order
allocations or if the caller asked not to retry attempts.

oom_kill
The number of processes belonging to this cgroup
1481 killed by any kind of OOM killer.
oom_group_kill
The number of times a group OOM has occurred.

memory.events.local
1487 Similar to memory.events but the fields in the file are local
1488 to the cgroup i.e. not hierarchical. The file modified event
generated on this file reflects only the local events.

memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
1494 This breaks down the cgroup's memory footprint into different
1495 types of memory, type-specific details, and other information
1496 on the state and past events of the memory management system.
1498 All memory amounts are in bytes.
1500 The entries are ordered to be human readable, and new entries
1501 can show up in the middle. Don't rely on items remaining in a
1502 fixed position; use the keys to look up specific values!
If an entry has no per-node counter (and thus does not show up
in memory.numa_stat), it is tagged with 'npn' (non-per-node) to
indicate that it will not show in memory.numa_stat.

anon
1509 Amount of memory used in anonymous mappings such as
1510 brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
1511 some kernel configurations might account complete larger
1512 allocations (e.g., THP) if only some, but not all the
memory of such an allocation is mapped anymore.

file
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.

kernel (npn)
Amount of total kernel memory, including
1521 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1522 addition to other kernel memory use cases.
kernel_stack
Amount of memory allocated to kernel stacks.

pagetables
Amount of memory allocated for page tables.

sec_pagetables
Amount of memory allocated for secondary page tables,
1532 this currently includes KVM mmu allocations on x86
1533 and arm64 and IOMMU page tables.
1536 Amount of memory used for storing per-cpu kernel
1540 Amount of memory used in network transmission buffers
1543 Amount of memory used for vmap backed memory.
1546 Amount of cached filesystem data that is swap-backed,
1547 such as tmpfs, shm segments, shared anonymous mmap()s
1550 Amount of memory consumed by the zswap compression backend.
1553 Amount of application memory swapped out to zswap.
1556 Amount of cached filesystem data mapped with mmap(). Note
1557 that some kernel configurations might account complete
1558 larger allocations (e.g., THP) if only some, but not
1559 not all the memory of such an allocation is mapped.
file_dirty
Amount of cached filesystem data that was modified but
not yet written back to disk

file_writeback
Amount of cached filesystem data that was modified and
is currently being written back to disk

swapcached
Amount of swap cached in memory. The swapcache is accounted
against both memory and swap usage.

anon_thp
Amount of memory used in anonymous mappings backed by
transparent hugepages

file_thp
Amount of cached filesystem data backed by transparent
hugepages

shmem_thp
Amount of shm, tmpfs, shared anonymous mmap()s backed by
transparent hugepages
1585 inactive_anon, active_anon, inactive_file, active_file, unevictable
1586 Amount of memory, swap-backed and filesystem-backed,
1587 on the internal memory management lists used by the
1588 page reclaim algorithm.
As these represent internal list state (e.g. shmem pages are on anon
memory management lists), inactive_foo + active_foo may not be equal to
the value for the foo counter, since the foo counter is type-based, not
list-based.

slab_reclaimable
Part of "slab" that might be reclaimed, such as
dentries and inodes.

slab_unreclaimable
Part of "slab" that cannot be reclaimed on memory
pressure.

slab (npn)
Amount of memory used for storing in-kernel data
structures.
1607 workingset_refault_anon
1608 Number of refaults of previously evicted anonymous pages.
1610 workingset_refault_file
1611 Number of refaults of previously evicted file pages.
1613 workingset_activate_anon
Number of refaulted anonymous pages that were immediately
activated.
1617 workingset_activate_file
1618 Number of refaulted file pages that were immediately activated.
1620 workingset_restore_anon
1621 Number of restored anonymous pages which have been detected as
1622 an active workingset before they got reclaimed.
1624 workingset_restore_file
1625 Number of restored file pages which have been detected as an
1626 active workingset before they got reclaimed.
1628 workingset_nodereclaim
1629 Number of times a shadow node has been reclaimed
pswpin (npn)
Number of pages swapped into memory

pswpout (npn)
Number of pages swapped out of memory

pgscan (npn)
Amount of scanned pages (in an inactive LRU list)

pgsteal (npn)
Amount of reclaimed pages

pgscan_kswapd (npn)
Amount of scanned pages by kswapd (in an inactive LRU list)

pgscan_direct (npn)
Amount of scanned pages directly (in an inactive LRU list)
1649 pgscan_khugepaged (npn)
1650 Amount of scanned pages by khugepaged (in an inactive LRU list)
1652 pgscan_proactive (npn)
1653 Amount of scanned pages proactively (in an inactive LRU list)
1655 pgsteal_kswapd (npn)
1656 Amount of reclaimed pages by kswapd
1658 pgsteal_direct (npn)
1659 Amount of reclaimed pages directly
1661 pgsteal_khugepaged (npn)
1662 Amount of reclaimed pages by khugepaged
1664 pgsteal_proactive (npn)
1665 Amount of reclaimed pages proactively
pgfault (npn)
Total number of page faults incurred

pgmajfault (npn)
Number of major page faults incurred

pgrefill (npn)
Amount of scanned pages (in an active LRU list)

pgactivate (npn)
Amount of pages moved to the active LRU list

pgdeactivate (npn)
Amount of pages moved to the inactive LRU list

pglazyfree (npn)
Amount of pages postponed to be freed under memory pressure

pglazyfreed (npn)
Amount of reclaimed lazyfree pages

swpin_zero
Number of pages swapped into memory and filled with zero, where I/O
was optimized out because the page content was detected to be zero
during swapout.

swpout_zero
Number of zero-filled pages swapped out with I/O skipped due to the
content being detected as zero.

zswpin
Number of pages moved in to memory from zswap.

zswpout
Number of pages moved out of memory to zswap.

zswpwb
Number of pages written from zswap to swap.
1706 thp_fault_alloc (npn)
1707 Number of transparent hugepages which were allocated to satisfy
a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
is not set.
1711 thp_collapse_alloc (npn)
1712 Number of transparent hugepages which were allocated to allow
1713 collapsing an existing range of pages. This counter is not
1714 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
thp_swpout (npn)
Number of transparent hugepages which were swapped out in one
piece without splitting.
1720 thp_swpout_fallback (npn)
Number of transparent hugepages which were split before swapout,
usually because contiguous swap space could not be allocated
for the huge page.
1725 numa_pages_migrated (npn)
1726 Number of pages migrated by NUMA balancing.
1728 numa_pte_updates (npn)
1729 Number of pages whose page table entries are modified by
1730 NUMA balancing to produce NUMA hinting faults on access.
1732 numa_hint_faults (npn)
1733 Number of NUMA hinting faults.
pgdemote_kswapd
Number of pages demoted by kswapd.

pgdemote_direct
Number of pages demoted directly.

pgdemote_khugepaged
Number of pages demoted by khugepaged.

pgdemote_proactive
Number of pages demoted proactively.

hugetlb
Amount of memory used by hugetlb pages. This metric only shows
up if hugetlb usage is accounted for in memory.current (i.e.
cgroup is mounted with the memory_hugetlb_accounting option).
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
1755 This breaks down the cgroup's memory footprint into different
1756 types of memory, type-specific details, and other information
1757 per node on the state of the memory management system.
This is useful for providing visibility into the NUMA locality
information within a memcg since the pages are allowed to be
allocated from any physical node. One of the use cases is evaluating
application performance by combining this information with the
application's CPU allocation.
1765 All memory amounts are in bytes.
1767 The output format of memory.numa_stat is::
1769 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1771 The entries are ordered to be human readable, and new entries
1772 can show up in the middle. Don't rely on items remaining in a
1773 fixed position; use the keys to look up specific values!
For the meaning of each entry, refer to memory.stat.
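As a sketch, the per-node breakdown for a single key can be pulled
out the same way as for memory.stat (path hypothetical, output
values made up)::

  # awk '$1 == "file" {print}' /sys/fs/cgroup/workload/memory.numa_stat
  file N0=1234912 N1=59283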
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
1781 The total amount of swap currently being used by the cgroup
1782 and its descendants.
memory.swap.high
A read-write single value file which exists on non-root
1786 cgroups. The default is "max".
1788 Swap usage throttle limit. If a cgroup's swap usage exceeds
1789 this limit, all its further allocations will be throttled to
1790 allow userspace to implement custom out-of-memory procedures.
1792 This limit marks a point of no return for the cgroup. It is NOT
1793 designed to manage the amount of swapping a workload does
1794 during regular operation. Compare to memory.swap.max, which
1795 prohibits swapping past a set amount, but lets the cgroup
1796 continue unimpeded as long as other memory can be reclaimed.
1798 Healthy workloads are not expected to reach this limit.
memory.swap.peak
A read-write single value file which exists on non-root cgroups.
1803 The max swap usage recorded for the cgroup and its descendants since
1804 the creation of the cgroup or the most recent reset for that FD.
A write of any non-empty string to this file resets it to the
current swap usage for subsequent reads through the same
file descriptor.
memory.swap.max
A read-write single value file which exists on non-root
1812 cgroups. The default is "max".
1814 Swap usage hard limit. If a cgroup's swap usage reaches this
1815 limit, anonymous memory of the cgroup will not be swapped out.
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

high
The number of times the cgroup's swap usage was over
the high threshold.

max
The number of times the cgroup's swap usage was about
to go over the max boundary and swap allocation
failed.

fail
The number of times swap allocation failed either
because of running out of swap system-wide or max
limit.
1837 When reduced under the current usage, the existing swap
1838 entries are reclaimed gradually and the swap usage may stay
1839 higher than the limit for an extended period of time. This
1840 reduces the impact on the workload and memory management.
1842 memory.zswap.current
A read-only single value file which exists on non-root
cgroups.

The total amount of memory consumed by the zswap compression
backend.
memory.zswap.max
A read-write single value file which exists on non-root
1851 cgroups. The default is "max".
1853 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1854 limit, it will refuse to take any more stores before existing
1855 entries fault back in or are written out to disk.
1857 memory.zswap.writeback
1858 A read-write single value file. The default value is "1".
1859 Note that this setting is hierarchical, i.e. the writeback would be
implicitly disabled for child cgroups if the upper hierarchy
does so.

When this is set to 0, all swapping attempts to swapping devices
are disabled. This includes both zswap writebacks, and swapping due
to zswap store failures. If the zswap store failures are recurring
(e.g. if the pages are incompressible), users can observe
1867 reclaim inefficiency after disabling writeback (because the same
1868 pages might be rejected again and again).
1870 Note that this is subtly different from setting memory.swap.max to
1871 0, as it still allows for pages to be written to the zswap pool.
1872 This setting has no effect if zswap is disabled, and swapping
1873 is allowed unless memory.swap.max is set to 0.
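As an illustrative sketch (path hypothetical), zswap writeback can
be disabled for a subtree with::

  # echo 0 > /sys/fs/cgroup/workload/memory.zswap.writeback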
memory.pressure
A read-only nested-keyed file.
1878 Shows pressure stall information for memory. See
1879 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1885 "memory.high" is the main mechanism to control memory usage.
1886 Over-committing on high limit (sum of high limits > available memory)
1887 and letting global memory pressure to distribute memory according to
1888 usage is a viable strategy.
1890 Because breach of the high limit doesn't trigger the OOM killer but
1891 throttles the offending cgroup, a management agent has ample
1892 opportunities to monitor and take appropriate actions such as granting
1893 more memory or terminating the workload.
1895 Determining whether a cgroup has enough memory is not trivial as
1896 memory usage doesn't indicate whether the workload can benefit from
1897 more memory. For example, a workload which writes data received from
1898 network to a file can use all available memory but can also operate as
1899 performant with a small amount of memory. A measure of memory
1900 pressure - how much the workload is being impacted due to lack of
1901 memory - is necessary to determine whether a workload needs more
memory; unfortunately, memory pressure monitoring mechanism isn't
implemented yet.

Memory Ownership
~~~~~~~~~~~~~~~~
1909 A memory area is charged to the cgroup which instantiated it and stays
1910 charged to the cgroup until the area is released. Migrating a process
1911 to a different cgroup doesn't move the memory usages that it
1912 instantiated while in the previous cgroup to the new cgroup.
1914 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
1916 over time, the memory area is likely to end up in a cgroup which has
1917 enough memory allowance to avoid high reclaim pressure.
1919 If a cgroup sweeps a considerable amount of memory which is expected
1920 to be accessed repeatedly by other cgroups, it may make sense to use
1921 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1922 belonging to the affected files to ensure correct memory ownership.
1928 The "io" controller regulates the distribution of IO resources. This
1929 controller implements both weight based and absolute bandwidth or IOPS
1930 limit distribution; however, weight based distribution is available
1931 only if cfq-iosched is in use and neither scheme is available for
1939 A read-only nested-keyed file.
1941 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1942 The following nested keys are defined.
====== =====================
rbytes Bytes read
1946 wbytes Bytes written
1947 rios Number of read IOs
1948 wios Number of write IOs
1949 dbytes Bytes discarded
1950 dios Number of discard IOs
1951 ====== =====================
1953 An example read output follows::
1955 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1956 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1962 This file configures the Quality of Service of the IO cost
1963 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1964 currently implements "io.weight" proportional control. Lines
1965 are keyed by $MAJ:$MIN device numbers and not ordered. The
1966 line for a given device is populated on the first write for
1967 the device on "io.cost.qos" or "io.cost.model". The following
1968 nested keys are defined.
1970 ====== =====================================
1971 enable Weight-based control enable
1972 ctrl "auto" or "user"
1973 rpct Read latency percentile [0, 100]
1974 rlat Read latency threshold
1975 wpct Write latency percentile [0, 100]
1976 wlat Write latency threshold
1977 min Minimum scaling percentage [1, 10000]
1978 max Maximum scaling percentage [1, 10000]
1979 ====== =====================================
1981 The controller is disabled by default and can be enabled by
1982 setting "enable" to 1. "rpct" and "wpct" parameters default
1983 to zero and the controller uses internal device saturation
1984 state to adjust the overall IO rate between "min" and "max".
1986 When a better control quality is needed, latency QoS
1987 parameters can be configured. For example::
1989 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1991 shows that on sdb, the controller is enabled, will consider
1992 the device saturated if the 95th percentile of read completion
1993 latencies is above 75ms or write 150ms, and adjust the overall
1994 IO issue rate between 50% and 150% accordingly.
1996 The lower the saturation point, the better the latency QoS at
1997 the cost of aggregate bandwidth. The narrower the allowed
1998 adjustment range between "min" and "max", the more conformant
1999 to the cost model the IO behavior. Note that the IO issue
2000 base rate may be far off from 100% and setting "min" and "max"
2001 blindly can lead to a significant loss of device capacity or
2002 control quality. "min" and "max" are useful for regulating
devices which show wide temporary behavior changes - e.g. an
SSD which accepts writes at the line speed for a while and
2005 then completely stalls for multiple seconds.
2007 When "ctrl" is "auto", the parameters are controlled by the
2008 kernel and may change automatically. Setting "ctrl" to "user"
2009 or setting any of the percentile and latency parameters puts
2010 it into "user" mode and disables the automatic changes. The
2011 automatic mode can be restored by setting "ctrl" to "auto".
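As a sketch, the QoS parameters shown above could be configured by
writing the same nested-key format (device number and values
illustrative)::

  # echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000" > io.cost.qos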
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
2017 This file configures the cost model of the IO cost model based
2018 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
2019 implements "io.weight" proportional control. Lines are keyed
2020 by $MAJ:$MIN device numbers and not ordered. The line for a
2021 given device is populated on the first write for the device on
2022 "io.cost.qos" or "io.cost.model". The following nested keys
2025 ===== ================================
2026 ctrl "auto" or "user"
2027 model The cost model in use - "linear"
2028 ===== ================================
2030 When "ctrl" is "auto", the kernel may change all parameters
2031 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
2033 automatic changes are disabled.
2035 When "model" is "linear", the following model parameters are
2038 ============= ========================================
2039 [r|w]bps The maximum sequential IO throughput
2040 [r|w]seqiops The maximum 4k sequential IOs per second
2041 [r|w]randiops The maximum 4k random IOs per second
2042 ============= ========================================
2044 From the above, the builtin linear model determines the base
2045 costs of a sequential and random IO and the cost coefficient
2046 for the IO size. While simple, this model can cover most
2047 common device classes acceptably.
2049 The IO cost model isn't expected to be accurate in absolute
2050 sense and is scaled to the device behavior dynamically.
2052 If needed, tools/cgroup/iocost_coef_gen.py can be used to
2053 generate device-specific coefficients.
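As an illustrative sketch (all numbers made up), linear model
parameters for a device could then be installed with::

  # echo "8:0 rbps=2000000000 rseqiops=200000 rrandiops=65000 wbps=1000000000 wseqiops=100000 wrandiops=50000" > io.cost.model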
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
2057 The default is "default 100".
2059 The first line is the default weight applied to devices
2060 without specific override. The rest are overrides keyed by
$MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
2065 The default weight can be updated by writing either "default
2066 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
2067 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::

  default 100
  8:16 200
  8:0 50
io.max
A read-write nested-keyed file which exists on non-root
cgroups.

BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.

===== ==================================
2084 rbps Max read bytes per second
2085 wbps Max write bytes per second
2086 riops Max read IO operations per second
2087 wiops Max write IO operations per second
2088 ===== ==================================
2090 When writing, any number of nested key-value pairs can be
2091 specified in any order. "max" can be specified as the value
2092 to remove a specific limit. If the same key is specified
2093 multiple times, the outcome is undefined.
2095 BPS and IOPS are measured in each IO direction and IOs are
delayed if the limit is reached. Temporary bursts are allowed.
2098 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2100 echo "8:16 rbps=2097152 wiops=120" > io.max
2102 Reading returns the following::
2104 8:16 rbps=2097152 wbps=max riops=max wiops=120
2106 Write IOPS limit can be removed by writing the following::
2108 echo "8:16 wiops=max" > io.max
2110 Reading now returns the following::
2112 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
2117 Shows pressure stall information for IO. See
2118 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
2125 written asynchronously to the backing filesystem by the writeback
2126 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
2130 The io controller, in conjunction with the memory controller,
2131 implements control of page cache writeback IOs. The memory controller
2132 defines the memory domain that dirty memory ratio is calculated and
2133 maintained for and the io controller defines the io domain which
2134 writes out dirty pages for the memory domain. Both system-wide and
2135 per-cgroup dirty memory states are examined and the more restrictive
2136 of the two is enforced.
2138 cgroup writeback requires explicit support from the underlying
2139 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
2140 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
2141 attributed to the root cgroup.
2143 There are inherent differences in memory and writeback management
2144 which affects how cgroup ownership is tracked. Memory is tracked per
2145 page while writeback per inode. For the purpose of writeback, an
2146 inode is assigned to a cgroup and all IO requests to write dirty pages
2147 from the inode are attributed to that cgroup.
2149 As cgroup ownership for memory is tracked per page, there can be pages
2150 which are associated with different cgroups than the one the inode is
2151 associated with. These are called foreign pages. The writeback
2152 constantly keeps track of foreign pages and, if a particular foreign
2153 cgroup becomes the majority over a certain period of time, switches
2154 the ownership of the inode to that cgroup.
2156 While this model is enough for most use cases where a given inode is
2157 mostly dirtied by a single cgroup even when the main writing cgroup
2158 changes over time, use cases where multiple cgroups write to a single
2159 inode simultaneously are not supported well. In such circumstances, a
2160 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
2167 The sysctl knobs which affect writeback behavior are applied to cgroup
2168 writeback as follows.
2170 vm.dirty_background_ratio, vm.dirty_ratio
2171 These ratios apply the same to cgroup writeback with the
2172 amount of available memory capped by limits imposed by the
2173 memory controller and system-wide clean memory.
2175 vm.dirty_background_bytes, vm.dirty_bytes
2176 For cgroup writeback, this is calculated into ratio against
2177 total available memory and applied the same way as
2178 vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected group.

The limits are only applied at the peer level in the hierarchy. This means that
in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G
2200 So the ideal way to configure this is to set io.latency in groups A, B, and C.
2201 Generally you do not want to set a value lower than the latency your device
2202 supports. Experiment to find the value that works best for your workload.
2203 Start at higher than the expected latency for your device and watch the
2204 avg_lat value in io.stat for your workload group to get an idea of the
2205 latency you see during normal operation. Use the avg_lat value as a basis for
2206 your real setting, setting at 10-15% higher than the value in io.stat.
2208 How IO Latency Throttling Works
2209 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2211 io.latency is work conserving; so as long as everybody is meeting their latency
2212 target the controller doesn't do anything. Once a group starts missing its
2213 target it begins throttling any peer group that has a higher target than itself.
2214 This throttling takes 2 forms:
- Queue depth throttling. This is the number of outstanding IOs a group is
2217 allowed to have. We will clamp down relatively quickly, starting at no limit
2218 and going all the way down to 1 IO at a time.
2220 - Artificial delay induction. There are certain types of IO that cannot be
2221 throttled without possibly adversely affecting higher priority groups. This
2222 includes swapping and metadata IO. These types of IO are allowed to occur
2223 normally, however they are "charged" to the originating group. If the
2224 originating group is being throttled you will see the use_delay and delay
2225 fields in io.stat increase. The delay value is how many microseconds that are
2226 being added to any process that runs in this group. Because this number can
2227 grow quite large if there is a lot of swapping or metadata IO occurring we
2228 limit the individual delay events to 1 second at a time.
2230 Once the victimized group starts meeting its latency target again it will start
2231 unthrottling any peer groups that were throttled previously. If the victimized
2232 group simply stops doing IO the global counter will unthrottle appropriately.
2234 IO Latency Interface Files
2235 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
2240 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
2244 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2251 bound by the sampling interval. The decay rate interval can be
2252 calculated by multiplying the win value in io.stat by the
2253 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2257 duration of time between evaluation events. Windows only elapse
2258 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the io.prio.class attribute. The following values are accepted for
that attribute:

no-change
Do not modify the I/O priority class.

promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
Also change the priority level of these requests to 4. Do not modify
the I/O priority of requests that have priority class RT.

restrict-to-be
For requests that do not have an I/O priority class or that have I/O
priority class RT, change it into BE. Also change the priority level
of these requests to 0. Do not modify the I/O priority class of
requests that have priority class IDLE.

idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.

none-to-rt
Deprecated. Just an alias for promote-to-rt.
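As a sketch, a policy can be selected by writing one of the above
values::

  # echo restrict-to-be > io.prio.class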
2288 The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
2300 The numerical value that corresponds to each I/O priority class is as follows:
2302 +-------------------------------+---+
2303 | IOPRIO_CLASS_NONE | 0 |
2304 +-------------------------------+---+
2305 | IOPRIO_CLASS_RT (real-time) | 1 |
2306 +-------------------------------+---+
2307 | IOPRIO_CLASS_BE (best effort) | 2 |
2308 +-------------------------------+---+
2309 | IOPRIO_CLASS_IDLE | 3 |
2310 +-------------------------------+---+
2312 The algorithm to set the I/O priority class for a request is as follows:
- If I/O priority class policy is promote-to-rt, change the request I/O
  priority class to IOPRIO_CLASS_RT and change the request I/O priority
  level to 4.
- If I/O priority class policy is not promote-to-rt, translate the I/O priority
  class policy into a number, then change the request I/O priority class
  into the maximum of the I/O priority class policy number and the numerical
  value of the I/O priority class.

PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
2329 The number of tasks in a cgroup can be exhausted in ways which other
2330 controllers cannot prevent, thus warranting its own controller. For
2331 example, a fork bomb is likely to exhaust the number of tasks before
2332 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.

PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
A read-write single value file which exists on non-root
2343 cgroups. The default is "max".
2345 Hard limit of number of processes.
pids.current
A read-only single value file which exists on non-root cgroups.

The number of processes currently in the cgroup and its
descendants.
pids.peak
A read-only single value file which exists on non-root cgroups.
2356 The maximum value that the number of processes in the cgroup and its
2357 descendants has ever reached.
pids.events
A read-only flat-keyed file which exists on non-root cgroups. Unless
2361 specified otherwise, a value change in this file generates a file
2362 modified event. The following entries are defined.
max
The number of times the cgroup's total number of processes hit the pids.max
2366 limit (see also pids_localevents).
pids.events.local
Similar to pids.events but the fields in the file are local
2370 to the cgroup i.e. not hierarchical. The file modified event
2371 generated on this file reflects only the local events.
2373 Organisational operations are not blocked by cgroup policies, so it is
2374 possible to have pids.current > pids.max. This can be done by either
2375 setting the limit to be smaller than pids.current, or attaching enough
2376 processes to the cgroup such that pids.current is larger than
2377 pids.max. However, it is not possible to violate a cgroup PID policy
2378 through fork() or clone(). These will return -EAGAIN if the creation
2379 of a new process would cause a cgroup policy to be violated.
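As an illustrative sketch (path hypothetical), a PID limit could be
set and checked with::

  # echo 8 > /sys/fs/cgroup/workload/pids.max
  # cat /sys/fs/cgroup/workload/pids.current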
2385 The "cpuset" controller provides a mechanism for constraining
2386 the CPU and memory node placement of tasks to only the resources
2387 specified in the cpuset interface files in a task's current cgroup.
2388 This is especially valuable on large NUMA systems where placing jobs
2389 on properly sized subsets of the systems with careful processor and
2390 memory placement to reduce cross-node memory access and contention
2391 can improve overall system performance.
2393 The "cpuset" controller is hierarchical. That means the controller
2394 cannot use CPUs or memory nodes not allowed in its parent.
2397 Cpuset Interface Files
2398 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2402 cpuset-enabled cgroups.
2404 It lists the requested CPUs to be used by tasks within this
2405 cgroup. The actual list of CPUs to be granted, however, is
2406 subjected to constraints imposed by its parent and can differ
2407 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.cpus
  0-4,6,8-10
2415 An empty value indicates that the cgroup is using the same
2416 setting as the nearest cgroup ancestor with a non-empty
2417 "cpuset.cpus" or all the available CPUs if none is found.
2419 The value of "cpuset.cpus" stays constant until the next update
2420 and won't be affected by any CPU hotplug events.
2422 cpuset.cpus.effective
2423 A read-only multiple values file which exists on all
2424 cpuset-enabled cgroups.
2426 It lists the onlined CPUs that are actually granted to this
2427 cgroup by its parent. These CPUs are allowed to be used by
2428 tasks within the current cgroup.
2430 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2431 all the CPUs from the parent cgroup that can be available to
2432 be used by this cgroup. Otherwise, it should be a subset of
2433 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2434 can be granted. In this case, it will be treated just like an
2435 empty "cpuset.cpus".
2437 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2441 cpuset-enabled cgroups.
2443 It lists the requested memory nodes to be used by tasks within
2444 this cgroup. The actual list of memory nodes granted, however,
2445 is subjected to constraints imposed by its parent and can differ
2446 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.mems
  0-1,3
2454 An empty value indicates that the cgroup is using the same
2455 setting as the nearest cgroup ancestor with a non-empty
2456 "cpuset.mems" or all the available memory nodes if none
2459 The value of "cpuset.mems" stays constant until the next update
2460 and won't be affected by any memory nodes hotplug events.
2462 Setting a non-empty value to "cpuset.mems" causes memory of
2463 tasks within the cgroup to be migrated to the designated nodes if
2464 they are currently using memory outside of the designated nodes.
2466 There is a cost for this memory migration. The migration
2467 may not be complete and some memory pages may be left behind.
2468 So it is recommended that "cpuset.mems" should be set properly
2469 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
2473 cpuset.mems.effective
2474 A read-only multiple values file which exists on all
2475 cpuset-enabled cgroups.
2477 It lists the onlined memory nodes that are actually granted to
2478 this cgroup by its parent. These memory nodes are allowed to
2479 be used by tasks within the current cgroup.
2481 If "cpuset.mems" is empty, it shows all the memory nodes from the
2482 parent cgroup that will be available to be used by this cgroup.
2483 Otherwise, it should be a subset of "cpuset.mems" unless none of
2484 the memory nodes listed in "cpuset.mems" can be granted. In this
2485 case, it will be treated just like an empty "cpuset.mems".
2487 Its value will be affected by memory nodes hotplug events.
2489 cpuset.cpus.exclusive
2490 A read-write multiple values file which exists on non-root
2491 cpuset-enabled cgroups.
2493 It lists all the exclusive CPUs that are allowed to be used
2494 to create a new cpuset partition. Its value is not used
2495 unless the cgroup becomes a valid partition root. See the
2496 "cpuset.cpus.partition" section below for a description of what
2497 a cpuset partition is.
2499 When the cgroup becomes a partition root, the actual exclusive
2500 CPUs that are allocated to that partition are listed in
2501 "cpuset.cpus.exclusive.effective" which may be different
2502 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2503 has previously been set, "cpuset.cpus.exclusive.effective"
2504 is always a subset of it.
2506 Users can manually set it to a value that is different from
2507 "cpuset.cpus". One constraint in setting it is that the list of
2508 CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
2509 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
2510 isn't set, its "cpuset.cpus" value, if set, cannot be a subset
2511 of it to leave at least one CPU available when the exclusive
2512 CPUs are taken away.
2514 For a parent cgroup, any one of its exclusive CPUs can only
2515 be distributed to at most one of its child cgroups. Having an
2516 exclusive CPU appearing in two or more of its child cgroups is
2517 not allowed (the exclusivity rule). A value that violates the
2518 exclusivity rule will be rejected with a write error.
2520 The root cgroup is a partition root and all its available CPUs
2521 are in its exclusive CPU set.
2523 cpuset.cpus.exclusive.effective
2524 A read-only multiple values file which exists on all non-root
2525 cpuset-enabled cgroups.
2527 This file shows the effective set of exclusive CPUs that
2528 can be used to create a partition root. The content
2529 of this file will always be a subset of its parent's
2530 "cpuset.cpus.exclusive.effective" if its parent is not the root
2531 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2532 if it is set. If "cpuset.cpus.exclusive" is not set, it is
2533 treated to have an implicit value of "cpuset.cpus" in the
2534 formation of local partition.
2536 cpuset.cpus.isolated
2537 A read-only and root cgroup only multiple values file.
2539 This file shows the set of all isolated CPUs used in existing
isolated partitions. It will be empty if no isolated partition
is created.
2543 cpuset.cpus.partition
2544 A read-write single value file which exists on non-root
2545 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2546 and is not delegatable.
2548 It accepts only the following input values when written to.
2550 ========== =====================================
2551 "member" Non-root member of a partition
2552 "root" Partition root
2553 "isolated" Partition root without load balancing
2554 ========== =====================================
2556 A cpuset partition is a collection of cpuset-enabled cgroups with
2557 a partition root at the top of the hierarchy and its descendants
2558 except those that are separate partition roots themselves and
2559 their descendants. A partition has exclusive access to the
2560 set of exclusive CPUs allocated to it. Other cgroups outside
2561 of that partition cannot use any CPUs in that set.
2563 There are two types of partitions - local and remote. A local
2564 partition is one whose parent cgroup is also a valid partition
2565 root. A remote partition is one whose parent cgroup is not a
2566 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2567 is optional for the creation of a local partition as its
2568 "cpuset.cpus.exclusive" file will assume an implicit value that
2569 is the same as "cpuset.cpus" if it is not set. Writing the
2570 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2571 before the target partition root is mandatory for the creation
2572 of a remote partition.
2574 Currently, a remote partition cannot be created under a local
2575 partition. All the ancestors of a remote partition root except
2576 the root cgroup cannot be a partition root.
2578 The root cgroup is always a partition root and its state cannot
2579 be changed. All other non-root cgroups start out as "member".
2581 When set to "root", the current cgroup is the root of a new
2582 partition or scheduling domain. The set of exclusive CPUs is
2583 determined by the value of its "cpuset.cpus.exclusive.effective".
2585 When set to "isolated", the CPUs in that partition will be in
2586 an isolated state without any load balancing from the scheduler
2587 and excluded from the unbound workqueues. Tasks placed in such
2588 a partition with multiple CPUs should be carefully distributed
2589 and bound to each of the individual CPUs for optimal performance.
2591 A partition root ("root" or "isolated") can be in one of the
2592 two possible states - valid or invalid. An invalid partition
2593 root is in a degraded state where some state information may
2594 be retained, but behaves more like a "member".
2596 All possible state transitions among "member", "root" and
2597 "isolated" are allowed.
On read, the "cpuset.cpus.partition" file can show the following
values.
2602 ============================= =====================================
2603 "member" Non-root member of a partition
2604 "root" Partition root
2605 "isolated" Partition root without load balancing
2606 "root invalid (<reason>)" Invalid partition root
2607 "isolated invalid (<reason>)" Invalid isolated partition root
2608 ============================= =====================================
2610 In the case of an invalid partition root, a descriptive string on
2611 why the partition is invalid is included within parentheses.
For a local partition root to be valid, the following conditions
must be met.
2616 1) The parent cgroup is a valid partition root.
2617 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2618 though it may contain offline CPUs.
2619 3) The "cpuset.cpus.effective" cannot be empty unless there is
2620 no task associated with this partition.
2622 For a remote partition root to be valid, all the above conditions
2623 except the first one must be met.
2625 External events like hotplug or changes to "cpuset.cpus" or
2626 "cpuset.cpus.exclusive" can cause a valid partition root to
2627 become invalid and vice versa. Note that a task cannot be
2628 moved to a cgroup with empty "cpuset.cpus.effective".
2630 A valid non-root parent partition may distribute out all its CPUs
to its child local partitions when there is no task associated
with it.
2634 Care must be taken to change a valid partition root to "member"
2635 as all its child local partitions, if present, will become
2636 invalid causing disruption to tasks running in those child
2637 partitions. These inactivated partitions could be recovered if
2638 their parent is switched back to a partition root with a proper
2639 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2641 Poll and inotify events are triggered whenever the state of
2642 "cpuset.cpus.partition" changes. That includes changes caused
2643 by write to "cpuset.cpus.partition", cpu hotplug or other
2644 changes that modify the validity status of the partition.
2645 This will allow user space agents to monitor unexpected changes
2646 to "cpuset.cpus.partition" without the need to do continuous
2649 A user can pre-configure certain CPUs to an isolated state
2650 with load balancing disabled at boot time with the "isolcpus"
2651 kernel boot command line option. If those CPUs are to be put
2652 into a partition, they have to be used in an isolated partition.
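As an illustrative sketch (CPU numbers made up), a delegated child
cgroup could be turned into an isolated partition with::

  # echo 2-3 > cpuset.cpus
  # echo isolated > cpuset.cpus.partition
  # cat cpuset.cpus.partition
  isolated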
Device controller
-----------------

Device controller manages access to device files. It includes both
2659 creation of new device files (using mknod), and access to the
2660 existing device files.
2662 Cgroup v2 device controller has no interface files and is implemented
2663 on top of cgroup BPF. To control access to device files, a user may
2664 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2665 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2666 device file, corresponding BPF programs will be executed, and depending
2667 on the return value the attempt will succeed or fail with -EPERM.
2669 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2670 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2671 access type (mknod/read/write) and device (type, major and minor numbers).
2672 If the program returns 0, the attempt fails with -EPERM, otherwise it
2675 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2676 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
2682 The "rdma" controller regulates the distribution and accounting of
2685 RDMA Interface Files
2686 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A readwrite nested-keyed file that exists for all the cgroups
2690 except root that describes current configured resource limit
2691 for a RDMA/IB device.
2693 Lines are keyed by device name and are not ordered.
2694 Each line contains space separated resource name and its configured
2695 limit that can be distributed.
2697 The following nested keys are defined.
2699 ========== =============================
2700 hca_handle Maximum number of HCA Handles
2701 hca_object Maximum number of HCA Objects
2702 ========== =============================
2704 An example for mlx4 and ocrdma device follows::
2706 mlx4_0 hca_handle=2 hca_object=2000
2707 ocrdma1 hca_handle=3 hca_object=max
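As a sketch, limits could be configured by writing the same
nested-key format (device name and values illustrative)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max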
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2713 An example for mlx4 and ocrdma device follows::
2715 mlx4_0 hca_handle=1 hca_object=20
2716 ocrdma1 hca_handle=1 hca_object=23
2721 The "dmem" controller regulates the distribution and accounting of
2722 device memory regions. Because each memory region may have its own page size,
2723 which does not have to be equal to the system page size, the units are always bytes.
2725 DMEM Interface Files
2726 ~~~~~~~~~~~~~~~~~~~~
2728 dmem.max, dmem.min, dmem.low
2729 A readwrite nested-keyed file that exists for all the cgroups
except root that describes current configured resource limit
for a region.
2733 An example for xe follows::
2735 drm/0000:03:00.0/vram0 1073741824
2736 drm/0000:03:00.0/stolen max
2738 The semantics are the same as for the memory cgroup controller, and are
2739 calculated in the same way.
dmem.capacity
A read-only file that describes maximum region capacity.
It only exists on the root cgroup. Not all memory can be
allocated by cgroups, as the kernel reserves some for
internal usage.
2747 An example for xe follows::
2749 drm/0000:03:00.0/vram0 8514437120
2750 drm/0000:03:00.0/stolen 67108864
dmem.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2756 An example for xe follows::
2758 drm/0000:03:00.0/vram0 12550144
2759 drm/0000:03:00.0/stolen 8650752
HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group
and enforces the controller limit during page fault.
2767 HugeTLB Interface Files
2768 ~~~~~~~~~~~~~~~~~~~~~~~
2770 hugetlb.<hugepagesize>.current
Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
2774 hugetlb.<hugepagesize>.max
2775 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
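As an illustrative sketch (hugepage size and limit made up), a hard
limit could be set with::

  # echo 1G > hugetlb.2MB.max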
2778 hugetlb.<hugepagesize>.events
2779 A read-only flat-keyed file which exists on non-root cgroups.
max
The number of allocation failures due to the HugeTLB limit
2784 hugetlb.<hugepagesize>.events.local
2785 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2786 are local to the cgroup i.e. not hierarchical. The file modified event
2787 generated on this file reflects only the local events.
2789 hugetlb.<hugepagesize>.numa_stat
2790 Similar to memory.numa_stat, it shows the numa information of the
2791 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2792 use hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. Controller is enabled by the CONFIG_CGROUP_MISC config
option.
2802 A resource can be added to the controller via enum misc_res_type{} in the
2803 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2804 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2805 capacity prior to using the resource by calling misc_cg_set_capacity().
2807 Once a capacity is set then the resource usage can be updated using charge and
2808 uncharge APIs. All of the APIs to interact with misc controller are in
2809 include/linux/misc_cgroup.h.
2811 Misc Interface Files
2812 ~~~~~~~~~~~~~~~~~~~~
The miscellaneous controller provides the following interface files. If two
misc resources (res_a and res_b) are registered then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10

misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0

misc.peak
A read-only flat-keyed file shown in all cgroups. It shows the
historical maximum usage of the resources in the cgroup and its
children::

  $ cat misc.peak
  res_a 10
  res_b 8

misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  # cat misc.max
  res_a max
  res_b 4
2850 Limit can be set by::
2852 # echo res_a 1 > misc.max
2854 Limit can be set to max by::
2856 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.

misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
2863 following entries are defined. Unless specified otherwise, a value
2864 change in this file generates a file modified event. All fields in
2865 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2869 about to go over the max boundary.
misc.events.local
Similar to misc.events but the fields in the file are local to the
2873 cgroup i.e. not hierarchical. The file modified event generated on
2874 this file reflects only the local events.
2876 Migration and Ownership
2877 ~~~~~~~~~~~~~~~~~~~~~~~
2879 A miscellaneous scalar resource is charged to the cgroup in which it is used
2880 first, and stays charged to that cgroup until that resource is freed. Migrating
2881 a process to a different cgroup does not move the charge to the destination
2882 cgroup where the process has moved.
Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
2891 automatically enabled on the v2 hierarchy so that perf events can
2892 always be filtered by cgroup v2 path. The controller can still be
2893 moved to a legacy hierarchy after v2 hierarchy is populated.
2896 Non-normative information
2897 -------------------------
2899 This section contains information that isn't considered to be a part of
2900 the stable kernel API and so is subject to change.
2903 CPU controller root cgroup process behaviour
2904 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2906 When distributing CPU cycles in the root cgroup each thread in this
2907 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup weight is dependent on its thread nice
level.

For details of this mapping see sched_prio_to_weight array in
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024;
for example, nice -20 maps to 88761 there, i.e. roughly 8668 after
scaling).
2916 IO controller root cgroup process behaviour
2917 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2919 Root cgroup processes are hosted in an implicit leaf child node.
2920 When distributing IO resources this implicit child node is taken into
2921 account as if it was a normal child cgroup of the root cgroup with a
2922 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2932 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2933 flag can be used with clone(2) and unshare(2) to create a new cgroup
2934 namespace. The process running inside the cgroup namespace will have
2935 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2936 cgroupns root is the cgroup of the process at the time of creation of
2937 the cgroup namespace.
2939 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2940 complete path of the cgroup of a process. In a container setup where
2941 a set of cgroups and namespaces are intended to isolate processes the
2942 "/proc/$PID/cgroup" file may leak potential system level information
2943 to the isolated processes. For example::
2945 # cat /proc/self/cgroup
2946 0::/batchjobs/container_id1
2948 The path '/batchjobs/container_id1' can be considered as system-data
2949 and undesirable to expose to the isolated processes. cgroup namespace
2950 can be used to restrict visibility of this path. For example, before
2951 creating a cgroup namespace, one would see::
2953 # ls -l /proc/self/ns/cgroup
2954 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2955 # cat /proc/self/cgroup
2956 0::/batchjobs/container_id1
2958 After unsharing a new namespace, the view changes::
2960 # ls -l /proc/self/ns/cgroup
2961 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
2965 When some thread from a multi-threaded process unshares its cgroup
2966 namespace, the new cgroupns gets applied to the entire process (all
2967 the threads). This is natural for the v2 hierarchy; however, for the
2968 legacy hierarchies, this may be unexpected.
2970 A cgroup namespace is alive as long as there are processes inside or
2971 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain, though.

The Root and Views
------------------
2979 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2980 process calling unshare(2) is running. For example, if a process in
2981 /batchjobs/container_id1 cgroup calls unshare, cgroup
2982 /batchjobs/container_id1 becomes the cgroupns root. For the
2983 init_cgroup_ns, this is the real root ('/') cgroup.
2985 The cgroupns root cgroup does not change even if the namespace creator
2986 process later moves to a different cgroup::
2988 # ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
# echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/sub_cgrp_1
2996 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2998 Processes running inside the cgroup namespace will be able to see
2999 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

# sleep 100000 &
[1] 7353
# echo 7353 > sub_cgrp_1/cgroup.procs
# cat /proc/7353/cgroup
0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
3011 $ cat /proc/7353/cgroup
3012 0::/batchjobs/container_id1/sub_cgrp_1
3014 From a sibling cgroup namespace (that is, a namespace rooted at a
3015 different cgroup), the cgroup path relative to its own cgroup
3016 namespace root will be shown. For instance, if PID 7353's cgroup
3017 namespace root is at '/batchjobs/container_id2', then it will see::
3019 # cat /proc/7353/cgroup
3020 0::/../container_id2/sub_cgrp_1
Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.
3026 Migration and setns(2)
3027 ----------------------
3029 Processes inside a cgroup namespace can move into and out of the
3030 namespace root if they have proper access to external cgroups. For
3031 example, from inside a namespace with cgroupns root at
3032 /batchjobs/container_id1, and assuming that the global hierarchy is
3033 still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
3037 # echo 7353 > batchjobs/container_id2/cgroup.procs
3038 # cat /proc/7353/cgroup
3039 0::/../container_id2
3041 Note that this kind of setup is not encouraged. A task inside cgroup
3042 namespace should only be exposed to its own cgroupns hierarchy.
3044 setns(2) to another cgroup namespace is allowed when:
3046 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
namespace's userns
3050 No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
3052 process under the target cgroup namespace root.
3055 Interaction with Other Namespaces
3056 ---------------------------------
3058 Namespace specific cgroup hierarchy can be mounted by a process
3059 running inside a non-init cgroup namespace::
3061 # mount -t cgroup2 none $MOUNT_POINT
3063 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
3067 The virtualization of /proc/self/cgroup file combined with restricting
3068 the view of cgroup hierarchy by namespace-private cgroupfs mount
3069 provides a properly isolated cgroup view inside the container.
3072 Information on Kernel Programming
3073 =================================
3075 This section contains kernel programming information in the areas
3076 where interacting with cgroup is necessary. cgroup core and
3077 controllers are not covered.
3080 Filesystem Support for Writeback
3081 --------------------------------
3083 A filesystem can support cgroup writeback by updating
3084 address_space_operations->writepages() to annotate bio's using the
3085 following two functions.
3087 wbc_init_bio(@wbc, @bio)
3088 Should be called for each bio carrying writeback data and
3089 associates the bio with the inode's owner cgroup and the
3090 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
3094 wbc_account_cgroup_owner(@wbc, @folio, @bytes)
3095 Should be called for each data segment being written out.
3096 While this function doesn't care exactly when it's called
3097 during the writeback session, it's the easiest and most
3098 natural to call it as data segments are added to a bio.
3100 With writeback bio's annotated, cgroup support can be enabled per
3101 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
3102 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
3106 wbc_init_bio() binds the specified bio to its cgroup. Depending on
3107 the configuration, the bio may be executed at a lower priority and if
3108 the writeback session is holding shared resources, e.g. a journal
3109 entry, may lead to priority inversion. There is no one easy solution
3110 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
3115 Deprecated v1 Core Features
3116 ===========================
3118 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options are supported.
3122 - The "tasks" file is removed and "cgroup.procs" is not sorted.
3124 - "cgroup.clone_children" is removed.
3126 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
3127 "cgroup.stat" files at the root instead.

Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated the cgroup core implementation but, more
importantly, it restricted how cgroup could be used in general and
what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, the hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations, but,
much more importantly, it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
that the process would actually be operating on its own sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposed and got locked into constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further complicating
the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is per default unset. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that can not budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.
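
As an illustration of this difference, a sketch of lowering the limit
from a program follows; the path /sys/fs/cgroup/job/ is an assumed
example cgroup, not one defined by this document::

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char buf[] = "104857600\n";   /* 100M */
          int fd = open("/sys/fs/cgroup/job/memory.max", O_WRONLY);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          /* unlike v1's memory.limit_in_bytes, this write does not
             fail just because current usage exceeds the new value:
             the kernel installs the limit first, then reclaims and,
             if need be, OOM kills until usage fits - or this writer
             itself is killed */
          if (write(fd, buf, strlen(buf)) < 0)
                  perror("write");

          close(fd);
          return 0;
  }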

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin can not assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it separately.