4 October, 2015 Tejun Heo <tj@kernel.org>
6 This is the authoritative documentation on the design, interface and
7 conventions of cgroup v2. It describes all userland-visible aspects
8 of cgroup including core and specific controller behaviors. All
9 future changes must be reflected in this document. Documentation for
10 v1 is available under Documentation/cgroup-v1/.
19 2-2. Organizing Processes
20 2-3. [Un]populated Notification
21 2-4. Controlling Controllers
22 2-4-1. Enabling and Disabling
23 2-4-2. Top-down Constraint
24 2-4-3. No Internal Process Constraint
26 2-5-1. Model of Delegation
27 2-5-2. Delegation Containment
29 2-6-1. Organize Once and Control
30 2-6-2. Avoid Name Collisions
31 3. Resource Distribution Models
39 4-3. Core Interface Files
42 5-1-1. CPU Interface Files
44 5-2-1. Memory Interface Files
45 5-2-2. Usage Guidelines
46 5-2-3. Memory Ownership
48 5-3-1. IO Interface Files
51 5-4-1. PID Interface Files
53 5-5-1. RDMA Interface Files
58 6-2. The Root and Views
59 6-3. Migration and setns(2)
60 6-4. Interaction with Other Namespaces
61 P. Information on Kernel Programming
62 P-1. Filesystem Support for Writeback
63 D. Deprecated v1 Core Features
64 R. Issues with v1 and Rationales for v2
65 R-1. Multiple Hierarchies
66 R-2. Thread Granularity
67 R-3. Competition Between Inner Nodes and Threads
68 R-4. Other Interface Issues
69 R-5. Controller Issues and Remedies
77 "cgroup" stands for "control group" and is never capitalized. The
78 singular form is used to designate the whole feature and also as a
79 qualifier as in "cgroup controllers". When explicitly referring to
80 multiple individual control groups, the plural form "cgroups" is used.
85 cgroup is a mechanism to organize processes hierarchically and
86 distribute system resources along the hierarchy in a controlled and
89 cgroup is largely composed of two parts - the core and controllers.
90 cgroup core is primarily responsible for hierarchically organizing
91 processes. A cgroup controller is usually responsible for
92 distributing a specific type of system resource along the hierarchy
93 although there are utility controllers which serve purposes other than
94 resource distribution.
96 cgroups form a tree structure and every process in the system belongs
97 to one and only one cgroup. All threads of a process belong to the
98 same cgroup. On creation, all processes are put in the cgroup that
99 the parent process belongs to at the time. A process can be migrated
100 to another cgroup. Migration of a process doesn't affect already
101 existing descendant processes.
103 Following certain structural constraints, controllers may be enabled or
104 disabled selectively on a cgroup. All controller behaviors are
105 hierarchical - if a controller is enabled on a cgroup, it affects all
106 processes which belong to the cgroups comprising the inclusive
107 sub-hierarchy of the cgroup. When a controller is enabled on a nested
108 cgroup, it always restricts the resource distribution further. The
109 restrictions set closer to the root in the hierarchy can not be
110 overridden from further away.
117 Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
118 hierarchy can be mounted with the following mount command.
120 # mount -t cgroup2 none $MOUNT_POINT
122 cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
123 controllers which support v2 and are not bound to a v1 hierarchy are
124 automatically bound to the v2 hierarchy and show up at the root.
125 Controllers which are not in active use in the v2 hierarchy can be
126 bound to other hierarchies. This allows mixing v2 hierarchy with the
127 legacy v1 multiple hierarchies in a fully backward compatible way.
129 A controller can be moved across hierarchies only after the controller
130 is no longer referenced in its current hierarchy. Because per-cgroup
131 controller states are destroyed asynchronously and controllers may
132 have lingering references, a controller may not show up immediately on
133 the v2 hierarchy after the final umount of the previous hierarchy.
134 Similarly, a controller should be fully disabled to be moved out of
135 the unified hierarchy and it may take some time for the disabled
136 controller to become available for other hierarchies; furthermore, due
137 to inter-controller dependencies, other controllers may need to be
140 While useful for development and manual configurations, moving
141 controllers dynamically between the v2 and other hierarchies is
142 strongly discouraged for production use. It is recommended to decide
143 the hierarchies and controller associations before starting to use the
144 controllers after system boot.
146 During transition to v2, system management software might still
147 automount the v1 cgroup filesystem and so hijack all controllers
148 during boot, before manual intervention is possible. To make testing
149 and experimenting easier, the kernel parameter cgroup_no_v1= allows
150 disabling controllers in v1 and making them always available in v2.
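
For example, to keep the "memory" and "io" controllers out of any v1
hierarchy (a sketch; the controller names are only an illustration, and
"all" disables every v1 controller), the following can be added to the
kernel command line.

  cgroup_no_v1=memory,io

or

  cgroup_no_v1=all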
152 cgroup v2 currently supports the following mount options.
156 Consider cgroup namespaces as delegation boundaries. This
157 option is system wide and can only be set on mount or modified
158 through remount from the init namespace. The mount option is
159 ignored on non-init namespace mounts. Please refer to the
160 Delegation section for details.
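
For example, the hierarchy can be mounted with the "nsdelegate" option
as follows (the mount point is only an illustration).

  # mount -t cgroup2 -o nsdelegate none $MOUNT_POINT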
163 2-2. Organizing Processes
165 Initially, only the root cgroup exists to which all processes belong.
166 A child cgroup can be created by creating a sub-directory.
170 A given cgroup may have multiple child cgroups forming a tree
171 structure. Each cgroup has a read-writable interface file
172 "cgroup.procs". When read, it lists the PIDs of all processes which
173 belong to the cgroup one-per-line. The PIDs are not ordered and the
174 same PID may show up more than once if the process got moved to
175 another cgroup and then back or the PID got recycled while reading.
177 A process can be migrated into a cgroup by writing its PID to the
178 target cgroup's "cgroup.procs" file. Only one process can be migrated
179 on a single write(2) call. If a process is composed of multiple
180 threads, writing the PID of any thread migrates all threads of the
183 When a process forks a child process, the new process is born into the
184 cgroup that the forking process belongs to at the time of the
185 operation. After exit, a process stays associated with the cgroup
186 that it belonged to at the time of exit until it's reaped; however, a
187 zombie process does not appear in "cgroup.procs" and thus can't be
188 moved to another cgroup.
190 A cgroup which doesn't have any children or live processes can be
191 destroyed by removing the directory. Note that a cgroup which doesn't
192 have any children and is associated only with zombie processes is
193 considered empty and can be removed.
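
Putting the above together, a minimal walkthrough might look like the
following sketch, assuming the v2 hierarchy is mounted at $MOUNT_POINT
and $PID names a process the operator is allowed to move; the cgroup
can be removed only once it has no live processes or children left.

  # mkdir $MOUNT_POINT/test-cgroup
  # echo $PID > $MOUNT_POINT/test-cgroup/cgroup.procs
  # cat $MOUNT_POINT/test-cgroup/cgroup.procs
  $PID
  # echo $PID > $MOUNT_POINT/cgroup.procs
  # rmdir $MOUNT_POINT/test-cgroup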
197 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
198 cgroup is in use in the system, this file may contain multiple lines,
199 one for each hierarchy. The entry for cgroup v2 is always in the
202 # cat /proc/842/cgroup
204 0::/test-cgroup/test-cgroup-nested
206 If the process becomes a zombie and the cgroup it was associated with
207 is removed subsequently, " (deleted)" is appended to the path.
209 # cat /proc/842/cgroup
211 0::/test-cgroup/test-cgroup-nested (deleted)
214 2-3. [Un]populated Notification
216 Each non-root cgroup has a "cgroup.events" file which contains
217 "populated" field indicating whether the cgroup's sub-hierarchy has
218 live processes in it. Its value is 0 if there is no live process in
219 the cgroup and its descendants; otherwise, 1. poll and [id]notify
220 events are triggered when the value changes. This can be used, for
221 example, to start a clean-up operation after all processes of a given
222 sub-hierarchy have exited. The populated state updates and
223 notifications are recursive. Consider the following sub-hierarchy
224 where the numbers in the parentheses represent the numbers of processes
230 A, B and C's "populated" fields would be 1 while D's 0. After the one
231 process in C exits, B and C's "populated" fields would flip to "0" and
232 file modified events will be generated on the "cgroup.events" files of
236 2-4. Controlling Controllers
238 2-4-1. Enabling and Disabling
240 Each cgroup has a "cgroup.controllers" file which lists all
241 controllers available for the cgroup to enable.
243 # cat cgroup.controllers
246 No controller is enabled by default. Controllers can be enabled and
247 disabled by writing to the "cgroup.subtree_control" file.
249 # echo "+cpu +memory -io" > cgroup.subtree_control
251 Only controllers which are listed in "cgroup.controllers" can be
252 enabled. When multiple operations are specified as above, either they
253 all succeed or all fail. If multiple operations on the same controller
254 are specified, the last one is effective.
256 Enabling a controller in a cgroup indicates that the distribution of
257 the target resource across its immediate children will be controlled.
258 Consider the following sub-hierarchy. The enabled controllers are
259 listed in parentheses.
261 A(cpu,memory) - B(memory) - C()
264 As A has "cpu" and "memory" enabled, A will control the distribution
265 of CPU cycles and memory to its children, in this case, B. As B has
266 "memory" enabled but not "CPU", C and D will compete freely on CPU
267 cycles but their division of memory available to B will be controlled.
269 As a controller regulates the distribution of the target resource to
270 the cgroup's children, enabling it creates the controller's interface
271 files in the child cgroups. In the above example, enabling "cpu" on B
272 would create the "cpu." prefixed controller interface files in C and
273 D. Likewise, disabling "memory" from B would remove the "memory."
274 prefixed controller interface files from C and D. This means that the
275 controller interface files - anything which doesn't start with
276 "cgroup." are owned by the parent rather than the cgroup itself.
279 2-4-2. Top-down Constraint
281 Resources are distributed top-down and a cgroup can further distribute
282 a resource only if the resource has been distributed to it from the
283 parent. This means that all non-root "cgroup.subtree_control" files
284 can only contain controllers which are enabled in the parent's
285 "cgroup.subtree_control" file. A controller can be enabled only if
286 the parent has the controller enabled and a controller can't be
287 disabled if one or more children have it enabled.
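
For example (a sketch using the "memory" controller; any available
controller behaves the same way), a controller shows up in a child's
"cgroup.controllers", and thus can be enabled there, only after the
parent has enabled it.

  # cat child/cgroup.controllers
  # echo "+memory" > cgroup.subtree_control
  # cat child/cgroup.controllers
  memory
  # echo "+memory" > child/cgroup.subtree_control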
290 2-4-3. No Internal Process Constraint
292 Non-root cgroups can only distribute resources to their children when
293 they don't have any processes of their own. In other words, only
294 cgroups which don't contain any processes can have controllers enabled
295 in their "cgroup.subtree_control" files.
297 This guarantees that, when a controller is looking at the part of the
298 hierarchy which has it enabled, processes are always only on the
299 leaves. This rules out situations where child cgroups compete against
300 internal processes of the parent.
302 The root cgroup is exempt from this restriction. Root contains
303 processes and anonymous resource consumption which can't be associated
304 with any other cgroups and requires special treatment from most
305 controllers. How resource consumption in the root cgroup is governed
306 is up to each controller.
308 Note that the restriction doesn't get in the way if there is no
309 enabled controller in the cgroup's "cgroup.subtree_control". This is
310 important as otherwise it wouldn't be possible to create children of a
311 populated cgroup. To control resource distribution of a cgroup, the
312 cgroup must create children and transfer all its processes to the
313 children before enabling controllers in its "cgroup.subtree_control"
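
For example, a minimal sketch of that sequence, assuming the shell is
the only process in the cgroup and "memory" is listed in
"cgroup.controllers": the process is moved into a leaf child first,
after which controllers can be enabled for the children.

  # mkdir leaf
  # echo $$ > leaf/cgroup.procs
  # echo "+memory" > cgroup.subtree_control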
319 2-5-1. Model of Delegation
321 A cgroup can be delegated in two ways. First, to a less privileged
322 user by granting write access of the directory and its "cgroup.procs"
323 and "cgroup.subtree_control" files to the user. Second, if the
324 "nsdelegate" mount option is set, automatically to a cgroup namespace
325 on namespace creation.
327 Because the resource control interface files in a given directory
328 control the distribution of the parent's resources, the delegatee
329 shouldn't be allowed to write to them. For the first method, this is
330 achieved by not granting access to these files. For the second, the
331 kernel rejects writes to all files other than "cgroup.procs" and
332 "cgroup.subtree_control" on a namespace root from inside the
335 The end results are equivalent for both delegation types. Once
336 delegated, the user can build sub-hierarchy under the directory,
337 organize processes inside it as it sees fit and further distribute the
338 resources it received from the parent. The limits and other settings
339 of all resource controllers are hierarchical and regardless of what
340 happens in the delegated sub-hierarchy, nothing can escape the
341 resource restrictions imposed by the parent.
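
For the first method, delegation boils down to changing the ownership
of the directory and the two files, for example (a sketch; U0 is the
delegatee and the path is arbitrary):

  # chown U0 $MOUNT_POINT/delegated
  # chown U0 $MOUNT_POINT/delegated/cgroup.procs
  # chown U0 $MOUNT_POINT/delegated/cgroup.subtree_control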
343 Currently, cgroup doesn't impose any restrictions on the number of
344 cgroups in or nesting depth of a delegated sub-hierarchy; however,
345 this may be limited explicitly in the future.
348 2-5-2. Delegation Containment
350 A delegated sub-hierarchy is contained in the sense that processes
351 can't be moved into or out of the sub-hierarchy by the delegatee.
353 For delegations to a less privileged user, this is achieved by
354 requiring the following conditions for a process with a non-root euid
355 to migrate a target process into a cgroup by writing its PID to the
358 - The writer must have write access to the "cgroup.procs" file.
360 - The writer must have write access to the "cgroup.procs" file of the
361 common ancestor of the source and destination cgroups.
363 The above two constraints ensure that while a delegatee may migrate
364 processes around freely in the delegated sub-hierarchy it can't pull
365 in from or push out to outside the sub-hierarchy.
367 For an example, let's assume cgroups C0 and C1 have been delegated to
368 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
369 all processes under C0 and C1 belong to U0.
371 ~~~~~~~~~~~~~ - C0 - C00
374 ~~~~~~~~~~~~~ - C1 - C10
376 Let's also say U0 wants to write the PID of a process which is
377 currently in C10 into "C00/cgroup.procs". U0 has write access to the
378 file; however, the common ancestor of the source cgroup C10 and the
379 destination cgroup C00 is above the points of delegation and U0 would
380 not have write access to its "cgroup.procs" files and thus the write
381 will be denied with -EACCES.
383 For delegations to namespaces, containment is achieved by requiring
384 that both the source and destination cgroups are reachable from the
385 namespace of the process which is attempting the migration. If either
386 is not reachable, the migration is rejected with -ENOENT.
391 2-6-1. Organize Once and Control
393 Migrating a process across cgroups is a relatively expensive operation
394 and stateful resources such as memory are not moved together with the
395 process. This is an explicit design decision as there often exist
396 inherent trade-offs between migration and various hot paths in terms
397 of synchronization cost.
399 As such, migrating processes across cgroups frequently as a means to
400 apply different resource restrictions is discouraged. A workload
401 should be assigned to a cgroup according to the system's logical and
402 resource structure once on start-up. Dynamic adjustments to resource
403 distribution can be made by changing controller configuration through
407 2-6-2. Avoid Name Collisions
409 Interface files for a cgroup and its children cgroups occupy the same
410 directory and it is possible to create children cgroups which collide
411 with interface files.
413 All cgroup core interface files are prefixed with "cgroup." and each
414 controller's interface files are prefixed with the controller name and
415 a dot. A controller's name is composed of lowercase letters and
416 '_'s but never begins with an '_' so it can be used as the prefix
417 character for collision avoidance. Also, interface file names won't
418 start or end with terms which are often used in categorizing workloads
419 such as job, service, slice, unit or workload.
421 cgroup doesn't do anything to prevent name collisions and it's the
422 user's responsibility to avoid them.
425 3. Resource Distribution Models
427 cgroup controllers implement several resource distribution schemes
428 depending on the resource type and expected use cases. This section
429 describes major schemes in use along with their expected behaviors.
434 A parent's resource is distributed by adding up the weights of all
435 active children and giving each the fraction matching the ratio of its
436 weight against the sum. As only children which can make use of the
437 resource at the moment participate in the distribution, this is
438 work-conserving. Due to the dynamic nature, this model is usually
439 used for stateless resources.
441 All weights are in the range [1, 10000] with the default at 100. This
442 allows symmetric multiplicative biases in both directions at fine
443 enough granularity while staying in the intuitive range.
445 As long as the weight is in range, all configuration combinations are
446 valid and there is no reason to reject configuration changes or
449 "cpu.weight" proportionally distributes CPU cycles to active children
450 and is an example of this type.
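
For example, a sketch using "io.weight" (described in section 5-3-1),
which follows the same model, assuming the "io" controller is enabled
in the parent and the device's scheduler supports weight based
distribution: while both children issue IO to the same device, A
receives roughly twice the IO time of B; whenever one goes idle, the
other may use the whole capacity.

  # echo 200 > A/io.weight
  # echo 100 > B/io.weight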
455 A child can only consume up to the configured amount of the resource.
456 Limits can be over-committed - the sum of the limits of children can
457 exceed the amount of resource available to the parent.
459 Limits are in the range [0, max] and default to "max", which is a noop.
461 As limits can be over-committed, all configuration combinations are
462 valid and there is no reason to reject configuration changes or
465 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
466 on an IO device and is an example of this type.
471 A cgroup is protected to be allocated up to the configured amount of
472 the resource if the usages of all its ancestors are under their
473 protected levels. Protections can be hard guarantees or best effort
474 soft boundaries. Protections can also be over-committed in which case
475 only up to the amount available to the parent is protected among
478 Protections are in the range [0, max] and default to 0, which is
481 As protections can be over-committed, all configuration combinations
482 are valid and there is no reason to reject configuration changes or
485 "memory.low" implements best-effort memory protection and is an
486 example of this type.
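
For example (a sketch, assuming the "memory" controller is enabled in
the parent; amounts are in bytes): sibling A is given a best-effort
protection of 1G while B keeps the default of no protection, so reclaim
is directed at B first as long as A and its ancestors stay within their
protected amounts.

  # echo 1073741824 > A/memory.low
  # cat B/memory.low
  0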
491 A cgroup is exclusively allocated a certain amount of a finite
492 resource. Allocations can't be over-committed - the sum of the
493 allocations of children can not exceed the amount of resource
494 available to the parent.
496 Allocations are in the range [0, max] and default to 0, which is no
499 As allocations can't be over-committed, some configuration
500 combinations are invalid and should be rejected. Also, if the
501 resource is mandatory for execution of processes, process migrations
504 "cpu.rt.max" hard-allocates realtime slices and is an example of this
512 All interface files should be in one of the following formats whenever
515 New-line separated values
516 (when only one value can be written at once)
522 Space separated values
523 (when read-only or multiple values can be written at once)
535 KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
536 KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
539 For a writable file, the format for writing should generally match
540 reading; however, controllers may allow omitting later fields or
541 implement restricted shortcuts for most common use cases.
543 For both flat and nested keyed files, only the values for a single key
544 can be written at a time. For nested keyed files, the sub key pairs
545 may be specified in any order and not all pairs have to be specified.
550 - Settings for a single feature should be contained in a single file.
552 - The root cgroup should be exempt from resource control and thus
553 shouldn't have resource control interface files. Also,
554 informational files on the root cgroup which end up showing global
555 information available elsewhere shouldn't exist.
557 - If a controller implements weight based resource distribution, its
558 interface file should be named "weight" and have the range [1,
559 10000] with 100 as the default. The values are chosen to allow
560 enough and symmetric bias in both directions while keeping it
561 intuitive (the default is 100%).
563 - If a controller implements an absolute resource guarantee and/or
564 limit, the interface files should be named "min" and "max"
565 respectively. If a controller implements best effort resource
566 guarantee and/or limit, the interface files should be named "low"
567 and "high" respectively.
569 In the above four control files, the special token "max" should be
570 used to represent upward infinity for both reading and writing.
572 - If a setting has a configurable default value and keyed specific
573 overrides, the default entry should be keyed with "default" and
574 appear as the first entry in the file.
576 The default value can be updated by writing either "default $VAL" or
579 When writing to update a specific override, "default" can be used as
580 the value to indicate removal of the override. Override entries
581 with "default" as the value must not appear when read.
583 For example, a setting which is keyed by major:minor device numbers
584 with integer values may look like the following.
586 # cat cgroup-example-interface-file
590 The default value can be updated by
592 # echo 125 > cgroup-example-interface-file
596 # echo "default 125" > cgroup-example-interface-file
598 An override can be set by
600 # echo "8:16 170" > cgroup-example-interface-file
604 # echo "8:0 default" > cgroup-example-interface-file
605 # cat cgroup-example-interface-file
609 - For events which are not very high frequency, an interface file
610 "events" should be created which lists event key value pairs.
611 Whenever a notifiable event happens, file modified event should be
612 generated on the file.
615 4-3. Core Interface Files
617 All cgroup core files are prefixed with "cgroup."
621 A read-write new-line separated values file which exists on
624 When read, it lists the PIDs of all processes which belong to
625 the cgroup one-per-line. The PIDs are not ordered and the
626 same PID may show up more than once if the process got moved
627 to another cgroup and then back or the PID got recycled while
630 A PID can be written to migrate the process associated with
631 the PID to the cgroup. The writer should match all of the
632 following conditions.
634 - Its euid is either root or must match either uid or suid of
637 - It must have write access to the "cgroup.procs" file.
639 - It must have write access to the "cgroup.procs" file of the
640 common ancestor of the source and destination cgroups.
642 When delegating a sub-hierarchy, write access to this file
643 should be granted along with the containing directory.
647 A read-only space separated values file which exists on all
650 It shows space separated list of all controllers available to
651 the cgroup. The controllers are not ordered.
653 cgroup.subtree_control
655 A read-write space separated values file which exists on all
656 cgroups. Starts out empty.
658 When read, it shows space separated list of the controllers
659 which are enabled to control resource distribution from the
660 cgroup to its children.
662 Space separated list of controllers prefixed with '+' or '-'
663 can be written to enable or disable controllers. A controller
664 name prefixed with '+' enables the controller and '-'
665 disables. If a controller appears more than once on the list,
666 the last one is effective. When multiple enable and disable
667 operations are specified, either all succeed or all fail.
671 A read-only flat-keyed file which exists on non-root cgroups.
672 The following entries are defined. Unless specified
673 otherwise, a value change in this file generates a file
678 1 if the cgroup or its descendants contains any live
679 processes; otherwise, 0.
686 [NOTE: The interface for the cpu controller hasn't been merged yet]
688 The "cpu" controllers regulates distribution of CPU cycles. This
689 controller implements weight and absolute bandwidth limit models for
690 normal scheduling policy and absolute bandwidth allocation model for
691 realtime scheduling policy.
694 5-1-1. CPU Interface Files
696 All time durations are in microseconds.
700 A read-only flat-keyed file which exists on non-root cgroups.
702 It reports the following six stats.
713 A read-write single value file which exists on non-root
714 cgroups. The default is "100".
716 The weight in the range [1, 10000].
720 A read-write two value file which exists on non-root cgroups.
721 The default is "max 100000".
723 The maximum bandwidth limit. It's in the following format.
727 which indicates that the group may consume up to $MAX in each
728 $PERIOD duration. "max" for $MAX indicates no limit. If only
729 one number is written, $MAX is updated.
733 [NOTE: The semantics of this file is still under discussion and the
734 interface hasn't been merged yet]
736 A read-write two value file which exists on all cgroups.
737 The default is "0 100000".
739 The maximum realtime runtime allocation. Over-committing
740 configurations are disallowed and process migrations are
741 rejected if not enough bandwidth is available. It's in the
746 which indicates that the group may consume up to $MAX in each
747 $PERIOD duration. If only one number is written, $MAX is
753 The "memory" controller regulates distribution of memory. Memory is
754 stateful and implements both limit and protection models. Due to the
755 intertwining between memory usage and reclaim pressure and the
756 stateful nature of memory, the distribution model is relatively
759 While not completely water-tight, all major memory usages by a given
760 cgroup are tracked so that the total memory consumption can be
761 accounted and controlled to a reasonable extent. Currently, the
762 following types of memory usages are tracked.
764 - Userland memory - page cache and anonymous memory.
766 - Kernel data structures such as dentries and inodes.
768 - TCP socket buffers.
770 The above list may expand in the future for better coverage.
773 5-2-1. Memory Interface Files
775 All memory amounts are in bytes. If a value which is not aligned to
776 PAGE_SIZE is written, the value may be rounded up to the closest
777 PAGE_SIZE multiple when read back.
781 A read-only single value file which exists on non-root
784 The total amount of memory currently being used by the cgroup
789 A read-write single value file which exists on non-root
790 cgroups. The default is "0".
792 Best-effort memory protection. If the memory usages of a
793 cgroup and all its ancestors are below their low boundaries,
794 the cgroup's memory won't be reclaimed unless memory can be
795 reclaimed from unprotected cgroups.
797 Putting more memory than generally available under this
798 protection is discouraged.
802 A read-write single value file which exists on non-root
803 cgroups. The default is "max".
805 Memory usage throttle limit. This is the main mechanism to
806 control memory usage of a cgroup. If a cgroup's usage goes
807 over the high boundary, the processes of the cgroup are
808 throttled and put under heavy reclaim pressure.
810 Going over the high limit never invokes the OOM killer and
811 under extreme conditions the limit may be breached.
815 A read-write single value file which exists on non-root
816 cgroups. The default is "max".
818 Memory usage hard limit. This is the final protection
819 mechanism. If a cgroup's memory usage reaches this limit and
820 can't be reduced, the OOM killer is invoked in the cgroup.
821 Under certain circumstances, the usage may go over the limit
824 This is the ultimate protection mechanism. As long as the
825 high limit is used and monitored properly, this limit's
826 utility is limited to providing the final safety net.
830 A read-only flat-keyed file which exists on non-root cgroups.
831 The following entries are defined. Unless specified
832 otherwise, a value change in this file generates a file
837 The number of times the cgroup is reclaimed due to
838 high memory pressure even though its usage is under
839 the low boundary. This usually indicates that the low
840 boundary is over-committed.
844 The number of times processes of the cgroup are
845 throttled and routed to perform direct memory reclaim
846 because the high memory boundary was exceeded. For a
847 cgroup whose memory usage is capped by the high limit
848 rather than global memory pressure, this event's
849 occurrences are expected.
853 The number of times the cgroup's memory usage was
854 about to go over the max boundary. If direct reclaim
855 fails to bring it down, the cgroup goes to OOM state.
859 The number of times the cgroup's memory usage
860 reached the limit and allocation was about to fail.
862 Depending on context, the result could be invocation of the
863 OOM killer and retrying allocation, or failing the allocation.
865 A failed allocation could in turn be returned to
866 userspace as -ENOMEM or silently ignored in cases like
867 disk readahead. For now, OOM in a memory cgroup kills
868 tasks only if the shortage has happened inside a page fault.
872 The number of processes belonging to this cgroup
873 killed by any kind of OOM killer.
877 A read-only flat-keyed file which exists on non-root cgroups.
879 This breaks down the cgroup's memory footprint into different
880 types of memory, type-specific details, and other information
881 on the state and past events of the memory management system.
883 All memory amounts are in bytes.
885 The entries are ordered to be human readable, and new entries
886 can show up in the middle. Don't rely on items remaining in a
887 fixed position; use the keys to look up specific values!
891 Amount of memory used in anonymous mappings such as
892 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
896 Amount of memory used to cache filesystem data,
897 including tmpfs and shared memory.
901 Amount of memory allocated to kernel stacks.
905 Amount of memory used for storing in-kernel data
910 Amount of memory used in network transmission buffers
914 Amount of cached filesystem data that is swap-backed,
915 such as tmpfs, shm segments, shared anonymous mmap()s
919 Amount of cached filesystem data mapped with mmap()
923 Amount of cached filesystem data that was modified but
924 not yet written back to disk
928 Amount of cached filesystem data that was modified and
929 is currently being written back to disk
937 Amount of memory, swap-backed and filesystem-backed,
938 on the internal memory management lists used by the
939 page reclaim algorithm
943 Part of "slab" that might be reclaimed, such as
948 Part of "slab" that cannot be reclaimed on memory
953 Total number of page faults incurred
957 Number of major page faults incurred
961 Number of refaults of previously evicted pages
965 Number of refaulted pages that were immediately activated
967 workingset_nodereclaim
969 Number of times a shadow node has been reclaimed
973 Amount of scanned pages (in an active LRU list)
977 Amount of scanned pages (in an inactive LRU list)
981 Amount of reclaimed pages
985 Amount of pages moved to the active LRU list
989 Amount of pages moved to the inactive LRU list
993 Amount of pages postponed to be freed under memory pressure
997 Amount of reclaimed lazyfree pages
1001 A read-only single value file which exists on non-root
1004 The total amount of swap currently being used by the cgroup
1005 and its descendants.
1009 A read-write single value file which exists on non-root
1010 cgroups. The default is "max".
1012 Swap usage hard limit. If a cgroup's swap usage reaches this
1013 limit, anonymous memory of the cgroup will not be swapped out.
1016 5-2-2. Usage Guidelines
1018 "memory.high" is the main mechanism to control memory usage.
1019 Over-committing on high limit (sum of high limits > available memory)
1020 and letting global memory pressure to distribute memory according to
1021 usage is a viable strategy.
1023 Because breach of the high limit doesn't trigger the OOM killer but
1024 throttles the offending cgroup, a management agent has ample
1025 opportunities to monitor and take appropriate actions such as granting
1026 more memory or terminating the workload.
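
As a sketch of this strategy (the paths, sizes and counts are
arbitrary), an agent may set a conservative "memory.high", keep
"memory.max" as a far-away safety net, and watch the "high" counter in
"memory.events" for sustained growth as the signal to grant more memory
or act on the workload.

  # echo 536870912 > workload/memory.high
  # echo 1073741824 > workload/memory.max
  # cat workload/memory.events
  low 0
  high 4731
  max 0
  oom 0
  oom_kill 0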
1028 Determining whether a cgroup has enough memory is not trivial as
1029 memory usage doesn't indicate whether the workload can benefit from
1030 more memory. For example, a workload which writes data received from
1031 network to a file can use all available memory but can also operate
1032 equally well with a small amount of memory. A measure of memory
1033 pressure - how much the workload is being impacted due to lack of
1034 memory - is necessary to determine whether a workload needs more
1035 memory; unfortunately, memory pressure monitoring mechanism isn't
1039 5-2-3. Memory Ownership
1041 A memory area is charged to the cgroup which instantiated it and stays
1042 charged to the cgroup until the area is released. Migrating a process
1043 to a different cgroup doesn't move the memory usages that it
1044 instantiated while in the previous cgroup to the new cgroup.
1046 A memory area may be used by processes belonging to different cgroups.
1047 To which cgroup the area will be charged is indeterminate; however,
1048 over time, the memory area is likely to end up in a cgroup which has
1049 enough memory allowance to avoid high reclaim pressure.
1051 If a cgroup sweeps a considerable amount of memory which is expected
1052 to be accessed repeatedly by other cgroups, it may make sense to use
1053 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1054 belonging to the affected files to ensure correct memory ownership.
1059 The "io" controller regulates the distribution of IO resources. This
1060 controller implements both weight based and absolute bandwidth or IOPS
1061 limit distribution; however, weight based distribution is available
1062 only if cfq-iosched is in use and neither scheme is available for
1066 5-3-1. IO Interface Files
1070 A read-only nested-keyed file which exists on non-root
1073 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1074 The following nested keys are defined.
1077 wbytes Bytes written
1078 rios Number of read IOs
1079 wios Number of write IOs
1081 An example read output follows.
1083 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353
1084 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252
1088 A read-write flat-keyed file which exists on non-root cgroups.
1089 The default is "default 100".
1091 The first line is the default weight applied to devices
1092 without specific override. The rest are overrides keyed by
1093 $MAJ:$MIN device numbers and not ordered. The weights are in
1094 the range [1, 10000] and specify the relative amount of IO time
1095 the cgroup can use in relation to its siblings.
1097 The default weight can be updated by writing either "default
1098 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1099 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
1101 An example read output follows.
1109 A read-write nested-keyed file which exists on non-root
1112 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
1113 device numbers and not ordered. The following nested keys are
1116 rbps Max read bytes per second
1117 wbps Max write bytes per second
1118 riops Max read IO operations per second
1119 wiops Max write IO operations per second
1121 When writing, any number of nested key-value pairs can be
1122 specified in any order. "max" can be specified as the value
1123 to remove a specific limit. If the same key is specified
1124 multiple times, the outcome is undefined.
1126 BPS and IOPS are measured in each IO direction and IOs are
1127 delayed if the limit is reached. Temporary bursts are allowed.
1129 Setting read limit at 2M BPS and write at 120 IOPS for 8:16.
1131 echo "8:16 rbps=2097152 wiops=120" > io.max
1133 Reading returns the following.
1135 8:16 rbps=2097152 wbps=max riops=max wiops=120
1137 Write IOPS limit can be removed by writing the following.
1139 echo "8:16 wiops=max" > io.max
1141 Reading now returns the following.
1143 8:16 rbps=2097152 wbps=max riops=max wiops=max
1148 Page cache is dirtied through buffered writes and shared mmaps and
1149 written asynchronously to the backing filesystem by the writeback
1150 mechanism. Writeback sits between the memory and IO domains and
1151 regulates the proportion of dirty memory by balancing dirtying and
1154 The io controller, in conjunction with the memory controller,
1155 implements control of page cache writeback IOs. The memory controller
1156 defines the memory domain that dirty memory ratio is calculated and
1157 maintained for and the io controller defines the io domain which
1158 writes out dirty pages for the memory domain. Both system-wide and
1159 per-cgroup dirty memory states are examined and the more restrictive
1160 of the two is enforced.
1162 cgroup writeback requires explicit support from the underlying
1163 filesystem. Currently, cgroup writeback is implemented on ext2, ext4
1164 and btrfs. On other filesystems, all writeback IOs are attributed to
1167 There are inherent differences in memory and writeback management
1168 which affect how cgroup ownership is tracked. Memory is tracked per
1169 page while writeback is tracked per inode. For the purpose of writeback, an
1170 inode is assigned to a cgroup and all IO requests to write dirty pages
1171 from the inode are attributed to that cgroup.
1173 As cgroup ownership for memory is tracked per page, there can be pages
1174 which are associated with different cgroups than the one the inode is
1175 associated with. These are called foreign pages. The writeback
1176 constantly keeps track of foreign pages and, if a particular foreign
1177 cgroup becomes the majority over a certain period of time, switches
1178 the ownership of the inode to that cgroup.
1180 While this model is enough for most use cases where a given inode is
1181 mostly dirtied by a single cgroup even when the main writing cgroup
1182 changes over time, use cases where multiple cgroups write to a single
1183 inode simultaneously are not supported well. In such circumstances, a
1184 significant portion of IOs are likely to be attributed incorrectly.
1185 As the memory controller assigns page ownership on the first use and
1186 doesn't update it until the page is released, even if writeback
1187 strictly follows page ownership, multiple cgroups dirtying overlapping
1188 areas wouldn't work as expected. It's recommended to avoid such usage
1191 The sysctl knobs which affect writeback behavior are applied to cgroup
1192 writeback as follows.
1194 vm.dirty_background_ratio
1197 These ratios apply the same to cgroup writeback with the
1198 amount of available memory capped by limits imposed by the
1199 memory controller and system-wide clean memory.
1201 vm.dirty_background_bytes
1204 For cgroup writeback, this is calculated as a ratio against the
1205 total available memory and applied the same way as
1206 vm.dirty[_background]_ratio.
1211 The process number controller is used to allow a cgroup to stop any
1212 new tasks from being fork()'d or clone()'d after a specified limit is
1215 The number of tasks in a cgroup can be exhausted in ways which other
1216 controllers cannot prevent, thus warranting its own controller. For
1217 example, a fork bomb is likely to exhaust the number of tasks before
1218 hitting memory restrictions.
1220 Note that PIDs used in this controller refer to TIDs, process IDs as
1224 5-4-1. PID Interface Files
1228 A read-write single value file which exists on non-root
1229 cgroups. The default is "max".
1231 Hard limit of number of processes.
1235 A read-only single value file which exists on all cgroups.
1237 The number of processes currently in the cgroup and its
1240 Organisational operations are not blocked by cgroup policies, so it is
1241 possible to have pids.current > pids.max. This can be done by either
1242 setting the limit to be smaller than pids.current, or attaching enough
1243 processes to the cgroup such that pids.current is larger than
1244 pids.max. However, it is not possible to violate a cgroup PID policy
1245 through fork() or clone(). These will return -EAGAIN if the creation
1246 of a new process would cause a cgroup policy to be violated.
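
For example (a sketch; the limit is arbitrary and the "pids" controller
is assumed to be enabled by the parent), once "pids.current" reaches
"pids.max", further fork(2)/clone(2) calls by members of the cgroup
fail with -EAGAIN until the count drops or the limit is raised.

  # echo 16 > pids.max
  # cat pids.current
  2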
1251 The "rdma" controller regulates the distribution and accounting of
1254 5-5-1. RDMA Interface Files
1257 A read-write nested-keyed file that exists for all cgroups
1258 except the root and describes the currently configured resource
1259 limits of RDMA/IB devices.
1261 Lines are keyed by device name and are not ordered.
1262 Each line contains space separated resource names and their
1263 configured limits that can be distributed.
1265 The following nested keys are defined.
1267 hca_handle Maximum number of HCA Handles
1268 hca_object Maximum number of HCA Objects
1270 An example for mlx4 and ocrdma device follows.
1272 mlx4_0 hca_handle=2 hca_object=2000
1273 ocrdma1 hca_handle=3 hca_object=max
1276 A read-only file that describes current resource usage.
1277 It exists for all cgroups except the root.
1279 An example for mlx4 and ocrdma device follows.
1281 mlx4_0 hca_handle=1 hca_object=20
1282 ocrdma1 hca_handle=1 hca_object=23
1289 The perf_event controller, if not mounted on a legacy hierarchy, is
1290 automatically enabled on the v2 hierarchy so that perf events can
1291 always be filtered by cgroup v2 path. The controller can still be
1292 moved to a legacy hierarchy after v2 hierarchy is populated.
1299 cgroup namespace provides a mechanism to virtualize the view of the
1300 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
1301 flag can be used with clone(2) and unshare(2) to create a new cgroup
1302 namespace. The process running inside the cgroup namespace will have
1303 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
1304 cgroupns root is the cgroup of the process at the time of creation of
1305 the cgroup namespace.
1307 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
1308 complete path of the cgroup of a process. In a container setup where
1309 a set of cgroups and namespaces are intended to isolate processes, the
1310 "/proc/$PID/cgroup" file may leak potential system level information
1311 to the isolated processes. For Example:
1313 # cat /proc/self/cgroup
1314 0::/batchjobs/container_id1
1316 The path '/batchjobs/container_id1' can be considered system data
1317 which is undesirable to expose to the isolated processes. A cgroup namespace
1318 can be used to restrict visibility of this path. For example, before
1319 creating a cgroup namespace, one would see:
1321 # ls -l /proc/self/ns/cgroup
1322 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
1323 # cat /proc/self/cgroup
1324 0::/batchjobs/container_id1
1326 After unsharing a new namespace, the view changes.
1328 # ls -l /proc/self/ns/cgroup
1329 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
1330 # cat /proc/self/cgroup
1333 When some thread from a multi-threaded process unshares its cgroup
1334 namespace, the new cgroupns gets applied to the entire process (all
1335 the threads). This is natural for the v2 hierarchy; however, for the
1336 legacy hierarchies, this may be unexpected.
1338 A cgroup namespace is alive as long as there are processes inside or
1339 mounts pinning it. When the last usage goes away, the cgroup
1340 namespace is destroyed. The cgroupns root and the actual cgroups
1344 6-2. The Root and Views
1346 The 'cgroupns root' for a cgroup namespace is the cgroup in which the
1347 process calling unshare(2) is running. For example, if a process in
1348 /batchjobs/container_id1 cgroup calls unshare, cgroup
1349 /batchjobs/container_id1 becomes the cgroupns root. For the
1350 init_cgroup_ns, this is the real root ('/') cgroup.
1352 The cgroupns root cgroup does not change even if the namespace creator
1353 process later moves to a different cgroup.
1355 # ~/unshare -c # unshare cgroupns in some cgroup
1356 # cat /proc/self/cgroup
1359 # echo 0 > sub_cgrp_1/cgroup.procs
1360 # cat /proc/self/cgroup
1363 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
1365 Processes running inside the cgroup namespace will be able to see
1366 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
1367 From within an unshared cgroupns:
1371 # echo 7353 > sub_cgrp_1/cgroup.procs
1372 # cat /proc/7353/cgroup
1375 From the initial cgroup namespace, the real cgroup path will be
1378 $ cat /proc/7353/cgroup
1379 0::/batchjobs/container_id1/sub_cgrp_1
1381 From a sibling cgroup namespace (that is, a namespace rooted at a
1382 different cgroup), the cgroup path relative to its own cgroup
1383 namespace root will be shown. For instance, if PID 7353's cgroup
1384 namespace root is at '/batchjobs/container_id2', then it will see
1386 # cat /proc/7353/cgroup
1387 0::/../container_id2/sub_cgrp_1
1389 Note that the relative path always starts with '/' to indicate that
1390 it is relative to the cgroup namespace root of the caller.
1393 6-3. Migration and setns(2)
1395 Processes inside a cgroup namespace can move into and out of the
1396 namespace root if they have proper access to external cgroups. For
1397 example, from inside a namespace with cgroupns root at
1398 /batchjobs/container_id1, and assuming that the global hierarchy is
1399 still accessible inside cgroupns:
1401 # cat /proc/7353/cgroup
1403 # echo 7353 > batchjobs/container_id2/cgroup.procs
1404 # cat /proc/7353/cgroup
1405 0::/../container_id2
1407 Note that this kind of setup is not encouraged. A task inside a cgroup
1408 namespace should only be exposed to its own cgroupns hierarchy.
1410 setns(2) to another cgroup namespace is allowed when:
1412 (a) the process has CAP_SYS_ADMIN against its current user namespace
1413 (b) the process has CAP_SYS_ADMIN against the target cgroup
1416 No implicit cgroup changes happen with attaching to another cgroup
1417 namespace. It is expected that someone moves the attaching
1418 process under the target cgroup namespace root.
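
For example, using util-linux's nsenter (a sketch, assuming a version
that supports cgroup namespaces and that the caller satisfies both
conditions above), a process can attach to another process's cgroup
namespace; the manager is then expected to move it under that
namespace's root cgroup.

  # nsenter --cgroup=/proc/$TARGET_PID/ns/cgroup sh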
1421 6-4. Interaction with Other Namespaces
1423 Namespace specific cgroup hierarchy can be mounted by a process
1424 running inside a non-init cgroup namespace.
1426 # mount -t cgroup2 none $MOUNT_POINT
1428 This will mount the unified cgroup hierarchy with cgroupns root as the
1429 filesystem root. The process needs CAP_SYS_ADMIN against its user and
1432 The virtualization of the /proc/self/cgroup file combined with restricting
1433 the view of cgroup hierarchy by namespace-private cgroupfs mount
1434 provides a properly isolated cgroup view inside the container.
1437 P. Information on Kernel Programming
1439 This section contains kernel programming information in the areas
1440 where interacting with cgroup is necessary. cgroup core and
1441 controllers are not covered.
1444 P-1. Filesystem Support for Writeback
1446 A filesystem can support cgroup writeback by updating
1447 address_space_operations->writepage[s]() to annotate bio's using the
1448 following two functions.
1450 wbc_init_bio(@wbc, @bio)
1452 Should be called for each bio carrying writeback data and
1453 associates the bio with the inode's owner cgroup. Can be
1454 called anytime between bio allocation and submission.
1456 wbc_account_io(@wbc, @page, @bytes)
1458 Should be called for each data segment being written out.
1459 While this function doesn't care exactly when it's called
1460 during the writeback session, it's the easiest and most
1461 natural to call it as data segments are added to a bio.
1463 With writeback bio's annotated, cgroup support can be enabled per
1464 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
1465 selective disabling of cgroup writeback support which is helpful when
1466 certain filesystem features, e.g. journaled data mode, are
1469 wbc_init_bio() binds the specified bio to its cgroup. Depending on
1470 the configuration, the bio may be executed at a lower priority and if
1471 the writeback session is holding shared resources, e.g. a journal
1472 entry, may lead to priority inversion. There is no one easy solution
1473 for the problem. Filesystems can try to work around specific problem
1474 cases by skipping wbc_init_bio() or using bio_associate_blkcg()
1478 D. Deprecated v1 Core Features
1480 - Multiple hierarchies including named ones are not supported.
1482 - None of the v1 mount options are supported.
1484 - The "tasks" file is removed and "cgroup.procs" is not sorted.
1486 - "cgroup.clone_children" is removed.
1488 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
1489 at the root instead.
1492 R. Issues with v1 and Rationales for v2
1494 R-1. Multiple Hierarchies
1496 cgroup v1 allowed an arbitrary number of hierarchies and each
1497 hierarchy could host any number of controllers. While this seemed to
1498 provide a high level of flexibility, it wasn't useful in practice.
1500 For example, as there is only one instance of each controller, utility
1501 type controllers such as freezer which can be useful in all
1502 hierarchies could only be used in one. The issue is exacerbated by
1503 the fact that controllers couldn't be moved to another hierarchy once
1504 hierarchies were populated. Another issue was that all controllers
1505 bound to a hierarchy were forced to have exactly the same view of the
1506 hierarchy. It wasn't possible to vary the granularity depending on
1507 the specific controller.
1509 In practice, these issues heavily limited which controllers could be
1510 put on the same hierarchy and most configurations resorted to putting
1511 each controller on its own hierarchy. Only closely related ones, such
1512 as the cpu and cpuacct controllers, made sense to be put on the same
1513 hierarchy. This often meant that userland ended up managing multiple
1514 similar hierarchies repeating the same steps on each hierarchy
1515 whenever a hierarchy management operation was necessary.
1517 Furthermore, support for multiple hierarchies came at a steep cost.
1518 It greatly complicated cgroup core implementation but more importantly
1519 the support for multiple hierarchies restricted how cgroup could be
1520 used in general and what controllers were able to do.
1522 There was no limit on how many hierarchies there might be, which meant
1523 that a thread's cgroup membership couldn't be described in finite
1524 length. The key might contain any number of entries and was unlimited
1525 in length, which made it highly awkward to manipulate and led to
1526 addition of controllers which existed only to identify membership,
1527 which in turn exacerbated the original problem of proliferating number
1530 Also, as a controller couldn't have any expectation regarding the
1531 topologies of hierarchies other controllers might be on, each
1532 controller had to assume that all other controllers were attached to
1533 completely orthogonal hierarchies. This made it impossible, or at
1534 least very cumbersome, for controllers to cooperate with each other.
1536 In most use cases, putting controllers on hierarchies which are
1537 completely orthogonal to each other isn't necessary. What usually is
1538 called for is the ability to have differing levels of granularity
1539 depending on the specific controller. In other words, hierarchy may
1540 be collapsed from leaf towards root when viewed from specific
1541 controllers. For example, a given configuration might not care about
1542 how memory is distributed beyond a certain level while still wanting
1543 to control how CPU cycles are distributed.
1546 R-2. Thread Granularity
1548 cgroup v1 allowed threads of a process to belong to different cgroups.
1549 This didn't make sense for some controllers and those controllers
1550 ended up implementing different ways to ignore such situations but
1551 much more importantly it blurred the line between API exposed to
1552 individual applications and system management interface.
1554 Generally, in-process knowledge is available only to the process
1555 itself; thus, unlike service-level organization of processes,
1556 categorizing threads of a process requires active participation from
1557 the application which owns the target process.
1559 cgroup v1 had an ambiguously defined delegation model which got abused
1560 in combination with thread granularity. cgroups were delegated to
1561 individual applications so that they can create and manage their own
1562 sub-hierarchies and control resource distributions along them. This
1563 effectively raised cgroup to the status of a syscall-like API exposed
1566 First of all, cgroup has a fundamentally inadequate interface to be
1567 exposed this way. For a process to access its own knobs, it has to
1568 extract the path on the target hierarchy from /proc/self/cgroup,
1569 construct the path by appending the name of the knob to the path, open
1570 and then read and/or write to it. This is not only extremely clunky
1571 and unusual but also inherently racy. There is no conventional way to
1572 define transaction across the required steps and nothing can guarantee
1573 that the process would actually be operating on its own sub-hierarchy.
1575 cgroup controllers implemented a number of knobs which would never be
1576 accepted as public APIs because they were just adding control knobs to
1577 system-management pseudo filesystem. cgroup ended up with interface
1578 knobs which were not properly abstracted or refined and directly
1579 revealed kernel internal details. These knobs got exposed to
1580 individual applications through the ill-defined delegation mechanism
1581 effectively abusing cgroup as a shortcut to implementing public APIs
1582 without going through the required scrutiny.
1584 This was painful for both userland and kernel. Userland ended up with
1585 misbehaving and poorly abstracted interfaces and the kernel exposing
1586 and getting locked into constructs inadvertently.
1589 R-3. Competition Between Inner Nodes and Threads
1591 cgroup v1 allowed threads to be in any cgroups which created an
1592 interesting problem where threads belonging to a parent cgroup and its
1593 children cgroups competed for resources. This was nasty as two
1594 different types of entities competed and there was no obvious way to
1595 settle it. Different controllers did different things.
1597 The cpu controller considered threads and cgroups as equivalents and
1598 mapped nice levels to cgroup weights. This worked for some cases but
1599 fell flat when children wanted to be allocated specific ratios of CPU
1600 cycles and the number of internal threads fluctuated - the ratios
1601 constantly changed as the number of competing entities fluctuated.
1602 There also were other issues. The mapping from nice level to weight
1603 wasn't obvious or universal, and there were various other knobs which
1604 simply weren't available for threads.
1606 The io controller implicitly created a hidden leaf node for each
1607 cgroup to host the threads. The hidden leaf had its own copies of all
1608 the knobs with "leaf_" prefixed. While this allowed equivalent
1609 control over internal threads, it came with serious drawbacks. It
1610 always added an extra layer of nesting which wouldn't be necessary
1611 otherwise, made the interface messy and significantly complicated the
1614 The memory controller didn't have a way to control what happened
1615 between internal tasks and child cgroups and the behavior was not
1616 clearly defined. There were attempts to add ad-hoc behaviors and
1617 knobs to tailor the behavior to specific workloads which would have
1618 led to problems extremely difficult to resolve in the long term.
1620 Multiple controllers struggled with internal tasks and came up with
1621 different ways to deal with it; unfortunately, all the approaches were
1622 severely flawed and, furthermore, the widely different behaviors
1623 made cgroup as a whole highly inconsistent.
1625 This clearly is a problem which needs to be addressed from cgroup core
1629 R-4. Other Interface Issues
1631 cgroup v1 grew without oversight and developed a large number of
1632 idiosyncrasies and inconsistencies. One issue on the cgroup core side
1633 was how an empty cgroup was notified - a userland helper binary was
1634 forked and executed for each event. The event delivery wasn't
1635 recursive or delegatable. The limitations of the mechanism also led
1636 to in-kernel event delivery filtering mechanism further complicating
1639 Controller interfaces were problematic too. An extreme example is
1640 controllers completely ignoring hierarchical organization and treating
1641 all cgroups as if they were all located directly under the root
1642 cgroup. Some controllers exposed a large amount of inconsistent
1643 implementation details to userland.
1645 There also was no consistency across controllers. When a new cgroup
1646 was created, some controllers defaulted to not imposing extra
1647 restrictions while others disallowed any resource usage until
1648 explicitly configured. Configuration knobs for the same type of
1649 control used widely differing naming schemes and formats. Statistics
1650 and information knobs were named arbitrarily and used different
1651 formats and units even in the same controller.
1653 cgroup v2 establishes common conventions where appropriate and updates
1654 controllers so that they expose minimal and consistent interfaces.
1657 R-5. Controller Issues and Remedies
1661 The original lower boundary, the soft limit, is defined as a limit
1662 that is per default unset. As a result, the set of cgroups that
1663 global reclaim prefers is opt-in, rather than opt-out. The costs for
1664 optimizing these mostly negative lookups are so high that the
1665 implementation, despite its enormous size, does not even provide the
1666 basic desirable behavior. First off, the soft limit has no
1667 hierarchical meaning. All configured groups are organized in a global
1668 rbtree and treated like equal peers, regardless of where they are located
1669 in the hierarchy. This makes subtree delegation impossible. Second,
1670 the soft limit reclaim pass is so aggressive that it not just
1671 introduces high allocation latencies into the system, but also impacts
1672 system performance due to overreclaim, to the point where the feature
1673 becomes self-defeating.
1675 The memory.low boundary on the other hand is a top-down allocated
1676 reserve. A cgroup enjoys reclaim protection when it and all its
1677 ancestors are below their low boundaries, which makes delegation of
1678 subtrees possible. Secondly, new cgroups have no reserve per default
1679 and in the common case most cgroups are eligible for the preferred
1680 reclaim pass. This allows the new low boundary to be efficiently
1681 implemented with just a minor addition to the generic reclaim code,
1682 without the need for out-of-band data structures and reclaim passes.
1683 Because the generic reclaim code considers all cgroups except for the
1684 ones running low in the preferred first reclaim pass, overreclaim of
1685 individual groups is eliminated as well, resulting in much better
1686 overall workload performance.
1688 The original high boundary, the hard limit, is defined as a strict
1689 limit that can not budge, even if the OOM killer has to be called.
1690 But this generally goes against the goal of making the most out of the
1691 available memory. The memory consumption of workloads varies during
1692 runtime, and that requires users to overcommit. But doing that with a
1693 strict upper limit requires either a fairly accurate prediction of the
1694 working set size or adding slack to the limit. Since working set size
1695 estimation is hard and error prone, and getting it wrong results in
1696 OOM kills, most users tend to err on the side of a looser limit and
1697 end up wasting precious resources.
1699 The memory.high boundary on the other hand can be set much more
1700 conservatively. When hit, it throttles allocations by forcing them
1701 into direct reclaim to work off the excess, but it never invokes the
1702 OOM killer. As a result, a high boundary that is chosen too
1703 aggressively will not terminate the processes, but instead it will
1704 lead to gradual performance degradation. The user can monitor this
1705 and make corrections until the minimal memory footprint that still
1706 gives acceptable performance is found.
1708 In extreme cases, with many concurrent allocations and a complete
1709 breakdown of reclaim progress within the group, the high boundary can
1710 be exceeded. But even then it's mostly better to satisfy the
1711 allocation from the slack available in other groups or the rest of the
1712 system than killing the group. Otherwise, memory.max is there to
1713 limit this type of spillover and ultimately contain buggy or even
1714 malicious applications.
1716 Setting the original memory.limit_in_bytes below the current usage was
1717 subject to a race condition, where concurrent charges could cause the
1718 limit setting to fail. memory.max on the other hand will first set the
1719 limit to prevent new charges, and then reclaim and OOM kill until the
1720 new limit is met - or the task writing to memory.max is killed.
1722 The combined memory+swap accounting and limiting is replaced by real
1723 control over swap space.
1725 The main argument for a combined memory+swap facility in the original
1726 cgroup design was that global or parental pressure would always be
1727 able to swap all anonymous memory of a child group, regardless of the
1728 child's own (possibly untrusted) configuration. However, untrusted
1729 groups can sabotage swapping by other means - such as referencing their
1730 anonymous memory in a tight loop - and an admin cannot assume full
1731 swappability when overcommitting untrusted jobs.
1733 For trusted jobs, on the other hand, a combined counter is not an
1734 intuitive userspace interface, and it flies in the face of the idea
1735 that cgroup controllers should account and limit specific physical
1736 resources. Swap space is a resource like all others in the system,
1737 and that's why unified hierarchy allows distributing it separately.