.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Miscellaneous cgroup Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection). This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller. The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller. It is only charged to a cgroup when it is
          actually used (e.g. at page fault time). Host memory
          overcommit management has to consider this when configuring
          hard limits. In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS. This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics. Any userspace tuning
          (e.g. of low, min limits) needs to take this into account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        This option restores the v1-like behavior of pids.events:max,
        that is, only local (inside cgroup proper) fork failures are
        counted. Without this option, pids.events.max represents any
        pids.max enforcement across the cgroup's subtree.

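As an illustration, several of these options can be combined in a
single mount invocation; the mount point below is whatever the system
uses for the v2 hierarchy (commonly /sys/fs/cgroup)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup
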
Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.

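For example, assuming a process with PID 842 and a previously created
child cgroup, the whole process (all of its threads) can be moved
with::

  # echo 842 > $CGROUP_NAME/cgroup.procs
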
When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is single direction::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup::

- cpu
- cpuset
- perf_event
- pids

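Putting the pieces together, a minimal sketch of building a threaded
subtree might look as follows, assuming the "cpu" controller is
available in this part of the hierarchy and that the "svc"/"svc/workers"
names, $PID and $TID are purely illustrative::

  # mkdir -p svc/workers
  # echo threaded > svc/workers/cgroup.type
  # echo $PID > svc/cgroup.procs
  # echo "+cpu" > svc/cgroup.subtree_control
  # echo $TID > svc/workers/cgroup.threads

Writing "threaded" to "svc/workers/cgroup.type" turns "svc" into the
threaded domain, after which the threaded "cpu" controller can be
enabled and individual threads of the process can be spread into
"svc/workers".
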
[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0. After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.

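For example, a supervising process that manages an illustrative cgroup
"job" can watch its "cgroup.events" file (via poll or inotify) and
clean up once the sub-hierarchy is empty; a simplified sketch::

  # cat job/cgroup.events
  populated 0
  frozen 0
  # rmdir job
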
Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.

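Continuing the example above, the effect of toggling a controller can
be observed directly in the child's directory; the listing below is
abbreviated and illustrative::

  # echo "+cpu" > B/cgroup.subtree_control
  # ls C/cpu.*
  C/cpu.max  C/cpu.stat  C/cpu.weight  C/cpu.weight.nice  ...
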
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.

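For example, the following sketch, run from inside a populated
cgroup's directory, turns it into a parent whose resources can be
distributed; the "leaf" name is illustrative::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control
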
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

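A sketch of the first method, delegating a sub-hierarchy rooted at an
illustrative directory "delegated" to an illustrative user "u1"::

  # mkdir /sys/fs/cgroup/delegated
  # chown u1 /sys/fs/cgroup/delegated
  # chown u1 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u1 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u1 /sys/fs/cgroup/delegated/cgroup.subtree_control
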
Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.

Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.

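For example, with two active sibling cgroups A and B (names
illustrative) configured as below, A receives roughly two thirds and B
one third of the contended CPU cycles, while either can use all idle
cycles on its own::

  # echo 200 > A/cpu.weight
  # echo 100 > B/cpu.weight
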
.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

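"memory.max", described later in this document, is another example; a
hard limit of 1 GiB can be applied to an illustrative cgroup with::

  # echo 1G > job/memory.max
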
.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in dying state for some undefined
                time (which can depend on system load) before being
                completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

          nr_subsys_<cgroup_subsys>
                Total number of live cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

          nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

  cgroup.freeze
        A read-write single value file which exists on non-root cgroups.
        Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes will
        be stopped and will not run until the cgroup is explicitly
        unfrozen. Freezing of the cgroup may take some time; when this
        action is completed, the "frozen" value in the cgroup.events
        control file will be updated to "1" and the corresponding
        notification will be issued.

        A cgroup can be frozen either by its own settings, or by settings
        of any ancestor cgroups. If any of the ancestor cgroups is frozen,
        the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal signal.
        They also can enter and leave a frozen cgroup: either by an explicit
        move by a user, or if freezing of the cgroup races with fork().
        If a process is moved to a frozen cgroup, it stops. If a process is
        moved out of a frozen cgroup, it becomes running.

        Frozen status of a cgroup doesn't affect any cgroup tree operations:
        it's possible to delete a frozen (and empty) cgroup, as well as
        create new sub-cgroups.

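        For example, an entire illustrative cgroup "job" can be frozen
        and the completion of the operation confirmed through its
        "cgroup.events" file::

          # echo 1 > job/cgroup.freeze
          # cat job/cgroup.events
          populated 1
          frozen 1
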
  cgroup.kill
        A write-only single value file which exists in non-root cgroups.
        The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed. This means that all processes located in
        the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP as
        killing cgroups is a process directed operation, i.e. it affects
        the whole thread-group.

  cgroup.pressure
        A read-write single value file. Allowed values are "0" and "1".
        The default is "1".

        Writing "0" to the file will disable the cgroup PSI accounting.
        Writing "1" to the file will re-enable the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in descendants and does not require enablement to be
        passed down via ancestors from the root.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates it at each
        level of the hierarchy. This may cause non-negligible overhead
        for some workloads at deep levels of the hierarchy, in which
        case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and absolute bandwidth allocation model for
realtime scheduling policy.

In all the above models, cycles distribution is defined only on a temporal
base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
cpufreq governor about the minimum desired frequency which should always be
provided by a CPU, as well as the maximum desired frequency, which should not
be exceeded by a CPU.

WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth) control of
realtime processes. For a kernel built with the CONFIG_RT_GROUP_SCHED option
enabled for group scheduling of realtime processes, the cpu controller can only
be enabled when all RT processes are in the root cgroup. Be aware that system
management software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be moved to the
root cgroup before the cpu controller can be enabled with a
CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply and some of
the interface files either affect realtime processes or account for them. See
the following section for details. Only the cpu controller is affected by
CONFIG_RT_GROUP_SCHED. Other controllers can be used for the resource control of
realtime processes irrespective of CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its scheduling
policy and the underlying scheduler. From the point of view of the cpu
controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight`` callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a BPF
  scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a BPF
scheduler, check out :ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories
will be referred to. All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats, which account for all the
        processes in the cgroup:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled, which account
        for only the processes under the fair-class scheduler:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        For non idle groups (cpu.idle = 0), the weight is in the
        range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE (cpu.idle = 1),
        then the weight will show as a 0.

        This file affects only processes under the fair-class scheduler and
        a BPF scheduler with the ``cgroup_set_weight`` callback depending on
        what the callback actually does.

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

        This file affects only processes under the fair-class scheduler and
        a BPF scheduler with the ``cgroup_set_weight`` callback depending on
        what the callback actually does.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.

        This file affects only processes under the fair-class scheduler.

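        For example, limiting the cgroup to at most half of one CPU,
        expressed as 50ms of runtime per 100ms period, could look
        like::

          # echo "50000 100000" > cpu.max
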
  cpu.max.burst
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The burst in the range [0, $MAX].

        This file affects only processes under the fair-class scheduler.

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

        This file accounts for all the processes in the cgroup.

  cpu.uclamp.min
        A read-write single value file which exists on non-root cgroups.
        The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization clamp
        values similar to the sched_setattr(2). This minimum utilization
        value is used to clamp the task specific minimum utilization clamp,
        including those of realtime processes.

        The requested minimum utilization (protection) is always capped by
        the current value for the maximum utilization (limit), i.e.
        `cpu.uclamp.max`.

        This file affects all the processes in the cgroup.

  cpu.uclamp.max
        A read-write single value file which exists on non-root cgroups.
        The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage rational
        number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization clamp
        values similar to the sched_setattr(2). This maximum utilization
        value is used to clamp the task specific maximum utilization clamp,
        including those of realtime processes.

        This file affects all the processes in the cgroup.

  cpu.idle
        A read-write single value file which exists on non-root cgroups.
        The default is 0.

        This is the cgroup analog of the per-task SCHED_IDLE sched policy.
        Setting this value to a 1 will make the scheduling policy of the
        cgroup SCHED_IDLE. The threads inside the cgroup will retain their
        own relative priorities, but the cgroup itself will be treated as
        very low priority relative to its peers.

        This file affects only processes under the fair-class scheduler.

Memory
------

The "memory" controller regulates distribution of memory. Memory is
stateful and implements both limit and protection models. Due to the
intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complex.

While not completely water-tight, all major memory usages by a given
cgroup are tracked so that the total memory consumption can be
accounted and controlled to a reasonable extent. Currently, the
following types of memory usages are tracked.

- Userland memory - page cache and anonymous memory.

- Kernel data structures such as dentries and inodes.

- TCP socket buffers.

The above list may expand in the future for better coverage.


Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, the OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective low boundary is limited by memory.low values of
        all ancestor cgroups. If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

1323 memory.high
6c292092
TH
1324 A read-write single value file which exists on non-root
1325 cgroups. The default is "max".
1326
5647e53f 1327 Memory usage throttle limit. If a cgroup's usage goes
6c292092
TH
1328 over the high boundary, the processes of the cgroup are
1329 throttled and put under heavy reclaim pressure.
1330
1331 Going over the high limit never invokes the OOM killer and
5647e53f
DS
1332 under extreme conditions the limit may be breached. The high
1333 limit should be used in scenarios where an external process
1334 monitors the limited cgroup to alleviate heavy reclaim
1335 pressure.
6c292092 1336
c6c895cf
SB
1337 If memory.high is opened with O_NONBLOCK then the synchronous
1338 reclaim is bypassed. This is useful for admin processes that
1339 need to dynamically adjust the job's memory limits without
1340 expending their own CPU resources on memory reclamation. The
1341 job will trigger the reclaim and/or get throttled on its
1342 next charge request.
1343
        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests or
        busy-hitting its memory to slow down reclaim.
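
        As a sketch of the intended usage (the cgroup path and the value
        are illustrative), a management agent sets the throttle limit and
        then watches the "high" entry in memory.events for throttling::

          echo "8G" > /sys/fs/cgroup/job/memory.high
          grep high /sys/fs/cgroup/job/memory.events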
c8e6002b 1348
6c292092 1349 memory.max
6c292092
TH
1350 A read-write single value file which exists on non-root
1351 cgroups. The default is "max".
1352
5647e53f
DS
1353 Memory usage hard limit. This is the main mechanism to limit
1354 memory usage of a cgroup. If a cgroup's memory usage reaches
1355 this limit and can't be reduced, the OOM killer is invoked in
1356 the cgroup. Under certain circumstances, the usage may go
1357 over the limit temporarily.
6c292092 1358
        In the default configuration regular 0-order allocations always
        succeed unless the OOM killer chooses the current task as a
        victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore them in cases like disk readahead.
1365
c6c895cf
SB
1366 If memory.max is opened with O_NONBLOCK, then the synchronous
1367 reclaim and oom-kill are bypassed. This is useful for admin
1368 processes that need to dynamically adjust the job's memory limits
1369 without expending their own CPU resources on memory reclamation.
1370 The job will trigger the reclaim and/or oom-kill on its next
1371 charge request.
1372
        Please note that with O_NONBLOCK, there is a chance that the
        target memory cgroup may take an indefinite amount of time to
        reduce usage below the limit due to delayed charge requests or
        busy-hitting its memory to slow down reclaim.
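
        For example, a hard cap of 2G could be applied like this (the
        cgroup path and the size are illustrative); the value reads back
        in bytes::

          echo "2G" > /sys/fs/cgroup/job/memory.max
          cat /sys/fs/cgroup/job/memory.max      # prints 2147483648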
c8e6002b 1377
94968384
SB
1378 memory.reclaim
1379 A write-only nested-keyed file which exists for all cgroups.
1380
1381 This is a simple interface to trigger memory reclaim in the
1382 target cgroup.
1383
94968384
SB
1384 Example::
1385
1386 echo "1G" > memory.reclaim
1387
        Please note that the kernel can over- or under-reclaim from
        the target cgroup. If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.
1391
73b73bac
YA
1392 Please note that the proactive reclaim (triggered by this
1393 interface) is not meant to indicate memory pressure on the
1394 memory cgroup. Therefore socket memory balancing triggered by
1395 the memory reclaim normally is not exercised in this case.
1396 This means that the networking layer will not adapt based on
1397 reclaim induced by memory.reclaim.
1398
68cd9050
DS
1399The following nested keys are defined.
1400
1401 ========== ================================
1402 swappiness Swappiness value to reclaim with
1403 ========== ================================
1404
1405 Specifying a swappiness value instructs the kernel to perform
1406 the reclaim with that swappiness value. Note that this has the
1407 same semantics as vm.swappiness applied to memcg reclaim with
1408 all the existing limitations and potential future extensions.
1409
        The valid range for swappiness is [0-200] or "max"; setting
        swappiness=max exclusively reclaims anonymous memory.
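
        For example, to attempt proactive reclaim of about 512M while
        avoiding swapping out anonymous memory (the amount is arbitrary
        and reclaim may fall short)::

          echo "512M swappiness=0" > memory.reclaim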
1412
8e20d4b3 1413 memory.peak
c6f53ed8
DF
1414 A read-write single value file which exists on non-root cgroups.
1415
1416 The max memory usage recorded for the cgroup and its descendants since
1417 either the creation of the cgroup or the most recent reset for that FD.
8e20d4b3 1418
c6f53ed8
DF
1419 A write of any non-empty string to this file resets it to the
1420 current memory usage for subsequent reads through the same
1421 file descriptor.
8e20d4b3 1422
3d8b38eb
RG
1423 memory.oom.group
1424 A read-write single value file which exists on non-root
1425 cgroups. The default value is "0".
1426
1427 Determines whether the cgroup should be treated as
1428 an indivisible workload by the OOM killer. If set,
1429 all tasks belonging to the cgroup or to its descendants
1430 (if the memory cgroup is not a leaf cgroup) are killed
1431 together or not at all. This can be used to avoid
1432 partial kills to guarantee workload integrity.
1433
1434 Tasks with the OOM protection (oom_score_adj set to -1000)
1435 are treated as an exception and are never killed.
1436
        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of the
        memory.oom.group values of ancestor cgroups.
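
        For example, to have a multi-process job killed as a unit on OOM
        (the cgroup path is illustrative)::

          echo 1 > /sys/fs/cgroup/job/memory.oom.group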
1440
6c292092 1441 memory.events
6c292092
TH
1442 A read-only flat-keyed file which exists on non-root cgroups.
1443 The following entries are defined. Unless specified
1444 otherwise, a value change in this file generates a file
1445 modified event.
1446
1e577f97
SB
1447 Note that all fields in this file are hierarchical and the
1448 file modified event can be generated due to an event down the
22b12557 1449 hierarchy. For the local events at the cgroup level see
1e577f97
SB
1450 memory.events.local.
1451
6c292092 1452 low
6c292092
TH
1453 The number of times the cgroup is reclaimed due to
1454 high memory pressure even though its usage is under
1455 the low boundary. This usually indicates that the low
1456 boundary is over-committed.
1457
1458 high
6c292092
TH
1459 The number of times processes of the cgroup are
1460 throttled and routed to perform direct memory reclaim
1461 because the high memory boundary was exceeded. For a
1462 cgroup whose memory usage is capped by the high limit
1463 rather than global memory pressure, this event's
1464 occurrences are expected.
1465
1466 max
6c292092
TH
1467 The number of times the cgroup's memory usage was
1468 about to go over the max boundary. If direct reclaim
8e675f7a 1469 fails to bring it down, the cgroup goes to OOM state.
6c292092
TH
1470
1471 oom
8e675f7a
KK
                The number of times the cgroup's memory usage
                reached the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked not to retry
                attempts.
7a1adfdd 1478
8e675f7a 1479 oom_kill
8e675f7a
KK
1480 The number of processes belonging to this cgroup
1481 killed by any kind of OOM killer.
6c292092 1482
b6bf9abb
DS
1483 oom_group_kill
1484 The number of times a group OOM has occurred.
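
        A freshly created cgroup would typically read back with all of
        the above counters at zero, for example::

          low 0
          high 0
          max 0
          oom 0
          oom_kill 0
          oom_group_kill 0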
1485
1e577f97
SB
1486 memory.events.local
1487 Similar to memory.events but the fields in the file are local
1488 to the cgroup i.e. not hierarchical. The file modified event
1489 generated on this file reflects only the local events.
1490
587d9f72 1491 memory.stat
587d9f72
JW
1492 A read-only flat-keyed file which exists on non-root cgroups.
1493
1494 This breaks down the cgroup's memory footprint into different
1495 types of memory, type-specific details, and other information
1496 on the state and past events of the memory management system.
1497
1498 All memory amounts are in bytes.
1499
1500 The entries are ordered to be human readable, and new entries
1501 can show up in the middle. Don't rely on items remaining in a
1502 fixed position; use the keys to look up specific values!
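
        For example, a management tool can look up specific entries by
        key instead of position (the keys shown are among those defined
        below)::

          grep -E '^(anon|file|sock) ' memory.stat
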
1503
a21e7bb3
KK
        If an entry has no per-node counter (and therefore does not
        show up in memory.numa_stat), the 'npn' (non-per-node) tag is
        used to indicate that it will not show in memory.numa_stat.
5f9a4f4a 1507
587d9f72 1508 anon
587d9f72 1509 Amount of memory used in anonymous mappings such as
74949222
DH
1510 brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
1511 some kernel configurations might account complete larger
1512 allocations (e.g., THP) if only some, but not all the
1513 memory of such an allocation is mapped anymore.
587d9f72
JW
1514
1515 file
587d9f72
JW
1516 Amount of memory used to cache filesystem data,
1517 including tmpfs and shared memory.
1518
a8c49af3
YA
1519 kernel (npn)
1520 Amount of total kernel memory, including
1521 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1522 addition to other kernel memory use cases.
1523
12580e4b 1524 kernel_stack
12580e4b
VD
1525 Amount of memory allocated to kernel stacks.
1526
f0c0c115
SB
1527 pagetables
1528 Amount of memory allocated for page tables.
1529
ebc97a52
YA
1530 sec_pagetables
1531 Amount of memory allocated for secondary page tables,
1532 this currently includes KVM mmu allocations on x86
212c5c07 1533 and arm64 and IOMMU page tables.
ebc97a52 1534
a21e7bb3 1535 percpu (npn)
772616b0
RG
1536 Amount of memory used for storing per-cpu kernel
1537 data structures.
1538
a21e7bb3 1539 sock (npn)
4758e198
JW
1540 Amount of memory used in network transmission buffers
1541
4e5aa1f4
SB
1542 vmalloc (npn)
1543 Amount of memory used for vmap backed memory.
1544
9a4caf1e 1545 shmem
9a4caf1e
JW
1546 Amount of cached filesystem data that is swap-backed,
1547 such as tmpfs, shm segments, shared anonymous mmap()s
1548
f4840ccf
JW
1549 zswap
1550 Amount of memory consumed by the zswap compression backend.
1551
1552 zswapped
1553 Amount of application memory swapped out to zswap.
1554
587d9f72 1555 file_mapped
74949222
DH
                Amount of cached filesystem data mapped with mmap(). Note
                that some kernel configurations might account complete
                larger allocations (e.g., THP) if only some, but not all
                the memory of such an allocation is mapped.
587d9f72
JW
1560
1561 file_dirty
587d9f72
JW
1562 Amount of cached filesystem data that was modified but
1563 not yet written back to disk
1564
1565 file_writeback
587d9f72
JW
1566 Amount of cached filesystem data that was modified and
1567 is currently being written back to disk
1568
b6038942
SB
1569 swapcached
1570 Amount of swap cached in memory. The swapcache is accounted
1571 against both memory and swap usage.
1572
1ff9e6e1
CD
1573 anon_thp
1574 Amount of memory used in anonymous mappings backed by
1575 transparent hugepages
b8eddff8
JW
1576
1577 file_thp
1578 Amount of cached filesystem data backed by transparent
1579 hugepages
1580
1581 shmem_thp
1582 Amount of shm, tmpfs, shared anonymous mmap()s backed by
1583 transparent hugepages
1ff9e6e1 1584
633b11be 1585 inactive_anon, active_anon, inactive_file, active_file, unevictable
587d9f72
JW
1586 Amount of memory, swap-backed and filesystem-backed,
1587 on the internal memory management lists used by the
1603c8d1
CD
1588 page reclaim algorithm.
1589
        As these represent internal list state (e.g. shmem pages are on anon
        memory management lists), inactive_foo + active_foo may not be equal to
        the value for the foo counter, since the foo counter is type-based, not
        list-based.
587d9f72 1594
27ee57c9 1595 slab_reclaimable
27ee57c9
VD
1596 Part of "slab" that might be reclaimed, such as
1597 dentries and inodes.
1598
1599 slab_unreclaimable
27ee57c9
VD
1600 Part of "slab" that cannot be reclaimed on memory
1601 pressure.
1602
a21e7bb3 1603 slab (npn)
5f9a4f4a
MS
1604 Amount of memory used for storing in-kernel data
1605 structures.
587d9f72 1606
8d3fe09d
MS
1607 workingset_refault_anon
1608 Number of refaults of previously evicted anonymous pages.
b340959e 1609
8d3fe09d
MS
1610 workingset_refault_file
1611 Number of refaults of previously evicted file pages.
b340959e 1612
8d3fe09d
MS
1613 workingset_activate_anon
1614 Number of refaulted anonymous pages that were immediately
1615 activated.
1616
1617 workingset_activate_file
1618 Number of refaulted file pages that were immediately activated.
1619
1620 workingset_restore_anon
1621 Number of restored anonymous pages which have been detected as
1622 an active workingset before they got reclaimed.
1623
1624 workingset_restore_file
1625 Number of restored file pages which have been detected as an
1626 active workingset before they got reclaimed.
a6f5576b 1627
b340959e 1628 workingset_nodereclaim
b340959e
RG
1629 Number of times a shadow node has been reclaimed
1630
4c8bc7c4
HJ
1631 pswpin (npn)
1632 Number of pages swapped into memory
1633
1634 pswpout (npn)
1635 Number of pages swapped out of memory
1636
673520f8
QZ
1637 pgscan (npn)
1638 Amount of scanned pages (in an inactive LRU list)
1639
1640 pgsteal (npn)
1641 Amount of reclaimed pages
1642
1643 pgscan_kswapd (npn)
1644 Amount of scanned pages by kswapd (in an inactive LRU list)
1645
1646 pgscan_direct (npn)
1647 Amount of scanned pages directly (in an inactive LRU list)
1648
57e9cc50
JW
1649 pgscan_khugepaged (npn)
1650 Amount of scanned pages by khugepaged (in an inactive LRU list)
1651
e452872b
HJ
1652 pgscan_proactive (npn)
1653 Amount of scanned pages proactively (in an inactive LRU list)
1654
673520f8
QZ
1655 pgsteal_kswapd (npn)
1656 Amount of reclaimed pages by kswapd
1657
1658 pgsteal_direct (npn)
1659 Amount of reclaimed pages directly
1660
57e9cc50
JW
1661 pgsteal_khugepaged (npn)
1662 Amount of reclaimed pages by khugepaged
1663
e452872b
HJ
1664 pgsteal_proactive (npn)
1665 Amount of reclaimed pages proactively
1666
a21e7bb3 1667 pgfault (npn)
5f9a4f4a
MS
1668 Total number of page faults incurred
1669
a21e7bb3 1670 pgmajfault (npn)
5f9a4f4a
MS
1671 Number of major page faults incurred
1672
a21e7bb3 1673 pgrefill (npn)
2262185c
RG
1674 Amount of scanned pages (in an active LRU list)
1675
a21e7bb3 1676 pgactivate (npn)
2262185c
RG
1677 Amount of pages moved to the active LRU list
1678
a21e7bb3 1679 pgdeactivate (npn)
03189e8e 1680 Amount of pages moved to the inactive LRU list
2262185c 1681
a21e7bb3 1682 pglazyfree (npn)
2262185c
RG
1683 Amount of pages postponed to be freed under memory pressure
1684
a21e7bb3 1685 pglazyfreed (npn)
2262185c
RG
1686 Amount of reclaimed lazyfree pages
1687
e7ac4dae
BS
1688 swpin_zero
1689 Number of pages swapped into memory and filled with zero, where I/O
1690 was optimized out because the page content was detected to be zero
1691 during swapout.
1692
1693 swpout_zero
1694 Number of zero-filled pages swapped out with I/O skipped due to the
1695 content being detected as zero.
1696
db5b4f32
UA
1697 zswpin
1698 Number of pages moved in to memory from zswap.
1699
1700 zswpout
1701 Number of pages moved out of memory to zswap.
1702
1703 zswpwb
1704 Number of pages written from zswap to swap.
1705
a21e7bb3 1706 thp_fault_alloc (npn)
1ff9e6e1 1707 Number of transparent hugepages which were allocated to satisfy
2a8bef32
YS
1708 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1709 is not set.
1ff9e6e1 1710
a21e7bb3 1711 thp_collapse_alloc (npn)
1ff9e6e1
CD
1712 Number of transparent hugepages which were allocated to allow
1713 collapsing an existing range of pages. This counter is not
1714 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1715
811244a5
XH
1716 thp_swpout (npn)
                Number of transparent hugepages which were swapped out in
                one piece without splitting.

          thp_swpout_fallback (npn)
                Number of transparent hugepages which were split before
                swapout, usually because of a failure to allocate enough
                contiguous swap space for the huge page.
1724
f77f0c75
KZ
1725 numa_pages_migrated (npn)
1726 Number of pages migrated by NUMA balancing.
1727
1728 numa_pte_updates (npn)
1729 Number of pages whose page table entries are modified by
1730 NUMA balancing to produce NUMA hinting faults on access.
1731
1732 numa_hint_faults (npn)
1733 Number of NUMA hinting faults.
1734
1735 pgdemote_kswapd
1736 Number of pages demoted by kswapd.
1737
1738 pgdemote_direct
1739 Number of pages demoted directly.
1740
1741 pgdemote_khugepaged
1742 Number of pages demoted by khugepaged.
1743
e452872b
HJ
1744 pgdemote_proactive
                Number of pages demoted proactively.
1746
05d4532b
JH
1747 hugetlb
1748 Amount of memory used by hugetlb pages. This metric only shows
1749 up if hugetlb usage is accounted for in memory.current (i.e.
1750 cgroup is mounted with the memory_hugetlb_accounting option).
1751
5f9a4f4a
MS
1752 memory.numa_stat
1753 A read-only nested-keyed file which exists on non-root cgroups.
1754
1755 This breaks down the cgroup's memory footprint into different
1756 types of memory, type-specific details, and other information
1757 per node on the state of the memory management system.
1758
        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One use case is evaluating
        application performance by combining this information with the
        application's CPU allocation.
1764
1765 All memory amounts are in bytes.
1766
1767 The output format of memory.numa_stat is::
1768
1769 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1770
1771 The entries are ordered to be human readable, and new entries
1772 can show up in the middle. Don't rely on items remaining in a
1773 fixed position; use the keys to look up specific values!
1774
        For the meaning of the entries, refer to memory.stat.
1776
3e24b19d 1777 memory.swap.current
3e24b19d
VD
1778 A read-only single value file which exists on non-root
1779 cgroups.
1780
1781 The total amount of swap currently being used by the cgroup
1782 and its descendants.
1783
4b82ab4f
JK
1784 memory.swap.high
1785 A read-write single value file which exists on non-root
1786 cgroups. The default is "max".
1787
1788 Swap usage throttle limit. If a cgroup's swap usage exceeds
1789 this limit, all its further allocations will be throttled to
1790 allow userspace to implement custom out-of-memory procedures.
1791
1792 This limit marks a point of no return for the cgroup. It is NOT
1793 designed to manage the amount of swapping a workload does
1794 during regular operation. Compare to memory.swap.max, which
1795 prohibits swapping past a set amount, but lets the cgroup
1796 continue unimpeded as long as other memory can be reclaimed.
1797
1798 Healthy workloads are not expected to reach this limit.
1799
e0e0b412 1800 memory.swap.peak
c6f53ed8
DF
1801 A read-write single value file which exists on non-root cgroups.
1802
1803 The max swap usage recorded for the cgroup and its descendants since
1804 the creation of the cgroup or the most recent reset for that FD.
e0e0b412 1805
c6f53ed8
DF
1806 A write of any non-empty string to this file resets it to the
1807 current memory usage for subsequent reads through the same
1808 file descriptor.
e0e0b412 1809
3e24b19d 1810 memory.swap.max
3e24b19d
VD
1811 A read-write single value file which exists on non-root
1812 cgroups. The default is "max".
1813
1814 Swap usage hard limit. If a cgroup's swap usage reaches this
2877cbe6 1815 limit, anonymous memory of the cgroup will not be swapped out.
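
        For example, swap can be disabled entirely for a cgroup (the
        path is illustrative)::

          echo 0 > /sys/fs/cgroup/job/memory.swap.max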
3e24b19d 1816
f3a53a3a
TH
1817 memory.swap.events
1818 A read-only flat-keyed file which exists on non-root cgroups.
1819 The following entries are defined. Unless specified
1820 otherwise, a value change in this file generates a file
1821 modified event.
1822
4b82ab4f
JK
1823 high
1824 The number of times the cgroup's swap usage was over
1825 the high threshold.
1826
f3a53a3a
TH
1827 max
1828 The number of times the cgroup's swap usage was about
1829 to go over the max boundary and swap allocation
1830 failed.
1831
1832 fail
1833 The number of times swap allocation failed either
1834 because of running out of swap system-wide or max
1835 limit.
1836
be09102b
TH
        When the limit is reduced below the current usage, the existing
        swap entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time. This
        reduces the impact on the workload and memory management.
1841
f4840ccf
JW
1842 memory.zswap.current
1843 A read-only single value file which exists on non-root
1844 cgroups.
1845
1846 The total amount of memory consumed by the zswap compression
1847 backend.
1848
1849 memory.zswap.max
1850 A read-write single value file which exists on non-root
1851 cgroups. The default is "max".
1852
1853 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1854 limit, it will refuse to take any more stores before existing
1855 entries fault back in or are written out to disk.
1856
501a06fe 1857 memory.zswap.writeback
e3992573
MY
1858 A read-write single value file. The default value is "1".
1859 Note that this setting is hierarchical, i.e. the writeback would be
1860 implicitly disabled for child cgroups if the upper hierarchy
1861 does so.
501a06fe
NP
1862
        When this is set to 0, all attempts to swap to the backing swap
        devices are disabled. This includes both zswap writeback and
        swapping due to zswap store failures. If the zswap store failures
        are recurring (e.g. if the pages are incompressible), users can
        observe reclaim inefficiency after disabling writeback (because
        the same pages might be rejected again and again).
1869
1870 Note that this is subtly different from setting memory.swap.max to
1871 0, as it still allows for pages to be written to the zswap pool.
5a53623d
MY
1872 This setting has no effect if zswap is disabled, and swapping
1873 is allowed unless memory.swap.max is set to 0.
501a06fe 1874
2ce7135a 1875 memory.pressure
74bdd45c 1876 A read-only nested-keyed file.
2ce7135a
JW
1877
1878 Shows pressure stall information for memory. See
373e8ffa 1879 :ref:`Documentation/accounting/psi.rst <psi>` for details.
2ce7135a 1880
6c292092 1881
Usage Guidelines
~~~~~~~~~~~~~~~~
1884
1885"memory.high" is the main mechanism to control memory usage.
1886Over-committing on high limit (sum of high limits > available memory)
1887and letting global memory pressure to distribute memory according to
1888usage is a viable strategy.
1889
1890Because breach of the high limit doesn't trigger the OOM killer but
1891throttles the offending cgroup, a management agent has ample
1892opportunities to monitor and take appropriate actions such as granting
1893more memory or terminating the workload.
1894
Determining whether a cgroup has enough memory is not trivial as
memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
the network to a file can use all available memory but can also
perform just as well with a small amount of memory. A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; the "memory.pressure" interface described above provides such
a measure in the form of pressure stall information.
1904
1905
Memory Ownership
~~~~~~~~~~~~~~~~
1908
1909A memory area is charged to the cgroup which instantiated it and stays
1910charged to the cgroup until the area is released. Migrating a process
1911to a different cgroup doesn't move the memory usages that it
1912instantiated while in the previous cgroup to the new cgroup.
1913
1914A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
1916over time, the memory area is likely to end up in a cgroup which has
1917enough memory allowance to avoid high reclaim pressure.
1918
1919If a cgroup sweeps a considerable amount of memory which is expected
1920to be accessed repeatedly by other cgroups, it may make sense to use
1921POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1922belonging to the affected files to ensure correct memory ownership.
1923
1924
IO
--
1927
1928The "io" controller regulates the distribution of IO resources. This
1929controller implements both weight based and absolute bandwidth or IOPS
1930limit distribution; however, weight based distribution is available
1931only if cfq-iosched is in use and neither scheme is available for
1932blk-mq devices.
1933
1934
IO Interface Files
~~~~~~~~~~~~~~~~~~
1937
1938 io.stat
ef45fe47 1939 A read-only nested-keyed file.
6c292092
TH
1940
1941 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1942 The following nested keys are defined.
1943
636620b6 1944 ====== =====================
6c292092
TH
1945 rbytes Bytes read
1946 wbytes Bytes written
1947 rios Number of read IOs
1948 wios Number of write IOs
636620b6
TH
1949 dbytes Bytes discarded
1950 dios Number of discard IOs
1951 ====== =====================
6c292092 1952
69654d37 1953 An example read output follows::
6c292092 1954
636620b6
TH
1955 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1956 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
6c292092 1957
7caa4715 1958 io.cost.qos
c4c6b86a 1959 A read-write nested-keyed file which exists only on the root
7caa4715
TH
1960 cgroup.
1961
1962 This file configures the Quality of Service of the IO cost
1963 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1964 currently implements "io.weight" proportional control. Lines
1965 are keyed by $MAJ:$MIN device numbers and not ordered. The
1966 line for a given device is populated on the first write for
1967 the device on "io.cost.qos" or "io.cost.model". The following
1968 nested keys are defined.
1969
1970 ====== =====================================
1971 enable Weight-based control enable
1972 ctrl "auto" or "user"
1973 rpct Read latency percentile [0, 100]
1974 rlat Read latency threshold
1975 wpct Write latency percentile [0, 100]
1976 wlat Write latency threshold
1977 min Minimum scaling percentage [1, 10000]
1978 max Maximum scaling percentage [1, 10000]
1979 ====== =====================================
1980
1981 The controller is disabled by default and can be enabled by
1982 setting "enable" to 1. "rpct" and "wpct" parameters default
1983 to zero and the controller uses internal device saturation
1984 state to adjust the overall IO rate between "min" and "max".
1985
1986 When a better control quality is needed, latency QoS
1987 parameters can be configured. For example::
1988
1989 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
1990
1991 shows that on sdb, the controller is enabled, will consider
1992 the device saturated if the 95th percentile of read completion
1993 latencies is above 75ms or write 150ms, and adjust the overall
1994 IO issue rate between 50% and 150% accordingly.
1995
1996 The lower the saturation point, the better the latency QoS at
1997 the cost of aggregate bandwidth. The narrower the allowed
1998 adjustment range between "min" and "max", the more conformant
1999 to the cost model the IO behavior. Note that the IO issue
2000 base rate may be far off from 100% and setting "min" and "max"
2001 blindly can lead to a significant loss of device capacity or
2002 control quality. "min" and "max" are useful for regulating
2003 devices which show wide temporary behavior changes - e.g. a
2004 ssd which accepts writes at the line speed for a while and
2005 then completely stalls for multiple seconds.
2006
2007 When "ctrl" is "auto", the parameters are controlled by the
2008 kernel and may change automatically. Setting "ctrl" to "user"
2009 or setting any of the percentile and latency parameters puts
2010 it into "user" mode and disables the automatic changes. The
2011 automatic mode can be restored by setting "ctrl" to "auto".
2012
2013 io.cost.model
c4c6b86a 2014 A read-write nested-keyed file which exists only on the root
7caa4715
TH
2015 cgroup.
2016
2017 This file configures the cost model of the IO cost model based
2018 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
2019 implements "io.weight" proportional control. Lines are keyed
2020 by $MAJ:$MIN device numbers and not ordered. The line for a
2021 given device is populated on the first write for the device on
2022 "io.cost.qos" or "io.cost.model". The following nested keys
2023 are defined.
2024
2025 ===== ================================
2026 ctrl "auto" or "user"
2027 model The cost model in use - "linear"
2028 ===== ================================
2029
2030 When "ctrl" is "auto", the kernel may change all parameters
2031 dynamically. When "ctrl" is set to "user" or any other
2032 parameters are written to, "ctrl" become "user" and the
2033 automatic changes are disabled.
2034
2035 When "model" is "linear", the following model parameters are
2036 defined.
2037
2038 ============= ========================================
2039 [r|w]bps The maximum sequential IO throughput
2040 [r|w]seqiops The maximum 4k sequential IOs per second
2041 [r|w]randiops The maximum 4k random IOs per second
2042 ============= ========================================
2043
2044 From the above, the builtin linear model determines the base
2045 costs of a sequential and random IO and the cost coefficient
2046 for the IO size. While simple, this model can cover most
2047 common device classes acceptably.
2048
2049 The IO cost model isn't expected to be accurate in absolute
2050 sense and is scaled to the device behavior dynamically.
2051
8504dea7
TH
2052 If needed, tools/cgroup/iocost_coef_gen.py can be used to
2053 generate device-specific coefficients.
2054
6c292092 2055 io.weight
6c292092
TH
2056 A read-write flat-keyed file which exists on non-root cgroups.
2057 The default is "default 100".
2058
2059 The first line is the default weight applied to devices
2060 without specific override. The rest are overrides keyed by
2061 $MAJ:$MIN device numbers and not ordered. The weights are in
        the range [1, 10000] and specify the relative amount of IO time
        the cgroup can use in relation to its siblings.
2064
2065 The default weight can be updated by writing either "default
2066 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
2067 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
2068
633b11be 2069 An example read output follows::
6c292092
TH
2070
2071 default 100
2072 8:16 200
2073 8:0 50
2074
2075 io.max
6c292092
TH
2076 A read-write nested-keyed file which exists on non-root
2077 cgroups.
2078
2079 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
2080 device numbers and not ordered. The following nested keys are
2081 defined.
2082
633b11be 2083 ===== ==================================
6c292092
TH
2084 rbps Max read bytes per second
2085 wbps Max write bytes per second
2086 riops Max read IO operations per second
2087 wiops Max write IO operations per second
633b11be 2088 ===== ==================================
6c292092
TH
2089
2090 When writing, any number of nested key-value pairs can be
2091 specified in any order. "max" can be specified as the value
2092 to remove a specific limit. If the same key is specified
2093 multiple times, the outcome is undefined.
2094
2095 BPS and IOPS are measured in each IO direction and IOs are
2096 delayed if limit is reached. Temporary bursts are allowed.
2097
633b11be 2098 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
6c292092
TH
2099
2100 echo "8:16 rbps=2097152 wiops=120" > io.max
2101
633b11be 2102 Reading returns the following::
6c292092
TH
2103
2104 8:16 rbps=2097152 wbps=max riops=max wiops=120
2105
633b11be 2106 Write IOPS limit can be removed by writing the following::
6c292092
TH
2107
2108 echo "8:16 wiops=max" > io.max
2109
633b11be 2110 Reading now returns the following::
6c292092
TH
2111
2112 8:16 rbps=2097152 wbps=max riops=max wiops=max
2113
2ce7135a 2114 io.pressure
74bdd45c 2115 A read-only nested-keyed file.
2ce7135a
JW
2116
2117 Shows pressure stall information for IO. See
373e8ffa 2118 :ref:`Documentation/accounting/psi.rst <psi>` for details.
2ce7135a 2119
6c292092 2120
Writeback
~~~~~~~~~
2123
2124Page cache is dirtied through buffered writes and shared mmaps and
2125written asynchronously to the backing filesystem by the writeback
2126mechanism. Writeback sits between the memory and IO domains and
2127regulates the proportion of dirty memory by balancing dirtying and
2128write IOs.
2129
2130The io controller, in conjunction with the memory controller,
2131implements control of page cache writeback IOs. The memory controller
2132defines the memory domain that dirty memory ratio is calculated and
2133maintained for and the io controller defines the io domain which
2134writes out dirty pages for the memory domain. Both system-wide and
2135per-cgroup dirty memory states are examined and the more restrictive
2136of the two is enforced.
2137
2138cgroup writeback requires explicit support from the underlying
1b932b7d
ES
2139filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
2140btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
2141attributed to the root cgroup.
6c292092
TH
2142
2143There are inherent differences in memory and writeback management
2144which affects how cgroup ownership is tracked. Memory is tracked per
2145page while writeback per inode. For the purpose of writeback, an
2146inode is assigned to a cgroup and all IO requests to write dirty pages
2147from the inode are attributed to that cgroup.
2148
2149As cgroup ownership for memory is tracked per page, there can be pages
2150which are associated with different cgroups than the one the inode is
2151associated with. These are called foreign pages. The writeback
2152constantly keeps track of foreign pages and, if a particular foreign
2153cgroup becomes the majority over a certain period of time, switches
2154the ownership of the inode to that cgroup.
2155
2156While this model is enough for most use cases where a given inode is
2157mostly dirtied by a single cgroup even when the main writing cgroup
2158changes over time, use cases where multiple cgroups write to a single
2159inode simultaneously are not supported well. In such circumstances, a
2160significant portion of IOs are likely to be attributed incorrectly.
2161As memory controller assigns page ownership on the first use and
2162doesn't update it until the page is released, even if writeback
2163strictly follows page ownership, multiple cgroups dirtying overlapping
2164areas wouldn't work as expected. It's recommended to avoid such usage
2165patterns.
2166
2167The sysctl knobs which affect writeback behavior are applied to cgroup
2168writeback as follows.
2169
633b11be 2170 vm.dirty_background_ratio, vm.dirty_ratio
6c292092
TH
2171 These ratios apply the same to cgroup writeback with the
2172 amount of available memory capped by limits imposed by the
2173 memory controller and system-wide clean memory.
2174
633b11be 2175 vm.dirty_background_bytes, vm.dirty_bytes
6c292092
TH
2176 For cgroup writeback, this is calculated into ratio against
2177 total available memory and applied the same way as
2178 vm.dirty[_background]_ratio.
2179
2180
IO Latency
~~~~~~~~~~
2183
2184This is a cgroup v2 controller for IO workload protection. You provide a group
2185with a latency target, and if the average latency exceeds that target the
2186controller will throttle any peers that have a lower latency target than the
2187protected workload.
2188
2189The limits are only applied at the peer level in the hierarchy. This means that
2190in the diagram below, only groups A, B, and C will influence each other, and
34b43446 2191groups D and F will influence each other. Group G will influence nobody::
b351f0c7
JB
2192
2193 [root]
2194 / | \
2195 A B C
2196 / \ |
2197 D F G
2198
2199
2200So the ideal way to configure this is to set io.latency in groups A, B, and C.
2201Generally you do not want to set a value lower than the latency your device
2202supports. Experiment to find the value that works best for your workload.
2203Start at higher than the expected latency for your device and watch the
c480bcf9
DZF
2204avg_lat value in io.stat for your workload group to get an idea of the
2205latency you see during normal operation. Use the avg_lat value as a basis for
2206your real setting, setting at 10-15% higher than the value in io.stat.
b351f0c7
JB
2207
How IO Latency Throttling Works
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2210
2211io.latency is work conserving; so as long as everybody is meeting their latency
2212target the controller doesn't do anything. Once a group starts missing its
2213target it begins throttling any peer group that has a higher target than itself.
2214This throttling takes 2 forms:
2215
2216- Queue depth throttling. This is the number of outstanding IO's a group is
2217 allowed to have. We will clamp down relatively quickly, starting at no limit
2218 and going all the way down to 1 IO at a time.
2219
2220- Artificial delay induction. There are certain types of IO that cannot be
2221 throttled without possibly adversely affecting higher priority groups. This
2222 includes swapping and metadata IO. These types of IO are allowed to occur
2223 normally, however they are "charged" to the originating group. If the
2224 originating group is being throttled you will see the use_delay and delay
2225 fields in io.stat increase. The delay value is how many microseconds that are
2226 being added to any process that runs in this group. Because this number can
2227 grow quite large if there is a lot of swapping or metadata IO occurring we
2228 limit the individual delay events to 1 second at a time.
2229
2230Once the victimized group starts meeting its latency target again it will start
2231unthrottling any peer groups that were throttled previously. If the victimized
2232group simply stops doing IO the global counter will unthrottle appropriately.
2233
IO Latency Interface Files
~~~~~~~~~~~~~~~~~~~~~~~~~~
2236
2237 io.latency
2238 This takes a similar format as the other controllers.
2239
a477b94d 2240 "MAJOR:MINOR target=<target time in microseconds>"
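
        For example, a 10ms latency target could be set for the protected
        workload group on device 8:0 (the device and the value are
        illustrative)::

          echo "8:0 target=10000" > io.latency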
b351f0c7
JB
2241
2242 io.stat
2243 If the controller is enabled you will see extra stats in io.stat in
2244 addition to the normal ones.
2245
2246 depth
2247 This is the current queue depth for the group.
2248
2249 avg_lat
c480bcf9
DZF
2250 This is an exponential moving average with a decay rate of 1/exp
2251 bound by the sampling interval. The decay rate interval can be
2252 calculated by multiplying the win value in io.stat by the
2253 corresponding number of samples based on the win value.
2254
2255 win
2256 The sampling window size in milliseconds. This is the minimum
2257 duration of time between evaluation events. Windows only elapse
2258 with IO activity. Idle periods extend the most recent window.
b351f0c7 2259
IO Priority
~~~~~~~~~~~
2262
2263A single attribute controls the behavior of the I/O priority cgroup policy,
c1081a7b 2264namely the io.prio.class attribute. The following values are accepted for
556910e3
BVA
2265that attribute:
2266
2267 no-change
2268 Do not modify the I/O priority class.
2269
ddf63516
HT
2270 promote-to-rt
2271 For requests that have a non-RT I/O priority class, change it into RT.
2272 Also change the priority level of these requests to 4. Do not modify
2273 the I/O priority of requests that have priority class RT.
556910e3
BVA
2274
2275 restrict-to-be
2276 For requests that do not have an I/O priority class or that have I/O
ddf63516
HT
2277 priority class RT, change it into BE. Also change the priority level
2278 of these requests to 0. Do not modify the I/O priority class of
2279 requests that have priority class IDLE.
556910e3
BVA
2280
2281 idle
2282 Change the I/O priority class of all requests into IDLE, the lowest
2283 I/O priority class.
2284
ddf63516
HT
2285 none-to-rt
2286 Deprecated. Just an alias for promote-to-rt.
2287
556910e3
BVA
2288The following numerical values are associated with the I/O priority policies:
2289
ddf63516
HT
2290+----------------+---+
2291| no-change | 0 |
2292+----------------+---+
c1081a7b 2293| promote-to-rt | 1 |
ddf63516 2294+----------------+---+
c1081a7b
TY
2295| restrict-to-be | 2 |
2296+----------------+---+
2297| idle | 3 |
ddf63516 2298+----------------+---+
556910e3
BVA
2299
2300The numerical value that corresponds to each I/O priority class is as follows:
2301
2302+-------------------------------+---+
2303| IOPRIO_CLASS_NONE | 0 |
2304+-------------------------------+---+
2305| IOPRIO_CLASS_RT (real-time) | 1 |
2306+-------------------------------+---+
2307| IOPRIO_CLASS_BE (best effort) | 2 |
2308+-------------------------------+---+
2309| IOPRIO_CLASS_IDLE | 3 |
2310+-------------------------------+---+
2311
2312The algorithm to set the I/O priority class for a request is as follows:
2313
ddf63516
HT
2314- If I/O priority class policy is promote-to-rt, change the request I/O
2315 priority class to IOPRIO_CLASS_RT and change the request I/O priority
2316 level to 4.
c1081a7b 2317- If I/O priority class policy is not promote-to-rt, translate the I/O priority
ddf63516
HT
2318 class policy into a number, then change the request I/O priority class
2319 into the maximum of the I/O priority class policy number and the numerical
2320 I/O priority class.
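
For example, the policy can be set for a cgroup as follows (the cgroup
path is illustrative)::

  echo restrict-to-be > /sys/fs/cgroup/background/io.prio.class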
556910e3 2321
PID
---
2324
2325The process number controller is used to allow a cgroup to stop any
2326new tasks from being fork()'d or clone()'d after a specified limit is
2327reached.
2328
2329The number of tasks in a cgroup can be exhausted in ways which other
2330controllers cannot prevent, thus warranting its own controller. For
2331example, a fork bomb is likely to exhaust the number of tasks before
2332hitting memory restrictions.
2333
2334Note that PIDs used in this controller refer to TIDs, process IDs as
2335used by the kernel.
2336
2337
PID Interface Files
~~~~~~~~~~~~~~~~~~~
2340
2341 pids.max
312eb712
TK
2342 A read-write single value file which exists on non-root
2343 cgroups. The default is "max".
20c56e59 2344
312eb712 2345 Hard limit of number of processes.
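
        For example, a service could be capped at 512 tasks (the cgroup
        path is illustrative)::

          echo 512 > /sys/fs/cgroup/service/pids.max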
20c56e59
HR
2346
2347 pids.current
c9169291 2348 A read-only single value file which exists on non-root cgroups.
20c56e59 2349
312eb712
TK
2350 The number of processes currently in the cgroup and its
2351 descendants.
20c56e59 2352
c9169291
XJ
2353 pids.peak
2354 A read-only single value file which exists on non-root cgroups.
2355
2356 The maximum value that the number of processes in the cgroup and its
2357 descendants has ever reached.
2358
2359 pids.events
73e75e6f
MK
2360 A read-only flat-keyed file which exists on non-root cgroups. Unless
2361 specified otherwise, a value change in this file generates a file
2362 modified event. The following entries are defined.
c9169291
XJ
2363
2364 max
385a635c 2365 The number of times the cgroup's total number of processes hit the pids.max
73e75e6f 2366 limit (see also pids_localevents).
c9169291 2367
3f26a885
MK
2368 pids.events.local
2369 Similar to pids.events but the fields in the file are local
2370 to the cgroup i.e. not hierarchical. The file modified event
2371 generated on this file reflects only the local events.
2372
20c56e59
HR
2373Organisational operations are not blocked by cgroup policies, so it is
2374possible to have pids.current > pids.max. This can be done by either
2375setting the limit to be smaller than pids.current, or attaching enough
2376processes to the cgroup such that pids.current is larger than
2377pids.max. However, it is not possible to violate a cgroup PID policy
2378through fork() or clone(). These will return -EAGAIN if the creation
2379of a new process would cause a cgroup policy to be violated.
2380
2381
Cpuset
------
2384
2385The "cpuset" controller provides a mechanism for constraining
2386the CPU and memory node placement of tasks to only the resources
2387specified in the cpuset interface files in a task's current cgroup.
2388This is especially valuable on large NUMA systems where placing jobs
2389on properly sized subsets of the systems with careful processor and
2390memory placement to reduce cross-node memory access and contention
2391can improve overall system performance.
2392
2393The "cpuset" controller is hierarchical. That means the controller
2394cannot use CPUs or memory nodes not allowed in its parent.
2395
2396
Cpuset Interface Files
~~~~~~~~~~~~~~~~~~~~~~
2399
2400 cpuset.cpus
2401 A read-write multiple values file which exists on non-root
2402 cpuset-enabled cgroups.
2403
2404 It lists the requested CPUs to be used by tasks within this
2405 cgroup. The actual list of CPUs to be granted, however, is
2406 subjected to constraints imposed by its parent and can differ
2407 from the requested CPUs.
2408
2409 The CPU numbers are comma-separated numbers or ranges.
f3431ba7 2410 For example::
4ec22e9c
WL
2411
2412 # cat cpuset.cpus
2413 0-4,6,8-10
2414
2415 An empty value indicates that the cgroup is using the same
2416 setting as the nearest cgroup ancestor with a non-empty
2417 "cpuset.cpus" or all the available CPUs if none is found.
2418
2419 The value of "cpuset.cpus" stays constant until the next update
2420 and won't be affected by any CPU hotplug events.
2421
2422 cpuset.cpus.effective
5776cecc 2423 A read-only multiple values file which exists on all
4ec22e9c
WL
2424 cpuset-enabled cgroups.
2425
2426 It lists the onlined CPUs that are actually granted to this
2427 cgroup by its parent. These CPUs are allowed to be used by
2428 tasks within the current cgroup.
2429
2430 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2431 all the CPUs from the parent cgroup that can be available to
2432 be used by this cgroup. Otherwise, it should be a subset of
2433 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2434 can be granted. In this case, it will be treated just like an
2435 empty "cpuset.cpus".
2436
2437 Its value will be affected by CPU hotplug events.
2438
2439 cpuset.mems
2440 A read-write multiple values file which exists on non-root
2441 cpuset-enabled cgroups.
2442
2443 It lists the requested memory nodes to be used by tasks within
2444 this cgroup. The actual list of memory nodes granted, however,
2445 is subjected to constraints imposed by its parent and can differ
2446 from the requested memory nodes.
2447
2448 The memory node numbers are comma-separated numbers or ranges.
f3431ba7 2449 For example::
4ec22e9c
WL
2450
2451 # cat cpuset.mems
2452 0-1,3
2453
2454 An empty value indicates that the cgroup is using the same
2455 setting as the nearest cgroup ancestor with a non-empty
2456 "cpuset.mems" or all the available memory nodes if none
2457 is found.
2458
2459 The value of "cpuset.mems" stays constant until the next update
2460 and won't be affected by any memory nodes hotplug events.
2461
ee9707e8
WL
2462 Setting a non-empty value to "cpuset.mems" causes memory of
2463 tasks within the cgroup to be migrated to the designated nodes if
2464 they are currently using memory outside of the designated nodes.
2465
2466 There is a cost for this memory migration. The migration
2467 may not be complete and some memory pages may be left behind.
2468 So it is recommended that "cpuset.mems" should be set properly
2469 before spawning new tasks into the cpuset. Even if there is
2470 a need to change "cpuset.mems" with active tasks, it shouldn't
2471 be done frequently.
2472
4ec22e9c 2473 cpuset.mems.effective
5776cecc 2474 A read-only multiple values file which exists on all
4ec22e9c
WL
2475 cpuset-enabled cgroups.
2476
2477 It lists the onlined memory nodes that are actually granted to
2478 this cgroup by its parent. These memory nodes are allowed to
2479 be used by tasks within the current cgroup.
2480
2481 If "cpuset.mems" is empty, it shows all the memory nodes from the
2482 parent cgroup that will be available to be used by this cgroup.
2483 Otherwise, it should be a subset of "cpuset.mems" unless none of
2484 the memory nodes listed in "cpuset.mems" can be granted. In this
2485 case, it will be treated just like an empty "cpuset.mems".
2486
2487 Its value will be affected by memory nodes hotplug events.
2488
efdf7532
WL
2489 cpuset.cpus.exclusive
2490 A read-write multiple values file which exists on non-root
2491 cpuset-enabled cgroups.
2492
2493 It lists all the exclusive CPUs that are allowed to be used
2494 to create a new cpuset partition. Its value is not used
2495 unless the cgroup becomes a valid partition root. See the
2496 "cpuset.cpus.partition" section below for a description of what
2497 a cpuset partition is.
2498
2499 When the cgroup becomes a partition root, the actual exclusive
2500 CPUs that are allocated to that partition are listed in
2501 "cpuset.cpus.exclusive.effective" which may be different
2502 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2503 has previously been set, "cpuset.cpus.exclusive.effective"
2504 is always a subset of it.
2505
2506 Users can manually set it to a value that is different from
fe8cd273
WL
2507 "cpuset.cpus". One constraint in setting it is that the list of
2508 CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
2509 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
2510 isn't set, its "cpuset.cpus" value, if set, cannot be a subset
2511 of it to leave at least one CPU available when the exclusive
2512 CPUs are taken away.
efdf7532
WL
2513
2514 For a parent cgroup, any one of its exclusive CPUs can only
2515 be distributed to at most one of its child cgroups. Having an
2516 exclusive CPU appearing in two or more of its child cgroups is
2517 not allowed (the exclusivity rule). A value that violates the
2518 exclusivity rule will be rejected with a write error.
2519
2520 The root cgroup is a partition root and all its available CPUs
2521 are in its exclusive CPU set.
2522
2523 cpuset.cpus.exclusive.effective
2524 A read-only multiple values file which exists on all non-root
2525 cpuset-enabled cgroups.
2526
2527 This file shows the effective set of exclusive CPUs that
737bb142
WL
2528 can be used to create a partition root. The content
2529 of this file will always be a subset of its parent's
efdf7532
WL
2530 "cpuset.cpus.exclusive.effective" if its parent is not the root
2531 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2532 if it is set. If "cpuset.cpus.exclusive" is not set, it is
2533 treated to have an implicit value of "cpuset.cpus" in the
2534 formation of local partition.
2535
877c737d
WL
2536 cpuset.cpus.isolated
2537 A read-only and root cgroup only multiple values file.
2538
2539 This file shows the set of all isolated CPUs used in existing
2540 isolated partitions. It will be empty if no isolated partition
2541 is created.
2542
b1e3aeb1 2543 cpuset.cpus.partition
90e92f2d
WL
2544 A read-write single value file which exists on non-root
2545 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2546 and is not delegatable.
2547
8a32d0fe 2548 It accepts only the following input values when written to.
90e92f2d 2549
8cbfdc24
WL
2550 ========== =====================================
2551 "member" Non-root member of a partition
2552 "root" Partition root
2553 "isolated" Partition root without load balancing
2554 ========== =====================================
2555
efdf7532
WL
2556 A cpuset partition is a collection of cpuset-enabled cgroups with
2557 a partition root at the top of the hierarchy and its descendants
2558 except those that are separate partition roots themselves and
2559 their descendants. A partition has exclusive access to the
2560 set of exclusive CPUs allocated to it. Other cgroups outside
2561 of that partition cannot use any CPUs in that set.
2562
2563 There are two types of partitions - local and remote. A local
2564 partition is one whose parent cgroup is also a valid partition
2565 root. A remote partition is one whose parent cgroup is not a
2566 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2567 is optional for the creation of a local partition as its
2568 "cpuset.cpus.exclusive" file will assume an implicit value that
2569 is the same as "cpuset.cpus" if it is not set. Writing the
2570 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2571 before the target partition root is mandatory for the creation
2572 of a remote partition.
2573
2574 Currently, a remote partition cannot be created under a local
2575 partition. All the ancestors of a remote partition root except
2576 the root cgroup cannot be a partition root.
2577
2578 The root cgroup is always a partition root and its state cannot
2579 be changed. All other non-root cgroups start out as "member".
8cbfdc24
WL
2580
2581 When set to "root", the current cgroup is the root of a new
efdf7532
WL
2582 partition or scheduling domain. The set of exclusive CPUs is
2583 determined by the value of its "cpuset.cpus.exclusive.effective".
8cbfdc24 2584
72c6303a
WL
2585 When set to "isolated", the CPUs in that partition will be in
2586 an isolated state without any load balancing from the scheduler
2587 and excluded from the unbound workqueues. Tasks placed in such
2588 a partition with multiple CPUs should be carefully distributed
2589 and bound to each of the individual CPUs for optimal performance.
8cbfdc24 2590
8cbfdc24
WL
2591 A partition root ("root" or "isolated") can be in one of the
2592 two possible states - valid or invalid. An invalid partition
2593 root is in a degraded state where some state information may
2594 be retained, but behaves more like a "member".
2595
2596 All possible state transitions among "member", "root" and
2597 "isolated" are allowed.
2598
2599 On read, the "cpuset.cpus.partition" file can show the following
2600 values.
2601
2602 ============================= =====================================
2603 "member" Non-root member of a partition
2604 "root" Partition root
2605 "isolated" Partition root without load balancing
2606 "root invalid (<reason>)" Invalid partition root
2607 "isolated invalid (<reason>)" Invalid isolated partition root
2608 ============================= =====================================
2609
2610 In the case of an invalid partition root, a descriptive string on
2611 why the partition is invalid is included within parentheses.
2612
efdf7532 2613 For a local partition root to be valid, the following conditions
8cbfdc24
WL
2614 must be met.
2615
efdf7532
WL
2616 1) The parent cgroup is a valid partition root.
2617 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2618 though it may contain offline CPUs.
2619 3) The "cpuset.cpus.effective" cannot be empty unless there is
8cbfdc24
WL
2620 no task associated with this partition.
2621
efdf7532
WL
2622 For a remote partition root to be valid, all the above conditions
2623 except the first one must be met.
8cbfdc24 2624
efdf7532
WL
2625 External events like hotplug or changes to "cpuset.cpus" or
2626 "cpuset.cpus.exclusive" can cause a valid partition root to
2627 become invalid and vice versa. Note that a task cannot be
2628 moved to a cgroup with empty "cpuset.cpus.effective".
8cbfdc24
WL
2629
2630 A valid non-root parent partition may distribute out all its CPUs
efdf7532
WL
2631 to its child local partitions when there is no task associated
2632 with it.
8cbfdc24 2633
efdf7532
WL
2634 Care must be taken to change a valid partition root to "member"
2635 as all its child local partitions, if present, will become
8cbfdc24
WL
2636 invalid causing disruption to tasks running in those child
2637 partitions. These inactivated partitions could be recovered if
2638 their parent is switched back to a partition root with a proper
efdf7532 2639 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
8cbfdc24
WL
2640
2641 Poll and inotify events are triggered whenever the state of
2642 "cpuset.cpus.partition" changes. That includes changes caused
2643 by write to "cpuset.cpus.partition", cpu hotplug or other
2644 changes that modify the validity status of the partition.
2645 This will allow user space agents to monitor unexpected changes
2646 to "cpuset.cpus.partition" without the need to do continuous
2647 polling.
90e92f2d 2648
efdf7532
WL
2649 A user can pre-configure certain CPUs to an isolated state
2650 with load balancing disabled at boot time with the "isolcpus"
2651 kernel boot command line option. If those CPUs are to be put
2652 into a partition, they have to be used in an isolated partition.
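
        As a sketch (the cgroup name and CPU numbers are illustrative,
        and the cpuset controller is assumed to be enabled), a local
        isolated partition could be set up like this::

          echo "2-3" > /sys/fs/cgroup/rt/cpuset.cpus
          echo isolated > /sys/fs/cgroup/rt/cpuset.cpus.partition
          cat /sys/fs/cgroup/rt/cpuset.cpus.partition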
2653
4ec22e9c 2654
Device controller
-----------------
2657
2658Device controller manages access to device files. It includes both
2659creation of new device files (using mknod), and access to the
2660existing device files.
2661
2662Cgroup v2 device controller has no interface files and is implemented
2663on top of cgroup BPF. To control access to device files, a user may
c0002d11
A
2664create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2665them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2666device file, corresponding BPF programs will be executed, and depending
2667on the return value the attempt will succeed or fail with -EPERM.
2668
2669A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2670bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2671access type (mknod/read/write) and device (type, major and minor numbers).
2672If the program returns 0, the attempt fails with -EPERM, otherwise it
2673succeeds.
2674
2675An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2676tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
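
As a hedged sketch (modeled loosely on that selftest, not a drop-in
replacement for it), a program that only permits access to /dev/null
(char 1:3) and rejects everything else could look like::

  /* SPDX-License-Identifier: GPL-2.0 */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("cgroup/dev")
  int allow_null_only(struct bpf_cgroup_dev_ctx *ctx)
  {
          /* Low 16 bits: device type, high 16 bits: access type. */
          short dev_type = ctx->access_type & 0xFFFF;

          if (dev_type == BPF_DEVCG_DEV_CHAR &&
              ctx->major == 1 && ctx->minor == 3)
                  return 1;       /* allow /dev/null */

          return 0;               /* everything else fails with -EPERM */
  }

  char _license[] SEC("license") = "GPL";

Such a program is then attached to the target cgroup using the
BPF_CGROUP_DEVICE attach type mentioned above.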


RDMA
----

The "rdma" controller regulates the distribution and accounting of
RDMA resources.

RDMA Interface Files
~~~~~~~~~~~~~~~~~~~~

  rdma.max
        A readwrite nested-keyed file that exists for all the cgroups
        except root. It describes the currently configured resource limits
        for an RDMA/IB device.

        Lines are keyed by device name and are not ordered.
        Each line contains a space-separated resource name and its
        configured limit that can be distributed.

        The following nested keys are defined.

          ========== =============================
          hca_handle Maximum number of HCA Handles
          hca_object Maximum number of HCA Objects
          ========== =============================

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=2 hca_object=2000
          ocrdma1 hca_handle=3 hca_object=max

  rdma.current
        A read-only file that describes current resource usage.
        It exists for all the cgroups except root.

        An example for mlx4 and ocrdma devices follows::

          mlx4_0 hca_handle=1 hca_object=20
          ocrdma1 hca_handle=1 hca_object=23

DMEM
----

The "dmem" controller regulates the distribution and accounting of
device memory regions. Because each memory region may have its own page
size, which does not have to be equal to the system page size, the units
are always bytes.

DMEM Interface Files
~~~~~~~~~~~~~~~~~~~~

  dmem.max, dmem.min, dmem.low
        A readwrite nested-keyed file that exists for all the cgroups
        except root. It describes the currently configured resource limit
        for a region.

        An example for xe follows::

          drm/0000:03:00.0/vram0 1073741824
          drm/0000:03:00.0/stolen max

        The semantics are the same as for the memory cgroup controller,
        and are calculated in the same way.

  dmem.capacity
        A read-only file that describes maximum region capacity.
        It only exists on the root cgroup. Not all memory can be
        allocated by cgroups, as the kernel reserves some for
        internal use.

        An example for xe follows::

          drm/0000:03:00.0/vram0 8514437120
          drm/0000:03:00.0/stolen 67108864

  dmem.current
        A read-only file that describes current resource usage.
        It exists for all the cgroups except root.

        An example for xe follows::

          drm/0000:03:00.0/vram0 12550144
          drm/0000:03:00.0/stolen 8650752

HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control group
and enforces the controller limit during page fault.

HugeTLB Interface Files
~~~~~~~~~~~~~~~~~~~~~~~

  hugetlb.<hugepagesize>.current
        Show current usage for "hugepagesize" hugetlb. It exists for all
        the cgroups except root.

  hugetlb.<hugepagesize>.max
        Set/show the hard limit of "hugepagesize" hugetlb usage.
        The default value is "max". It exists for all the cgroups except
        root.

  hugetlb.<hugepagesize>.events
        A read-only flat-keyed file which exists on non-root cgroups.

          max
                The number of allocation failures due to the HugeTLB limit

  hugetlb.<hugepagesize>.events.local
        Similar to hugetlb.<hugepagesize>.events but the fields in the file
        are local to the cgroup i.e. not hierarchical. The file modified
        event generated on this file reflects only the local events.

  hugetlb.<hugepagesize>.numa_stat
        Similar to memory.numa_stat, it shows the numa information of the
        hugetlb pages of <hugepagesize> in this cgroup. Only actively
        in-use hugetlb pages are included. The per-node values are in
        bytes.

Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the
other cgroup resources. The controller is enabled by the
CONFIG_CGROUP_MISC config option.

A resource can be added to the controller via enum misc_res_type{} in the
include/linux/misc_cgroup.h file and the corresponding name via
misc_res_name[] in the kernel/cgroup/misc.c file. The provider of the
resource must set its capacity prior to using the resource by calling
misc_cg_set_capacity().

Once a capacity is set, the resource usage can be updated using the charge
and uncharge APIs. All of the APIs to interact with the misc controller
are in include/linux/misc_cgroup.h.
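
For orientation, a kernel-side provider might use the API roughly as
follows; MISC_CG_RES_FOO is a hypothetical enumerator that would first
have to be added to enum misc_res_type::

  #include <linux/misc_cgroup.h>

  static int foo_init(void)
  {
          /* Advertised total; shows up as a line in misc.capacity. */
          return misc_cg_set_capacity(MISC_CG_RES_FOO, 128);
  }

  static struct misc_cg *foo_alloc_unit(void)
  {
          struct misc_cg *cg = get_current_misc_cg();

          /* Fails once the cgroup's misc.max limit would be exceeded. */
          if (misc_cg_try_charge(MISC_CG_RES_FOO, cg, 1)) {
                  put_misc_cg(cg);
                  return NULL;
          }
          return cg;      /* keep the reference until the unit is freed */
  }

  static void foo_free_unit(struct misc_cg *cg)
  {
          misc_cg_uncharge(MISC_CG_RES_FOO, cg, 1);
          put_misc_cg(cg);
  }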

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files. If
two misc resources (res_a and res_b) are registered, then:

  misc.capacity
        A read-only flat-keyed file shown only in the root cgroup. It shows
        miscellaneous scalar resources available on the platform along with
        their quantities::

          $ cat misc.capacity
          res_a 50
          res_b 10

  misc.current
        A read-only flat-keyed file shown in all cgroups. It shows
        the current usage of the resources in the cgroup and its children::

          $ cat misc.current
          res_a 3
          res_b 0

  misc.peak
        A read-only flat-keyed file shown in all cgroups. It shows the
        historical maximum usage of the resources in the cgroup and its
        children::

          $ cat misc.peak
          res_a 10
          res_b 8

  misc.max
        A read-write flat-keyed file shown in the non-root cgroups. It
        shows the allowed maximum usage of the resources in the cgroup and
        its children::

          $ cat misc.max
          res_a max
          res_b 4

        A limit can be set by::

          # echo res_a 1 > misc.max

        The limit can be set to max by::

          # echo res_a max > misc.max

        Limits can be set higher than the capacity value in the
        misc.capacity file.

  misc.events
        A read-only flat-keyed file which exists on non-root cgroups. The
        following entries are defined. Unless specified otherwise, a value
        change in this file generates a file modified event. All fields in
        this file are hierarchical.

          max
                The number of times the cgroup's resource usage was
                about to go over the max boundary.

  misc.events.local
        Similar to misc.events but the fields in the file are local to the
        cgroup i.e. not hierarchical. The file modified event generated on
        this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it is
used first, and stays charged to that cgroup until that resource is freed.
Migrating a process to a different cgroup does not move the charge to the
destination cgroup where the process has moved.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup's weight is dependent on its thread's
nice level.

For details of this mapping see the sched_prio_to_weight array in
kernel/sched/core.c (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
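
For example, with that scaling a nice 0 thread competes with an
effective weight of 100, a nice 19 thread (array entry 15) with a
weight of roughly 1.5, and a nice -20 thread (array entry 88761) with
a weight of roughly 8668.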


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources this implicit child node is taken into
account as if it was a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.
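
As a minimal illustration (error handling trimmed), a process can detach
into a new cgroup namespace and observe the virtualized view as follows;
it needs CAP_SYS_ADMIN in its user namespace::

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          if (unshare(CLONE_NEWCGROUP)) {
                  perror("unshare(CLONE_NEWCGROUP)");
                  return 1;
          }
          /* The cgroup the process was created in is now shown as "/". */
          execlp("cat", "cat", "/proc/self/cgroup", (char *)NULL);
          return 1;
  }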

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak potential system level information
to the isolated processes. For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered as system-data
and undesirable to expose to the isolated processes. cgroup namespace
can be used to restrict visibility of this path. For example, before
creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
process under the target cgroup namespace root.
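
For illustration, an external manager might attach a helper to another
process's cgroup namespace roughly like this (a sketch only, with error
handling reduced to return codes)::

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int join_cgroupns(pid_t pid)
  {
          char path[64];
          int fd, ret;

          snprintf(path, sizeof(path), "/proc/%d/ns/cgroup", (int)pid);
          fd = open(path, O_RDONLY);
          if (fd < 0)
                  return -1;

          /* Needs CAP_SYS_ADMIN in both user namespaces, see above. */
          ret = setns(fd, CLONE_NEWCGROUP);
          close(fd);
          return ret;
  }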


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.

The virtualization of the /proc/self/cgroup file combined with restricting
the view of the cgroup hierarchy by a namespace-private cgroupfs mount
provides a properly isolated cgroup view inside the container.


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
        Should be called for each bio carrying writeback data and
        associates the bio with the inode's owner cgroup and the
        corresponding request queue. This must be called after
        a queue (device) has been associated with the bio and
        before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
        Should be called for each data segment being written out.
        While this function doesn't care exactly when it's called
        during the writeback session, it's the easiest and most
        natural to call it as data segments are added to a bio.
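
As a rough, filesystem-agnostic sketch (names other than the two
functions above are illustrative), the annotation typically happens
where the writeback path builds and submits its bios::

  #include <linux/bio.h>
  #include <linux/writeback.h>

  static void example_submit_folio(struct writeback_control *wbc,
                                   struct folio *folio,
                                   struct block_device *bdev,
                                   sector_t sector)
  {
          struct bio *bio = bio_alloc(bdev, 1, REQ_OP_WRITE, GFP_NOFS);

          bio->bi_iter.bi_sector = sector;

          /*
           * Bind the bio to the inode's owning cgroup: after the device
           * has been set, before submission.
           */
          wbc_init_bio(wbc, bio);

          bio_add_folio_nofail(bio, folio, folio_size(folio), 0);

          /* Charge the written bytes to that cgroup. */
          wbc_account_cgroup_owner(wbc, folio, folio_size(folio));

          submit_bio(bio);
  }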

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options are supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but more importantly
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What usually is
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers and those controllers
ended up implementing different ways to ignore such situations but
much more importantly it blurred the line between API exposed to
individual applications and system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces while the kernel
inadvertently exposed and became locked into such constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroups which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism further complicating
the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.