.. _psi:

================================
PSI - Pressure Stall Information
================================

:Date: April, 2018
:Author: Johannes Weiner <hannes@cmpxchg.org>

When CPU, memory or IO devices are contended, workloads experience
latency spikes, throughput losses, and run the risk of OOM kills.

Without an accurate measure of such contention, users are forced to
either play it safe and under-utilize their hardware resources, or
roll the dice and frequently suffer the disruptions resulting from
excessive overcommit.

The psi feature identifies and quantifies the disruptions caused by
such resource crunches and the time impact they have on complex
workloads or even entire systems.

Having an accurate measure of productivity losses caused by resource
scarcity aids users in sizing workloads to hardware--or provisioning
hardware according to workload demand.

As psi aggregates this information in realtime, systems can be managed
dynamically using techniques such as load shedding, migrating jobs to
other systems or data centers, or strategically pausing or killing low
priority or restartable batch jobs.

This allows maximizing hardware utilization without sacrificing
workload health or risking major disruptions such as OOM kills.

Pressure interface
==================

Pressure information for each resource is exported through the
respective file in /proc/pressure/ -- cpu, memory, and io.

The format is as such::

  some avg10=0.00 avg60=0.00 avg300=0.00 total=0
  full avg10=0.00 avg60=0.00 avg300=0.00 total=0

45The "some" line indicates the share of time in which at least some
46tasks are stalled on a given resource.
47
48The "full" line indicates the share of time in which all non-idle
49tasks are stalled on a given resource simultaneously. In this state
50actual CPU cycles are going to waste, and a workload that spends
51extended time in this state is considered to be thrashing. This has
52severe impact on performance, and it's useful to distinguish this
53situation from a state where some tasks are stalled but the CPU is
54still doing productive work. As such, time spent in this subset of the
55stall state is tracked separately and exported in the "full" averages.
56
CPU full is undefined at the system level, but has been reported
since 5.13, so it is set to zero for backward compatibility.

The ratios (in %) are tracked as recent trends over ten, sixty, and
three hundred second windows, which gives insight into short term events
as well as medium and long term trends. The total absolute stall time
(in us) is tracked and exported as well, to allow detection of latency
spikes which wouldn't necessarily make a dent in the time averages,
or to average trends over custom time frames.
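
As an illustration, a minimal sketch of reading these values from
userspace could look like the following; it parses only the "some"
line of /proc/pressure/memory and keeps error handling brief::

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/proc/pressure/memory", "r");
          float avg10, avg60, avg300;
          unsigned long long total;

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          /* First line: some avg10=... avg60=... avg300=... total=... */
          if (fscanf(f, "some avg10=%f avg60=%f avg300=%f total=%llu",
                     &avg10, &avg60, &avg300, &total) == 4)
                  printf("some: %.2f%% / %.2f%% / %.2f%%, %llu us total\n",
                         avg10, avg60, avg300, total);
          fclose(f);
          return 0;
  }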

Monitoring for pressure thresholds
==================================

Users can register triggers and use poll() to be woken up when resource
pressure exceeds certain thresholds.

A trigger describes the maximum cumulative stall time over a specific
time window, e.g. 100ms of total stall time within any 500ms window to
generate a wakeup event.

To register a trigger, the user has to open the psi interface file
under /proc/pressure/ representing the resource to be monitored and
write the desired threshold and time window. The open file descriptor
should be used to wait for trigger events using select(), poll() or
epoll(). The following format is used::

  <some|full> <stall amount in us> <time window in us>

For example, writing "some 150000 1000000" into /proc/pressure/memory
would add a 150ms threshold for partial memory stalls measured within
a 1sec time window. Writing "full 50000 1000000" into /proc/pressure/io
would add a 50ms threshold for full io stalls measured within a 1sec
time window.

Triggers can be set on more than one psi metric, and more than one
trigger for the same psi metric can be specified. However, each trigger
needs its own file descriptor so that it can be polled separately from
the others; therefore, a separate open() syscall should be made for
each trigger, even when opening the same psi interface file. Write
operations to a file descriptor that already has a psi trigger attached
will fail with EBUSY.
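
For instance, a sketch of registering two independent triggers on the
same resource might look like this; each trigger gets its own file
descriptor, which can then be polled separately as in the usage example
below::

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          const char trig1[] = "some 150000 1000000";  /* 150ms / 1s */
          const char trig2[] = "full 50000 1000000";   /*  50ms / 1s */
          int fd1 = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
          int fd2 = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);

          if (fd1 < 0 || fd2 < 0)
                  return 1;
          /* One trigger per fd; a second write to the same fd
           * would fail with EBUSY. */
          if (write(fd1, trig1, strlen(trig1) + 1) < 0 ||
              write(fd2, trig2, strlen(trig2) + 1) < 0) {
                  perror("write");
                  return 1;
          }
          /* ... poll fd1 and fd2 for POLLPRI ... */
          close(fd2);  /* closing an fd de-registers its trigger */
          close(fd1);
          return 0;
  }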

Monitors activate only when the system enters a stall state for the
monitored psi metric and deactivate upon exit from the stall state.
While the system is in the stall state, psi signal growth is monitored
at a rate of 10 times per tracking window.

The kernel accepts window sizes ranging from 500ms to 10s; therefore
the minimum monitoring update interval is 50ms and the maximum is 1s.
The lower limit is set to prevent overly frequent polling. The upper
limit is chosen as a high enough number after which monitors are most
likely not needed and psi averages can be used instead.

When activated, a psi monitor stays active for at least the duration
of one tracking window to avoid repeated activations/deactivations
when the system is bouncing in and out of the stall state.

Notifications to userspace are rate-limited to one per tracking window.

The trigger will de-register when the file descriptor used to define the
trigger is closed.

Userspace monitor usage example
===============================

::

  #include <errno.h>
  #include <fcntl.h>
  #include <stdio.h>
  #include <poll.h>
  #include <string.h>
  #include <unistd.h>

  /*
   * Monitor memory partial stall with 1s tracking window size
   * and 150ms threshold.
   */
  int main() {
          const char trig[] = "some 150000 1000000";
          struct pollfd fds;
          int n;

          fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
          if (fds.fd < 0) {
                  printf("/proc/pressure/memory open error: %s\n",
                         strerror(errno));
                  return 1;
          }
          fds.events = POLLPRI;

          /* Writing the trigger string registers the trigger on this fd. */
          if (write(fds.fd, trig, strlen(trig) + 1) < 0) {
                  printf("/proc/pressure/memory write error: %s\n",
                         strerror(errno));
                  return 1;
          }

          printf("waiting for events...\n");
          while (1) {
                  n = poll(&fds, 1, -1);
                  if (n < 0) {
                          printf("poll error: %s\n", strerror(errno));
                          return 1;
                  }
                  if (fds.revents & POLLERR) {
                          printf("got POLLERR, event source is gone\n");
                          return 0;
                  }
                  /* POLLPRI signals that the trigger threshold was crossed. */
                  if (fds.revents & POLLPRI) {
                          printf("event triggered!\n");
                  } else {
                          printf("unknown event received: 0x%x\n", fds.revents);
                          return 1;
                  }
          }

          return 0;
  }

Cgroup2 interface
=================

In a system with a CONFIG_CGROUPS=y kernel and the cgroup2 filesystem
mounted, pressure stall information is also tracked for tasks grouped
into cgroups. Each subdirectory in the cgroupfs mountpoint contains
cpu.pressure, memory.pressure, and io.pressure files; the format is
the same as the /proc/pressure/ files.

Per-cgroup psi monitors can be specified and used the same way as
system-wide ones.
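
For example, a sketch of registering a trigger on a cgroup's
memory.pressure file might look like the following; the mount point
/sys/fs/cgroup and the cgroup name "workload" are illustrative
assumptions, not mandated paths::

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          /* Hypothetical cgroup path; adjust to the actual mountpoint
           * and cgroup name on the system. */
          const char path[] = "/sys/fs/cgroup/workload/memory.pressure";
          const char trig[] = "some 100000 1000000";  /* 100ms / 1s */
          int fd = open(path, O_RDWR | O_NONBLOCK);

          if (fd < 0)
                  return 1;
          if (write(fd, trig, strlen(trig) + 1) < 0)
                  return 1;
          /* ... poll fd for POLLPRI as in the system-wide example ... */
          close(fd);  /* closing the fd removes the trigger */
          return 0;
  }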