path: root/engines/exec.c
Age  Commit message  Author
2022-02-20  Spelling and grammar fixes  (Ville Skyttä)
Signed-off-by: Ville Skyttä <ville.skytta@upcloud.com>
2021-07-26  engines/exec: Code cleanup to remove leaks  (Erwan Velu)
As per the Coverity reports, there were some issues in my code:
- Some structures were not properly freed before returning.
- Some file descriptors were not properly closed.
- Testing with 'if (!int)' isn't a good way to test whether the value is negative.

Signed-off-by: Erwan Velu <erwanaliasr1@gmail.com>
2021-07-25  engines/exec: style cleanups  (Jens Axboe)
No functional changes in this patch. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-06-29  engines: Adding exec engine  (Erwan Velu)
When performing benchmarks with fio, some users need to execute tasks in parallel with the job execution. A typical use case would be observing performance/power metrics. Several implementations were possible:
- adding an exec_run in addition to the existing exec_{pre|post}run
- implementing performance/power metrics in fio
- adding an exec engine

1°) Adding an exec_run
This was my first intention, but I quickly noticed that exec_{pre|post}run are executed for each 'numjob'. For performance/power monitoring, it doesn't make sense to spawn an instance for each thread.

2°) Implementing performance/power metrics
This is possible but would require a lot of work to maintain this part of fio, while 3rd-party tools already take care of that perfectly.

3°) Adding an engine
Adding an engine lets users define when and how many instances of the program they want. In the provided example, a single monitoring job is spawned at the same time as the benchmark job, which may itself be composed of several worker threads. A stonewall barrier is used to define which jobs must run together (monitoring / benchmark).

The engine has four parameters:
- program: name of the program to run
- arguments: arguments to pass to the program
- grace_time: duration between SIGTERM and SIGKILL
- std_redirect: redirect std{err|out} to dedicated files

Arguments can contain special variables that are expanded before execution:
- %r is replaced by the job duration in seconds
- %n is replaced by the job name

During the program execution, stdout and stderr are redirected to files if the std_redirect option is set (the default):
- stdout: <job_name>.stdout
- stderr: <job_name>.stderr

If the executed program produces a well-formed stdout, the stdout file can be parsed after the fio run by other tools like CI jobs or graphing tools.

A sample job is provided here to show how this can be used. It runs the CPU engine twice with two different CPU modes (noop vs qsort).
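Based on the options described above, a job file pairing a monitoring process with a CPU benchmark might look like the following sketch. The option names follow the commit message; the tool path, arguments, and workload values are illustrative and are not the sample job shipped with fio:

```ini
; Illustrative sketch only: /usr/bin/mytool.sh and the workload
; parameters are stand-ins, not part of the actual fio sample job.
[global]
time_based
runtime=30

[monitoring]
ioengine=exec
program=/usr/bin/mytool.sh
arguments=--duration %r
grace_time=1
std_redirect=1

[benchmark]
ioengine=cpuio
cpuload=100
```

Since %r expands to the job duration, the monitoring tool here is asked to stop on its own when the 30-second job ends; its output would land in monitoring.stdout and monitoring.stderr.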
For each benchmark, the output of turbostat is saved for later analysis. After the fio run, it is possible to compare the impact of the two modes on the CPU frequency and power consumption. This can easily be extended to any other usage that needs to analyze the behavior of the host during some jobs.

About the implementation, the exec engine forks:
- the child does an execvp() of the program
- the parent, fio, monitors the time spent in the job

Once the time is over, the program is sent a SIGTERM followed by a SIGKILL to ensure it will not run _after_ the job is completed. This mechanism is required because:
- not all programs can be controlled properly
- it is a last-resort protection if the program goes crazy

The delay between the two signals is controlled by the grace_time option; the default is 1 second. If the program's duration can be limited, the %r variable can be used in the arguments to ask the program to stop _before_ the job finishes, e.g.:

program=/usr/bin/mytool.sh
arguments=--duration %r

Signed-off-by: Erwan Velu <e.velu@criteo.com>