path: root/btt/trace_requeue.c
2008-01-03  Fix Q counts during requeue and merges.  (Dave Boutcher)
It looks to me like btt doesn't correctly keep track of the number of requests currently in the queue for a device. n_act_q gets incremented in trace_queue and decremented in trace_issue, but I think it also needs to get updated in trace_merge and trace_requeue. The one thing I'm not sure about is whether we want r_iop->dip->n_qs++ in the new handle_requeue routine. The following patch makes the "active requests at Q" count a little more sane for me. This is against git as of yesterday. Signed-off-by: Alan D. Brunelle <>
2007-09-10  Major revamping (ver 2.0)  (Alan D. Brunelle)
After a lot of fighting with maintaining a tree-styled design (each trace having its own node), it was just getting too cumbersome to work in all circumstances. Taking a clue from blkparse itself, I decided to just keep track of IOs at queue time, and update fields based upon later traces. The attached (large) patch works much faster, handles larger test cases with fewer failures, and is managing some pretty large jobs I'm working on (large Oracle-based DB analysis - 32-way box w/ lots of storage).

I've also added a Q2Q seek distance feature - it's come in handy when comparing results of IO scheduler choice: we can see what the incoming IO seek distances are (at queue time), and then see how the scheduler itself manages things (via merges & sorting) by looking at the D2D seek distances generated.

As noted in the subject, I arbitrarily bumped this to version 2.00 as the innards are so different. The documentation (btt/doc/btt.tex) has been updated to reflect some minor output changes.

I also fixed a bug dealing with process name notification: there was a problem that if a new PID came up with a name that was previously seen, btt wouldn't keep track of it right. [When running with Oracle, a lot of processes have the same name but different PIDs, of course.]

Signed-off-by: Jens Axboe <>
2007-04-13  Fix btt's excessive memory usage  (Alan D. Brunelle)
A couple of weeks ago Ming Zhang had noted that btt was using tremendous amounts of memory to accomplish a run. After looking at the code, and doing some testing, I determined the cause - please find a patch to the latest tree that seems to fix the problem for me... Signed-off-by: Alan D. Brunelle <> Signed-off-by: Jens Axboe <>
2007-02-06  [PATCH]: btt - major fixes and speed improvements  (Alan D. Brunelle)
From: Alan D. Brunelle <>

Lots of changes to how we handle traces - adds robustness & quicker runs. This large patch contains the following changes to the trace handling aspects of btt:

1. Use larger buffers for output options.
2. Use mmap to handle the input of trace data.
3. More precise btt statistics are output at the end.
4. Added in (under DEBUG) the display of unhandled traces. I was running into the problem where traces were not being connected, and the rb trees would get quite large. This would slow things down considerably. (See below for details on why traces weren't being handled.)
5. Sprinkled some ASSERTs (under DEBUG).
6. Added a new btt-specific trace type: "links" - since 'A' (remaps) contain two separate pieces of information, I broke them up into a link and a remap trace. [Thus, it is easy to find either end of the remap.]
7. Added in the notion of retries of completes (and requeues). I'm finding some discrepancies in the time stamps; in order to make btt handle these better, I've added the notion of keeping the trace around for a bit, to see if it gets linked up later.
8. Separated trace streams into: simple IOs, and remapped IOs.
9. Fixed up D2C averages - Q2I + I2D + D2C should equal Q2C averages.

----------------------------------------------------------------------------
I do not understand why it is so, but I am seeing two 'C' (complete) traces for the same IO at times. The sequence number is different (+1 for the second one), and the time stamps are different (100's of microseconds apart). I'm investigating this.

At least on an IA64, I am seeing time inconsistencies amongst CPUs on very heavy loads (48 disks, 4 CPUs, almost 300 million traces): I find the 'D' (issue) and 'C' (complete) traces coming out ahead of the associated 'I' (insert) and 'M' (merge) traces. It would be good to get this fixed in the kernel, but I figure it is also goodness to attempt to account for it in post-processing as well.
----------------------------------------------------------------------------
This work was done in order to handle some of these large data sets, and I've found that the performance is reasonable - here are some stats for very large files (the largest of which used to take well over 12 minutes; now it takes about 5 1/2 minutes - and a lot of that is just getting the 18GiB of data read in):

  Size   Real      User      System
  -----  --------  --------  -------
   7GiB  123.445s   80.188s  11.392s
  10GiB  179.148s  137.456s  16.680s
  13GiB  237.561s  156.992s  21.968s
  16GiB  283.262s  187.468s  26.748s
  18GiB  336.345s  225.084s  31.200s

Signed-off-by: Alan D. Brunelle <>
Signed-off-by: Jens Axboe <>
2006-12-01  [PATCH] BTT patch: (3/3) time bounded trace analysis  (Jens Axboe)
Added in -t and -T options to allow bounding of traces analyzed. Be forewarned: this can result in some excessive numbers of orphaned traces (partial IO streams before the -t time and after the -T time won't be analyzed). Signed-off-by: Alan D. Brunelle <> Signed-off-by: Jens Axboe <>
2006-12-01  [PATCH] BTT patch: (2/3) per-IO stream output  (Jens Axboe)
Two major updates: (1) Added in some robustness - can accept out-of-order traces, and can "orphan" unfinished IO streams. (2) Added in the ability to put IO streams to a file, sending Q-to-C traces on a per-IO basis. The additional robustness comes at some expense (performance), and so I will look into that next. (Perhaps see what those Judy trees buy us... :-) ) Signed-off-by: Alan D. Brunelle <> Signed-off-by: Jens Axboe <>