path: root/btt/trace_queue.c
Age | Commit message | Author
2012-02-01 | Fix compiler warnings | Jens Axboe
One was a real bug: i_time was assigned twice instead of c_time, which was left uninitialized.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
2008-05-09 | Added S2G times + fixed up -X output to include X2X | Alan D. Brunelle
Including Q2Q, Q2G, S2G, G2I, Q2M, I2D, M2D, D2C, Q2C. S2G is part of Q2G, and shows the number of times we had to sleep to get a request. Ignored 0-byte I/Os - coming from barrier I/Os...
2008-02-13 | Cleanups: Fixed IOPs in btt left over at end of run | Alan D. Brunelle
o Using valgrind, determined we had Q IOPs left over that weren't used.
  Added "all" list, and then deleted these at end.
o Removed old debug stuff (COUNT_IOS, DEBUG, ...)
o Fixed a bunch of white space at end of lines.

Signed-off-by: Alan D. Brunelle <alan.brunelle@hp.com>
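As a rough illustration of the "all" list idea above, here is a minimal sketch (hypothetical names, not the actual btt code): every allocated iop is also threaded onto a global list, so whatever later traces never consumed can still be released at the end of the run. A real version would also unlink iops freed during the run; only the end-of-run sweep is shown.

#include <stdlib.h>

struct io {
	struct io *all_next;		/* link on the global "all" list */
	/* ... per-IO trace fields ... */
};

static struct io *all_head;		/* every iop ever allocated */

static struct io *io_alloc(void)
{
	struct io *iop = calloc(1, sizeof(*iop));

	if (iop) {
		iop->all_next = all_head;	/* prepend to "all" list */
		all_head = iop;
	}
	return iop;
}

/* End of run: anything still on the list was never consumed - free it. */
static void io_free_all(void)
{
	while (all_head) {
		struct io *iop = all_head;

		all_head = iop->all_next;
		free(iop);
	}
}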
2007-12-10 | Separated out g/i/m trace handling. | Alan D. Brunelle
Also separated out DM-device calculations.
2007-11-13 | Added active requests at Q information. | Alan D. Brunelle
An important consideration when analyzing block IO schedulers is to know how many requests the scheduler has to work with. The metric provided in this section details how many requests (on average) were being held by the IO scheduler when an incoming IO request was being handled. To determine this, btt keeps track of how many Q requests came in, and subtracts requests that have been issued (D).

Sample:

==================== Active Requests At Q Information ====================

       DEV |  Avg Reqs @ Q
---------- | -------------
 ( 65, 80) |          12.0
 ( 65,240) |          16.9
 ( 65,112) |          13.3
 ( 66, 64) |          32.8
 ( 66, 80) |          21.5
 ( 65, 96) |           8.6
 ( 66, 16) |          16.2
 ( 65, 64) |          20.4
 ( 66, 96) |          18.3
 ( 65,176) |          17.4
 ( 66, 32) |          13.6
 ( 65,144) |          13.4
 ( 66,  0) |          21.4
 ( 65,192) |          19.4
 ( 66,128) |          18.4
 ( 66,144) |          16.2
 ( 65,128) |           8.0
 ( 66,112) |          44.2
---------- | -------------
   Overall |  Avgs Reqs @ Q
   Average |          17.4

Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
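A minimal sketch of that accounting, with hypothetical names rather than btt's real structures: each Q trace samples and then bumps a per-device pending count, each D trace drops it, and the samples are averaged for the report.

struct dev_qinfo {
	unsigned long cur_pending;	/* Qs seen minus Ds seen */
	unsigned long long sum_at_q;	/* pending counts sampled at Q time */
	unsigned long nsamples;		/* number of Q traces sampled */
};

static void handle_queue(struct dev_qinfo *dip)
{
	dip->sum_at_q += dip->cur_pending;	/* sample before counting this Q */
	dip->nsamples++;
	dip->cur_pending++;
}

static void handle_issue(struct dev_qinfo *dip)
{
	if (dip->cur_pending)
		dip->cur_pending--;
}

/* "Avg Reqs @ Q" for one device, as in the sample output above. */
static double avg_reqs_at_q(const struct dev_qinfo *dip)
{
	return dip->nsamples ? (double)dip->sum_at_q / dip->nsamples : 0.0;
}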
2007-11-08 | Fixed REMAP to update Q2A & fixed #Q calculations | Alan D. Brunelle
2007-09-10 | Major revamping (ver 2.0) | Alan D. Brunelle
After a lot of fighting with maintaining a tree-styled design (each trace having its own node), it was just getting too cumbersome to work in all circumstances. Taking a clue from blkparse itself, I decided to just keep track of IOs at queue time, and update fields based upon later traces. The attached (large) patch works much faster, handles larger test cases with fewer failures, and is managing some pretty large jobs I'm working on (large Oracle-based DB analysis - 32-way box w/ lots of storage).

I've also added a Q2Q seek distance feature - it has come in handy when comparing results of IO scheduler choice: we can see what the incoming IO seek distances are (at queue time), and then see how the scheduler itself manages things (via merges & sorting) by looking at the D2D seek distances generated.

As noted in the subject, I arbitrarily bumped this to version 2.00 as the innards are so different. The documentation (btt/doc/btt.tex) has been updated to reflect some minor output changes.

I also fixed a bug dealing with process name notification: there was a problem that if a new PID came up with a name that was previously seen, btt wouldn't keep track of it right. [When running with Oracle, a lot of processes have the same name but different PIDs, of course.]

Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
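A rough sketch of the queue-time bookkeeping and the Q2Q seek distance idea, using hypothetical names (this is not the code in this file): one record is allocated per Q trace and later traces merely fill in its timestamps, while the per-device end sector of the previous Q gives the Q2Q seek distance.

#include <stdint.h>
#include <stdlib.h>

struct q_io {
	uint64_t sector, nsectors;		/* location and size of this IO */
	uint64_t q_time, g_time, i_time,	/* filled in as later traces arrive */
		 d_time, c_time;
};

struct dev_state {
	uint64_t last_q_end;			/* end sector of the previous Q */
	uint64_t q2q_total;			/* accumulated |seek distance| */
	uint64_t nq;				/* number of Qs seen */
};

static struct q_io *handle_q_trace(struct dev_state *dev, uint64_t sector,
				   uint64_t nsectors, uint64_t now)
{
	struct q_io *iop = calloc(1, sizeof(*iop));

	if (!iop)
		return NULL;
	iop->sector = sector;
	iop->nsectors = nsectors;
	iop->q_time = now;			/* G/I/M/D/C traces fill the rest */

	if (dev->nq) {				/* Q2Q seek distance */
		int64_t dist = (int64_t)(sector - dev->last_q_end);
		dev->q2q_total += dist < 0 ? -dist : dist;
	}
	dev->last_q_end = sector + nsectors;
	dev->nq++;
	return iop;
}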
2007-04-13 | A couple of weeks ago Ming Zhang had noted that btt was using tremendous | Alan D. Brunelle
amounts of memory to accomplish a run. After looking at the code and doing some testing, I determined the cause - please find a patch against the latest tree that seems to fix the problem for me...

Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-02-26 | Add Q and D histograms (based upon IO size) | Alan D. Brunelle
Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2007-02-06 | [PATCH]: btt - major fixes and speed improvements | Alan D. Brunelle
From: Alan D. Brunelle <Alan.Brunelle@hp.com>

Lots of changes to how we handle traces - adds robustness & speed.

This large patch contains the following changes to the trace handling aspects of btt:

1. Use larger buffers for output options.
2. Use mmap to handle the input of trace data.
3. More precise btt statistics are output at the end.
4. Added in (under DEBUG) the display of unhandled traces. I was running into the problem where traces were not being connected, and the rb trees would get quite large. This would slow things down considerably. (See below for details on why traces weren't being handled.)
5. Sprinkled some ASSERTs (under DEBUG).
6. Added a new btt-specific trace type: "links" - since 'A' (remaps) contain two separate pieces of information, I broke them up into a link and a remap trace. [Thus, it is easy to find either end of the remap.]
7. Added in the notion of retries of completes (and requeues). I'm finding some discrepancies in the time stamps; in order to make btt handle these better, I've added the notion of keeping the trace around for a bit, to see if it gets linked up later.
8. Separated trace streams into: simple IOs, and remapped IOs.
9. Fixed up D2C averages - Q2I + I2D + D2C should equal Q2C averages.

----------------------------------------------------------------------------

I do not understand why it is so, but I am seeing two 'C' (complete) traces for the same IO track at times. The sequence number is different (+1 for the second one), and the time stamps are different (100's of microseconds apart). I'm investigating this.

At least on an IA64, I am seeing time inconsistencies amongst CPUs on very heavy loads (48 disks, 4 CPUs, almost 300 million traces). I find the 'D' (issue) and 'C' (complete) traces coming out ahead of the associated 'I' (insert) and 'M' (merge) traces. It would be good to get this fixed in the kernel, but I figure it is also goodness to attempt to account for it in post-processing as well.

----------------------------------------------------------------------------

This work was done in order to handle some of these large data sets, and I've found that the performance is reasonable - here are some stats for very large files (the largest of which used to take well over 12 minutes, now it takes about 5 1/2 minutes - and a lot of that is just getting the 18GiB of data read in):

 Size      Real      User    System
-----  --------  --------  --------
 7GiB  123.445s   80.188s   11.392s
10GiB  179.148s  137.456s   16.680s
13GiB  237.561s  156.992s   21.968s
16GiB  283.262s  187.468s   26.748s
18GiB  336.345s  225.084s   31.200s

Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
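As an illustration of item 2 (reading the trace data through mmap rather than buffered reads), here is a small self-contained sketch. The record layout is a simplified stand-in, not the real struct blk_io_trace, and the function name is made up; the point is only the map-once, walk-the-buffer pattern.

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

struct trace_rec {			/* simplified fixed-size stand-in */
	uint64_t time;
	uint64_t sector;
	uint32_t bytes;
	uint32_t action;
};

static int process_trace_file(const char *path)
{
	struct stat st;
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return -1;
	if (fstat(fd, &st) < 0) {
		close(fd);
		return -1;
	}

	void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		close(fd);
		return -1;
	}

	const struct trace_rec *t = map;
	size_t n = st.st_size / sizeof(*t);

	for (size_t i = 0; i < n; i++)	/* walk the mapped records in place */
		printf("%llu: act 0x%x sec %llu\n",
		       (unsigned long long)t[i].time, (unsigned)t[i].action,
		       (unsigned long long)t[i].sector);

	munmap(map, st.st_size);
	close(fd);
	return 0;
}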
2006-12-01 | [PATCH] BTT patch: (2/3) per-IO stream output | Jens Axboe
Two major updates:

(1) Added in some robustness - can accept out-of-order traces, and can "orphan" unfinished IO streams.

(2) Added in the ability to put IO streams to a file, sending Q-to-C traces on a per-IO basis.

The additional robustness comes at some expense (performance), and so I will look into that next. (Perhaps see what those Judy trees buy us... :-) )

Signed-off-by: Alan D. Brunelle <Alan.Brunelle@hp.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
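A small sketch of what per-IO stream output could look like (hypothetical names and record format, not btt's actual output): once an IO completes, each stage it passed through is written to the stream file as one Q-to-C sequence, with times relative to the Q trace.

#include <stdint.h>
#include <stdio.h>

struct io_stages {
	uint64_t sector;				/* start sector of the IO */
	uint64_t q_time, i_time, d_time, c_time;	/* 0 if the stage was never seen */
};

/* Emit one completed IO's Q-to-C stream, times relative to Q. */
static void emit_io_stream(FILE *ofp, const struct io_stages *iop)
{
	unsigned long long sec = iop->sector;

	fprintf(ofp, "%llu Q 0\n", sec);
	if (iop->i_time)
		fprintf(ofp, "%llu I %llu\n", sec,
			(unsigned long long)(iop->i_time - iop->q_time));
	if (iop->d_time)
		fprintf(ofp, "%llu D %llu\n", sec,
			(unsigned long long)(iop->d_time - iop->q_time));
	fprintf(ofp, "%llu C %llu\n", sec,
		(unsigned long long)(iop->c_time - iop->q_time));
}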