I/O schedulers and their impact on performance
(part 2)
For details about I/O schedulers (what they are and how they work), refer to the previous article.
My current setup:
- Motorola Moto G
- memory chipset: eMMC 8GB SanDisk (REV=06 PRV=07 TYPE=17)
- CM11-20140517-Nightly
- RenderKernel r17
- CPU frequency: 192-1190 MHz
- governor: intellidemand
Benchmark used: AndroBench v3.4
What is being tested?
- Sequential Read (MB/s)
- Sequential Write (MB/s)
- Random Read (IOPS)
- Random Write (IOPS)
==> Using AndroBench v3.4 default settings
==> Each available scheduler is tested 3 times:
- fiops
- noop
- bfq
- cfq
- deadline
- zen
- vr
- row
- sio
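Before each run, the active scheduler has to be switched. On Linux (and so on Android), the available schedulers are listed in the block device's sysfs queue directory, with the active one in square brackets. The snippet below is a small sketch of how to read that format; the sysfs path for the Moto G's eMMC is assumed to be `/sys/block/mmcblk0/queue/scheduler`, and `parse_schedulers` is a helper name I made up for illustration.

```python
# Sketch: parsing the sysfs scheduler line, e.g. "noop deadline [cfq] fiops".
# The path /sys/block/mmcblk0/queue/scheduler is an assumption for this device.
def parse_schedulers(sysfs_line: str):
    """Return (available, active) from a sysfs scheduler line."""
    available = []
    active = None
    for name in sysfs_line.split():
        if name.startswith("[") and name.endswith("]"):
            active = name[1:-1]      # the bracketed entry is the active one
            available.append(active)
        else:
            available.append(name)
    return available, active

avail, active = parse_schedulers("noop deadline [cfq] fiops bfq")
print(avail)   # ['noop', 'deadline', 'cfq', 'fiops', 'bfq']
print(active)  # cfq
```

Switching is then just a write as root, e.g. `echo fiops > /sys/block/mmcblk0/queue/scheduler`.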
Results :
RAW data: the table displays the mean result (over 3 runs) and the related standard deviation, to show how (in)consistent some results are.
IO scheduler | seq read (MB/s) | seq write (MB/s) | random read (IOPS) | random write (IOPS) |
fiops    | 62.76 ± 3.39 | 15.61 ± 2.77 | 1379.42 ± 21.64 | 236.48 ± 17.15 |
noop     | 59.34 ± 3.51 | 13.43 ± 0.42 | 1329.92 ± 44.79 | 231.53 ± 4.30  |
bfq      | 54.62 ± 1.53 | 12.12 ± 0.49 | 1320.23 ± 25.45 | 231.39 ± 2.11  |
cfq      | 55.31 ± 0.59 | 14.37 ± 0.98 | 1322.95 ± 1.22  | 226.21 ± 14.56 |
deadline | 53.37 ± 5.51 | 13.18 ± 1.14 | 1335.51 ± 26.91 | 229.98 ± 6.46  |
zen      | 57.54 ± 1.18 | 13.28 ± 1.93 | 1383.16 ± 39.88 | 232.40 ± 14.18 |
vr       | 53.91 ± 1.22 | 14.54 ± 1.15 | 1352.52 ± 22.62 | 226.86 ± 1.95  |
row      | 57.88 ± 0.53 | 13.66 ± 0.90 | 1351.85 ± 38.84 | 220.26 ± 14.24 |
sio      | 58.32 ± 1.27 | 13.26 ± 0.49 | 1375.62 ± 18.06 | 229.29 ± 8.30  |
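The mean ± stdev columns above come from 3 runs each. As a quick sketch of how such numbers are computed, the snippet below uses Python's `statistics` module with a sample standard deviation (n-1). The three run values are hypothetical, NOT the actual raw benchmark numbers (only mean and stdev were published).

```python
# Mean and sample standard deviation over 3 benchmark runs.
# The run values below are hypothetical placeholders.
import statistics

runs = [60.0, 62.0, 64.0]        # hypothetical seq-read MB/s from 3 runs
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)   # sample stdev (n-1)
print(f"{mean:.2f} \u00b1 {stdev:.2f}")  # 62.00 ± 2.00
```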
The following graphs display sequential and random read/write results per scheduler.
The higher, the better.
As we can see, results do not differ much from one scheduler to another (differences in random I/O are insignificant).
Still, fiops sits a bit above the others (zen, sio, noop and row are not bad either!).
To better separate the good from the bad, I took the median result as 100% (unlike the mean, the median is not skewed by extreme values).
==> Remember: this is a delta% against the median (100%), so a small difference looks big (mostly because of the chosen scale).
How to read the graph? With the median value as 100%:
- fiops scores 109% in sequential read, 116% in sequential write, 102% in random read and 103% in random write
- bfq scores 94% in sequential read, 90% in sequential write, 97% in random read and 100% in random write
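The delta%-vs-median normalisation can be reproduced directly from the sequential-read means in the table above; this is just a sketch of the calculation, not the author's script:

```python
# Delta% against the median, using the sequential-read means from the table.
import statistics

seq_read = {
    "fiops": 62.76, "noop": 59.34, "bfq": 54.62, "cfq": 55.31,
    "deadline": 53.37, "zen": 57.54, "vr": 53.91, "row": 57.88, "sio": 58.32,
}
median = statistics.median(seq_read.values())            # 57.54 (zen's mean)
delta = {s: round(100 * v / median) for s, v in seq_read.items()}
print(delta["fiops"])  # 109 -> matches the 109% quoted above
```

The same formula applied to the sequential-write, random-read and random-write columns yields fiops's 116%, 102% and 103%.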
What does that show?
- fiops seems to be the best here
- zen, noop, sio and row are quite similar
- cfq and vr are behind in sequential read but better in sequential write
- deadline and bfq are the worst here
==> These results are not gospel; they may differ with other tests/benchmarks AND devices.
(I may test SQLite insert, update and delete later; perhaps those will change the results?)
What are the limits?
- Lack of accuracy with only 3 tests per scheduler (and high dispersion in some results)
- These results are device-, kernel-, ROM- AND eMMC-dependent (the Moto G ships with at least 4 different memory chips)
- Delta% against the median is not the best way to compare, but it was the easiest here (I could not put IOPS and MB/s on the same chart because of scaling)
Don't forget to read the other I/O performance-related posts: