Linux RAID speeds measured

Linux RAID speeds, more extensive testing

In my last test, I measured the read and write speeds of software RAID 10, and got a baseline from a single disk. Today, some other tests I had read about on various sites caught my attention. Are RAID 5 write speeds slow compared to other RAID setups? Is RAID 0 twice as fast as a single disk? How about RAID 0 with four disks?

With these questions in mind, I started testing. The test setup was the same as before:

  • Intel Motherboard DQ35JO
  • Intel Core 2 Duo E4700 @ 2.60 GHz
  • 4 GB DDR2 @ 800 MHz
  • 4 X SAMSUNG HD501LJ SATA2 500 GB hard drives
  • Ubuntu 10.04.1 LTS Server X64

First I decided to get the base/control speed, so the tests were done with a single disk. I used 8 GB for swap, and the rest (~492 GB) was ext4. No LVM was configured.
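Output in the format below comes from hdparm's timing tests: -T measures cached reads and -t measures buffered disk reads, so the three runs boil down to something like:

for i in 1 2 3; do sudo hdparm -tT /dev/sda; done

Here's what I got: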

/dev/sda:
 Timing cached reads:   2644 MB in 2.00 seconds = 1322.67 MB/sec
 Timing buffered disk reads:  242 MB in 3.00 seconds = 80.59 MB/sec

/dev/sda:
 Timing cached reads:   2624 MB in 2.00 seconds = 1312.92 MB/sec
 Timing buffered disk reads:  244 MB in 3.01 seconds = 81.03 MB/sec

/dev/sda:
 Timing cached reads:   2600 MB in 2.00 seconds = 1300.49 MB/sec
 Timing buffered disk reads:  244 MB in 3.00 seconds = 81.30 MB/sec

And here's what I got with dd on a 10 GB file:

Write:

for i in 1 2 3; do dd if=/dev/zero of=ddfile.10G bs=1MB count=10000; done

10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 134.755 s, 74.2 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 136.876 s, 73.1 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 140.327 s, 71.3 MB/s

 

Read:

for i in 1 2 3; do dd if=ddfile.10G of=/dev/null; done

19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.35 s, 66.1 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.245 s, 66.1 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.927 s, 65.8 MB/s
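A quick aside on reading the dd output: bs=1MB is the decimal megabyte, so the write loop moves 10000 x 1,000,000 = 10,000,000,000 bytes, exactly the byte count dd reports, while the read loop uses dd's default 512-byte block size, hence the 19531250 records (10,000,000,000 / 512). Also note that without conv=fdatasync some of each write lands in the page cache first, but with a 10 GB file against 4 GB of RAM the inflation stays small.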

 

So, how does RAID 0 with two disks compare to this? I had to find out, so I attached the second drive, erased all the contents, and re-installed the OS, this time onto a software RAID 0 array, of course.
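The array itself was set up during the re-install, but for reference, a hand-built equivalent with mdadm would look roughly like this (device and partition names are just examples, not the exact layout on this box):

sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2
sudo mkfs.ext4 /dev/md1

Here are the hdparm results: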

/dev/md1:
 Timing cached reads:   2614 MB in 2.00 seconds = 1307.87 MB/sec
 Timing buffered disk reads:  388 MB in 3.00 seconds = 129.21 MB/sec

/dev/md1:
 Timing cached reads:   2608 MB in 2.00 seconds = 1304.91 MB/sec
 Timing buffered disk reads:  382 MB in 3.00 seconds = 127.21 MB/sec

/dev/md1:
 Timing cached reads:   2594 MB in 2.00 seconds = 1297.95 MB/sec
 Timing buffered disk reads:  398 MB in 3.02 seconds = 131.93 MB/sec

 

And here's what I got with dd on a 10 GB file:

Write:

for i in 1 2 3; do dd if=/dev/zero of=ddfile.10G bs=1MB count=10000; done

10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 61.1483 s, 164 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 65.8284 s, 152 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 66.207 s, 151 MB/s

 

Read:

for i in 1 2 3; do dd if=ddfile.10G of=/dev/null; done

19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 73.1131 s, 137 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 72.7213 s, 138 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 72.846 s, 137 MB/s

 

As expected, the dd read and write speeds were slightly more than double the single-disk figures (hdparm's buffered reads gained a bit less, around 1.6x). My appetite grew, so next I inserted the two remaining disks and went all out: RAID 0 with four disks.
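Before benchmarking, it doesn't hurt to confirm that the array really has the expected level and member count, for example with:

cat /proc/mdstat
sudo mdadm --detail /dev/md1

hdparm gave me: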

/dev/md1:
 Timing cached reads:   2604 MB in 2.00 seconds = 1303.09 MB/sec
 Timing buffered disk reads:  620 MB in 3.00 seconds = 206.50 MB/sec

/dev/md1:
 Timing cached reads:   2612 MB in 2.00 seconds = 1306.79 MB/sec
 Timing buffered disk reads:  636 MB in 3.01 seconds = 211.20 MB/sec

/dev/md1:
 Timing cached reads:   2618 MB in 2.00 seconds = 1309.58 MB/sec
 Timing buffered disk reads:  614 MB in 3.00 seconds = 204.64 MB/sec

 

And the results for dd:

Write:

for i in 1 2 3; do dd if=/dev/zero of=ddfile.10G bs=1MB count=10000; done

10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 31.544 s, 317 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 32.6077 s, 307 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 32.9227 s, 304 MB/s

 

Read:

for i in 1 2 3; do dd if=ddfile.10G of=/dev/null; done

19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 44.2907 s, 226 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 45.2025 s, 221 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 45.1361 s, 222 MB/s

 

Damn, that's fast! Keep in mind that I'm using relatively old hardware: SATA II drives plugged into the motherboard's own SATA II ports, with no separate controller card.
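For rough context, the back-of-the-envelope math: SATA II signals at 3.0 Gbit/s per port, which after 8b/10b encoding is about 2.4 Gbit/s, or roughly 300 MB/s of usable bandwidth per drive, so the ports are not the limit here. The drives themselves top out around 75-80 MB/s each, and four of those striped land right around the ~310 MB/s writes measured above.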

Finally, I tested RAID 5 speeds with four disks.
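As before, the array came from the re-install, but a hand-built equivalent would be along these lines (device names again just examples). Keep in mind that a freshly created RAID 5 array runs an initial sync, and benchmarking before it finishes will drag the numbers down:

sudo mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
cat /proc/mdstat            # wait until the resync/recovery line disappears
sudo mkfs.ext4 /dev/md1

hdparm: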

 

/dev/md1:
 Timing cached reads:   2624 MB in 2.00 seconds = 1313.06 MB/sec
 Timing buffered disk reads:  268 MB in 3.01 seconds = 89.07 MB/sec

/dev/md1:
 Timing cached reads:   2600 MB in 2.00 seconds = 1300.86 MB/sec
 Timing buffered disk reads:  258 MB in 3.01 seconds = 85.79 MB/sec

/dev/md1:
 Timing cached reads:   2638 MB in 2.00 seconds = 1320.23 MB/sec
 Timing buffered disk reads:  272 MB in 3.01 seconds = 90.34 MB/sec

And dd:

Write:

for i in 1 2 3; do dd if=/dev/zero of=ddfile.10G bs=1MB count=10000; done

10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 134.755 s, 74.2 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 136.876 s, 73.1 MB/s
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB) copied, 140.327 s, 71.3 MB/s

 

Read:

for i in 1 2 3; do dd if=ddfile.10G of=/dev/null; done

19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.35 s, 66.1 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.245 s, 66.1 MB/s
19531250+0 records in
19531250+0 records out
10000000000 bytes (10 GB) copied, 151.927 s, 65.8 MB/s

 

Finally, the matrix and the chart summarizing the whole test. The short version: RAID 0 scaled close to linearly with the number of disks, while RAID 5 write and read speeds ended up in the same ballpark as a single disk.

[raid_speeds_01: results matrix] [raid_speeds_02: speed chart]
