How to measure disk read and write speed in Linux

Original: Test read/write speed of usb and ssd drives with dd command on Linux
Author: Silver Moon
Publication date: Jul 12, 2014
Translation: N. Romodanov
Translation date: October 2014

Device speed

The speed of a device is measured by how much data it can read or write per unit of time. The dd command is a simple command-line tool that can be used to read and write arbitrary blocks of data to a disk and measure the speed at which the transfer occurs.

In this article, we will use the dd command to check the read and write speed of usb and ssd devices.

The data transfer speed depends not only on the disk, but also on the interface it is connected through. For example, a USB 2.0 port has a practical speed limit of about 35 MB/s, so even if you plug a high-speed USB 3.0 flash drive into a USB 2.0 port, the speed will be capped at the lower value.

The same applies to SSDs. SSDs connect through SATA ports, which come in different versions. SATA 2.0 has a theoretical limit of 3 Gbit/s, roughly 375 MB/s, while SATA 3.0 supports twice that speed.

Test method

Mount the drive and navigate to it from a terminal window. Then use the dd command to first write a file consisting of fixed-size blocks, and then read the same file back using the same block size.

The general syntax of the dd command is as follows

dd if=/path/to/input_file of=/path/to/output_file bs=block_size count=number_of_blocks

When writing to disk, we simply read from /dev/zero, a source of an unlimited number of zero bytes. When reading from disk, we read the previously written file and send it to /dev/null, which simply discards whatever is written to it. Throughout the process, the dd command monitors and reports the rate at which the transfer occurs.
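As a quick sketch of the whole procedure, with deliberately small sizes and a file name chosen only for this example:

```shell
# Write a 64 MB test file from /dev/zero in 1 MB blocks
# (./testfile is an arbitrary name used for this sketch)
dd if=/dev/zero of=./testfile bs=1M count=64

# Read the same file back, discarding the data into /dev/null
dd if=./testfile of=/dev/null bs=1M

# Remove the temporary file when done
rm ./testfile
```

dd prints its statistics, including the transfer rate, on stderr after each run.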

SSD device

The SSD we are using is a Samsung Evo 120GB drive. It is an entry-level SSD in terms of budget and also my first SSD drive, yet it is one of the better-performing drives in its price range.

In this test, the ssd drive is connected to a sata 2.0 port.

Write speed

First, let's write to the SSD:

$ dd if=/dev/zero of=./largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s

The block size is actually quite large. You can try using a smaller size like 64k or even 4k.
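To compare block sizes directly, the same write can be repeated while holding the total amount of data constant. A sketch; the 100 MB total and the file name are arbitrary choices:

```shell
# Write the same 100 MB total with different block sizes and
# compare the rate dd reports (./bs-test is an arbitrary name)
for spec in "4k 25600" "64k 1600" "1M 100"; do
    set -- $spec            # $1 = block size, $2 = block count
    echo "block size $1:"
    dd if=/dev/zero of=./bs-test bs=$1 count=$2 2>&1 | tail -n 1
done
rm ./bs-test
```

dd writes its statistics to stderr, so `2>&1 | tail -n 1` keeps only the final summary line of each run.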

Read speed

Now read the same file back. But first clear the memory cache to make sure the file is actually read from disk.

To clear the memory cache, run the following command

$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

Now read the file

$ dd if=./largefile of=/dev/null bs=4k
165118+0 records in
165118+0 records out
676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s

USB device

In this test, we will measure the read and write speed of ordinary USB flash drives connected to standard USB 2.0 ports. The first device is a Sony 4GB stick, and the second a Strontium 16GB stick.

First connect the device and mount it so that it is readable. Then, from the command line, navigate to the mounted directory.

Sony 4GB device - write

In this test, the dd command is used to write 10,000 blocks of 8 KB each into a single file on the drive.

# dd if=/dev/zero of=./largefile bs=8k count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 11.0626 s, 7.4 MB/s

The write speed is about 7.4 MB/s, which is quite low.

Sony 4GB device - read

The same file is read back to test the read speed. First clear the memory cache with the following command

$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

Now read the file with dd command

# dd if=./largefile of=/dev/null bs=8k
8000+0 records in
8000+0 records out
65536000 bytes (66 MB) copied, 2.65218 s, 24.7 MB/s

The read speed is approximately 25 MB/s, which is more or less standard for cheap usb sticks.

USB 2.0 has a theoretical maximum signaling rate of 480 Mbit/s, or 60 MB/s. Due to various limitations, however, the maximum throughput is limited to approximately 280 Mbit/s, or 35 MB/s. The actual speed also depends on the quality of the flash drive, among other factors.

Since the above USB device was connected to a USB 2.0 port, a read speed of 24.7 MB/s is not too bad. The write speed, however, lags far behind.

Now let's do the same test with a Strontium 16GB stick. Strontium is another brand that makes very cheap but reliable USB sticks.

Write speed for Strontium 16gb device

# dd if=/dev/zero of=./largefile bs=64k count=1000
1000+0 records in
1000+0 records out
65536000 bytes (66 MB) copied, 8.3834 s, 7.8 MB/s

Read speed for Strontium 16gb device

# sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
# dd if=./largefile of=/dev/null bs=8k
8000+0 records in
8000+0 records out
65536000 bytes (66 MB) copied, 2.90366 s, 22.6 MB/s

The read speed is slightly slower than that of the Sony device.

Testing an internal drive

To determine the disk write speed, run the following command in the console:

sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync

The command writes 1024 blocks of 1 MB each to a temporary file; the result looks like this

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.4992 s, 69.3 MB/s
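Rather than wrapping the command in sync, GNU dd can be told to flush the data itself with conv=fdatasync, so the reported rate includes the time needed to commit the file to disk. A sketch with a smaller, arbitrary size:

```shell
# conv=fdatasync makes dd flush the file to disk before reporting
# the rate, so the page cache cannot inflate the write number
dd if=/dev/zero of=tempfile bs=1M count=64 conv=fdatasync
rm tempfile
```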

Before measuring the read speed, note that the temporary file generated by the previous command is cached in the page buffer. A cached read is by itself much faster than reading directly from the hard disk, so to get the real speed this cache must first be cleared.

To measure the read speed from the buffer cache, run the following command in the console:

dd if=tempfile of=/dev/null bs=1M count=1024

Output of the previous command:

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.446 s, 69.5 MB/s

To measure the actual read speed from the disk, clear the cache:

sudo /sbin/sysctl -w vm.drop_caches=3

Command output:

vm.drop_caches = 3

We perform a reading speed test after clearing the cache:

dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 16.5786 s, 64.8 MB/s

Performing a read/write speed test on an external drive

To test the speed of an external HDD, USB flash drive, other removable media, or the file system of a remote machine (VPS/VDS), go to its mount point and run the commands above.

Or, instead of tempfile, you can of course write the path to the mount point, as follows:

sync; dd if=/dev/zero of=/media/user/USBFlash/tempfile bs=1M count=1024; sync

Note that the commands above use the temporary file tempfile. Don't forget to delete it when the tests are over.
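Putting the whole external-drive test together (the mount point here is hypothetical; substitute the path where your drive is actually mounted):

```shell
# MNT is a hypothetical mount point; set it to your drive's real
# mount point (falls back to /tmp so the sketch stays runnable)
MNT="${MNT:-/tmp}"
cd "$MNT"

# Write test, flushing caches around the run
sync; dd if=/dev/zero of=tempfile bs=1M count=64; sync

# Read test
dd if=tempfile of=/dev/null bs=1M

# Remove the temporary file when finished
rm tempfile
```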

HDD speed test using hdparm utility

hdparm is a Linux utility that lets you quickly measure the read speed of your HDD.

To measure the read speed of your hard drive, run the following command in the console:

sudo hdparm -Tt /dev/sda

Command output in console:

/dev/sda:
 Timing cached reads: 6630 MB in 2.00 seconds = 3315.66 MB/sec
 Timing buffered disk reads: 236 MB in 3.02 seconds = 78.17 MB/sec

That's all. Thus, we were able to find out the performance of our hard drive and give a rough estimate of its capabilities.

Disk speed test using fio utility

fio requires reading the manual (man fio), but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:

Sequential READ speed with big blocks

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Random 4K read QD1 (this is the number that really matters for real-world performance, unless you know better for sure):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Mixed random 4K read and write QD1 with sync (this is the worst-case number you should expect from your drive, usually less than 1% of the numbers listed in the spec sheet):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
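The long command lines above can also be kept in a fio job file and run as `fio jobfile.fio`. A sketch of the sequential-read test in job-file form (the file name and section names here are our own):

```ini
; seq-read.fio -- hypothetical job file mirroring the sequential-read command
[global]
filename=fio-tempfile.dat
size=500m
io_size=10g
ioengine=libaio
direct=1
runtime=60
group_reporting

[seq-read]
rw=read
blocksize=1024k
iodepth=32
numjobs=1
fsync=10000
```

(--eta-newline is a command-line-only option and has no job-file equivalent.)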

Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media, because the read head does not need to move that much. If your device is nearly empty, using a file big enough to almost fill the drive will get you worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.

However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs may have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that is used as a write cache, and performance changes once the SLC cache is full. As another example, Seagate SMR HDDs have about a 20 GB PMR cache area with fairly high performance, but once it fills up, writing directly to the SMR area may cut performance to 10% of the original; the only way to see this degradation is to first write more data than the cache can hold, as fast as possible. Of course, this all depends on your workload: if your write access is bursty, with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do a lot of sustained IO, increase both the --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if a device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case.

In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache is smart enough to replace data in place as it is re-written during the test, provided the test hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you care about your actual workload, it's best to test with file sizes you'll actually see in real life. Otherwise your numbers may look too good.

Note that fio will create the required temporary file on first run. It will be filled with random data, to avoid getting too-good numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file is called fio-tempfile.dat in the above examples and is stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test.

If you have a good SSD and want to see even higher numbers, increase --numjobs above 1. That defines the concurrency of the reads and writes. The above examples all have numjobs set to 1, so the test is a single-threaded process reading and writing (possibly with a queue, set with iodepth). High-end SSDs (e.g. Intel Optane) should reach high numbers even without increasing numjobs much (e.g. 4 should be enough to hit the spec numbers), but some "Enterprise" SSDs require going to 32-128, because the internal latency of those devices is higher while their overall throughput is enormous.
