HDD, SSD and SSS – DMM WRITE SAME Test Step

STB Suite | The Industry Standard in Peripheral Testing.

Introduction

This paper describes two new write tests available in DMM – WriteSameSCSI and WriteSameSATA.

How to write an entire drive – the old way

For our example we will use a typical 1TB disk drive, made up of 1,953,525,168 512-byte blocks, or sectors. Let’s say that we want to write all blocks of this drive at a typical 128 blocks-per-transfer (64KB per I/O). This is going to require 15,261,915 individual I/Os, with each I/O transferring 128 blocks of data from the test computer to the disk under test – over 15 million I/Os, each one moving data to the drive.
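As a quick sanity check, the I/O count above can be reproduced with a few lines of Python (illustrative only – the drive geometry is the example’s, not read from hardware):

```python
# Example 1TB drive from the text: 1,953,525,168 sectors of 512 bytes
TOTAL_BLOCKS = 1_953_525_168
BLOCKS_PER_IO = 128  # 64KB per transfer

# How many full 128-block writes cover the drive, and what is left over
full_ios, leftover_blocks = divmod(TOTAL_BLOCKS, BLOCKS_PER_IO)
print(full_ios)         # 15261915 full-size I/Os
print(leftover_blocks)  # 48 blocks for one final short write
```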

What writing one single drive entails

Real-world numbers for a typical test system show that we can sustain a data rate of around 100 MB/second.

Going back to the disk size of 1,953,525,168 512-byte blocks, and factoring in a sustained transfer rate of 100MB/second, we can see that it is going to take approximately 2 hours and 38 minutes to write the entire drive. The DMM log file shown later reports the figures from a real run.
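The same back-of-the-envelope time estimate can be worked out in Python, assuming the 100 MB/second figure is measured in MiB/second:

```python
TOTAL_BLOCKS = 1_953_525_168
BYTES_PER_BLOCK = 512
RATE_MIB_PER_SEC = 100  # sustained rate from the example system

total_mib = TOTAL_BLOCKS * BYTES_PER_BLOCK / (1024 * 1024)
seconds = total_mib / RATE_MIB_PER_SEC
hours, minutes = int(seconds // 3600), int(seconds % 3600 // 60)
print(f"{hours} h {minutes} min")  # 2 h 38 min
```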

We know that the process of writing the entire drive at 128 blocks-per-I/O is going to need approximately 15,261,915 WRITE commands, or CDBs.

Issuing each one of these 15,261,915 CDBs takes a certain amount of time, consisting of:

  • selecting the drive that the I/O is going to go to – 15,261,915 times
  • sending the WRITE CDB – typically 10 to 16 bytes – from the computer to the drive – 15,261,915 times
  • sending the 128 blocks of data from the computer to the drive – 15,261,915 times
  • handling the various overhead and handshaking messages which are part of the I/O protocol for each CDB – 15,261,915 times
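To see why that per-command bookkeeping matters, here is a rough model. The microsecond figures are pure assumptions for illustration – real overheads depend on the HBA, driver, and protocol:

```python
NUM_CDBS = 15_261_915

# Hypothetical per-command overhead, in microseconds (assumed figures)
SELECT_US = 5      # selecting the target drive
CDB_US = 10        # transmitting the WRITE CDB itself
HANDSHAKE_US = 20  # protocol handshaking around the command

overhead_seconds = NUM_CDBS * (SELECT_US + CDB_US + HANDSHAKE_US) / 1_000_000
print(f"{overhead_seconds / 60:.1f} minutes of pure command overhead")
```

Even at a few dozen microseconds per command, repeating the overhead 15 million times adds whole minutes to the test – before any data moves at all.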


What writing to multiple drives at once entails

We have all seen that no matter how powerful our test computers are, and no matter how fast our disk drives and host bus adapters are, a state of data saturation will always occur as more drives are added to the test.

Our example test system with one PCIe SAS host bus adapter can typically sustain an aggregate data rate of approximately 500 MB/second. This means that we can run up to 5 disk drives at full speed, but adding more than 5 drives to the test will cause all of the drives to slow down.

  • One drive @ 100MB/s = 100MB/s aggregate throughput out of available system I/O = 500MB/s
  • Two drives @ 100MB/s = 200MB/s aggregate throughput out of available system I/O = 500MB/s
  • Three drives @ 100MB/s = 300MB/s aggregate throughput out of available system I/O = 500MB/s
  • Five drives @ 100MB/s = 500MB/s aggregate throughput out of available system I/O = 500MB/s
  • ALERT – we are now out of test system I/O bandwidth!
  • 10 drives @ 50MB/s = 500MB/s aggregate throughput out of available system I/O = 500MB/s
  • Each drive added from here on will reduce the I/O rate of all drives under test

Once we exceed our system’s total I/O bandwidth, our test times will climb sharply.
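The saturation behaviour in the list above is easy to model – each drive runs at its own maximum until the shared bus bandwidth is divided too thinly (figures from the example system):

```python
SYSTEM_BW_MB_S = 500.0  # aggregate MB/s the example HBA can sustain
DRIVE_MAX_MB_S = 100.0  # MB/s a single drive can sustain

def per_drive_rate(n_drives: int) -> float:
    """MB/s each drive actually gets once n_drives share the bus."""
    return min(DRIVE_MAX_MB_S, SYSTEM_BW_MB_S / n_drives)

for n in (1, 5, 10, 20):
    print(n, per_drive_rate(n))  # 100.0, 100.0, 50.0, 25.0 MB/s
```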

Two Solutions to the total system I/O bandwidth problem

  1. You can increase your system’s total I/O bandwidth by changing your test system hardware, such as:
    • adding more host bus adapters to spread the drives over more PCIe bus slots
    • changing to faster host bus adapters – substituting 6G SATA and SAS host bus adapters for 3G hardware raises the maximum system throughput
    • CAUTION: be certain that you also upgrade all enclosures and cables to be compatible with the higher speeds
    • these steps will raise the I/O threshold limit, but the limit will still be there
  2. You can increase your system’s effective I/O throughput by DECREASING the amount of data you send over the bus. This is a clever option and deserves its own section, “Maximize by Reducing”.


Maximize by Reducing

“If you just write less data to your disk drive it will take less time to finish”

“But” you say, “I need to write to the entire drive. I can’t write less than the entire drive!” And you are correct!

The answer to this dilemma is to selectively reduce not the data that ends up on the drive, but the work the test system must do to write all 1,953,525,168 blocks of your drive.

Remember the list above of four separate steps that needed to happen 15,261,915 times to write all of the blocks on the drive? We still need to do each of those four things – but what if we only had to do them once?

The WRITE SAME Command

As the name of this command implies, this CDB is going to WRITE the SAME data to some range of blocks on a drive – a range such as “the entire drive”. To write every block on the drive we:

  • select the target disk drive – once
  • send one WRITE SAME command from the test system to the disk under test – once, and
  • copy one block (512 bytes) of data from the test computer to the disk under test – once

Once – versus 15,261,915 times.
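For readers curious what that single command looks like on the wire, here is a sketch of a WRITE SAME(16) CDB (opcode 0x93) built in Python. The field layout follows the SBC command description; the one-block data-out payload and the actual pass-through I/O are not shown:

```python
import struct

def write_same16_cdb(lba: int, num_blocks: int) -> bytes:
    """Build a 16-byte WRITE SAME(16) CDB covering num_blocks starting at lba."""
    return struct.pack(
        ">BBQIBB",
        0x93,        # operation code: WRITE SAME (16)
        0x00,        # flags byte (no UNMAP, no ANCHOR)
        lba,         # starting LBA, 8 bytes, big-endian
        num_blocks,  # number of logical blocks, 4 bytes
        0x00,        # group number
        0x00,        # control byte
    )

# One CDB covering the whole 1TB example drive
cdb = write_same16_cdb(0, 1_953_525_168)
print(len(cdb), cdb.hex())
```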

After the drive receives this one command and one block of data it will go on its way, copying that block of data to each of its blocks. During the time that the drive is copying, it does not take up any I/O bus bandwidth.

Now, we all know that there’s no such thing as a free lunch, even with the WRITE SAME command. We only send our 512 bytes of data over the bus to the drive one time – but then the drive takes those 512 bytes and copies them internally to each of its 1,953,525,168 blocks. Since your disk drive still has physical constraints upon it, the copy will take about the same time as a full write would if it were the only drive on your system running at full disk speed.

Here is a real-world example taken from a DMM log file
Results:
07/18/2012  11:54:13  TEST 1 of 2:

Write Test; Sequential; for 1,953,525,168 Blocks
 Fixed-Length Transfers of 128 (0x0080) Blocks
 Start Block: 0
 Data Pattern: Decrementing
 Queue Depth = 1
 FUA = OFF
 Stop-on-Error Type: Stop Current Test

07/18/2012  14:55:44  Test Completed Successfully

Transfer Rate: 87.58 MB/sec
 I/O Per Second: 1401.33 IO/sec
 Number of Blocks Transferred: 1,953,525,168
 Fastest Command Completion Time: 0.382 ms
 Slowest Command Completion Time: 75.867 ms
 Average Command Completion Time: 0.704 ms
 Standard Deviation of Command Completion Times: 0.334 ms

07/18/2012  14:55:44                                                 PASSED
——————————————————————————
07/18/2012  14:55:44  TEST 2 of 2:    External Program Test, executable = writesamesata
PASSED- SATA WRITE SAME PASSED – Test ran for 6660 seconds
07/18/2012  16:47:51  Test Completed Successfully

The first “normal” Sequential Write test ran from 11:54:13 to 14:55:44 – just over 3 hours, or roughly 10,890 seconds – and as is shown, the WRITE SAME test #2 accomplished the same result in 6,660 seconds, cutting the test time by nearly 40%. Clearly all of the overhead associated with processing a CDB 15,261,915 times adds up quickly!

Some drives will see a slight improvement using Write Same versus a sequential write test, others will take about the same amount of time.

But overall, with each drive doing its own data copying and not using any I/O bus bandwidth, when writing to many drives you will see a dramatic decrease in test time.

The Results

So we see that by using the WRITE SAME method versus the Sequential Write All Blocks method we may shorten the time to entirely write any given drive.

But the biggest time saving comes from eliminating all of those 15,261,915 data transfers for each drive, leaving the I/O bus virtually traffic-free.

Consider writing 100 drives this way :

  • issue one CDB with one 512-byte block of data to drive 1 – drive 1 is now off the bus, busy copying its data
  • issue one CDB with one 512-byte block of data to drive 2 – drive 2 is now off the bus, busy copying its data
  • etc.

There is never more than one drive trying to use the I/O bus at a time, and each drive only needs to copy one block of data from the test system instead of 1,953,525,168 blocks. Once the process is started and one block of data is transferred, each disk sees a free bus: it is busy copying data to itself while more drives do the same.

In practical terms, that first drive, which is going to take 6,660 seconds to complete, is running at its full disk speed – along with as many other drives as you wish. The time it takes to entirely write 1 drive is the same time it takes 100 drives to entirely write themselves.
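A rough comparison of total test time for n drives makes the point. The figures below are the article’s examples (roughly 953,870 MiB per drive, a 500 MB/s bus, and the 6,660-second WRITE SAME time from the log), not new measurements:

```python
DRIVE_MIB = 953_870         # data to write per 1TB drive
DRIVE_RATE = 100.0          # MiB/s a lone drive sustains
BUS_RATE = 500.0            # MiB/s the whole test system can move
WRITE_SAME_SECONDS = 6_660  # per-drive time from the DMM log

def sequential_hours(n_drives: int) -> float:
    # All drives share the bus, so each slows once the bus saturates
    per_drive = min(DRIVE_RATE, BUS_RATE / n_drives)
    return DRIVE_MIB / per_drive / 3600

def write_same_hours(n_drives: int) -> float:
    # Drives copy internally and the bus stays idle, so time is flat
    return WRITE_SAME_SECONDS / 3600

for n in (1, 10, 100):
    print(n, round(sequential_hours(n), 1), round(write_same_hours(n), 1))
```

Sequential writing of 100 drives balloons to over two days as the bus is split 100 ways; WRITE SAME stays under two hours no matter how many drives are attached.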

Using the new WriteSame test there is NO time penalty for writing to as many disks as you can attach to your test system.

Summary

There are limitations to the WRITE SAME test method. Obviously you can only write an identical data pattern to every block on the drive – you still have a wide choice of data patterns, but each block receives the same one. In many cases, such as wiping entire drives, that is not a limitation.

There are practical limits as well, but they have been taken care of for you in the STB Suite implementation of these tests. In particular, this method is very “fragile” with SATA drives and must be handled with the utmost care. But once again, we’ve taken care of the nuts and bolts of this WRITE SAME method, boiling all the nitty-gritty details down into a simple menu choice in DMM.


Implementation

There are two separate Write Same DMM tests, one for SCSI/SAS/FC drives and another for SATA drives. Click the DMM External Test button in the test type area. A pop-up window will ask if you have any command line parameters; enter the data pattern you want written to the drive.
Available data patterns are:

    • pattern=allzeros
    • pattern=allones
    • pattern=altzeroandone
    • pattern=altoneandzero
    • pattern=incrementing
    • pattern=decrementing
    • pattern=random
    • pattern=walkingzeros
    • pattern=walkingones
    • pattern=alt1and0thenalt0and1
    • pattern=alt0and1thenalt1and0
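As an aside, each pattern name on this list ultimately describes how the single 512-byte block handed to WRITE SAME is filled. Here is a hypothetical sketch for two of the patterns – the byte-level definitions below are our assumptions for illustration, not DMM’s exact encodings:

```python
def incrementing_block(size: int = 512) -> bytes:
    """Bytes 0x00, 0x01, ... wrapping after 0xFF (assumed definition)."""
    return bytes(i % 256 for i in range(size))

def decrementing_block(size: int = 512) -> bytes:
    """Bytes 0xFF, 0xFE, ... wrapping back to 0xFF (assumed definition)."""
    return bytes((255 - i) % 256 for i in range(size))

blk = incrementing_block()
print(len(blk), blk[:4].hex())  # 512 00010203
```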


DMM WRITE SAME test - External Program

Enter the data pattern, click the OK button, then in the Download File/External Program Executable field enter writesamescsi for SCSI, SAS, or FC drives, or enter writesamesata for SATA drives.
DMM WRITE SAME test - Add Sequence

Then click the Add This Test to Test Sequence button.