Introduction to Command Queuing

What is Command Queuing, what is Queue Depth, and how do I set or achieve Command Tag Queuing?

Queuing

Queuing is a method of sending multiple commands to a drive without waiting for the drive to process each command before sending the next.

You issue one command to the drive, and while the drive is still processing that command you can issue another, and so on until the queue is full. Each command you issue is placed into a queue, and as the drive completes one command out of the queue it fetches the next command from the queue and executes it. While the drive is executing commands, more commands can be “stacked up”, or added into the queue, to be processed when the drive gets to them.

Since it takes longer for the disk to execute a command than it takes to issue commands to the drive’s queue, you can end up with a fairly deep queue of commands waiting to be processed by the drive.

This type of queuing is called a Simple Queue – the drive executes the commands in the queue in the same order that they were placed into the queue. This is sometimes referred to as a FIFO (First In, First Out) queuing method.
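To make this concrete, here is a minimal sketch in C of a simple FIFO command queue. The Command structure and the QUEUE_DEPTH value are hypothetical, invented purely for illustration – they are not part of the STB Suite API –

#include <stdbool.h>
#include <stdint.h>

#define QUEUE_DEPTH 32   /* hypothetical maximum queue depth */

typedef struct {
    uint8_t  tag;        /* identifier for this queued command */
    uint64_t startLba;   /* first LBA the command touches */
    uint32_t blocks;     /* transfer length in blocks */
} Command;

typedef struct {
    Command  slots[QUEUE_DEPTH];
    unsigned head;       /* next command the drive will execute */
    unsigned tail;       /* where the host adds the next command */
    unsigned count;      /* commands currently in the queue */
} SimpleQueue;

/* Host side: add a command while the drive is busy; fails if full. */
bool queue_put(SimpleQueue *q, Command c)
{
    if (q->count == QUEUE_DEPTH)
        return false;                    /* queue full - host must wait */
    q->slots[q->tail] = c;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->count++;
    return true;
}

/* Drive side: fetch the oldest command - first in, first out. */
bool queue_get(SimpleQueue *q, Command *out)
{
    if (q->count == 0)
        return false;                    /* nothing queued */
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return true;
}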

There are more complex types of queuing, such as Ordered Queuing, where instead of the disk just executing the queued commands in the order they were issued (FIFO), the drive actually analyzes each command in the queue and sorts them, executing the commands in an order which may give higher performance.

For example, a drive may take commands out of its queue based on the starting LBA of each command, executing them in an order that minimizes moving the heads back and forth.
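A real drive does this sorting in firmware, but the idea can be sketched in a few lines of C. Assuming the same hypothetical Command structure as above, this reorders the pending commands by starting LBA so the heads sweep in one direction instead of seeking back and forth –

#include <stdlib.h>

/* Compare two queued commands by starting LBA. */
static int cmp_by_lba(const void *a, const void *b)
{
    const Command *ca = a, *cb = b;
    if (ca->startLba < cb->startLba) return -1;
    if (ca->startLba > cb->startLba) return  1;
    return 0;
}

/* Reorder the pending commands into ascending-LBA order. */
void order_queue(Command *pending, size_t n)
{
    qsort(pending, n, sizeof(Command), cmp_by_lba);
}

(Real firmware schedulers are more sophisticated than a plain sort, but LBA ordering captures the basic idea.)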

As you can see, command queuing is a complex process. Commands which are placed in the queue are marked with an identifier called a “tag”. The operating system, drivers, and host bus adapter all use these tags to keep track of which commands are still waiting in the queue and which command is currently being processed. The use of these tags becomes very important when there is a problem and the drive can’t execute a command properly. For another example of the importance and complexity of dealing with queued commands and keeping track of them via their tags, imagine what must happen if you have 100 write commands queued and you lose power to the disk drive. Keeping track of which tags completed successfully and which didn’t is a very complex task!
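As an illustration of that bookkeeping, the host side can track outstanding tags in a bitmap: a bit is set when a tag is issued and cleared when the drive reports that tag complete. This is only a sketch – in practice the operating system, driver, and HBA do this work –

#include <stdint.h>

static uint32_t outstanding;   /* bit N set = tag N is still in the queue */

/* Claim the lowest free tag, or -1 if all 32 tags are in flight. */
int tag_alloc(void)
{
    for (int t = 0; t < 32; t++) {
        if (!(outstanding & (1u << t))) {
            outstanding |= (1u << t);
            return t;
        }
    }
    return -1;                 /* queue is full */
}

/* The drive completed (or aborted) this tag - it may be reused now. */
void tag_complete(int t)
{
    outstanding &= ~(1u << t);
}

/* After a power loss, every bit still set represents a command whose
 * fate is unknown and must be checked or reissued. */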

You may see the terms Command Tagged Queuing or Tagged Command Queuing. Both refer to the same process of queuing commands: putting commands into a queue at the same time the drive is taking commands out of the queue and executing them.

Queue Depth

The term Queue Depth refers to how many commands the drive, host bus adapter, driver, and operating system can have in a queue. Some drives have a queue depth limit of 32, while some newer storage technologies such as NVMe support queue depths of 64K!

Queuing in the STB Suite

The STB Suite Disk Manufacturing Module (DMM) supports queuing and lets the operator specify the desired queue depth. You specify the desired queue depth on the Advanced Options page using the Queue Depth setting –

For each test step you can specify the maximum queue depth desired. Note – it is possible to enter a desired queue depth which, for one reason or another, cannot be attained; DMM will try its best to issue commands fast enough to sustain the desired queue depth. SATA drives report their maximum queue depth via their IDENTIFY data – such as these –



There’s no point specifying a queue depth greater than the drive supports.
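For SATA drives using NCQ, the maximum queue depth is reported in the IDENTIFY DEVICE data: word 75, bits 4:0 hold the maximum queue depth minus one, and word 76, bit 8 indicates whether NCQ is supported at all. Here is a minimal sketch in C of decoding those fields, assuming you have already fetched the 256-word IDENTIFY buffer via your pass-through mechanism of choice –

#include <stdint.h>
#include <stdio.h>

/* id is the 256-word IDENTIFY DEVICE data returned by the drive. */
void report_queue_depth(const uint16_t id[256])
{
    int ncq_supported = (id[76] >> 8) & 1;     /* word 76, bit 8 */
    int max_depth     = (id[75] & 0x1F) + 1;   /* word 75, bits 4:0 = depth - 1 */

    if (ncq_supported)
        printf("NCQ supported, maximum queue depth = %d\n", max_depth);
    else
        printf("NCQ not supported\n");
}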

 

What does Queuing look like?

Here are some example screenshots of BAM showing queuing in action.
First – BAM’s real-time meters show maximum and average metrics, including the queue depth attained. In this case we requested a queue depth of 64 and were able to sustain it –

Now, looking at the trace display, you can see a number of WRITE commands being issued. Note that we are able to issue many (up to 64) WRITE commands before a single command completes – a command is only complete after its data phase has actually written the data to the drive. What you see here are seven WRITE commands queued in the drive before the first command’s data is written –

Conversely, at the end of the trace DMM has issued all of the WRITE commands and is now waiting for them all to complete. Here we see the last WRITE command, then the data-out phases “catching up” –

BAM can show you in post-capture the highest queue depth that was attained during a capture –

 

And you can also see the highest (deepest) queue depth attained in the Trace Analysis post-capture display –

 

The effects of queue depth on performance

Queuing commands can have a significant impact on drive throughput.
Each test scenario (different drives, interfaces, adapters, etc.) should be tested to find the combination of I/O transfer size and queue depth that yields the highest throughput.
Here is an example real-time transfer rate performance measurement taken from DMM, showing a sequential WRITE test –

First with a queue depth of 1 –

 

And now with a queue depth of 32 –

 


Simply increasing the queue depth from 1 to 32 more than doubled the I/O transfer rate.
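If you want to find that best combination systematically, the test matrix is easy to express in code. In this sketch, run_one_test is a hypothetical placeholder for whatever actually issues the queued I/O (for example, a DMM test step) and measures throughput –

#include <stdio.h>

/* Placeholder: replace with code that runs one queued-I/O test
   (e.g. a DMM READ or WRITE test step) and returns MB/s. */
static double run_one_test(int blocks_per_io, int queue_depth)
{
    (void)blocks_per_io; (void)queue_depth;
    return 0.0;
}

int main(void)
{
    int sizes[]  = { 8, 16, 64, 128, 256 };   /* blocks per I/O */
    int depths[] = { 1, 2, 4, 8, 16, 32 };    /* queue depths to try */

    for (size_t s = 0; s < sizeof sizes / sizeof *sizes; s++)
        for (size_t d = 0; d < sizeof depths / sizeof *depths; d++)
            printf("size=%4d blocks  qd=%2d  ->  %.1f MB/s\n",
                   sizes[s], depths[d],
                   run_one_test(sizes[s], depths[d]));
    return 0;
}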

Summary

The STB Suite can generate deeply queued I/O of a user-defined depth in any DMM READ or WRITE test step. Maximum queue depth can be set on a per-test basis for versatile testing and measurement.

Other tools within the STB Suite can display what maximum queue depth a given drive will support.

The Bus Analyzer Module (BAM) can display in real-time and in post-capture what actual queue depth was attained during a trace capture.