Toshiba XG6 review: The first SSD with 96-layer 3D-NAND!

Is Toshiba now competitive with Samsung and WD?

Performance over time (consistency)

Unlike hard disk drives, SSDs do not have a constant performance level. When an SSD is under heavy load for a long period without breaks, and/or when its storage is nearly full, performance can drop significantly. The performance level that an SSD can maintain under the most extreme conditions (i.e. the worst-case scenario) is called the steady-state performance.

Garbage collector

First, some background information. As we have written several times before, data on an SSD is read and written per so-called 'page', usually 4, 8 or 16 kB in size. Before cells can be rewritten, however, they must first be erased, and that is only possible per block; such a block consists of 128, 256 or 512 pages. This is why SSDs have to perform clever tricks. When a few pages of data within a block must be deleted, the remaining data in that block first has to be copied to another block, after which the entire block can be erased. In practice, this means that SSD controllers collect write actions as much as possible, direct them to freshly erased blocks, and perform erase actions only at regular intervals.

In idle moments, when the SSD has nothing to do, the garbage collector embedded in the controller kicks in: it carries out the actual erase operations at chip level and consolidates the remaining data into full blocks as much as possible, in order to be able to completely empty as many blocks as possible. However, if the SSD is used continuously for a long period of time, i.e. without a second's rest, the garbage collector never gets a chance to run in the background. At some point there are no empty blocks left and the SSD has to interrupt the execution of commands to perform garbage collection on the spot. The result: lower performance.
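The page/block mechanics described above can be sketched in a toy model. This is an illustration only, with made-up sizes (real blocks hold 128-512 pages, and real controllers use far more sophisticated policies); it shows why relocating valid pages makes foreground garbage collection expensive.

```python
PAGES_PER_BLOCK = 4  # toy value; real blocks hold 128, 256 or 512 pages

class ToySSD:
    """Minimal flash model: pages are written one at a time,
    but erasure is only possible per whole block."""

    def __init__(self, num_blocks):
        # each page is 'free', 'valid' or 'invalid' (stale copy)
        self.blocks = [['free'] * PAGES_PER_BLOCK for _ in range(num_blocks)]

    def free_pages(self):
        return sum(block.count('free') for block in self.blocks)

    def write_page(self):
        """Write one page into the first free slot; False if none left."""
        for block in self.blocks:
            for i, state in enumerate(block):
                if state == 'free':
                    block[i] = 'valid'
                    return True
        return False

    def invalidate_page(self):
        """Simulate overwriting data: the old copy becomes invalid,
        but its page cannot be reused until its block is erased."""
        for block in self.blocks:
            for i, state in enumerate(block):
                if state == 'valid':
                    block[i] = 'invalid'
                    return True
        return False

    def garbage_collect(self):
        """Erase the block with the most invalid pages, first copying
        its valid pages elsewhere (extra writes the host never asked
        for). Assumes a free page exists elsewhere for each copy.
        Returns the number of pages freed."""
        victim = max(self.blocks, key=lambda b: b.count('invalid'))
        if victim.count('invalid') == 0:
            return 0
        for i, state in enumerate(victim):
            if state == 'valid':
                victim[i] = 'invalid'
                self.write_page()  # relocated copy lands in another block
        victim[:] = ['free'] * PAGES_PER_BLOCK
        return PAGES_PER_BLOCK
```

When the drive is idle, `garbage_collect()` can run in the background; under a continuous workload it has to run in the middle of servicing commands, which is exactly the performance dip the consistency test below provokes.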

Overprovisioning

This effect is amplified when the SSD is very full; the number of completely empty blocks to which data can be written directly is then by definition limited. This is why every SSD is overprovisioned by at least a few percent: it contains more physical flash memory than you can use via the operating system. For example, a 512GB SSD generally has 512 binary gigabytes (512 x 1024 x 1024 x 1024 = 549,755,813,888 bytes) of flash on board, while the operating system counts with 1000 bytes per kilobyte and thus sees 512 x 1000 x 1000 x 1000 = 512,000,000,000 bytes, or 476.8 binary gigabytes. The difference of roughly 7.4 percent is what the garbage collector can work with even when the SSD is completely full. More expensive SSDs generally have considerably more overprovisioning than that, which means they need to fall back on foreground garbage collection less quickly and less often while processing data.
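The arithmetic above can be checked in a few lines:

```python
# A "512 GB" SSD typically carries 512 binary gigabytes of physical
# flash, while the OS-visible capacity is 512 decimal gigabytes.
physical = 512 * 1024**3   # 549,755,813,888 bytes
visible = 512 * 1000**3    # 512,000,000,000 bytes

spare = physical - visible             # flash reserved for the controller
spare_pct = spare / visible * 100      # ~7.37 percent
print(f"{spare_pct:.2f}%")             # prints "7.37%"
```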

TLC, SLC, MLC

And there is another reason why the performance of SSDs can slump a few seconds, minutes or even longer into a workload. Many consumer SSDs use so-called TLC memory, which stores three bits per cell. Useful for keeping costs down, but writing to TLC memory is much slower than to SLC or MLC memory (1 or 2 bits per cell, respectively). Some manufacturers solve this issue with a buffer: a separate memory chip and/or a part of the flash memory that is addressed as SLC and can therefore be written much faster. The idea is that write actions first go to the buffer and are flushed to TLC later. However, if a workload lasts so long that the buffer fills up, writes have to go directly to the TLC, resulting in lower throughput and higher access times.
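The behaviour of such an SLC write cache can be sketched as follows. The speeds and cache size here are made-up example figures, not Toshiba's actual specifications; the point is only the two-stage shape of a long write burst.

```python
SLC_SPEED_MBS = 2000  # hypothetical buffered (SLC-mode) write speed
TLC_SPEED_MBS = 500   # hypothetical direct-to-TLC write speed

def burst_time(total_mb, cache_mb):
    """Seconds to write total_mb in one uninterrupted burst: the first
    cache_mb lands in the fast SLC buffer, the remainder has to go
    straight to the slower TLC flash."""
    fast = min(total_mb, cache_mb)
    slow = total_mb - fast
    return fast / SLC_SPEED_MBS + slow / TLC_SPEED_MBS

def avg_throughput(total_mb, cache_mb):
    """Average MB/s over the whole burst."""
    return total_mb / burst_time(total_mb, cache_mb)
```

With a 20 MB cache in this toy model, a 10 MB burst runs entirely at SLC speed, while a 40 MB burst averages out well below it because half of it bypasses the buffer. This is the cliff a sufficiently long benchmark workload exposes.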

PCMark 8 consistency test

We test the steady-state performance using the PCMark 8 consistency test. During this test the entire PCMark 8 storage test, in other words all of the workloads shown on the previous pages, is run 18 times.

Beforehand, the entire disk is completely filled with data twice; this is done twice to ensure that the overprovisioned area is also "occupied". During the eight degradation phases of the test, the entire PCMark 8 benchmark is run with a constant workload of random write commands in between, lasting 10, 15, 20 and so on up to a maximum of 45 minutes in succession. During the steady-state phase, the benchmark is run five more times, each time preceded by 45 minutes of random write workload. During the recovery phase, the drive is allowed to "breathe" again and the benchmark is run five times, each time preceded by 5 minutes of idle time. During that idle time, the SSD can let the garbage collector and other internal optimizations do their job.
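The 18-run schedule described above can be laid out explicitly. The 5-minute step between degradation phases is inferred from "10, 15, 20, up to a maximum of 45 minutes" spread over eight phases.

```python
# Each entry: (phase, minutes of random writes before the run,
#              minutes of idle time before the run)
degradation = [("degrade", load, 0) for load in range(10, 50, 5)]  # 10..45 min
steady = [("steady", 45, 0)] * 5      # always 45 min of writes first
recovery = [("recover", 0, 5)] * 5    # 5 min idle instead, GC can run

schedule = degradation + steady + recovery
assert len(schedule) == 18            # 8 + 5 + 5 benchmark runs
```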

In the graphs below you can see the scores of the benchmarks during the 18 steps. The tabs show the average throughput speed as well as the average read and write access time.

In the heavy degradation tests, the Toshiba XG6 suffers more than the other high-end SSDs. Its steady-state performance sits between 250 and 300 MB/s. However, as soon as the test enters the recovery phase, the XG6 immediately measures up to the other high-end SSDs again.

[Graphs: Bandwidth (higher = better), Average read access time (lower = better), Average write access time (lower = better)]

From the five runs in the steady-state phase, we then determined the average bandwidth, read access time and write access time. These values paint a final picture of SSD performance in a worst-case scenario.

On average we arrive at 273.6 MB/s for the XG6 1TB. This is comparable to, for example, the Intel 760p and the Corsair Force MP500. The read access times in particular are on the high side, while for writing it is the fastest drive, not counting Intel's 3D XPoint-based SSDs.

[Graphs: average Bandwidth, Average read access time and Average write access time in the steady-state phase]


Product discussed in this review

Toshiba XG6 1TB: SSD, 1024 GB, PCI-Express 3.0 x4, Toshiba TC58NCP090GSD controller, 3180 MB/s read, 2960 MB/s write, M.2 2280
