Performance over a period of time (consistency)
Unlike a hard drive, an SSD does not deliver a constant level of performance. When an SSD is under continuous load for a long period of time and/or when it is quite full, performance can drop significantly. The performance level that an SSD can sustain under the most extreme conditions (i.e. the worst-case scenario) is called its steady state performance.
First, some background. Data on an SSD is read and written per so-called 'page', usually 4, 8 or 16 kB in size. Before data can be written, however, cells must first be erased, and that is only possible per block; a block consists of 128, 256 or 512 pages. This asymmetry forces SSDs to use some clever tricks. When a few pages of data need to be deleted, the remaining data in the block must first be copied to another block, after which the entire block can be erased. In practice, this means that SSD controllers postpone write actions as much as possible, then perform them in batches to freshly erased blocks, and carry out erase actions only at regular intervals. At moments when the SSD has nothing to do, the garbage collector built into the controller kicks in and performs erase actions at chip level, consolidating the remaining valid data into full blocks so that as many blocks as possible can be emptied completely. However, if the SSD is used continuously for a long time, without a second's rest, the garbage collector never gets the chance to run in between. At some point no empty blocks are left, and the SSD has to interrupt the execution of commands to perform garbage collection. The result: lower performance.
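The page-write versus block-erase mechanism described above can be sketched in a toy model. All numbers here are illustrative (real drives use blocks of hundreds of pages and far smarter policies), and the model only counts the extra copy work; it does not actually relocate the copied pages.

```python
# A toy model of the write/erase asymmetry: writes happen per page,
# erases per block. Hypothetical, simplified numbers.
PAGES_PER_BLOCK = 4

# Each block is a list of page states: 'empty', 'valid' or 'stale'.
blocks = [['valid', 'valid', 'stale', 'stale'],   # partly stale block
          ['empty'] * PAGES_PER_BLOCK]            # one freshly erased block

def write_page(blocks):
    """Write one page; fall back to garbage collection if no page is empty."""
    for block in blocks:
        if 'empty' in block:
            block[block.index('empty')] = 'valid'
            return 'direct write'
    # No empty page left: pick the block with the fewest valid pages,
    # copy its valid pages out (in a real drive they would be rewritten
    # elsewhere; here we only count the extra work), then erase the block.
    victim = min(blocks, key=lambda b: b.count('valid'))
    copied = victim.count('valid')
    victim[:] = ['empty'] * PAGES_PER_BLOCK      # block-level erase
    victim[0] = 'valid'                          # the new write itself
    return f'garbage collection (copied {copied} pages first)'

for _ in range(5):
    print(write_page(blocks))
```

The first four writes land directly in empty pages; the fifth forces the controller to copy and erase first, which is exactly the slowdown the article describes.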
This effect is amplified when the SSD is full: the number of completely empty blocks that data can be written to directly is then by definition limited. Every SSD is therefore over-provisioned by a few percent, meaning it physically contains more memory than the operating system can use. For example, a 512 GB SSD typically has 512 binary gigabytes (512 x 1024 x 1024 x 1024 = 549,755,813,888 bytes) of flash on board, while the operating system only sees 512 GB calculated with 1000 bytes per kilobyte (512 x 1000 x 1000 x 1000 = 512,000,000,000 bytes, or 476.8 binary gigabytes). The difference of roughly 7.4% is what the garbage collector can keep empty, even when the SSD is written completely full. More expensive SSDs are usually over-provisioned far more generously, which means they need to fall back on garbage collection less often while processing data.
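The over-provisioning arithmetic above is easy to verify:

```python
# Over-provisioning of a typical 512 GB SSD: binary gigabytes of physical
# flash versus decimal gigabytes exposed to the operating system.
physical_bytes = 512 * 1024**3   # 512 binary gigabytes of flash on board
visible_bytes  = 512 * 1000**3   # 512 decimal gigabytes visible to the OS

spare = physical_bytes - visible_bytes
print(f"Physical capacity: {physical_bytes:,} bytes")
print(f"Visible capacity:  {visible_bytes:,} bytes")
print(f"Spare area: {spare:,} bytes "
      f"({spare / visible_bytes:.1%} of the visible capacity)")
```

The spare area works out to 37,755,813,888 bytes, about 7.4% of the visible capacity.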
TLC, SLC, MLC
There is another reason why SSD performance can drop during sustained workloads after a few seconds, minutes or even longer. Many consumer SSDs use so-called TLC memory, which stores three bits per cell. That is useful for keeping costs down, but writing to TLC memory is much slower than to SLC or MLC (1-bit or 2-bit per cell) memory. Manufacturers mitigate this with a buffer: a separate memory chip and/or a part of the flash memory that is addressed as SLC and can therefore be written more quickly. The idea is that write actions first go to this buffer and are written to the slower TLC later. However, if a workload lasts so long that the buffer fills up, data must be written directly to the TLC, which results in lower throughput rates and higher access times.
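The resulting throughput cliff can be sketched as follows; the cache size and both speeds are made-up illustrative numbers, not figures for any specific drive.

```python
# A minimal sketch of SLC-cache behaviour: writes are fast until the
# buffer fills, then drop to the native TLC speed. All numbers hypothetical.
SLC_CACHE_GB  = 20     # assumed size of the fast pseudo-SLC buffer
SLC_SPEED_MBS = 2000   # assumed write speed into the SLC cache
TLC_SPEED_MBS = 500    # assumed direct-to-TLC write speed

def write_speed(gb_written_so_far):
    """Sustained write speed at a given point in a long write workload."""
    return SLC_SPEED_MBS if gb_written_so_far < SLC_CACHE_GB else TLC_SPEED_MBS

for point in (0, 10, 19, 20, 50):
    print(f"after {point:2d} GB written: {write_speed(point)} MB/s")
```

In reality the drive also drains the buffer to TLC during idle moments, so the cliff only appears when writes arrive faster than the buffer can be emptied.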
PCMark 8 consistency test
We test steady state performance using the PCMark 8 consistency test, in which the entire PCMark 8 storage test, i.e. all ten traces mentioned earlier, is run 18 times.
Beforehand, the entire disk is filled with data twice over; it needs to be done twice to ensure that the over-provisioned area is also 'occupied'. During the eight degradation phases of the test, the entire PCMark 8 benchmark is run, each time preceded by a constant workload of random writes lasting 10, 15, 20, and so on up to a maximum of 45 minutes. During the steady state phase, the benchmark is run five more times, each time preceded by 45 minutes of random writes. During the recovery phase, the drive is allowed to catch its breath: the benchmark is run another five times, with 5 minutes of idle time before each run. In that time, the SSD can let the garbage collector and other internal optimisations do their job.
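The 18-run schedule described above can be written out as data. The phase labels are our own; the timings follow the text.

```python
# Each run is (phase, minutes of random writes beforehand, minutes of idle
# time beforehand), following the PCMark 8 consistency test as described.
schedule  = [("degradation", m, 0) for m in range(10, 46, 5)]  # 10..45 min
schedule += [("steady state", 45, 0)] * 5                      # 45 min each
schedule += [("recovery", 0, 5)] * 5                           # 5 min idle

print(len(schedule), "benchmark runs in total")  # prints: 18
```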
On the product pages, you can view the results for each tested SSD in detail: for each of the eighteen runs, you will find both the average throughput and the average access times, for read as well as write actions. The graphs on this page show the average throughput and the average read and write access times of the five steady state runs. These values paint a picture of SSD performance in a worst-case scenario.
The results show large differences between budget SSDs, which usually do not get beyond a few dozen MB/s, and high-end models, which score between 200 and 400 MB/s. The Intel Optane once again sits comfortably at the top with a result of over 1700 MB/s. Among the more conventional PCIe SSDs, the WD Black 2018 and the Samsung 970 Pro stand up best to this heavy test, while the best SATA SSDs are the 860 Pro and Evo, the WD Blue 3D/SanDisk Ultra 3D and the Crucial MX500.
- Average read access time
- Average write access time