New on Hardware.Info: Frametime tests for GPUs

A different and more accurate method for measuring graphics card performance.


Dissecting the second

Let's first look at what fps actually means. 30 fps means that on average 30 frames are processed per second. If you're talking about the lowest reading of 30 fps, it means that somewhere in the benchmark there was a second in which only 30 frames were calculated. There are different kinds of 30 fps, in other words.

A second with 30 frames can consist of 30 frames that each took 33.3 milliseconds to compute: 30 x 33.3 ≈ 1,000 milliseconds, or one second. The other extreme would be that 29 frames are calculated very quickly, say 20 ms per frame, while one frame is extremely slow at 420 ms. Do the math: 29 x 20 + 1 x 420 is also 1,000 milliseconds, or one second.

The traditional way of benchmarking would show a result of 30 fps in both cases, leading you to conclude that both are equally fast. In reality the first would be experienced as smooth, while in the second case you would see a clear stutter. We're not creating these scenarios out of thin air, either: in practice, frame processing times vary considerably and spike regularly. One cause can be that, at a certain moment, a very large amount of texture data has to be written to the GPU's memory.

Two entirely different scenarios, but both with an average of 30 fps. It should be self-explanatory which one is the smoother experience.

To find out whether a graphics card will run a particular game completely stutter-free, you have to look at the time it takes for each individual frame to be produced. When all frames are processed within about 30 milliseconds, you can say that a game is 100 percent smooth and without hiccups.

A tool like Fraps makes it possible to record individual frametimes during benchmarking, and this can provide some interesting revelations.
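Once you have a list of per-frame times, judging smoothness is a matter of counting how many frames stay under the threshold. A minimal sketch, using a hypothetical sample rather than a real Fraps log:

```python
def smoothness(frametimes_ms, threshold_ms=30.0):
    """Return the share of frames rendered within threshold_ms."""
    within = sum(1 for t in frametimes_ms if t <= threshold_ms)
    return within / len(frametimes_ms)

# Hypothetical second of gameplay: mostly 16.7 ms frames with two spikes.
sample = [16.7] * 55 + [45.0, 120.0] + [16.7] * 3
print(f"{smoothness(sample):.1%} of frames under 30 ms")
```

A run that is "100 percent smooth" in the article's sense would return 1.0; the two spikes in this sample are exactly the kind of stutter an fps average would never reveal.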
