Screen Tearing

Published April 19, 2014 By Joe Savage

Screen tearing is a visual artifact in which parts of two or more frames are shown in a single screen draw. It's an interesting effect with direct implications for graphics performance, so I'm going to explain some of its causes in this post.

Simulated Screen Tearing (Credit to Vanessaezekowitz of Wikipedia)

The refresh rate of a display - commonly specified in Hertz (Hz) - is the rate at which the image on the display is updated. If a monitor has a refresh rate of 75 Hz, it updates the image it displays 75 times every second.

Screen tearing can be caused by a number of things, but the most common cause in the context of computer-generated graphics is a video feed that is out of sync with the display's refresh rate. If the display requests the frame data for a refresh while that data is only part-way through being updated, it ends up drawing parts of two different frames - and thus we have image tearing. I've worked up a little demo to demonstrate this sync issue between the video buffer and the display's refresh rate:

See the Pen Single-Buffered Screen Tearing by Joe Savage (@joesavage) on CodePen.

In the above example, a refresh rate of 0.26 Hz and a framerate of 0.2 FPS (Frames Per Second) may initially seem close enough to display correctly, but watching the demo run makes it clear that the two are completely out of sync, and so screen tearing occurs regularly.

The real issue with this example is that it's a 'single-buffered' application: we write our frames directly to the buffer that is fed to the display, so when the display refreshes its image, it risks reading a frame that isn't finished yet. In a real application the framerate can change dramatically depending on circumstances, which worsens this issue in single-buffered applications and makes screen tearing even more unpredictable.
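To make the race concrete, here's a minimal C sketch of a single-buffered setup. Every name in it - the buffer, the 'renderer', the 'display' - is a toy stand-in rather than a real graphics API; the point is simply that a refresh can land while a frame is only part-written:

```c
#include <stdio.h>

#define WIDTH 8

/* Toy single-buffered setup: the renderer writes frames directly into
 * the same buffer the "display" scans out from. */
static char framebuffer[WIDTH];

/* Simulate a frame that may be only partially written when a refresh lands. */
static void render_frame(char colour, int pixels_completed) {
    for (int i = 0; i < pixels_completed; i++)
        framebuffer[i] = colour;
}

/* The display scans out whatever happens to be in the buffer right now. */
static void display_refresh(void) {
    for (int i = 0; i < WIDTH; i++)
        putchar(framebuffer[i]);
    putchar('\n');
}

int main(void) {
    render_frame('A', WIDTH); /* frame A completes in time */
    display_refresh();        /* clean scan-out: AAAAAAAA */

    render_frame('B', 3);     /* frame B is only three pixels in... */
    display_refresh();        /* ...when a refresh fires: BBBAAAAA */
    return 0;
}
```

The second scan-out mixes frames A and B - exactly the torn image from the demo.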

Double Buffering

One way to alleviate some of these issues is to use double buffering. This method of drawing consists of writing frame data to a back buffer, then copying it over to the primary buffer (the video feed to the display) only once a frame is complete. This way the user will never see any half-completed frames, right? Wrong.

While screen tearing tends to be reduced significantly with double buffering, copying data takes time, and so it's entirely possible that the monitor will refresh its image halfway through the buffer-copying process! This issue can be visualised exactly like single-buffered tearing - in the previous demo, just imagine that the pixels being written into the primary buffer aren't generated by the graphics card directly, but are instead placed in the buffer sequentially as the result of a copy operation from a secondary buffer.
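The same failure in sketch form: a hypothetical double-buffered present where a refresh lands while the back-to-front copy is only part-way through. The split memcpy is just a stand-in for a copy operation being interrupted:

```c
#include <stdio.h>
#include <string.h>

#define WIDTH 8

/* Toy double-buffered setup: draw into 'back', then copy the finished
 * frame into 'front', which the display scans out from. */
static char front[WIDTH];
static char back[WIDTH];

static void display_refresh(void) {
    for (int i = 0; i < WIDTH; i++)
        putchar(front[i]);
    putchar('\n');
}

int main(void) {
    memset(back, 'A', WIDTH);   /* render frame A off-screen */
    memcpy(front, back, WIDTH); /* present: copy back -> front */
    display_refresh();          /* clean scan-out: AAAAAAAA */

    memset(back, 'B', WIDTH);   /* render frame B off-screen */
    memcpy(front, back, 3);     /* the copy is only part-way through... */
    display_refresh();          /* ...when a refresh lands: BBBAAAAA */
    memcpy(front + 3, back + 3, WIDTH - 3); /* the copy finishes too late */
    return 0;
}
```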

Double Buffering + VSync

A further alleviation tactic is to use VSync. Short for 'vertical synchronisation', VSync works by only copying the back buffer to the primary buffer immediately after the display has refreshed, in the hope that the copy completes before the next refresh begins. This dramatically reduces screen tearing, but it does have its problems.
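Structurally, double buffering plus VSync just moves the copy to the moment right after a refresh. The wait_for_vblank() below is a hypothetical stand-in for whatever synchronisation primitive the platform actually provides - it's the shape of the loop that matters:

```c
#include <stdio.h>
#include <string.h>

#define WIDTH 8

static char front[WIDTH];
static char back[WIDTH];

/* Hypothetical stand-in for the platform's vertical-blank wait; a real
 * implementation would block until the display finishes a refresh. */
static void wait_for_vblank(void) {
    /* stub: imagine this sleeps until the next refresh boundary */
}

/* With VSync, the back -> front copy only happens just after a refresh,
 * so it should complete before the display reads 'front' again. */
static void present(void) {
    wait_for_vblank();
    memcpy(front, back, WIDTH);
}

int main(void) {
    memset(back, 'B', WIDTH); /* render a complete frame off-screen */
    present();                /* the copy is synchronised to the refresh */
    printf("%.*s\n", WIDTH, front); /* BBBBBBBB - no torn output */
    return 0;
}
```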

If you have a high-framerate application, VSync will usually work perfectly. Your framerate is limited to the refresh rate of your monitor, but that's fine because your monitor can only display at that rate anyway. The issue comes when you achieve a lower framerate than your display's refresh rate.

Say you were achieving 60 FPS on a 75 Hz monitor - that means the frame buffer is updating at 80% of the refresh rate. If VSync is enabled, frames can only be presented on a refresh boundary, so the effective framerate must be the refresh rate divided by a whole number. In this case, the application misses the 'deadline' on every other cycle, so we end up with half the refresh rate as our framerate: 37.5 FPS. This is significantly less than the 60 FPS the graphics card can achieve.
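This arithmetic generalises: under double-buffered VSync, each frame occupies a whole number of refresh intervals, so the effective framerate works out to the refresh rate divided by ceil(refresh rate / render rate). A quick sketch to check the numbers above:

```c
#include <stdio.h>
#include <math.h>

/* Effective framerate under double-buffered VSync: each frame has to wait
 * for the next refresh boundary, so it occupies a whole number of refresh
 * intervals. Compile with -lm for ceil(). */
static double vsync_fps(double refresh_hz, double render_fps) {
    return refresh_hz / ceil(refresh_hz / render_fps);
}

int main(void) {
    printf("%.1f FPS\n", vsync_fps(75.0, 60.0)); /* 37.5 - the case above */
    printf("%.1f FPS\n", vsync_fps(75.0, 80.0)); /* 75.0 - capped at refresh */
    return 0;
}
```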

The TL;DR for double-buffered VSync: if you consistently achieve a higher FPS than your refresh rate, it can be a good way to reduce screen tearing, but if your FPS drops below the display's refresh rate, VSync may reduce it significantly further.

Triple Buffering + VSync

The holy grail of this whole mess is usually triple buffering with VSync. The limitation of VSync with double buffering is that the framerate may drop significantly due to the time spent waiting for the right moment to copy the secondary buffer into the primary buffer. Waiting where it can be avoided is rarely a fantastic idea in computer science, so a solution here is simply to add another buffer.

With three buffers, the renderer can alternate between the two back buffers, so one always holds a complete frame while the other is in progress. This means that just after a monitor refresh, whichever back buffer is currently complete can be copied to the primary buffer - providing the advantages of VSync without the framerate disadvantage previously discussed.
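A rough sketch of that arrangement, with the renderer alternating between two back buffers and a 'complete' index tracking whichever one last finished. As before, all the names are made up for illustration:

```c
#include <stdio.h>
#include <string.h>

#define WIDTH 8

/* Toy triple-buffered setup: two back buffers plus the front buffer the
 * display scans out from. */
static char front[WIDTH];
static char backs[2][WIDTH];
static int complete = -1; /* index of the most recently completed back buffer */
static int drawing = 0;   /* index the renderer is currently writing into */

/* Finish a frame, then immediately start drawing into the other buffer -
 * the renderer never blocks waiting for the display. */
static void finish_frame(char colour) {
    memset(backs[drawing], colour, WIDTH);
    complete = drawing;
    drawing = 1 - drawing;
}

/* Just after a refresh, present whichever back buffer last completed. */
static void on_vblank(void) {
    if (complete >= 0)
        memcpy(front, backs[complete], WIDTH);
    printf("%.*s\n", WIDTH, front);
}

int main(void) {
    finish_frame('A');
    finish_frame('B'); /* rendered without waiting for a vblank */
    on_vblank();       /* presents B, the newest complete frame: BBBBBBBB */
    return 0;
}
```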

The main disadvantage of triple buffering, where it's available in applications, is that it takes up more memory - in the case of graphics cards, more VRAM (Video RAM). VRAM is fast memory used directly by the graphics card, so in some cases triple buffering could cause a significant FPS drop if higher-latency system RAM has to be used to make up for the lack of native VRAM. Typically this isn't a huge issue, as the third buffer is often small in comparison to the total VRAM, but VSync with only two buffers may provide a better framerate in some very niche cases! Swings and roundabouts, eh?