G-SYNC 101: Input Lag & Test Methodology


Test Setup

High Speed Camera: Casio Exilim EX-ZR200 w/1000 FPS 224x64px video capture
Display: Acer Predator XB252Q 240Hz w/G-SYNC (1920×1080)
Mouse: Razer Deathadder Chroma modified w/external LED

Nvidia Driver: 381.78
Nvidia Control Panel: Default settings (“Prefer maximum performance” enabled)

OS: Windows 10 Home 64-bit (Creators Update)
Motherboard: ASRock Z87 Extreme4
Power Supply: EVGA SuperNOVA 750 W G2
Heatsink: Hyper 212 Evo w/2x Noctua NF-F12 fans
CPU: i7-4770K @4.2GHz w/Hyper-Threading enabled (8 logical cores unparked: 4 physical/4 virtual)
GPU: EVGA GTX 1080 FTW GAMING ACX 3.0 w/8GB VRAM & 1975MHz Boost Clock
Sound Card: Creative Sound Blaster Z (optical audio)
RAM: 16GB G.SKILL Sniper DDR3 @1866 MHz (dual-channel: 9-10-9-28, 2T)
SSD (OS): 256GB Samsung 850 Pro
HDD (Games): 5TB Western Digital Black 7200 RPM w/128 MB cache

Test Game #1: Overwatch w/lowest settings, “Reduced Buffering” enabled
Test Game #2: Counter-Strike: Global Offensive w/lowest settings, “Multicore Rendering” disabled

Introduction

The input lag testing method used in this article was pioneered by Blur Busters’ Mark Rejhon, and originally featured in his 2014 Preview of NVIDIA G-SYNC, Part #2 (Input Lag) article. It has since become the standard among testers, and is used by a variety of sources across the web.

Middle Screen vs. First On-screen Reaction

In my original input lag tests featured in this thread on the Blur Busters Forums, I measured middle screen (crosshair-level) reactions at a single refresh rate (144Hz), and found that both V-SYNC OFF and G-SYNC, at the same framerate within the refresh rate, delivered frames to the middle of the screen at virtually the same time. This still holds true.

[Image: Blur Busters’ G-SYNC 101: Input Lag & Optimal Settings]

However, while middle screen measurements are a common and fully valid input lag testing method, they are limited in what they can reveal: because they do not account for the first on-screen reaction, they can mask the subtle and not-so-subtle differences in frame delivery between V-SYNC OFF and the various syncing solutions, which is why I opted to capture the entire screen this time around.

Due to the differences between the two test methods, V-SYNC OFF results generated from first on-screen measurements, especially at lower refresh rates (for reasons that will be explained later), can appear to have up to twice the input lag reduction of middle screen readings:

[Diagram: Blur Busters’ G-SYNC 101: Middle Screen vs. First On-screen Reaction]

As the diagram shows, this is because measurement of the first on-screen reaction begins at the start of the frame scan, whereas measurement of the middle screen reaction begins at crosshair-level, where, with G-SYNC, the in-progress frame scan is already half complete, and where, with V-SYNC OFF, it can be at varying stages of completion, depending on the given refresh rate/framerate offset.
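
To put rough numbers to that difference, here is a minimal sketch (Python, with illustrative refresh rates; the simplifying assumption is that the scanout spans exactly one refresh period) of how much scanout time a crosshair-level reading already includes before the measured pixels are even reached:

```python
# Illustrative sketch: how far into the scanout a middle screen (crosshair-level)
# reading begins, assuming the scanout spans one full refresh period and the
# crosshair sits at ~50% screen height. Actual scanout speed depends on timings.
def scanout_ms(refresh_hz):
    """Full top-to-bottom scanout time in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240):
    full = scanout_ms(hz)
    to_crosshair = full / 2  # scan is ~half complete at crosshair level
    print(f"{hz}Hz: full scanout ~{full:.1f}ms, crosshair reached after ~{to_crosshair:.1f}ms")
```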

When V-SYNC OFF is directly compared to FPS-limited G-SYNC at crosshair-level, even with V-SYNC OFF’s framerate at up to 3x the refresh rate, middle screen readings are virtually a wash (the results in this article included). But, as will be detailed further on, V-SYNC OFF can, for lack of a better term, “defeat” the scanout by beginning the next frame scan within the previous scanout.

With V-SYNC OFF at 2 FPS below the refresh rate, for instance (the scenario used to compare V-SYNC OFF directly against G-SYNC in this article), the tearline will continuously roll downwards, which means that, when measured by first on-screen reactions, its advantage over G-SYNC can be anywhere from 0 to 1/2 frame, depending on the ever-fluctuating position of the tearline between samples. With middle screen readings, the initial position of the tearline(s), and thus its advantage, is effectively ignored.
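
As a rough illustration of that rolling tearline (not a measurement from this article), the sketch below assumes perfectly even frame pacing at 2 FPS below a 144Hz refresh rate and a scanout that spans one refresh period; under those assumptions, the tear position drifts down the screen by a small fixed fraction each frame:

```python
# Illustrative sketch: tearline drift with V-SYNC OFF at 2 FPS below the refresh
# rate, assuming perfectly even frame pacing (real frametimes fluctuate) and a
# scanout that spans one refresh period.
refresh_hz = 144.0
framerate = refresh_hz - 2           # 142 FPS
scan_period = 1.0 / refresh_hz       # seconds per top-to-bottom scanout
frame_period = 1.0 / framerate       # seconds per rendered frame

# Fraction of screen height the tearline moves down per frame:
drift_per_frame = (frame_period - scan_period) / scan_period  # ~1.4% at 144Hz/142 FPS

position = 0.0  # tearline position as a fraction of screen height
for frame in range(1, 6):
    position = (position + drift_per_frame) % 1.0
    print(f"frame {frame}: tearline at ~{position * 100:.1f}% of screen height")
```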

These differences should be kept in mind when inspecting the upcoming results, with the method featured in this article being the best case scenario for V-SYNC OFF, and the worst case scenario for synced solutions (G-SYNC included) when directly compared to V-SYNC OFF.

Test Methodology

To further facilitate the first on-screen reaction method, I’ve changed sample capture from muzzle flash to strafe for Overwatch (credit goes to Battle(non)sense for the initial suggestion) and to look for CS:GO, both of which trigger horizontal updates across the entire screen. The strafe/look mechanics are also more consistent from click to click, and less prone to the built-in variable delay experienced from shot to shot with the previous method.

To ensure a proper control environment for testing, and to rule out as many variables as possible, the Nvidia Control Panel settings (except for “Power management mode,” set to “Prefer maximum performance”) were left at defaults, all background programs were closed, and all overlays were disabled, as were the Creators Update’s newly introduced “Game Mode” and per-.exe “fullscreen optimizations” compatibility option, along with the existing “Game bar” and “Game DVR” options.

To guarantee extraneous mouse movements didn’t throw off input reads during rapid clicks, masking tape was placed over the sensor of the modified test mouse (Deathadder Chroma), and a second mouse (Deathadder Elite) was used to navigate the game menus and get into place for sample capture.

To emulate lower maximum refresh rates on the native 240Hz Acer Predator XB252Q, “Preferred refresh rate” was set to “Application-controlled” when G-SYNC was enabled, and the refresh rate was manually adjusted as needed in the game options (Overwatch), or on the desktop (CS:GO) before launch.

And, finally, to validate and track the refresh rate, framerate, and the syncing solution in use for each scenario, the in-game FPS counter, Nvidia Control Panel’s G-SYNC Indicator, and the display’s built-in refresh rate meter were active at all times.

Testing was performed with a Casio Exilim EX-ZR200 capable of 1000 FPS high speed video capture (accurate within 1ms), and a Razer Deathadder Chroma modified with an external LED (credit goes to Chief Blur Buster for the mod), which lights up on left click, and has a reactive variance of <1ms.

[Image: Blur Busters’ G-SYNC 101: Input Lag & Optimal Settings]

To compensate for the camera’s low 224×64 pixel video resolution, a bright image with stark contrast between foreground and background, and thin vertical elements that could easily betray horizontal movement across the screen, were needed for reliable discernment of first reactions after click.

For Overwatch, Genji was used due to his smaller viewmodel and ability to scale walls, and an optimal spot on the game’s Practice Range was found that met the aforementioned criteria. Left click was mapped to strafe left, in-game settings were at the lowest available, and “Reduced Buffering” was enabled to ensure the lowest input latency possible.

[Image: Blur Busters’ G-SYNC 101: Input Lag & Optimal Settings]

For CS:GO, a custom map provided by the Blur Busters Forum’s lexlazootin was used, which strips all unnecessary elements (time limits, objectives, assets, viewmodel, etc.), and contains a lone white square suspended in a black void that, when positioned just right, allows the slightest reactions to be accurately spotted via the single vertical black and white separation. Left click was mapped to look left, in-game settings were at the lowest available, and “Multicore Rendering” was disabled to ensure the lowest input latency possible.

For capture, the Acer Predator XB252Q (LED fixed to its left side) was recorded as the mouse was clicked a total of ten times. To average out differences between runs, this process was repeated four times per scenario, and each game was restarted after each run.

Once all scenarios were recorded, the .mov format videos, containing ten samples each, were inspected in QuickTime using its built-in frame counter and single-frame stepping function via the arrow keys. Each video was jogged through until the LED lit up, at which point the frame number was input into an Excel spreadsheet. Frames (which, thanks to 1000 FPS video capture, each represent a literal 1ms) were then stepped through until the first reaction was spotted on-screen, where, again, the frame number was input into the spreadsheet. This generated the total delay in milliseconds from left click to first on-screen reaction, and the process was repeated per video, ad nauseam.
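
Since each frame of 1000 FPS video is worth a literal 1ms, the spreadsheet math per sample reduces to a frame-number subtraction. A minimal sketch of that step, using made-up frame numbers purely for illustration:

```python
# Minimal sketch of the per-sample math: at 1000 FPS capture, one video frame
# equals 1ms, so latency = (frame of first on-screen reaction) - (frame LED lit).
# The frame numbers below are made up for illustration.
samples = [
    (1204, 1232),  # (LED frame, first on-screen reaction frame)
    (2610, 2641),
    (4077, 4103),
]

latencies_ms = [reaction - led for led, reaction in samples]
print("per-sample click-to-reaction latency (ms):", latencies_ms)
print(f"average: {sum(latencies_ms) / len(latencies_ms):.1f}ms")
```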

All told, 508 videos weighing in at 17.5GB, with an aggregated (albeit slow-motion) 45 hour and 40 minute runtime, were recorded across 2 games and 6 refresh rates, containing a total of 42 scenarios, 508 runs, and 5080 individual samples. My original Excel spreadsheet is available for download here, and can also be viewed online here.

[Image: Blur Busters’ G-SYNC 101: Input Latency & Optimal Settings]

To preface, the following results and explanations assume that the native resolution w/default timings is in use on a single monitor in exclusive fullscreen mode, paired with a single-GPU desktop system that can sustain the framerate above the refresh rate at all times.

This article does not seek to measure the impact of input lag differences incurred by the display, input device, CPU or GPU overclocks, RAM timings, disk drives, drivers, BIOS, OS, or in-game graphical settings. And the baseline numbers represented in the results are not indicative of, and should not be expected to be replicable on, other systems, which will vary in configuration, specs, and the games being run.

This article seeks only to measure the impact that V-SYNC OFF, G-SYNC, V-SYNC, and Fast Sync, paired with various framerate limiters, have on frame delivery and input lag, and the differences between them; results which are replicable across setups.

+/- 1ms differences between identical scenarios in the following charts are usually within the margin of error, while +/- 1ms differences between separate scenarios are usually measurable, and the error margin may not apply. And finally, all mentions of “V-SYNC (NVCP)” in the denoted scenarios signify that the Nvidia Control Panel’s “Vertical sync” entry was set to “On,” while “V-SYNC OFF” or “G-SYNC + V-SYNC ‘Off’” signify that “Use the 3D application setting” was applied w/V-SYNC disabled in-game.

So, without further ado, onto the results…

Input Lag: Not All Frames Are Created Equal

When it is said that there is “1 frame” or “2 frames” of delay, what does that actually mean? In this context, a “frame” signifies the total time a rendered frame takes to be displayed completely on-screen. The worth of a single frame is dependent on the display’s maximum native refresh rate. At 60Hz, a frame is worth 16.6ms, at 100Hz: 10ms, 120Hz: 8.3ms, 144Hz: 6.9ms, 200Hz: 5ms, and 240Hz: 4.2ms, continuing to decrease in worth as the refresh rate increases.
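
Those per-refresh-rate figures all come from dividing 1000ms by the refresh rate (the article quotes them to one decimal place); a quick sketch that reproduces them:

```python
# A frame's "worth" in milliseconds at a given maximum refresh rate.
def frame_worth_ms(refresh_hz):
    return 1000.0 / refresh_hz

for hz in (60, 100, 120, 144, 200, 240):
    print(f"{hz}Hz: {frame_worth_ms(hz):.2f}ms per frame")
```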

With double buffer V-SYNC, there is typically a 2 frame delay when the framerate exceeds the refresh rate, but this isn’t always the case. Overwatch, even with “Reduced Buffering” enabled, can have up to 4 frames of delay with double buffer V-SYNC engaged.

[Chart: Blur Busters’ G-SYNC 101: Input Lag & Optimal Settings]

The chart above depicts anywhere from 3 to 3 1/2 frames of added delay. At 60Hz, this is significant, at up to 58.1ms of additional input lag. At 240Hz, where a single frame is worth far less (4.2ms), a 3 1/2 frame delay is comparatively insignificant, at up to 14.7ms.
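
Those bracketing figures follow directly from multiplying the frame count by the per-frame worth quoted in the text, e.g. 3 1/2 frames at 60Hz versus 240Hz:

```python
# Reproducing the article's arithmetic: 3.5 frames of delay multiplied by the
# per-frame worth quoted in the text (16.6ms @60Hz, 4.2ms @240Hz).
per_frame_worth_ms = {60: 16.6, 240: 4.2}

for hz, worth in per_frame_worth_ms.items():
    print(f"3.5 frames of delay @ {hz}Hz = {3.5 * worth:.1f}ms of added input lag")
```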

In other words, a “frame” of delay is relative to the refresh rate, and dictates how much or how little of a delay is incurred per frame; a relationship that should be kept in mind going forward.



Comments For “G-SYNC 101”

majkool

1. So with a 390Hz monitor, input lag with G-SYNC + V-SYNC and a 386 FPS game limiter is nearly the same as with G-SYNC/V-SYNC off and fps_max 0 in CSGO.
2. I know that with a 240Hz monitor there are a few ms of delay with these options, but I think it’s still better to get accustomed to it and play at a pro level?

AgentDaumer

G-SYNC reverts to V-SYNC behavior when it can no longer adjust the refresh rate to the framerate, and 2-6 frames (typically 2 frames; approximately an additional 33.2ms @60Hz, 20ms @100Hz, 13.8ms @144Hz, etc.) of delay are added as rendered frames begin to over-queue in both buffers, ultimately delaying their appearance on-screen.

I’m surprised that a framerate limit just 2-3 FPS below the monitor’s refresh rate can actually prevent G-Sync from hitting that threshold and reverting to V-Sync behaviour. Using RTSS, I noticed that a 140 FPS cap (in-game) will still allow the framerate to fluctuate and reach pretty high values, often higher than 160 FPS. Even a 120 FPS cap still exceeds 144 every couple of seconds.
It happens intermittently, of course; the frametimes are constantly fluctuating (as they should) and the average framerate still respects the limit.
However, wouldn’t those big spikes mean that G-Sync is frequently engaging and disengaging? Am I missing something?

Unneverseen

How is the latency for competitive FPS games with a 141 FPS limit (I have a 144Hz monitor) and G-SYNC and NVCP V-SYNC on? Should I disable the FPS limit and NVCP V-SYNC for less latency? Also, do you set Max Frame Rate, Low Latency Mode, and V-SYNC globally or on a per-game basis? I’ve also heard that whether an FPS limit improves latency depends on the game; in some it might give worse latency, in others better. Thanks for the article.

YormFatigue

Hello Jorimt, thanks for your article, it’s great; I now know more about this kind of technology and am less confused by it.
But anyway, I want to confirm something, and hope you will answer it.
I use a 144Hz FreeSync monitor, which, based on what I’ve learned, works essentially the same as G-SYNC. When I play games, one thing I absolutely want to get rid of is microstutter; even after using VRR, I still have it to this day. Reading your article, I think I know the cause, and I know it may be unavoidable, but I want to be clear on it. So, these are my cases:
– In some games I suffer occasional microstutter and sometimes occasional tearing, even with FreeSync on; turning on V-SYNC makes that “occasional tearing” become microstutter as well, even when using an FPS limiter to cap the FPS far lower than what my specs can achieve. My question is, what causes that behavior? Is it the game’s problem, creating large frametime variance and frametime spikes? As additional info, some games I’ve played are kind of weird in that the lower the FPS is, the smoother the game is.
– There are also games that will present microstutter no matter what the FPS is. Take The Witcher 3 as an example: my specs can reach 60+ FPS, so it seemed like a good idea to cap it at 60 (I do that in almost every game anyway), and it still has microstutter. Then I tried the weird approach I mentioned above and lowered the FPS as far as I could stand playing (well, 30 FPS), and still no good; the microstutter is still there. So is it my system’s fault or the game’s fault?
And one last thing I want to share, based on what I’ve experienced so far (not read or learned): I’m pretty sure an FPS limiter can somehow help FreeSync in terms of smoothness, even below the targeted FPS. I’m talking about AMD Framerate Target Control (FRTC for short), which is the smoothest framerate limiter (and comes with the biggest latency). In my test, in the same game and the same place in the game, I used 4 FPS limiters: in-game, Chill with max and min equal, RTSS, and FRTC. With FRTC there is almost no microstutter no matter the FPS; Chill and RTSS are pretty similar to each other (both come with microstutter, though Chill has less than RTSS, and RTSS seems to have slightly smoother images when rotating the camera); and in-game is the father of microstutter.
I find that interesting, so I turned off FreeSync to see what would happen. Well, with FRTC, tearing presents as one tearline which is incredibly stable if the FPS can reach the target; even below it, tearing is still pretty stable. With Chill, tearing almost always presents as two fairly small tears at two positions on the monitor, one upper and one lower. With RTSS, tearing gathers at one position on the monitor and “dances” at that point. This behavior applies to almost all games, or so I’ve seen so far, and with that information, I think it is not silly to assume that FPS limiters somehow affect or help FreeSync or G-SYNC.
Thanks for taking the time to go through my long comment; there are probably mistakes, as English is not my primary language, and I hope I did not make too many.
And Happy New Year.

nidzo

Hello, so I’ve been trying to get my game to run more smoothly, and I have a few questions about how G-SYNC and Nvidia Reflex and all that works. I’ve been capping my FPS to 141 with Reflex on and G-SYNC, and there’s no problem for the most part, but when my FPS starts to fluctuate (100-141 FPS) it feels very choppy. So I was wondering, would it make more sense to limit my FPS to something like 100 so that I’m still in G-SYNC range but my FPS is more consistent, or should I just keep it capped at 141 and let the game reach that FPS every once in a while? I’ve been trying for a while to get this game as smooth as possible on my poopy machine lmao. Thanks.
