G-SYNC 101: G-SYNC vs. V-SYNC OFF


Beyond the Limits of the Scanout

It’s already been established that single, tear-free frame delivery is limited by the scanout, and V-SYNC OFF can defeat it by allowing more than one frame scan per scanout. That said, how much of an input lag advantage can be had over G-SYNC, and how high must the framerate be sustained above the refresh rate to diminish tearing artifacts and justify the difference?
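
For a rough sense of scale, here is a minimal illustrative sketch (not from the original tests) of how long a single scanout takes at common refresh rates, assuming the scanout occupies essentially the whole refresh cycle and ignoring the brief VBLANK between scanouts; this interval is the ceiling that single, tear-free frame delivery is bound to:

```python
# Illustrative sketch: approximate duration of one full scanout at common
# refresh rates. Assumes the scanout spans essentially the entire refresh
# cycle (the brief VBLANK between scanouts is ignored for simplicity).

def scanout_ms(refresh_hz: float) -> float:
    """Approximate time (in ms) to scan one full frame onto the panel."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240, 360):
    print(f"{hz:>3} Hz -> ~{scanout_ms(hz):.2f} ms per scanout")
```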

[Charts: Blur Buster's G-SYNC 101: Input Latency & Optimal Settings]

Quite high. Counting first on-screen reactions, V-SYNC OFF already has a slight input lag advantage over G-SYNC at the same framerate (up to 1/2 frame, and more so the lower the refresh rate), but it takes a considerable increase in framerate above the given refresh rate to widen the gap to significant levels. And while the reductions may look significant in bar chart form, even with framerates in excess of 3x the refresh rate, and measured at middle screen (crosshair-level) only, V-SYNC OFF has a limited advantage over G-SYNC in practice. Most of that advantage falls in areas that, one could argue, are comparatively useless to the average player; having something such as a viewmodel’s wrist update 1-3ms faster with V-SYNC OFF is of little practical consequence.
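
To put that "up to 1/2 frame" figure into milliseconds, here is an illustrative calculation (a sketch, not part of the original measurement data): half of one refresh cycle, which is the most V-SYNC OFF can save over G-SYNC at the same framerate, and which shrinks as the refresh rate rises.

```python
# Illustrative sketch: half of one refresh cycle, the upper bound on what
# V-SYNC OFF can save over G-SYNC at the same framerate.

def half_frame_ms(refresh_hz: float) -> float:
    """Half of one refresh cycle, in milliseconds."""
    return 0.5 * 1000.0 / refresh_hz

for hz in (60, 100, 144, 240):
    print(f"{hz:>3} Hz: up to ~{half_frame_ms(hz):.2f} ms advantage")
```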

This is where the refresh rate/sustained framerate ratio factors in:

[Charts: Blur Buster's G-SYNC 101: Input Latency & Optimal Settings]

As shown in the above diagrams, the true advantage comes when V-SYNC OFF can allow not just two, but multiple frame scans in a single scanout. Unlike syncing solutions, with V-SYNC OFF the frametime is not paced to the scanout, and a frame will begin scanning in as soon as it’s rendered, regardless of whether the previous frame scan is still in progress. At 144Hz with 1000 FPS, for instance, this means that with a sustained frametime of 1ms, the display updates nearly 7 times in a single scanout.
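
As a quick sanity check of that 144Hz/1000 FPS example (an illustrative sketch, not part of the original test data), the number of frame "slices" that can land within one scanout is simply the scanout time divided by the frametime:

```python
# Illustrative sketch: how many frame "slices" fit within one scanout when
# frames are not paced to it (V-SYNC OFF).

def slices_per_scanout(refresh_hz: float, fps: float) -> float:
    scanout_ms = 1000.0 / refresh_hz   # one full scan of the panel
    frametime_ms = 1000.0 / fps        # time between rendered frames
    return scanout_ms / frametime_ms

print(f"144 Hz @ 1000 FPS: ~{slices_per_scanout(144, 1000):.1f} slices per scanout")
print(f"240 Hz @ 1000 FPS: ~{slices_per_scanout(240, 1000):.1f} slices per scanout")
```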

In fact, at 240Hz, first on-screen reactions became so fast at 1000 FPS that the inherent delay in my mouse and display became the bottleneck for minimum measurements.

So, for competitive players, V-SYNC OFF still reigns supreme in the input lag realm, especially if sustained framerates can exceed the refresh rate by 5x or more. However, while visible tearing artifacts are all but eliminated at these ratios on higher refresh rate displays, the tearing can instead manifest as microstutter, and thus, even at its best, V-SYNC OFF still can’t match the consistency of G-SYNC frame delivery.
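
For reference, here is what a 5x framerate-to-refresh ratio actually demands in sustained FPS at common refresh rates (simple arithmetic for illustration, not a measured threshold):

```python
# Simple arithmetic: sustained framerate needed for a 5x ratio over the
# refresh rate (illustration only, not a measured threshold).

for hz in (60, 144, 240, 360):
    print(f"{hz:>3} Hz needs a sustained ~{hz * 5} FPS for a 5x ratio")
```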



3703 Comments For “G-SYNC 101”


barry12345
Member

Is there any explanation why my CS2 is capping at 200FPS with a 240Hz monitor?

Using Gsync + Vsync (nvcp) + reflex

With Vsync off it uncaps and I get around 300 FPS, so shouldn’t it be able to cap at 225 like it does for everyone else?

With Vsync on the FPS is still capped at 200 even with a 225 limit set in nvcp.

barry12345
Member

Overwatch with these settings caps at 239 instead of 225.

ItapL
Member

In the Windows graphics settings, there’s an option for “Variable Refresh Rate.” Should I turn this on? (I’m not using an English version of Windows, so I’m not sure what the exact label says.)

My monitor has a real-time refresh rate display, but when I enable both G-SYNC and V-SYNC, it always shows the maximum refresh rate of 320 Hz in games — it doesn’t change dynamically. My G-SYNC settings are configured correctly, and V-SYNC is forced on through NVIDIA Profile Inspector.

tearxinnuan
Member

Thank you very much for your article and tutorial! I’ve set up the appropriate settings according to your article, but I still have some questions I’d like to ask!

First, my current settings are:
NVCP: G-SYNC + V-SYNC on, LLM off,
In Game: Reflex on + boost, V-SYNC off

I believe this setup is optimal for GSYNC usage. I don’t limit my frame rate using any external software or NVCP. When I enable Reflex in-game, it automatically caps my frame rate at 260 FPS (my monitor is 280Hz). I think relying solely on Reflex to limit my frame rate would be more straightforward than setting it separately, and perhaps also avoid conflicts and instability caused by multiple frame limits. Secondly, I’ve personally tested the games I play, and Reflex takes precedence over both the in-game and NVCP frame limits. That is, no matter how much I limit my frame rate, once Reflex is enabled, it caps it at 260 FPS.

I primarily play competitive games like Valve, APEX, and Overwatch, but I also occasionally play single-player games. Since the competitive games I play all have Reflex, can I completely abandon all external frame limiting methods and rely solely on Reflex?

Also, regarding LLM in NVCP, should I set it on or off, or even set it to Ultra? I’m not sure if there are any advantages or disadvantages to turning LLM on, given that Reflex takes over a lot of that processing. There’s a lot of controversy online about LLM, and even NVIDIA officials claim that setting LLM to Ultra will minimize V-SYNC latency.

Looking forward to your answers!

dimacbka
Member

Hi. I really liked this article, but I have a couple of questions. I have a new PC that gives 800 FPS in CS2. How do I set up this G-SYNC + V-SYNC + Reflex bundle correctly? My monitor is 280Hz. I’m confused: do I need to limit frames via the NVIDIA panel? Yesterday I set Low Latency Mode (“delay” in my localized NVCP) to Ultra and Reflex to on + boost. In game, the framerate was around 260, with fps_max set to 0.

mike-lesnik
Member

Hello, jorimt! My question is more about input delay than G-sync, but I decided to ask it here because I really like your style of response — simple and clear.
I don’t quite understand what role frametime plays in input delay. It is often written that frametime is the time needed to create a frame, but 60 frames of 16.6ms each can be produced by either an underloaded or an overloaded GPU. On screen, we see the same framerate and frametime in both cases, but the resulting input delay will be different…
That is, is frametime not “the time it took the system (CPU-OS-engine-GPU) to create the frame”, but rather “the time the display allots to showing the frame before the next one appears”?
