Measure What Your Users Really Experience: Login Enterprise Meets NVIDIA nVector
- Edwin de Bruin
- 2 days ago
- 7 min read
Measuring Real Latency — The Missing Piece in VDI Performance
Latency isn’t just a number you pull from monitoring tools. It’s about the delay users feel, the gap between when they click and when something actually happens on their screen.
In virtual desktop environments, traditional latency metrics (like server response times or network delays) only tell part of the story. Users interact with their device locally, so the real question is:
how long does it take for the result to appear on their screen, at the endpoint?
That’s the gap that NVIDIA’s nVector together with Login Enterprise now bridges.
They don’t just track backend stats but measure latency and image quality from the user’s perspective, at scale, without needing physical sensors.
Let me show you a cool demo with Need for Speed Underground controlled by Login Enterprise, but first, let’s take a step back.

Login Enterprise by LoginVSI
Login Enterprise is the go-to platform for running synthetic workloads on virtual desktops and applications, letting you test how your environment performs under stress.
It’s a reliable way to simulate user activity, push your systems to the limit, and uncover scalability bottlenecks.
On top of that, it provides detailed insights into each session: logon times, resource usage, application latency, session load, and exactly where the system starts to struggle.
But here’s the catch: while Login Enterprise shows you the user experience at the virtual desktop level, it still doesn’t fully capture the actual user experience at the endpoint. You get numbers on response times and session load, but it doesn’t measure what users actually see: the real latency, frame drops, or graphical glitches that make a session feel sluggish.
In other words, it’s really great for backend stress testing and its reporting gives you the numbers, but it still only tells half the story.
Why GPU Performance and User Perception Don’t Always Match
GPUs are critical in delivering rich graphics and smooth video in VDI, but raw GPU performance doesn’t always match what users actually feel. One person calls it sluggish, another swears it’s butter-smooth.
And let’s be real: high-performance GPUs aren’t cheap. Dropping serious cash on hardware that might not improve the user experience? Not ideal.
So, to measurements it is!
To start: LDAT (Latency Display Analysis Tool) is the most accurate way to measure latency in practice.
It tracks the time between a user action and the resulting frame appearing on the screen, commonly called “click-to-photon” delay, capturing exactly how long it takes from the click, to the GPU rendering a frame, to the light actually leaving the display. Even the latency of the monitor itself is measured.

It gives a close approximation of what users actually experience (though, of course, we can’t measure latency in someone’s brain, and perception also depends on that person’s individual state).
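To make “click-to-photon” concrete, here is a toy breakdown of the stages a frame travels through in a remoted session. The stage names and millisecond values below are purely hypothetical; LDAT measures the end-to-end total with a hardware sensor rather than summing individual stages:

```python
# Hypothetical breakdown of click-to-photon latency in a remoted desktop.
# The stage names and values are illustrative assumptions, not LDAT output;
# LDAT measures only the end-to-end total with a light sensor on the display.
stages_ms = {
    "input capture + transport to VM": 8.0,
    "application + GPU render":        12.0,
    "encode on the host":              6.0,
    "network transit":                 10.0,
    "decode on the endpoint":          5.0,
    "display scan-out (monitor)":      9.0,
}

click_to_photon_ms = sum(stages_ms.values())
for stage, ms in stages_ms.items():
    print(f"{stage:34s} {ms:5.1f} ms")
print(f"{'click-to-photon total':34s} {click_to_photon_ms:5.1f} ms")
```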
In a real enterprise, however, attaching LDAT devices to screens at scale simply isn’t practical.
nVector by NVIDIA:
So, NVIDIA nVector is the go-to tool for testing the visual and interactive performance of virtual desktops and applications, specifically in environments powered by NVIDIA GPUs.
It compares what the GPU renders in the remote desktop environment to what appears on the user’s client display, using optional watermarking to estimate frame delivery latency between the two. This delivers real-time insights into latency without any extra hardware.
nVector also supports the Structural Similarity Index Measure (SSIM), a method for objectively assessing image quality. Instead of comparing images pixel by pixel, SSIM takes screenshots within the session and on the endpoint and evaluates how similar the two images are in terms of structure, luminance, and contrast, and assigns a numerical score to that comparison.
A score closer to 1.0 indicates that the images are nearly identical, meaning minimal visual degradation or pixel loss between the rendered frame and what’s displayed on the endpoint.
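Just to make the idea tangible, here is what an SSIM comparison could look like in code. This is not nVector’s implementation; it’s a minimal sketch using the open-source scikit-image library, and the two file names are placeholders for screenshots you would capture yourself (one inside the session, one on the endpoint):

```python
# Minimal SSIM sketch using scikit-image (not nVector's own implementation).
# "session.png" and "endpoint.png" are hypothetical screenshots: one captured
# inside the virtual desktop session, one captured on the endpoint/Launcher.
# Both images must have identical dimensions for the comparison to work.
from skimage import io
from skimage.metrics import structural_similarity

session_img = io.imread("session.png", as_gray=True)
endpoint_img = io.imread("endpoint.png", as_gray=True)

# Returns a score between -1 and 1; closer to 1.0 means the endpoint image is
# structurally almost identical to what was rendered inside the session.
score = structural_similarity(session_img, endpoint_img, data_range=1.0)
print(f"SSIM: {score:.3f}")
```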

On top of that, nVector can run a knowledge worker load, simulating real user activity alongside GPU monitoring.
That means you get a complete picture: not just the frames leaving the GPU, but how actual day-to-day workloads affect performance.
But nVector never really took off as widely as expected, leaving some of its promise on the table.
Still, it represents a significant step forward from backend-only monitoring, giving insight into what users actually see and feel when interacting with their virtual desktops.
The Ultimate Benchmark Buddies: Login Enterprise & nVector
Individually, Login Enterprise and nVector had their limitations.
Login Enterprise can stress-test your virtual desktops and applications, pushing the infrastructure to its limits.
nVector measures what actually happens on the endpoint: latency, frame rates, image quality, and it even simulates real user activity with a knowledge worker load.
But hey! Put them together, and suddenly it all clicks.
Login Enterprise generates realistic workloads at scale, while nVector shows how those workloads actually feel on the user’s screen. Login Enterprise provides the detailed reports you need and captures additional session metrics (or even platform metrics) to complete the picture.

Metrics finally meet endpoint reality.
It’s the perfect pairing: you don’t just see how your systems behave under load, you see what your users experience.
Real Tests, Real Results
So, let’s do a test, and no, not a boring knowledge worker workload, no no! Instead, let’s run Need for Speed Underground!
I ran tests on Omnissa Horizon 2503 with an NVIDIA T4 vGPU (1B profile), 2 vCPUs, and 12 GB RAM, using Horizon’s Blast Extreme protocol at a sprawling 5120x1440 resolution.
Test 1: cranked the frame rate cap to 60 frames per second (FPS), meaning the virtual desktop was allowed to render up to 60 frames every second, providing smoother motion and more responsive visuals.
(Wait, T4-1B and 60 FPS? But... but... yes, I know, I’ll come back to that later.)
Test 2: Horizon Agent default settings
In Login Enterprise I defined the following Session metrics:
| Metric | Description | What It Indicates |
| --- | --- | --- |
| CPU Utilization | % of CPU resources currently in use by the host or VDI session | High usage may indicate CPU bottlenecks or heavy workloads; low usage shows underutilized CPU |
| Horizon Dirty FPS | Frames per second that have changed but are not yet encoded or sent | High values indicate frequent screen updates, increasing CPU/GPU encoding and network load |
| Horizon FPS | Frames per second sent out by the agent to the Horizon client | Reflects session smoothness; low FPS can result from CPU/GPU limits, network constraints, or encoding delays |
| Memory Utilization | % of system RAM currently in use | High memory usage may indicate pressure or heavy workloads; low usage shows available capacity |
| NVIDIA T4 % GPU Util | % of GPU compute resources currently in use | High values indicate heavy rendering or encoding; low values indicate an underused GPU |
| NVIDIA T4 Memory Usage | Amount of GPU memory (VRAM) currently in use | High usage may limit concurrent sessions; low usage shows available GPU memory |
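For context: the GPU counters in this table come straight from the NVIDIA driver. Login Enterprise gathers them through its own metric definitions, but you can sanity-check the same numbers on the desktop with a quick nvidia-smi poll. The sketch below is just that, a verification aid, not how Login Enterprise collects its metrics:

```python
# Quick sanity check of the GPU counters from the table above by polling
# nvidia-smi. This is NOT how Login Enterprise collects its session metrics;
# it's only a way to verify the same utilization/VRAM numbers on the desktop.
import csv
import io
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,utilization.gpu,memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
)

for row in csv.reader(io.StringIO(result.stdout)):
    name, gpu_util, mem_used, mem_total = [field.strip() for field in row]
    print(f"{name}: {gpu_util}% GPU util, {mem_used}/{mem_total} MiB VRAM")
```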
The nVector integration measures actual latency by placing a changing watermark within the session and tracking it, at the expected location, on the endpoint (known as the Launcher in Login Enterprise).
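Conceptually, the watermark acts as a software click-to-photon timer: the session stamps a changing marker into the frame, and the Launcher notes when that marker shows up on its side. The sketch below is only a simplified model of that idea (it assumes a single synchronized clock and uses made-up delays); the actual nVector/Login Enterprise implementation works differently under the hood:

```python
# Simplified model of watermark-based latency measurement (not the actual
# nVector implementation). The session encodes a sequence number plus a
# timestamp into a watermark; the endpoint records when it first observes
# each sequence number, and the difference approximates delivery latency.
import time

def stamp_watermark(seq: int) -> dict:
    """Session side: the watermark carries a sequence number and send time."""
    return {"seq": seq, "sent_at": time.perf_counter()}

def observe_watermark(watermark: dict, seen: dict) -> float | None:
    """Endpoint side: return latency in ms the first time a sequence is seen."""
    if watermark["seq"] in seen:
        return None
    seen[watermark["seq"]] = True
    return (time.perf_counter() - watermark["sent_at"]) * 1000.0

# Toy demo: pretend each frame takes ~30 ms to reach the endpoint.
seen: dict = {}
for seq in range(3):
    wm = stamp_watermark(seq)
    time.sleep(0.03)  # stand-in for encode + network + decode time
    latency_ms = observe_watermark(wm, seen)
    print(f"frame {seq}: ~{latency_ms:.1f} ms")
```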

The Demo:
This is a demo of running a session with Login Enterprise controlling Need for Speed Underground!
Note the watermark appearing in the top-left corner as it changes during the session.
The current version of the integration requires starting the Launcher and the nVector integration with a script; this will be natively integrated in the near future.
As you can imagine it's quite a challenge to control NFSU with Login Enterprise so the driving is well...
The results:
Below are some of the results from Login Enterprise. Side note: don’t draw any final conclusions from these measurements, as both tests were only run once.





The catch: the T4 1B profile limits the frame rate to 45 FPS by default via the NVIDIA Frame Rate Limiter (FRL). You can alter this behavior, but it’s good to know:


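In other words, the frame rate users can ever see is bounded by the tightest limiter in the chain, so raising the Blast cap to 60 FPS doesn’t buy anything while the vGPU profile’s FRL still caps output at 45 FPS. A trivial illustration:

```python
# The effective frame rate is bounded by the most restrictive limiter in the
# chain. The values here come from the tests above: Blast capped at 60 FPS,
# while the T4-1B profile's Frame Rate Limiter defaults to 45 FPS.
def effective_fps(*caps: int) -> int:
    return min(caps)

print(effective_fps(60, 45))  # Test 1: Blast 60 FPS cap, FRL 45 -> 45 FPS
```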
As you can imagine, when playing a game each frame changes, so almost every frame will be marked as "Dirty".

This one is interesting: it comes from the nVector integration measuring the actual latency on the endpoint.
I got higher latency with the default Horizon Agent settings compared to the 60 FPS test. This is to be expected, as the adaptive mechanism of the protocol may add delays to optimize compression.
Also, some additional buffering might take place. This is fine for a regular knowledge worker, but for latency-sensitive workloads, not so much.
To dive deeper into this behavior, take a look at the research we did at GO-EUC. Ryan Ververs-Bijkerk and Eltjo van Gulik did very thorough research on this subject:
Why You Should Care
This isn’t just about flashy tech demos. Although Need for Speed Underground remains awesome.
The insights provided by nVector show you exactly what your users experience: visual smoothness, latency, frame rates, and image quality, the real factors that affect user satisfaction.
In virtual desktop environments, backend metrics alone do not tell the whole story.
You can monitor CPU, GPU, or network usage all day, but that will not reveal the delays, stuttering, or visual degradation that users actually notice. nVector bridges that gap by measuring latency, capturing the real user experience in a way that is automated and scalable.
In short, it turns abstract performance numbers into actionable insights about what your users actually see and feel, and it does so in a way that is practical for real-world enterprise environments, not just a lab setup.
The Bottom Line
Performance testing isn’t just about generating metrics or running workloads.
It’s about making sure your users actually get the experience you intend.
With Login Enterprise and NVIDIA nVector, you can see and measure exactly what users experience, giving you confidence in your VDI environment.
It's not perfect yet. For example:
- The nVector and Login Enterprise Launcher integration feels a bit bolted on, but it does an excellent job; native integration is on its way.
- Although LDAT is the most realistic method, as it measures photons leaving the monitor, the nVector integration comes close, providing a realistic measurement.
- The SSIM integration with Login Enterprise is not there yet. But it's coming soon! Imagine adding SSIM scores at scale.
- The Launchers need a GPU, and currently only one session per Launcher is supported (instead of the regular 30 sessions per Launcher). Doable, but keep this in mind.
You can finally stop guessing about latency and quality and make evidence-based decisions to improve your VDI environment.
That’s performance testing done right.
If you have questions or remarks, please leave a comment. Keep following me for more updates and news as the Login Enterprise and NVIDIA nVector integration continues to develop.