
Editorial note: This article is a follow-up to Corey Hynes' blog posts Optimizing Lab Performance and Analyzing Performance Data. We recommend reading those articles in that order to review the factors that impact lab performance and how you can create labs that perform well on our lab development platform, Skillable Studio.
As discussed in our blog post Analyzing Performance Data, latency is one element of lab performance. Latency provides a real-time performance measure for virtual machine (VM)-based labs and can also serve as historical data when evaluating the performance of past labs. To facilitate this, Skillable Studio has historically tracked average latency and provided a high-level connection quality rating of Excellent, Good, OK or Poor. The average latency is stored on the lab instance, and the latency history (the full sample set of measurements) is stored for 90 days.
We have updated the way we track, average and report on lab session performance using latency in Skillable Studio. Read on to see how this change provides a more accurate representation of the user experience across lab instances.
Prior to June 26, 2020
Average latency was calculated as the raw average of all latency measurements. Measurements were taken every minute during a lab instance and stored. When the instance ended, the entire sample set was averaged, the average connection quality was determined, and the data was stored on the lab instance. This process did not take outliers in the data into consideration.
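As a rough illustration (not Skillable Studio's actual code), the old calculation amounts to a plain arithmetic mean of every sample. The Python sketch below uses made-up sample values to show how a couple of spikes can dominate the result.

```python
def raw_average_latency_ms(samples):
    """Pre-June 26 sketch: average every one-per-minute latency sample as-is."""
    return sum(samples) / len(samples)

# Hypothetical samples (ms): a steady ~50 ms connection with two brief spikes.
samples = [52, 48, 55, 61, 49, 2400, 53, 57, 50, 3100]
print(raw_average_latency_ms(samples))  # 592.5 ms, even though typical latency is ~53 ms
```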
After June 26, 2020
Measurements are still taken every minute during the lab instance. Average latency is now calculated by determining the standard deviation of all latency samples and discarding any sample that falls more than three standard deviations from the mean.
The remaining samples are averaged to determine the average latency and connection quality. The connection quality, the average latency and, now, the measured standard deviation are stored on the lab instance.
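The sketch below, again in Python with hypothetical sample data, shows one way the new approach can be expressed and contrasts it with the old raw average. The exact flavor of standard deviation the platform uses (population vs. sample) is not documented, so treat that detail, and the function names, as assumptions.

```python
import statistics

def filtered_average_latency_ms(samples):
    """Post-June 26 sketch: drop samples more than three standard deviations
    from the mean, then average what remains. Returns (average, stdev)."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)  # assumption: population standard deviation
    kept = [s for s in samples if abs(s - mean) <= 3 * stdev]
    return sum(kept) / len(kept), stdev

# Hypothetical one-per-minute samples (ms): a steady ~50 ms connection with
# two short interruptions measured in seconds.
samples = [52, 48, 55, 61, 49, 53, 57, 50, 54, 51, 56, 47, 49, 52, 58,
           60, 46, 53, 55, 50, 48, 54, 57, 51, 49, 52, 56, 50, 2400, 3100]

raw_average = sum(samples) / len(samples)   # ~232 ms, inflated by the two spikes
average, stdev = filtered_average_latency_ms(samples)
print(raw_average, average, stdev)          # filtered average ~53 ms, stdev ~679 ms
```

In this hypothetical session the raw average makes the connection look mediocre; after the two interruption spikes are discarded, the reported average reflects the roughly 50 ms latency the user actually experienced for nearly the entire session.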
A future release will leverage the standard deviation value to provide a connection stability and consistency metric back to the user.
Connection Quality
The following table outlines connection quality ranges before and after June 26, 2020:
| Connection Quality | Before June 26 | After June 26 |
| --- | --- | --- |
| Excellent | 0-60 ms | 0-100 ms |
| Good | 60-150 ms | 100-250 ms |
| OK | 150-250 ms | 250-500 ms |
| Poor | 250 ms or higher | 500 ms or higher |
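For readers who script against exported latency data, a minimal sketch of the post-June 26 mapping could look like the following; the function name and the handling of boundary values are our assumptions rather than Skillable Studio's published implementation.

```python
def connection_quality(avg_latency_ms: float) -> str:
    """Map an average latency (ms) to the post-June 26 quality labels.
    Boundary handling (strict '<' at each threshold) is an assumption."""
    if avg_latency_ms < 100:
        return "Excellent"
    if avg_latency_ms < 250:
        return "Good"
    if avg_latency_ms < 500:
        return "OK"
    return "Poor"

print(connection_quality(147))  # "Good" - first example below
print(connection_quality(222))  # "Good" - second example below
```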
NOTE: The updated connection quality table will be deployed to production on approximately July 10, 2020.
Why did we make these changes?
Latency measurement was originally created to provide our Help Desk with a quick way to understand whether a user was accessing a lab from a reasonably good internet connection. It was used more as a "point in time" tool than a tool for historical measurement of connection performance. As more customers use data to analyze past lab performance, we want to provide a latency value that more accurately reflects the actual experience of the user and that takes into account recent platform updates that improve the user experience in lower-latency scenarios.
Consider the following examples:
This first user has a raw latency average of 178 ms, mainly due to some short-term, high spikes in latency. With the new latency calculation, the value drops to 147 ms, and the connection quality improves from OK to Good under the old quality table; under the updated table, the new value also falls in the Good range.

This next example is vastly more extreme.
The raw latency average is a whopping 28 seconds. With the revised calculation, the new average latency is 222 ms, which would have been rated OK under the old quality table but falls in the Good category under the updated table, and which is a much more realistic representation of the user's overall experience.

As a final note, when this update was rolled out in the summer of 2020, we also retroactively updated past lab latency averages for all lab instances for which sample history was available. This means that for lab instances after approximately March 29, 2020, you will see a change in the average latency, as it was recalculated using the new formula. This was done to ensure customers have the most accurate representation of the user experience for all instances where the raw sample data was available. The two examples above show the benefits of applying these changes to past lab instances.