This article compares two approaches to monitoring website performance: lab-based monitoring and collecting field data from real users.
Lab-based monitoring means a server regularly loads your page and analyzes its performance. This is also known as synthetic monitoring, and it's what tools like PageSpeed Insights, WebPageTest, or DebugBear do.
Field data is collected in the browser every time a user loads one of your pages. Code on your website captures performance metrics and sends this data to your server. This is also called real-user monitoring (RUM).
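As a minimal sketch of how this capture can work: the snippet below observes Largest Contentful Paint with the standard PerformanceObserver API and reports it once the page is hidden. The `/rum` endpoint and the payload shape are made-up placeholders, not a real RUM product's API.

```ts
// Minimal RUM sketch: observe Largest Contentful Paint (LCP) and report
// it once the page is hidden. "/rum" is a hypothetical collection endpoint.
let lcpValue: number | undefined;

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The browser may report several LCP candidates; the last one wins.
  lcpValue = entries[entries.length - 1]?.startTime;
}).observe({ type: "largest-contentful-paint", buffered: true });

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden" && lcpValue !== undefined) {
    // sendBeacon keeps working while the page is being unloaded.
    navigator.sendBeacon("/rum", JSON.stringify({ lcp: lcpValue }));
  }
});
```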
How fast is my site for users?
Real-user field data lets you accurately determine how fast your site loads for your actual audience. You can say things like "The site fully loads within 4s for 95% of users", or look at the full distribution of page load experiences.
For example, this DebugBear screenshot shows a breakdown of the Largest Contentful Paint metric for a large number of users on a website.
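Producing a statement like "within 4s for 95% of users" is mostly a matter of computing percentiles over the collected samples. A small sketch, using the nearest-rank method and made-up sample data:

```ts
// Sketch: turn collected page load times (in ms) into a percentile claim.
// Nearest-rank method; `samples` stands in for your stored RUM data.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.min(rank, sorted.length) - 1];
}

const samples = [1200, 1800, 2100, 2600, 3400, 3900, 5200];
console.log(`95% of page loads finish within ${percentile(samples, 95)}ms`);
```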
Lab-based testing, on the other hand, captures data in a set of fixed test environments, and there's a risk that these environments are very different from what real users experience.
The advantage of lab data is that it provides much more detailed reports that you can use to optimize your website. For example, you can get automated recommendations, view a video recording of the loading process, or analyze network activity in depth with request waterfall visualizations.
To get realistic lab results you need to know what devices and network connections your users typically have, so you can configure the test environments accordingly. Essentially, you measure the experience of a few hypothetical users, masking the more complex reality.
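For example, a Puppeteer-based lab test could pin the environment down like this. The device name and throttling numbers are assumptions standing in for whatever profile matches your audience, not recommended values:

```ts
import puppeteer, { KnownDevices } from "puppeteer";

async function runLabTest(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Fix one device and one network profile so every run measures the same
  // hypothetical user. The numbers are illustrative: roughly a mid-range
  // phone on a slow 4G connection.
  await page.emulate(KnownDevices["Pixel 2"]);
  await page.emulateNetworkConditions({
    download: (1.6 * 1024 * 1024) / 8, // ~1.6 Mbps, in bytes per second
    upload: (750 * 1024) / 8, // ~750 Kbps
    latency: 150, // added round-trip latency in ms
  });

  await page.goto(url, { waitUntil: "load" });
  const loadTime = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    return nav.loadEventEnd;
  });

  await browser.close();
  return loadTime;
}

runLabTest("https://example.com").then((t) => console.log(`Load: ${t}ms`));
```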
Did my site get faster?
So why bother with lab-based testing if it can't tell you how fast your site is? The big advantage is that tests are run in a controlled environment.
Did your load times go down in a lab-based test? Congratulations, you've made your site faster! (Assuming your change didn't hurt performance in different device conditions.)
What about real-user metrics? Did your site get faster, or did your users come back from holiday and now have fast wifi again? Did you lose a big customer that happened to use slow devices, lifting your averages without any real change?
Likewise, if you speed up your website you might attract more users on slower connections, who'd otherwise have bounced. These new users who now stick around will drag down your RUM metrics.
With real-user monitoring you need to dig into your metrics and figure out what happened; the sketch after the list below shows one way to do that. Here are just a few factors that affect performance:
- Device and browser
- Network connection
- User location
- Content that varies based on account data or ad targeting
- Resource competition between your page and other software on the device
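One way to untangle these factors is to segment your RUM data along such dimensions and compare like with like, week over week. A sketch, with a made-up record shape standing in for a real RUM payload:

```ts
// Sketch: segment RUM samples before comparing time periods, so a shift
// in the user mix isn't mistaken for a real performance change.
interface RumSample {
  lcp: number; // Largest Contentful Paint in ms
  deviceType: "mobile" | "desktop";
  effectiveConnection: string; // e.g. "4g", "3g" from the Network Information API
  country: string;
}

// Simple median (upper middle value for even-length inputs).
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

function medianBySegment(
  samples: RumSample[],
  key: "deviceType" | "effectiveConnection" | "country"
): Map<string, number> {
  const groups = new Map<string, number[]>();
  for (const s of samples) {
    const group = s[key];
    if (!groups.has(group)) groups.set(group, []);
    groups.get(group)!.push(s.lcp);
  }
  return new Map([...groups].map(([group, lcps]) => [group, median(lcps)]));
}

// Example: compare this week's per-connection medians against last week's.
// const byConnection = medianBySegment(thisWeeksSamples, "effectiveConnection");
```

If the median within each segment is stable but the overall average moved, the audience changed rather than the site.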
In a lab environment these can be kept constant. However, the fact that lab tests only look at one experience in one set of conditions also means that you might end up optimizing only for those conditions. You then need to check your field data to verify that the optimizations were broadly applicable.
Why did my site get slower?
Here's another aspect where lab-based testing can shine. The test server can capture information about every request, record a filmstrip with screenshots, or collect console messages. There's no need to worry that collecting lots of data will impact page performance for real users. This makes it easier to identify what caused a performance regression.
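For illustration, here is roughly what a lab test can record with Puppeteer, at no cost to real users. The diagnostics shown (finished requests and console messages) are just a subset of what monitoring tools actually capture:

```ts
import puppeteer from "puppeteer";

// Sketch: diagnostics a lab test can record for free. Logging every
// request like this in the field would add overhead, and the page can't
// see everything the browser sees.
async function recordDiagnostics(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const requests: { url: string; status: number | undefined }[] = [];
  page.on("requestfinished", (request) => {
    requests.push({ url: request.url(), status: request.response()?.status() });
  });
  page.on("console", (msg) => console.log(`[console] ${msg.text()}`));

  await page.goto(url, { waitUntil: "networkidle0" });
  console.table(requests.slice(0, 10)); // start of the request waterfall
  await browser.close();
}

recordDiagnostics("https://example.com");
```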
With RUM, the data collected is much more limited, both to reduce data usage and because, for security reasons, the page doesn't have access to all the information the browser has. If you detect a problem you'll still need to investigate what caused it.