A web performance blog

by Alex Painter

Does a better speed index score always mean a better experience?

26 August 2019

At the dawn of web performance time, a website's speed was pretty much all about time to onload.

Things have moved on a lot. Clever people have come up with clever ways to home in on metrics that mean something. Metrics that tell you about user experience, not network traffic.

We have time to interactive, first paint, time to visually complete. And of course speed index.

If you're not familiar with speed index, it can be loosely defined as a measure of how quickly a page becomes visually complete – roughly, the average time at which visible content appears.

A page that renders more content earlier will get a lower (better) score than one that renders the same content later. This means that two pages with identical render start and visually complete times could still have very different speed index scores.

This is illustrated in the two charts below. The first shows a page with a high (slow) speed index, the second a page with a low (fast) speed index.

[Chart illustrating a page with a slow speed index]

[Chart illustrating a page with a fast speed index]
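Under the hood, speed index is essentially the area above the page's visual-progress curve over time. As a minimal sketch – not WebPageTest's actual implementation, and with invented sample data – the calculation looks something like this:

```python
# A minimal sketch of the speed index calculation. This is not WebPageTest's
# actual implementation; the sample data below is invented for illustration.

def speed_index(samples):
    """Approximate speed index as the area above the visual-progress curve.

    samples: (time_ms, completeness) pairs, sorted by time, where
    completeness is the fraction of the page that is visually complete
    (0.0 at render start, 1.0 when visually complete).
    """
    area = 0.0
    prev_t, prev_c = 0, 0.0
    for t, c in samples:
        area += (t - prev_t) * (1.0 - prev_c)  # incomplete fraction x time
        prev_t, prev_c = t, c
    return area

# Two hypothetical pages with identical render start (500 ms) and
# visually complete (3000 ms) times, but different progress in between:
fast = [(500, 0.0), (1000, 0.8), (3000, 1.0)]  # most content renders early
slow = [(500, 0.0), (2500, 0.2), (3000, 1.0)]  # most content renders late

print(speed_index(fast))  # roughly 1400 – the lower (better) score
print(speed_index(slow))  # roughly 2900 – the higher (worse) score
```

Even though both pages start and finish rendering at the same moments, the page that puts most of its content on screen early gets a much better score.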

On the face of it, speed index is a great way to get an insight into the experience a web page delivers. It has some limitations – for obvious reasons, it doesn't really work with animated pages. But for a lot of people – me included – it's become the go-to metric for render performance.

However, the road to web performance hell is paved with good intentions. Slavishly adhering to metrics or, worse, focusing on one or two metrics to the exclusion of all else, could land us in trouble.

When it comes to speed index, we take it on trust that displaying more content faster is just, well, better. After all, we do have plenty of statistics linking website speed – usually time to onload – to a whole host of other KPIs. And speed index has to be a better measure of performance than load time, surely?

Well, probably. But the desire to optimise for speed index put me in mind of an excellent piece of work from Radware a few years ago. They looked at progressive JPEGs – JPEGs that render a low-resolution placeholder while you wait for the full version to finish loading. Many people assumed that a progressive JPEG delivers a better experience than a baseline JPEG. It sounds like common sense: better to have something in place that gives you a good idea of what the finished image will look like.

In fact, the study revealed quite the opposite. Initially seeing the low-resolution image actually made people's brains work harder to make sense of what was in front of them.

I'm not suggesting for one moment that speed index is directly analogous. But there are similarities, and there will be times when seeing a partially completed web page is a lot more frustrating than not seeing one at all. For example, when everything has rendered except the text, which is waiting on a slow-loading font. Or when you can see everything but the login button you need. If I had to wait a very long time, I'd probably favour an empty page and a progress bar over a page that displayed everything except the one thing I needed.

It's not hard to imagine scenarios in which it might just be better to wait until everything is ready to display than to paint every element to the page as soon as we possibly can, just so we can improve the speed index score.

While we can't all afford to carry out the same kind of research that Radware did into progressive JPEGs, it does highlight the value of complementing web performance metrics with usability testing. And the danger of making assumptions about what constitutes a good user experience.

tl;dr

Relying too much on one web performance metric is probably a bad idea.

Research from Radware some years ago challenged the widely held assumption that progressive JPEGs delivered a better experience than baseline JPEGs.

There may be an analogy with speed index. While better speed index scores probably mean better experiences most of the time, there could well be some notable exceptions.