Want to improve video viewer engagement? Get these four key metrics right

By Marty Roberts, SVP, Product Strategy and Marketing, Brightcove

Of all the video metrics you could measure (or, more accurately, would like to measure but currently can't), do you know which are the most important?

For many marketers, the answer is no. But that 'no' often has a nuance: either they can't get their hands on the data they'd like to see, or they have so much data that it's hard to identify exactly which metrics they need to worry about.

To further complicate matters, the terminology isn't always helpful. Video Quality of Service (QoS) and Quality of Experience (QoE) are often conflated, when in reality they measure two very different things. QoS provides data on operational performance, while QoE provides data on the viewer's experience. Because video is an important e-commerce tool, it makes sense to look at QoE data to ensure you're respecting the time your prospect or customer spends engaging with your brand.

Operational data is important and shouldn’t be dismissed. But when it comes to driving engagement (and ultimately, sales), it’s the viewer experience we need to focus on – that means first, getting your hands on QoE data, and second, figuring out how to use that data to enhance the viewer’s experience.

So, here’s a simple, jargon-free guide to the four metrics that will positively impact viewer engagement, if you get them right.

1. Video start time

What is it?
Video start time measures the average number of seconds between the play request and the stream start.

Why does it matter?
This is the metric with the most direct causal relationship to viewer engagement: if a video hasn't started within 0.6 seconds, you begin losing viewers. In a world of shortened attention spans (thanks, social media), that's not surprising. A high video start time can point to issues such as a misconfigured player, so it's critical to identify how start time varies across audiences, regions and devices. Fixing these problems is often an easy win, and a low video start time increases the likelihood that viewers will engage with the content.
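
If your analytics platform exposes raw playback events, the arithmetic behind this metric is simple. Here's a minimal sketch in Python, using hypothetical event fields (play_requested_at, stream_started_at, device) and made-up numbers rather than any particular provider's schema, that averages the gap between play request and stream start and breaks it out per device so outliers stand out.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical playback events with made-up timestamps (seconds since page load);
# real analytics exports will use different field names.
events = [
    {"device": "desktop", "play_requested_at": 10.0, "stream_started_at": 10.4},
    {"device": "android", "play_requested_at": 3.0, "stream_started_at": 4.9},
    {"device": "ios", "play_requested_at": 7.5, "stream_started_at": 8.0},
]

def video_start_time(events):
    """Average seconds between play request and stream start, overall and per device."""
    per_device = defaultdict(list)
    for e in events:
        per_device[e["device"]].append(e["stream_started_at"] - e["play_requested_at"])
    overall = mean(t for times in per_device.values() for t in times)
    return overall, {device: mean(times) for device, times in per_device.items()}

overall, by_device = video_start_time(events)
print(f"Overall video start time: {overall:.2f}s")
for device, avg in by_device.items():
    print(f"  {device}: {avg:.2f}s")  # anything creeping past ~0.6s deserves a look
```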

2. Error rates

What is it?
The error rate is the percentage of all play requests with errors preventing playback (as opposed to background errors the viewer doesn’t notice).

Why does it matter?
High error rates mean the viewer simply can't watch the video they want – which, of course, isn't great. The average player can have up to 12 plugins, which is often the root of the problem. Another cause could be device-specific: for example, the data might reveal a sudden spike in error rates among mobile Android viewers compared to other devices, with the culprit being a recent Android app update. Low error rates allow viewers to watch the content they've clicked on.
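
Checking this yourself is just a matter of counting. The sketch below is one illustrative way to do it, assuming a hypothetical play-request log with a fatal_error flag (only errors that actually prevent playback are counted) and a device field for spotting device-specific spikes.

```python
from collections import defaultdict

# Hypothetical play-request log: fatal_error means playback was prevented,
# as opposed to background errors the viewer never notices.
play_requests = [
    {"device": "android", "fatal_error": True},
    {"device": "android", "fatal_error": False},
    {"device": "ios", "fatal_error": False},
    {"device": "desktop", "fatal_error": False},
]

def error_rate(requests):
    """Percentage of play requests where an error prevented playback."""
    if not requests:
        return 0.0
    fatal = sum(1 for r in requests if r["fatal_error"])
    return 100 * fatal / len(requests)

print(f"Overall error rate: {error_rate(play_requests):.1f}%")

# Break the rate out per device to spot, say, a spike following an Android app update.
by_device = defaultdict(list)
for r in play_requests:
    by_device[r["device"]].append(r)
for device, subset in by_device.items():
    print(f"  {device}: {error_rate(subset):.1f}%")
```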

3. Stall rate

What is it?
Stall rate is the average number of stalls per hour, calculated by comparing total stalls to total hours viewed in the selected time range.

Why does it matter?
Unlike other rebuffering events, video stalls directly interrupt playback. A stalled video, whether it's one long stall or multiple stalls of varying length, is annoying for viewers and, just like a glitchy e-commerce cart, can lead to the video being abandoned in frustration. You might also see regional differences in stall data: perhaps across ten markets the stall rate is unusually high in just one country, which could be the fault of an underperforming CDN or point to the need for additional streaming variants to better support low-bandwidth connections.
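
Because the definition is simply total stalls divided by total hours viewed, it's easy to recompute per market once session-level data is available. A rough sketch, assuming hypothetical per-session fields stall_count, seconds_viewed and country:

```python
from collections import defaultdict

# Hypothetical viewing sessions; field names will depend on your analytics source.
sessions = [
    {"country": "UK", "stall_count": 1, "seconds_viewed": 1800},
    {"country": "UK", "stall_count": 0, "seconds_viewed": 2400},
    {"country": "BR", "stall_count": 6, "seconds_viewed": 1200},
]

def stall_rate(sessions):
    """Average stalls per hour: total stalls divided by total hours viewed."""
    total_stalls = sum(s["stall_count"] for s in sessions)
    total_hours = sum(s["seconds_viewed"] for s in sessions) / 3600
    return total_stalls / total_hours if total_hours else 0.0

print(f"Overall: {stall_rate(sessions):.2f} stalls/hour")

# A per-country view can surface a single market served by an underperforming CDN.
by_country = defaultdict(list)
for s in sessions:
    by_country[s["country"]].append(s)
for country, subset in by_country.items():
    print(f"  {country}: {stall_rate(subset):.2f} stalls/hour")
```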

4. Upscaling time

What is it?
Upscaling time measures the average number of seconds per hour of viewing spent in an upscaled state. Upscaling issues occur when a playback device consistently streams a lower-resolution video rendition and then upscales it to fill the screen's dimensions, often resulting in fuzziness or pixelation.

Why does it matter?
Sometimes, upscaling doesn't really matter, because the viewer won't always notice it. When they do notice, it's an annoyance. A viewer might see the video go fuzzy when, midway through, they switch from watching on their mobile to casting to their TV, where the screen is much bigger. You can expect upscaling issues when available bandwidth is constrained anywhere in the streaming path, from the content origin, through the CDN and last-mile provider, down to the household and the local player. These constraints can lead players to select less bandwidth-intensive, lower-resolution renditions, which are then blown up to fit the screen. Pinpointing the source of an upscaling issue lets teams quickly identify the appropriate fix: for example, producing more stream renditions to better match the bandwidth constraints, or routing across different CDN partners for different geographies.
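
As with stall rate, the calculation is straightforward once session-level data is available: seconds spent in an upscaled state, normalised per hour of viewing. A minimal sketch, assuming hypothetical fields upscaled_seconds, seconds_viewed and device:

```python
from collections import defaultdict

# Hypothetical sessions: seconds spent playing a rendition below the display
# resolution (upscaled_seconds) versus total seconds viewed.
sessions = [
    {"device": "tv_cast", "upscaled_seconds": 300, "seconds_viewed": 1800},
    {"device": "mobile", "upscaled_seconds": 0, "seconds_viewed": 2400},
]

def upscaling_time(sessions):
    """Average seconds per hour of viewing spent in an upscaled state."""
    upscaled = sum(s["upscaled_seconds"] for s in sessions)
    hours = sum(s["seconds_viewed"] for s in sessions) / 3600
    return upscaled / hours if hours else 0.0

print(f"Upscaling time: {upscaling_time(sessions):.0f} s/hour")

# Splitting by device catches cases like casting to a large TV,
# where a low-resolution rendition is far more noticeable.
by_device = defaultdict(list)
for s in sessions:
    by_device[s["device"]].append(s)
for device, subset in by_device.items():
    print(f"  {device}: {upscaling_time(subset):.0f} s/hour")
```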

Together, these four metrics provide a 'quality score' that indicates the experience users get from your video content. And that's it – a simple, structured way to refine and polish the viewer experience, deepen audience engagement and boost viewer retention.
