The (Un)Predictability of SVOD Performance

Streaming services are a mystery.

Complete and accurate theatrical box office, television audience ratings, and home video rental performance have generally been available throughout their respective histories, but comparable streaming data are much harder to find.

Subscription video on demand (SVOD) services are remarkably coy about their viewership numbers. Netflix, for example, publishes a weekly Top 10 list but does not indicate how many views went to No. 1, how far behind No. 2 was, and so on.

In other words, knowing titles’ relative rankings doesn’t say much about viewership in any given week or over the course of a title’s run. Add to that contemporary binge-viewing behaviour, which was not present in traditional theatrical and television markets and was not exercised in a significant way in the traditional home video market. This development leaves a considerable gap in our understanding of the global media market and the relative performance of different works, making competitive analysis particularly difficult and green light decisions even more fraught than they once were.

Ben French, an MBA student at Pepperdine University, recently published a detailed study of home entertainment and streaming data in an attempt to quantify SVOD performance, using Netflix as an example. French completed the study with the advice and assistance of Bruce Nash, president of Nash Information Services, and Richard W. Kroon, director of technical operations for the Entertainment Identifier Registry (EIDR).

The study began with a detailed analysis of Blockbuster home video rental data for the top 50 rentals each week from 1999 to 2010, including relative ranking and total rentals for each work. To that were added a number of additional factors such as prior theatrical performance, genre, cast, etc. These data showed a predictable, statistically significant curve for video rentals over time and identified the relatively few factors that were most important in predicting a work’s overall performance.
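To make the curve-fitting step concrete, here is a minimal sketch (our illustration, not the study’s actual code) of fitting a logarithmic decay curve, the shape the study found for the Blockbuster data, to one title’s weekly rental totals. The rental figures below are invented placeholders.

```python
# A minimal sketch of fitting a logarithmic decay curve to weekly rental
# counts. The data here are illustrative placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical weekly rental totals for one title, weeks 1..10 of its run
weeks = np.arange(1, 11)
rentals = np.array([98_000, 64_000, 47_000, 36_000, 30_000,
                    25_500, 22_000, 19_500, 17_500, 16_000])

def log_decay(t, a, b):
    """Logarithmic decay: rentals fall off as the log of weeks in release."""
    return a - b * np.log(t)

params, _ = curve_fit(log_decay, weeks, rentals)
a, b = params
print(f"fit: rentals ~ {a:,.0f} - {b:,.0f} * ln(week)")
```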

Lessons Learned

Home entertainment rental behaviour follows a predictable curve over the course of a work’s run within the top 50. But there is so much variability within the top 10, both in relative performance between works in a given week and in a particular work’s performance from week to week, that accurate predictions are particularly difficult. The factors that helped predict top 50 performance (genre, time of year, etc.) have no significant predictive value within the top 10.

Next, transitioning from home video to streaming, the study looked at Netflix performance reported by VOD Clickstream for browser-based Netflix viewing in the UK from January 2016 to June 2019. This data set contained 610 million data points, which were converted to weekly rankings and views per title, comparable to the Blockbuster rentals per title. The Clickstream data represent a narrower slice of the market than the Blockbuster data, which captured all U.S. rental transactions within the identified week.

The Clickstream data only covered a subset of the UK’s Netflix viewers (those who watched Netflix in a browser) and only a subset of their views; presumably, those people also own and use other devices, such as televisions. So, while still statistically significant, the Clickstream results are less predictable than the Blockbuster results.
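As an illustration of the kind of conversion described above, the sketch below collapses raw per-session streaming events into weekly views and rankings per title. The column names and sample rows are assumptions for demonstration, not Clickstream’s actual schema.

```python
# A sketch of aggregating raw streaming events into weekly per-title
# rankings. Schema and sample rows are invented for illustration.
import pandas as pd

events = pd.DataFrame({
    "title_id": ["tt001", "tt001", "tt002", "tt002", "tt002", "tt003"],
    "timestamp": pd.to_datetime([
        "2016-01-04", "2016-01-11", "2016-01-05",
        "2016-01-07", "2016-01-08", "2016-01-09",
    ]),
})

# Bucket events into weeks, count views per title, and rank within each week
events["week"] = events["timestamp"].dt.to_period("W")
weekly = (events.groupby(["week", "title_id"]).size()
                .rename("views").reset_index())
weekly["rank"] = weekly.groupby("week")["views"].rank(ascending=False,
                                                      method="min")
print(weekly)
```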

Lessons Learned

The Blockbuster data followed a logarithmic curve while the Clickstream data were best fit by a power function, but both curves had a very similar shape. The primary difference is that streaming viewership drops off much more quickly (initially, a steeper curve) but then runs much longer (a longer, flatter tail).
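The contrast between the two shapes is easy to see with made-up coefficients, chosen here only to illustrate the pattern rather than taken from either data set:

```python
# Illustrative only: coefficients are invented to show the two curve shapes
# the study describes, not fitted values from Blockbuster or Clickstream.
import numpy as np

def log_curve(t, a=100_000, b=30_000):
    """Logarithmic decay (the Blockbuster shape): y = a - b*ln(t)."""
    return a - b * np.log(t)

def power_curve(t, c=100_000, k=1.3):
    """Power-law decay (the Clickstream shape): y = c * t**(-k)."""
    return c * t ** (-k)

for week in (1, 2, 5, 20, 40):
    print(f"week {week:>2}: log={log_curve(week):>10,.0f}  "
          f"power={power_curve(week):>9,.0f}")
# The power curve drops faster at first, but the log curve reaches zero
# (the title effectively leaves the chart) around week 28 with these
# coefficients, while the power curve's flatter tail keeps a small but
# non-zero audience indefinitely.
```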

As before, the top 10 results were much less predictable than the top 50 results. In addition, the variability increased significantly when the Clickstream data were reviewed on a daily basis rather than a weekly basis. In other words, the worst-case scenario is attempting to predict streaming performance within the top 10 on a daily basis, which happens to be the only streaming performance data set available directly from Netflix.

With these preliminary studies in hand, the report then analysed the daily top 10 streaming performance data published by Netflix from March 24, 2020, to July 4, 2021, along with 28-day Netflix viewership numbers compiled by the service What’s On Netflix.

As one would expect, the various limitations imposed by the lack of available data made it very difficult to build an accurate predictive model. In addition, where the earlier analyses had direct rental or view data (Blockbuster and Clickstream, respectively), here those statistics had to be estimated using just top 10 rankings along with total market size and external audience behaviour statistics. Despite these caveats and limitations, it was still possible to build a moderately strong predictive model.
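To illustrate the estimation problem, here is one hypothetical way to impute per-title views from rank alone: assume that viewing share by chart position follows a power law and scale by an external estimate of total daily views. The exponent and market size below are placeholders, not figures from the study.

```python
# A hypothetical sketch of estimating per-title views from rank alone,
# assuming a Zipf-like power-law share of viewing by chart position.
def estimated_views(rank: int, total_daily_views: float,
                    exponent: float = 1.2, chart_size: int = 10) -> float:
    """Allocate a day's total views across chart positions by a power law."""
    weights = [r ** -exponent for r in range(1, chart_size + 1)]
    share = (rank ** -exponent) / sum(weights)
    return total_daily_views * share

# e.g. a hypothetical market generating 5 million views/day across the top 10
for rank in (1, 2, 10):
    print(f"rank {rank:>2}: ~{estimated_views(rank, 5_000_000):,.0f} views")
```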

Lessons Learned

Predicting SVOD performance for specific titles was significantly limited by the lack of available data (top 10 vs. top 50, the first 28 days of run vs. the entire run, rankings without total views, etc.). Overall viewing behaviour does follow a regular and predictable pattern but has no significant predictive value within the top 10.

There are still a number of unknowns, which warrant further investigation. The study observed these factors, but their specific impact could not be assessed. For example:

• The increasing percentage of direct-to-VOD works in the top rankings vs. the much smaller contribution of direct-to-video titles in the traditional home entertainment mix. Among other things, this diminishes the predictive power of theatrical performance on the home entertainment market.

• Series outperforming movies due to binge viewing. Series rarely appeared in the Blockbuster top 50 but are a regular feature of the Netflix top 10. The cumulative viewing for all episodes contributes to a single ranking for the series, compounded by the fact that new viewers may go back to old seasons to “catch up” on the programme. Going back to view a prior instalment in a movie franchise does not contribute to the total views for the latest instalment.

• A lack of a standard definition for “view” and, therefore, a lack of accuracy when comparing or combining data sets. A Blockbuster rental was a single, discrete, and measurable event. What is a “view”? How much of a show must be watched to count? If one person watches a programme in several disconnected sessions, is that one view or multiple views? Are all data sources using the same definitions? A brief sketch after this list shows how different definitions change the count.

• The impact of recommendation engines: how much of a work’s performance is due to traditional characteristics (genre, cast, word-of-mouth, etc.) that are measurable and can be included in a predictive model, and how much is due to algorithmic promotion giving particular programmes preference over others.
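Returning to the “view” definition problem above, the same session log yields different totals under different counting rules. The two rules below are invented for illustration, not any service’s actual methodology.

```python
# Invented example: the same playback sessions counted under two different
# "view" definitions produce two different totals.
from collections import defaultdict

sessions = [  # (viewer, minutes watched) for one programme
    ("alice", 3), ("alice", 40), ("bob", 70), ("carol", 1), ("carol", 2),
]

# Definition A: every playback session counts as a view
views_a = len(sessions)

# Definition B: one view per unique viewer, counted only if that viewer's
# combined watch time across sessions reaches a 2-minute threshold
minutes_by_viewer = defaultdict(int)
for viewer, minutes in sessions:
    minutes_by_viewer[viewer] += minutes
views_b = sum(1 for total in minutes_by_viewer.values() if total >= 2)

print(f"definition A: {views_a} views; definition B: {views_b} views")
```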

Finally, the study shows that the movie or series descriptors (such as genre and creative type) traditionally used to predict home video rentals are important at the portfolio level, that is, for predicting overall performance across a diverse set of titles. However, these descriptors are not statistically significant predictors of an individual title’s performance in the SVOD market.

Performance is driven instead by a multitude of qualitative variables that are difficult for a third party to quantify reliably, such as cultural reach, film quality, acting performance, writing, social media buzz, recommendation algorithms, etc.

The resulting model can predict, with moderate reliability, how many people around the world watch Netflix’s programmes. As more detailed ranking and viewership data become available, the model will become more reliable, and streaming services will become less of a mystery.

For a copy of the full report authored by Ben French, with contributions by Richard W. Kroon and Bruce Nash, please contact Kroon at [email protected].