Connections

M&E Journal: Video in the Cognitive Computing Era

By Steven L. Canepa and Chad M. Andrews, IBM

Video is the most popular form of content consumption on the planet, generating behemoth revenues through traditional business models, and compelling the average American to spend 5.5 hours viewing daily, according to eMarketer. By 2018, the global pay-TV market will see its 1 billionth subscriber, generating $300 billion in global services revenues, predicts ABI Research, and another $200 billion by way of television advertising, per Statista.

Yet despite the strength of traditional distribution models, a mobile and socially savvy customer is consuming more video from the cloud. Facebook and Snapchat users combine to watch 15 billion videos each day, Fortune reported, and over-the-top streaming subscribers are projected to grow by three times to reach 330 million-plus by 2019, according to Juniper Research.

All told, online video ad revenue will reach $24 billion by 2018, predicts ExchangeWire, and is growing by double digits.

Video is already the dominant data type flowing across global networks, but it is still growing – and fast. By 2019, video will outweigh all other consumer Internet traffic by 4:1, and will exceed all other data on business networks by 3:2 as more video is used within the enterprise for communications, training and marketing, according to Cisco.

Use of video is growing across industries because it is effective. Some examples of this include:

* 70 percent of marketing professionals report that video converts better than any other medium, reports Marketingprofs.com;

* 64 percent of consumers are more likely to buy a product after watching a video about it, reports Social Media Today;

* 79 percent of students in higher education voluntarily watch videos to enhance their understanding of a topic, reports Science Daily.

Despite its clear emergence as a first-class data type, the exploding popularity of video masks a startling truth: that the upside of video is still largely untapped. Slowly, however, it is beginning to be unlocked by modern video cloud platforms and cognitive computing.

Just as the human mind uses visual and aural cues to navigate the world around us, video data provides valuable visual and aural information to machines.

Sub1 is a robot that used video to help set a world record by solving the Rubik’s Cube in under a second, a sign that machines are becoming essential collaborators in the mastery of a growing number of tasks.

Rubik’s Cube is a relatively easy problem to solve. For any combination of colors across the cube’s faces, there is a pattern that can be recognized and a corresponding optimal turn. Arriving at effective solutions to many of the world’s problems requires more advanced weighing of information against historical records and decision trees.

Japanese telecommunications giant SoftBank’s Pepper customer support robots use video to interpret emotions through facial expressions and vocal tones. Encounters measured to result in the best customer support experiences become learned behaviors.

Almost invisibly over the past few years, we entered the cognitive computing era, in which the human brain and data are linked. Search engines and traffic and weather apps have long surpassed traditional methods for finding relevant and timely information.

Similarly, a vast range of business challenges across industries will soon utilize video data along with cognitive computing algorithms to help answer valuable questions, such as: By scoring similar police encounters against outcomes, is it possible to derive best practices and formulate training? Can video cameras in a theme park help identify when a guest is happy, bored or hungry and help inform real-time recommendations and offers? Do a skin lesion’s characteristics, compared against a historical database of similar images, suggest a likelihood of it being cancerous?

As the Internet of Things (IoT) takes shape, more cameras will connect in a matrix of images, sounds and data to solve more advanced problems, like: Can traffic signals, sensors and cameras combine to inform cars to avoid hazards? Can sensors and cameras at a football game combine to identify a game’s strategic keys to victory and help coaches make decisions?

Data improves how video is targeted, packaged and served

Beyond traditional broadcasters, cable and satellite players, every enterprise is to some degree becoming a media company, tasked with understanding what exists within its video repositories and how to make it useful. Media companies face unique challenges, such as:

* An epic volume of video is created globally every day, making it exceedingly difficult to manage, let alone understand;

* In the expanding digital universe, consumers are faced with infinite competition for finite attention;

* Consumers increasingly want content coupled with other kinds of information to form bespoke experiences spanning screens;

* It can be difficult to measure how a watched video led to a result, whether it is a product purchased or a skill learned;

* Consumption is fragmented across platforms, hindering reach and audience measurement.

To solve these challenges, modern video platforms require changes to the application architecture tailored to individual use cases. For best results, these changes follow organically from world-class experience design, beginning with the principles of immersive experience and working backwards.

Effective video cloud solutions ease the burden of ingesting, managing and processing video by decomposing underlying video processes into standardized services that can be recomposed from commodity components into powerful workflows, which can then be adapted or changed on the fly.
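The decompose-and-recompose idea can be sketched in a few lines. This is a minimal illustration, not a real platform API: each standardized service is a plain function over a job record, and the service names (`ingest`, `transcode`, `thumbnail`, `publish`) are hypothetical.

```python
# Minimal sketch of a recomposable video workflow: each stage is a
# standardized service (here a plain function) that takes and returns
# a job dict, so stages can be re-ordered or swapped at runtime.
# All service names are hypothetical illustrations, not a real API.

def ingest(job):
    job["steps"].append("ingest")
    return job

def transcode(job):
    job["steps"].append("transcode:h264")
    return job

def thumbnail(job):
    job["steps"].append("thumbnail")
    return job

def publish(job):
    job["steps"].append("publish")
    return job

def run_workflow(stages, job):
    """Apply each stage in order; because the workflow is just a list,
    it can be adapted on the fly by editing the list."""
    for stage in stages:
        job = stage(job)
    return job

# Compose a workflow, then adapt it by inserting a stage mid-stream.
workflow = [ingest, transcode, publish]
workflow.insert(2, thumbnail)  # changed on the fly, no stage rewritten
result = run_workflow(workflow, {"asset": "promo.mp4", "steps": []})
```

Because each service is independent and interchangeable, adapting the workflow is a matter of editing the list rather than rewriting any processing logic.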

These solutions are also capable of comparing content and viewing characteristics across multiple sources of customer data to better understand bespoke audience preferences. Common sources of data include:

1. First party (direct data about consumers);

2. Second party (data from your own data management platform assembled from multiple data sources);

3. Third party (data from other platforms operated by data services);

4. Metadata (contextual data about video useful for automating processes and determining associations between videos and other content).
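A sketch of how these four sources might be joined into a single viewer profile follows. The viewer ID, field names and values are invented purely for illustration; real data management platforms key and merge records in their own ways.

```python
# Hypothetical sketch of combining the four data sources above into
# one viewer profile keyed by a shared viewer ID. All field names and
# values are invented for illustration.

first_party  = {"viewer42": {"email_optin": True}}          # direct consumer data
second_party = {"viewer42": {"segments": ["sports", "drama"]}}  # own DMP data
third_party  = {"viewer42": {"household_size": 3}}          # external data services
metadata     = {"clip9": {"topics": ["football"], "duration_s": 312}}  # video context

def build_profile(viewer_id, watched_clip):
    """Merge per-viewer records from each party, then attach the
    contextual metadata of the video the viewer watched."""
    profile = {}
    for source in (first_party, second_party, third_party):
        profile.update(source.get(viewer_id, {}))
    profile["last_watched"] = metadata.get(watched_clip, {})
    return profile

profile = build_profile("viewer42", "clip9")
```

The metadata entry is what lets the merged profile connect viewing behavior back to content characteristics, which is the association the article describes.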

A strong video cloud platform will accelerate time to value by exposing micro-services in a Platform-as-a-Service (PaaS), allowing developers to expediently add, test and scale video-rich building blocks into any application, with API access to powerful cognitive computing modules.

It is vital that you work with vendors that understand the idiosyncrasies and ecosystems surrounding each unique data type. But assuming you can organize and model data effectively, cognitive computing offers invaluable tools to derive context and intent and make improvements to supply chain processes and engagement.

Processes that traditionally happened in silos, like ad sales and traffic, can be radically improved by cognitive computing solutions that consolidate and provide visibility into multiple data sources, and feed measurable outcomes back in a closed loop to test and optimize decisions. Enterprises that do this well have the ability to revolutionize how video is distributed, custom curating around desired business outcomes.

By micro-segmenting audiences and learning from results, cognitive computing promises to use large samples of real world outcomes to prescribe actions as diverse as what marketing offer to make, what storyline or training to tailor, and what storage system or content delivery network to load balance across.
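The closed loop described above can be sketched as a simple tally of real-world outcomes per (segment, offer) pair, with the best-performing offer prescribed for each micro-segment. The segment and offer names are illustrative assumptions, and a production system would use far more sophisticated models than a raw conversion rate.

```python
# Sketch of closed-loop offer selection under micro-segmentation:
# tally measured outcomes per (segment, offer) pair and prescribe the
# offer with the best observed conversion rate. Names are invented.
from collections import defaultdict

outcomes = defaultdict(lambda: [0, 0])  # (segment, offer) -> [conversions, trials]

def record(segment, offer, converted):
    """Feed a measured outcome back into the loop."""
    stats = outcomes[(segment, offer)]
    stats[1] += 1
    if converted:
        stats[0] += 1

def best_offer(segment, offers):
    """Prescribe the offer with the highest observed conversion rate
    for this segment (ties go to the first offer listed)."""
    def rate(offer):
        conversions, trials = outcomes[(segment, offer)]
        return conversions / trials if trials else 0.0
    return max(offers, key=rate)

# Record outcomes from the field, then let the loop decide.
record("sports_fans", "discount", True)
record("sports_fans", "discount", False)
record("sports_fans", "free_trial", True)
record("sports_fans", "free_trial", True)
offer = best_offer("sports_fans", ["discount", "free_trial"])
```

Here "free_trial" converts at a higher observed rate than "discount" for the sports_fans segment, so it is the prescribed action; each new recorded outcome revises that choice.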

In each of these cases, cognitive computing can do what it does best: helping humans see across seas of data and find the next, most logical move.

This article appears in the Spring 2016 M&E Journal.