
M&E Journal: Content Gravity Pulling Media Workloads into the Cloud

By Bhavik Vyas, Global Ecosystem Leader: M&E, Amazon Web Services

Media companies have traditionally relied on on-premises infrastructure for two primary reasons: performance requirements and compliance with industry-imposed security controls. However, the industry as a whole can no longer keep up with ever-increasing infrastructure requirements, nor does it want to; companies would rather focus on their core competencies of content production and distribution.

To date, cloud-based solutions have primarily been leveraged for media distribution scenarios, but due to a number of enabling factors, media companies are now using cloud solutions for the entire media workflow, from contribution through distribution, for live and on-demand content.

Looking forward, the cloud will enable new paradigms in media production and distribution that will increase efficiency and agility, provide more choice, and lead to collapsing cost structures.

Why the cloud?
Industry factors contributing to the exponential growth in IT requirements include: the complexity of rendering visual content, the vast number of assets that comprise a production, the increasing size of content (SD, HD, 3D, 4K, 8K, etc.), the proliferation of distribution devices and fragmentation of formats, and the complexity of new consumer experiences (second screen, content discovery, etc.).

Cloud-based solutions can help address these issues by providing a global, scalable, elastic infrastructure platform that is available in minutes, not months, with a better security profile than many media companies can provide internally, and pay-as-you-go pricing that lets companies shift capital expenditures to operating expenses. Furthermore, media company executives cannot ignore the rapidly dropping cost of the cloud, driven by the economies of scale that cloud providers achieve.

The cloud enables media companies to spend more time growing and less time worrying about the logistics of growth. Business decisions are no longer constrained by physical infrastructure availability.

Traditionally, media companies planned release schedules around solving business problems in a linear, serial manner, constrained by how long it took to process content on the hardware available in their data centers.

With scalable cloud infrastructure, you can provision the right amount of capacity for your current needs and scale it up or down as required, enabling you to solve business problems in scalable, non-linear ways.
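
Elasticity of this kind is straightforward to express in code. As a rough sketch, assuming a hypothetical Auto Scaling group of transcode workers (the group name and capacities are illustrative), a release push and the return to steady state are each one API call:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Burst the transcode fleet for a release window...
autoscaling.set_desired_capacity(
    AutoScalingGroupName="transcode-fleet",  # hypothetical group name
    DesiredCapacity=200,                     # scale out to hit the deadline
)

# ...then shrink back once the window closes.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="transcode-fleet",
    DesiredCapacity=10,                      # steady-state capacity
)
```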

What about security?
The Motion Picture Association of America (MPAA) Best Practices for Content Security provide specific recommendations for securing media content. They typically require a third-party audit of the content infrastructure to verify compliance with those security controls before any premium content is stored, processed, or distributed on it. Specific content owners often impose additional security controls that go above and beyond the MPAA's.

Security is a top priority for Amazon Web Services (AWS).

The AWS infrastructure is designed and managed in alignment with international regulations, standards, and best practices including ISO 27001, PCI DSS, and SOC, as well as industry-specific security control sets such as FISMA, HIPAA, and MPAA.

Beyond certifications and alignment with security control sets, AWS provides services that customers can use to secure content and applications. These include content encryption in transit and at rest in storage; an Identity and Access Management (IAM) service that allows companies to define security access for individuals or groups while managing access to specific resources; and services such as CloudTrail that let companies log and review user activity at the API level.

This allows enterprises to run comprehensive security analysis, and better manage their governance and compliance efforts.
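
As a minimal sketch of how those pieces fit together (the group, bucket, and user names below are hypothetical), a company might grant its post-production group least-privilege access to a single working bucket, then use CloudTrail to review what any individual actually did:

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege policy: the post-production group may read and write
# only its current project's working bucket, and nothing else.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::postprod-project-x/*",
    }],
}
iam.put_group_policy(
    GroupName="post-production",
    PolicyName="project-x-working-set",
    PolicyDocument=json.dumps(policy),
)

# CloudTrail then surfaces the API calls a given user made.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "editor-jane"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventName"], event["EventTime"])
```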

Paradigm shift: Content gravity leads to access
You may be familiar with Moore's Law, the observation that processor performance roughly doubles every 18 months. That observation has held true for decades. There are similar laws for storage and networking.

For storage, Kryder's Law observes that magnetic disk density increases significantly faster than processor performance; for the sake of this illustration, say it doubles every 12 months. For network connection speeds, Nielsen's Law states that connectivity grows 50 percent per year, which works out to doubling roughly every 21 months. Plot these three trends together and the relationship over time becomes clear.

It’s pretty easy to extrapolate this over time and see that network connectivity is likely to be the gating factor.
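
A back-of-envelope calculation makes the gap concrete, using the doubling periods cited above (18, 12, and 21 months):

```python
# Growth factor over a decade for each doubling period (in months).
laws = {"Compute (Moore)": 18, "Storage (Kryder)": 12, "Network (Nielsen)": 21}

for name, months_to_double in laws.items():
    factor = 2 ** (120 / months_to_double)  # number of doublings in 120 months
    print(f"{name}: ~{factor:,.0f}x in 10 years")

# Compute (Moore): ~102x in 10 years
# Storage (Kryder): ~1,024x in 10 years
# Network (Nielsen): ~53x in 10 years
```

Storage capacity races ahead while the pipes grow slowest, which is why moving the data becomes the bottleneck.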

As files get bigger they may run into networking limitations. This will be true for source assets in film production and broadcast source feeds, as well as for distribution scenarios. Consequently, you need to look at what you have to do with the content.

The goal should be to minimize the movement of the content by placing it into a service that is surrounded by all the technology needed to transform the content into a monetizable asset.

So, rather than following the traditional model of transferring large content files through each stage of the media production and distribution workflow, store the content in cloud storage and bring each stage of processing to the platform where the content lives. This is the concept of content gravity.

It is similar to the concept of data locality, which says you should keep your data near the processing resources; in the media industry, though, the data is so large that it develops a gravity of its own.
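
One way to realize this on AWS (a sketch of the pattern, not the only implementation; the bucket name and function ARN are hypothetical) is to have the storage service announce new arrivals so that the processing comes to the content:

```python
import boto3

s3 = boto3.client("s3")

# When a new mezzanine master lands in the content lake, trigger a
# processing function next to it instead of shipping the file elsewhere.
s3.put_bucket_notification_configuration(
    Bucket="studio-content-lake",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": (
                "arn:aws:lambda:us-east-1:123456789012:"
                "function:process-on-ingest"  # hypothetical function
            ),
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "suffix", "Value": ".mxf"},  # mezzanine masters only
            ]}},
        }],
    },
)
```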

To that end, as media companies look to the cloud as a solution to their infrastructure needs, one of the key determining factors revolves around storage: large content libraries are often slow and expensive to move, potentially taking months to transfer into cloud-based storage.

From there you need scalable, tiered storage for the current catalog, back catalog, and archive, along with ways to manage across those tiers, promoting and demoting content to take advantage of monetization opportunities.
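
On AWS, for instance, that tier management can be expressed as a lifecycle policy rather than an operational project. A minimal sketch, with a hypothetical bucket, prefix, and transition ages:

```python
import boto3

s3 = boto3.client("s3")

# Demote content automatically as it ages: Standard for the current
# catalog, Infrequent Access for the back catalog, Glacier for archive.
s3.put_bucket_lifecycle_configuration(
    Bucket="studio-catalog",  # hypothetical bucket
    LifecycleConfiguration={"Rules": [{
        "ID": "catalog-tiering",
        "Filter": {"Prefix": "titles/"},
        "Status": "Enabled",
        "Transitions": [
            {"Days": 90, "StorageClass": "STANDARD_IA"},  # back catalog
            {"Days": 365, "StorageClass": "GLACIER"},     # archive
        ],
    }]},
)
```

Promoting an archived title back out for a monetization window is then a restore request against the object rather than a physical retrieval.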

Given those storage considerations, the selection of a cloud storage provider will likely be one of the first and biggest cloud infrastructure decisions a media company makes, since its content library will likely stay in that initial provider's infrastructure for decades.

How are media companies leveraging the cloud?
With that perspective in mind, the sweet spot for utilizing cloud infrastructure has been in B2C video distribution at scale.

If we explore media distribution workloads that utilize cloud-based processing at scale, the primary compute task is transcoding of video into the myriad distribution formats in order to reach consumer devices such as set-top boxes, PCs, tablets, phones, connected TVs, and so on.
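
With Amazon Elastic Transcoder, for example, that fan-out is a single job with multiple outputs. A sketch, with placeholder pipeline and preset IDs to be replaced by real values from your account:

```python
import boto3

transcoder = boto3.client("elastictranscoder")

# One mezzanine input, several device-targeted renditions.
job = transcoder.create_job(
    PipelineId="1111111111111-abcde1",  # placeholder pipeline ID
    Input={"Key": "masters/episode-101.mxf"},
    Outputs=[
        # Preset IDs are placeholders for, e.g., an HLS and a 720p MP4 preset.
        {"Key": "hls/episode-101", "PresetId": "1351620000001-200010"},
        {"Key": "mp4/episode-101-720p.mp4", "PresetId": "1351620000001-000010"},
    ],
)
print(job["Job"]["Id"], job["Job"]["Status"])
```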

Once content is transcoded and packaged, it is often distributed via a global content delivery network. Examples of this are Netflix, Amazon Instant Video, and Vessel, all of which utilize Amazon Web Services. However, companies are also looking for ways to use the cloud further upstream in the media production workflow (broadcast operations, content creation, post production, etc.).

As we move up the media workflow to content production scenarios, the primary use of the cloud has been render farms, where visual effects and entire scenes are generated. Content production companies are looking to extend the cloud to the entire production workflow, but many of these scenarios have infrastructure requirements tied to specific hardware and/or performance characteristics that can be difficult to meet.

For example, editing and post production of 4K video can require extremely high-speed storage and low-latency network connectivity: upwards of 1 Gbps for uncompressed 4K source and roughly 300 Mbps for lossless compressed editing formats.
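
Those figures are easy to sanity-check. Assuming UHD 4K at 24 fps with 10-bit 4:2:2 color, which averages about 20 bits per pixel:

```python
# Back-of-envelope bitrate for uncompressed UHD 4K video.
width, height = 3840, 2160   # UHD "4K" frame
fps = 24
bits_per_pixel = 20          # 10-bit 4:2:2 averages 20 bits/pixel

bps = width * height * fps * bits_per_pixel
print(f"~{bps / 1e9:.1f} Gbps uncompressed")  # ~4.0 Gbps
```

That is well upwards of 1 Gbps before any editing-friendly compression is applied.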

Broadcast scenarios may require signal acquisition from satellite or component connectivity via SDI.

The question comes down to how to bridge these physical world requirements with the virtual world of the cloud.

Increased availability of high-speed fiber connectivity has begun to enable these scenarios. For example, companies such as All Mobile Video now provide signal acquisition (satellite, private fiber rings, etc.) as a service and connect directly via high-speed fiber into cloud service providers.

This allows content to be pushed over dedicated fiber into cloud platforms for processing at scale and global distribution, and it lets media companies get out of the capital-intensive business of buying and maintaining expensive satellite dish hardware, integrated receiver-decoders (IRDs), and monitoring equipment.

Additionally, application virtualization technologies are enabling content production scenarios to be more secure and more agile.

Traditionally, content creation and manipulation applications such as video editing or 3D modeling have required very expensive, high-end desktop machines with powerful processors, lots of memory, and GPUs. Application virtualization technologies enable these applications to run in the cloud while the end-user experience and UI are streamed to client devices.

By running the applications in the cloud and streaming the UI, you get the rich thick-client experience on any device capable of rendering the UI stream, which typically requires about the same bandwidth as watching a normal streaming video; that means laptops and tablets can drive those high-end applications. Another benefit of this model is that all of the source content stays centrally stored, so security and access are much more tightly and centrally controlled.
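
As a sketch of the underlying mechanics (the AMI and key pair below are placeholders; g2 was the GPU instance family of this era), the cloud workstation is just a GPU-backed instance hosting the application, with only the rendered UI streamed back:

```python
import boto3

ec2 = boto3.client("ec2")

# A GPU-backed cloud workstation: the application runs here, and only
# the rendered UI is streamed to the artist's laptop or tablet.
reservation = ec2.run_instances(
    ImageId="ami-0abc1234example",  # placeholder AMI with the editing suite
    InstanceType="g2.2xlarge",      # GPU instance for interactive graphics
    KeyName="artist-workstation",   # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
print(reservation["Instances"][0]["InstanceId"])
```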

And finally, this model makes it very easy to spin up contract workers, since they can use just about any device to work on the content rather than needing a high-end machine provisioned and set up for them.

As media companies evaluate use of the cloud, having a rich ecosystem of solutions available on the cloud platform is a critical consideration.

Whether it is point solutions such as transcoding, asset management, or DRM, or integrated end-to-end video production and distribution workflows, companies expect the trusted solutions they have used for years, but they want those solutions to leverage the power and scale of cloud-based platforms. A developing industry trend has media companies shifting from wanting (or needing) to understand and control every level of the technology stack toward seeking managed solutions instead.

As the complexity of the scenarios and technologies to address those scenarios continues to increase, it is often no longer tenable for media companies to build and maintain IT infrastructures and applications themselves.

What does the future hold?
The size, or gravity, of 4K content will continue to drive new paradigms in content production and distribution. Media companies need to find ways to tame complexity, increase agility, and drive costs down.

Use of the cloud may have been a competitive advantage initially, but with the impending arrival of 4K content it is fast becoming a core competency that media companies need in order to remain competitive, enabling them to spend more time growing and less time worrying about the logistics of growth.


—————————————————-
Bhavik has worked in the IT and communications field for over 15 years at leading technology companies including HP, Agilent, Reliance Communications, and Aspera. He spent roughly four years at Aspera as Director of Cloud Services & Partnerships, where he managed relationships with partners such as AWS, EMC, HP, Microsoft, IBM, and Adobe, and worked with leading M&E companies including Netflix, Amazon Instant Video, Sony, WB, UFC, UEFA, and Deluxe on deploying cloud solutions.