M+E Daily

NAB 2022: MEDCA, CommScope Explore How AI Can Be Leveraged for M&E

How well the artificial intelligence (AI) and machine learning (ML) systems that media and entertainment (M&E) companies are investing heavily in will work, and how well they will integrate across business divisions and with service provider partners, starts with the design and installation of proper data center infrastructure that meets data industry standards, according to the Media and Entertainment Data Center Alliance (MEDCA) and network infrastructure provider CommScope.

A facility’s design, on-premises storage and compute, and connectivity all directly determine an organization’s ability to leverage AI and ML across its business.

Meanwhile, workflows in this age of Ultra High-Definition (UHD) require more connectivity and throughput than ever before.

During the panel session “How AI and Other Advanced Computing Integrate Into Production,” held April 24 at the Intelligent Content Theatre during the NAB Show in Las Vegas, Sean Tajkowski, MEDCA technical director and co-founder, and Jason Bautista, solutions architect, enterprise strategy and technology at CommScope, discussed the similarities between a remote or virtual production facility and multi-tenant and edge data centers.

When it comes to high-speed, low-latency connectivity, design and planning can take inspiration from data center concepts that are quickly becoming a foundational requirement for any business plan addressing expansion and growth in the M&E industry.

“We’re here to talk about AI and machine learning, which is explosive right now,” Tajkowski said at the start of the session.

“We may consider AI a new thing given all the attention that it’s received recently in the news and all the applications,” he noted, asking Bautista to get into the technology’s history and explain why it’s so prolific today.

Taking a Trip Down Memory Lane

AI is “so prolific because we read about it all the time in the news,” replied Bautista, adding: “We have Elon Musk telling us that the computers are going to kill us all or, no, the computers are going to help us all.”

But AI actually dates back to about 1950, when the famed mathematician Alan Turing predicted we were a generation away from computers holding typed conversations in which a human would not be able to tell that the other party was not human, he said.

Since then, Google has made it possible for its voice assistant to call a restaurant to reserve a table, put the reservation into your calendar automatically, and do it all without the person at the restaurant realizing they’re talking to a computer, he said.

“My first story I like to tell about AI is that I had a service called GrandCentral and GrandCentral was actually the predecessor to what became Google Voice. And so it would actually type out and send me messages of my voicemails that was on there,” he recalled.

That was great, he said, “up until my Filipino mother left me a voicemail and I got a lot of ‘cannot transcribe.’” Then Google bought GrandCentral and, “all of a sudden, as you started putting more data through this, the intelligence of the compute was really starting to be able to catch up,” he told viewers.

That same capability is now in “everybody’s phone and everybody’s pocket and everybody’s cloud,” he said, adding: “This is something that we’re really seeing. And when we talk about AI in this marketplace, it’s not AI. AI is a simple thing. Here are my choices. I feed the data and based off of the data, here are the choices I should be able to make.”

Instead of AI, however, “what we should be interested in is deep learning and machine learning because, as I start iterating on that, I should have a feedback loop that makes my process smarter,” he said.
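That feedback loop is the core of machine learning: each iteration measures how wrong the model was and uses that error to improve the next pass. As a rough sketch (a hypothetical illustration, not anything shown in the session), a minimal version of such a loop in plain Python might fit a line to data by gradient descent, feeding each iteration’s error back into the model:

```python
# Minimal feedback-loop sketch (hypothetical example): fit y = w * x to noisy
# data by gradient descent. Each iteration measures the error (the feedback
# signal) and nudges the model parameter to reduce it on the next pass.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (x, y) pairs, roughly y = 2x

w = 0.0            # initial guess for the model parameter
learning_rate = 0.01

for step in range(200):
    # Feedback: average gradient of the squared error over the data.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Adjustment: move the parameter in the direction that reduces the error.
    w -= learning_rate * grad

print(f"learned w ≈ {w:.2f}")  # converges toward ~2.0 as iterations accumulate
```

After a couple of hundred iterations the parameter converges toward the true slope of roughly 2, which is the “smarter process” Bautista described: improvement achieved purely by iterating on feedback.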

All this tech won’t replace people as many still fear, he said, explaining: “It’s not about replacing the person who knows the craft because the craft is still important. What it is, it’s about letting the person who knows the craft have the time to do the craft and not look at the different feeds to try to figure out which data is important for me.”

The technology can aid people in multiple industries, including the M&E and financial sectors.

Understanding Data Centers

Data centers play a key role in being able to harness AI and ML. But there is a “misunderstanding of what a data center is,” Tajkowski said. Many people think a data center is “hyperscale: these aisles and rows of these massive things,” he said. But a data center can actually be something as small as a watch, he added.

Agreeing, Bautista said: “Data centers really can be anything.” But the “one thing that people always forget about” when thinking about hyperscale is the “scale,” he noted, explaining that the same design principles scale down to networks of all sizes, including edge data centers.

Meanwhile, the “base component of the compute that you see in the cloud [has] become more powerful,” he said. After starting out with central processing units (CPUs), along came graphics processing units (GPUs), he said, noting: “We finally figured out that, oh, floating-point math is really good [and] graphic cards are really good at floating-point math because of all the things that they have to render.”

Processing has only become more complex since then, he said, pointing to Google’s tensor processing units (TPUs), developed because the company wanted to “accelerate the GPU.” As a result, we now have “three levels of acceleration to crunch through more and more and more data,” he said, predicting we’ll see further acceleration.
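The practical effect of that acceleration hierarchy is that the same floating-point workload can run on whichever processor is available. As a hedged sketch (the panel named no framework; PyTorch and its CUDA backend are assumed here purely for illustration, and TPUs would require a separate backend such as torch_xla), the pattern looks like this:

```python
import time

import torch  # assumed dependency; the panel did not name any framework

# Pick the fastest available device: GPU if present, otherwise fall back to CPU.
# (TPUs need a separate backend such as torch_xla and are omitted from this sketch.)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A large matrix multiply is the kind of floating-point workload GPUs excel at.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)

start = time.perf_counter()
c = a @ b  # thousands of fused multiply-adds per output element
if device.type == "cuda":
    torch.cuda.synchronize()  # GPU work is asynchronous; wait before timing
elapsed = time.perf_counter() - start

print(f"matmul on {device}: {elapsed * 1000:.1f} ms")
```

The matrix multiply is exactly the sort of floating-point math Bautista credited GPUs with excelling at; on typical hardware it often runs many times faster on the GPU than on the CPU.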