M+E Daily

Adobe Explores the Transformative Potential of GenAI

Adobe used one of the Strategy keynotes at the Adobe Summit in Las Vegas on March 26 to explain the enormous potential that generative artificial intelligence (GenAI) has for companies across many business sectors.

GenAI has “transformative potential,” especially when it comes to “scaling personalized customer experiences,” according to Adobe.

Attendees and those viewing the keynote remotely, live or on demand, were able to learn how GenAI can work for each person to “unlock creativity and increase customer engagement for your entire organization,” Adobe said.

A whopping 84% of AI decision-makers have said their executives are ready to adopt GenAI, according to Adobe. Now it’s up to each organization to figure out whether it’s ready.

During the keynote, Ely Greenfield, CTO of Adobe Digital Media, helped viewers separate hype from reality and explained how to make the most of GenAI while avoiding the potential pitfalls.

“Last year, we got this AI that could answer questions, conjure up images out of thin air and write functioning code,” Greenfield told Adobe Summit attendees. “And we were blown away last year by all of these amazing demos. But … this is the year that we have to put it all to work, transforming how we’re delivering customer experiences.”

He then took attendees on a deep dive into AI, including what he said are the “questions that you should be asking of any AI solution you use, whether you’re getting that from us, Adobe, or from somebody else.”

Greenfield also provided more color on Adobe’s latest advancements in Firefly and the new AI Assistant, and explained how “people like you are actually using … our Adobe products.”

GenAI capabilities have exploded in recent years thanks to a handful of recent inventions, including advancements in compute, he went on to say.

“Compute today is 100 billion times cheaper than it was in 1980…. [But] please don’t quote me on that number. I literally asked an AI and that’s what it told me. But it’s something like that. And all that compute actually allows us to take this theory, which was only theory 70 years ago, and run it at massive scale,” according to Greenfield.

He continued: “What that compute allows us to do is then take the basic theory and actually apply the art of understanding the use case, the customer, what uses we want to apply this to, and develop these new, very complex, higher dimensional functions that we can’t even visualize because we don’t think in 10,000 dimensions…. But we can figure out how to model … real world use cases.”

There are two new models “that have unlocked GenAI over the past few years,” he said. “And [when I] say model, what I really mean is functions, like curves, [and] the structure of the curve.”

The first of those models is the transformer model, which he said is “the curve for the model that lives at the heart of” Large Language Models (LLMs). “This is what drives all of the recent generative work around text and data.”
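To make that “curve” a little more concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer. The shapes and names are illustrative only and do not reflect any Adobe implementation.

```python
# Minimal sketch of scaled dot-product attention, the core operation
# of a transformer layer. Shapes and names are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Each output row is a weighted mix of the value rows V, with
    weights given by how well each query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity
    weights = softmax(scores)        # each row sums to 1
    return weights @ V

# Toy example: 4 tokens, each an 8-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (4, 8)
```

Stacking this operation with learned projections, layer after layer, is what gives LLMs their capacity to model text.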

The second model is “what’s called Diffusion Models,” he said, noting: “This is a different architecture of AI that has been driving all of the media work that we do with images and video and others. It’s at the heart of Firefly and a lot of other pieces out there.”
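The intuition behind diffusion models fits in a few lines: training data is progressively blended with Gaussian noise, and a network is trained to predict, and therefore remove, that noise. The sketch below is purely illustrative and is not Firefly’s actual architecture.

```python
# Minimal sketch of the idea behind diffusion models: blend clean data
# toward pure Gaussian noise; a network learns to reverse the blend.
# Purely illustrative; not Firefly's actual architecture.
import numpy as np

def forward_noise(x0, t, T=1000):
    """Blend a clean image x0 toward pure noise as t approaches T."""
    alpha_bar = np.cos(0.5 * np.pi * t / T) ** 2  # cosine noise schedule
    noise = np.random.default_rng(0).normal(size=x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * noise
    return xt, noise  # a model is trained to predict `noise` from (xt, t)

x0 = np.zeros((64, 64, 3))          # stand-in for a clean image
xt, eps = forward_noise(x0, t=500)  # t=500: an even mix of image and noise
print(xt.shape)                     # (64, 64, 3)
```

Generation runs the process in reverse: start from pure noise and repeatedly subtract the noise the network predicts until an image remains.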

He went on to say the “real value of GenAI is not creativity, as much as we talk about it as these creative AIs, because those models aren’t really creative, not the way a human is.”

Instead, the real value is leveraging AI to “keep your best people using their creativity instead of spending time on the rote production,” he said.

There are “three ways we do that,” he explained. “First is by accelerating the work of your most skilled people. These are people who can get the job done today, [and] AI can help them do it faster and easier by leveraging their skills for the more high-value work [and] offloading that rote production work to the AI as an assistant.”

The second way is “by enabling the people out there who have the creativity and the drive, but they don’t necessarily have the technical production skills yet to be successful,” he explained. “For those people, AI can just push them over that line where they can actually be enabled to do some of this work, instead of being bottlenecked by that small group of highly skilled technical production people who are doing the work [and whose time they probably can’t get].”

And third, he said, is by “actually offloading some of the most common tasks to AI and automation altogether.”

However, he pointed out: “Many of these rote production tasks today that no human really wants to spend their time on can actually be fully handled by AI today. [That said,] I wouldn’t hand it off to them completely, except in some limited cases. This stuff is amazing, but you still want humans in the loop to do approval and quality assurance.”
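That “humans in the loop” pattern is easy to picture in code. Below is a minimal, hypothetical sketch of an approval gate in which AI output must receive human sign-off before it ships; the names are illustrative and not any Adobe API.

```python
# Hypothetical sketch of a human-in-the-loop approval gate: AI output
# is queued for human review before publishing. Illustrative only.
from dataclasses import dataclass

@dataclass
class Draft:
    task: str
    output: str
    approved: bool = False

def generate(task: str) -> Draft:
    # Stand-in for an AI call that produces a candidate asset.
    return Draft(task, output=f"[AI-generated asset for: {task}]")

def review(draft: Draft, approve: bool) -> Draft:
    draft.approved = approve  # human judgment is the gate
    return draft

draft = generate("resize hero banner to 9:16")
draft = review(draft, approve=True)  # QA sign-off before publishing
if draft.approved:
    print("publish:", draft.output)
```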

AI Assistant

Earlier that morning, Adobe announced AI Assistant, which Shivakumar Vaithyanathan, VP of platform engineering and architecture at Adobe, described as a “natural language interface to your data.”
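Adobe did not detail how AI Assistant is built, but the general “natural language interface to your data” pattern typically works by having an LLM translate a question into a structured query that is then run against the data store. The sketch below is a hypothetical illustration of that pattern, with a hard-coded stand-in for the LLM call.

```python
# Illustrative sketch of the general "natural language to data" pattern:
# an LLM turns a question into a structured query, which is then run
# against the data store. llm_to_sql is a stand-in; Adobe has not
# published AI Assistant's implementation.
import sqlite3

def llm_to_sql(question: str) -> str:
    # In a real system an LLM would generate this; hard-coded here.
    return "SELECT channel, SUM(conversions) FROM campaigns GROUP BY channel"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaigns (channel TEXT, conversions INTEGER)")
conn.executemany("INSERT INTO campaigns VALUES (?, ?)",
                 [("email", 120), ("social", 340), ("email", 80)])

sql = llm_to_sql("Which channels drive the most conversions?")
for row in conn.execute(sql):
    print(row)  # ('email', 200) then ('social', 340)
```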

Adobe has been “engaging customers in a private preview of AI Assistant, and they’ve seen the potential that this can bring,” according to Rachel Hanessian, principal product manager at Adobe.