
IBM Security Exec: AI Presents Opportunities but Also Challenges for Cyber Risk

NEW YORK — Artificial intelligence (AI) gives organizations opportunities to reduce their overall cyber risk, but it also confronts them with new challenges, according to Ravi Mani, IBM distinguished engineer, chief information security officer (CISO) and director of IBM Watson and Cloud Platform Security, and other CISOs who spoke April 29 during the Ai4 Cybersecurity conference panel session “A CISO Perspective on AI.”

About five or six years ago, IBM had already deployed facial recognition, and deep learning (DL) and machine learning (ML) appeared to be “coming up to speed,” Mani recalled. So the company started looking into how it could “leverage” DL and ML “algorithms for our cybersecurity,” he told attendees.

IBM Research and “several other IBM research entities around the world contributed to” that project, he noted, adding that the company had its IBM Watson AI platform “learn the language of cybersecurity.” Then, “after achieving that, we embedded” Watson for Cybersecurity into IBM’s Watson Curator Software-as-a-Service (SaaS) solution, he pointed out.

One major challenge IBM faced was that its analysts didn’t “have enough time” or knowledge to deal with the rapidly growing volume of unstructured data the company held but wasn’t using effectively, he said. “To address the time-skill” gap, and to keep pace with a growing “threat landscape,” IBM “developed and deployed the AI models” for its Security Operations Center, he noted.

Like the other panelists, however, he agreed that AI is not magic and won’t solve all of an organization’s problems. The technology also creates new challenges, including legal questions about how data such as images of people are used, Mani said.

AI can help an organization take its analysts, other staff and capabilities and “target them much better,” ex-Time Warner CISO Gary Owen said. The technology also presents a “terrific opportunity to mine data in a way that you would not normally be able to,” he noted.

But Owen warned that AI creates privacy challenges, adding, “I’m not sure I’m ready to take AI as a decision-maker to the very end of a lot of activity.” For one thing, it’s not a given that the models AI-driven systems produce are accurate to begin with, he noted.

“We already have too much information” now, and AI may allow organizations to “shrink” that data down “to digestible chunks that we can actually” act upon, he said. But human beings should still be the ones making decisions based on that information, he said.

One “challenge” that health care company MultiPlan faces when using AI is that “a lot of insurers are coming to us and saying” they don’t want the company using any of their patients’ data, “even in its masked form, for any services” beyond what those insurers have asked MultiPlan to do, the company’s CISO, Erinmichelle Perri, told attendees.

Earlier that morning at the conference, security experts from the U.S. National Security Agency (NSA) and Department of Homeland Security pointed to a wide variety of benefits and challenges created by AI.

“AI is not a panacea for all the world’s problems,” Mark Segal, NSA chief of computer and analytic sciences research, told the conference. “If it’s used properly, it’s great,” but challenges around the technology include the rise of “adversarial AI” and “deep fakes,” he said. As an example of adversarial AI, he noted that if an adversary of the U.S. finds out the NSA is “using a machine learning algorithm to solve a certain problem, he may defeat that algorithm.”
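
The kind of evasion Segal alluded to is well documented in the research literature. As a purely illustrative sketch, and not anything the NSA described, the following Python snippet uses the widely known fast gradient sign method (FGSM), assuming white-box access to a model’s gradients; the toy linear model, the fgsm_perturb helper and the epsilon value are all hypothetical choices made for this example.

```python
import torch
import torch.nn as nn

# Toy stand-in for a victim classifier; a real attack would target the
# defender's actual (or approximated) model.
model = nn.Sequential(nn.Linear(20, 2))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, label, epsilon=0.25):
    """Fast gradient sign method: shift each input feature a small step
    in the direction that most increases the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), label).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 20)                  # a benign input
with torch.no_grad():
    y = model(x).argmax(dim=1)          # the class the model assigns it

x_adv = fgsm_perturb(x, y)              # nudge the input against that class
print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The same idea underpins attacks in practice: an adversary who learns or approximates a defender’s model can search for small input changes that flip its decisions.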

Meanwhile, it’s “critical” to find ways to deal with the growing number of generated deep fake images that “look so close to the real one, it’s very hard even for an expert to be able to tell those things apart,” he said.

Sustained investment in AI research and development is also urgently needed, Erin Kenneally, portfolio manager in the Cyber Security Division of the U.S. Department of Homeland Security’s Science & Technology Directorate, warned attendees. Among the reasons are robustness concerns: AI is susceptible to pre- and post-training attacks, which can have a dramatic impact on a model’s predictions, according to Kenneally.
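
For context, the “pre-training” attacks Kenneally referred to are commonly known in the literature as data poisoning: corrupting a model’s training data so that its later predictions degrade. The sketch below is a minimal, hypothetical illustration using scikit-learn on synthetic data; the label-flipping attack, the 25 percent poisoning rate and every variable name are assumptions made for this example, not details from her remarks.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Pre-training ("poisoning") attack: flip the labels on a quarter of the
# training set before the model ever sees it.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) // 4, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean-data accuracy:    {clean_model.score(X_te, y_te):.3f}")
print(f"poisoned-data accuracy: {poisoned_model.score(X_te, y_te):.3f}")
```

Even this crude, untargeted label flip typically drags down test accuracy; real-world poisoning attacks tend to be subtler and aimed at specific misclassifications.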