Key Strategies from RSA 2024 for Securing Generative AI Projects

Generative AI is this decade’s cloud computing: exciting and urgent, yet fraught with poorly understood peril. In the previous decade, cloud computing arrived on the technology scene as a revelation. Met initially with skepticism by some, cloud’s benefits soon proved irresistible to most organizations; today, businesses typically use terms like “cloud first” or “cloud forward” to describe their philosophies for data management and storage.

Lessons from Cloud Computing Security Failures

The rapid adoption of cloud computing brought numerous advantages, such as scalable storage solutions and enhanced collaboration capabilities. However, along the way, businesses made a lot of mistakes, especially with security. Most significantly, they failed to grasp the differences between securing data in the cloud and protecting it on-premises, often leaving data vulnerable to configuration errors and other simple mistakes. Gartner, for example, predicted in 2015 that through 2020, 95% of cloud security failures would be the customer’s fault. Many organizations were also slow to see the shortcomings of the traditional castle-and-moat approach to cyber defense in a cloud-oriented world and have only recently begun to embrace the principles of zero trust.

At the RSA Conference in San Francisco this week, many of the world’s leading cybersecurity professionals suggested that the original sin of the early cloud era was that businesses plowed ahead with cloud transitions without thinking through the security implications. A 2019 McAfee report underscored the consequences, finding that more than 99% of cloud misconfigurations go unnoticed, compounding those risks.

Parallels Between Cloud and AI Security

Now, those same security professionals worry that history may repeat itself with the industry’s newest revelation: generative artificial intelligence. “We have to make sure that what happened with cloud doesn’t happen with AI,” said Akiba Saeedi, vice president of product management at IBM Security. Much as cloud computing was in its early days, AI is being rapidly integrated into business operations before its security implications are fully understood.

AI “is not just a trend we’re following,” said John Yeoh, global vice president of research for the Cloud Security Alliance. “Our customers are using it. Our staff is using it. And your CEO is presenting it to you now, telling you, ‘We have to do it.’” This urgency can lead to oversight in security practices, mirroring the mistakes made during the early adoption of cloud technologies.

When it comes to securing cloud environments, Yeoh said, most professionals today ask questions about network access, data control and management, and configuration, among other things. AI adds new wrinkles to the same questions but doesn’t really change what security professionals need to monitor, he argued: “A lot of the same questions we were asking about cloud 10 or so years ago, we’re going to be asking that for AI.” For instance, a 2023 study by IDC noted that over 70% of organizations had concerns about data privacy and security with AI integration.

The Importance of Authentication and Data Control

Authentication is critical in a cloud environment because users seek network access from anywhere. The addition of AI will mean the generation of many more artificial identities, Yeoh explained. “The machine identities are growing,” he said. “We know access control is important in a cloud environment. In a machine environment, it becomes even more important. For every human you have in your organization, you have 10 or 20 times the machine identities.” A report from CyberArk in 2022 found that machine identities could outnumber human identities by up to 45 times by 2025, emphasizing the need for robust access management strategies.
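Yeoh’s access-control point lends itself to a concrete pattern: issue machine identities narrowly scoped, short-lived credentials rather than long-lived shared keys. The Python sketch below illustrates that pattern under stated assumptions; the identity names, scopes, and 15-minute TTL are hypothetical, and a real deployment would rely on a secrets manager or workload-identity platform rather than an in-memory registry.

```python
"""Minimal sketch of least-privilege access control for machine identities.

Illustrative only: the identity names, scopes, and TTLs are hypothetical,
not drawn from any vendor's API or from the conference talks.
"""
import secrets
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class MachineCredential:
    identity: str             # e.g., "model-trainer-3" (hypothetical name)
    scopes: frozenset[str]    # narrowly scoped permissions
    expires_at: datetime      # short-lived by default
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        """Grant access only if the credential is unexpired and explicitly scoped."""
        return datetime.now(timezone.utc) < self.expires_at and scope in self.scopes


def issue_credential(identity: str, scopes: set[str], ttl_minutes: int = 15) -> MachineCredential:
    """Issue a short-lived, least-privilege credential for one machine identity."""
    return MachineCredential(
        identity=identity,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )


if __name__ == "__main__":
    cred = issue_credential("model-trainer-3", {"read:training-data"})
    print(cred.allows("read:training-data"))   # True, within the 15-minute window
    print(cred.allows("write:model-weights"))  # False: scope was never granted
```

Pairing a short TTL with explicit scopes means a leaked token is both time-boxed and permission-boxed, which matters most when, as Yeoh notes, machine identities far outnumber the humans reviewing them.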

Data control is another critical issue. “We’re going to take a large language model, and we’re going to customize it, train it ourselves and tailor it to specific data in our own environment,” Yeoh said. “And so, data control becomes a crucial aspect of that — what goes in and what goes out.” The complexity of managing AI-generated data necessitates stringent data governance policies. According to a report by Forrester in 2023, 68% of data breaches involved misconfigured AI models or improperly secured data used for AI training.
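To make “what goes in and what goes out” concrete, one common pattern is a data-control gate that scrubs sensitive values from records before they reach a fine-tuning job and applies the same scrub to model output. The Python sketch below is a deliberately crude illustration of that pattern; the regex patterns and record format are hypothetical simplifications, and a production pipeline would use purpose-built PII-detection and data loss prevention tooling.

```python
"""Minimal sketch of a data-control gate for LLM customization.

Illustrative only: three regular expressions stand in for real
PII-detection tooling, and US-style formats are assumed.
"""
import re

# Crude patterns for a few common PII types (assumption: US-style formats).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def redact(text: str) -> str:
    """Control what goes *in*: strip PII from records before fine-tuning."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


def gate_output(completion: str) -> str:
    """Control what goes *out*: apply the same scrub to model responses."""
    return redact(completion)


if __name__ == "__main__":
    record = "Contact Jane at jane.doe@example.com or 555-867-5309."
    print(redact(record))
    # -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Running the same scrub on both ingress and egress keeps the two policies from drifting apart as the model and its training data evolve.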

The Current State of AI Security

Most AI Projects Are Not Being Secured

In a new report, “Securing Generative AI,” IBM and Amazon Web Services found that only 24% of current generative AI projects are being secured, even though 82% of organizations say that “secure and trustworthy AI is essential to the success of the business.” This discrepancy highlights a significant gap between the perceived importance of AI security and the actions taken to ensure it.

“While a majority of executives are concerned about unpredictable risks impacting generative AI initiatives, they are not prioritizing security,” the report notes. The survey of more than 2,300 executives identifies a likely reason for the disconnect: Nearly 70% of executives say innovation takes precedence over security. This finding is consistent with a 2023 survey by PwC, which found that 65% of business leaders prioritized rapid AI deployment over comprehensive security measures.

Bridging the Security Gap

“It’s very similar to what we saw in the past, with cloud, where the drive for innovation is out in front of where the current security posture is,” Saeedi said. “From a maturity standpoint, there’s a lot of new projects and a lot of people just trying to figure out what’s going on right now.” Part of the challenge with AI security is that it involves two distinct disciplines: data scientists, who do the lion’s share of the work in building the deep learning models that are foundational to AI, but who don’t know much about security; and cybersecurity experts, who are only now learning about AI, Saeedi said.

That’s where leadership needs to step in and bring the sides together. It’s critical that business leaders understand that for all of their AI projects, security must be built from the ground up; otherwise, they will repeat the mistakes of the past, when they zoomed ahead into the cloud with poor security controls. The importance of interdisciplinary collaboration in AI security is underscored by a 2022 study by Stanford University, which found that teams with integrated security and AI expertise were 35% more effective in mitigating AI-related risks.

Conclusion: The Path Forward for Secure AI

“At the highest level, AI that’s not trustworthy is not sustainable,” Saeedi said. “And if you have AI that is not secure, and you have other data that’s being manipulated outside the boundaries of the business’s intent, then it’s not trustworthy.” Ensuring the security of AI systems requires a proactive approach, starting from the initial stages of development. Organizations must implement comprehensive security frameworks that address both technological and human factors.

Furthermore, continuous monitoring and adaptation of security measures are crucial as AI technologies evolve. A proactive stance, coupled with a commitment to ongoing education and awareness, can help businesses avoid the pitfalls experienced during the early days of cloud adoption. By learning from past mistakes and prioritizing security alongside innovation, businesses can harness the full potential of AI while safeguarding their critical assets.
