At the DataGrail Summit 2024 this week, industry leaders delivered a stark warning about the rapidly advancing risks associated with artificial intelligence.
Dave Zhou, CISO of Instacart, and Jason Clinton, CISO of Anthropic, highlighted the urgent need for robust security measures to keep pace with the exponential growth of AI capabilities during a panel titled “Creating the Discipline to Stress Test AI – Now – for a More Secure Future.” The panel, moderated by VentureBeat’s editorial director Michael Nunez, revealed both the thrilling potential and the existential threats posed by the latest generation of AI models.
AIโs exponential growth outpaces security frameworks
Jason Clinton, whose company Anthropic operates at the forefront of AI development, didn’t hold back. “Every single year for the last 70 years, since the perceptron came out in 1957, we have had a 4x year-over-year increase in the total amount of compute that has gone into training AI models,” he explained, emphasizing the relentless acceleration of AI’s power. “If we want to skate to where the puck is going to be in a few years, we have to anticipate what a neural network with four times more compute will look like a year from now, and with 16x more compute two years from now.”
Clinton warned that this rapid growth is pushing AI capabilities into uncharted territory, where today’s safeguards may quickly become obsolete. “If you plan for the models and the chatbots that exist today, and you’re not planning for agents and sub-agent architectures and prompt caching environments, and all of the things emerging on the leading edge, you’re going to be so far behind,” he cautioned. “We’re on an exponential curve, and an exponential curve is a very, very difficult thing to plan for.”
AI hallucinations and the risk to consumer trust
For Dave Zhou at Instacart, the challenges are immediate and pressing. He oversees the security of vast amounts of sensitive customer data and confronts the unpredictable nature of large language models (LLMs) daily. “When we think about LLMs with memory being Turing complete and from a security perspective, knowing that even if you align these models to only answer things in a certain way, if you spend enough time prompting them, curing them, nudging them, there may be ways you can kind of break some of that,” Zhou pointed out.
Zhou shared a striking example of how AI-generated content could lead to real-world consequences. “Some of the initial stock images of various ingredients looked like a hot dog, but it wasn’t quite a hot dog; it looked like, kind of like an alien hot dog,” he said. Such errors, he argued, could erode consumer trust or, in more extreme cases, pose actual harm. “If the recipe potentially was a hallucinated recipe, you don’t want to have someone make something that may actually harm them.”
Throughout the summit, speakers emphasized that the rapid deployment of AI technologiesโdriven by the allure of innovationโhas outpaced the development of critical security frameworks. Both Clinton and Zhou called for companies to invest as heavily in AI safety systems as they do in the AI technologies themselves.
Zhou urged companies to balance their investments. “Please try to invest as much as you are in AI into either those AI safety systems and those risk frameworks and the privacy requirements,” he advised, highlighting the “huge push” across industries to capitalize on AI’s productivity benefits. Without a corresponding focus on minimizing risks, he warned, companies could be inviting disaster.
Preparing for the unknown: AIโs future poses new challenges
Clinton, whose company works at the leading edge of AI research, offered a glimpse into the future, one that demands vigilance. He described a recent experiment with a neural network at Anthropic that revealed the complexities of AI behavior.
“We discovered that it’s possible to identify in a neural network exactly the neuron associated with a concept,” he said. Clinton described how, when Anthropic amplified the neuron associated with the Golden Gate Bridge, the model couldn’t stop talking about the bridge, even in contexts where it was wildly inappropriate. “If you asked the network… ‘tell me if you know, you can stop talking about the Golden Gate Bridge,’ it actually recognized that it could not stop talking about the Golden Gate Bridge,” he revealed, noting the unnerving implications of such behavior.
Clinton suggested that this research points to a fundamental uncertainty about how these models operate internally, a black box that could harbor unknown dangers. “As we go forward… everything that’s happening right now is going to be so much more powerful in a year or two years from now,” he said. “We have neural networks that are already sort of recognizing when their neural structure is out of alignment with what they consider to be appropriate.”
As AI systems become more deeply integrated into critical business processes, the potential for catastrophic failure grows. Clinton painted a future where AI agents, not just chatbots, could take on complex tasks autonomously, raising the specter of AI-driven decisions with far-reaching consequences. “If you plan for the models and the chatbots that exist today… you’re going to be so far behind,” he reiterated, urging companies to prepare for the future of AI governance.
Taken together, the DataGrail Summit panels delivered a clear message: the AI revolution is not slowing down, and neither can the security measures designed to control it. “Intelligence is the most valuable asset in an organization,” Clinton stated, capturing the sentiment likely to drive the next decade of AI innovation. But as both he and Zhou made clear, intelligence without safety is a recipe for disaster.
As companies race to harness the power of AI, they must also confront the sobering reality that this power comes with unprecedented risks. CEOs and board members must heed these warnings and ensure that their organizations are not just riding the wave of AI innovation but are also prepared to navigate the treacherous waters ahead.
