
ChatGPT and Claude privacy: Why AI makes surveillance everyone’s issue


For decades, digital privacy advocates have been warning the public to be more careful about what we share online. And for the most part, the public has cheerfully ignored them.

I am certainly guilty of this myself. I usually click “accept all” on every cookie request every website puts in front of my face, because I don’t want to deal with figuring out which permissions are actually needed. I’ve had a Gmail account for 20 years, so I’m well aware that on some level that means Google knows every imaginable detail of my life.

I’ve never lost too much sleep over the idea that Facebook would target me with ads based on my internet presence. I figure that if I have to look at ads, they might as well be for products I might actually want to buy.

But even for people like me who are indifferent to digital privacy, AI is going to change the game in a way I find pretty terrifying.

This is a picture of my son on the beach. Which beach? OpenAI’s o3 pinpoints it just from this one picture: Marina State Beach in Monterey Bay, where my family went for vacation.

A child is a small figure on a cloudy beach, flying a kite.

Courtesy of Kelsey Piper

To my merely-human eye, this image doesn’t look like it contains enough information to guess where my family is staying for vacation. It’s a beach! With sand! And waves! How could you possibly narrow it down further than that?

But surfing hobbyists tell me there’s far more information in this image than I thought. The pattern of the waves, the sky, the slope, and the sand are all information, and in this case sufficient information to venture a correct guess about where my family went for vacation. (Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent. One of Anthropic’s early investors is James McClave, whose BEMC Foundation helps fund Future Perfect.)

ChatGPT doesn’t always get it on the first try, but its guesses are more than accurate enough for someone determined to stalk us. And as AI is only going to get more powerful, that should worry all of us.

When AI comes for digital privacy

For most of us who aren’t excruciatingly careful about our digital footprint, it has always been possible for people to learn a terrifying amount of information about us — where we live, where we shop, our daily routine, who we talk to — from our activities online. But it would take an extraordinary amount of work.

For the most part we enjoy what is known as security through obscurity; it’s hardly worth having a large team of people study my movements intently just to learn where I went for vacation. Even the most autocratic surveillance states, like Stasi-era East Germany, were limited by manpower in what they could track.

But AI makes tasks that would previously have required serious effort by a large team into trivial ones. And it means that it takes far fewer hints to nail someone’s location and life down.

It was already the case that Google knows basically everything about me — but I (perhaps complacently) didn’t really mind, because the most Google can do with that information is serve me ads, and because they have a 20-year track record of being relatively cautious with user data. Now that degree of information about me might be becoming available to anyone, including those with far more malign intentions.

And while Google has incentives not to have a major privacy-related incident — users would be angry with them, regulators would investigate them, and they have a lot of business to lose — the AI companies proliferating today like OpenAI or DeepSeek are much less kept in line by public opinion. (If they were more concerned about public opinion, they’d need to have a significantly different business model, since the public kind of hates AI.)

Be careful what you tell ChatGPT

So AI has huge implications for privacy. Those implications were only hammered home when Anthropic recently reported that, under the right circumstances (with the right prompt, placed in a scenario where the AI is asked to participate in pharmaceutical data fraud), Claude Opus 4 will try to email the FDA to blow the whistle. This can’t happen with the AI you use in a chat window; it requires the model to be set up with independent email-sending tools, among other things. Nonetheless, users reacted with horror. There’s just something fundamentally alarming about an AI that contacts the authorities, even if it does so in the same circumstances that a human might.

Some people took this as a reason to avoid Claude. But it almost immediately became clear that it isn’t just Claude — users quickly produced the same behavior with other models like OpenAI’s o3 and Grok. We live in a world where not only do AIs know everything about us, but under some circumstances, they might even call the cops on us.

Right now, they only seem likely to do it in sufficiently extreme circumstances. But scenarios like “the AI threatens to report you to the government unless you follow its instructions” no longer seem like sci-fi so much as like an inevitable headline later this year or the next.

What should we do about that? The old advice from digital privacy advocates — be thoughtful about what you post, don’t grant things permissions they don’t need — is still good, but seems radically insufficient. No one is going to solve this on the level of individual action.

New York is considering a law that would, among other transparency and testing requirements, regulate AIs that act independently when they take actions that, if done by a human “recklessly” or “negligently,” would be a crime. Whether or not you like New York’s exact approach, it seems clear to me that our existing laws are inadequate for this strange new world. Until we have a better plan, be careful with your vacation pictures — and what you tell your chatbot!

A version of this story originally appeared in the Future Perfect newsletter.
