
Meta AI app ‘a privacy disaster’ as chats unknowingly made public

The Meta AI app has been described as “a privacy disaster,” as users unknowingly make their embarrassing questions public.

One tech writer likened it to discovering your web browser history had been public all along without your knowledge …

TechCrunch reports that the app ships with an absolutely ridiculous privacy default for many users.

Whether you admit to committing a crime or having a weird rash, this is a privacy nightmare. Meta does not indicate to users what their privacy settings are as they post, or where they are even posting to. So, if you log into Meta AI with Instagram, and your Instagram account is public, then so too are your searches about how to meet “big booty women.”

Other users are inadvertently making their chats public because they think they are sharing them with specific people, rather than with the world.

When you ask the AI a question, you have the option of hitting a share button, which then directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

Business Insider notes that the chats link to the user’s social media accounts.

People’s real Instagram or Facebook handles are attached to their Meta AI posts. I was able to look up some of these people’s real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I’d come across in the feed to ask them about their experience.

9to5Mac’s Take

If someone hits a share button for their chat, one might reasonably put that down to user error. But if the public Instagram account claim is accurate, that's an absolutely inexcusable privacy breach by Meta. Most Instagram accounts are public, and the company actively encourages people to link all their Meta logins.

The broader issue here is that the history of AI chatbots is littered with poor privacy practices, such as scraping the web for training data, and user questions likewise being used to refine training. My view is that you shouldn’t say anything in an AI chat you wouldn’t want to risk becoming public. In particular, never include sensitive data like names, contact details, and so on.






Author

Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts, and a rom-com.
