Rachel Tobac, chief executive of the U.S. cybersecurity firm Social Proof Security, sounded a dire warning last week about security risks on the new Meta AI platform. In a post on X, she described a significant problem that could compromise user privacy: “If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem.”
The problem stems from the fact that posts on Meta AI can be traced back to specific users through their usernames and profile images, dramatically compounding the risks users face. Many of these posts link directly back to users’ Instagram accounts. In one case, a user asked Meta AI to produce an image of an animated character lying on the grass wearing nothing but underwear; investigators traced the request back to the user’s Instagram account by following their unique @username handle and avatar profile picture.
Tobac emphasized the implications of this issue. “Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked,” she explained. Because the images the platform generates are tailored to specific user prompts, they can unintentionally reveal personally identifiable information.
Unique among large social media platforms, Meta AI prominently features a public “Discover” feed that highlights posts from users. One chat thread titled “Generative AI tackles math problems with ease” reflects the capabilities of the platform in generating content based on user input. This functionality comes with risks. A message pop-up from Meta AI warns users: “Prompts you post are public and visible to everyone… Avoid sharing personal or sensitive information.”
Meta stresses that users control what they share: nothing appears on the public feed without their permission. But many users do not realize their posts are public at all, and that lack of awareness, combined with how easily posts can be traced back to identities, creates serious risks to user safety.