Meta Patches Security Flaw Exposing Users’ AI Prompts and Responses

Meta has fixed a security vulnerability in its Meta AI chatbot that could have exposed users' private prompts and AI-generated responses to other users, allowing unauthorized access to sensitive data.
How Was the Bug Discovered?
The flaw was reported to Meta by Sandeep Hodkasia, founder of the security testing firm Appsecure. While analyzing Meta AI's prompt-editing feature, Hodkasia found that whenever a user edited a prompt, the system assigned the prompt and its AI-generated response a unique, sequential number. By modifying that number in the browser's network traffic, he could retrieve another user's prompt and response without any special permissions, a textbook insecure direct object reference (IDOR).
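Meta has not published implementation details, but the behavior Hodkasia describes matches the classic IDOR pattern. The minimal Python (Flask) sketch below illustrates the bug class only; the endpoint path, data model, and IDs are hypothetical stand-ins, not Meta's actual code.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# In-memory stand-in for a datastore keyed by sequential numbers.
prompts = {
    1001: {"owner": "alice", "prompt": "Draft my resignation letter"},
    1002: {"owner": "bob", "prompt": "Summarize my medical notes"},
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    record = prompts.get(prompt_id)
    if record is None:
        abort(404)
    # VULNERABLE: the record is returned to whoever requests it.
    # Nothing ties the request to the record's owner, so editing the
    # sequential ID in the URL leaks other users' prompts and responses.
    return jsonify(record)
```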
What Made the Vulnerability So Serious?
- The unique numbers were "easily guessable," so an attacker could have automated enumeration of the IDs and scraped private prompts and responses at scale.
- Meta's servers did not verify that the requesting user was authorized to access the requested prompt and response, enabling cross-user data leaks; see the hardened sketch after this list.
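The standard defenses address both weaknesses at once: the server checks that the authenticated caller owns the requested record, and identifiers are random rather than sequential. A hardened version of the hypothetical endpoint above might look like the following sketch, where `current_user()` stands in for a real session or token layer:

```python
import uuid
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

prompts = {}  # keyed by random UUID strings instead of sequential numbers

def current_user() -> str:
    # Placeholder: a real service would derive the caller's identity
    # from a verified session or auth token, never from a raw header.
    return request.headers.get("X-User", "")

def create_prompt(owner: str, text: str) -> str:
    prompt_id = str(uuid.uuid4())  # random, non-enumerable identifier
    prompts[prompt_id] = {"owner": owner, "prompt": text}
    return prompt_id

@app.route("/api/prompts/<prompt_id>")
def get_prompt(prompt_id):
    record = prompts.get(prompt_id)
    # Return the same 404 for "missing" and "not yours" so an attacker
    # cannot use response differences to confirm which IDs exist.
    if record is None or record["owner"] != current_user():
        abort(404)
    return jsonify({"prompt": record["prompt"]})
```

Returning an identical 404 for nonexistent and unowned records is a deliberate choice: it denies an attacker even the signal of which IDs are valid.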
How Did Meta Respond?
After receiving Hodkasia’s private disclosure on December 26, 2024, Meta acted quickly. The company deployed a fix by January 24, 2025, ensuring the bug could no longer be exploited. Meta confirmed that it found "no evidence of abuse" related to this flaw and rewarded Hodkasia with a $10,000 bug bounty.
Context: AI, Privacy, and Growing Security Concerns
This incident comes as major tech companies race to ship advanced AI products, often under pressure to innovate quickly, and as this case shows, privacy and security risks can be overlooked along the way. Meta AI's standalone app, launched earlier this year to compete with ChatGPT, had already drawn criticism after some users unintentionally shared what they believed were private conversations publicly.
What Does This Mean for Businesses and Users?
- For businesses: The incident is a stark reminder to prioritize security testing when integrating third-party AI solutions or deploying proprietary AI tools.
- For users: It highlights the importance of understanding how AI platforms handle your data, especially when using features that seem private.
As AI adoption accelerates, both providers and users must remain vigilant about emerging security and privacy challenges.