Meta’s new AI assistant app includes a feature called “Discover,” a public feed meant to showcase interesting user interactions with the chatbot. According to reporting from TechCrunch, that feature has raised serious privacy concerns. Researchers found that the Discover feed contains conversations that include extremely sensitive, personal, and sometimes identifiable information, apparently submitted by users without fully realizing the content would be made public.
Mozilla fellow M.C. McGrath conducted an analysis of Discover and shared several examples of what is currently being surfaced in the feed:
- Medical and mental health concerns: Users shared prompts seeking help coping with grief and serious diagnoses. One person asked for help dealing with a recent cancer diagnosis; another asked the chatbot to write a suicide note for a minor.
- Legal matters and identities: Some queries included full names alongside questions about tax evasion, requests for character reference letters for people facing legal trouble, and questions about liability in fraud cases.
- Home addresses and personal contact info: Security researchers found posts containing home addresses and other identifiable details.
- Audio recordings: The feed also features user-submitted voice clips, such as a man asking, “Hey, Meta, why do some farts stink more than other farts?”
Although Meta claims the Discover feed only displays content users have explicitly opted in to share, the interface does little to emphasize that conversations will be made publicly accessible. The sharing option is a small toggle that is easy to miss, and users are not shown a preview or warning before their post appears in the feed. There is no indication of how long content remains viewable or whether users can request its removal.
Some of the chats appear to come from minors, and none of the visible Discover posts contain anonymization or redaction. That raises questions about whether informed consent is truly being obtained and whether any review process is in place before publication. TechCrunch notes that even if users technically opt in, the lack of transparency around how Discover works could amount to a privacy failure.
Meta has not publicly responded to the specific examples flagged by researchers. The company has said only that Discover showcases approved content submitted by users, but its vetting process remains unclear.
The issues emerging from this feature point to larger concerns about how user data is handled in AI-integrated platforms. As AI chatbots become more embedded in daily tools, especially those developed by large tech companies, the line between private interaction and public content needs to be clearly and responsibly defined.