Meta is introducing a new way for parents to monitor their teens’ activity with Meta AI. Now, parents can see what topics their child has discussed with the chatbot over the past week on Facebook, Messenger, and Instagram.
Parents using Meta’s supervision tools will find a new “Insights” tab that lists the categories their teen has asked about. This feature is now available in the US, UK, Australia, Canada, and Brazil.
A new view into teen AI activity
TechCrunch says parents can view topics like “School,” “Entertainment,” “Lifestyle,” “Travel,” “Writing,” and “Health and Wellbeing.” Parents can tap on a topic to see more specific subcategories.
For example, “Lifestyle” covers fashion, food, and holidays, while “Health and Wellbeing” includes fitness, physical health, and mental health.
The Verge notes that this feature summarizes what teens have asked Meta AI about in each app over the past week, offering a categorized overview instead of a full transcript.
Notably, Meta is not letting parents read individual messages word for word. Instead, the tool highlights patterns and themes in a teen’s AI use, giving parents a general sense of the questions and interests that come up in chatbot conversations.
The update builds on Meta’s wider teen-safety push
Meta first announced these AI supervision insights in October, saying it was developing tools to help parents guide teens through AI experiences.
This rollout builds on earlier safety features, like alerts for parents if teens “repeatedly search for self-harm topics.” In short, Meta is linking its new AI-monitoring feature to a larger child-safety system that already uses topic-based supervision.
The timing stands out as well.
TechCrunch says Meta paused teen access to AI characters worldwide in January while it works on a new version for younger users. The report also points out that the move came just before a New Mexico lawsuit accusing Meta of failing to protect minors on its platforms went to trial.
Meta lost the case, marking the first time a court found the company legally responsible for putting child safety at risk.
Meta is pairing oversight with parent guidance
In addition to the new Insights tab, Meta is introducing suggested conversation starters to help parents talk “openly and without judgment” about their teens’ experiences with AI.
The company also announced a new AI Wellbeing Expert Council to help guide the development of AI products for teens. These steps show that Meta wants the feature to be seen as more than just surveillance, positioning it as part of a family-focused safety approach to generative AI.
A bigger test for AI and child safety
These new controls highlight how quickly chatbot features are becoming part of the larger conversation about teen safety online. Meta is not taking AI out of apps for teens, but it does recognize that parents want more insight into how these tools are used.
For now, the company is surfacing themes rather than full conversations, revealing patterns without exposing every detail.
As AI becomes more common in social apps, finding the right balance between parental oversight, teen privacy, and platform safety will likely become an even bigger issue.