Meta announced Tuesday that it is deploying artificial intelligence to identify and remove users under the age of 13 from Facebook and Instagram, as the tech giant faces mounting legal pressure in state courts and growing scrutiny on Capitol Hill over child safety. The move is part of a broader push to enforce age restrictions more aggressively, using technology that scans user profiles for contextual clues such as birthday posts or school-related discussions.
In a blog post, Meta said the AI examines entire profiles, including bios, comments, captions, and images, for indicators that a user might be underage. If the system flags an account as potentially belonging to someone under 13, the profile is deactivated, and the user must complete the platform’s age verification process to avoid deletion. The company emphasized that the visual analysis does not rely on facial recognition.
“Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age,” Meta wrote. “It does not identify the specific person in the image.” The new detection features are currently available in select countries, with a broader rollout planned. Users will also get a simpler process for reporting suspected underage accounts, and human review teams will be assisted by AI models trained on standardized evaluation criteria. “In our testing, this AI-driven review delivers higher accuracy and faster resolutions than human review alone,” the company stated.
Meta’s Teen Account program, launched in 2024 for users under 18, sets accounts to private by default and requires manual approval for new followers, messages, tags, and mentions. The company said it is expanding automatic detection of suspected underage Instagram accounts to 27 additional countries in the European Union and to Brazil. This month, parents will also begin receiving notifications on Facebook and Instagram explaining how to check and confirm their children’s ages on the platforms.
The push for AI enforcement comes as Meta advocates for federal legislation that would require app stores to verify users’ ages and share that data with app developers. App store operators such as Apple and Google, however, argue that the responsibility should be shared, and Congress has yet to reach a consensus. The company’s efforts also unfold against a backdrop of significant legal setbacks. Last month, a jury in New Mexico found Meta liable for compromising children’s safety online, ordering the company to pay $375 million in damages for violating the state’s Unfair Practices Act. A bench trial began Monday to consider additional protections, requested by the New Mexico attorney general, for users under 18.
In a separate case, a jury found Meta and Google’s YouTube liable for negligence in the design and operation of their platforms, further intensifying scrutiny. As Meta fights these battles, its use of AI to enforce age limits represents a strategic shift toward automated compliance, though critics question whether the technology can keep pace with the scale of underage usage. Meta maintains that AI-driven review delivers faster and more reliable results than human review alone, but the broader debate over who should bear the burden of age verification, the platforms themselves or the app stores that distribute them, remains unresolved in Congress.
