iToverDose / Technology · 5 MAY 2026 · 13:00

Meta leverages AI to enforce child safety on Facebook and Instagram

Meta introduces a non-facial AI system to detect underage users by analyzing skeletal features in photos and videos, aiming to strengthen child protection across its platforms.

The Verge · 2 min read

Meta has rolled out a new artificial intelligence tool designed to identify and remove accounts belonging to children under 13 on Facebook and Instagram. Unlike traditional facial recognition, the system focuses on analyzing bone structure and height cues in visual content to flag potential underage users without recognizing specific individuals.

How Meta’s AI detects underage accounts

The technology scans uploaded photos and videos for "general themes and visual cues," including skeletal proportions and body size, to estimate age ranges. Meta emphasizes that this approach is not facial recognition and cannot identify individuals in images. The system also examines posts, comments, user bios, and captions for "contextual clues" that may indicate a user’s actual age.
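In broad strokes, the described approach amounts to fusing independent age signals — a visual estimate from body-proportion analysis, age mentions in text, and the account's stated birthday — and flagging the account if any of them falls below the threshold. The sketch below illustrates that fusion logic only; the names and signal fields are hypothetical, not Meta's actual system or API.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    # Hypothetical per-account signals; field names are illustrative.
    visual_age_estimate: float   # estimated age from body-proportion model
    text_age_mentions: list[int] # ages found in bios/captions, e.g. "I'm 12"
    birthday_claimed_age: int    # age implied by the account's stated birthday

def flag_underage(s: Signals, threshold: int = 13) -> bool:
    """Flag an account if any independent signal suggests the user is under 13."""
    if s.visual_age_estimate < threshold:
        return True
    if any(age < threshold for age in s.text_age_mentions):
        return True
    return s.birthday_claimed_age < threshold

# Example: visual model estimates ~11, a caption mentions 12,
# but the stated birthday claims 18 — the account is still flagged.
print(flag_underage(Signals(11.0, [12], 18)))  # True
```

Treating the signals as independent checks (an OR, not an average) matches the article's point that the stated birthday alone is not trusted when other cues contradict it.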

Meta says its AI model was trained on anonymized datasets to recognize patterns associated with younger age groups, and claims this reduces the risk of false positives while maintaining detection accuracy. The company states the tool is part of a broader strategy to comply with regulations like the Children’s Online Privacy Protection Act (COPPA) and to enhance child safety online.

Broader implications for platform moderation

This initiative reflects Meta’s ongoing efforts to improve content moderation and age verification across its platforms. While the AI focuses on visual analysis, it operates alongside existing measures such as birthday prompts and user-reported accounts. Meta acknowledges that no system is foolproof but asserts that combining multiple detection methods reduces the likelihood of underage users slipping through.

Critics have raised concerns about privacy and the ethical use of AI in moderation, particularly regarding how visual data is processed and stored. Meta has not disclosed whether the AI system stores analyzed images or shares data with third parties, though the company states all processing occurs on its servers.

What this means for parents and regulators

For parents, the new AI tool may offer additional reassurance that platforms are taking steps to limit underage exposure. However, its effectiveness depends on the accuracy of age estimation and the ability to adapt to evolving user behaviors. Regulators will likely scrutinize Meta’s approach, especially as governments worldwide tighten rules around child safety online.

Meta has not specified a timeline for full deployment but indicates the system will be gradually integrated into Facebook and Instagram’s moderation workflows. The company plans to refine the model based on feedback and real-world performance to minimize errors and improve detection rates.

AI summary

What you need to know about Meta’s new artificial intelligence system for detecting users under 13 on Facebook and Instagram: how the system works and how privacy is protected.
