Meta Intensifies AI Efforts to Safeguard Teens on Instagram

Meta, the parent company of Instagram, is escalating its use of artificial intelligence to identify underage users on the platform. Building upon its initial implementation of AI for age detection in 2024, Instagram will now proactively use this technology to flag accounts whose stated birthday indicates an adult but which the system suspects belong to children. In some instances, this will lead to an automatic adjustment of account settings to align with stricter teen safety protocols.

Instagram’s existing AI-powered age detection system analyses various signals to identify users under the age of 18. These signals can include explicit mentions of age in messages, such as friends wishing someone a “happy 16th birthday”. Meta has also indicated that it leverages engagement data, recognising that users within similar age groups often interact with content in comparable patterns. Teen accounts on Instagram are subject to enhanced safety measures by default, including private account settings, restrictions on receiving messages from strangers, and limitations on the type of content they are exposed to. In the previous year, Instagram took a significant step by automatically enabling these safety features for all existing and new teen accounts.

Now, the company is taking a more proactive stance. In a recent blog post, Instagram announced the testing of a new feature, commencing today in the US, that will utilise AI to actively scan for accounts that claim an adult age but exhibit indicators suggesting the user is a minor. If the AI detects a discrepancy and suspects the user is indeed a child, Instagram will automatically apply the more restrictive teen account settings.

Instagram acknowledges that, like any AI system, this new feature will sometimes make mistakes. To address this, the company says users will be able to revert their settings if they believe the automatic adjustment was incorrect.

Meta’s ongoing development and deployment of these safety-focused features for younger users reflect a broader trend of increased scrutiny from parents and regulatory bodies. Last year, the European Union initiated an investigation into Meta’s efforts to protect the well-being of its young users. Furthermore, disturbing reports detailing instances of predators targeting children on Instagram led to legal action, including a lawsuit filed by a US state attorney general.

The issue of online child safety has also sparked debate among tech companies, with differing views on responsibility. Notably, Google has recently accused Meta, along with Snap and X, of attempting to shift the onus of child safety onto app stores following the passage of a relevant bill in Utah.

Meta’s expansion of its AI-powered age detection and its willingness to override stated account settings in suspected cases underscore a growing commitment to creating a safer environment for teenagers on its platform. While acknowledging the potential for errors, the company’s proactive approach signals a determined effort to address concerns and implement measures to better protect its younger user base.
