Meta Platforms announced that it will expand its protection measures for minors' accounts to the 27 European Union member states, and to the Facebook platform in the United States, in a move aimed at addressing growing criticism over the protection of children and adolescents online.

Technology companies face mounting pressure from governments around the world to verify users' ages more rigorously, amid concerns about online exploitation, the mental health of minors, and the spread of AI-generated sexual images of children.

Last year, Meta launched technology that proactively searches for accounts suspected of belonging to minors, even when a false date of birth has been entered, and places those accounts under special protection measures.

The company said in a statement that this technology will be expanded to cover the 27 European Union countries, and that it will also be applied to Facebook in the United States for the first time, with that rollout extending to Britain and the European Union next June.

Meta explained that it relies on advanced artificial intelligence tools that go beyond checking the age a user enters, analyzing user profiles and online behavior for signals that an account may belong to a minor.

The measures also include strengthening technologies that detect attempts to evade age verification, preventing users suspected of being minors from creating new accounts with false information.