Australia’s internet regulator, eSafety, is reportedly contemplating strict measures against artificial intelligence (AI) services that are not adhering to age verification rules.

The regulator’s decision follows a review that found more than half of the AI services assessed had not publicly committed to meeting the compliance deadline set for the following week, Reuters reported on Monday.

From March 9, internet services in Australia, including AI tools such as OpenAI’s ChatGPT and other chatbots, will be required to prevent Australians under 18 from accessing explicit content, or face fines of up to A$49.5 million ($35 million).

The commissioner’s spokesperson told the publication that eSafety would use its “full range” of powers in case of non-compliance, and added that this could include action against “gatekeeper services such as search engines and app stores that provide key points of access to particular services”. This is one of the most severe global efforts to regulate AI companies, which are increasingly facing lawsuits for their inability to curb harmful content.

In December, Australia became the first country to bar children under 16 from using major social platforms.

OpenAI, Google (NASDAQ:GOOGL) (NASDAQ:GOOG) and Apple Inc. (NASDAQ:AAPL) did not immediately respond to Benzinga’s request for comment.

Global Scrutiny of AI Platforms Grows

This move by Australia’s internet regulator is part of a growing global trend of increasing scrutiny over AI services. In January, Britain enforced new rules requiring technology companies to proactively block unsolicited sexual images, a move aimed at holding platforms accountable for online abuse fueled by AI.

Just a month later, Elon Musk’s AI chatbot Grok faced a formal investigation by Ireland’s privacy regulator over concerns about how it processes personal data and generates sexualized content.

Earlier this year, New Mexico Attorney General Raul Torrez filed a lawsuit alleging that Meta Platforms (NASDAQ:META) CEO Mark Zuckerberg overruled internal safety warnings in favor of a less restrictive approach to AI chatbot companions.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Photo courtesy: Shutterstock/ wutzkohphoto