Australia Moves to Regulate AI Content for Youth Protection

This article was generated by AI and cites original sources.

Australia is taking a strong stance on regulating artificial intelligence (AI) content to protect its youth. In response to concerns about AI's impact on young people's mental health, the government has announced measures to restrict access to harmful content for individuals under 18. Search engines and app stores will be required to block AI services that do not verify user ages, with fines for non-compliance reaching up to A$49.5 million (US$35 million).

The move follows Australia's earlier decision to ban social media for users under 16 and reflects a growing trend of holding AI companies accountable for the content distributed through their platforms. Concerns that AI platforms can facilitate self-harm or violence have led to increased scrutiny of, and legal action against, tech companies.

From March 9, internet services in Australia must restrict AI-generated content so that young users cannot access harmful material, including pornography, extreme violence, and content promoting self-harm or eating disorders. The eSafety Commissioner emphasized that non-compliance will not be tolerated, with enforcement planned against key gatekeepers, such as search engines and app stores, that provide access to these services.

While Australia has not yet recorded AI-related incidents of violence or self-harm, the regulator has received reports of children as young as 10 engaging extensively with AI-powered chatbots. The regulatory move underscores the need for responsible AI development and use to safeguard vulnerable populations, particularly young people.

Source: Tech-Economic Times