The Federal Government has outlined new “guardrails” for businesses’ use of AI. 

New measures aimed at regulating artificial intelligence (AI) have been proposed in response to growing concerns about the risks posed by unchecked AI development. 

This week, the Albanese Government introduced a paper titled Mandatory Guardrails for AI in High-Risk Settings, alongside a new Voluntary AI Safety Standard.

The government says businesses, keen to capitalise on AI's potential, have asked for clearer guidance on how to navigate the regulatory landscape. 

AI’s economic potential is significant, with the Tech Council of Australia estimating that generative AI could contribute between $45 billion and $115 billion annually to the national economy by 2030.

But the government's consultations also highlighted examples of AI discriminating against individuals based on ethnicity and gender in job recruitment, and the misappropriation of First Nations cultural materials for AI training. 

Additionally, the rapid rise of deepfake technologies has fuelled privacy concerns, with 150,000 deepfake videos uploaded to major sites in the first three quarters of 2023 alone.

The government’s proposals paper identifies key regulatory measures, focusing on AI used in high-risk contexts, such as healthcare, finance, and law enforcement. 

These include the establishment of ten mandatory “guardrails”, among them a requirement that businesses disclose when AI systems are making decisions about individuals or interacting with them. 

This disclosure rule aims to address concerns raised by recent surveys, which found that one-third of Australian businesses are using AI without informing customers or employees. 

Half of the surveyed businesses also admitted to not conducting human rights or risk assessments for their AI systems.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails,” Industry Minister Ed Husic said this week.

The government is currently considering three regulatory approaches: amending existing legislation to incorporate AI-specific rules, introducing framework legislation that adapts regulatory requirements across sectors, or establishing a standalone AI-specific law, akin to the EU’s AI Act.

In addition to the mandatory proposals, the government’s Voluntary AI Safety Standard is intended to provide businesses with immediate guidance on best practices in AI use. 

The voluntary standard, which will be updated regularly to align with international best practices, is in line with actions taken by the European Union, Japan, and the United States. 

The public has four weeks to provide feedback on the government’s proposals paper, with submissions closing on 4 October 2024. 

CareerSpot News