Australia’s business sector set to have a say in shaping proposed AI regulation
Australia’s business sector will have a key role to play in the framing of artificial intelligence (AI) regulation in the country, with the federal government planning to overhaul the expert group tasked with guiding AI regulations.
Earlier this year, the federal government released its eagerly anticipated interim response to the Safe and Responsible AI consultation held in 2023.
The interim response contained three main takeaways:
The federal government will consult on the case for a new regulatory framework around "high-risk" AI applications, including safety guardrails for AI;
The National AI Centre will work with industry to develop a voluntary AI Safety Standard; and
Industry input will guide what merits there might be for a voluntary labelling and watermarking scheme for AI-generated materials.
There seem to be two divergent views on what sort of regulatory framework Australia should adopt when it comes to AI governance. One is a more rigorous approach—the sort that was recently adopted by the European Union—while the other approach is rooted in broad principles. Whichever way the Australian government decides to go, it will have profound implications for Australian businesses, industries, and consumers.
Central to the government's strategy is the formation of a revamped expert panel on AI, set to include stronger business representation. The move aligns with industry demands, with leading technology organisations advocating for business influence in AI governance.
Stakeholders have emphasised the importance of aligning any proposed AI regulation in Australia with international standards and building on existing legal frameworks and regulatory standards to foster responsible AI deployment.
In an address earlier in the year, ASIC Chair Joe Longo noted that while individuals and businesses that work with AI are already subject to Australian laws, current regulation around AI may not be sufficient.
“Existing laws likely do not adequately prevent AI-facilitated harms before they occur, and more work is needed to ensure there is an adequate response to harms after they occur,” notes the Australian Government’s interim response.
Whichever path Australia takes, it faces choices that will shape its competitiveness and innovation landscape in the global AI arena.