This act, which has taken several years to finalise, is expected to come into force by late 2023. It takes a risk-based approach to AI regulation, providing a clear blueprint for other legislators to follow. Importantly, the act strikes a balance between the significant benefits AI technologies can bring, the moral dilemmas they pose and the need to encourage ethical innovation without stifling growth. It is a measured, pragmatic and implementable approach to regulating AI effectively.
AI is not bad – there is much good that comes from the use of these diverse and evolving technologies. Efficiencies are found, optimisation is enhanced, and our everyday lives are made simpler.
But, like any sophisticated technology, there is immense scope for misuse. In particular, generative AI – currently the technology's most high-profile iteration – clearly illustrates the malicious purposes for which AI can be harnessed: deepfakes, voice cloning and sophisticated scams, to name a few.
Ultimately, AI is only as good as the algorithm that operates it, the data that trains it and the law that underpins it. If these are ineffective, as was the case with robo-debt, then calamity can ensue.
Therefore, human checks and balances and rigorous oversight must form the cornerstone of an effective and ethical AI ecosystem in Australia. To support this, algorithmic transparency is needed at both public and private levels, along with legislative and regulatory provisions that enshrine strong governance.
Rachael Falk is chief executive of the Cyber Security Cooperative Research Centre and a member of the federal government’s expert advisory board for Australia’s 2023-30 cybersecurity strategy.