Another robo-debt disaster is inevitable if we’re not vigilant on AI



This act, which has taken several years to finalise, is expected to come into force by late 2023. It takes a risk-based approach to AI regulation, providing a clear blueprint for other legislators to follow. Importantly, the act strikes a balance between the significant benefits AI technologies can bring, the moral dilemmas they pose and the need to encourage ethical innovation without stifling growth. It is a measured, pragmatic and implementable approach to regulating AI effectively.

AI is not bad – there is much good that comes from the use of these diverse and evolving technologies. Efficiencies are found, optimisation is enhanced, and our everyday lives are made simpler.


But, like any sophisticated technology, there is immense scope for misuse. In particular, generative AI – currently the technology’s most high-profile iteration – clearly illustrates the malicious purposes for which AI can be harnessed: deepfakes, voice cloning and sophisticated scams, to name a few.

Ultimately, AI is only as good as the algorithm that operates it, the data that trains it and the law that underpins it. If these are ineffective, as was the case with robo-debt, then calamity can ensue.

Therefore, human checks and balances and intensive oversight must form the cornerstone of an effective and ethical AI ecosystem in Australia. To support this, there is a need for algorithmic transparency at both public and private levels, and for legislative and regulatory provisions that enshrine intensive governance.

Rachael Falk is chief executive of the Cyber Security Cooperative Research Centre and a member of the federal government’s expert advisory board for Australia’s 2023-30 cybersecurity strategy.



