Not only is responsible AI the right thing to do, but it is also fast becoming a competitive advantage for businesses.
Digital technologies, including AI, are potentially worth $315 billion to the Australian economy by 2028 and AI could be worth $22.17 trillion to the global economy by 2030, according to the Artificial Intelligence Roadmap by CSIRO.
The National AI Centre is funded by the Australian Government and coordinated by CSIRO, Australia's national science agency, to activate a responsible and inclusive AI future for Australia.
The benefits of AI are real. Our recently published Australia's AI Ecosystem Momentum report surveyed 200 Australian businesses and found that, on average, each AI project implemented generated $361,000 in additional revenue. Businesses implementing AI also reported time savings of 30% on existing processes. It is a technology we need to tackle the scale and complexity of our world.
However, the same business leaders also expressed concerns about AI development and implementation, citing trust, privacy, security and data quality as the challenges most affecting adoption.
Recent examples in the media — such as the data gap in women’s health leading to misdiagnosis of heart attacks in women — have highlighted that when AI is not developed and implemented responsibly, there can be unintended consequences for our communities. This can result in negative headlines around privacy, data breaches, and ethics. No organisation wants to be at the centre of negative press, but there is a gap in practical guidance on how organisations can do AI responsibly.
To mitigate risks, business leaders must work toward achieving ethical, safe, and responsible outcomes, or “Responsible AI” practices.
The changing global standards
Organisations that can do artificial intelligence responsibly will stand out from the competition and are likely to win market share based on their ability to earn and retain trust from the communities they serve.
Worldwide, standards and regulatory changes are coming, including a wave of AI standards from the International Organization for Standardization (ISO), the EU AI Act and numerous AI risk assurance frameworks. While these changes are welcome and needed, they will require significant upskilling and change for organisations to adapt to the new regulatory landscape.
Supply chains will be disrupted during this period of regulatory flux. Businesses that adapt quickly will build an advantage in commerce and trade, while others may be marginalised from national and international supply chains.
We spoke to 135 companies during our Listening Tour last year, and many told us they need practical guidance to help them navigate this fast-moving landscape.
In response to this global momentum and a clear industry need for guidance, the National AI Centre has launched the Responsible AI Network, a world-first cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across Australia’s commercial sector.
We have partnered with the Australian Industry Group (Ai Group), Australian Information Industry Association, CEDA, CSIRO’s Data61, Standards Australia, The Ethics Centre, The Gradient Institute, The Human Technology Institute, and the Tech Council of Australia to create curated advice and best practice guidance that will allow all Australian companies to thrive with responsible AI.
Australia’s ecosystem is vibrant with small and medium organisations, and their needs are very different from those of large enterprises. In terms of current adoption trends, large enterprises are ahead while SMEs lag behind.
SMEs — who are less likely to have their own data science or developer teams, or substantial budgets for expert advice — are not benefiting from these opportunities at the same rate as large enterprises.
At the same time, with AI able to handle volume, scale and complexity, the AI opportunity is perhaps largest for SMEs, which could compete with larger players by harnessing this ability to scale.
For this reason, the Responsible AI Network is ensuring Australia's SMEs also have access to expert guidance and advice to uplift their practice of responsible AI, provided in a context that empowers SMEs to take action.
Our aim is to create a competitive advantage for Australia, and a safe, ethical and fair technology environment for all Australians.
No one in the world has worked out responsible AI. Our best shot is to be a country that collaborates to share best practices, one that is agile to respond to regulatory developments and community expectations, and one that aligns to global markets to expedite success in the international economy. That’s how Australia will become a world leader in AI.
Stela Solar is the director of the National Artificial Intelligence Centre, hosted by CSIRO’s Data61. In this role, she is focused on building value for Australian people, businesses and the country, through the use of Artificial Intelligence.