Brought to you by:
AMA Group

New Responsible AI Index backed by IAG


IAG has sponsored a new Australian Responsible AI Index, launched by the Ethical AI Advisory and Gradient Institute, which finds fewer than one in 10 Australia-based organisations have a mature approach to deploying responsible and ethical artificial intelligence (AI).

Responsible AI is developed with a focus on the safe, transparent and accountable use of AI technology. The index signals an urgent need for Australian organisations to increase investment in responsible AI strategies, says IAG Group Executive, Direct Insurance Australia, Julie Batch.

Fair and ethical AI is a societal challenge, she says, and the new index helps organisations gauge where they sit relative to their peers and what they need to do to ensure they are applying AI thoughtfully.

“To ensure the right outcome for our customers, we embed considered thinking about fairness and equality before implementing an AI solution,” Ms Batch said.

IAG uses AI to predict whether a motor vehicle is a total loss after an accident, reducing claims processing times to just a few days and giving customers clarity and certainty sooner. It has an AI ethics framework and applies the government's voluntary AI ethics principles relating to social and environmental wellbeing, reliability and safety, and fairness to identify potential issues or risks before launch.

IAG says it is also looking at how AI can be used to help detect motor claim fraud using advanced analytical techniques.

The index studied 416 organisations operating in Australia and found that only 8% are in the 'Maturing' stage of responsible AI, while 38% are 'Developing', 34% are 'Initiating' and 20% are 'Planning'. The mean score was 62 out of 100, placing the overall result in the 'Initiating' category.

Gradient Institute CEO Bill Simpson-Young says the index found just over half of the organisations have an AI strategy in place. This highlights the opportunity for business leaders to act on critical AI initiatives such as reviewing algorithms and underlying databases, monitoring outcomes for customers, sourcing legal advice on potential areas of liability and reviewing global best practice.

He recommends putting training in place to upskill data scientists, engineers and management.

A new Responsible AI Self-Assessment Tool will help companies develop the right guardrails at a time of rapid growth in consumer adoption of digital technology using AI.