Bosses, boards ‘will be held to account’ on robo-hallucinations

Businesses enjoying the benefits of artificial intelligence should know they will be accountable for errors and instances of “AI gone wrong”, a legal expert says.

Gilchrist Connell principal Nitesh Patel told the Curium Risk and Compliance Summit last Thursday that leaders and board members need to understand the risks of using generative AI.

“If you’re going to derive the benefit of the product, you’ll be the one that will be held to account,” he said.  

Mr Patel pointed to recent instances of lawyers filing court documents with non-existent citations derived from AI “hallucinations”. These led to a personal indemnity costs order against one law firm, and a principal losing their ability to practise for two years.

The Supreme Court has since banned the use of GenAI in the development of court documents.  

In another case, Air Canada’s chatbot incorrectly assured a passenger he could book a full-fare flight to his grandmother’s funeral and apply for a bereavement discount later. A tribunal ruled the airline must pay damages and fees.

“It was held that it had to make good on what the promise was ... It’s a relatively minor sum, but it’s a demonstration of where we’re going with all of this,” Mr Patel said.

From December 10 next year, the Privacy Act will include transparency requirements for automated decision-making. Businesses will need to update their privacy policies to detail the types of personal information used and the nature of the decisions made.

Mr Patel says any system that will affect “in a substantive way the rights or interests of the individual” will need to be disclosed in privacy policies.

“This is not AI. This is automated decision-making, and so it actually captures a lot more. If you’ve got an online questionnaire ... that spits out an answer, that’s automated decision-making. Even though there’s no AI there, that would be something that could be captured if it’s going to have a material impact.”

Mr Patel says “the starting point will be the very top” for accountability around AI use.

“Questions will be asked, first and foremost, of the business and then the board itself. You really should be making sure you’re in control, you understand the inner workings of the system.

“It will come down to your decisions if you’re deriving the benefit from all of this and you’re representing this to be your output.”