AI 'brings new risks to brand, profitability': KPMG

Fundamental ethics questions are being raised by widespread adoption of artificial intelligence (AI) and machine learning (ML), and this requires careful governance and oversight, the latest Cyber trust insights report from KPMG says.

Businesses are “determined to embrace” AI and ML to boost efficiency and productivity and generate predictive insights into customers and markets, but KPMG says this growing use of new technologies is creating a “new and ill-understood” set of trust issues.

“The danger is that these technologies, if badly handled, raise cybersecurity and privacy risks with potential for reputational damage and regulatory sanction,” the report said.

The report reveals Microsoft is taking action on “adversarial” AI threats such as data poisoning, machine drift and AI targeting, which it expects “will be the next wave of attack”.

KPMG’s 2022 survey of 1,881 global executives across six industries – including financial services – found more than three-quarters agreed that adoption of AI/ML raises unique cybersecurity challenges requiring special attention and additional safeguards.

The report says 75% agreed there were privacy concerns over the way data from customers and business partners is aggregated and analysed.

KPMG Partner Sander Klous says organisations “know they must become data-driven or risk irrelevance,” and many are scaling AI to automate data-driven decision-making. This “brings new risks to brand and profitability”.

“The technology has the potential to drive inequality and violate privacy, as well as limiting the capacity for autonomous and individual decision-making,” Mr Klous said.

"You can’t simply blame the AI system itself for unwanted outcomes. Trustworthy, ethical AI is not a luxury, but a business necessity.”

What is considered ethical and trustworthy in one sector or region “may not hold in another,” he warns. “There is no one-size-fits-all solution and copying existing frameworks is ineffective.”

Mr Klous says trustworthy AI can only be achieved with a “technology-agnostic and broadly endorsed approach to awareness, AI governance and risk management”.

AI impact assessments should involve the right stakeholders to identify risks, he says, and AI needs to be aligned with an organisation’s values.

Management should carefully assess compliance with laws and regulations, with “traceable and auditable” decisions.

KPMG says its survey indicates organisations are starting to recognise these new risks, and going forward they will need to communicate more openly about how they are managing the issues.

This "underlines the important role cybersecurity and privacy teams play in helping shape the ethical debate and managing risks,” it said.