February 12, 2020, updated 21 January 2021, 9:54am

Explainable AI (XAI) spotlights factors influencing algorithms

By Zoya Malik

Explainable AI (XAI) emphasises not just how algorithms provide an output, but also how they work with the user and how the output or conclusion is reached. XAI approaches shine a light on the algorithm’s inner workings to show the factors that influenced its output. The idea is for this information to be available in a human-readable way, rather than being hidden within code.
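To illustrate the idea in miniature: for a simple linear scoring model, the output can be decomposed into per-feature contributions, giving a human-readable account of which inputs pushed the result up or down. This is a hypothetical sketch for illustration only — the feature names, weights and the `explain_prediction` helper are invented here, not drawn from the ACCA report.

```python
def explain_prediction(weights, features, bias=0.0):
    """Return a linear model's score plus each feature's contribution to it."""
    # Each contribution is simply weight * value, so the explanation
    # sums exactly to the score -- nothing is hidden in the model.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical credit-scoring weights and one applicant's (scaled) inputs.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 4.0}

score, reasons = explain_prediction(weights, applicant, bias=0.5)

# Report the factors in order of influence, in human-readable terms.
for name, impact in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {impact:+.2f}")
print(f"score: {score:.2f}")
```

Real-world models are rarely this transparent, which is why XAI techniques exist to approximate this kind of factor-by-factor account for more complex "black box" systems.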

ACCA’s latest report, Explainable AI, addresses explainability from the perspective of accountancy and finance practitioners. ACCA’s Head of Business Insights, Narayanan Vaidyanathan, said: ‘It is in the public interest to improve understanding of XAI, which helps to balance the protection of the consumer with innovation in the marketplace.’

The complexity, speed and volume of AI decision-making often obscure what is going on in the background (the ‘black box’), making the model difficult to interrogate. Explainability, or the lack of it, affects the ability of professional accountants to understand the model and exercise scepticism. In a recent ACCA survey, 54% of respondents agreed with this statement, more than double the number who disagreed.

Vaidyanathan continued: ‘It’s an area that’s relevant to being able to trust technology and to be confident that it’s used ethically, and XAI can help in this scenario. It’s helpful to think of it as a design principle as much as a set of tools. Moreover, this is AI decoded, and designed to augment the human ability to understand and interrogate the results returned by the model.’

The report’s key messages for practitioners are:

  • Maintain awareness of evolving trends in AI: 51% of respondents were unaware of XAI.  This impairs the ability to engage. The report sets out some of the key developments in this emerging area to help raise awareness. 
  • Beware of oversimplified narratives: In accountancy, AI is neither fully autonomous nor a complete fantasy. The middle path of augmenting, rather than replacing, the human works best when the human understands what the AI is doing, which requires explainability.
  • Embed explainability into enterprise adoption:  Consider the level of explainability needed, and how it can help with model performance, ethical use and legal compliance.

Policy makers, for instance in government or at regulators, frequently hear the developer/supplier perspective from the AI industry. This report can complement that with a view from the user/demand side, so that policy can incorporate consumer needs. 

The report’s key messages for policy makers are:

  • Explainability empowers consumers and regulators: improved explainability reduces the deep asymmetry between experts who understand AI and the wider public. And for regulators, it can help reduce systemic risk if there is a better understanding of factors influencing algorithms that are being increasingly deployed across the marketplace.
  • Emphasise explainability as a design principle: An environment that balances innovation and regulation can be achieved by supporting industry to continue, indeed redouble, its efforts to include explainability as a core feature in product development.

Narayanan Vaidyanathan added: ‘XAI can be polarising, with some having unrealistic expectations for it to be like magic and answer all questions, while others are deeply suspicious of what the algorithm is doing in the background. XAI seeks to bridge this gap by improving understanding, to manage unrealistic expectations and to give a level of comfort and clarity to the doubters.’
