Creating and maintaining ethical artificial intelligence (AI) products is the cause du jour, but while the intentions are good, operationalising them is far harder. Companies must weigh context and compromise as they expand their use of AI.
This article is part of PwC’s Responsible AI initiative.
So you’ve invested in AI. The first questions your board asks will likely concern what it can do: how it will improve business processes, save money or deliver better experiences for customers. To be responsible, however, there are two questions that should be asked first:
Will it perform as intended at all times?
Is it safe?
If AI is to gain people’s trust, organisations should be able to account for the decisions their AI makes and explain them to the people affected. Making AI interpretable fosters trust, enables control and makes the results produced by machine learning more actionable. So how can AI-based decisions be explained?
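As a concrete illustration (not drawn from PwC’s toolkit), one widely used explanation technique is permutation feature importance: shuffle each input in turn and measure how much the model’s accuracy degrades. The sketch below assumes Python with scikit-learn; the dataset and model are illustrative placeholders, not a recommendation.

```python
# A minimal sketch of one common interpretability technique: permutation
# feature importance with scikit-learn. Dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features the model depends on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Techniques like this answer "which inputs drove the model's behaviour?" at the population level; explaining an individual decision to the person affected typically requires complementary, instance-level methods.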