Setting the standards for responsible AI

There is no doubt that AI has become a necessity for digital decision-making and has delivered considerable value to consumers. But it also carries significant risks. Scott Zoldi explores the need for ethical and responsible AI.

As financial institutions and regulators look to speed up bank applications and insurance handling, curb financial crime and much more, AI is increasingly an essential component of the digital decision-making process.

However, advocacy groups are questioning the increasing reliance on AI to make digital decisions, arguing that human understanding and empathy remain an essential part of the process to avoid careless, or sometimes seemingly callous, life-changing decisions.

There is a genuine need for data scientists and organisations to set and strengthen their AI development standards, and to enforce responsible, ethical AI now.

Responsible AI

AI is no different from any other business system: it needs to be built on strong foundations, monitored, tweaked and upgraded to ensure best practice. Three pillars – explainability, accountability and ethics – establish those standards and give organisations confidence that they are making sound and responsible digital decisions.

EXPLAINABILITY: AI decision systems need to allow a business to explain why a model made the decision it did – for example, why it flagged a transaction as fraudulent. Human analysts can then investigate the implications and accuracy of the decision.

A detailed explanation of the drivers of the score ensures the decision is understandable, reasonable and satisfactory – for both the organisation and the customer. It also allows an error, whether made by the customer providing the data or by the AI system itself, to be rectified and the decision reassessed, potentially producing a different outcome.
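
To make this concrete, here is a minimal sketch of how score drivers might be surfaced as reason codes. It assumes a simple scorecard-style logistic regression; the feature names, data and contribution method are illustrative assumptions, not any specific vendor's approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative training data: each row is a transaction,
# columns are hypothetical risk features.
feature_names = ["amount_vs_avg", "new_merchant", "foreign_ip", "velocity_1h"]
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
y_train = (X_train @ np.array([1.2, 0.8, 0.5, 1.0])
           + rng.normal(size=500) > 1).astype(int)

model = LogisticRegression().fit(X_train, y_train)

def reason_codes(x, top_n=3):
    """Rank features by their contribution to this transaction's score.

    For a linear model, contribution = coefficient * (value - training mean),
    so each reason code states which inputs pushed the score up the most.
    """
    contributions = model.coef_[0] * (x - X_train.mean(axis=0))
    order = np.argsort(contributions)[::-1][:top_n]
    return [(feature_names[i], round(float(contributions[i]), 3)) for i in order]

x_new = np.array([2.5, 1.0, 0.2, 1.8])  # a flagged transaction
print("fraud probability:", model.predict_proba([x_new])[0, 1])
print("top score drivers:", reason_codes(x_new))
```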

ACCOUNTABILITY: Technology must be transparent and compliant. Algorithm limitations must be accounted for, and algorithms must be carefully chosen to create reliable machine learning models. Accountable model development ensures decisions stay sensible as inputs change – for example, that scores rise appropriately with increasing risk.
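
One concrete way to enforce "scores adapt appropriately with increasing risk" is a monotonic constraint on the model. A minimal sketch using scikit-learn's monotonic constraints; the feature set and data are hypothetical:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

# Hypothetical features: the first two (e.g. days past due, utilisation)
# should only ever push risk up; the third is unconstrained.
rng = np.random.default_rng(1)
X = rng.uniform(size=(1000, 3))
y = (0.7 * X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.2, size=1000) > 0.6).astype(int)

# monotonic_cst: +1 = score may only increase with the feature,
# -1 = only decrease, 0 = no constraint.
model = HistGradientBoostingClassifier(monotonic_cst=[1, 1, 0]).fit(X, y)

# The constraint guarantees the scored risk never drops as the first
# feature rises, so the decision behaves sensibly for a reviewer.
probe = np.tile(X[:1], (5, 1))
probe[:, 0] = np.linspace(0, 1, 5)
print(model.predict_proba(probe)[:, 1])  # a non-decreasing sequence
```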

Part of accountability is the concept of humble AI – ensuring the model is used only on data examples and scenarios similar to the data on which it was trained. Where that is not the case, the model may not be trustworthy, and the organisation should fall back to an alternative algorithm or rely on policy rules.
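
A minimal sketch of that humble AI idea: score with the machine learning model only when the input resembles the training data, otherwise defer to a policy rule. The envelope check, tolerance and fallback rule here are illustrative assumptions, not a prescribed method.

```python
import numpy as np

def fit_envelope(X_train):
    """Record per-feature training ranges as a crude in-distribution check."""
    return X_train.min(axis=0), X_train.max(axis=0)

def humble_score(x, model_score, policy_score, lo, hi, tolerance=0.1):
    """Use the ML model only inside a slightly widened training envelope.

    Outside it, the model's behaviour is untested, so we defer to a
    conservative policy rule instead of trusting an extrapolated score.
    """
    span = hi - lo
    in_distribution = np.all((x >= lo - tolerance * span)
                             & (x <= hi + tolerance * span))
    return model_score(x) if in_distribution else policy_score(x)

# Illustrative usage with stand-in scoring functions.
X_train = np.random.default_rng(2).normal(size=(1000, 3))
lo, hi = fit_envelope(X_train)
ml = lambda x: float(1 / (1 + np.exp(-x.sum())))  # hypothetical ML score
policy = lambda x: 0.9                            # conservative fallback rule
print(humble_score(np.array([0.1, -0.2, 0.3]), ml, policy, lo, hi))  # ML path
print(humble_score(np.array([12.0, 0.0, 0.0]), ml, policy, lo, hi))  # policy path
```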

ETHICS: Built by humans and trained on societal data, AI can be discriminatory. Explainable machine learning architectures allow the specific machine-learned relationships between features that can lead to biased decision-making to be extracted. Ethical models ensure bias and discrimination are explicitly tested for and removed.
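
One simple, widely used bias test compares outcome rates across groups – the "four-fifths" disparate-impact rule of thumb. A sketch on synthetic decisions; the group labels, skew and 0.8 threshold are illustrative, and real bias testing goes much further:

```python
import numpy as np

def disparate_impact(decisions, groups):
    """Ratio of the lowest group approval rate to the highest.

    Values below ~0.8 (the 'four-fifths rule') are a common red flag
    that model outcomes differ materially across groups.
    """
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values()), rates

rng = np.random.default_rng(3)
groups = rng.choice(["A", "B"], size=2000)
# Synthetic decisions with a deliberate skew against group B.
decisions = (rng.uniform(size=2000)
             < np.where(groups == "A", 0.60, 0.45)).astype(int)

ratio, rates = disparate_impact(decisions, groups)
print(f"approval rates: {rates}, disparate impact ratio: {ratio:.2f}")
```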

With well-defined pillars and standards around explainability, accountability and ethics, we have the foundations of Responsible AI.

Enforcing responsible AI

As data scientists develop their systems, it is vital that they enlist external forces to ensure their models continue to deliver responsible AI.

Rise of advocacy 

Public awareness of how algorithms are making serious, life-changing decisions is leading to organised advocacy efforts. Many groups are so concerned about wrong decisions driven by AI that they are willing to undertake legal proceedings. This underlines the need for collaboration between advocates and machine learning experts, for the greater good of both humans and AI.

Increased regulation 

Partly due to advocacy concerns, regulations have been introduced to protect consumer rights and monitor AI developments. Views on regulation vary, but contrary to popular belief, regulation does not stifle innovation; rather, it applies socially responsible constraints to the directions technology takes.

Without external regulation, there would be no restrictions on, or control over, how organisations use data and AI. That would be a dangerous situation: regulations are vital for setting the standard of conduct and the rule of law for the use of algorithms, ensuring decisions are fair.

Auditable models

To demonstrate compliance with regulation, data scientists and organisations require a framework of corporate standards for creating auditable models and modelling processes. Audits must ensure essential steps such as explainability, bias detection, and accountability tests are performed ahead of model release, with explicit approvals recorded. This creates an audit trail for accountability, attribution, and forensic analysis.  
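
To illustrate, here is a minimal sketch of what an auditable release record might look like. The field names, checks and approval roles are invented for illustration rather than taken from any specific governance framework:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelReleaseRecord:
    """Audit entry: what was tested, and who approved, before release."""
    model_id: str
    version: str
    explainability_check: bool
    bias_test_passed: bool
    accountability_test_passed: bool
    approved_by: list
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self):
        # Hashing the record contents gives tamper-evidence for audits.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

record = ModelReleaseRecord(
    model_id="fraud-score", version="2.4.1",
    explainability_check=True, bias_test_passed=True,
    accountability_test_passed=True,
    approved_by=["lead_data_scientist", "model_risk_officer"],
)
print(record.fingerprint())
```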

Furthermore, as data changes, these same ethical AI tests must be repeated and verified in the field if the model is to continue being used responsibly and safely.
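
A standard field check for data changing under the model is the population stability index (PSI), which compares the live distribution against the development sample. A minimal sketch; the 0.1/0.25 alert bands are common rules of thumb, not a regulatory requirement:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between development and live data.

    Rules of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 the shift is
    large enough that the ethics and accountability tests should be rerun.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # catch out-of-range values
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(4)
dev_scores = rng.normal(0.4, 0.10, size=5000)   # development sample
live_scores = rng.normal(0.5, 0.12, size=5000)  # shifted production data
print(f"PSI = {psi(dev_scores, live_scores):.3f}")  # large shift, expect > 0.25
```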

Raising the standard

As the use of AI continues to grow across industries, borders and more parts of our lives, Responsible AI will be the expectation and standard. 

Organisations must enforce responsible AI now, setting and strengthening their standards of AI explainability, accountability and ethics to ensure they are making digital decisions responsibly.

Scott Zoldi is Chief Analytics Officer at FICO.

The views and opinions expressed in this Viewpoint article are solely those of the author(s) and do not reflect the views and opinions of Fintech Bulletin.