Background

These models are often still preferred for their higher accuracy, but most of them suffer from a lack of explainability, as mentioned above. Without an understanding of how such black-box models work, it is difficult to trust the results they generate. Hence, building a sustainable and explainable AI framework is often crucial.

Another drawback of many recent state-of-the-art machine learning models is that they are unable to provide feature importance at the record level. It is nearly impossible to understand the reasoning the model uses to arrive at record-level predictions, or how one observation is distinguished from another in terms of features. An explainable AI framework makes it easy to observe how a model interprets different features and how those features interact to produce the final predictions. It is the analysis of such interactions, and the way they map to human intuition, that builds trust in the framework.
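To make record-level feature importance concrete, here is a minimal sketch for the simplest case, a linear model, where the exact per-record contribution of each feature is its weight times the feature's deviation from the mean (this coincides with the SHAP value for linear models). The feature names, weights and means below are purely illustrative.

```python
# Illustrative weights and feature means, assumed from a fitted linear model.
weights = {"income": 0.8, "age": -0.2, "tenure": 0.5}
feature_means = {"income": 50.0, "age": 40.0, "tenure": 5.0}

def explain_record(record):
    """Per-record attribution: how much each feature pushes this record's
    prediction above or below the average prediction."""
    return {f: weights[f] * (record[f] - feature_means[f]) for f in weights}

record = {"income": 70.0, "age": 30.0, "tenure": 2.0}
print(explain_record(record))
# income raises this prediction by 0.8 * 20 = 16.0, age by 2.0,
# while the short tenure lowers it by 1.5.
```

For black-box models, libraries such as SHAP or LIME approximate the same kind of per-record attribution without access to the model's internals.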

Ethics and its role in Explainability of AI Models 

The advocates of ethical AI argue that AI models must follow the FAST principles. FAST is an abbreviation for Fairness, Accountability, Sustainability and Transparency. As per the research on explainability, ethical issues and bias done last year (https://www.researchgate.net/publication/353952148_Artificial_intelligence_Explainability_ethical_issues_and_), FAST is explained as below:

Fairness relates to both the algorithms and the data pertaining to human features, which must be designed to meet the principle of discriminatory non-harm.

Accountability is concerned with developing AI systems that can answer for questionable decisions generated by the AI algorithms.

Sustainability is the principle that ensures the transformative effects of AI-enabled systems on individuals and society are accounted for.

Transparency offers the basis for an AI system to explain, in simple language, the factors it considered when behaving in a specific way, and to justify the ethical permissibility, the discriminatory non-harm and the public trustworthiness of both its outcomes and the processes behind them.

Since AI is not confined to a limited number of applications but is used widely in day-to-day life, the 'black box' problems that crop up with its wide adoption must be addressed to make AI adoption seamless in the modern world.

Business Use Cases and Applications 

There are multiple direct and indirect applications of this experiment. Some of them include:

Healthcare: Explainable AI provides doctors and healthcare professionals with evidence for the cause of a disease that a model predicts. Many deep neural network-based image classification models can detect a particular disease from medical images such as X-rays and CT scans. In such scenarios, however, an explainability component is needed alongside the predictions, because we may need to know whether the reasons for which the model identifies a disease align with human understanding.

BFSI: In the banking, financial services and insurance domains, use cases such as customer acquisition, loan/credit card approvals, credit limit estimation and KYC approvals require both the basis of each prediction and awareness of crucial components like model and data drift. This helps the business know when to incorporate the latest trends that might otherwise be missed, and to check the model's continued suitability over time.
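Data drift of the kind mentioned above can be monitored with simple distribution comparisons. The sketch below implements the Population Stability Index (PSI), a common drift score; the thresholds in the comment are a widely used rule of thumb, not a standard, and the score distributions are synthetic.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline ('expected', e.g.
    training-time scores) and live data ('actual'). Rule of thumb
    (assumption, varies by team): < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(data, i):
        # Fraction of data falling in bin i; floor avoids log(0).
        if i == bins - 1:
            n = sum(1 for x in data if edges[i] <= x <= edges[i + 1])
        else:
            n = sum(1 for x in data if edges[i] <= x < edges[i + 1])
        return max(n / len(data), 1e-4)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

train_scores = [i / 100 for i in range(100)]        # baseline distribution
live_scores = [i / 100 for i in range(100)]         # identical distribution
shifted = [0.5 + i / 200 for i in range(100)]       # distribution has moved
print(psi(train_scores, live_scores) < 0.1)         # True: no drift
print(psi(train_scores, shifted) > 0.25)            # True: significant drift
```

A PSI check like this can run on every scoring batch, flagging when the model should be revalidated or retrained.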

Retail and CPG: Many retail companies use models to estimate which offers should be given to which customers. The deployed models are typically black boxes with reasonably good performance, but most of them cannot explain why a particular customer receives a particular offer. In such cases, explainability is necessary to ensure the model's predictions are in line with customer needs. It can also expose model bias, for example when one type of offer is repeatedly allocated to a selected group of customers that may not actually need it, so that the bias can be removed.

Audit and fairness of models: Recent trends make model accountability necessary, especially when a model is selected as a substitute for rational human thinking (for instance, with self-learning bots). Model accountability helps establish whether a model trained for a specific purpose can take rational decisions, and the explainable AI component helps determine whether those decisions are in fact rational.

Do you have an AI use case you want to explore?

If you’ve got a specific AI use case that you’d like some help exploring, if you’re interested in collaborating with us as partners, or if you’re just interested in finding viable and effective ways to apply AI in your organisation, speak to us today.
