AI is disrupting industries worldwide, and as new methods arise, there is growing concern about the need for more transparency and interpretability in these powerful models. McKinsey’s “The State of AI in 2020” report lists explainability as the third most relevant risk in the technology industry according to respondents, above individual privacy and others. Algorithms like Random Forest or the increasingly popular artificial neural network are undoubtedly very accurate, but these so-called “black-box” models are built directly from the data. This means that not even the engineers or data scientists who designed the algorithm can fully understand or explain what is happening inside it or how it arrived at a specific result. This article explores the concept of explainability in AI, its importance, and the different approaches to achieving it. We will also examine the ethical implications of explainability in AI, particularly relevant in the European context of the GDPR.
But what does explainability actually mean? According to IBM, explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms. Imagine you are a data scientist, and a hotel company asks you for an algorithm to predict booking cancellations. You feed your data to an ANN and obtain a very accurate result. However, the manager wants to know which factors drove each prediction so they can act accordingly. Here, a more explainable model would be fundamental for management to obtain actionable insights, even at the expense of some accuracy.
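To make this concrete, here is a minimal, self-contained sketch (not from the hotel scenario itself) of the kind of interpretable model the manager could be given: a logistic regression on a few hypothetical booking features, whose standardized coefficients directly rank the factors pushing a booking towards cancellation. The feature names and data below are invented for illustration.

```python
# Minimal sketch: an interpretable alternative to a black-box ANN for
# predicting booking cancellations. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = ["lead_time", "price_per_night", "previous_cancellations"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The standardized coefficients give a direct ranking of which factors
# push a booking towards cancellation and in which direction.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, coef in sorted(zip(feature_names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name}: {coef:+.2f}")
```

A manager can read this output directly ("longer lead times make cancellation more likely"), which is exactly the kind of insight a black-box ANN does not provide out of the box.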
An explainable model can also help you spot problems with your data, such as measurement biases. If the model assigns high importance to factors you believe should be irrelevant, there may be a problem with how the data were collected. If overlooked, this can seriously harm the firm, leading to heavily biased decisions that can ruin a business's reputation.
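As an illustration of this kind of sanity check, the sketch below uses scikit-learn's permutation importance on a synthetic dataset in which one hypothetical field accidentally leaks the label; a feature you believe should be irrelevant ranking near the top is exactly the warning sign described above. The data and feature names are invented.

```python
# Minimal sketch: permutation importance as a sanity check on a fitted model.
# If a feature that should be irrelevant (e.g. a record ID or a field affected
# by a measurement bias) dominates, the data pipeline deserves a second look.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 4))
y = rng.integers(0, 2, size=400)
# Hypothetical data problem: the 4th column accidentally encodes the label.
X[:, 3] = y + rng.normal(scale=0.05, size=400)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

feature_names = ["age", "stay_length", "channel", "suspicious_field"]
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
# A dominant "suspicious_field" here is a red flag for how the data were collected.
```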
How can we get a sense of what our model is doing? The most obvious approach is to use an inherently explainable model. The best-known are linear regression and decision trees, whose coefficients and splits are easily interpretable and rest on a solid logical foundation. Their main drawback is that they are often too simple to accurately capture the relationship between the predictors and the dependent variable. Another option is to put extra effort into the model-building process, namely data processing, preparation, and visualization. The last step is often overlooked: a good visualization can surface useful insights from the data before any model is fitted. Finally, there are post-hoc techniques such as LIME (Local Interpretable Model-agnostic Explanations), which generate explanations by approximating the model with an interpretable one (such as a linear model) learned on perturbations of the original instance, as sketched below.
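To illustrate the idea behind LIME, here is a minimal, hand-rolled sketch (not the lime library itself): a black-box model's prediction for a single instance is approximated by a linear surrogate fitted on perturbations of that instance, weighted by their proximity to it. The data are synthetic and the kernel width is an arbitrary choice.

```python
# Minimal, hand-rolled sketch of the idea behind LIME: explain one prediction
# of a black-box model with a locally weighted linear surrogate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

instance = X[0]                                               # prediction to explain
perturbed = instance + rng.normal(scale=0.5, size=(1000, 4))  # local neighbourhood
probs = black_box.predict_proba(perturbed)[:, 1]              # black-box outputs

# Weight perturbed samples by proximity to the original instance (Gaussian kernel).
distances = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(distances ** 2) / 0.5)

# The surrogate's coefficients approximate the local effect of each feature.
surrogate = Ridge(alpha=1.0).fit(perturbed, probs, sample_weight=weights)
for i, coef in enumerate(surrogate.coef_):
    print(f"feature_{i}: {coef:+.3f}")
```

The actual lime package automates this recipe (sampling, weighting, and fitting the surrogate) and adds readable output, but the mechanism is the one shown here.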
Striking a balance between explainability and complexity is often difficult. We should always consider the end user of the ML algorithm and the insights they want to get from the model. The workforce in some industries, such as manufacturing, can be very resistant to cultural change and will likely be reluctant to follow advice from “machines” they do not understand. Manufacturing is, in fact, one of the industries where XAI has proven very effective, with AI initiatives estimated to increase EBITDA by 15% overall.
Given the rising ethical concerns surrounding AI, XAI practices can help ensure compliance with national and international regulations. The European Union’s 2016 General Data Protection Regulation (GDPR), for instance, states that when individuals are affected by decisions made through “automated processing,” they are entitled to “meaningful information about the logic involved.” A similar provision is contained in the California Consumer Privacy Act (CCPA). In addition, the 2018 Montreal Declaration for Responsible AI states that “The decisions made by AIS affecting a person’s life, quality of life, or reputation should always be justifiable” and that “justification consists in making transparent the most important factors and parameters shaping the decision.”
In conclusion, explainability is becoming an increasingly critical aspect of AI development. As AI becomes more integrated into our lives, it is crucial to ensure that these systems are transparent and understandable, both to build trust in AI and to foster its responsible growth in society.