XAI – Why explain AI?

Over the past decade, AI has become a hype topic, picked up by both the news media and the scientific community. One indicator is an estimate by the International Data Corporation, which suggests that global financial investment in AI will grow from 24 billion U.S. dollars in 2018 to 77.7 billion U.S. dollars in 2022. Many of the businesses investing in AI are involved in decision-making processes with long-lasting impacts on human lives, such as credit scoring or insurance pricing.


The more that is at stake in a decision we leave to algorithms, the more important it is that the decision is based on human characteristics and factors we would consider fair distinguishers. However, there is evidence of several cases in which developers applied machine learning methods to data containing prejudices, such as racial stereotypes, without noticing the biases. One program that displayed such bias was built to screen job applicants for St. George's Hospital Medical School in London. It later became public that its selections discriminated against minorities and women: as it turned out, applicants' places of birth and surnames revealed information about their ethnicity and gender.


Understanding models is crucial for ensuring that decision-making systems are just. Explaining and understanding AI is a difficult and complex undertaking, as many machine learning models are inherently opaque. Despite the ubiquity of AI technology, eXplainable AI (XAI) has so far taken a backseat to other research tracks in AI. Yet to ensure that automated decisions are fair, knowledge of XAI methods needs to spread. That is why we want to discuss XAI methods in the following weeks.
