Infusing Ethics in AI: Causes, Effects, and Mitigation

Aishwarya Srinivasan
8 min read · Aug 10, 2022

What is ethical AI, and how does it impact the community?

With great power comes great responsibility. This has proven especially true in the case of artificial intelligence. Amid all the excitement AI has generated in this era, the technology's ill effects have not been carefully studied. The side effects are observed only once they come into play, when it is too late to stop or assess these adversities. In the last couple of years, many technology companies that rely on artificial intelligence have faced drastic consequences due to poor compliance with ethical AI, as illustrated in this Forbes article:

https://www.forbes.com/sites/carlypage/2020/10/01/ai-has-resulted-in-ethical-issues-for-90-of-businesses/?sh=53ae965a3ff0

It is beyond question that artificial intelligence can change the world, but at what cost needs to be thought through. Ethics, in broad terms, is the discipline dealing with right and wrong and the moral obligations associated with our creations. Now that we humans have started creating so-called thinking machines, there are moral duties we need to be cognizant of.

Ethical AI can be viewed across three major landscapes: the AI algorithms themselves, what the technology does in the short term, and what its effects will be in the long term. The issues that expose ethical loopholes in AI should make us pause and be more responsible and accountable for what we are developing.

Starting with the production of AI, where we build machine learning models, we need to make sure the models are unbiased, accountable, interpretable, and explainable. In several situations, we have seen models put into production generate biased results, most commonly concerning race, ethnicity, gender, or age. One of the most infamous case studies was the Apple Card, which was reported to show gender bias when applicants requested a credit-limit increase. The story blew up, and Goldman Sachs, which had built the credit-risk models, was drawn into the picture. More details here:

https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/

Such biases could arise because different target groups have different ground-truth positive rates, or because the data is a biased representation of the ground truth. Identifying and eradicating such biases takes careful investigation and subject-matter expertise.
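A first diagnostic is to compare base rates across groups in the training data: large gaps can signal either a genuine difference in the ground truth or a biased sample. Below is a minimal sketch with pandas, where the dataset and column names are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical applications data; column names are placeholders.
df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "M", "F", "M", "F"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   0],
})

# Positive (approval) rate per group: a large gap here warrants a
# closer look at how the data was collected and labeled.
positive_rates = df.groupby("gender")["approved"].mean()
print(positive_rates)
```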

Technology is developed with documentation reflecting a specific purpose, but how and when it will come to be misused cannot be predicted. There has been enough evidence in the past of this nightmare turning into reality, discovered only when it was too late to salvage the situation. Several countries have stepped in to create policies and laws governing the data privacy of their citizens. Facebook, the most popular social media platform, and in fact the one that spread the hype of social media across the globe, faced data-violation penalties summing to $5 billion in 2019, about 7% of the $69 billion it earned that year.

Now, who should be responsible for maintaining governance around data and its use cases? Each of us: every data analyst, data scientist, and data associate, every individual contributing to an AI use case from design through deployment, needs to take responsibility. Ethical AI is a skill that needs to be inculcated in every curriculum, and it is crucial to create awareness of the rightful means of developing with AI.

How do we ensure fairness while building machine learning models?

The road from a machine learning model built in Python notebooks to something that works in production is a hard one. When building machine learning models to generate business value, we need to comply with certain criteria: a machine learning model is only true to its potential if it takes into account fairness, explainability, and ethics.

For a model to be fair, we must consider the following conditions:

  1. Un-biased: The model's predictions should be independent of features that can introduce bias, such as gender, age, ethnicity, or race. As data scientists, we need to make sure the prediction does not hinge on any one of these attributes. A feature like age, gender, or ethnicity may carry genuine predictive signal, but the model should not be overly sensitive to it. For example, a machine learning model's decision to approve a credit-limit increase should not change based on gender alone, with all other feature values held the same (a minimal check of this kind is sketched after the list).
  2. Drift: Model drift is the degradation of model performance over time due to changes in environmental factors or in the influence of the predictive features on the target. Because of data drift or model drift, a model that was initially unbiased can slowly start producing biased results, so monitoring for drift detection and correction is crucial (a simple drift check is also sketched below).
  3. Robustness: Model robustness refers to the sensitivity of a machine learning model, that is, how much its prediction changes with slight changes in the input parameters. If the output depends so heavily on a particular parameter that a slight modification flips the prediction, the data needs to be investigated, as it could be biased.
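
As a minimal sketch of the checks in points 1 and 3, we can flip the sensitive feature on a copy of each row, hold everything else fixed, and compare predictions; any prediction that changes deserves investigation. The dataset, feature names, and model below are hypothetical placeholders, not the setup from any real case:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical credit-limit dataset; "gender" is the sensitive feature.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 500),
    "tenure": rng.integers(0, 20, 500),
    "gender": rng.integers(0, 2, 500),  # 0/1 encoding, for illustration
})
y = (X["income"] + rng.normal(0, 10_000, 500) > 60_000).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Counterfactual check: flip gender, keep all other features fixed.
X_flipped = X.copy()
X_flipped["gender"] = 1 - X_flipped["gender"]

changed = model.predict(X) != model.predict(X_flipped)
print(f"{changed.mean():.1%} of predictions change when gender is flipped")
```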

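For the drift point above, one simple and widely used signal is a two-sample Kolmogorov-Smirnov test comparing a feature's distribution at training time against recent production data; a small p-value suggests the distribution has shifted. A minimal sketch with SciPy, where the data and the alert threshold are purely illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Feature values seen at training time vs. in recent production traffic.
train_income = rng.normal(60_000, 15_000, 5_000)
prod_income = rng.normal(65_000, 15_000, 5_000)  # simulated shift

stat, p_value = ks_2samp(train_income, prod_income)
if p_value < 0.01:  # illustrative alert threshold
    print(f"Possible drift detected (KS={stat:.3f}, p={p_value:.1e})")
```
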
Once we account for model fairness, we need to make the model explainable. For stakeholders and executives to understand the model and the features affecting its predictions, we need to attach explainability to the model: we need to understand the features and a quantitative measure of their impact on the target variable.

There are multiple open-source Python packages, such as LIME and SHAP, that can help you explain the features influencing your machine learning models.
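
As a quick illustration of SHAP, a tree ensemble can be explained with shap.TreeExplainer, which attributes each prediction to per-feature contributions. This is a minimal sketch on synthetic data, not a complete workflow:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Tiny synthetic dataset, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP values quantify each feature's contribution to each prediction
# (the exact output shape varies slightly across shap versions).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))
```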

Let us look at an example of using LIME to explain a text classification model on the famous 20 Newsgroups dataset. We take two classes that are difficult to distinguish because they share many words: Christianity and atheism. We train a random forest with 500 trees; judging by the accuracy value of 92.4% alone, the model looks good, but we need to evaluate the reasons behind its decisions in order to trust it, hence we use LIME. Below is an explanation for an arbitrary instance in the test set, generated using the lime package.
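
A minimal sketch of that setup, following the lime package's text tutorial; the test instance index is arbitrary, and your exact numbers will differ from those quoted below:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["alt.atheism", "soc.religion.christian"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

# TF-IDF features + a 500-tree random forest, as described above.
pipeline = make_pipeline(TfidfVectorizer(lowercase=False),
                         RandomForestClassifier(n_estimators=500,
                                                random_state=0))
pipeline.fit(train.data, train.target)

# Explain one test instance with a sparse, 6-feature local model.
explainer = LimeTextExplainer(class_names=["atheism", "christian"])
exp = explainer.explain_instance(test.data[83], pipeline.predict_proba,
                                 num_features=6)
print(exp.as_list())  # words and their local weights
```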

This is a case where the classifier predicts the instance correctly, but for the wrong reasons. A little further exploration shows that the word "Posting" (part of the email header) appears in 21.6% of the examples in the training set, yet only twice in the class 'Christianity'. The pattern repeats in the test set, where it appears in almost 20% of the examples, again only twice in 'Christianity'. This kind of quirk in the dataset makes the problem much easier than its real-world counterpart, where this classifier would not be able to distinguish between Christianity and atheism documents. That is hard to see just by looking at accuracy or the raw data, but easy once explanations are provided. Such insights become common once you understand what models are actually doing, leading to models that generalize much better.

Note further how interpretable the explanations are: they correspond to a very sparse linear model (with only 6 features). Even though the underlying classifier is a complicated random forest, in the neighborhood of this example it behaves roughly like a linear model. Sure enough, if we remove the words "Host" and "NNTP" from the example, the "atheism" prediction probability becomes close to 0.57 - 0.14 - 0.12 = 0.31. [1]

You can find resources below on how to use these packages for model explainability. [1][2]

References:

[1] https://homes.cs.washington.edu/~marcotcr/blog/lime/

[2] https://github.com/slundberg/shap

When we look at what AI does, the areas of concern correspond to human-AI interaction, cybersecurity, privacy, and malicious use. With social media becoming a need for everyone around the world, we have started seeing signs that make us think about where it might lead in the future. Teenagers are influenced by social media so much that they start measuring their self-worth by the likes and comments they receive on these platforms. The Netflix documentary The Social Dilemma rightly and concerningly shows how human brains are being ruled by machines and algorithms, and how the decisions, emotions, and actions of about 7.5 billion people are controlled and triggered by a handful of tech designers. If we realize the solemnity of the issue, we will discern the need to act on it.

“Data is the bedrock of AI. No data, no AI. Skewed data leads to ineffective models and AI that doesn’t reflect the real-world. Until recently, AI researchers and engineers have been focused on ‘making it work’. Before it’s too late we need to think about ‘how it works?’ and ‘what does it mean for it to work?’ AI is data-hungry, and it’s essential to follow ethical guidelines in the very first step of creating an AI model if we want a more equitable world and a more just society. Data Scientists and AI researchers need to constantly ask and answer questions such as How was the data collected? What does it represent? What could be the after-effects of using certain data dimensions (like gender and geo-location) in creating a model? Is the data balanced and representative? What are the scenarios when bias is introduced? At this juncture, there are more questions than answers in coming up with reasonable mechanisms and guidelines that can be followed uniformly by AI implementers.

Just as in business and in life — our collective and individual ethics guide how we conduct ourselves, take decisions, and build solutions. Like it or not, in the not-so-distant future, AI will be a reflection of society, our humanity, or lack of it. “

Rati Sharma, Head of Data Analytics, Prime Brokerage Technology, Morgan Stanley

I hope you liked reading the blog. Please leave your questions and thoughts in the comments below.

Please do share this article with your friends and colleagues who would be interested in learning about ethical AI.

