Was Thanos the good guy? What about the Joker, or the Krishna-Arjun dynamic just before the epic war? AI and Ethics?
In this era of AI/ML, it is all the more necessary that we recognize what is ethical and what is not, because these judgements will shape the crucial decisions of our lives in the near future.
In those ridiculous moments, you have probably heard people say, a number of times, that “Thanos was right.” For example, when you see that climate change is real and millions are still unwilling to believe it, you probably say it to yourself too. But then you compose yourself and repeat, “No, he killed half the universe, right? He was a bad guy in the end.” And life goes on. But at the back of your mind, you still wonder what that was all about. Was Thanos correct all along?
Maybe you wonder the same about the Joker in ‘The Dark Knight (2008)’. Batman was pushed to the edge by the Joker, and what followed was a peaceful era in the city of Gotham, with no vigilante and no crime either. But at what cost? It is beautifully explained in this top-notch YouTube video. Just for the sake of it, consider Hitler’s actions too, or Light Yagami, or Krishna and Arjun for that matter. To give some context to those who do not know about Krishna and Arjun: a YouTube video. So, how can Sri Krishna, an incarnation of God, tell Arjuna to fight and kill?
And consider some of the decisions that businesses have to make about what is right and what is wrong. One cannot cite “Everything is fair in love and war” in every layoff email. Consider the emerging world of Web 3.0 for that matter. The Metaverse will slowly seep into our lives; we are already glued to our phones, interacting with content with our thumbs anyway. The next phase is for our neurons to be connected to the vastness of information, and when that happens, there will be huge risks. Even in our present situation, we recently saw the Cambridge Analytica scandal of 2018, and other risks are coming forward in credit scoring, fraud detection, and even surveillance.
To understand this, let’s go to the basics for a while. Let’s take the classic example of ethics, the Trolley Problem (with memes).
You know none of the people lying on the tracks. They are complete strangers to you (so you cannot say you have developed feelings for any particular person). In other words, you are unbiased. What would your choice be in that case? Chances are you will save the five people on the lower track at the expense of the one person on the other track. This is what we call the “Utilitarian Lens”, where we try to produce the greatest good and minimize the harm. Briefly, the six lenses are:
- The Rights Lens: Respecting the moral rights of every individual.
- The Justice Lens: The goal is to give each person what is due to them.
- The Utilitarian Lens: Maximizing good and minimizing harm.
- The Common Good Lens: Taking into view our society and the common resources we share; these common resources should be given priority.
- The Virtue Lens: Virtues are the value system that guides a person to do the right thing; not asking “What should I do?” but “What kind of person should I be?”
- The Care Ethics Lens: A looser lens, which basically says we should listen and respond to others’ concerns.
If you are still here, great work!
The problem arises when you have to apply multiple lenses to arrive at a conclusion. Which lens do you give priority to? Let’s say there is an order to this, a prioritized order, so to say, of which lens is the most important and which the least. But why do you need this order? It is required not just for you to make decisions, but also to help you when you are on the receiving end of such judgements. When you know the rules of this game, you know how you are going to be judged, and what the consequences are going to be. For example, say the single person on the other track is a lady who is eight months pregnant, so her child has a lot at stake too. And the other five, let’s say, are ordinary people you don’t know much about. So, what does your value system say? You are in a conflict now. Here you have two choices:
- Either apply a prioritized ordering of the lenses that everyone agrees to,
- Or apply your own version of the prioritization (say, your own value system).
The first one we will call ethics, and the second one we will call morality. And by now, you can see that morality is subjective.
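The idea of a prioritized ordering of lenses can be sketched as a tiny decision procedure. This is purely illustrative: the lens checks, the ordering, and the action attributes below are all hypothetical assumptions made for the sake of the example, not a real ethical calculus.

```python
# Illustrative sketch: evaluating an action against a prioritized
# ordering of ethical lenses. A failure at a higher-priority lens
# vetoes the action, regardless of what lower lenses say.
# All lens logic and action attributes here are hypothetical.

def rights_lens(action):
    # Does the action respect everyone's moral rights?
    return not action.get("violates_rights", False)

def justice_lens(action):
    # Does the action give each person what is due to them?
    return not action.get("unjust", False)

def utilitarian_lens(action):
    # Does the action produce more good than harm overall?
    return action.get("net_good", 0) > 0

# The prioritized order: checked top to bottom.
PRIORITIZED_LENSES = [
    ("Rights", rights_lens),
    ("Justice", justice_lens),
    ("Utilitarian", utilitarian_lens),
]

def evaluate(action):
    for name, lens in PRIORITIZED_LENSES:
        if not lens(action):
            return f"Fails the {name} lens"
    return "Passes all lenses"

# Thanos' snap: a positive expected "net good" in his own view,
# but it violates rights and is unjust, so the higher lenses veto it.
snap = {"violates_rights": True, "unjust": True, "net_good": 1}
print(evaluate(snap))  # Fails the Rights lens
```

The point of the sketch is only the shape of the reasoning: with an agreed ordering (ethics), everyone reaches the same verdict; swap in your own ordering and you get your personal morality.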
Great. Now, we are all on the same page. And we will discuss Thanos.
Thanos’ objective was to save life by cutting down overpopulation (he had a second objective with a link to the Eternals, but since he himself ‘forgot’ about it, let’s keep it aside for now), much like how we trim lawn overgrowth or how fires help forests maintain their health and vigor. Is his act justified? You should not rely on analogies here. Just because a forest was saved by a fire does not mean that what Thanos did was right. Maybe it was, but that is not the way we are going to think about it. We will instead look at it through the prioritized order of the lenses. If you go back and re-read the lenses, you will find that his is a classic “Utilitarian” problem: sacrificing half of all population so that life may thrive in the long run, i.e., the greater good. But we have a lens superior to it, the Justice lens. Is what he did justified? Of course not, because one cannot guarantee the longevity of life if all life forms are decreased by half. It is speculation, and a second-order result at best, that life would survive; many other things might eventually follow, and Thanos has no control over them. So you see, the Utilitarian lens fails. The Justice lens obviously fails, and so does the Rights lens. Now, if you still have a doubt somewhere in you, it is your own subjective morality that is speaking. And when we want a consensus, we want a framework that is equally acceptable to all, so as to have a common ground for judgement.
This is where so many people get confused about the intentions of Thanos. Some people agree and some disagree. But I hope this framework helps you see it now: Thanos was wrong. Some more content in this regard, if you want.
Next, look at Hitler. Same as Thanos: the Rights lens fails. The Joker? Again wrong, because the end does not justify the means; you have to take everyone into consideration. Light Yagami, same. You see a pattern, right? We are conflicted within ourselves. We sympathized with every villain mentioned, yet somewhere deep down we knew they were wrong; we just did not know how to articulate why.
Now, Krishna and Arjun. Krishna tells Arjun to stand for the truth, the greater good. But at what cost? By killing his own uncle, his own teachers, his own cousins on the enemy side? Here, it is important to understand the context first. Krishna is using the Utilitarian lens (the larger good), but the superior lenses of Justice and Rights get nullified because both parties are willfully on the battlefield, ready to die if necessary. Everyone on the battlefield has already defined and declared their own sense of Rights and Justice. This is the context before we apply the lenses.
Even if you do not agree with me completely, the point of this whole article is that we must have a framework in place when we try to come to a decision. Because ultimately, ethics is not merely a game we play for the sake of it. It is a noble endeavor to consider everyone’s good, yours and mine included.
Currently, most executives are actually unsure about the ethics and transparency of their AI systems. In a study conducted by Capgemini with 1,580 executives, more than two in five reportedly abandoned an AI system altogether when an ethical issue was raised. In fact, “Today, we don’t really have a way of evaluating the ethical impact of an AI product or service,” says Marija Slavkovik, associate professor at the University of Bergen.
Another reason businesses must be ethically aware is regulation. Governments are definitely going to bring in regulations in these areas, and it is better to operate with a regulatory framework in mind from the start, rather than spend millions when regulation kicks in. Regulations can also create a huge competitive advantage for those who comply over those who don’t.
In the end, research is going on in this field, India has set up its own center for Responsible AI, and as new problems arise we, as citizens of this world, will overcome them, as we always have and always will. But as individuals, we should understand the implications and the dynamics that could play out when things go south, and we should all have a North Star we can look up to, to align our values.
This article is purely for an academic thought exercise and is not an attempt to justify their actions.
References:
- Film Theory: Joker Is The Hero of Gotham (Batman The Dark Knight)
- Light Yagami, Death Note
- Ethics and the history of Indian Philosophy
- Responsible AI, Research Report, NITI Aayog, India
- The Capgemini Report
- Bhagavad Gita
- The Bhagavad Gita — Krishna Speaks With Prince Arjuna — Extra Mythology
- The Trolley Problem
- A Framework for Ethical Decision Making
- Restoring Fire to Native Grasslands
- Consequentialism
- Is the Thanos Snap Ethical?? | Avengers Infinity War | Space Taste
- Google Announces $1 Mn Grant to IIT Madras for Responsible AI Initiative