
Chapter 4: Negative Impact of AI on Humanity

Editor's Note: This post is part of a series on the Ethics of Artificial Intelligence.

Just as the positive impact of AI cannot be overstated, neither can the negative impact, and it must be explored.  The negative impact of AI is the very reason ethics in this area is important, and proper controls will help mitigate the risks.  These are just a few of the potential negative impacts to humanity from the enablement of AI, specifically in Weak AI scenarios unless otherwise stated.

  • De-emphasis of Human Work.  We know that part of a person's understanding of their own worth is tied to how they 'work' in the world, and in the same way, we bestow upon that work a type of value because of who created it.  Think of how a meal lovingly created by one person for another has more intrinsic value than a meal purchased from a vending machine.  The risk is that as machines become more capable of accomplishing tasks that humans would otherwise do, we'll see a tension between the efficiency of the machine and the value of the work created by a human.  This isn't to say that we should keep jobs around just because humans do them, but if we do not find alternative work for humans to do, we may be hyper-optimizing a process while making humanity as a whole worse off.  The efficiency gains from machines may be great, but we need to be careful we do not miss the inherent value that human activity imparts to what is created.  Is there value, for instance, in a friendly maintenance guy at a school vs. a robot that cleans the floors?  There might be, but the value might not be directly in the clean floor; it is more so in the presence of the person who connects with the students while performing a duty.  This is the tradeoff we need to be aware of as we seek efficiency gains.
  • De-emphasis of Human Intelligence.  We already see this downside in the ubiquity of smartphones: learning and the retention of knowledge have been replaced by the ability to quickly retrieve information by searching the internet.  The accessibility of information is a benefit, but the trend toward not knowing information because it can be quickly searched for is not.  In the same way, as AI solves complex engineering, medical, or industrial challenges, there may be an inclination that we no longer need experts in these fields because the machine can figure it out for us.  If we do not surround the machine with humans who understand what is happening inside the 'black box', we'll find ourselves dependent and unable to innovate because we no longer have the knowledge.  The refrain, "I don't need to know this because a machine already figures it out", could increasingly become an issue.  The idea that we cannot understand our own machines is a troubling future that could easily come to pass.
  • Weakened Interpersonal Relationships.  For all the benefits that AI can bring to interpersonal relationships, it could also drive those relationships apart as we replace human relationships with machines.  We already see many situations where children and adults alike are addicted to digital content, games, or experiences.  As AI grows more sophisticated, people will likely come to prefer the AI relationship to that of an actual human.  The AI can be configured by the human to interact a certain way, to sound a certain way, to "look" a certain way, and to approximate whatever relationship they desire.  These hyper-optimized AI-human relationships will lessen the focus on true human interdependence and create a society of highly functioning hermits.  If part of the Human Difference is our ability to be in relationship with each other, we need to understand that AI will create many opportunities to poison the perceived need for those relationships.  You might say, "What's wrong with that?  A machine may actually be able to be programmed to provide better advice, better interactions tuned to the person, and better stimulation."  The issue is that the relationships humans share with each other are qualitatively different from any human-machine relationship, because we share all of the qualities of the Human Difference.  The attempt to replace them with an approximation may seem like a good idea, or even feel like a good idea, but it will be only an approximation, not the real thing.  You have only to look at a person who has shut themselves off from society to see the result of such an experiment, even if a future AI approximation is better than what they have now.  We need to be in relationships with each other, because it is through those relationships that we grow into the people we have the opportunity to be.
  • Expanded Biases.  A common concern with the expansion of AI into many areas of society is the expansion of biases based on data fed into the system or discovered by the AI service.  Let's say an AI system responsible for approving loans discovers that a certain minority group has a higher risk of default in a certain area.  It may then begin to discriminate not based on the criteria of the loan, but on membership in the minority group itself, treating it as a predictor.  In this case we would need to correct the bias in order to achieve a system that approves loans based on the criteria we intend.  The bias was derived from data, but the system may not have been looking at the right data to make its decisions.  The challenge will be the extent to which we can guide AI through the data acquisition and delivery process when it is self-learning.  For example, a Microsoft AI system 'learned' inappropriate biases on Twitter by reading commentary; it took comments to be 'fact', which drove the model and then governed how it interacted with humans.  In both the loan process and the social bot we needed to correct the model, either by tuning how the AI model leveraged the data or by feeding the appropriate data into the model at the start.  As learning models become more sophisticated and mitigating their risks becomes harder, we will need to take this area of study very seriously (a rough sketch of a simple bias audit appears after this list).
  • Privacy and Security.  The expansion of AI into personal life and personal data greatly raises the stakes for maintaining AI systems in a secure way that limits access to the information they store and share.  Let's say a bank holds your purchase history and makes that information available to an online retailer to make better product suggestions based on the time of year, comparable products you've purchased, or your financial capabilities.  For some this might be an asset and greatly appreciated, but for many it would be considered a gross invasion of privacy.  Beyond the sharing of information, the pervasive spread of personal data will make extracting and purging personal data very challenging, which matters if we are to take starting points like GDPR seriously.  There is a wide span of perspectives on how much accessible personal data is a problem, but at some point it crosses a line, failing to respect the boundaries we establish in society and creating a risk of inappropriate behavior from dishonest people or machines created by dishonest people.  The truth is that we care about privacy because of that moral delineation, and we need to be prepared to protect it in order to ensure that an AI-powered society reflects what is best about us, not what is worst.
  • Economic Impact from Mistakes.  The collapse of the markets during the Great Depression hurt an enormous number of people financially, which in turn affected their quality of life and the state of society.  The mistakes that led to that collapse, and the way the market was managed, became the catalyst for new controls governing its operation.  The introduction of AI into economic controls is one example of where a mistake by an algorithm could create substantial harm to the global community.  A scenario where an AI solution continues unimpeded down a path that is causing economic harm because of faulty logic is very real, and could be difficult to mitigate if we don't understand what's happening inside the "black box" or don't realize what's going on until it's too late.  Let's say an individual has an AI retirement manager for their 401k, and instead of trading appropriately it begins to make financial choices that leave the majority better off but intentionally make some individuals worse off.  That individual looks at their 401k and it's a quarter the size they were expecting… that is the sort of issue we need to mitigate (a sketch of a simple 'circuit breaker' control appears after this list).  We could also see economic impact from for-profit organizations whose processes were negatively affected by faulty AI, or who sold less because of a mistaken self-learning process.  In all of these cases the impact of the AI system on the financial stability of companies, the market, and individuals is real and something we need to care about from an ethical standpoint.
  • Income Inequality Widens.  A major concern with automation, and AI specifically, is the widening of income inequality across the world.  As AI creates new jobs and causes other jobs to disappear, it could affect the wellbeing of much of humanity.  We need to look collectively at how this technology can advance the distribution of prosperity, or we'll find that we've improved efficiency without improving quality of life.
  • Disaster.  If AI is responsible for a system that could cause major physical damage, the impact of the system mismanaging its environment could be catastrophic.  For example, a hydroelectric dam's control system believes a failure is coming, so it proactively opens the floodgates, causing downstream damage to property and endangering people because it prioritizes the facility over individuals.  The ability of a system to prevent a problem in a facility is noteworthy and helpful, but if it doesn't consider the wider impact of its "decision" it can cause harm elsewhere.  This is the difference between a human and a machine in this scenario: a human might intuitively know, "I don't want to destroy the downstream town", while a machine might only reason about the closed loop it controls (a sketch of one way to encode such an external constraint appears after this list).  We need to be careful we don't create our own disasters when we consider AI implementations.
  • Confusion.  Many of the AI models built today are built by trained data scientists who understand what is happening inside the "black box" and can tune the model as needed.  With self-learning models we may reach a point where no one understands what is occurring inside the model and the outcomes are largely just trusted.  The danger here is confusion about how a system works, and even more about how interconnected systems work.  In cases where one AI model depends on another, even the best human mind may be unable to unravel the cause of an error, and by then it may be too late.  The prospect that our own models will be unknown to us, leaving us at the mercy of the interconnected system, is one we need to mitigate via controls (a sketch of one basic inspection technique appears after this list).
  • Weaponization.  To many, the obvious concern around AI is weaponization, with the main question being: should AI be allowed to make the decision to kill a human being?  You could argue, with the evidence of friendly fire, wartime atrocities, criminal behavior, and misidentification of suspects, that AI would perform better than an armed humanity and lead to better outcomes.  The statistics alone, proponents argue, will prove that AI is better at making the decision to kill.  With that said, there is something off about the idea of AI being able to kill a human being, if only because we are unsure of the potential for error, especially at a catastrophic scale.  The idea that one coding mistake could lead an AI system to exterminate huge numbers of people is terrifying, and the premise of many science fiction movies.  The concerns around weaponization are real, and it is only a matter of time until it is put into practice.  The question is whether we respond by moving full speed down the slippery slope or put controls in place to limit such an implementation.
  • Immoral Goals.  The use of AI to accomplish immoral ends is another very real outcome of the AI economy.  Alongside all the individuals leveraging AI for the betterment of society, we will certainly have individuals, corporations, or governments who attempt to use AI to accomplish goals that are morally wrong.  Mitigating this will be challenging, since many of these actors will employ the same sort of intelligence and capable scientists that people with good intentions do.  This is an area where law will become very important as a control against misuse of the technology, though misuse will be difficult to catch.  As with other emerging criminal or simply inappropriate behavior, sufficient social and legal controls will need to be erected to protect society and the world.
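
To make the bias point concrete, here is a minimal sketch in Python of the kind of audit that can surface proxy discrimination.  Everything here is invented for illustration: made-up applicant data and a hypothetical stand-in "model" that never sees the protected attribute directly, yet rediscovers it through a correlated feature.

```python
import random

random.seed(0)

def make_applicant(group):
    # Hypothetical data: zip_code correlates with group membership, so
    # dropping 'group' from the inputs does not remove the bias, because
    # a model can rediscover the group through the proxy.
    zip_code = 1 if (group == "B" and random.random() < 0.8) else 0
    income = random.gauss(60, 10) if group == "A" else random.gauss(55, 10)
    return {"group": group, "zip_code": zip_code, "income": income}

applicants = [make_applicant("A") for _ in range(500)] + \
             [make_applicant("B") for _ in range(500)]

def model_approves(a):
    # Stand-in for a trained model that learned zip_code as a proxy for
    # group; note it never looks at 'group' directly.
    return a["income"] > 50 and a["zip_code"] == 0

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(model_approves(a) for a in members) / len(members)

for g in ("A", "B"):
    print(f"group {g}: approval rate {approval_rate(g):.0%}")
# A large gap between the two rates flags proxy discrimination even though
# the protected attribute is never an input to the model.
```

Comparing outcome rates across groups is one of the simplest checks available; it does not fix the model, but it tells us the bias exists so we can correct the data or the model as described above.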
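
On economic mistakes, one common mitigation is a guardrail that sits outside the model entirely.  The sketch below (hypothetical names and thresholds, not any real trading system) halts an AI retirement manager and escalates to a human once losses cross a hard limit.

```python
# A drawdown "circuit breaker" around an automated portfolio manager.
# All names and thresholds here are hypothetical, not a real trading API.

MAX_DRAWDOWN = 0.15  # halt trading if an account falls >15% from its peak

def guarded_rebalance(account, proposed_trades):
    """Execute the AI's proposed trades only while losses stay within bounds."""
    drawdown = 1 - account["value"] / account["peak_value"]
    if drawdown > MAX_DRAWDOWN:
        # Stop the model and escalate to a human instead of trusting it blindly.
        return {"action": "halt", "reason": f"drawdown {drawdown:.0%} exceeds limit"}
    return {"action": "execute", "trades": proposed_trades}

# The 401k scenario above: an account at a quarter of its expected value
# trips the breaker rather than letting the model continue unimpeded.
print(guarded_rebalance({"value": 25_000, "peak_value": 100_000},
                        ["sell X", "buy Y"]))
```

The design choice is that the breaker does not need to understand the black box at all; it only watches outcomes, which is exactly what helps when the faulty logic inside is opaque.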
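
For the disaster scenario, the point is that a closed-loop controller optimizes only what it can see.  Here is a minimal sketch (all names, units, and thresholds are invented for illustration) of encoding the downstream town as an explicit constraint on the dam controller's decision.

```python
# A hypothetical floodgate controller with an explicit external-impact
# constraint, so it cannot protect the facility at any downstream cost.

RESERVOIR_CRITICAL = 0.95  # fraction of capacity at which failure is predicted
SAFE_RELEASE_RATE = 300.0  # max discharge (m^3/s) the downstream town can absorb

def plan_release(level, failure_predicted, requested_rate):
    """Return a discharge rate capped by the downstream safety constraint."""
    if not failure_predicted and level < RESERVOIR_CRITICAL:
        return 0.0  # nothing to do
    # A naive closed-loop controller would return requested_rate here; the
    # cap forces a slower, staged release even when the dam itself is at risk.
    return min(requested_rate, SAFE_RELEASE_RATE)

# Failure predicted and the controller wants to dump 900 m^3/s -> capped at 300.0
print(plan_release(level=0.97, failure_predicted=True, requested_rate=900.0))
```

The constraint is a stand-in for the human intuition in the example: the system relieves pressure on the dam, but never faster than the people downstream can tolerate.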
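
Finally, on confusion: interpretability techniques give humans at least a partial view into a black box.  Below is a minimal sketch of permutation importance, one of the simplest such techniques, again using a hypothetical stand-in for a trained model.

```python
# Permutation importance: scramble one input at a time and measure how
# often the model's output changes.  The "black box" below is a
# hypothetical stand-in for a trained model we cannot read directly.
import random

random.seed(1)

def black_box(x):
    return 1 if (0.7 * x["pressure"] + 0.3 * x["temp"]) > 50 else 0

data = [{"pressure": random.uniform(0, 100), "temp": random.uniform(0, 100)}
        for _ in range(1000)]
baseline = [black_box(x) for x in data]  # the model's unperturbed outputs

def agreement(scramble=None):
    hits = 0
    for x, y in zip(data, baseline):
        x2 = dict(x)
        if scramble:
            x2[scramble] = random.uniform(0, 100)  # destroy that feature's signal
        hits += (black_box(x2) == y)
    return hits / len(data)

for feature in ("pressure", "temp"):
    drop = 1.0 - agreement(scramble=feature)
    print(f"{feature}: output changes {drop:.1%} of the time when scrambled")
# 'pressure' should move the output far more often than 'temp', telling a
# human auditor which input the opaque model actually depends on.
```

Techniques like this do not fully open the box, but they are exactly the sort of control that keeps humans surrounding the machine rather than simply trusting its outcomes.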

These negative outcomes are not the only ways AI may harm individuals and society, but they represent many of the impacts we have not yet mitigated well.  As practitioners of AI, we owe it to the world to implement appropriate governance around the ways we use it and to protect ourselves through patterns, practices, and laws.  In future chapters we'll explore how to apply ethical principles and controls to AI models, improving the opportunity to use AI for good and mitigate negative effects.

Nathan Lasnoski
