Chapter 2: The Definition of Artificial Intelligence
Editor's Note: This post is part of a series on the Ethics of Artificial Intelligence.
What is AI? Is AI essentially any computational process that takes an input and arrives at a probability, or does it depend on something more? To have a fruitful conversation about artificial intelligence, we need to break it down into at least two main categories:
Weak AI is a system which solves computational problems: I take data, apply it against a model, and receive a result. The model might be very complex, but it’s just that: a model. It was created by a data scientist, it exists in a computational execution engine, it receives an input, and it produces an output. This is what most people are using when they consider an AI system today. The model is created with a purpose, provided by humans, and then applied against a set of data, not within an ecosystem (a minimal code sketch of this pattern follows the list below). A few interesting things about Weak AI:
- Weak AI is the current emerging market and is being applied to most industry scenarios in some capacity. The results of leveraging Weak AI are clear, and the positive financial and operational impact is driving companies to take advantage of the capability, perhaps before fully understanding the impact on their workforce, customers, or risks. The benefit is so great that companies will push to innovate and leverage Weak AI to gain competitive advantage.
- Weak AI can be either “ground up” computational models or pre-built IP such as Microsoft’s Cognitive Services, where a platform leverages ready-made capabilities like image recognition or speech-to-text. The idea of “ground up” models has been around since we’ve been doing “statistics”, which mathematicians will remind us has been quite some time. The advent of cloud computing has made the application of these ground-up models, and especially of pre-built models, available to most if not all companies.
- Weak AI’s laws have not been adequately defined. For instance, if a car with automated driving hits another car because it swerved to avoid a pedestrian, who is at fault? The driver, the car company, the pedestrian, the company that built the AI system, or the individual who built the AI system? The assignment of fault and causality will drive the legal industry into breaking new ground and will have a significant effect on how companies apply and test their AI systems.
- Weak AI’s model differentiation will become product differentiation. Let’s say several companies build an automated driving system for their cars. The smarter one, or the one most aligned with the driver’s safety, might be the one buyers prefer. We often buy cars because they are safer, with better air bags, construction, weight, etc. What if safety means not getting into an accident at all, because the car can game the system better than other cars?
- Weak AI’s data will become one of the greatest debates of our time, especially with the advent of regulations like GDPR. Weak AI drives algorithms based on data, which needs to be collected and stored *somewhere*. The question is: whose data is it? Can I request it be deleted? Can I govern what kinds of data can and should be used to describe me in a system and make decisions about me? The sovereignty of that data, personally, organizationally, and nationally, will become a hard-fought battleground.
- Weak AI still tends to execute pre-built models, even if complex ones. The system might seem to act more and more capable, but it is still simply executing a model against a set of criteria fed into the machine.
- Weak AI can take on some human-like qualities, despite not being a true learning machine. For example, a car might contain hundreds of sensors that gather information about the exterior and interior of the car and the driving conditions, and that power the automated driving. The sensors of a car are similar to how we sense the world around us as we drive, or notice that the car is making a strange noise. These senses are qualities which start to bring AI’s capabilities closer to how a human gathers information in a situation, and they will be part of how we later compare the future capabilities of machines against the Human Difference.
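To make the “data in, result out” pattern above concrete, here is a minimal Python sketch of the Weak AI flow: a human chooses and trains a model, and afterwards the system only executes it against new inputs. The dataset and estimator are illustrative assumptions, not a reference to any particular product.

```python
# A minimal sketch of the Weak AI pattern: a human-built model is trained once,
# then simply executed against new inputs. The dataset and estimator here are
# illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A data scientist prepares data and chooses a model (the "ground up" case).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # the model is created with a purpose, by humans

# From here on the system only executes: input in, result (and probability) out.
print(model.predict(X_test[:1]))        # predicted class for one new sample
print(model.predict_proba(X_test[:1]))  # the probabilities behind that decision
```

The same shape holds whether the model is built from the ground up or consumed as a pre-built service: it is executed against inputs, not grown within an ecosystem.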
Strong AI is the “hard problem” of AI: a machine which learns like, or better than, we do, establishing similarities to human qualities with the ultimate goal of meeting them, including phenomenal consciousness. A Strong AI system is not necessarily a “human equivalent” system, but one that takes on human qualities (especially independent thought) or exceeds them, especially in the extent to which it learns vs. just executes a payload. Where a Weak AI system is programmed with a model and given a set of inputs to execute, a Strong AI system is made to learn and self-improve based on the ecosystem in which it is activated (a sketch of that learn-from-feedback loop follows the list below). A few related considerations are:
- Strong AI is understood from an industry standpoint to mean the equivalent of human cognitive capabilities, especially those of intelligence, decision making, and consciousness.
- Strong AI could accomplish human cognitive capability without accomplishing every capability within the Human Difference. This is a consideration we’ll look at later in the series.
- Strong AI represents an intelligence which could change the way we engage with the system, as at some point we’ll need to consider whether it represents a “person”. We’ll also consider this later in the series.
- Strong AI also represents risks to humanity, as it essentially leaves human control, or has the potential to, since a platform of this capability could have its own intention, direction, and mind.
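The distinction between executing a fixed model and learning within an ecosystem can be illustrated with a toy sketch. The code below is nowhere near Strong AI; it is only an assumed, minimal example of the learn-and-self-improve loop, where an agent updates its own behavior from feedback in a made-up environment rather than running a model someone handed it.

```python
# A toy learn-and-self-improve loop (nowhere near Strong AI): instead of
# executing a fixed model, the agent updates its own estimates from feedback
# in its environment. The "environment" is an assumed three-armed bandit
# with hidden reward rates.
import random

TRUE_REWARD_RATES = [0.2, 0.5, 0.8]  # hidden from the agent
estimates = [0.0, 0.0, 0.0]          # the agent's evolving beliefs
counts = [0, 0, 0]

def pull(arm: int) -> float:
    """Environment feedback: reward of 1 with the arm's hidden probability."""
    return 1.0 if random.random() < TRUE_REWARD_RATES[arm] else 0.0

for _ in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < 0.1:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])

    reward = pull(arm)

    # Self-improvement: fold the new experience into the agent's estimate.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("learned estimates:", [round(e, 2) for e in estimates])
```

The point is only the shape of the loop: observe, act, receive feedback, update. What separates this toy from Strong AI is the scope and independence of what is learned.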
As we look at the different AI considerations ahead, these major types will start to govern how we leverage AI. In both cases, we will need controls, governance, and ethics applied that allow us to engage appropriately. The intention of providing these key definitions is to set the table for discussing the opportunities, threats, and impacts of this ecosystem.
Nathan Lasnoski