
Five Myths of Artificial Intelligence

By: Jason H. Moore, PhD, director of the Institute for Biomedical Informatics at Penn Medicine

The goal of artificial intelligence (AI) is to develop machines that can think, reason, and solve problems in a manner similar to the human brain. There are two broad approaches to AI research. The top-down approach focuses on developing an artificial brain whose structure and function resemble those of a human brain. This echoes the positronic brain introduced by science fiction writer Isaac Asimov and popularized in television and film by robots that exhibit consciousness, such as Data from “Star Trek.”

In contrast, the bottom-up approach is more common and involves providing basic building blocks that interact to produce the computational complexity needed to approximate intelligence. Neural networks and cellular automata are examples. Deep neural networks with many layers of connections have recently competed with the best human players of games such as Go and chess. The maturation of AI algorithms and software, along with the availability of inexpensive supercomputing technology, has for the first time given us a glimpse of the true power of AI for solving difficult problems, such as determining optimal treatments for patients in a clinical setting. However, along with the promise come natural fears about the potential harm AI could inflict on our society. Many of these concerns are premature and unfounded at this point in time. Here are five common myths worth debunking.
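To make the bottom-up idea concrete, here is a minimal sketch (an illustration, not anything from the article; the rule number and grid size are arbitrary choices) of an elementary cellular automaton in Python. Each cell follows a trivial local rule, yet the row as a whole develops intricate patterns over time, which is the sense in which simple building blocks can add up to computational complexity.

```python
# Minimal sketch of a bottom-up system: an elementary cellular automaton.
# Each cell looks only at itself and its two neighbors, yet a simple local
# rule (here, Rule 110) produces complex global behavior over time.

RULE = 110  # the rule number encodes the next state for each 3-cell pattern

def step(cells):
    """Apply the rule once to a row of 0/1 cells (edges wrap around)."""
    n = len(cells)
    new_row = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        new_row.append((RULE >> pattern) & 1)          # look up the next state
    return new_row

if __name__ == "__main__":
    row = [0] * 40
    row[-1] = 1  # start with a single live cell
    for _ in range(20):
        print("".join("#" if cell else "." for cell in row))
        row = step(row)
```

Neural networks rest on the same premise: many simple units, each doing very little on its own, combine into something far more capable.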

AI is as Smart as Humans

This myth has been propagated by headlines claiming that AI technology can beat humans at games like Go. While AI has become human-competitive at some specific tasks, it does not yet generalize to most of the difficult problems that humans encounter on a daily basis. Many of the recent successes are actually examples of machine learning algorithms, a sub-discipline of AI, that take a set of inputs and learn through positive reinforcement to produce a desired set of outputs. Humans don't just learn from positive reinforcement. In fact, there are several important features of our learning that would have to be replicated by an AI. The first is learning from failure. Most computer algorithms are designed to receive a reward when they do something well but aren't designed to learn from failures. Humans have the unique ability to try innovative solutions to a problem and then study and learn from the consequences of failure. Failed solutions become the focus of study and reflection, leading to new solutions to try. This is especially true in the business world, where failure is common and seen as an important part of the process. Designing a computer algorithm to learn from failure is a very difficult task, and there are very few examples of this kind of learning. Part of the challenge is teaching a computer how to know when it has failed and then giving it the capability to alter its problem-solving strategy in response to that failure. In addition, humans learn from the failures of other humans. True AI would need to study humans as they solve problems, as well as learn from the experiences of other AI systems. This challenge will need to be overcome before AI can approximate human intelligence.
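As a rough sketch of what "learning through positive reinforcement" looks like in practice (an illustrative toy example in Python, not the article's system; the payout numbers and step size are arbitrary), the snippet below has an agent choose among three options and nudge its value estimates toward the rewards it receives. Notice that it only reinforces what worked; nothing in the loop studies why a failed choice failed, which is exactly the gap described above.

```python
# Illustrative sketch of reward-driven learning on a 3-armed bandit.
# The agent nudges its value estimates toward observed rewards; it
# reinforces success but never reflects on the reasons for failure.
import random

true_payouts = [0.2, 0.5, 0.8]   # hidden probability of reward for each arm
estimates = [0.0, 0.0, 0.0]      # the agent's learned value of each arm
epsilon, step_size = 0.1, 0.1

for trial in range(2000):
    # mostly pick the arm that currently looks best, occasionally explore
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    # move the estimate a small step toward the observed reward
    estimates[arm] += step_size * (reward - estimates[arm])

print([round(e, 2) for e in estimates])  # should roughly track true_payouts
```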

AI Can Love

Even if a computer could exhibit consciousness, it is a natural extension to assume that it could express human emotions such as love. This was explored in the 2013 movie “Her,” which portrays an AI that seemingly falls in love with her user. As defined on Wikipedia, love is a complex human emotion that has both biological and psychological bases. The biological basis is tied to neurotransmitters and hormones that are influenced by different physiological systems. The psychological basis involves a conscious commitment to another person that forms a long-term bond. In other words, love is complex and, further, varies from person to person. Designing an intelligent computer system that can love would require fully understanding human love and giving the AI the physical systems needed to exhibit this emotion and to make the complex decisions that come with it. Love thus requires more than just consciousness. It is not realistic to assume that this will be a reality in the near future.

AI is Harmful

Before addressing the potential negative impacts of AI, it is important to differentiate between the systems available today, which can compete with humans at games, and systems that would exhibit true consciousness. The types of AI that exist today are not conscious and are thus more limited in their behavior. It is unlikely that these systems will be placed in positions that could jeopardize the health of humans. For example, it is unlikely that the military would put a current AI system in control of a missile system and allow it to make battlefield decisions. The same is true for health care. In fact, one of the earliest AI systems designed to recommend treatments for intensive care patients presenting with infection was never used in practice because of the potential for lawsuits over machine-driven malpractice. Modern AI is seen much more as an assistant that makes recommendations to the human who ultimately makes the decision. Even as AI matures and approaches the capabilities imagined for the positronic brain, it will still be expected to follow the same rules and regulations that humans do. Concerns about harm to humans are mostly speculative and anticipate a future in which AI is conscious and can execute an evil agenda. There are no examples of conscious computers and no sign that this will be part of our near future. As such, talk of harm belongs more to the realm of science fiction at this point and is more about generating headlines than reality.

AI is Easy to Create

It is reasonable to assume that it will be many decades before we see a mature AI exhibiting human-level intelligence and consciousness. Part of this assumption stems from the fact that we do not yet fully understand how biological brains work. Not only do we lack a complete understanding of the molecular, cellular, physiological, and anatomical underpinnings of the brain, but we also do not yet understand emergent properties such as consciousness. This understanding seems essential for designing computer systems that exhibit human-level intelligence. Given that neuroscience and computer science must advance hand in hand, it is reasonable to assume that AI advances will be hard-fought and will take decades to mature. However, we are beginning to see rapid progress toward this goal.

AIs Will be Everywhere

Many of us are intrigued by AI and love to imagine a world where AI is commonplace, as depicted in science fiction and in predictions made by futurists. However, this is an idealized vision that may not be consistent with cultural and societal norms. Humans depend on each other as part of a functioning society, and a key element of societal success is trust. A foundation of trust is the expectation of punishment for failing to follow rules and laws. An important question is whether humans would trust AI as much as, or more than, they trust other humans. Key to this question is the punishment structure for AI and the belief that if an AI doesn't meet its obligations, there would be consequences that alter its behavior in future interactions. Would the usual consequences, such as reduced compensation or loss of employment, be motivating factors for an AI? Without the trust-punishment system humans are familiar with, it is hard to imagine that people would trust machines more than other humans, or that they would prefer to interact with AI over other humans. There are no answers to these questions, of course, because AI systems functioning at the level of human consciousness do not yet exist.

There is no question that AI has entered a golden era that will see rapid advances and an increasing number of applications where computers outperform humans on certain tasks such as playing games. The fundamental algorithms are available and enabled by powerful high-performance computing. We are at the beginning of this new era that will likely need 50 years or more to truly approximate human intelligence and consciousness. Even then, we may not see AI exhibit complex human characteristics such as love and hate.

In the meantime, we should be thinking about how AI can be used to assist humans with, for example, the analysis of big data to address timely challenges, such as improving health care or predicting harmful weather patterns. Imagine a world in which we could diagnose diseases at a much earlier and faster rate or get people out of harm’s way of a tsunami.

The reality is closer than we may even know.

