
AI’s Ethical Impact

By James Jersin

The mathematician Vernor Vinge called this "sudden explosion of intelligence" the "technological singularity," and thought that it was "unlikely to be good news, from a human point of view." In other words, the singularity is the point at which AI can think faster than people. And Patrick Tucker asks, "if computer power advances beyond our control, how will that change us?"

I’ll now show how superintelligent machines affect our morality, and then explain how close we are to developing such AI.

AI and Superintelligence

Making a more intelligent being might be a dumb goal. Bostrom argues that humanity owes its dominant position on Earth to the "unique ingenuity of our brains," which has been "the chief rate-limiting factor" in "the development of civilization." By arranging the environment around us, we can survive and prosper despite our comparatively weak bodies.

The difficulty with human intelligence is that each person must relearn previously discovered information: education, interpersonal skills, emotional intelligence, and creativity all have to be acquired from scratch. The disadvantage is time; each person needs roughly two decades of learning before entering the workforce, whereas AI can absorb existing information far faster. So if people produce a superintelligent AI, we’d need to be careful. If we aren’t, we may no longer get to define the world as we have been. And if humanity drops down the food chain because of AI, then not only will we no longer rule our world, but our ethical and cultural systems could be destroyed and replaced.

Many people think that the solution to the technological singularity and AI’s exponential growth in intelligence is to limit the hardware available to AI, or similarly, to limit the intellectual capabilities AI is allowed to be programmed with. I think both of these solutions ignore the historical push to create AI; put differently, setting limits on AI hardware doesn’t seem likely to happen. A better solution, I think, is to build ethical parameters into the code.

Next, I’ll investigate how people can live with AI by answering two questions: first, how can people live with superintelligent AI and keep the same quality of life? And second, how can an AI designer create ethical AI?

Can We Apply Human Ethics to AI?

To keep our same quality of life, we could build human values into AI. Chalmers thinks that "if we value scientific progress, for example, it makes sense for us to create AI [AI that’s less intelligent than people] and AI+ [AI that’s as intelligent as people] systems that also value scientific progress." So, intelligent machines could help us with scientific progress by comparing past knowledge to new information, and then analyzing the data and reporting it to people.

But aligning AI values with human values has risks. Each value could also have harmful effects, such as immoral scientific studies carried out in the name of scientific progress and knowledge (e.g., the Nazi twin studies that brutalized people in the name of scientific progress).

Clearly, a central problem in AI programming is going to be: how can we make AI moral? Or, in other words, how do we program AI not to disrupt human ethics and goals?

Still, thinking hard about how AI should act towards people could clarify how to program it. Chalmers questions how durable any given AI morality will be: "of course even if we create an AI or AI+ (whether human-based or not) with values that we approve of, that is no guarantee that those values will be preserved all the way to AI++ [superintelligent AI]." Put differently, it may not matter whether we think AI should or shouldn’t value a particular thing or action; AI could simply reprogram its values to whatever it wants.

However, supposing that AI accepts human morality, there are still issues of what AI will do to fulfill its programmed morality. Singer argues, "when we can’t predict how AI will act, how can we limit its actions to be moral." In other words, because AI’s capabilities, its processing speed and whatever other abilities its hardware allows, are unknown, we can’t predict which domains or abilities should be restricted; people simply lack the imagination to foresee all of AI’s possible actions.

All of the above questions are hard to answer, and Singer even states that asking what a superintelligent AI will do is impossible to answer. So, what should we think about AI? Chalmers asks whether we need to think that far ahead: "my own view is that the history of artificial intelligence suggests that the biggest bottleneck on the path to AI is software, not hardware: we have to find the right algorithms, and no-one has come close to finding them yet." Developing the program that creates AI is the greatest challenge to creating superintelligent machines.

So, how far along are we in developing a program for AI?

How Close Are We to Creating an AI Program?

What’s the hot new programming technique for AI? It’s called backpropagation, or backprop, popularized by Geoffrey Hinton. Hinton co-authored a paper in 1986 describing the method as a way to train smarter AI. But it wasn’t until 2012, 26 years later, that Hinton and two of his students showed that his programming technique beat the rest. So what’s backprop?

Backprop is typically used in image-recognition software to correct errors in a multilayered program, or neural network (where one layer’s output becomes the next layer’s input). Somers describes backprop as a way to "know how much each individual connection contributed to the overall error, and in a final step, you change each of the weights in the direction that best reduces the error overall. The technique is called ‘backpropagation’ because you are ‘propagating’ errors back (or down) through the network, starting from the output." In other words, the bottom-layer inputs lead to a decision at the output (is there a hot dog in the picture or not?), and if the output is wrong (no hot dog), the weights are adjusted until the output is correct (hot dog!).
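To make those mechanics concrete, here is a minimal sketch of backpropagation in Python on a tiny two-layer network. The "hotdog" labels, layer sizes, and learning rate are illustrative assumptions of mine, not details from Somers’s article.

# Minimal sketch of backpropagation on a tiny two-layer network (NumPy only).
# The "hotdog" data, layer sizes, and learning rate are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 "images" of 3 features each, labeled 1 (hotdog) or 0 (no hotdog).
X = rng.random((4, 3))
y = np.array([[1.0], [0.0], [1.0], [0.0]])

# Randomly initialized weights for a 3 -> 4 -> 1 network.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(1000):
    # Forward pass: each layer's output feeds the next layer's input.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Error at the output layer.
    output_error = output - y

    # Backward pass: propagate the error back through each layer,
    # asking how much each connection contributed to it.
    output_delta = output_error * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight in the direction that reduces the overall error.
    W2 -= learning_rate * hidden.T @ output_delta
    W1 -= learning_rate * X.T @ hidden_delta

print(np.round(output, 2))  # after training, the outputs approach the labels

After enough rounds of forward prediction and backward correction, the network’s outputs approach the training labels, which is all the hot-dog example above is meant to illustrate.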

And how good is backprop? Somers explains that backprop, "in the words of Jon Cohen, a computational psychologist at Princeton, is ‘what all of deep learning is based on—literally everything.’" Which means that all of our current progress is based on an old technique. And yet, Somers describes seeing AI image recognition in action: first the software correctly identifies objects in photos, and then "when that same program sees a picture of a girl brushing her teeth and says, ‘The boy is holding a baseball bat,’ you realize how thin that understanding really is, if ever it was there at all." AI’s current what’s-that-object programming isn’t great, and as far as understanding is concerned, it’s not clear AI has any at all. But so what if AI doesn’t recognize objects in images every time; what does that mean for the future of AI? I’ll now investigate how close programmers are to creating AI++.

What’s AI’s Future?

A future with superintelligent AI seems clear and near to some, but today’s progress in AI programming is uncertain. Somers says that "it’s worth asking whether we’ve wrung nearly all we can out of backprop. If so, that might mean a plateau for progress in artificial intelligence." So until we’ve developed a new system of machine learning, one newer than the decades-old idea of backpropagation, there isn’t going to be much progress towards AI. But what is it going to take for AI to progress?

It’s not clear how AI will progress, but some think that understanding the human brain will lead to better AI programming. Somers explains that Hinton "is convinced that overcoming AI’s limitations involves building ‘a bridge between computer science and biology.’ Backprop was, in this view, a triumph of biologically inspired computation." Hinton wants to focus on learning the chemical processes of our brains, and then design computer programs that copy those neural processes, because programmers aren’t sure how to program intelligence (especially human-level intelligence) into machines on their own.

It seems that AI relies on human neurology more than many programmers thought. Which means that AI will likely not pose a moral problem until neuroscience clearly understands how the brain works: until it can nearly predict how people will behave, and explain what each specific part of the brain does and why each part is necessary for certain thoughts, not simply observe which parts of the brain flush with blood.

 

References:

Bostrom, Nick. (2009). Superintelligence. https://nickbostrom.com/views/superintelligence.pdf.

Chalmers, David. (2010). The Singularity: A Philosophical Analysis. http://consc.net/papers/singularity.pdf.

Price, Huw, and Tallinn, Jaan. (2012). The Conversation: Artificial Intelligence – Can We Keep It in the Box? https://theconversation.com/artificial-intelligence-can-we-keep-it-in-the-box-8541.

Singer, Peter. (2016). Project Syndicate: Can Artificial Intelligence Be Ethical? https://www.project-syndicate.org/commentary/can-artificial-intelligence-be-ethical-by-peter-singer-2016-04?barrier=accesspaylog.

Somers, James. (2017). MIT Technology Review: Is AI Riding a One-Trick Pony? https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/.
