Can a non-sentient robot pass the Turing test?

By David K. Johnson, Ph.D., King’s College

The Turing test was introduced by one of the first computer scientists, whose genius even helped crack the Nazi codes that enabled the Allies to win World War II. Back then, the big question was whether computers would ever mentally grasp the meaning of words.

In the Turing test, a robot communicates with you by text. (Image: sdecoret / Shutterstock)

The Turing test

To answer this question, Alan Turing, for whom the Turing test is named, argued that if machines ever acquired the ability to use language the way humans do, the answer would be yes. To establish this, he imagined a person having two long conversations, one with a human and the other with a computer, without the person knowing which one was which.

Both conversations involved only textual interaction. Turing suggested that if the person couldn't tell which one was which – this is called "passing the Turing test" – then you should conclude that the machine really understands the language it uses.
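For readers who like to see the structure laid out, here is a minimal sketch in Python of the blind, text-only comparison Turing described. The human_reply, machine_reply, and judge placeholders are hypothetical stand-ins invented for illustration; nothing here is part of Turing's own formulation beyond the anonymous two-channel setup.

    import random

    # Hypothetical stand-ins for the two conversation partners; in a real test
    # these would be a human typing and the candidate machine.
    def human_reply(prompt):
        return "a human-typed reply to: " + prompt

    def machine_reply(prompt):
        return "a machine-generated reply to: " + prompt

    def run_turing_test(prompts, judge):
        # Blind, text-only comparison: the judge sees channels "A" and "B"
        # without knowing which one hides the machine, then names the machine.
        channels = {"A": human_reply, "B": machine_reply}
        if random.random() < 0.5:
            channels = {"A": machine_reply, "B": human_reply}

        transcript = {label: [(p, reply(p)) for p in prompts]
                      for label, reply in channels.items()}

        guess = judge(transcript)
        truth = "A" if channels["A"] is machine_reply else "B"
        return guess != truth  # True means the machine was not identified

    # A judge that guesses randomly can do no better than chance.
    result = run_turing_test(
        ["Tell me a joke.", "What does rain smell like?"],
        judge=lambda transcript: random.choice(["A", "B"]),
    )
    print("Machine escaped detection this round:", result)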

Turing only cared about whether a machine could understand language, but we could expand the example to include all human behavior to draw a conclusion about whether a machine is also sentient. We could call it "the Turing mega-test".

If, in personal interaction with a human and a machine, you cannot tell which is which, then you should conclude that the machine is sentient. The basis of the test is found in the solution to another philosophical problem: the problem of other minds.

This is a transcript from the video series Sci-Phi: Science Fiction as Philosophy. Watch it now, on The Great Courses Plus.

The problem of other minds

The problem of other minds points out that the only mind one is directly aware of is one's own. So, for all I know, everyone else in the world is mindless and merely acts as if they have minds. Therefore, the argument suggests, I cannot know that anyone else is conscious.

The solution, however, is simple: I can know that others are conscious because knowledge does not require certainty. The best explanation for why other people behave the way they do is that they have minds. I know my mind determines my behavior. Since others behave much like me, I should conclude that they also have minds that determine their behavior. To doubt that others have minds, while possible, is unreasonable. And I can know something beyond a reasonable doubt.

If the fact that other humans behave like me is a reason to conclude that they have minds, then so too is the fact that a machine behaves like me. Therefore, if we ever invent androids that behave like us, we should conclude that they have minds – that they are, as we say, sentient.

Learn more about capitalism in Metropolis, Elysium, and Panem.

Wires and circuits cannot think

A number of objections have been raised suggesting that androids still should not be considered sentient. Some say, "This is all just the result of anthropomorphic bias, the human tendency to ascribe agency to things that display human-like behavior." If we were merely relying on our emotional reaction to androids, this could be true.

But we just considered an argument: based on the fact that we could not distinguish the behavior of an android from that of a human, we used inference to the best explanation to conclude that the android, and any machine that behaves like it, is sentient. Our conclusion is therefore the result of rational inference, not of instinctive bias.

An electronic circuit board
Some people think that because robots are made of wires and circuits, they cannot be sentient. (Image: raigvi / Shutterstock)

Another objection might be, "They're made of the wrong kind of material. Wires and circuits cannot think." This objection simply begs the question. Whether wires and circuits could think is exactly what is at issue; you can't settle it by simply declaring that they can't.

Indeed, since we do not yet know what is necessary for consciousness, we do not know that being made of organic matter is necessary for consciousness.

Learn more about the Prime Directive and postcolonialism.

Can something that is programmed be sentient?

There are people who claim, "Androids would be programmed, so they couldn't be sentient." Well, first of all, they might not be programmed. We could simply create artificial infant-like brains and then raise them like babies.

But even if they are programmed, so what? We are too, by our genes and our environment. Being programmed might keep androids from having free will, but doubts about our own free will have never tempted us to think that we have no minds.

Lines of zeros and ones on a screen
Computers do not literally contain zeros and ones; that is just a metaphor we use for whether the circuits are on or off. (Image: Tavarius / Shutterstock)

The fourth objection we might encounter is, "All computers do is shuffle symbols – swap one symbol for another – and symbol shuffling could never produce linguistic understanding, much less consciousness."

Computers are not, in fact, symbol shufflers. We've invented symbol-shuffling languages to describe how we program them, but there aren't really any symbols floating around in there. And this whole "0 and 1" business is just a metaphor for circuits being on or off. We could actually do the same thing with the neurons in your brain – describe their firings with a series of 0s and 1s – but that wouldn't mean you aren't conscious.
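As a toy illustration (my own, not from the lecture), the same string of 0s and 1s can describe either a row of circuits or a handful of neurons; the values below are made up, and the notation says nothing about whether either system understands anything.

    # Toy example: the same 0/1 description fits circuits (on/off) and
    # neurons (firing/not firing). The state values here are invented.
    circuit_states = [True, False, True, True]   # hypothetical circuit on/off states
    neuron_firings = [True, False, True, True]   # hypothetical neuron activity

    def as_bits(states):
        # Describe any sequence of on/off states as a string of 1s and 0s.
        return "".join("1" if on else "0" for on in states)

    print(as_bits(circuit_states))  # prints "1011"
    print(as_bits(neuron_firings))  # prints "1011" - same description, different substrate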

At the most basic level, parts of your brain and of an android's brain would be doing the same thing: sending complex information to each other by firing electrical impulses at each other. If one of these processes produces a mind, why shouldn't the other?

Common questions about the Turing test

Q: How does the Turing test work?

The Turing test is basically like a blind taste test: a human has a conversation with two parties, one of which is a robot. If the human cannot tell which one is the robot, then the robot passes the test.

Q: What is the problem of other minds?

The problem of other minds states that we can never really know whether other people are conscious, because the only mind each person is directly aware of is their own. The Turing test itself is rooted in the solution to this problem.

Q: What is the answer to the objection that "androids would be programmed, so they couldn't be sentient"?

There is a chance that we would build an artificial intelligence and raise it like a baby, so that it learns on its own; maybe one day it could pass the Turing test with what it has learned. On the other hand, if being programmed is an objection to being sentient, it applies to us as well. We are programmed by nature and nurture, which may make us doubt our free will, but that never makes us doubt that we are sentient.

Keep reading
Mythical faith and fictionalism: religious belief or religious allegiance?
Carl Sagan's Contact: balancing religion and science
The scientific method: personal experience versus scientific reasoning

