
Eugene Goostman passes Turing test

OneTrikPony
Eugene Goostman passes Turing test
http://www.gizmag.com/eugene-goostman-turing-test/32453/ This is interesting, but it makes me think that the Turing test should be a little more robust. The blurb at the end about the program becoming a tool for cybercrime got me thinking: might it be possible to use the Eugene Goostman technology as a personality skeleton, then track a person's social media to build an impostor to phish their friends, family, and employer?

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

ORCACommander
I was personally surprised when I heard the threshold is only 30%. I think we are giving people too much credit.
Aldrich
Woo! Science!
Hey! I know this one! Computational Linguistics is my thing, so I'll go ahead and speak for the whole field when I say that you're right: the Turing Test is very outdated and largely irrelevant.

First off, I have NO idea where they pulled the "30% is passing" standard from. In most of computer science a repeatable 90-95% is the cutoff for "passing" a human interaction test (depending on the journal). And did you read some of the answers their system produced? Total gibberish - I'm shocked that they even got 30%. It's also not a record or anything; Cleverbot tricked about 70% of subjects (when running a single terminal on a dedicated computing cluster - the version you can talk to online is like a delta or gamma fork).

Second, the Turing test is outdated. Imitating human speech patterns in a random question-answer task is (on its own) both pretty useless and a bad measure of intelligence. A much better and more interesting test would be to have the program watch a video or read an article, and then summarize it or answer questions about it.
nezumi.hebereke
90-95 seems a little high. I don't know that all humans would pass that threshold. Also, tweaking the 'character' so he is belligerent, a non-native speaker, and uneducated is really gaming the speaker. I could have a program that randomly does key presses and tells you it's a baby. Does that mean it's a step towards intelligent machines?
MAD Crab
In point of fact, there have been a couple of experiments with IRC bots that prove all of this. When nobody expects you to be coherent in the first place, you're always going to pass...
LatwPIAT
This is less "Computer convinces judges it is human" and more "dishonest researchers convince judges that computer cannot speak English and has no attention span". 30% over an incredibly short period of conversation is, frankly, not conclusive of anything.

Turing himself didn't even really give a definite percentage; rather, he simply said that statistics could be compiled about how often the computer fooled judges. There was no time limit either, and ideally you should compare it to a control group of a real person, with the judge in charge of picking out which of the anonymous computer and anonymous real person is actually the computer - where a 50% success rate would be conclusive.

Also, this was a project done by Kevin Warwick, a self-aggrandizing fraud known for making overblown claims to media without really doing anything worthwhile. He once got media attention by sticking a computer virus on a memory chip and putting it under his skin. Completely worthless and pointless, but "human infected by computer virus" does make headlines.

So no. The Turing Test has not been passed. When you can convince an AI scientist that your computer is more human than a real human 50% of the time [i]then[/i] we can talk about passing the Turing Test.
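The chance-baseline point above can be made concrete: under a proper two-way forced choice, a program truly indistinguishable from a human would be picked as the computer about 50% of the time, and you can ask how likely an observed "fooled" rate is under that baseline. A rough stdlib-only sketch - the judge counts below are invented for illustration, not taken from the Goostman event:

```python
# Sketch: is "fooled 10 of 33 judges" (~30%) evidence of human-equivalence,
# or just far from the ~50% a truly indistinguishable program would score?
# The counts here are made up for illustration.
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

n_judges, n_fooled = 33, 10  # a ~30% "pass" rate

# Probability that an indistinguishable program (p = 0.5) would fool
# this few judges or fewer purely by chance:
p_low = binom_cdf(n_fooled, n_judges, 0.5)
```

A 30% rate over a few dozen judgments sits well below what chance-level indistinguishability predicts (`p_low` comes out under 5%), which is the sense in which the headline number falls short of demonstrating human-equivalence rather than proving it.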
@-rep +2 C-rep +1
Alkahest
Cat AI
I agree with the skeptic comments. I'm fairly certain even I can write a program that can convince people a cat is walking on my keyboard. That doesn't mean I have created a perfect simulation of a cat.
President of PETE: People for the Ethical Treatment of Exhumans.
Erulastant
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
You, too, were made by humans. The methods used were just cruder, imprecise. I guess that explains a lot.
Lorsa
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
Well, if a true Turing test is supposed to have one computer program and one human for reference, and the judges are forced to pick one of the two as the computer program, then in a trial with two humans there will always be at least one human who fails the test, won't there?
Lorsa is a Forum moderator [color=red]Red text is for moderator stuff[/color]
DivineWrath
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
Yes, it does happen. Some people don't do well on the Turing test. The key point is that a human judge has to determine whether the thing on the other end is a human or a machine. Some people make terrible judges, while others do a terrible job of appearing to be human. Many factors can influence the results. Having a mental disability can be one. Sometimes a human in the wrong mood might shift the results.

Edit: The basic idea behind the Turing test is that, in order to fool a human, a machine would require certain cognitive processes and abilities to hold a conversation. For instance, an adult human would have been collecting information over a period of 18 to 21 years (minimum). A program written in a month would probably lack that information (it would need that information added in somehow). Likewise, human behaviors like emotion and humor would also have to be programmed in (or their absence will probably be noticed). In theory, a machine that could do everything a human can do would be able to pass the Turing test. Unfortunately, everything a machine needs to pass the Turing test might not be everything it needs to match a human in ability. A blind person might know to say that a flower is pretty, but lack the ability to recognize a flower by sight (having no means to see it), and wouldn't know what a flower looks like. So in that regard the Turing test is flawed: a machine that can hold a conversation might still be missing important things.

Edit 2: Another way to put it: everything you need to pass the Turing test is a subset of everything you need to match a human in ability, much like the set of characters A B C is a subset of the whole alphabet. Just as a few characters are not the whole alphabet, a machine passing the Turing test does not prove it is the equal of a human, only that it can do some of the things a human can do.
Undocking
One issue with the Turing Test that chat bots exploit is known as the Chinese Room. Imagine you are on one side of a wall, and a piece of paper with incomprehensible symbols is slipped under a door. In the room you have a book with a list of symbol combinations on the left-hand side and a list of responses on the right-hand side (quite a large book, mind). If you find the left-hand entry that matches the incoming symbols and copy out the corresponding right-hand response, ta-da, you have just beaten the Turing Test. Chat bots have been doing that for ages, and they are not too difficult to program.

The Turing Test is not "holding a conversation that convinces a human that it is human." Turing's question was: "Are there imaginable digital computers which would do well in the imitation game?" The Imitation Game is as follows: there are two participants, A and B, one of whom is a woman and the other a man. A and B cannot be seen by the judge C, and C may only communicate with A and B through written (or typed) notes. A and B may not communicate with each other. C must determine which participant is the man, and which is the woman. However, A is attempting to trick C, while B attempts to assist C. The Turing Test proposes that a sufficiently human-like artificial intelligence could perform the role of A or B successfully and convincingly. However, AI researchers have posited that the Turing Test is no longer useful to the study of computer intelligence.
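The lookup-table room described above is easy to sketch in code. This is an illustrative toy, not any real bot's rule book; the patterns and canned replies are invented, in the spirit of ELIZA-style keyword matching:

```python
# Minimal sketch of a "Chinese Room" lookup-table chatter.
# All patterns and responses are invented for illustration.
import re

# The "book": left-hand patterns mapped to right-hand responses.
RULE_BOOK = [
    (re.compile(r"\bhow old are you\b", re.I), "I'm 13, I live in Odessa."),
    (re.compile(r"\bwhere\b.*\blive\b", re.I), "In Odessa, in Ukraine."),
    (re.compile(r"\?$"), "Why do you ask? Let's talk about something else."),
]
FALLBACK = "That is interesting. Tell me more!"

def respond(message: str) -> str:
    """Scan the book for the first matching pattern; copy out its response."""
    for pattern, response in RULE_BOOK:
        if pattern.search(message):
            return response
    return FALLBACK
```

The point is that respond() never models meaning at all - it only matches symbols to symbols, which is exactly the kind of blind symbol shuffling the Chinese Room says a conversation test cannot detect.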
nezumi.hebereke
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
The Turing test is intended to test for the existence of intelligence. The fact that some humans fail seems totally appropriate.
Erulastant
Lorsa wrote:
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
Well, if a true Turing test is supposed to have one computer program and one human for reference, and the judges are forced to pick one of the two as the computer program, then in a trial with two humans there will always be at least one human who fails the test, won't there?
I think (I don't remember much detail, I saw this years ago) that there was a group of, let's say, 16 virtual avatars and the judges were supposed to determine which (if any) were human and which (if any) were AIs. They weren't told the composition of the groups IIRC. And a significant portion of the humans were identified as human 25% or less of the time. Something like that.
You, too, were made by humans. The methods used were just cruder, imprecise. I guess that explains a lot.
The Doctor
nezumi.hebereke wrote:
90-95 seems a little high. I don't know that all humans would pass that threshold.
Not all humans do.
nezumi.hebereke wrote:
Also, tweaking the 'character' so he is belligerent, a non-native speaker, and uneducated is really gaming the speaker. I could have a program that randomly does key presses and tells you it's a baby. Does that mean it's a step towards intelligent machines?
I have to agree. That read to me like the developers decided to give the interviewers some exploitable preconceived notions rather than improve their parser or derivation engine.
The Doctor
LatwPIAT wrote:
Also, this was a project done by Kevin Warwick, a self-aggrandizing fraud known for making overblown claims to media without really doing anything worthwhile.
For what it is worth, I still seek medical assistance to replicate some of his direct peripheral neural interface experiments from a decade ago. His results were intriguing and could bear replication.
LatwPIAT wrote:
So no. The Turing Test has not been passed. When you can convince an AI scientist that your computer is more human than a real human 50% of the time [i]then[/i] we can talk about passing the Turing Test.
The Turing Test has been plausibly passed several times in the past twenty-three years, with no small amount of controversy. ([url=https://en.wikipedia.org/wiki/Loebner_Prize#Winners]source[/url]) As much as it pains me to play the "then X is not real intelligence" card, I may have to reluctantly toss my headgear inside that particular circular area.
The Doctor
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
Yes. I was one of them. (September 1999)
nezumi.hebereke
The Doctor wrote:
Erulastant wrote:
Haven't there been experiments where [i]humans[/i] have failed to pass the Turing test?
Yes. I was one of them. (September 1999)
*gives Doctor a suspicious look*