
Can Muses pass the Turing Test?

MrWigggles
I know Muses are not sapient AIs. I wonder if their distinct personalities are a means to make them more affable, and to make it easier to overlook their mistakes when using natural language.
uwtartarus
I think only the lowliest of ALIs would fail the Turing Test. Isn't the test really basic? We have chat-bots today that are beating it. ALIs just don't learn or adapt; they collect data and react to it very straightforwardly, while learning is a more nebulous or organic way of collecting data and drawing conclusions from it. Or so I think; I am not a cognitive science type!
Exhuman, and Humanitarian.
MrWigggles
We do not have anything that passes the Turing Test. It will be astounding when that happens.
Darkening Kaos
Obvious Statement.....
I work with some humans that would fail the Turing Test.
Your definition of horror is meaningless to me....... I. Am. A Bay12'er.
ORCACommander
EP is a varied place with different laws, regulations, and of course price points. I would say that the most standard muse would be able to pass a Turing test, since they are designed to be your best friend and confidant from literal childhood, while some cheap models, and ones made for highly regulated jurisdictions, definitely would not. However, this is all superfluous, since a key criterion of the Turing test is that the person interacting with the AI cannot know it's an AI to begin with, while everyone in EP knows what a muse is.
uwtartarus
ELIZA passed it, as have others. That doesn't show their intelligence but merely the flaws of the test, which amounts to just tricking a human into thinking they are talking to another human. What I meant was that a muse could easily use Mesh communications to trick someone. AI researchers today seem largely uninterested in the Turing Test.
SquireNed
One thing to note is that there's a lot of development in natural language processing (NLP). For instance, computers can grade essays for standardized tests better than humans can, because it turns out that they can analyze the language features the test criteria require, and so without actually understanding what the writer meant they can critique and correct it. It wouldn't surprise me if NLP software can get to Turing-test-passing levels of detail, though it would require a corpus of experiences to prevent a "what is your favorite place?" question from sinking it.
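To illustrate the point about grading writing without understanding it, here's a minimal sketch in Python of surface-feature scoring. The features and weights here are entirely made up for illustration; real automated scorers are trained statistical models, not hand-tuned formulas like this.

```python
# Toy surface-feature essay scorer (hypothetical illustration only).
# It "grades" text from measurable features -- length, vocabulary
# diversity, word and sentence length -- without any idea what the
# essay actually says.

def score_essay(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    n_words = len(words)
    # Fraction of distinct words: a crude proxy for vocabulary richness.
    vocab_richness = len({w.lower().strip(".,;!?") for w in words}) / n_words
    avg_word_len = sum(len(w) for w in words) / n_words
    n_sentences = max(1, text.count(".") + text.count("!") + text.count("?"))
    avg_sentence_len = n_words / n_sentences
    # Made-up weights: longer essays, richer vocabulary, and longer
    # words/sentences all push the score up.
    score = (0.01 * n_words + 3.0 * vocab_richness
             + 0.3 * avg_word_len + 0.05 * avg_sentence_len)
    return round(min(score, 6.0), 2)  # clamp to a 0-6 rubric scale

short = "The cat sat. The cat sat."
longer = ("Automated scoring systems estimate essay quality from surface "
          "features such as length, vocabulary diversity, and syntax.")
print(score_essay(short), score_essay(longer))
```

The repetitive sentence scores lower than the varied one, which is the whole trick: the program rewards statistical correlates of good writing, not meaning.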
UnitOmega
Or the classic question "Are you a robot?"
H-Rep: An EP Homebrew Blog http://ephrep.blogspot.com/
SquireNed
UnitOmega wrote:
Or the classic question "Are you a robot?"
Pretty sure that one's not Turing test valid.
Spoiler: I'd say yes.
Lazarus
UnitOmega wrote:
Or the classic question "Are you a robot?"
There's no requirement in the Turing test that the speaker give truthful responses.
My artificially intelligent spaceship is psychic. Your argument is invalid.
Lazarus
re: The Turing Test
At first I was going to say 'no', that I didn't think Muses would be able to pass the Turing test, but then I thought about it a bit more. I thought about things like the claim that ELIZA passed the Turing test (which it turns out is both arguably correct and arguably incorrect) and decided to do a little research.

The problem is that the Turing test is actually not well specified. The biggest issue is that it requires an 'average interrogator', but there's no real explanation of what that means. Does it mean an average person who is interrogating, or an interrogator who is about average for trained interrogators? It is also fairly unspecific about how the human respondent is supposed to reply.

There have been a lot of 'successes' where the interrogator had difficulty establishing a strong enough margin of success at identifying humans over machines, but in a lot of those cases they were frequently misidentifying humans. (It should be noted that an awful lot of the 'passed' Turing tests are not 'true' Turing tests: in the original test there were only two subjects, one human and one machine, and the interrogator had to figure out which was which. Most 'successful' tests have used multiple people and multiple machines, with no given figure for how many were people and how many were machines. At least that's my understanding.) To me that's less indicative that the machines are that good and more indicative that a lot of the human agents are responding in rather odd ways.

The real thing, however, is that we learned as early as the '60s that the test was defective. It does nothing to determine if a machine is really 'thinking' (its original theorized goal). It just determines if it is capable of pulling off linguistic trickery with a large enough vocabulary and repertoire.
ELIZA, the earlier-mentioned program, may or may not have been able to pass the test, but even if it failed, it showed that something beginning to approach the ability to pass the test did not have to have any real form of cognition. ELIZA has no real capability for learning or problem solving.

So I have to change my answer to 'yes'. I suspect that most likely a Muse could pass the Turing test and pretend to be a human being in a conversation, but that doesn't mean a whole lot. It doesn't mean they are capable of the problem-solving activities people can do.

As an example, if a Muse is requested to get information about a university student, it would contact the university. It is completely possible that it could make whoever is on the other end of the connection believe that it was a human being requesting the information. However, if that person said 'no' to the request, the Muse would most likely not have the capability of figuring out a workaround. It wouldn't decide to call back when that person might be at lunch, in the hope that a new respondent would be more amenable. It wouldn't try tricks like building up a rapport with the respondent in the hope of convincing them to bend the rules. It wouldn't decide to pass itself off as a university official to get the information. It would simply thank the respondent for their time and log off. Turing test passed, since the respondent thought there was a human on the other end the whole time, but functionally the Muse was no more capable of dealing with obstacles than a handwritten letter.
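For anyone curious what "no real cognition" looks like in practice, here's a minimal ELIZA-style responder sketched in Python. This is a toy reconstruction of the general keyword-and-reflection technique, not Weizenbaum's actual script; the patterns and canned replies are invented for the example. Note that it has no memory, no learning, and no model of the conversation at all.

```python
# Toy ELIZA-style responder: pure keyword matching plus pronoun
# reflection. It can sustain a superficially plausible exchange
# while understanding nothing.
import random
import re

# Swap first/second-person words so the echoed fragment reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, candidate responses); {0} is filled with the reflected capture.
RULES = [
    (r"i need (.+)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.+)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"(.*)", ["Please tell me more.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    cleaned = statement.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            reflected = (reflect(g) for g in match.groups())
            return random.choice(responses).format(*reflected)
    return "Please go on."

print(respond("I need a new morph"))  # one of the two "I need" replies
```

The catch-all `(.*)` rule is what makes the illusion robust: anything the rules don't cover gets a content-free prompt to keep talking, which is exactly the kind of linguistic trickery the post above describes.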
uwtartarus
Also Chinese Room problem.
UnitOmega
Yeah, the Turing Test gets touted a lot as some grand measure of AI capability, but really it's more a test of how well a machine can imitate human language patterns. Turing said this himself, because devising a way to check whether a machine thinks is a tough question to answer; it's hard to define in exact terms what and when a human thinks.

In mechanical terms, a Muse could probably easily pass the Turing Test as originally laid out. The Muse has a base INT of 20, double that of an average modern human, and INT is the root aptitude for linguistics. It also has a base SAV of 10, same as an average human. So using the hard numbers the system gives us, and assuming you're communicating in a language the Muse is actually programmed to use, it could pretty easily appear as competent as a human, or at least a transhuman family member; they're designed to emulate human-like personalities too. However, since they don't have Deception and can't default, I'd wager most Muses have programming barriers against directly lying and would instead have to stick to neutral or avoidant answers.

(As an aside, my comment about asking "are you a robot?" comes from a couple of anecdotal reports I've heard recently of people getting telemarketing calls from what they assumed were bots based on their responses, but which, when directly confronted, would not actually answer "yes" or "no" to the question "are you a computer?".)
Justin Alexander
MrWigggles wrote:
We do not have anything that passes the Turing Test. It will be astounding when that happens.
(a) The Turing Test has been passed many times. (b) It's been recognized for decades that the Turing Test is really pretty irrelevant. The Turing Test was conceived at a time when our ideas about what it would take to convince someone you were human through a text-based conversation were very different. We learned later that it is both a lot easier to fool people into thinking you're human and a lot harder to actually simulate human thought. (We've also concluded that human-level intelligence doesn't necessarily mean human behavior.)

It's as if someone in the 19th century believed that the only way you could go 70 mph was to fly, and so proposed a test for a flying vehicle that said "if it can go 70 mph, then it's flying". That's basically the Turing Test.

The value of the Turing Test today is in a far more general and philosophical sense: it's the recognition that your belief that the people around you are possessed of human intelligence is, in fact, almost entirely based on faith. You are only cognizant of the outputs of systems you have no real knowledge of.
The Doctor
Darkening Kaos wrote:
I work with some humans that would fail the Turing Test.
Some of us have.
SquireNed
On an unrelated note, apparently there are natural language processing applications that grade papers better than human proctors do, without even knowing what's going on or what people are talking about. Or, basically, the robots grade writing better than we do, though that skill is not necessarily paired with the ability to produce viable output. There's also something floating around about a neural-net system for creating Magic: The Gathering cards, which has learned to create mostly sensible (if poorly balanced) cards over time just by looking at card text, mana costs, and such, to the point of maintaining color affinity in abilities.
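As a loose illustration of learning structure "just by looking at text", here's a tiny word-level Markov chain in Python. This is a deliberately crude stand-in, not the neural-net card generator mentioned above (which learns far richer structure), and the mini-corpus of card-like text is invented for the example.

```python
# Tiny word-level Markov text generator: "learns" only by counting
# which word follows which in its training text, yet its output starts
# to imitate the corpus.
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed directly after it."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int = 8) -> str:
    """Walk the chain from a start word, picking random observed successors."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no word was ever seen after this one
        out.append(random.choice(followers))
    return " ".join(out)

# Invented card-like mini-corpus for demonstration.
corpus = ("Flying creatures you control get +1/+1. "
          "Creatures you control gain haste. "
          "Target creature you control gains flying.")
chain = train(corpus)
print(generate(chain, "Creatures"))
```

Even at this toy scale, the output tends to read like rules text ("Creatures you control gain ..."), because the chain has absorbed the corpus's local word order; a neural net does the same kind of imitation over much longer-range patterns.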
eaton
Remember that the Turing test is a very, very simple one. Restrictions on artificial life in the EP universe generally revolve around something way past the Turing threshold: the ability to recursively self-improve and "ramp up" to infinite intelligence, given sufficient computational and storage resources. Lots of "toaster"-level AIs exist in the EP universe: simple AIs designed to converse naturally about very specific subjects, like operating the device they were designed to interface with. Going off-script with those AIs (say, asking your fabber about politics) is going to be an obvious tip-off. A reasonably complex shipboard AI or personal Muse, though, is definitely going to be designed to go beyond that.

Think about it this way: every standard Muse has Academics: Psychology (60). That's not just the text of a bunch of psych books; it's an actual understanding of human/transhuman psychology. If that doesn't come with Turing success, I'm guessing that quite a few of today's humans would fail.