
Chinese room, Turing test and the existence of A.I.-s

Anarhista
Chinese room, Turing test and the existence of A.I.-s
Since my search didn't find any related topics, I'll ASSume (yeah, yeah, I know... ;) this is the first. First, if you want to be thorough, read this: http://en.wikipedia.org/wiki/Chinese_room

In short, the Chinese room argument says that if you (a non-Chinese-speaking entity :) use an (incredibly complex) series of rules (a.k.a. a program), you can 'talk' with someone (intelligent and self-aware) in Chinese without understanding ANYTHING about what the conversation was about. Time is not relevant in this thought experiment.

If you translate this into computer hardware and complex expert systems, the question is: can digital computers (as we know them NOW) be self-aware and pass the Turing test for true A.I.? (A test I don't find very precise, but I'll use what I have.) For more info read this: http://en.wikipedia.org/wiki/Turing_test

I'll happily accept any thoughts on the subject, and I'll share my 'loophole breaker' for the former problem: a quantum computer, which (supposedly) could be hardware for a true A.I. The problem is, I don't really understand HOW this new computation platform would be self-aware.
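To make the 'series of rules' concrete, here is a toy sketch in Python (the rulebook is invented and absurdly small - a real room would need astronomically many rules): the program maps input symbols to output symbols by pure lookup, and nothing in it represents what the symbols mean.

[code]
# Toy "Chinese room": the operator (or CPU) follows purely formal
# rules mapping input symbol strings to output symbol strings.
# Nothing here represents what the symbols MEAN, only how they
# may be rearranged.

RULEBOOK = {
    "你好吗": "我很好，谢谢",          # rule: this shape -> that shape
    "你是谁": "我是一个说中文的人",    # rule: this shape -> that shape
}

def chinese_room(symbols: str) -> str:
    """Return whatever the rulebook dictates, or a stock deflection.
    No representation of meaning is consulted anywhere."""
    return RULEBOOK.get(symbols, "请再说一遍")  # a catch-all reply shape

print(chinese_room("你好吗"))  # a fluent-looking reply, zero understanding
[/code]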
So Long, and Thanks for All the Fish.
Arenamontanus
Re: Chinese room, Turing test and the existence of A.I.-s
Anarhista wrote:
I'll happily accept any thoughts on the subject, and I'll share my 'loophole breaker' for the former problem: a quantum computer, which (supposedly) could be hardware for a true A.I. The problem is, I don't really understand HOW this new computation platform would be self-aware.
A lot of people don't see the Chinese room argument as very convincing. Searle is assuming that if there is no understanding in the parts, then the whole system cannot understand anything. But your neurons individually do not know anything, let alone English, yet you as a system of neurons can produce and understand English. I don't see how a quantum computer would help the argument in any direction: Searle's criticism is not about the physical system in the Chinese room, just the difference between syntax (correct manipulation of symbols) and semantics (having meaning). And it has nothing to do with self-awareness.
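A toy illustration of parts versus system, with hand-picked weights: no single 'neuron' below knows anything about XOR, yet the network as a whole computes it.

[code]
# No single unit or weight here "knows" XOR, yet the network as a
# whole computes it. Weights are hand-picked for illustration.

def step(x: float) -> int:
    """A bare-bones threshold 'neuron'."""
    return 1 if x > 0 else 0

def xor_net(a: int, b: int) -> int:
    h1 = step(a + b - 0.5)           # hidden unit: fires like OR
    h2 = step(a + b - 1.5)           # hidden unit: fires like AND
    return step(h1 - 2 * h2 - 0.5)   # output: OR but not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
[/code]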
Extropian
Anarhista
Re: Chinese room, Turing test and the existence of A.I.-s
Arenamontanus wrote:
A lot of people don't see the Chinese room argument as very convincing. Searle is assuming that if there is no understanding in the parts, then the whole system cannot understand anything. But your neurons individually do not know anything, let alone English, yet you as a system of neurons can produce and understand English.
You've got a point about synergy being greater than the sum of the elements, but the problem is that you're comparing neurons with current computer architecture, which are NOT the same. Since our understanding of (human) consciousness is currently not perfect, we could both be right and not know it...
Arenamontanus wrote:
I don't see how a quantum computer would help the argument in any direction: Searle's criticism is not about the physical system in the Chinese room, just the difference between syntax (correct manipulation of symbols) and semantics (having meaning). And it has nothing to do with self-awareness.
On the contrary, it says a lot about "mind", "understanding" or "consciousness". Let me reiterate the experiment: if I, using complex algorithms, manipulate Chinese symbols so that a Chinese speaker thinks they're having an intelligent conversation, that doesn't mean I understand Chinese. Also, since I don't understand the language, how can I be self-aware when ALL I do is follow the algorithm? (OK, OK, I'm obviously self-aware, but imagine software doing my job.) Actually, I would like to be wrong, because it would mean A.I. would come sooner rather than... well, much later :D P.S. Any indication that I'm working with seed AI to allow them to grow uninterrupted is completely and utterly not true!!!
So Long, and Thanks for All the Fish.
Smokeskin
Re: Chinese room, Turing test and the existence of A.I.-s
Searle's Chinese room argument is not about an AI's intelligence. It is about how an AI able to act exactly like a human, with intelligence and communication skills sufficient to carry a conversation, is still not a conscious mind. It merely simulates consciousness. If you're the type who doesn't care about what an AI experiences, but only about what the AI can do, then the Chinese room has no relevance to you, because what the argument supposedly shows is that no matter how smart the AI is, it will never actually experience consciousness. Apparently consciousness requires some special, non-computable process that our brains are capable of but that a machine working merely with formal logic will never be able to perform.

Searle's entire argument rests on this special property of our brains (which is why it doesn't crumble under Arena's argument about neurons not having understanding), though this is just something he came up with. I suspect that's why he dressed the argument up with all the trappings of foreign languages and boxes and papers passed around. He could just as well have said "imagine a Turing machine able to carry a convincing conversation - it will never understand what it says or experience consciousness, because it is just processing formal logic instead of working like our brains do". Wow, really insightful, right?

I think the Chinese room is quite silly, in that his conviction that it somehow shows that consciousness can't arise from formal logic processes is totally unfounded. The more general idea, that machines that act and respond exactly like we do might not be conscious but merely simulating our behavior, is interesting though. Is that possible, or is it meaningless to talk about a difference between consciousness and its simulation? How would we tell the difference if there was one? Would a machine with only simulated consciousness have any rights?
nerdnumber1
Re: Chinese room, Turing test and the existence of A.I.-s
Smokeskin wrote:
Searle's Chinese room argument is not about an AI's intelligence. It is about how an AI able to act exactly like a human, with intelligence and communication skills sufficient to carry a conversation, is still not a conscious mind. It merely simulates consciousness. If you're the type who doesn't care about what an AI experiences, but only about what the AI can do, then the Chinese room has no relevance to you, because what the argument supposedly shows is that no matter how smart the AI is, it will never actually experience consciousness. Apparently consciousness requires some special, non-computable process that our brains are capable of but that a machine working merely with formal logic will never be able to perform.

Searle's entire argument rests on this special property of our brains (which is why it doesn't crumble under Arena's argument about neurons not having understanding), though this is just something he came up with. I suspect that's why he dressed the argument up with all the trappings of foreign languages and boxes and papers passed around. He could just as well have said "imagine a Turing machine able to carry a convincing conversation - it will never understand what it says or experience consciousness, because it is just processing formal logic instead of working like our brains do". Wow, really insightful, right?

I think the Chinese room is quite silly, in that his conviction that it somehow shows that consciousness can't arise from formal logic processes is totally unfounded. The more general idea, that machines that act and respond exactly like we do might not be conscious but merely simulating our behavior, is interesting though. Is that possible, or is it meaningless to talk about a difference between consciousness and its simulation? How would we tell the difference if there was one? Would a machine with only simulated consciousness have any rights?
The really interesting part is that there is no way to test whether a being is conscious through any conceivable observation. Heck, you would only "know" that a synthmorph was conscious while you were sleeved in it (and if you later resleeved into a biomorph and had memories of being conscious as a synth, you could argue that you are only just now interpreting the unconscious memory information from a conscious point of view - another layer of simulation). If someone in an artificial brain (or even a real brain) said they were conscious, that could be a simulation as well.

Sounds like an interesting concept for a cult in an adventure seed. Heck, they might believe that all downloaded "egos" are just mind-simulations and that the death of the original body is the end of the conscious mind. Think heavy bioconservatives, devout in their belief that everyone who left their birth-body isn't a real person... no matter how much they might scream, they aren't feeling "real" pain. And if resleeving means the end of conscious life, then it is an evil that needs to be destroyed at all costs! The only way to prove this belief false to an individual would be to resleeve them (and wipe the original), at which point they would likely go mad as their worldview crashes down around them (and it would do nothing to convince any other member of the organization).
Anarhista
Re: Chinese room, Turing test and the existence of A.I.-s
First, I'll agree that an oversimplified analogy can be, and often is, horribly wrong. The Chinese room 'may' have more holes than a Texan shooting barrel, but it does provoke some questions about A.I. (basically I like questions that make you think, rather than ones you categorically dismiss or accept). Just what you said: would a machine with only simulated consciousness have 'actual' consciousness? And what about its rights? These are the questions worth thinking about. Again, the problem (in my opinion) is that there isn't a 'definitive' definition of consciousness, so one could defend the Chinese room indefinitely. In the absence of new ideas/data I'll bury this thread. Edit: Ooops, I may have hastened the burying a bit...
So Long, and Thanks for All the Fish.
The Doctor
Re: Chinese room, Turing test and the existence of A.I.-s
Anarhista wrote:
If you translate this into computer hardware and complex expert systems, the question is: can digital computers (as we know them NOW) be self-aware and pass the Turing test for true A.I.? (A test I don't find very precise, but I'll use what I have.) For more info read this: http://en.wikipedia.org/wiki/Turing_test
Non-sentient constructs have passed the Turing test several times in recent history. Cleverbot [url=http://www.geekosystem.com/cleverbot-passes-turing-test/]was mistaken for a person[/url] by approximately 59% of the questioners, slightly better than a coin-flip. I believe the construct called Julia did, also. Agent Cameron has a bit of a reputation for convincing journalists that she is a (very busy, very sleep-deprived, possibly inebriated) person as well. There are others that have passed the Test, but I am still rebuilding my personal archive search engine after a hardware migration, so finding the papers is a manual (read: slow) process. [url=https://en.wikipedia.org/wiki/Turing_test#Naivete_of_interrogators_and_t...]This is one of the reasons why.[/url]

With a sufficiently complex choice tree and mathematical models of a particular language (and probably training on a rather large body of carefully chosen text - IRC is not the best medium), the idiosyncrasies that often creep into machine-generated text can be minimized. It is also possible for people to fail the Turing test; I did, in an undergrad AI class some years ago (which explains a few things, or so they tell me...)
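As a back-of-the-envelope aside: whether 59% is meaningfully better than a coin-flip depends on how many questioners there were. A quick sketch (the judge count below is hypothetical - I do not have the actual figure in my archive yet either):

[code]
# If every questioner were just flipping a coin, how likely is it
# that 59% or more of them would still vote "human"? The judge
# count n below is hypothetical, purely for illustration.
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 100                  # hypothetical number of questioners
k = round(0.59 * n)      # 59% of them voted "human"
print(p_at_least(k, n))  # ~0.04: unlikely to be pure guessing
[/code]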
d2f
Re: Chinese room, Turing test and the existence of A.I.-s
The main problem with the Chinese room thought experiment is its roots in the computational model. Wolpert's studies suggest that the mind does not work based on formal logic, but on [url=http://en.wikipedia.org/wiki/Decision_theory]Bayesian decision theory[/url], using a [url=http://en.wikipedia.org/wiki/Kalman_filter]Kalman filter[/url] for cross-checking prognostic models. Therefore, a computer model trying to emulate true biological intelligence would need to be something close to a [url=http://en.wikipedia.org/wiki/Hubert_Dreyfus]Heideggerian AI[/url]. [url=http://en.wikipedia.org/wiki/Embodied_cognition]Embodied cognition[/url] (as proposed by Gallagher, Clark or Varela, to name a few) plays an important role in this. The Bayesian model needs feedback from the environment to create sentience. A truly sentient AI therefore needs Bayesian decisions that are imperfect; it is the difference between prediction and outcome that allows us to actively experience conscious thought.

Case in point would be [url=http://en.wikipedia.org/wiki/Flow_(psychology)]flow experiences[/url], or trance experiences, where the brain's prognostic models are near perfect and thus prevent a conscious experience. Another fine example would be a phenomenon called interface shift, where tools become a sensory extension of our mind - where we "feel" with the tool, rather than the hand holding it. It is biomechanically the same as a flow experience and very powerful as an effect. To try it yourself, take a pen, close your eyes and drag the tip of the pen over a rough surface.

Long story short: it is time to wave the computational model of the mind goodbye. Whether Wolpert's idea of a Bayesian brain can be upheld is a good question. The importance of embodiment for a true AI, however, can no longer be denied. P.S.: excuse my spelling and punctuation.
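Edit: for anyone unfamiliar with Kalman filters, here is a one-dimensional sketch of the predict/correct loop (the noise constants are arbitrary illustration values). The "innovation" - the mismatch between prediction and observation - is exactly the imperfection my argument leans on:

[code]
# One-dimensional Kalman filter: the "prognostic model" predicts,
# a noisy observation arrives, and the estimate is corrected in
# proportion to the relative uncertainties.

def kalman_step(x, P, z, Q=0.01, R=0.5):
    """One predict/correct cycle.
    x, P : current state estimate and its variance
    z    : new noisy measurement
    Q, R : process and measurement noise variances (arbitrary here)"""
    x_pred, P_pred = x, P + Q      # predict: state persists, uncertainty grows
    y = z - x_pred                 # innovation: prediction error
    K = P_pred / (P_pred + R)      # gain: trust in measurement vs model
    return x_pred + K * y, (1 - K) * P_pred

x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:    # noisy readings of a roughly 1.0 signal
    x, P = kalman_step(x, P, z)
print(x, P)                        # estimate near 1.0, variance shrunk
[/code]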
Arenamontanus
Re: Chinese room, Turing test and the existence of A.I.-s
d2f wrote:
The main problem with the Chinese room thought experiment is its roots in the computational model. Wolpert's studies suggest that the mind does not work based on formal logic, but on [url=http://en.wikipedia.org/wiki/Decision_theory]Bayesian decision theory[/url], using a [url=http://en.wikipedia.org/wiki/Kalman_filter]Kalman filter[/url] for cross-checking prognostic models. Therefore, a computer model trying to emulate true biological intelligence would need to be something close to a [url=http://en.wikipedia.org/wiki/Hubert_Dreyfus]Heideggerian AI[/url]. [url=http://en.wikipedia.org/wiki/Embodied_cognition]Embodied cognition[/url] (as proposed by Gallagher, Clark or Varela, to name a few) plays an important role in this. The Bayesian model needs feedback from the environment to create sentience. A truly sentient AI therefore needs Bayesian decisions that are imperfect; it is the difference between prediction and outcome that allows us to actively experience conscious thought.
I think there is something to this argument, but it doesn't preclude a computationalist account of the mind. First, showing that an emulation of biological intelligence isn't entirely computational doesn't show that intelligence isn't computational. Second, Bayesian methods are purely computational, as are Kalman filters. Embodying the AI is probably a good idea, but it is not clear that the embodiment (and lifeworld) has to be a material world. It could just be a detailed virtual setting with enough emergent complexity to challenge the AI's models.
Extropian
d2f
Re: Chinese room, Turing test and the existence of A.I.-s
I completely agree that the environment the mind is embodied in does not need to be a physical environment. It merely needs to provide feedback. As such, a simulation is completely sufficient. Bayesian decision theory is not a traditional "computational model", though; it's a mathematical model. The computational model describes a particular paradigm in cognitive science using formal logic and categories. It faces the problem of infinite regress, and as such proves a problem for true AI programming. Wolpert's model is based on the assumption (a point he argues VERY well) that the brain's sole purpose is the coordination of complex and adaptive movements, speech being one of them. I would argue that reflective thought in itself is a form of movement.
Smokeskin
Re: Chinese room, Turing test and the existence of A.I.-s
d2f wrote:
The Bayesian model needs feedback from the environment to create sentience. A truly sentient AI therefore needs Bayesian decisions that are imperfect; it is the difference between prediction and outcome that allows us to actively experience conscious thought.
This sounds like hyperbole. I'm pretty sure the hard problem of consciousness hasn't been cracked.
d2f wrote:
Case in point would be [url=http://en.wikipedia.org/wiki/Flow_(psychology)]flow experiences[/url], or trance experiences, where the brain's prognostic models are near perfect and thus prevent a conscious experience.
I'd put my money on the hypothesis that dopamine release causes the experience of flow, rather than it being some artefact of information processing. I'd also imagine that direct brain stimulation could induce flow regardless of how perfect the brain's prognostic models are at the time.
Arenamontanus
Re: Chinese room, Turing test and the existence of A.I.-s
d2f wrote:
Bayesian decision theory is not a traditional "computational model", though; it's a mathematical model. The computational model describes a particular paradigm in cognitive science using formal logic and categories. It faces the problem of infinite regress, and as such proves a problem for true AI programming.
Well, I suspect a critic of Bayesian decision theory would also argue it has a grounding problem. When the AI estimates the probability of seeing a tomato at 90% because of various visual inputs, one might argue that it is doing manipulations just as formally empty as the formal-logic type of AI: various abstract features are assigned probabilities based on other abstract features, and these numbers are not even "real" numbers but bit strings manipulated in particular ways by the floating-point logic of the computer. However, I would argue that the binding between the sensors and those bit strings does ground the computation in the real world - but then I would have to accept that a formal logic system doing the same calculations would also be grounded. I think the real reason to go Bayesian is that it doesn't assume perfection on the part of the inferences, data or system itself: it can be implemented on shoddy hardware and software and still work mostly right. (I speak from experience: the network I studied for my Ph.D. tended to work decently even when implemented wrong!)
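To make the tomato example concrete: the 90% could come from nothing more than repeated Bayesian updating over feature likelihoods. A minimal sketch, with invented numbers and a naive independence assumption between features:

[code]
# Bayes' rule over invented feature likelihoods, assuming the
# features are independent (naive Bayes). All numbers are made up.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H | evidence) from a prior and the two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

p = 0.30                      # prior: how common tomatoes are here
p = posterior(p, 0.95, 0.20)  # evidence: the blob is red
p = posterior(p, 0.90, 0.30)  # evidence: the blob is round
print(p)                      # ~0.86: "probability of seeing a tomato"
[/code]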
Quote:
I would argue that reflective thought in itself is a form of movement.
I agree. It is just an exaptation of our old motor and motor planning systems. We "move" mental objects rather than muscles, and call it thinking.
Extropian
Xagroth
Re: Chinese room, Turing test and the existence of A.I.-s
Personally, I don't see the point of such a debate about what makes an artificial intelligence construct "alive". I think there is one definition for "living being" (it is born, develops, interacts, reproduces, dies), and for sentience I believe it is even simpler: will, goals, self-improvement and growth are the necessary characteristics, and they cannot be proven with a single test (because, let's be reasonable here, you can cheat by making a program specifically designed to pass that test!), only through long-term interaction. In EP terms, for me the difference between an AGI and a simple LAI is that the AGI has a will of its own, and can develop itself (grow) into unnecessary fields of expertise for no reason but personal enjoyment or curiosity. A LAI, on the other hand, is completely faithful and devoted to its core programming, and won't have any kind of personal goals or ambitions. It can be programmed to be curious or to show other behaviours that mimic an AGI's, but it will never have a self-drive. In philosophical terms, a true intelligence can go against its own nature. So an AGI can say "no" to a command, but a simple LAI won't.
Arenamontanus
Re: Chinese room, Turing test and the existence of A.I.-s
Xagroth wrote:
Personally, I don't see the point of such a debate about what makes an artificial intelligence construct "alive". I think there is one definition for "living being" (it is born, develops, interacts, reproduces, dies), and for sentience I believe it is even simpler: will, goals, self-improvement and growth are the necessary characteristics, and they cannot be proven with a single test (because, let's be reasonable here, you can cheat by making a program specifically designed to pass that test!), only through long-term interaction.
Well, what about a system like AIXI, which runs all possible programs internally and acts using the results from the ones that best achieve its goals? It has pre-defined goals that it will not change (and indeed, it will protect them fiercely). Yet it would be able to solve any problem just as well as the best possible program, including problems involving human interaction. It would in many ways be a LAI, yet if the problem at hand required AGI abilities it would gain them. It would realize any realizable moral truths, but would disregard all of those that did not help it reach its goals with maximum efficiency.
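A bounded caricature of the idea (real AIXI weights programs by their length and is incomputable; this toy merely enumerates every short opcode sequence and acts on the best one):

[code]
# Toy flavour of AIXI: enumerate every short "program", score each
# against a fixed goal, act on the best. Real AIXI weights programs
# by length and is incomputable; this is a bounded caricature.
from itertools import product

OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2, "-3": lambda x: x - 3}

def run(program, x=0):
    """Execute a sequence of opcodes on a starting value."""
    for op in program:
        x = OPS[op](x)
    return x

GOAL = 10   # pre-defined and never questioned, exactly like AIXI's goals
best = min(
    (p for n in range(1, 6) for p in product(OPS, repeat=n)),
    key=lambda p: (abs(run(p) - GOAL), len(p)),  # accurate first, then short
)
print(best, run(best))  # e.g. ('+1', '+1', '*2', '+1', '*2') -> 10
[/code]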
Quote:
In philosophical terms, a true intelligence can go against its own nature. So an AGI can say "no" to a command, but a simple LAI won't.
I'm not sure. LAIs often reject commands ("Syntax error/access denied!"), including for complex internal reasons no human can understand (why did your browser crash last time?). An AGI might decide not to do what you order it to, but a sceptical philosopher may point out that that decision was based on a deterministic process run on its accumulated knowledge, and deep down due to some original code. Even if the AGI changes that code, the changes will still be due to its original state: all "free will" in the system comes from external inputs, so in what sense can the AGI be said to act on its own? (Or the philosopher, for that matter?) Of course, LAIs are built to do what they are told and not to do too many unexpected things, while AGIs are intended to be general and grow in creative ways. But that in itself seems very much to be a given nature. Maybe the truly free AGI is the one that decides to lobotomize itself into a LAI - but that seems fairly pointless.
Extropian
Anarhista
Re: Chinese room, Turing test and the existence of A.I.-s
Agrhhh! You people have given me more homework than many professors! :P Since I don't want to spew nonsense I'll generally refrain from comments, but the more I learn about today's emulation of the human mind (the good qualities...), the more I'm starting to believe it can be done...
So Long, and Thanks for All the Fish.
Xagroth
Re: Chinese room, Turing test and the existence of A.I.-s
Anarhista: personally, I believe a "simulation" of a person might be achieved (like the one presented in Caprica, the prequel series to the 2004 Battlestar Galactica): an expert program fed all available information about someone might become indistinguishable from the original. Sadly, true digitization of the human mind seems more a matter of faith than science, at least for now.

Arenamontanus: can AIXI decide its own goals? I'm not talking about the steps to reach the goal it was given, but, for example, if I ask it to add 5+5 and then go as it pleases... will it simply wait for the next command forever? Also, you say it will protect the given goal with all its strength, but can it change that goal of its own volition? Surpass it? Decide to stop pursuing it and do something else entirely?

As for the "no" to a command, I am not talking about error messages, but about conscious refusals of tasks the AI could do but won't, because, despite all the reasons to, it doesn't want to. In essence, will equals ego, which equals sentience. And will demands "wants", "fears" and "never will dos", among other things. But the trick is that these should be decided by the AI, not simply given (at least, eventually: children, after all, take on a lot of their parents' ideals).
Decivre
Re: Chinese room, Turing test and the existence of A.I.-s
Xagroth wrote:
Anarhista: personally, I believe a "simulation" of a person might be achieved (like the one presented in Caprica, the prequel series to the 2004 Battlestar Galactica): an expert program fed all available information about someone might become indistinguishable from the original. Sadly, true digitization of the human mind seems more a matter of faith than science, at least for now.
I'm of the opposite mind on this. I think that true simulation of the human brain - blueprinting the neural layout of a human being and using software to emulate the specific biological processes of the human mind - is going to be significantly easier in the long run than creating a true artificial intelligence which then tries to imitate a human being merely by utilizing information gathered about them from aggregated sources. There are many elements of a person's mind that simply do not get recorded by any means, secrets being a key part of that, and those things will be hard to imitate knowing only what is recorded and stored. Furthermore, we'd have to create an artificial intelligence program capable of all the aspects of human thought... something far harder, in my opinion, than simply emulating brain biology and expecting human thought as a byproduct.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Xagroth
Re: Chinese room, Turing test and the existence of A.I.-s
Decivre, I am not saying that a true and 100% accurate "fork" of somebody can be made just by pumping all the relevant data into some sort of program (note that, EP-wise, you can only achieve a simulation of somebody's behaviour; it won't fool any kind of test, like brainwaves, and it won't hold any secrets you didn't tell it). I am saying that it is easier to build an artificial construct that can be perceived (in the short term) as a digitized copy of somebody. At best, it will be a little like a muse: a memento of a departed one that can behave really close to the original (or to the conception of the original its creator had!), but it would never pass as them.

Incidentally, I just had an idea for the GM's toolbox: one of these constructs "thinks" close enough to the original that it can be used to retrace the original's decisions, like what kind of password they would choose, and stuff like that. The players might need to use one of these "toys" as a guide for something. Or, more evil, an enemy of the players might have used all that juicy personal information they post online to build one of these constructs, close enough to their mindset to give him a great edge when dealing with them (essentially an excuse for an enemy that stays one or two steps ahead of the players).
Decivre
Re: Chinese room, Turing test and the existence of A.I.-s
Xagroth wrote:
Decivre, I am not saying that a true and 100% accurate "fork" of somebody can be made just by pumping all the relevant data into some sort of program (note that, EP-wise, you can only achieve a simulation of somebody's behaviour; it won't fool any kind of test, like brainwaves, and it won't hold any secrets you didn't tell it). I am saying that it is easier to build an artificial construct that can be perceived (in the short term) as a digitized copy of somebody. At best, it will be a little like a muse: a memento of a departed one that can behave really close to the original (or to the conception of the original its creator had!), but it would never pass as them.
Even then, I disagree. The production of a versatile humanlike general AI is likely far more daunting than an attempt at creating a biological simulation of the brain. We've already run plenty of biological simulations... we just need to produce one that operates at the scale and scope of the entire human mind (or body). That said, a general AI would probably take far fewer computing resources to run and operate. A biological sim has to simulate every single process that happens in the human mind or body, spending processor resources on every element it needs to keep track of. An artificial intelligence simply recreates the consequences of intelligence, without having to simulate every individual process. Still, I think the simulation will come first... we might even produce the first true general intelligences while experimenting with simulations.
Xagroth wrote:
Incidentally, I just had an idea for the GM's toolbox: one of these constructs "thinks" close enough to the original that it can be used to retrace the original's decisions, like what kind of password they would choose, and stuff like that. The players might need to use one of these "toys" as a guide for something. Or, more evil, an enemy of the players might have used all that juicy personal information they post online to build one of these constructs, close enough to their mindset to give him a great edge when dealing with them (essentially an excuse for an enemy that stays one or two steps ahead of the players).
Once XP technology comes out, recreating someone becomes a real possibility. One could simply create a tabula rasa mind state based on the person to be restored (the equivalent of an infant, perhaps), then run a complete XP playback for that mind state, allowing it to live through that person's entire lifetime. This may very well have the potential to recreate a person without a backed-up mind blueprint... but in many ways it is just a workaround to the same effect.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]