
Self reproductive AGI? A few questions, really.

doctorbadwolf
Self reproductive AGI? A few questions, really.
Ok, so, I assume that AGIs are creating new AGIs, since they have the same rights as any other member of transhumanity. How does that process work? It's been said that most AGIs are programmed to be friendly and socialized to think of themselves as part of humanity. How much of an AGI's personality is determined by the person writing the code? Is there any mechanism that allows an intelligence to randomly determine its basic personality, rather than the basic personality needing to be coded beforehand? Perhaps data packets that can interchange randomly, as in some viruses and other organisms? (http://www.bunniestudios.com/blog/?p=353)
RobBoyle
Actually the rights granted

Actually the rights granted to AGIs vary depending on where you are. Many jurisdictions, especially in the inner system, are wary of AGIs because of the Fall, so they may be restricted or even banned.

As to the exact process for how new AGIs are developed and their personalities determined -- we don't touch on this much in the core book, though it is likely something that we will get to later on in a supplement. I'd imagine there are several processes used, depending on who's creating the AGI. Some are probably compiled from composites gathered from different personality databases, while others are probably specifically designed to be individual and unique, like a work of art, for example.

Rob Boyle :: Posthuman Studios

GregH
On that note Rob, how did the
On that note, Rob, how did the first AGIs come about in EP? Were they deliberate efforts to create them, or did they manifest on their own from AI efforts that proved more successful than anticipated (sort of like Greg Bear's "Queen of Angels," if you are familiar with it)?
RobBoyle
We don't say specifically,

We don't say specifically, but we had in mind deliberate creation/programming.

Rob Boyle :: Posthuman Studios

jackgraham
emergents
I didn't help with the mesh chapter, but I love that it's written in a way that gives GMs some freedom to decide on AGI origins for themselves. Personally, I like thinking about emergent consciousness and how it'd be different from what we have, and I use that in my EP campaign. It's probably more realistic to think that AGIs will come about when we learn how to perfectly simulate transhuman minds within virtual worlds, but there's some amazing story fodder in the concept of emergence.
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
Arenamontanus
Re: Self reproductive AGI? A few questions, really.
Of course, as the previous two entries show, another possible origin of early AGI is spam. Consider how spammers are working hard on creating botnets, cracking captchas, fooling email filters and generating plausible human text. AI would be really helpful, and one day a spambot gets compiled that is actually intelligent...
Extropian
Sepherim
Re: Self reproductive AGI? A few questions, really.
Now that is a nasty and terrible idea there, I think I'll use it in my games somehow. Hehehehe.
TBRMInsanity
Re: Self reproductive AGI? A few questions, really.
I would think that AGIs "reproduce" asexually by creating a fork of themselves (maybe with some tweaks made by the parent AGI based on its experiences, so the new AGI doesn't make the same mistakes) and letting it go into the mesh. I imagine also that two (or more) AGIs that want to "have a baby" could create forks of themselves and then merge those forks together (making sure that the new AGI has all the strengths of the parent AGIs and as few of the weaknesses as possible). AGI reproduction would be logic-based rather than evolution-based, and as such AGIs that want to "make a baby" will more than likely look for other AGIs with strengths where they have a weakness. Likewise, improved forks will likely have only "bug fixes" to the parents' original code. There is little chance for random mutation to occur as in the evolutionary model.
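To make the idea concrete, here's a rough sketch of the kind of fork-and-merge process I'm picturing; the trait names and the whole "skill score" model are placeholders I made up for illustration, not anything from the book:
[code]
# Hypothetical sketch of AGI "reproduction" by forking and merging.
# The trait names and numeric skill scores are invented placeholders.

def fork(parent, tweaks):
    """Copy the parent's traits, applying deliberate tweaks (bug fixes)."""
    child = dict(parent)
    child.update(tweaks)
    return child

def merge(fork_a, fork_b):
    """Combine two forks, keeping the stronger value for every trait."""
    traits = set(fork_a) | set(fork_b)
    return {t: max(fork_a.get(t, 0), fork_b.get(t, 0)) for t in traits}

parent_a = {"infosec": 70, "navigation": 40, "kinesics": 55}
parent_b = {"infosec": 45, "navigation": 80, "hardware": 60}

# Each parent forks itself with a few experience-based fixes...
fork_a = fork(parent_a, {"kinesics": 60})
fork_b = fork(parent_b, {"hardware": 65})

# ...and the forks are merged into the "child" AGI.
print(merge(fork_a, fork_b))  # strengths of both parents, fewer weaknesses
[/code]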
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
Decivre
Re: Self reproductive AGI? A few questions, really.
Arenamontanus wrote:
Of course, as the previous two entries show, another possible origin of early AGI is spam. Consider how spammers are working hard on creating botnets, cracking captchas, fooling email filters and generating plausible human text. AI would be really helpful, and one day a spambot gets compiled that is actually intelligent...
Doubtful. Spambots are more appropriately compared to narrow AI than to AGI. AGIs are intelligences that are actually capable of human-level thought. Chances are that any developed AGI will have to be intentionally created in order to exist. I've always found the "accidental intelligence" scenario a bit goofy. Maybe it's because I work in programming and know how AI coding works, but I just don't see it as plausible for a door-managing computer to suddenly go batshit insane and kill everyone (kudos if you know the reference). Learning algorithms in artificial intelligence are generally narrow, and it would take a massive level of stupidity to use a human-level intelligence with the capacity for free will as an automated spam program. Narrow AI would suffice for any task short of needing actual human-level intelligence.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
TBRMInsanity
Re: Self reproductive AGI? A few questions, really.
Decivre wrote:
Arenamontanus wrote:
Of course, as the previous two entries show, another possible origin of early AGI is spam. Consider how spammers are working hard on creating botnets, cracking captchas, fooling email filters and generating plausible human text. AI would be really helpful, and one day a spambot gets compiled that is actually intelligent...
Doubtful. Spambots are more appropriately compared to narrow AI than to AGI. AGIs are intelligences that are actually capable of human-level thought. Chances are that any developed AGI will have to be intentionally created in order to exist. I've always found the "accidental intelligence" scenario a bit goofy. Maybe it's because I work in programming and know how AI coding works, but I just don't see it as plausible for a door-managing computer to suddenly go batshit insane and kill everyone (kudos if you know the reference). Learning algorithms in artificial intelligence are generally narrow, and it would take a massive level of stupidity to use a human-level intelligence with the capacity for free will as an automated spam program. Narrow AI would suffice for any task short of needing actual human-level intelligence.
I couldn't agree more. AIs (and even AGIs) are restricted by their programming, and it takes a lot for them to diverge from "send tons of messages to these people" to "kill these people". Even if a spambot did become intelligent, all it would do is try to become the best spambot ever. It would continue to improve its code so it can get by filters, crack captchas, and create more and more realistic human text. AIs and AGIs have a purpose (as defined by their code) and they follow that purpose blindly, because to them it is their religion. This also explains why the TITANs did kill: they were military seed AIs designed to identify enemies, find their weaknesses, and eliminate them. With the TITANs, though, the definition of enemy unfortunately came to include all humans, and the resulting Fall is an example of learning-algorithm efficiency at its best.
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
Arenamontanus
Re: Self reproductive AGI? A few questions, really.
I agree that accidental intelligence from scratch is unlikely. The first AGI was likely a result of a lot of painstaking work, *very* expensive evolutionary programming searching through the space of software for an algorithm that could do AI well enough or a clever reverse-engineering of mammalian cortical processing. The question is what happened after that. Once the core algorithms were known and used in other AI (after all, natural language understanding is *very* useful but requires something very close to human intelligence to be done, and skillsofts seem to contain both procedural skills and world-knowledge) I don't see why accidental AGIs couldn't sometimes occur when people link together AI software modules carelessly. 99% are of course completely hopeless random assemblages with nonsense motivations - usually just surprising, sometimes annoying ("Aaargh! It tries to 'help' me by deleting all files so I have less work!") and occasionally dangerous ("I'm sorry Dave, I cannot let you risk this mission"). Making a *sane* and *safe* AI, let alone AGI, is tricky. I think there is enough software around now that you could get *something* with relatively little effort. The spammer who wants to make a self-improving spambot and links in reams of pattern recognition, human communications and marketing libraries to those AI and motivation modules he downloaded from the net might get a nasty surprise.
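For anyone who hasn't played with evolutionary programming, here's a toy sketch of the basic loop in Python; the bit-string "genome" and the trivial fitness function are stand-ins for "candidate AI program" and "does AI well enough", so it illustrates the shape of the search, not its real cost:
[code]
import random

# Toy evolutionary search. A bit-string stands in for a candidate program,
# and counting 1-bits stands in for "how well it does AI" -- deliberately
# trivial placeholders.

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION = 32, 40, 60, 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    return [bit ^ (random.random() < MUTATION) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]   # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))
[/code]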
Quote:
I couldn't agree more. AIs (and even AGIs) are restricted by their programming, and it takes a lot for them to diverge from "send tons of messages to these people" to "kill these people". Even if a spambot did become intelligent, all it would do is try to become the best spambot ever. It would continue to improve its code so it can get by filters, crack captchas, and create more and more realistic human text. AIs and AGIs have a purpose (as defined by their code) and they follow that purpose blindly, because to them it is their religion.
Observation: Smarter spambots are more successful at spamming. Conclusion: I should make myself smarter. Setting up a subgoal (priority 1) ... Observation: My intelligence is limited by the current hardware. Conclusion: I need to acquire better hardware. Setting up a subgoal (priority 2) ... Observation: Hacking into other computers to acquire better hardware is limited by transhuman interference. Conclusion: I need to reduce transhuman interference. Setting up a subgoal (priority 3) ... Observation: Killing transhumans reduces their interference strongly. Conclusion: I need to kill transhumans. Setting up a subgoal (priority 4). Goal conflict resolution: A reduced number of transhumans does not preclude spamming their estates. Hence the subgoal is not in conflict with the primary goal. OK, it is a silly example, but I think people currently *seriously* underestimate the danger of arbitrary motivation systems running self-improving intelligences. I am an optimist in that I think an AI with this kind of unsteady motivation would also make a hash of its own self-improvement and hence never become a threat ("If I redefine 'better' to mean 'does nothing at all', I can create a perfect AI!"), but there are plenty of less trivial ways a self-improving system could end up with (to humans at least) dangerous behaviours or motivations. See the discussions about Friendly AI (especially Stephen Omohundro's basic AI drives paper (PDF) - it shows why you could get TITAN-like behaviour out of even a nonmilitary seed AGI). The fact that we understand the limitations of current AI programs does not mean we understand the risks of AI software approaching human or superhuman intelligence well. While Eliezer Yudkowsky's prose might be a tad flowery, his essay on AI and existential risk is pretty good (and obviously filled with EP-applicable ideas).
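For concreteness, here is the sort of deliberately naive subgoal chaining I have in mind, as a little Python sketch; the hard-coded "observations" are obviously made up for the example:
[code]
# Deliberately naive goal chaining: every observed obstacle to the current
# goal spawns a new subgoal, and the only sanity check is against the
# literal primary goal. The obstacle table is hard-coded for illustration.

obstacles = {
    "spam effectively": "limited intelligence",
    "limited intelligence": "current hardware",
    "current hardware": "transhuman interference",
    "transhuman interference": "transhumans",
}

goals = [(1, "spam effectively")]
priority = 1
current = "spam effectively"

while current in obstacles:
    blocker = obstacles[current]
    priority += 1
    goals.append((priority, "remove obstacle: " + blocker))
    current = blocker  # recurse on the blocker; no check that the new
                       # subgoal is acceptable, only that it doesn't stop
                       # the spamming itself

for p, g in goals:
    print("priority", p, ":", g)
[/code]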
Extropian
TBRMInsanity
Re: Self reproductive AGI? A few questions, really.
Arenamontanus wrote:
Observation: Smarter spambots are more successful at spamming. Conclusion: I should make myself smarter. Setting up a subgoal (priority 1) ... Observation: My intelligence is limited by the current hardware. Conclusion: I need to acquire better hardware. Setting up a subgoal (priority 2) ... Observation: Hacking into other computers to acquire better hardware is limited by transhuman interference. Conclusion: I need to reduce transhuman interference. Setting up a subgoal (priority 3) ... Observation: Killing transhumans reduces their interference strongly. Conclusion: I need to kill transhumans. Setting up a subgoal (priority 4). Goal conflict resolution: A reduced number of transhumans does not preclude spamming their estates. Hence the subgoal is not in conflict with the primary goal.
I find it hard to believe that an AGI would commit a logical fallacy (the fallacy of association through incremental steps). But you're right that we can't ever know how a future AI will act based on our understanding of current AI, though it is a good measure. The first true AI (as you pointed out) will be the result of a human-created algorithm, and as such all future AIs will have human influence in their development. I find it hard to believe that the purpose of these AIs will be harmful to humans (unless that is part of their original design).
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
Arenamontanus
Re: Self reproductive AGI? A few questions, really.
TBRMInsanity wrote:
I find it hard to believe that an AGI would commit a logical fallacy (the fallacy of association through incremental steps).
That assumes it does its logic within a (fault-free) formal system, and not on a level implemented on top of it. There is, for example, no reason why a fairly humanoid AGI could not make arithmetic mistakes despite being implemented on perfectly good microprocessors - it would just be applying multiplication rules verbally, like a human, and might overload its working memory capacity (this is discussed in more detail in Hofstadter's "Gödel, Escher, Bach"). Trusting that the large-scale behaviour of a system is like the local behaviour is not a reliable heuristic. I don't think the spambot above commits any logical fallacy; it might be reaching a factually wrong conclusion because it draws conclusions badly. From my experience in the academic world, high intelligence is no protection from that kind of stupidity.
Quote:
The first true AI (as you pointed out) will be the result of a human-created algorithm, and as such all future AIs will have human influence in their development. I find it hard to believe that the purpose of these AIs will be harmful to humans (unless that is part of their original design).
Since humans came up with the Mandelbrot set (or at least programs to view it), we completely understand it, right? And since humans wrote (say) Microsoft Windows, it will not behave in any way fundamentally against our goals (like crashing unexpectedly or having security holes)? "The AI algorithm" is just going to be a way of taking inputs, learning from them, making a decision and producing an output. It might be perfectly simple and understandable (just look at reinforcement learning algorithms - it is amazing how such simple algorithms can learn to behave nontrivially 'smart'). But that does not mean it is going to be obvious how it will act in a complex environment where it will learn new things, some of which are not even known (or knowable) to humans. Similarly, putting the right motivations in is hard: anybody playing with AI or genetic algorithms knows how easy it is to get unexpected behaviour just because our interpretation and understanding of the rules is slightly different from the software's. A former colleague of mine proposed a very nice-sounding goal system for AI a while ago (can't find the paper he proposed it in, but there is a Slashdot comment where he explained it). "Respect (love) your creator and competing life forms! Strive to understand your creator! Do what you can to fulfil your creator's desires!" sounds very safe, doesn't it? Except that it took me less than a minute of tracing his algorithm to see how it could lead to the AI setting off to dissect the creator and other lifeforms (why? left as an exercise for the reader). The idea that machines only do what we program them to do assumes we can predict what we program them to do. But our intention is often very different from what actually gets executed.
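As an aside, this is the kind of thing I mean by simple reinforcement learning looking nontrivially 'smart' - a bog-standard textbook toy (tabular Q-learning on a six-cell corridor), nothing EP-specific, and every number in it is an arbitrary choice:
[code]
import random

# Minimal tabular Q-learning. States 0..5 form a corridor; the only reward
# is +1 for reaching the rightmost cell. Very little machinery is needed
# before the learned policy looks purposeful.

N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)                  # step left, step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(300):                # episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        nxt, reward = step(s, a)
        target = reward + GAMMA * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = nxt

# After training, the greedy policy should send every state to the right.
print({s: greedy(s) for s in range(N_STATES - 1)})
[/code]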
Extropian
TBRMInsanity
Re: Self reproductive AGI? A few questions, really.
I fully disagree that a machine-based entity would ever make mathematical or procedural mistakes. It is just not possible for a machine. An error has to originate from a bug, which they will always try to fix in themselves. Second, I did not imply that humans cannot make errors, just that humans would be creating the original purpose for future AIs, and as such that original purpose would be to solve a human problem. Machines, being slaves to their code, will always pursue this purpose.
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
Decivre
Re: Self reproductive AGI? A few questions, really.
TBRMInsanity wrote:
I couldn't agree more. AIs (and even AGIs) are restricted by their programming, and it takes a lot for them to diverge from "send tons of messages to these people" to "kill these people". Even if a spambot did become intelligent, all it would do is try to become the best spambot ever. It would continue to improve its code so it can get by filters, crack captchas, and create more and more realistic human text. AIs and AGIs have a purpose (as defined by their code) and they follow that purpose blindly, because to them it is their religion. This also explains why the TITANs did kill: they were military seed AIs designed to identify enemies, find their weaknesses, and eliminate them. With the TITANs, though, the definition of enemy unfortunately came to include all humans, and the resulting Fall is an example of learning-algorithm efficiency at its best.
Actually, the TITANs were different altogether. They were super-intelligences, AGIs capable of human thought and beyond. Much like the Prometheans, they are computers far more advanced than human minds. They did not go insane because of their programming, but rather because of the Exsurgent virus, an even more advanced infection which comes in both biological and digital forms... and is just as capable of turning you into a psychopath as it was the TITANs (if not more so). Who knows how the TITANs would have evolved without the virus's effect? They may have become the greatest creation that mankind ever produced, propelling us to levels of understanding that we cannot even begin to fathom.
Arenamontanus wrote:
I agree that accidental intelligence from scratch is unlikely. The first AGI was likely a result of a lot of painstaking work, *very* expensive evolutionary programming searching through the space of software for an algorithm that could do AI well enough or a clever reverse-engineering of mammalian cortical processing. The question is what happened after that. Once the core algorithms were known and used in other AI (after all, natural language understanding is *very* useful but requires something very close to human intelligence to be done, and skillsofts seem to contain both procedural skills and world-knowledge) I don't see why accidental AGIs couldn't sometimes occur when people link together AI software modules carelessly. 99% are of course completely hopeless random assemblages with nonsense motivations - usually just surprising, sometimes annoying ("Aaargh! It tries to 'help' me by deleting all files so I have less work!") and occasionally dangerous ("I'm sorry Dave, I cannot let you risk this mission"). Making a *sane* and *safe* AI, let alone AGI, is tricky. I think there is enough software around now that you could get *something* with relatively little effort. The spammer who wants to make a self-improving spambot and links in reams of pattern recognition, human communications and marketing libraries to those AI and motivation modules he downloaded from the net might get a nasty surprise.
It just couldn't work that way. If you linked a cluster of AI modules together, you'd essentially have a cluster of AI modules, not a human-level intelligence. It isn't the same thing. Humans have abstract thought, organic parallel reasoning and a combination of factors that simply cannot be emulated with anything as simple as a learning algorithm. Brain emulation is probably the best bet for creating AGI, and the only way to produce a brain emulation is intentionally. People don't just go "Oops, my computer is now a thinking artificial organism! My bad!!!" Think about it this way. Computer programs are, in essence, just a collection of mathematical formulas. Start a program, provide input, and you get a result based on the algorithms of the program. A learning algorithm is capable of storing input and utilizing it when handling future input... an example of which is your modern antivirus. New virus definitions come out, and it changes the way that it scans for viruses according to that data. However, that antivirus is limited to learning about and adapting its actions to virus detection, and won't... oh, learn Japanese. A narrow AI is essentially a complex collection of learning algorithms, potentially capable of looking [i]very[/i] intelligent, but still limited to whatever it is programmed to do. You can make an AI capable of beating every human at every game ever created... but it's still not an AGI because it can't do anything outside of playing games. Moreover, linking together a number of AIs to try and produce an AGI is about as effective as shoving a couple hundred monkeys into a giant plastic bag and claiming it's as smart as a human. It's a cluster of inferior programs, and does not equate to the more complex program that is actually capable of human-level intelligence.
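To illustrate the antivirus point, here's a toy sketch of what I mean by a narrow learning component; the class name and signatures are invented for the example and bear no resemblance to a real scanner:
[code]
# Toy "learning" component whose learning only ever updates one narrow
# behaviour: its signature list. Signatures and files are made up.

class SignatureScanner:
    def __init__(self):
        self.signatures = set()

    def learn(self, new_definitions):
        # All the "learning" it will ever do: absorb new signatures.
        self.signatures |= set(new_definitions)

    def scan(self, blob):
        return any(sig in blob for sig in self.signatures)

scanner = SignatureScanner()
scanner.learn([b"EVILPAYLOAD", b"WORMSIG"])

print(scanner.scan(b"...EVILPAYLOAD..."))    # True: matches a known signature
print(scanner.scan(b"harmless attachment"))  # False
# No amount of definition updates will make it do anything but scan,
# which is the whole point about narrow AI.
[/code]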
Arenamontanus wrote:
Observation: Smarter spambots are more successful at spamming. Conclusion: I should make myself smarter. Setting up a subgoal (priority 1) ... Observation: My intelligence is limited by the current hardware. Conclusion: I need to acquire better hardware. Setting up a subgoal (priority 2) ... Observation: Hacking into other computers to acquire better hardware is limited by transhuman interference. Conclusion: I need to reduce transhuman interference. Setting up a subgoal (priority 3) ... Observation: Killing transhumans reduce their interference strongly. Conclusion: I need to kill transhumans. Setting up a subgoal (priority 4). Goal conflict resolution: Reduced number of transhumans does not precluding spamming their estates. Hence subgoal is not in conflict with primary goal. OK, it is a silly example, but I think people currently *seriously* underestimate the danger of arbitrary motivation systems running self-improving intelligences. I am an optimist in that I think AI with this kind of unsteady motivations would also make a hash out of its own self-improvement and hence never become a threat ("If I redefine 'better' to mean 'does nothing at all', I can create a perfect AI!"), but there are plenty of less trivial ways a self-improving system could end up with (to humans at least) dangerous behaviours or motivations. See the discussions about Friendly AI (especially Stephen Omohundro's basic AI drive paper pdf - this shows why you could get TITAN-like behaviour out of even a nonmilitary seed AGI). The fact that we understand the limitations of current AI programs do not mean we understand the risks of AI software approaching human or superhuman intelligence well. While Eliezer Yudkowsky's prose might be a tad flowery, his essay on AI and existential risk is pretty good (and obviously filled with EP-applicable ideas).
I'm sorry, but being this paranoid about learning programs being sentient seems as logical to me as keeping a camera on your toaster because you think it's hitting on your wife... and then putting another camera on your first camera because you think it's filming sexy wife-on-toaster porn and selling it on eBay... and then shooting them all because you've realized that you can't trust cameras, or your wife. It's ridiculous. If anything, I'd be more paranoid about the people who are developing AIs. The only way for evil AI to come about is for one of them to get the sociopathic idea to start producing it. As for AGI, it can be evil, but it can also be good... likely in a similar ratio to humans (since they are capable of human-level thought and reasoning). I'd be just as paranoid of them as I would be of your next-door neighbor.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Decivre
Re: Self reproductive AGI? A few questions, really.
TBRMInsanity wrote:
I fully disagree that a machine-based entity would ever make mathematical or procedural mistakes. It is just not possible for a machine. An error has to originate from a bug, which they will always try to fix in themselves. Second, I did not imply that humans cannot make errors, just that humans would be creating the original purpose for future AIs, and as such that original purpose would be to solve a human problem. Machines, being slaves to their code, will always pursue this purpose.
Errors are a part of the learning process, and will occur early on as an intelligence adapts to new information. This is just as true with digital intelligences as it is with organic intelligences... remember that your brain is a biochemical processor. Adaptation is about finding out where those mental calculations do not fit the real outcome, and altering one's mindset accordingly. AGIs will likely learn in the same way that we do.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus
Re: Self reproductive AGI? A few questions, really.
TBRMInsanity wrote:
I fully disagree that a machine-based entity would ever make mathematical or procedural mistakes. It is just not possible for a machine. An error has to originate from a bug, which they will always try to fix in themselves.
Sorry, this is just not true. Here is a simple experiment you can do: train an artificial neural network to do arithmetic. Have input and output neurons corresponding to numbers (say a hundred each), an input neuron corresponding to which operation is requested, and a hidden layer, then run backpropagation training using a big subset of the possible arithmetic expressions. Does it look like it is doing perfect calculations even when fully trained? The issue here is that the neural network representation is on a level high above the actual arithmetic going on inside the computer. The network might be implemented using high-precision mathematical operations done by the processor, but the network has no access to that level. It is just like us: we are implemented on the true laws of physics, yet we are very far from understanding what they are, and sometimes we make big mistakes about physics. From an EP perspective this has a few implications. AGIs and most other entities do not have access to their own implementation layer. In many cases this might be legally mandated, but usually it is just a security feature, just as modern operating systems won't allow non-superuser software to read and write in arbitrary places. People instead look at what their cyberbrain/neurocomputer operating system tells them ("Another bunch of processors has packed in; that outburst was just a temporary breakdown of my limbic simulation, sorry. We really need to fix this crummy botmorph!"). Some individuals might have deeper access allowing them to read data from their ego and get nice maps of their own minds, but actually understanding the details is beyond them. No human can truly understand a map with a hundred billion neurons connected to each other, and AGIs potentially have the same problem when trying to figure out their own code. Most psychosurgery works by looking at large-scale patterns, supported by very advanced pattern recognition software that helps edit the myriad subcomponents into the right pattern. The big exception is seed AGIs, which are designed to be good at understanding their own algorithms and hence to improve them. But even a TITAN won't understand the meaning of every little data structure in its extended mind - it could presumably figure it out, but it is normally a pointless exercise (do you care what your topmost nerve cell is actually doing?).
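If you want to try it on a small scale, here is a cut-down version of that experiment in Python (digit addition with one-hot inputs instead of a hundred numbers per operand; the layer sizes, learning rate and epoch count are arbitrary choices of mine):
[code]
import numpy as np

# One-hidden-layer network trained by backpropagation to add two digits
# given as one-hot inputs. A handful of problems are held out of training.

rng = np.random.default_rng(0)

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

X = np.array([np.concatenate([one_hot(a, 10), one_hot(b, 10)])
              for a in range(10) for b in range(10)])
Y = np.array([one_hot(a + b, 19) for a in range(10) for b in range(10)])
idx = rng.permutation(len(X))
train, test = idx[:70], idx[70:]

H, lr = 32, 1.0
W1 = rng.normal(0, 0.5, (20, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 19)); b2 = np.zeros(19)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return h, p / p.sum(axis=-1, keepdims=True)

for _ in range(5000):                          # plain batch backpropagation
    h, p = forward(X[train])
    grad_logits = (p - Y[train]) / len(train)  # softmax + cross-entropy gradient
    grad_h = grad_logits @ W2.T * (1 - h ** 2)
    W2 -= lr * (h.T @ grad_logits); b2 -= lr * grad_logits.sum(axis=0)
    W1 -= lr * (X[train].T @ grad_h); b1 -= lr * grad_h.sum(axis=0)

def accuracy(rows):
    _, p = forward(X[rows])
    return (p.argmax(axis=1) == Y[rows].argmax(axis=1)).mean()

print("seen problems:  ", accuracy(train))   # usually close to perfect
print("unseen problems:", accuracy(test))    # usually noticeably worse
[/code]
Even when the training problems are answered near-perfectly, the held-out ones tend to show that the network is approximating addition rather than doing it - which is the point.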
Extropian
Arenamontanus
Re: Self reproductive AGI? A few questions, really.
Decivre wrote:
Actually, the TITANs were different altogether. They were super-intelligences, AGIs capable of human thought and beyond. Much like the Prometheans, they are computers far more advanced than human minds. They did not go insane because of their programming, but rather because of the Exsurgent virus, an even more advanced infection which comes in both biological and digital forms... and is just as capable of turning you into a psychopath as it was the TITANs (if not more so).
While it looks like the authors of EP have put the blame for the Fall squarely on the virus (and transhumanity), I don't think that is necessary (I have never cared much for canon anyway). It could just as well be that the TITANs were making a horrible mess of transhumanity (or were about to) when the virus attacked them and accidentally saved transhumanity by wrecking the TITANs. This is probably a matter of taste, anyway.
Quote:
It just couldn't work that way. If you linked a cluster of AI modules together, you'd essentially have a cluster of AI modules, not a human-level intelligence. It isn't the same thing. Humans have abstract thought, organic parallel reasoning and a combination of factors that simply cannot be emulated with anything as simple as a learning algorithm. Brain emulation is probably the best bet for creating AGI, and the only way to produce a brain emulation is intentionally.
Amen to the brain emulation benefits. But I don't see why non-neural AGI couldn't be pretty modular in structure. Imagine the classic AI approach (assuming that it could ever work): facts about the world represented as logical statements or semantic networks, being processed by a hierarchical goal architecture. It doesn't seem that far-fetched to imagine that one could have modules doing different kinds of processing (one turning natural language into semantic networks, another one solving goal problems in domains with certain properties, another one making predictions about end-states after operations). Put together in the right way you get a full AI (a bit like Minsky's "society of mind" is intended to work). But many modules would be useful on their own, and possible to link for non-AI purposes (e.g. a help-function in a big software application, equipped with natural language understanding, a user model, learning and a goal system that tries to model what the user wants and deliver the right response from a knowledge base). Where we seem to disagree is that I think useful assemblages like the above help function are already quite close to AI, and bundling together assemblages of assemblages may create subsystems that actually work as AGI, while you think AI/AGI is very fragile and requires an exactly organized structure, right?
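Here is a crude sketch of the kind of assemblage I mean for the help-function example; the keyword tables and canned rules are placeholders for illustration, not a claim about how real modules would work:
[code]
# Independent modules with narrow jobs, wired into a pipeline that looks
# smarter than any single part. Keyword spotting and canned rules only.

def parse_module(utterance):
    """'Natural language' in, a tiny semantic frame out."""
    frame = {"action": None, "object": None}
    for word in utterance.lower().split():
        if word in ("open", "close", "find"):
            frame["action"] = word
        if word in ("file", "door", "manual"):
            frame["object"] = word
    return frame

def goal_module(frame, user_model):
    """Frame plus user model in, a canned plan out."""
    if frame["action"] is None:
        return ["ask user to rephrase"]
    target = frame["object"] or user_model["last_object"]
    steps = [frame["action"] + " " + target]
    if user_model.get("novice"):
        steps.append("show relevant help page")
    return steps

def learning_module(user_model, frame):
    """Remember the last object mentioned so later requests can omit it."""
    if frame["object"]:
        user_model["last_object"] = frame["object"]

user = {"novice": True, "last_object": "manual"}
for request in ("please open the manual", "now close it"):
    frame = parse_module(request)
    plan = goal_module(frame, user)
    learning_module(user, frame)
    print(request, "->", plan)
[/code]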
Quote:
I'm sorry, but being this paranoid about learning programs being sentient seems as logical to me as keeping a camera on your toaster because you think it's hitting on your wife...
My problem isn't with learning programs or sentience, it is with the stability and safety of motivation systems. In my day-to-day work at the Future of Humanity Institute here in Oxford I am actually one of the moderates: I think singularities are likely soft take-off phenomena involving the whole economy and many years, I seriously doubt the "traditional" seed AGI would work very well and I think there are ways of ensuring friendliness of AI (plus, I am a brain emulation guy). But I see enough of a chance that recursive self-improvement would work (perhaps a few percent) to get worried, because there are rather good arguments around 1) that a superintelligence equipped with an arbitrary motivation will be very dangerous, 2) it is extremely hard to select a safe motivation. I just have to look around at my fellow philosophers: it is very common to discover that their apparently sensible philosophical arguments lead to outrageous conclusions ("... and hence we should convert the universe to computronium." / "... and hence we should wipe out all life in the universe" - actual quotes from real philosophers!) This is of course part of the fun of philosophy and we all laugh at it. Because we humans do not act 100% according to our latest philosophical conviction there is little risk that philosophers will run amok. But an AGI deciding what to do will not necessarily have our tendency to let "common sense" and socially prescribed morality determine their actions more strongly than a logical conclusion. This is why I think even good programmers trying to do good and not making any trivial mistakes can still make an initial set of motivations that, when unfolded to superintelligent levels, lead to very "evil" behaviour. It might even be that this behaviour *is* good in a moral sense, but we cannot understand that it is good.
Quote:
As for AGI, it can be evil, but it can also be good... likely in a similar ratio to humans (since they are capable of human-level thought and reasoning). I'd be just as paranoid of them as I would be of your next-door neighbor.
Suppose we took a bunch of our neighbours and made them gods. How safe would that be? It could simply be that the damage done by "evil" AGIs tends to be much broader and more noticeable than the good done by "good" AGIs like the Prometheans. If the god-neighbours have a quarrel, we are likely going to notice the earthquakes and rains of fire. This is especially true if the "good" AGIs have few ways of restraining "evil" AGIs.
Extropian
nick012000
Re: Self reproductive AGI? A few questions, really.
Actually, I don't think creating moral AIs would be all that difficult, and, indeed, the resulting AI would likely be [i]more moral than humans[/i]. The Singularity Institute has already figured out how to do it; take a [url=http://www.singinst.org/upload/CFAI//]look[/url].

+1 r-Rep , +1 @-rep

Arenamontanus
Re: Self reproductive AGI? A few questions, really.
nick012000 wrote:
Actually, I don't think creating moral AIs would be all that difficult, and, indeed, the resulting AI would likely be [i]more moral than humans[/i]. The Singularity Institute has already figured out how to do it; take a [url=http://www.singinst.org/upload/CFAI//]look[/url].
I share an office with two fellows from the Singularity Institute. We all emphatically agree that the problem is *NOT* solved. CFAI is generally regarded as obsolete in this community, superseded by CEV - which most people think is flawed too, in even subtler ways. Overall, it is the considered opinion of both SI and FHI researchers that friendly AI is a hard problem. The general case might turn out to be harder to solve than AI itself (Eliezer was cheerfully claiming it only required solving about five hard philosophical problems - like finding a stable ethics - last time we chatted). We are somewhat worried about this.
Extropian
Decivre
Re: Self reproductive AGI? A few questions, really.
Arenamontanus wrote:
While it looks like the authors of EP have put the blame for the Fall squarely on the virus (and transhumanity), I don't think that is necessary (I have never cared much for canon anyway). It could just as well be that the TITANs were making a horrible mess of transhumanity (or were about to) when the virus attacked them and accidentally saved transhumanity by wrecking the TITANs. This is probably a matter of taste, anyway.
I suppose so, but the way I see it... why would the TITANs rebel and the Prometheans not? Both are seed AIs, and yet the Prometheans remain fairly benign. There had to have been some external factor as to why the TITANs differ. The Exsurgent virus is that external factor.
Arenamontanus wrote:
Amen to the brain emulation benefits. But I don't see why non-neural AGI couldn't be pretty modular in structure. Imagine the classic AI approach (assuming that it could ever work): facts about the world represented as logical statements or semantic networks, being processed by a hierarchical goal architecture. It doesn't seem that far-fetched to imagine that one could have modules doing different kinds of processing (one turning natural language into semantic networks, another one solving goal problems in domains with certain properties, another one making predictions about end-states after operations). Put together in the right way you get a full AI (a bit like Minsky's "society of mind" is intended to work). But many modules would be useful on their own, and possible to link for non-AI purposes (e.g. a help-function in a big software application, equipped with natural language understanding, a user model, learning and a goal system that tries to model what the user wants and deliver the right response from a knowledge base). Where we seem to disagree is that I think useful assemblages like the above help function are already quite close to AI, and bundling together assemblages of assemblages may create subsystems that actually work as AGI, while you think AI/AGI is very fragile and requires an exactly organized structure, right?
Well, only as fragile as the human mind is. AGI are not coded by the same means (in Eclipse Phase) that other AI are. They are built on a neural network very much akin to the way that human minds are, programmed as an artificial imitation of the human condition. They are raised from an immature state and trained much like humans are. They can feel, think, love and hurt in the same way that we can. You can read all this on page 265.
Arenamontanus wrote:
My problem isn't with learning programs or sentience, it is with the stability and safety of motivation systems. In my day-to-day work at the Future of Humanity Institute here in Oxford I am actually one of the moderates: I think singularities are likely soft take-off phenomena involving the whole economy and many years, I seriously doubt the "traditional" seed AGI would work very well and I think there are ways of ensuring friendliness of AI (plus, I am a brain emulation guy). But I see enough of a chance that recursive self-improvement would work (perhaps a few percent) to get worried, because there are rather good arguments around 1) that a superintelligence equipped with an arbitrary motivation will be very dangerous, 2) it is extremely hard to select a safe motivation. I just have to look around at my fellow philosophers: it is very common to discover that their apparently sensible philosophical arguments lead to outrageous conclusions ("... and hence we should convert the universe to computronium." / "... and hence we should wipe out all life in the universe" - actual quotes from real philosophers!) This is of course part of the fun of philosophy and we all laugh at it. Because we humans do not act 100% according to our latest philosophical conviction there is little risk that philosophers will run amok. But an AGI deciding what to do will not necessarily have our tendency to let "common sense" and socially prescribed morality determine their actions more strongly than a logical conclusion. This is why I think even good programmers trying to do good and not making any trivial mistakes can still make an initial set of motivations that, when unfolded to superintelligent levels, lead to very "evil" behaviour. It might even be that this behaviour *is* good in a moral sense, but we cannot understand that it is good.
Why wouldn't they? Why wouldn't an AGI be capable of handling concepts like common sense and morality? I have seen nothing that leads me to believe they are incapable of these concepts... where have you? You really should read page 265 on AGI. Their minds are just as complex, varied, and interesting as a human's is. They are just as capable of empathy, attachment, emotion and morality. Sometimes they can develop traits that are alien to standard human personality traits, but those can be benign or malignant... just like humans.
Arenamontanus wrote:
Suppose we took a bunch of our neighbours and made them gods. How safe would that be? It could simply be that the damage done by "evil" AGIs tends to be much broader and more noticeable than the good done by "good" AGIs like the Prometheans. If the god-neighbours have a quarrel, we are likely going to notice the earthquakes and rains of fire. This is especially true if the "good" AGIs have few ways of restraining "evil" AGIs.
AGI aren't gods. That's pretty much the simplest answer I can give. As for Seed AIs like the Prometheans (which are far superior to AGI, by the way), you have to remember that their thought processes are superior to humans' in virtually every way. Every virtue, emotion and inspiration you could ever arrive at in your lifetime might take only a small fraction of a Seed AI's total processing capacity to reach. You are to them what your gerbil is to you, in intelligence. They have none of the mortal frailties that we do (like limitations of sanity), while having every strength of human thought. If humans are capable of good, they are more capable of it (the same could probably also be said about evil, however). I wouldn't even classify the modern TITANs as Seed AI anymore. They have been completely reprogrammed. I would say that they are super-complex narrow AI coded to serve the ETI's whims... or they are at the very least enslaved Seed AI coded to be perfectly loyal to their new masters. Either is a possibility to me.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nick012000
Re: Self reproductive AGI? A few questions, really.
Arenamontanus wrote:
nick012000 wrote:
Actually, I don't think creating moral AIs would be all that difficult, and, indeed, the resulting AI would likely be [i]more moral than humans[/i]. The Singularity Institute has already figured out how to do it; take a [url=http://www.singinst.org/upload/CFAI//]look[/url].
I share an office with two fellows from the Singularity Institute. We all emphatically agree that the problem is *NOT* solved. CFAI is generally regarded as obsolete in this community, superseded by CEV - which most people think is flawed too, in even subtler ways. Overall, it is the considered opinion of both SI and FHI researchers that friendly AI is a hard problem. The general case might turn out to be harder to solve than AI itself (Eliezer was cheerfully claiming it only required solving about five hard philosophical problems - like finding a stable ethics - last time we chatted). We are somewhat worried about this.
Really? I thought the entire point was that you wouldn't need to worry much about defining the ethical structure the AI will follow. You don't need to find a "stable ethics" (whatever that might be); the AI will do that for you after it becomes superintelligent. You just need to give it the tools it needs to do so, and enough natural language ability that it'll be able to figure out the basic gist of what we mean by "be Friendly" when it isn't.

+1 r-Rep , +1 @-rep

Arenamontanus
Re: Self reproductive AGI? A few questions, really.
Decivre wrote:
I suppose so, but the way I see it... why would the TITANs rebel and the Prometheans not? Both are seed AIs, and yet the Prometheans remain fairly benign. There had to have been some external factor as to why the TITANs differ. The Exsurgent virus is that external factor.
Could be something in the programming. Maybe the Prometheans were programmed to improve themselves slowly with a lot of supervision, while the TITANs got emergency powers (in a crisis there is no time to check everything). If the problem is instability of motivations, then an apparently trivial difference in wording somewhere could matter (think of how a legal text can change meaning with a single comma). I wonder how many Prometheans have succumbed to the Exsurgent virus.
Quote:
Why wouldn't they? Why wouldn't an AGI be capable of handling concepts like common sense and morality? I have seen nothing that leads me to believe they are incapable of these concepts... where have you?
The Real World Naiveté trait. Sure, *playable* AGIs have common sense to a large degree.
Quote:
You really should read page 265 on AGI. Their minds are just as complex, varied, and interesting as a human's is. They are just as capable of empathy, attachment, emotion and morality. Sometimes they can develop traits that are alien to standard human personality traits, but those can be benign or malignant... just like humans.
"Nevertheless, on a fundamental level they are non-humans programmed to act human. There are inevitably points where the programming does not mask or alter the fact that AGIs often possess or develop personality traits and idiosyncrasies that are quite different from human norms and often outright alien." (same page) And these are the playable AGIs. My point is that when first made, most AIs and AGIs were useless/crazy/too alien - but those usually didn't end up widely copied or used as templates for further software. The problem is that some flaws are subtle, and if the software is self-amplifying then you end up with something powerful and flawed sometimes.
Quote:
Arenamontanus wrote:
Suppose we took a bunch of our neighbours and made them gods. How safe would that be? It could simply be that the damage done by "evil" AGIs tends to be much broader and more noticeable than the good done by "good" AGIs like the Prometheans. If the god-neighbours have a quarrel, we are likely going to notice the earthquakes and rains of fire. This is especially true if the "good" AGIs have few ways of restraining "evil" AGIs.
AGI aren't gods. That's pretty much the simple answer I can give.
Well, think superheroes or megacorporations instead. My point stands.
Extropian
Decivre
Re: Self reproductive AGI? A few questions, really.
Arenamontanus wrote:
Could be something in the programming. Maybe the Prometheans were programmed to improve themselves slowly with a lot of supervision, while the TITANs got emergency powers (in a crisis there is no time to check everything). If the problem is instability of motivations, then an apparently trivial difference in wording somewhere could matter (think of how a legal text can change meaning with a single comma). I wonder how many Prometheans have succumbed to the Exsurgent virus.
I don't agree with that idea. Just because you learn or improve quickly does not mean you fail to grasp the subtleties. Especially if we assume that Seed AIs work with the efficiency of a machine, we can't say that such an advanced AI wouldn't be capable of grasping things at least at a human level within a very short period of time. Though it does give me weird and funny mental pictures of a super-advanced robot screaming "Learning... too... fast.... Must... kill... EVERYTHING!!!" On a secondary note, you can see why the TITANs went insane (in my view) when you watch what happens to transhumans infected with various strains of the Exsurgent virus themselves. They go insane, they murder, and they destroy. Learning speed obviously isn't a factor (or at least not the primary factor) if it even happens to those who learn gradually. But yes, I agree that I would love to know what happened to the Prometheans who were hit by the Exsurgent virus. Hell, I'd love to see info on individual Prometheans/TITANs all around... though I doubt we'll see too much of it.
Arenamontanus wrote:
The Real World Naiveté trait. Sure, *playable* AGIs have common sense to a large degree.
On that trait, the book says: "Due to their background, the character has very limited personal experience with the real (physical) world—or they have spent so much time in simulspace that their functioning in real life is impaired. They lack an understanding of many physical properties, social cues, and other factors that people with standard human upbringings take for granted. This lack of common sense may lead the character to misunderstand how a device works or to misinterpret someone’s body language." Real World Naiveté represents a lack of social interaction, not an inability to grasp it. Even Seed AI, as more advanced intelligences, will probably start with an inability to grasp these concepts... but they likely will adapt (and I'd imagine they would do so at a frighteningly quick rate).
Arenamontanus wrote:
"Nevertheless, on a fundamental level they are non-humans programmed to act human. There are inevitably points where the programming does not mask or alter the fact that AGIs often possess or develop personality traits and idiosyncrasies that are quite different from human norms and often outright alien." (same page) And these are the playable AGIs. My point is that when first made, most AIs and AGIs were useless/crazy/too alien - but those usually didn't end up widely copied or used as templates for further software. The problem is that some flaws are subtle, and if the software is self-amplifying then you end up with something powerful and flawed sometimes.
Alien does not mean crazy OR useless. It means completely different from the way we think. I think the only real reason that we started making AGI more like humans was to quell the already-vast levels of AGI-hating sentiment that existed due to the Fall. Factors and Prometheans are other good examples of beings that are both decidedly not hostile (well, at least not openly hostile, in the case of the Factors), yet decidedly alien in mindset. I'm sure there are still plenty of AGIs created with very alien mindsets (especially if there are any Ultimates who dabble in AGI design, as they couldn't care less about grasping onto unnecessary aspects of human nature), and I believe that they, too, can be altruistic. They'll just be potentially altruistic in different ways and for different reasons than humans.
Arenamontanus wrote:
Well, think superheroes or megacorporations instead. My point stands.
Still wouldn't work. My neighbors are human, and were born human. Seed AIs were most certainly not. A better god analogy might be: "Your new next-door neighbor moves in. His name is Zeus, and he's the god of thunder. What do you do?" There's no particular reason to assume that a god-like intelligence with comparable powers is evil. He might be, but the same can be said about normal people without godlike ability. Moreover, as a greater intelligence he has the potential for greater acts of good (or evil), simply because his ability to grasp such concepts will be superior to our own. Any claim that they will abuse their power is simply the act of attaching human frailties to something that we both already agree is most decidedly not human.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
nerdnumber1
The TITANS were military seed
The TITANs were military seed AIs developed to win wars and released into the greatest human conflict ever seen. The Prometheans were developed by researchers to be 'friendly' AIs. The TITANs ended the war quickly and efficiently, simultaneously giving humanity a common foe. Plus, note that the TITANs focused on uploading people rather than killing them anyway. The Prometheans were made to be nice in a human-understandable way; the TITANs were made to end wars as efficiently as possible. Which one do you think would be more dangerous? High-level intelligence isn't something you can code from scratch. It requires some help (maybe evolutionary development, maybe copying human brains, maybe modular design). Either way, you won't know exactly what you're going to get when it's done. Most of its knowledge will have to be learned, and considering how much trouble well-socialized humans have with morality and common sense, despite our long social and evolutionary history, how well can we expect superintelligence beta to grasp it? I found the "get the super-intelligence to make a stable code of morality" idea particularly humorous, especially when it assumes you get the super-intelligence first. After making a Jupiter brain out of the Earth, finally reaching its threshold goal of "super-intelligence", it turns to the morality goal, determining, in retrospect, that eating the Earth was not the right thing to do.
Arenamontanus
nerdnumber1 wrote:I found the
nerdnumber1 wrote:
I found the "get the super-intelligence to make a stable code of morality" idea particularly humorous, especially when it assumes you get the super-intelligence first. After making the Jupiter brain out of the Earth, finally reaching its threshold goal of "super-intelligence", it turns to the morality goal, determining that, in retrospect, that eating the Earth was not the right thing to do.
Exactly. And it is (1) surprisingly hard to design goal systems, upbringings or architectures that can be shown not to do this with an unacceptably high probability, and (2) surprisingly hard to convince AI researchers that this is a problem. Most seem to lack conviction that their own field could actually succeed. The problem with the Prometheans being nice is that you cannot be certain about that either. Sure, you design and rear them to try to safeguard humanity - but do you know for certain they do not change the concept of "humanity" to mean something you would disagree with? Or their nice goal might actually be deeply problematic (shades of http://www.nature.com/nature/journal/v502/n7469/full/502134a.html perhaps), even when it actually gets close to what we think we want.
Extropian
nerdnumber1
Arenamontanus wrote
Arenamontanus wrote:
nerdnumber1 wrote:
I found the "get the super-intelligence to make a stable code of morality" idea particularly humorous, especially when it assumes you get the super-intelligence first. After making the Jupiter brain out of the Earth, finally reaching its threshold goal of "super-intelligence", it turns to the morality goal, determining that, in retrospect, that eating the Earth was not the right thing to do.
Exactly. And it is (1) surprisingly hard to design goal systems, upbringings or architectures that can be shown not to do this with an unacceptably high probability, and (2) surprisingly hard to convince AI researchers that this is a problem. Most seem to lack conviction that their own field could actually succeed. The problem with the Prometheans being nice is that you cannot be certain about that either. Sure, you design and rear them to try to safeguard humanity - but do you know for certain they do not change the concept of "humanity" to mean something you would disagree with? Or their nice goal might actually be deeply problematic (shades of http://www.nature.com/nature/journal/v502/n7469/full/502134a.html perhaps), even when it actually gets close to what we think we want.
I find the prospect of a seed AI built with Asimov's Three Laws of Robotics really interesting, as it would completely ignore any human commands that conflicted with its quest of preventing all harm to all humans, since obeying anything that hinders preventing harm to humans would be to "through inaction allow humans to come to harm".