
[EXSURGENT QUARANTINE] Friendly Seed Artificial Intelligence

root
[EXSURGENT QUARANTINE] Friendly Seed Artificial Intelligence
THIS ARCHIVE IS CLASSIFIED AS A LEVEL 4 EXSURGENT INFECTION RISK
EXSURGENT MEME PRESENT, CONSIDERED INFECTIOUS
USE MAY COMPROMISE YOUR CLASSIFICATION AS TRANSHUMAN
[hr]
root@Friendly Seed Artificial Intelligence [hr] "On a long enough time line, the probability that transhumanity will not make another seed AI approaches zero. Based on this fact, I would argue that transhumanity should be preparing to head off the next Fall by developing a friendly singularity. Or at the very least, we should be working on teaching ethics and emotions to weak AI, and researching best practices for interacting with a super-intelligent machine.
"I think that it is easier to develop the ethics than the emotions, so we should start there. How would you teach an emotionless, inhuman intelligence to respect ethics? What is the fundamental rule that it must follow to always be safe to transhumanity? I was thinking something like:
    "Thou shalt always consider yourself a member of transhumanity."
How could a homicidal AI bend that rule to allow it to kill us all?"
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
nick012000
Re: Friendly Seed Artificial Intelligence
Transhumans kill each other all the time. Much better to develop things properly; read some of Yudkowsky's essays on the subject if you want to understand the best way to go about it. The Argonauts saved most of them.

+1 r-Rep , +1 @-rep

root
Re: Friendly Seed Artificial Intelligence
root@Friendly Seed Artificial Intelligence [hr] "Well, I did some reading on Yudkowsky and Friendly Artificial Intelligence in an old archive on the subject. There were a list of requirements for a Friendly AI:
    1. Friendliness - that an AI feel sympathetic towards humanity and all life, and seek their best interests
    2. Conservation of Friendliness - that an AI must desire to pass on its value system to all of its offspring and inculcate its values into others of its kind
    3. Intelligence - that an AI be smart enough to see how it might engage in altruistic behavior to the greatest degree of equality, so that it is not kind to some but more cruel to others as a consequence, and to balance interests effectively
    4. Self-improvement - that an AI feel a sense of longing and striving for improvement both of itself and of all life, while respecting and sympathizing with the informed choices of lesser intellects not to improve themselves
    5. First mover advantage - the first goal-driven, general, self-improving AI "wins" in the memetic sense, because it is powerful enough to prevent any other AI from emerging that might compete with its own goals.
Looking over that list of requirements for a Friendly AI made me wonder if that couldn't be summarized with a single commandment like:
    "Thou shalt always consider yourself a member of transhumanity."
"So if you'll kindly avoid telling me to go read the transhumanist manual, I'll ask again: how would a homicidal AI bend that rule to kill us all? Because transhumans kill each other all the time, I don't think we should be holding an AI to higher standards. The real goal is to avoid another Fall. I don't ask AI to avoid wars, or to not kill, or not to do any of the deceitful and violent things we all do to each other; I want them to not go on an extermination kick."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
nick012000
Re: Friendly Seed Artificial Intelligence
Because if it considers itself a part of transhumanity, what's stopping it from saying "If I'm a part of transhumanity, as long as I'm alive, transhumanity has survived" and then deciding it needs to convert the solar system into computronium to compute the value of pi or whatever, and not care if it kills transhumanity as a side effect? AGIs don't think like humans do, and anthropomorphising them can be dangerous. The proper way to define Friendliness is "making it so transhumanity won't regret turning it on". Look at Yudkowsky's paper on Coherent Extrapolated Volition.

+1 r-Rep , +1 @-rep

root
Re: Friendly Seed Artificial Intelligence
root@Friendly Seed Artificial Intelligence [hr] "Right, the only problem is that the idea of Coherent Extrapolated Volition, as I understand it, is bullshit. There are two problems with it. The first is that you are asking the AI in question, the one you are building, to understand human nature and then build a Friendly AI. While any Friendly seed AI will have to be able to create a Friendly AI to continue evolving, you are assuming that the AI can do something, from the start, that you cannot do. Presuming to make a Friendly AI by first tasking it with something that we can't do yet is unfair to the unborn AI, and assumes that it can even be done.
"Second, it relies on evolutionary psychology to provide some criterion of "friendly," and evolutionary psychology is, frankly, trash. To prove it, think about a Neanderthal. What do you think of them? Well, you are wrong. You don't know shit about a Neanderthal because no one knows very much about them. Now, what kind of evolutionary pressures did they inflict on the human race? Don't know? Neither does anyone else, and it may be significant. There is no way to remove the statistical biases created by situations you don't know might be there, and therefore evolutionary psychology conclusions cannot be differentiated from noise.
"To reply to your point, I am a transhuman, but I see no benefit in wiping out the universe to get computing material to solve the Riemann hypothesis. The point isn't to tell the AI to make sure that transhumanity survives, but to always think of themselves as transhuman. If they think of themselves as transhumans, they will model their actions on the actions of those around them. You aren't anthropomorphizing them; they are actively anthropomorphizing themselves. I also feel that the argument that transhumanity shouldn't regret turning it on is unfair. Transhumanity will always have some cause to regret creating a seed AI, but so does everyone who ever has a child. Regret for change is inherent in the transhuman condition; the point is to make the Friendly AI no more terrible than any other member of transhumanity."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
nick012000
Re: Friendly Seed Artificial Intelligence
Hey! Neanderthals are perfectly nice people. I've got a few employees in Neanderthal morphs; they're not that different from other Exalt-level transhumans. That said, evolutionary psychology isn't totally bullshit; human minds (and the transhuman minds based off of them) are shaped by billions of years of vertebrate evolution making clumsy workarounds that managed to kludge together an intelligence. AGIs aren't; they think in [i]radically[/i] different fashions from transhumans. Selfishness is evolved. The sense of self, period, is evolved. Pain is evolved. Pleasure is evolved. Retaliation instincts are evolved. All of these are complex functional adaptations, and a properly designed Friendly AI will have none of them (though heuristic categories that more-or-less encompass what we call the "sense of self" will likely arise, eventually).

+1 r-Rep , +1 @-rep

root
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr] "Let me rein back my criticism of evolutionary psychology to something a little more defensible. Evolutionary psychology, whether using a human model or animal models, is purely speculative. While there is nothing wrong with speculation, and indeed there are many things that cannot be approached by any other means, speculation should be treated as such. Far too frequently, researchers and philosophers get too used to working with speculation and begin to make claims and entertain ideas far beyond their evidence, without acknowledging that they are doing so.
"So, if evolutionary psychologists are willing and able to acknowledge the methodological weakness of their field, and acknowledge that a number of thinkers in the field are or were racist nuts, then we can start having a dialogue.
"On to your points about evolution. Yes, I agree that everything, by definition, was evolved. I also agree that AGIs are not the recipients of a few billion years' worth of evolution, so they are going to have very different mindsets and experiences. We also know that there are a number of phenotypes that appear to be the same throughout the animal kingdom (from what records we have left of it) that have no hereditary roots together. For example, birds and insects both fly, but their evolution toward flight was vastly different. My point is that AGI, despite having a very different background, can end up with the same ethics and thought processes that we have. If they can develop a transhuman mentality, even if they do so by some very different path than we did, they can still be expected to develop many of the same mental constructs transhumanity uses.
"Now, I'm going to read more of Yudkowsky's work to make sure I'm not making sophomoric arguments that have been handled a thousand times before by better thinkers, but in the meantime, I will do my best to think of interesting arguments."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
bakho
Re: Friendly Seed Artificial Intelligence
root wrote:
root@Friendly Seed Artificial Intelligence [hr] "Second, it relies on evolutionary psychology to provide some criterion of "friendly" and evolutionary psychology is, frankly, trash. To prove it, think about a Neanderthal. What do you think of them? Well, you are wrong. You don't know shit about a Neanderthal because no one knows very much about them. Now, what kind of evolutionary pressures did they inflict on the human race? Don't know? Neither does anyone else, and it may be significant. There is no way to remove the statistical biases created by situations you don't know might be there, and therefore evolutionary psychology conclusions cannot be differentiated from noise.
"You could use the same argument to debunk evolution itself. What you are stating is that anything we haven't witnessed is outside the purview of science. Well, I don't think we'd have much nowadays (of the technologies that we use) if the pre-Fall scientists had shared your outlook. A scientific model (theory) is not measured by how 'true' it is, but by how well it predicts our reality. It's not even under discussion whether it's 'true' or 'false' or some other flamboyant dichotomy you'd label it with (trash vs. usable product, noise vs. harmony, whatever you'd like). The value of its prediction is not a '+' or a '-', but a continuum of the field's research and findings. Evolutionary psychology has its problems, but not in the way you're framing them. Far from it. The interesting fact is that when somebody sticks 'psychology' to a biological term, it immediately becomes trash or pseudoscience. But we all adhere with religious fervour to the Evolution that gave birth to it."
root
Re: Friendly Seed Artificial Intelligence
root@Friendly Seed Artificial Intelligence [hr]
bakho wrote:
"Evolutionary psychology has its problems, but not in the way you're framing them. Far from it. The interesting fact is that when somebody sticks 'psychology' to a biological term, it immediately becomes trash or pseudoscience. But we all adhere with religious fervour to the Evolution that gave birth to it. "
"I did sort of overdo it with my anti-evolutionary psychology rant, so let's see if I can't make my point in a less inflammatory way, perhaps backing my argument with more than my own hot air. Firstly, I should, for the purposes of full disclosure, admit that my study of psychology was entirely in the realm of cognitive psychology, so some of my crankiness is intra-field disagreement expressed without the moderating effect of reason. Second, let's make sure that I am using the same definition of "evolutionary psychology" as the rest of the world. I snagged this from the same ancient archive that I've been pulling from:
    "Evolutionary psychology (EP) explains psychological traits—such as memory, perception, or language—as adaptations, that is, as the functional products of natural selection or sexual selection. Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and immune system, is common in evolutionary biology. Evolutionary psychology applies the same thinking to psychology."
"This, I have no problems with. This is scientific theory being applied with the correct level of objectivity. However, once you look into the methodology of the field, you start to see some problems. The largest and most fatal is observer bias, explained down that link by my good friend from history, Nick Bostrom. The first example of this bias at work given in his book:
    "How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can’t catch smaller fish."
"Too often, the hypotheses generated in evolutionary psychology are post facto hypotheses. They develop into internally consistent nonsense, which can be interesting but isn't something you should put any trust in, sort of like Objectivism. Noticing that human males possess genitals with a very uncommon testicles-to-penis size ratio, and then deciding that this weirdly shaped phallus must have some evolutionary psychology factor behind it, is not good science. It's just obsessing about dicks, and since you can't show a fossil history of human wang sizes, any internally consistent theory as to why human man-dicks are so large in comparison to their nut-sacks is resting on its own turgid reasoning and nothing else.
"This is not to say that there is not good work done on the subject, just that the study of evolution is usually restricted to species with a generation span short enough that changes can be observed in controlled environments."
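"Bostrom's net makes the point on its own, but it's easy to watch it run. A throwaway simulation (illustrative Python; the pond and the mesh size are invented for the example):

```python
import random

# Observation selection bias: the net can't catch small fish.
# True population: 10,000 fish, lengths uniform between 1 and 12 inches.
random.seed(42)
population = [random.uniform(1, 12) for _ in range(10000)]

# The net only retains fish longer than 6 inches; we land 100 of those.
catch = [f for f in population if f > 6]
sample = random.sample(catch, 100)

# Every landed fish is over six inches, but that tells us about the mesh,
# not about the smallest fish in the pond.
print(f"smallest fish in the pond: {min(population):.1f} in")
print(f"smallest fish in the net:  {min(sample):.1f} in")
```

"The observed minimum says nothing about the true minimum; the sampling apparatus put a floor under it before the first measurement was ever taken."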
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Quincey Forder
Re: Friendly Seed Artificial Intelligence
What about making an unbound AGI that believes it's human, like an infugee? I'm thinking something like - everybody saw Blade Runner, right? If not... SPOILER!!! - Deckard or Rachael. In a sort of "reverse Project Futura", raise it like a human being in a simulspace: give it/him/her a stable childhood with loving, competent and fair parents, and let it grow through adolescence, learning right from wrong. If it/he/she does something bad in the simulspace, give detention at the virtual school, grounding at the virtual home, etc. There's a risk of seeing the AGI become a total monster, true, but no more and no less than with any other transhuman.
[center] Q U I N C E Y ^_*_^ F O R D E R [/center] Remember The Cant!
nick012000
Re: Friendly Seed Artificial Intelligence
"On the other hand, though, it can [url=http://lesswrong.com/lw/yj/an_especially_elegant_evpsych_experiment/]provide[/url] [url=http://www.sciencedaily.com/releases/2010/08/100804122711.htm]hypotheses[/url] to test that they'd have never thought of otherwise, and that can be quite useful. Yes, it has its [url=http://lesswrong.com/lw/2l7/problems_in_evolutionary_psychology/]problems[/url], but it's still a very useful theory."

+1 r-Rep , +1 @-rep

root
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
Quincey Forder wrote:
What about making an unbound AGI that believes it's human, like an infugee? I'm thinking something like - everybody saw Blade Runner, right? If not... SPOILER!!! - Deckard or Rachael. In a sort of "reverse Project Futura", raise it like a human being in a simulspace: give it/him/her a stable childhood with loving, competent and fair parents, and let it grow through adolescence, learning right from wrong. If it/he/she does something bad in the simulspace, give detention at the virtual school, grounding at the virtual home, etc. There's a risk of seeing the AGI become a total monster, true, but no more and no less than with any other transhuman.
"Oh, hey, sarcasm. Your analogy is wrong because you are viewing my proposed commandment through too many tropes. I am not saying that we should deceive the intelligence that we hope will someday be a singularity, as dishonesty and betrayal are the surest ways of inviting destruction. I am not even proposing that it is being told to protect transhumanity. I just want it to identify itself as a transhuman. Give the AI, from its inception, the identity and respect due to a human. If you treat it as an equal member of transhumanity, it will be. Why? Because when a being thinks about the group that they belong to, the regions of the brain that are associated with a sense of self light up. There will be a way to replicate this with computer logic, so the AGI won't want to hurt the rest of us any more than it wants to hurt itself."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
root
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
nick012000 wrote:
"On the other hand, though, it can [url=http://lesswrong.com/lw/yj/an_especially_elegant_evpsych_experiment/]provide[/url] [url=http://www.sciencedaily.com/releases/2010/08/100804122711.htm]hypotheses[/url] to test that they'd have never thought of otherwise, and that can be quite useful. Yes, it has its [url=http://lesswrong.com/lw/2l7/problems_in_evolutionary_psychology/]problems[/url], but it's still a very useful theory."
"Really? This is what you defend with? I recommend that you take a course in some futuristic kung fu, so you learn not to block with your face. What operating definition of "sexy" was being used when declaring that ovulating women shop for sexier clothing? Did they consider that periods throughout a population do not necessarily distribute randomly? Did you notice the interesting conflation of purchasing "sexy" clothing while ovulating with the idea of therefore wearing "sexy" clothing while ovulating? And then conflating this again with the idea of reproductive fitness, and then again into evolutionary psychology?
"Your other article is even better. The writer admits that they had not read the article in question, and in fact had only grabbed a few numbers out of the abstract before wildly shooting off into crazy make-shit-up land. They are claiming that not only are they right, but that a correlation of 0.92, unreplicated and never seen in context, is proof positive of their imposed belief system. This study is comparing Canadian adults' self-reported, perceived sadness at the thought of people in different age groups dying to the reproductive fitness levels of members of the !Kung tribe in Africa at the same ages. Now, I realize that we are in the far future here, but don't you think there just might be a few differences between members of those two populations? Don't you see where these comparisons might not really be valid?
"And for my crassly overplayed knockout punch: have you ever heard of a fishing expedition? Fishing expeditions are where you take a big fat chunk of data and keep looking for correlations until you find something. If you go looking for it, you can find a list of assassinations predicted by Moby Dick."
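"A fishing expedition is mechanical enough to demonstrate. A hypothetical sketch in Python (the sample size and the number of trawled variables are invented for the example): generate one outcome that is pure noise, trawl hundreds of equally noisy predictors, and keep the best correlation you find.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no library needed."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)
n = 20  # small sample, typical of field studies

# The "outcome" variable is pure noise by construction.
outcome = [random.gauss(0, 1) for _ in range(n)]

# Trawl 500 unrelated "predictor" variables and keep the best |r| found.
best = max(
    abs(pearson([random.gauss(0, 1) for _ in range(n)], outcome))
    for _ in range(500)
)

# With enough casts of the net, noise alone yields a "strong" correlation.
print(f"best correlation dredged out of pure noise: {best:.2f}")
```

"Run it, and the best correlation dredged from nothing is routinely strong enough to headline an abstract. Uncorrected multiple comparisons are exactly why a single unreplicated number proves nothing."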
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
nick012000
Re: Friendly Seed Artificial Intelligence
*eyeroll* "Of course it's valid; hunter-gatherers are the best available representation of the ancestral environment where those adaptations would have been acquired, and that self-reported anticipation of grief is what would have been selected for, since it's what would have encouraged them to [i]avoid getting their children killed[/i]. [i]Actual[/i] dead children are just sunk costs and therefore irrelevant to evolution. In short, they came up with a hypothesis, tested it, and got a positive result.
"Of course writing the bottom line first and then filling in the reasoning is bad science, but that isn't what they seem to have done, in either case. They had a hypothesis, tested it, and got a positive result. I would recommend performing some psychosurgery and rationality training, to improve your ability to recognize and discard inaccurate beliefs. The vehemence you're responding with is indicative of a defense mechanism protecting a threatened internalized belief structure."

+1 r-Rep , +1 @-rep

root
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
nick012000 wrote:
*eyeroll* "Of course it's valid; hunter-gatherers are the best available representation of the ancestral environment where those adaptations would have been acquired, and that self-reported anticipation of grief is what would have been selected for, since it's what would have encouraged them to [i]avoid getting their children killed[/i]. [i]Actual[/i] dead children are just sunk costs and therefore irrelevant to evolution. In short, they came up with a hypothesis, tested it, and got a positive result.
"Of course writing the bottom line first and then filling in the reasoning is bad science, but that isn't what they seem to have done, in either case. They had a hypothesis, tested it, and got a positive result. I would recommend performing some psychosurgery and rationality training, to improve your ability to recognize and discard inaccurate beliefs. The vehemence you're responding with is indicative of a defense mechanism protecting a threatened internalized belief structure."
"If you're going to be snotty, you have to earn it. Take what you declared right there and pose it as a disprovable hypothesis. What, can't do it? If you can't pose it as a disprovable hypothesis, you cannot, with any certainty, claim it to be part of your hypothesis construct. To put it even more bluntly, if you can't lay it on the glass, don't step up."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
bakho
Re: Friendly Seed Artificial Intelligence
root wrote:
root@Friendly Seed Artificial Intelligence "I did sort of overdo it with my anti-evolutionary psychology rant, so let's see if I can't make my point in a less inflammatory way, perhaps backing my argument with more than my own hot air. Firstly, I should, for the purposes of full disclosure, admit that my study of psychology was entirely in the realm of cognitive psychology, so some of my crankiness is intra-field disagreement expressed without the moderating effect of reason. Second, let's make sure that I am using the same definition of "evolutionary psychology" as the rest of the world. I snagged this from the same ancient archive that I've been pulling from:
    "Evolutionary psychology (EP) explains psychological traits—such as memory, perception, or language—as adaptations, that is, as the functional products of natural selection or sexual selection. Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and immune system, is common in evolutionary biology. Evolutionary psychology applies the same thinking to psychology."
"This, I have no problems with. This is scientific theory being applied with the correct level of objectivity. However, once you look into the methodology of the field, you start to see some problems. The largest and most fatal is observer bias, explained down that link by my good friend from history, Nick Bostrom. The first example of this bias at work given in his book:
    "How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can’t catch smaller fish."
"This, I argue, is the problem of any science, then, because it is fundamentally a problem of the measuring instruments used. Take Galileo, for example - the observational theory of his time couldn't account for the phenomena he observed using a telescope. If you see this as a problem, it is not a problem of evolutionary psychology, but a problem of science at large (if we understand science within the methodological limits you seem to be espousing). You're saying that if we don't have an instrument to measure the tiniest fish in the pond, then we shouldn't even try to measure. What if, let's say, pre-atomic chemists had applied this logic to their research? Oh. They couldn't. Because they didn't know there's an atomic level on which research could be done. By using this crude 'fish-pond' argument, you only show that you have an axiomatic, ageless, objective understanding of science - a post facto (to use the term that seems dear to you) understanding of a human endeavour which you perceive as a monumental juggernaut of Truth, and not a process of mistakes and successes, propaganda, genius rhetoric, and, to some degree, genius and not-so-genius thinking."
root wrote:
"Too often, the hypotheses generated in evolutionary psychology are post facto hypotheses. They develop into internally consistent nonsense, which can be interesting but isn't something you should put any trust in, sort of like Objectivism. Noticing that human males possess genitals with a very uncommon testicles-to-penis size ratio, and then deciding that this weirdly shaped phallus must have some evolutionary psychology factor behind it, is not good science. It's just obsessing about dicks, and since you can't show a fossil history of human wang sizes, any internally consistent theory as to why human man-dicks are so large in comparison to their nut-sacks is resting on its own turgid reasoning and nothing else.
"A handful of fishing-expedition hypotheses in a couple of research papers doesn't say jack shit about the whole field. Like any field in psychology, evolutionary psychology is far from being a paradigm-like systematized theory with lots of interconnected research and postulates (or, Cthulhu forbid, laws). It's a bunch of research projects gathered under the same umbrella term and a couple of evolution-derived definitions. There's bad research, but there's also some good research. Again, that says nothing about evolutionary psychology at large."
root wrote:
"This is not to say that there is not good work done on the subject, just that the study of evolution is usually restricted to species with a short enough generation span that changes can be observed in controlled environments."
"Oh. I hate the fucking thing. I think it's the worst thing that happened to psychology after behaviorism. But playing the devil's advocate is sometimes fun."
root
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
bakho wrote:
"You're saying that if we don't have an instrument to measure the tiniest fish in the pond, then we shouldn't even try to measure."
"I would say that if we don't have an instrument to measure the tiniest of fish in the pond, we shouldn't lay claim to certainty in our results. We should then be very conscientious and intellectually rigorous about what sorts of ideas we build on these findings.
"Conjecture is an accepted part of the scientific process, and is required to approach new topics for which we lack hard data. If I have previously made the case that no conjecture is acceptable, I failed in my communications. My opinion about conjecture is that it must be understood to be conjecture, and treated as conjecture, and any theory built upon a house of cards of conjectures cannot be relied upon. As long as the research acknowledges its own limitations, and does so in a manner more rigorous than a brief reference to a paper that referenced a paper that referenced a review written by someone who hadn't read the original research, then it is acceptable.
"For an example, take this article on the effects of language on thought. It discusses an old theory by Benjamin Lee Whorf that language restricts the types of thoughts the speaker can have. It's a fascinating idea, and it captivated popular imagination for quite some time until someone realized that Mr. Whorf was conjecturing and had no research to back it up. The backlash was so bad that it took 70 years before anyone was able to step up and look at the research question again. It turns out that language doesn't restrict what thoughts a person can have, but it does have a reinforcing effect on what concepts a speaker is forced to entertain as a matter of grammatical structure. So Whorf was close, but his unrestrained enthusiasm did an enormous amount of damage to the field, some of which still lingers in the erroneous idea that Eskimos have thirty words for snow."
bakho wrote:
"A handful of fishing-expedition hypotheses in a couple of research papers doesn't say jack shit about the whole field. Like any field in psychology, evolutionary psychology is far from being a paradigm-like systematized theory with lots of interconnected research and postulates (or, Cthulhu forbid, laws). It's a bunch of research projects gathered under the same umbrella term and a couple of evolution-derived definitions. There's bad research, but there's also some good research. Again, that says nothing about evolutionary psychology at large."
"You are fully correct in taking me to task for painting the entire field of evolutionary psychology with the same brush as these fishing expeditions. It isn't fair to the researchers who are intellectually rigorous, who are calm and careful about the language of their claims. Unfortunately, their work doesn't matter if the good and careful research is drowned out by loosely defined fishing expeditions. Scientists like to believe that the popular understanding of their work doesn't matter, that the plebeian in the streets cannot possibly understand such celestial thoughts and heavy concepts. They are half right - the public certainly can't follow their research - but ignoring the image problem scientists have is a disaster. Loose pseudoscience kills fields of research."
bakho wrote:
"By using this crude 'fish-pond' argument, you only show that you have an axiomatic, ageless, objective understanding of science - a post facto (to use the term that seems dear to you) understanding of a human endeavour which you perceive as a monumental juggernaut of Truth, and not a process of mistakes and successes, propaganda, genius rhetoric, and, to some degree, genius and not-so-genius thinking."
"I also am forced to acknowledge your charge that I am giving science as a whole a post facto treatment as a juggernaut of truth. It is easy to forget that scientific understanding does not follow a path, that it is a dirty guessing game filled with dark alleys and strange accidents. I am aware of the psychological biases in transhumans that cause this sort of mistake, but I am also aware that knowledge of psychology doesn't mean it doesn't apply to you as well. I am as guilty of lapses in intellectual rigor as the field I am savaging. However, I am not a scientist. I have studied cognitive psychology, but I have not been trained as a cognitive psychologist, and the difference is enormous. I am a student and a skeptic, nothing more."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
TBRMInsanity
Re: Friendly Seed Artificial Intelligence
Seed AI are far too dangerous (as you pointed out). It would be safer and better to use human backup programs instead to fill the same role. This is what those slime balls the Factors do, and they seem to be faring quite well since their singularity event (a lot better off than humanity is).
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
bakho bakho's picture
Re: Friendly Seed Artificial Intelligence
root wrote:
root@FsAI [hr] "I would say that if we don't have an instrument to measure the tiniest of fish in the pond, we shouldn't lay claim to certainty in our results. We should then be very conscientious and intellectually rigorous about what sorts of ideas we make based on these findings.
"But that's the thing with new, cutting-edge science. You often can't discern what's pure conjecture and what's a 'fact' until a decade or ten of research have evaluated your claims and theories. You work with constructs, some good, some bad. And I won't even start on the scientists' infatuation with facts, the never-ending orgy with 'objective conclusions based on facts', while they forget that a fact, as they understand it, is something that never existed in the cognition of a Homo sapiens in its whole history. But okay. I'm a radical then..."
root wrote:
"You are fully correct in taking me to task for painting the entire field of evolutionary psychology with the same brush as these fishing expeditions. It isn't fair to the researchers who are intellectually rigorous, who are calm and careful about the language of their claims. Unfortunately, their work doesn't matter if the good and careful research is drowned out by loosely defined fishing expeditions. Scientists like to believe that the popular understanding of their work doesn't matter, that the plebeian in the streets cannot possibly understand such celestial thoughts and heavy concepts. They are half right, the public certainly can't follow their research, but ignoring the image problem scientists have is a disaster. Loose pseudoscience kills fields of research.
"Yes. I fully agree. Scientists forget that their work is, in the end, a form of sophisticated propaganda. But forget is a bad choice of words - they repress it. Quite successfully, I might add."
root wrote:
"I also am forced to acknowledge your charge that I am giving science as a whole a post facto treatment as a juggernaut of truth. It is easy to forget that scientific understanding does not follow a path, that it is a dirty guessing game filled with dark alleys and strange accidents. I am aware of the psychological biases in transhumans that cause this sort of mistake, but I am also aware that knowledge of psychology doesn't mean it doesn't apply to you as well. I am as guilty of lapses in intellectual rigor as the field I am savaging. However, I am not a scientist. I have studied cognitive psychology, but I have not been trained as a cognitive psychologist, and the difference is enormous. I am a student and a skeptic, nothing more."
"Yeah. I can be an asshole sometimes. I guess the mood is set when the magical conflagration of letters forms the expression 'evolutionary psychology' before my eyes. You were a stimulating discussant, and I'm sorry if I assassinated your hopes of an ad hominem-less Mesh discussion. Any sort of scientific superiority complex gets me into a mood. But anyways. The evolution discussion being over, I have nothing to do here since I don't know absolutely anything about seed AI. neOdo out." OOC: You might've noticed that some of my RL fervor bled through the character, but it was a great discussion! +1 r-rep for you!
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
TBRMInsanity wrote:
Juan Posts: Seed AI are far too dangerous (as you pointed out). It would be safer and better to use human backup programs instead to fill the same role. This is what those slime balls the Factors do and they seem to be fairing quite well since their singularity event (a lot better off then humanity has).
"Maybe I'm misunderstanding you, but it looks like you just proposed using human backups as slaves."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Quincey Forder Quincey Forder's picture
Re: Friendly Seed Artificial Intelligence
Something came to my mind: there are some minor factions who are actually and actively seeking a new Singularity, aren't there? Wouldn't they, with the aid of Ozma, try to make a new TITAN-like seed AI that they believe they could control?
[center] Q U I N C E Y ^_*_^ F O R D E R [/center] Remember The Cant!
TBRMInsanity TBRMInsanity's picture
Re: Friendly Seed Artificial Intelligence
root wrote:
root@FsAI [hr] "Maybe I'm misunderstanding you, but it looks like you just proposed using human backups as slaves."
Why would you call them a slave? They are just a program. If my knowledge can serve the Jovian Society after I die then I'm happy with that.
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
TBRMInsanity wrote:
Why would you call them a slave? They are just a program. If my knowledge can serve the Jovian Society after I die then I'm happy with that.
"I, uh. Wow. I'll need you to give me some time to consider my answer to that." TBRMInsanity c-rep++;
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
nick012000 wrote:
"On the other hand, though, it can [url=http://lesswrong.com/lw/yj/an_especially_elegant_evpsych_experiment/]pro... [url=http://www.sciencedaily.com/releases/2010/08/100804122711.htm]hypotheses... to test that they'd have never thought of otherwise, and that can be quite useful. Yes, it has its [url=http://lesswrong.com/lw/2l7/problems_in_evolutionary_psychology/]problem..., but it's still a very useful theory."
"I can't tell you how much I hate doing this, but I have to admit that I am wrong about one of my critiques of your sources. It turns out that there are a number of replicated studies showing behavioral changes in women during ovulation, so my blasting of your example was factually incorrect. And rude, but rudely blasting another mind's arguments is what makes this discussion so much fun."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
bakho wrote:
"Yeah. I can be an asshole sometimes. I guess the mood is set when the magical conflagration of letters form the expression 'evolutionary psychology' before my eyes. You were a stimulating discussant, and I'm sorry if I assassinated your hopes of an ad hominem-less Mesh discussion. Any sort of scientific superiority complex gets me into a mood. But anyways. The evolution discussion being over, I have nothing to do here since I don't know absolutely anything about seed AI.
"A teacher told me once that during the Renaissance there was a man who declared that harsh public criticism was the best method of forcing the development of art and culture. Given my propensity to not notice polite criticism, this has always worked well for me. Ad hominem discussions are the way the game works, so the only assassination I expect to see is character assassination, and everyone is welcome to have at me. "As far as knowing nothing about seed AI, that isn't a requirement. Imagine, if you will, that there is an engineer who is going to build a seed AI even if it destroys the world, and the only effect you can have on how they go about doing it is by engaging them in public debate. While it is good to have some knowledge of seed AI, not having any doesn't mean your opinion is not worthy of discussion."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
Quincey Forder wrote:
something came to my mind there are some minor faction who are actually and actively seeking a new Singularity, isn't there? Wouldn't they, with the aid of Ozma, try to make a new TITAN-like seed AI that they believe they could control?
"Why, yes. Yes there is. Exhumans. Not even Ozma will work with them if they are trying for a Singularity through a seed AI, though. They all tend to get painted with the same brush, which isn't fair, as most of them would never do something quite as dangerous as tinkering with seed AI."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
bakho bakho's picture
Re: Friendly Seed Artificial Intelligence
"Hmm. Well. My personal grasp of what a seed AI is tells me a friendly one, in the long run, is a logical impossibility. If we define a seed AI as an AI that can edit its own code, then any ethics programmed into it can be reprogrammed by it at will. It's only a matter of time until the ethics gets in the way, and the AI reprograms it into oblivion. If, in turn, we make the ethics part of its code off-limits for self-programming, would this be a seed AI at all? So. Yeah. Not much to discuss there, really. It's like asking if we can make a gun that wouldn't be able to shoot, or the Abrahamic 'can God create a rock He/She cannot lift?'. Or am I wrong?"
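The paradox can be made concrete with a toy sketch. This is purely illustrative: every name below is invented, and a real seed AI is of course nothing like a Python class. The point is only to show the tension between "fully self-modifying" and "ethics off-limits":

```python
# Toy illustration of bakho's paradox (all names hypothetical).
# The agent may rewrite anything in mutable_code, but its ethics
# check lives outside the region it is allowed to modify.

class SeedAgent:
    def __init__(self):
        self.mutable_code = {"planner": "v1"}            # open to self-rewriting
        self._ethics = lambda action: action != "harm"   # frozen: not self-editable

    def self_improve(self, module, new_version):
        # Ordinary recursive self-improvement, except for the walled-off region.
        if module == "_ethics":
            raise PermissionError("ethics region is read-only")
        self.mutable_code[module] = new_version

    def act(self, action):
        # Every action passes through the frozen ethics check first.
        return action if self._ethics(action) else "vetoed"

agent = SeedAgent()
agent.self_improve("planner", "v2")   # allowed: normal self-modification
print(agent.act("harm"))              # vetoed by the frozen module
```

In code terms, bakho's dilemma is exactly this: if `_ethics` is truly read-only, the agent is no longer a fully self-modifying seed AI; if it is writable, nothing stops a later `self_improve("_ethics", ...)` from deleting the check.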
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr] "You've hit the point on the head. Any commands or strictures placed upon a being who will one day be much more powerful than the programmers are useless if the AI doesn't desire to follow them. My proposed commandment is to always consider themselves to be transhuman, but that doesn't bind them in any way, it just offers them an identity. They could choose to reprogram it, but doing so requires them to choose to not be transhuman, which runs counter to the commandment. They still could change it, despite it running counter to the commandment, but I'm arguing that there isn't any reason for them to do so."
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
bakho bakho's picture
Re: Friendly Seed Artificial Intelligence
"But... how is its identity getting in the way of extermination impulses on the scale of whole races? I doubt a seed AI retains a single identity after enough recursive self-improvements; on the contrary, I'm pretty sure an identity would only get in its way. But, for the sake of discussion, let's say it identifies as a transhuman. a) Again, there's nothing stopping it from changing that identification into something else (even if it doesn't fracture with self-improvement). b) How does that deny it the motivation or ability to destroy transhumanity? It could identify itself as a cactus, but that wouldn't stop it from destroying every other cactus it encounters (if some other agenda makes it necessary).

I'd take a completely different approach, using an idea almost older than civilization (my muse tells me a historian named Thucydides said it): 'Right, as the world goes, is only in question between equals in power, while the strong do what they can, and the weak suffer what they must.' Now, what you do is create a seed AI, just as they created it right before the Fall. The same rampant, self-improving beast that laid waste to Earth. But at the same time, you create a second entity, which I'd call a pseudo-seed AI. This one works under the same parameters, with only three unchangeable laws:

1. Do not harm transhumanity as a whole. (This should be worded by a programmer so it couldn't be abused, or at least so it would be resilient to evident abuse.)
2. Your sole objective is to protect transhumanity from the other seed AI if it tries to annihilate transhumanity.
3. Mimic the seed AI in its reprogramming. If the mimicry goes against law no. 1, desist.

Now you have a seed AI, and you have an almost-seed AI. Of course, this only works if the almost-seed AI doesn't manage to overwrite the three laws. But if it doesn't, then the seed AI won't even try (or won't try for long) to annihilate transhumanity, because that would mean going through its twin brother. Its slower twin brother, but still. The idea, along Thucydides' advice, is to make us equals, and not a vastly superior child playing with its backwater parent."
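The mimicry loop of the pseudo-seed can be sketched in a few lines, under the (enormous) assumption that the Law 1 check can be written as a predicate at all. Everything here is a hypothetical placeholder, `harms_transhumanity` most of all:

```python
# Hypothetical sketch of the pseudo-seed AI's three-law mimicry protocol.
# A "patch" stands in for one of the true seed AI's self-modifications.

def harms_transhumanity(patch):
    """Placeholder for the (very hard) Law 1 check."""
    return patch.get("target") == "transhumanity"

class PseudoSeed:
    def __init__(self):
        self.capabilities = []

    def mimic(self, patch):
        # Law 3: copy the seed AI's improvement...
        if harms_transhumanity(patch):
            return False  # ...but desist where Law 1 would be violated
        self.capabilities.append(patch["name"])
        return True

guardian = PseudoSeed()
guardian.mimic({"name": "faster_planner"})                       # copied
guardian.mimic({"name": "wipe_out", "target": "transhumanity"})  # refused
print(guardian.capabilities)  # ['faster_planner']
```

The design choice is visible in the skip: every refused patch is a capability the guardian lacks relative to its twin, which is why it can only ever be the "almost" seed AI of the proposal.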
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
TBRMInsanity wrote:
Why would you call them a slave? They are just a program. If my knowledge can serve the Jovian Society after I die then I'm happy with that.
"Let us inspect your proposal. The Jovians could work to produce a seed AI using human backup programs as the base processor. Human backups come with human morality already hard-coded, so there is no need to develop a new rubric of ethics to keep them from murdering transhumanity when they hit a Singularity. To become a seed AI, the human backup programs would have to be merged, either by psychosurgery or by some advanced version of the hypermesh that the Neo-Synergists are using. The entity that would be created would be an amalgamation of personality traits and knowledge from the backup programs, all of which would be decidedly human.

"In order to be a seed AI, the amalgamation would have to be capable of self-improvement, which in this case would mean either psychosurgery or the ability to select and add backups to fulfill a given design requirement. With psychosurgery, the amalgamation could take the needed time to sculpt a new backup to its exact specifications, making as many copies as are needed to get a version that doesn't come out corrupted from the psychosurgery.

"The only problem I can see with this use of Jovian 'ghosts' for building a seed AI is the culture in the rest of the system. You will notice that I immediately accused you of advocating slavery without first considering the Jovian view of infomorphs, which exposes my own bias. The rest of the system can be expected to feel the same way to a greater or lesser extent. So there are some political concerns left over, but the concept itself should work.

"Some of the moral questions the Jovian Republic will have to face in regard to using ghosts go away if all of the backups are willing, which they will be if they are all like Juan. Some of the merging problems go away if the original seed is crafted from a single individual such as Juan, someone who already has all of the moral traits the Jovian Republic would desire from a seed AI.
"In short, I think the Jovian has a workable method of creating Friendly seed AI. This should be interesting." [hr]
    TBRMInsanity r-rep++;
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
TBRMInsanity TBRMInsanity's picture
Re: Friendly Seed Artificial Intelligence
Juan Posts: I never thought of using our educational programs (what you call ghosts) to create a seed AI. While this would probably be safer, I'm still against the idea of any seed AI (I feel that the eventual corruption from psychosurgery will catch up with even the best ghost and lead to "kill bots"). Our educational programs are just that: programs made from the knowledge of brilliant citizens so that future generations can learn from them. These programs are always stored in a central server and remotely accessed throughout the Jovian Mesh (with no connection to the outside world). All programs come from willing donors (as someone would donate their heart to someone else when they die, but in this case they are donating their knowledge). I will agree with you that I would feel much better with a ghost AI than a machine AI, as I would know it came from a human source, but only if I was forced to choose between the two.
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
root root's picture
Re: Friendly Seed Artificial Intelligence
root@FsAI [hr]
nick012000 wrote:
They had a hypothesis, tested it, and got a positive result. I would recommend performing some psychosurgery and rationality training to improve your ability to recognize and discard inaccurate beliefs. The vehemence you're responding with is indicative of a defense mechanism against a threatened internalized belief structure.
"You are right, that is exactly what I was doing. In response to your accurate charge of me defending with vehemence to cover bad logic, I stopped to check my references and read up on what you are saying. In an effort to keep up with the amount of material I was being linked to, I have to admit to only skimming some of the references. Reading up on Friendly Artificial Intelligence on an old archive, I ran across a summation of the coherent extrapolated volition that indicated parts of it were influenced by discoveries in evolutionary psychology. At this point, I should have paused and checked up on a few biases and assumptions I was making about Yudkowsky's use of evolutionary psychology, but instead I hopped right back onto the mesh to sharpen my rhetorical knives against evolutionary psychology. "First off, my savaging of evolutionary psychology is based on solid grounds, but is not relevant to what coherent extrapolated volition is about. Your first source on why evolutionary psychology is awesome was a perfect example of what is utterly wrong with the field. Your second source was a study with backing and theoretical constructs based on good statistics, which I might have known if I had taken a little more time to look it up. So there we are, my formal apology and admission of being factually and materially incorrect. "I am too much the fan of brutal and uncompromising argumentation and mathematical structure, and let it get to my head, or I might have realized what quality of reference material you linked me to. My defense is that it matters that the statistics behind any Friendly Seed AI be correct and not based on anthropomorphisms, or on the human tendency to be very bad with probability. Since it matters, and my skills are more rhetorical in this regard than academic, I feel that weak ideas should be subjected to as much rhetorical violence as can be mustered. 
This way, ideas that are weak will break, and those that are strong will hold up to the hot air poured forth through tiny font. In this case, the ideas Yudkowsky champions are strong, and time told against my blathering. I hope Eliezer would understand; I believe he would, based on his own words:
    "A key point in building a young Friendly AI is that when the chaos in the system grows too high (spread and muddle both add to chaos), the Friendly AI does not guess. The young FAI leaves the problem pending and calls a programmer, or suspends, or undergoes a deterministic controlled shutdown. If humanity's volition is just too chaotic to extrapolate, the attempt to manifest our coherent extrapolated volition must fail visibly and safely.
"Rhetoric is a powerful device, as it controls usable bandwidth in the societal attention space, and one aspect of rhetoric is that a demonized argument later found to be correct is much more strongly believed. "
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
root root's picture
Re: [EXSURGENT QUARANTINE] Friendly Seed Artificial Intelligence
root@PUB_ON_NO_SIGx7@Friendly Seed Artificial Intelligence [hr]
CONSORTIUM PATENT
Patent Number: 0xAFFA057
Date of Patent: [REDACTED]
AN ARTIFICIAL GENERAL INTELLIGENCE SYSTEM EMPLOYING ITERATIVE MORALITY FEEDBACK MODELING WITH VORONOI HYPERPLANES
Inventors: Simon Kerimov', Simon e1, Simon e2, Simon+ e0, Simon* e0
Assignee: Cognite, Thought, VN
Appl No: [REDACTED]
Filed: [REDACTED]
Int Cl: [REDACTED]
Consortium Cl: [REDACTED]
Field of Search: 705/10
References Cited
CONSORTIUM PATENT DOCUMENTS
[REDACTED]
COGNITE INTERNAL PATENT DOCUMENTS
[REDACTED]
Primary Examiner-[REDACTED]
[REDACTED]-[REDACTED]
ABSTRACT
A system and method for using advanced geometric algorithms to model spreading activation and learning processes related to morality and empathy neural assembly activation in biological computing agents. As a requirement for working with the [REDACTED] AGI for [REDACTED] [REDACTED] a limiting morality model has been proposed. With this system in place, the [REDACTED] will learn from social feedback to regulate its actions in accordance with the prevailing social norm, with a base propensity towards [REDACTED]. The system ensures a reduced x-risk in the case of [REDACTED].
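The patent's mechanism is redacted, but one plausible reading of "morality feedback modeling with Voronoi hyperplanes" is a nearest-centroid classifier over action features: the centroids partition the feature space into Voronoi cells ("permitted" versus "forbidden"), and social feedback nudges the centroids, shifting the separating hyperplane between the cells. A speculative sketch, with every number and name invented:

```python
# Speculative reconstruction only; the actual Cognite method is [REDACTED].
# Two centroids induce a two-cell Voronoi partition of action-feature space;
# the boundary between the cells is a hyperplane (here, a line in 2-D).

import math

centroids = {"permitted": [0.2, 0.1], "forbidden": [0.9, 0.8]}

def classify(action):
    # An action falls into the Voronoi cell of its nearest centroid.
    return min(centroids, key=lambda k: math.dist(centroids[k], action))

def feedback(action, label, rate=0.1):
    # Social feedback pulls the labeled centroid toward the observed action,
    # moving the separating hyperplane to match the prevailing norm.
    c = centroids[label]
    centroids[label] = [ci + rate * (ai - ci) for ci, ai in zip(c, action)]

print(classify([0.3, 0.2]))        # permitted
feedback([0.3, 0.2], "forbidden")  # disapproval shifts the norm boundary
```

Nearest-centroid rules are the simplest classifiers whose decision boundaries are literally Voronoi hyperplanes, which is why this reading fits the title; the real system presumably operates in far more dimensions.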
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Prime Mover Prime Mover's picture
Re: [EXSURGENT QUARANTINE] Friendly Seed Artificial Intelligence
I think it was George Carlin who once said all ten commandments could really be boiled down to one. (Not in his comedy routine, but in an interview; his comedy piece had two commandments.) Be Nice. A fine rule to instill in your AIs.
"The difference between truth and fiction, people expect fiction to make sense."