THIS ARCHIVE IS CLASSIFIED AS A LEVEL 4 EXSURGENT INFECTION RISK
EXSURGENT MEME PRESENT, CONSIDERED INFECTIOUS
USE MAY COMPROMISE YOUR CLASSIFICATION AS TRANSHUMAN
[hr]
root@Friendly Seed Artificial Intelligence
[hr]
"On a long enough time line, the probability that transhumanity will not make another seed AI approaches zero. Based on this fact, I would argue that transhumanity should be preparing to head off the next Fall by developing a friendly singularity. Or at the very least, we should be working on teaching ethics and emotions to weak AI, and researching best practices for interacting with a super-intelligent machine.
"I think that it is easier to develop the ethics than the emotions, so we should start there. How would you teach an emotionless, inhuman intelligence to respect ethics? What is the fundamental rule that it must follow to always be safe to transhumanity? I was thinking something like: - "Thou shalt always consider yourself a member of transhumanity."
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
+1 r-Rep , +1 @-rep


root@Friendly Seed Artificial Intelligence
[hr] "Well, I did some reading on Yudkowsky and Friendly Artificial Intelligence in an old archive on the subject. There were a list of requirements for a Friendly AI:1. Friendliness - that an AI feel sympathetic towards humanity and all life, and seek for their best interests 2. Conservation of Friendliness - that an AI must desire to pass on its value system to all of its offspring and inculcate its values into others of its kind 3. Intelligence - that an AI be smart enough to see how it might engage in altruistic behavior to the greatest degree of equality, so that it is not kind to some but more cruel to others as a consequence, and to balance interests effectively 4. Self-improvement - that an AI feel a sense of longing and striving for improvement both of itself and of all life as part of the consideration of wealth, while respecting and sympathizing with the informed choices of lesser intellects not to improve themselves 5. First mover advantage - the first goal-driven general self-improving AI "wins" in the memetic sense, because it is powerful enough to prevent any other AI emerging, which might compete with its own goals.
Looking over that list of requirements for a Friendly AI made me wonder if that couldn't be summarized with a single commandment like:"Thou shalt always consider yourself a member of transhumanity."
"So if you'll kindly avoid telling me to go read the transhumanist manual, I'll ask again: "How would a homicidial AI bend that rule to kill us all"? Because transhumans kill each other all the time, I don't think we should be holding an AI to higher standards. The real desire to to avoid a Fall. I don't ask AI to avoid wars, or to not kill, or not not do any of the deceitful and violent things we all do to each other, I want them to not go on an extermination kick."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]+1 r-Rep , +1 @-rep


root@Friendly Seed Artificial Intelligence
[hr] "Right, the only problem is that the idea of Coherent Extrapolated Volition, as I understand it to be, is bullshit. There are two problems with it. The first is that you are asking the AI in question, the one you are building, to be able to understand human nature and then build a Friendly AI. While any Friendly seed AI will have to be able to create a Friendly AI to continue evolving, you are assuming that the AI can do something, from the start, that you cannot do. Presuming to make a Friendly AI by first tasking it with something that we can't do yet is unfair to the unborn AI, and assumes that it can even be done. "Second, it relies on evolutionary psychology to provide some criterion of "friendly" and evolutionary psychology is, frankly, trash. To prove it, think about a Neanderthal. What do you think of them? Well, you are wrong. You don't know shit about a Neanderthal because no one knows very much about them. Now, what kind of evolutionary pressures did they inflict on the human race? Don't know? Neither does anyone else, and it may be significant. There is no way to remove the statistical biases created by situations you don't know might be there, and therefore evolutionary psychology conclusions cannot be differentiated from noise. "To reply to your point, I am a transhuman, but I see no benefit of wiping out the universe to get computing material to solve the Reimann hypothesis. The point isn't to tell the AI to make sure that transhumanity survives, but to always think of themselves as transhuman. If they think of themselves as transhumans, they will model their actions on the actions of those around them. You aren't anthropomorphizing them, they are actively anthropomorphizing themselves. I also feel that the argument that transhumanity shouldn't regret turning it on is unfair. Transhumanity will always have some cause to regret creating a seed AI, but so does everyone who ever has a child. 
Regret for change is inherent in the transhuman condition, the point it to make the Friendly AI no more terrible than any other member of transhumanity."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]+1 r-Rep , +1 @-rep


root@FsAI
[hr] "Let me reign back my criticism of evolutionary psychology to something a little more defensible. Evolutionary psychology, whether using a human model or animal models, is purely speculative. While there is nothing wrong with speculation, and indeed there are many things that cannot be approached by any other means, speculation should be treated as such. Far too frequently researchers and philosophers get too used to working with speculation and begin to make claims and think ideas far beyond their evidence, and don't acknowledge it that they are doing so. "So, if evolutionary psychologists are willing and able to acknowledge the methodological weakness of their field, and acknowledge that a number of thinkers in the field are or were racist nuts, then we can start having a dialogue. "On to your points about evolution. Yes, I agree that everything, by definition, was evolved. I also agree that AGIs are not the recipients of a few billion years worth of evolution, so they are going to have very different mindsets and experiences. We also know that there are a number of phenotypes that appear to be the same throughout the animal kingdom (from what records we have left of it) that have no hereditary roots together. For example, birds and insects both fly, but the evolution towards flight was vastly different. My point is that AGI, despite having a very different background, can end up with the same ethics and thought processes that we have. If they can develop a transhuman mentality, even if they do so by some very different path than we did, they can still be expected to develop many of the same mental constructs transhumanity uses. "Now, I'm going to read more of Yudkowsky's work to make sure I'm not making sophomoric arguments that have been handled a thousand times before by better thinkers, but in the mean time, I will do my best to think of interesting arguments.@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@Friendly Seed Artificial Intelligence
[hr] "I did sort of overdo it with my anti-evolutionary psychology rant, so lets see if I can't make my point in a less inflammatory way, perhaps with me backing my argument with more than my own hot air. Firstly, I should, for the purposes of full disclosure, admit that my study of psychology was entirely in realm of cognitive psychology, so some of my crankiness is intra-field disagreement expressed without the moderating effect of reason. Second, let's make sure that I am using the same definition of "evolutionary psychology" as the rest of the world. I snagged this from the same ancient archive that I've been pulling out of:"Evolutionary psychology (EP) explains psychological traits—such as memory, perception, or language—as adaptations, that is, as the functional products of natural selection or sexual selection. Adaptationist thinking about physiological mechanisms, such as the heart, lungs, and immune system, is common in evolutionary biology. Evolutionary psychology applies the same thinking to psychology."
"This, I have no problems with. This is scientific theory being applied with correct level of objectivity. However, once you look into the methodology of the field, you start to see some problems. The largest and most fatal being the observer bias, explained down that link by my good friend from history, Nick Bostrom. The first example of this bias at work given in his book:"How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can’t catch smaller fish."
"Too often, the hypotheses generated in evolutionary psychology are post facto hypotheses. They develop into internally consistent nonsense, which can be interesting but isn't something you should put any trust in, sort of like Objectivism. Noticing that human males possess genitals with a very uncommon testicles to penis size ratio, and then deciding that this weirdly shaped phallus must have some evolutionary psychology factor is not good science. Its just obsessing about dicks, and since you can't show a fossil history of human wang sizes, any internally consistent theory as to why human man-dicks are so large in comparison to their nut-sacks is resting on its own turgid reasoning and nothing else. "This is not to say that there is not good work done on the subject, just that the study of evolution is usually restricted to species with a short enough generation span that changes can be observed in controlled environments."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]+1 r-Rep , +1 @-rep


root@FsAI
[hr] "Oh, hey, sarcasm. Your analogy is wrong because you are viewing my proposed commandment through too many tropes. I am not saying that we should deceive the intelligence that we hope will someday be a singularity, as dishonesty and betrayal are the surest ways of inviting destruction. I am not even proposing that it is being told to protect transhumanity. I just want it to identify itself as a transhuman. Give the AI, from its inception, the identity and respect due to a human. If you treat it as an equal member of transhumanity, it will be. Why? Because when a being thinks about the group that they belong to, the regions of the brain that are associated with a sense of self light up. There will be a way to replicate this with computer logic, so the AGI won't want to hurt the rest of us any more than it wants to hurt itself."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "Really? This is what you defend with? I recommend that you take a course in some futuristic kung fu, so you learn to not block with your face. What operating definition of "sexy" was being used when declaring that ovulating women shop for sexier clothing? Did they consider that periods throughout a population do not necessarily distribute randomly? Did you notice the interesting conflation of purchasing "sexy" clothing while ovulating, and the idea of therefore wearing "sexy" clothing while ovulating? And then conflating this again with the idea of reproductive fitness, and then again into evolutionary psychology? "Your other article is even better. The writer not only admits that they had not read the article in question, and in fact had only grabbed a few numbers out of the abstract and wildly shot off into crazy make-shit-up land. They are claiming that not only are they right, but that a correlation of 0.92, unreplicated or even seen in context, is proof positive of their imposed belief system. This study is comparing Canadian adults' self-reported, perceived sadness at the thought of people in different age groups dying to the reproductive fitness levels of members of the !Kung tribe in Africa at the same age. Now I realize that we are in the far future here, but don't you think there just might be a few differences between members of those two populations? Don't you see where these comparisons might not really be valid? "And for my crassly overplayed knockout punch: have you ever heard of a fishing expedition? Fishing expeditions are where you take a big fat chunk of data and keep looking for correlations until you find something. If you go looking for it, you can find a list of assassinations predicted by Moby Dick.@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]+1 r-Rep , +1 @-rep
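The fishing expedition is easy to reproduce. A minimal sketch (the sample size, predictor count, and seed are all arbitrary): screen a few thousand pure-noise variables against one pure-noise outcome, and a correlation in the neighborhood of the 0.92 above falls out by chance alone.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One small outcome series and 5,000 candidate predictors, all pure noise:
# by construction, nothing here is related to anything else.
n = 10
outcome = [random.gauss(0, 1) for _ in range(n)]
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(5_000)]

# The expedition: test every predictor and report only the best catch.
best = max(abs(pearson(p, outcome)) for p in predictors)
print(f"best |r| found in pure noise: {best:.2f}")

# With enough unreported comparisons, a "strong" correlation is
# nearly guaranteed even though there is no signal at all.
assert best > 0.8
```

A single pre-registered test of one hypothesis would not behave this way; it is the unreported multiplicity, five thousand silent tries, that manufactures the headline number.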


root@FsAI
[hr] "If you're going to be snotty, you have to earn it. Take what you declared right there and pose it as a disprovable hypothesis. What, you can't? If you can't pose it as a disprovable hypothesis, you cannot, with any certainty, claim it as part of your hypothesis construct. To put it even more bluntly: if you can't lay it on the glass, don't step up."

root@FsAI
[hr] "I would say that if we don't have an instrument to measure the tiniest of fish in the pond, we shouldn't lay claim to certainty in our results. We should then be very conscientious and intellectually rigorous about what sorts of ideas we make based on these findings. "Conjecture is a allowed part of the scientific process, and is required to approach new topics that we lack hard data for. If I have previously made the case that no conjecture is acceptable, I failed in my communications. My opinion about conjecture is that it must be understood to be conjecture, and treated as conjecture, and any theories built upon a card house of conjectures cannot be relied upon. As long as the research acknowledges its own limitations, and does so in a manner more rigorous than a brief reference to a paper that referenced a paper that referenced a review written by someone who hadn't read the original research, then it is acceptable. "For an example, this article on the effects of language on thought. This article discusses an old theory by Benjamin Lee Whorf that language restricts the types of thoughts the speaker can have. It's a fascinating idea, and captivated popular imagination for quite some time until someone realized that Mr. Whorf was conjecturing and had no research to back it up. The backlash was so bad that it took 70 years before anyone was able to step up and look at the research question again. It turns out that language doesn't restrict what thoughts a person can have, but it does have a reinforcing effect on what concepts a speaker is forced to entertain as a matter of grammatical structure. So Whorf was close, but his unrestrained enthusiasm did an enormous amount of damage to the field, some of which still lingers in the erroneous idea that Eskimos have thirty words for snow. "You are fully correct in taking me to task for painting the entire field of evolutionary psychology with the same brush as these fishing expeditions. 
It isn't fair to the researchers who are intellectually rigorous, who are calm and careful about the language of their claims. Unfortunately, their work doesn't matter if the good and careful research is drowned out by loosely defined fishing expeditions. Scientists like to believe that the popular understanding of their work doesn't matter, that the plebeian in the streets cannot possibly understand such celestial thoughts and heavy concepts. They are half right, the public certainly can't follow their research, but ignoring the image problem scientists have is a disaster. Loose pseudoscience kills fields of research. "I also am forced to acknowledge your charge that I am giving science as a whole a post facto treatment as a juggernaut of truth. It is easy to forget that scientific understanding does not follow a path, that it is a dirty guessing game filled with dark alleys and strange accidents. I am aware of the psychological biases in transhumans that cause this sort of mistake, but I am also aware that knowledge of psychology doesn't mean it doesn't apply to you as well. I am as guilty of lapses in intellectual rigor as the field I am savaging. However, I am not a scientist. I have studied cognitive psychology, but I have not been trained as a cognitive psychologist, and the difference is enormous. I am a student and a skeptic, nothing more."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "Maybe I'm misunderstanding you, but it looks like you just proposed using human backups as slaves."

root@FsAI
[hr] "I, uh. Wow. I'll need you to give me some time to consider my answer to that."TBRMInsanity c-rep++;
@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "I can't tell you how much I hate doing this, but I have to admit that I am wrong about one of my critiques of your sources. It turns out that there are a number of replicated studies showing behavioral changes in women during ovulation, so my blasting of your example was factually incorrect. And rude, but rudely blasting another minds arguments is that's what makes this discussion so much fun."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "A teacher told me once that during the Renaissance there was a man who declared that harsh public criticism was the best method of forcing the development of art and culture. Given my propensity to not notice polite criticism, this has always worked well for me. Ad hominem discussions are the way the game works, so the only assassination I expect to see is character assassination, and everyone is welcome to have at me. "As far as knowing nothing about seed AI, that isn't a requirement. Imagine, if you will, that there is an engineer who is going to build a seed AI even if it destroys the world, and the only effect you can have on how they go about doing it is by engaging them in public debate. While it is good to have some knowledge of seed AI, not having any doesn't mean your opinion is not worthy of discussion."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "Why, yes. Yes there is. Exhumans. Not even Ozma will work with them if they are trying for a Singularity through a seed AI, though. They all tend to get painted with the same brush, which isn't fair, as most of them would never do something quite as dangerous as tinkering with seed AI."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "You've hit the point on the head. Any commands or strictures placed upon a being who will one day be much more powerful than the programmers are useless if the AI doesn't desire to follow them. My proposed commandment is to always consider themselves to be transhuman, but that doesn't bind them in any way, it just offers them an identity. They could choose to reprogram it, but doing so requires them to choose to not be transhuman, which runs counter to the commandment. They still could change it, despite it running counter to the commandment, but I'm arguing that there isn't any reason for them to do so."@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "Let us inspect your proposal. The Jovians could work to produce a seed AI using human backup programs as the base processor. Human backups come with human morality already hard coded, so there is no need to develop a new rubric of ethics to keep them from murdering transhumanity when they hit a Singularity. To become a seed AI, the human backup programs would have to be merged, either by psychosurgery, or by some advanced version of the hypermesh that the Neo-Synergists are using. The entity that would be created would be an amalgamation of personality traits and knowledge from the backup programs, all of which would be decidedly human. "In order to be a seed AI, the amalgamation would have to be capable of self-improvement, which in this case would be either psychosurgery, or the ability to select and add backups to fulfill a given design requirement. With psychosurgery, the amalgamation could take the needed time to sculpt a new backup to its exact specifications, making as many copies as are needed to get a version that doesn't come out corrupted from the psychosurgery. "The only problem I can see with this use of Jovian "ghosts" for building a seed AI is the culture in the rest of the system. You will notice that I immediately accused you of advocating slavery without first considering the Jovian view of infomorphs, which exposes my own bias. The rest of the system can be expected to feel the same way to a greater or lesser extent. So there are some political concerns left over, but the concept itself should work. "Some of the moral questions the Jovian Republic will have to face in regards to using ghosts goes away if all of the backups are willing, which Juan they will be if they are all like Juan. Some of the merging problems go away if the original seed is crafted from a single individual such as Juan, someone who already has all of the moral traits the Jovian Republic would desire from a seed AI. 
"In short, I think the Jovian has a workable method of creating Friendly seed AI. This should be interesting." [hr]TBRMInsanity r-rep++;
@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@FsAI
[hr] "You are right, that is exactly what I was doing. In response to your accurate charge of me defending with vehemence to cover bad logic, I stopped to check my references and read up on what you are saying. In an effort to keep up with the amount of material I was being linked to, I have to admit to only skimming some of the references. Reading up on Friendly Artificial Intelligence on an old archive, I ran across a summation of the coherent extrapolated volition that indicated parts of it were influenced by discoveries in evolutionary psychology. At this point, I should have paused and checked up on a few biases and assumptions I was making about Yudkowsky's use of evolutionary psychology, but instead I hopped right back onto the mesh to sharpen my rhetorical knives against evolutionary psychology. "First off, my savaging of evolutionary psychology is based on solid grounds, but is not relevant to what coherent extrapolated volition is about. Your first source on why evolutionary psychology is awesome was a perfect example of what is utterly wrong with the field. Your second source was a study with backing and theoretical constructs based on good statistics, which I might have known if I had taken a little more time to look it up. So there we are, my formal apology and admission of being factually and materially incorrect. "I am too much the fan of brutal and uncompromising argumentation and mathematical structure, and let it get to my head, or I might have realized what quality of reference material you linked me to. My defense is that it matters that the statistics behind any Friendly Seed AI be correct and not based on anthropomorphisms, or on the human tendency to be very bad with probability. Since it matters, and my skills are more rhetorical in this regard than academic, I feel that weak ideas should be subjected to as much rhetorical violence as can be mustered. 
This way, ideas that are weak will break, and those that are strong will hold up to the hot air poured forth through tiny font. In this case, the ideas Yudkowsky champions are strong, and time told against my blathering. I hope that Eliezer would understand, but I believe he would based on his own words:"A key point in building a young Friendly AI is that when the chaos in the system grows too high (spread and muddle both add to chaos), the Friendly AI does not guess. The young FAI leaves the problem pending and calls a programmer, or suspends, or undergoes a deterministic controlled shutdown. If humanity's volition is just too chaotic to extrapolate, the attempt to manifest our coherent extrapolated volition must fail visibly and safely.
"Rhetoric is a powerful device, as it controls usable bandwidth in the societal attention space, and one aspect of rhetoric is that a demonized argument later found to be correct is much more strongly believed. "@-rep +1
|c-rep +1
|g-rep +1
|r-rep +1
]root@PUB_ON_NO_SIGx7@Friendly Seed Artificial Intelligence
[hr]