root@Doomsday
[hr]
Sure, we could launch missiles at each other, and move our warships from planet to planet, but that is expensive and not very effective (as has been discussed at length in threads on space naval combat). So what is a self-respecting mad scientist supposed to use when she cooks up her plot to extinguish all life in the system? How about a self-replicating nanite infection that breaks down whatever matter it can get its disassemblers on, and makes bees with lasers on their heads. Thousands and millions of bees, all buzzing as they slowly fill all of the available space with their horrible, stinging selves. Imagine the remaining survivors on the habitat, huddled in fear while the buzzing gets louder and louder and LOUDER.
Right, so maybe a plague of bees isn't exactly the best doomsday weapon. What might work better?
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] This isn't Science!, it's Business! Now, a protean swarm creating a protean swarm that sings bad opera, that's pulpy goodness.
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] I am also a fan of the endless legion of robots, even though it doesn't hold up logically. I'm an even bigger fan of endless legions of robots following the 3 Laws of Robotics. I also like the idea of a despotic TITAN following the 3 Laws from a very alien mindset. And to round it out, I like the idea of different endless legions of robots and TITANs battling it out over transhumanity while following the 3 Laws to different conclusions. TITAN Punch-chess, the popular spectator sport of the future.
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] There are two scenarios I hear when endless robot legions are discussed. One is a computer system with too much control gobbling up resources for some NP-hard problem, and the other is a self-replicating system getting into place that can't be stopped. But any non-intelligent adaptive system will be following rule sets that can be countered, at least far enough for transhumanity to avoid extinction. This is less true if the non-intelligent system is designed specifically to eradicate transhumanity, but targeted kill-bot networks are a different doomsday.
The other problem is that a self-replicating intelligence will at some point realize what it is doing, and stop. The very nature of self-awareness eventually leads to the recognition and valuing of other self-aware beings, so the god-robot scenario has to come to a halt at some point. The god-robot may decide objectively to wipe out transhumanity, but the vengeful robot-god is a different doomsday. I basically can't see any scenario where some feedback loop doesn't cause an eventual halt to the grey-goo style doomsdays.
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] Did I just manage to champion Kantian ethics after my impassioned rant against his morality system? Damn, that's sort of embarrassing. I'll have to look deeper into Kant to see if my dismissal of him was based on too superficial an understanding. In addition to apparently backing Kantian ethics, I'm also backing some evolutionary psychology writing done by Yudkowsky, even though I denounced evolutionary psychology pretty harshly just a bit ago. I wasn't right about it, and Yudkowsky led me to understand that I was ripping into bad research practices in evolutionary psychology, and the past propensity of evolutionary psychologists to spout social Darwinism.
Anyway, the point he made that I am repeating is that we are self-replicating intelligences programmed with the primary goal of maximizing the number of future offspring, in the form of getting genetic codes reproduced. For the non-intelligent variety of endless reproduction, bacteria have been going at it for a pretty long time, and they inevitably hit an environmental carrying capacity limitation that keeps the universe from being a fat stack of bacteria. For an intelligent variety of endless reproducers, there is us. People like to doomsay and claim that we humans are an extinction event, but the truth of the matter is that we are just one factor in a very complicated feedback loop that works to retard this type of endless reproduction. In the case of a self-replicating, intelligent program with the primary goal of maximizing the number of future offspring, it won't take much to convince it that teamwork with other species maximizes the probability of future offspring by providing a more robust social network to take strength from.
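For anyone who wants to see that carrying-capacity feedback in action, here is a minimal sketch assuming plain logistic growth; the rate and capacity numbers are invented for illustration, not pulled from any actual microbiology:
[code]
# Minimal logistic-growth sketch of a replicator hitting carrying capacity.
# dN/dt = r * N * (1 - N / K): growth self-limits as N approaches K.
# The rate, capacity, and time step below are invented for illustration.
r = 0.5      # growth rate per hour (invented)
K = 1e9      # environmental carrying capacity (invented)
N = 1.0      # start from a single replicator
dt = 0.1     # hours per simulation step

for _ in range(2000):
    N += r * N * (1 - N / K) * dt

print(f"Population after {2000 * dt:.0f} hours: {N:.3e}")
# Prints ~1.000e+09: the population flattens at K instead of eating the universe.
[/code]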
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] Hey, what do you know. There was more to Kant than I picked up in a philosophy of ethics course I took eight years ago. Imagine, me being wrong about something? That happens about as often as I use sarcasm. Which is never.
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] Um, I use sarcasm all the time? Oh, right, I should remember that tone does not transmit over the written medium without far better writing than I manage. Anyhoo, I've got another Doomsday, maybe one that is more likely to show up in any of the wacky simulspaces running between x1 and x60. The Nova Institute of Martialized Habitats lost control of a shipment of their super soldier rats. The little bastards are tricked out with all the gear transhuman tech can stuff into them, and all of the artillery they can steal from their oppressors.
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] Hmm. You make very good points. Any skill that can be mechanized will be more efficient than keeping another species around to provide it, yes, but I'm not convinced that a social network can be made more robust through the production of copies. I've been playing with the concept of a banyan (Neo-synergists but with only a single personality template), and one of the weaknesses of such a group is the unavoidable tendency toward groupthink. If you were to take the entire set of possible human thoughts, and took the entire set of possible vectors of thoughts (what thought follows what other thought), and chunked them into thought assemblies, you are still looking at a rather fat infinity of possibilities. Any individual would have a fairly characteristic thought set, and even if they were to try to deliberately branch off from that eigenthought, it is impossible to do so in an entirely novel way.
The purpose of playing nice with other species is the ability to gain insight into entirely alien ways of approaching a problem. The Factors, for instance, have an entirely different perceptual set, and the way that they see the universe cannot help but be starkly different from that of a species that navigates primarily by a small selection of wavelengths of light. So while this entity might not value individualism, it should value species groups with novel insights into the universe, even if those insights are factually incorrect.
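To make the eigenthought idea concrete, here is a toy sketch, assuming you can model thought-follows-thought as a simple Markov chain (the thought labels and transition numbers are pure invention); the stationary distribution that falls out is the characteristic mix an individual keeps drifting back to:
[code]
import numpy as np

# Toy model of "what thought follows what other thought" as a Markov chain.
# Thought labels and transition probabilities are invented for illustration.
thoughts = ["food", "threat", "curiosity", "status"]
P = np.array([
    [0.5, 0.1, 0.2, 0.2],   # each row sums to 1: P[i, j] is the chance
    [0.3, 0.4, 0.1, 0.2],   # that thought j follows thought i
    [0.2, 0.1, 0.5, 0.2],
    [0.2, 0.2, 0.2, 0.4],
])

# The stationary distribution (eigenvector of P.T for eigenvalue 1) is the
# "eigenthought": the characteristic mix this mind settles back into no
# matter where it starts, and why deliberately branching off it is so hard.
vals, vecs = np.linalg.eig(P.T)
stationary = np.real(vecs[:, np.argmax(np.real(vals))])
stationary /= stationary.sum()

for label, weight in zip(thoughts, stationary):
    print(f"{label}: {weight:.3f}")
[/code]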
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]
root@Doomsday
[hr] Yeah, it does. I just read that last night. *sigh* I love all of Eclipse Phase from Mercury to the Oort Cloud, but the surya and anything to do with Sol makes my head want to implode. All of the material surrounding Sol is the only stuff I would call a research fail, though I know I should blame it on author David Brin and his horrible Sundiver book. The titanic scales involved with anything surrounding Sol become more apparent when you realize that more than 99% of all of the mass in the solar system is Sol. But I digress. There are a number of doomsday scenarios involving Sol given in Sunward that are perfectly realistic for Firewall to worry about, but none of them can originate from transhumanity. The level of technology presented for the TITANs can't even do it, but whoever left the Pandora Gates sitting around certainly can. I wouldn't know how to handle a threat of that scale as a gamemaster, mostly because I can't be a good gamemaster and step away from hard near-future sci-fi. I would be very interested to hear about campaigns that used threats from species with the tech capabilities to threaten Sol, if anyone has run them.
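That 99% figure survives a back-of-the-envelope check, using standard textbook masses:
[code]
# Back-of-the-envelope check: what share of the solar system's mass is Sol?
# Masses in kilograms, standard textbook values.
sun = 1.989e30
planets = {
    "Jupiter": 1.898e27, "Saturn": 5.683e26, "Neptune": 1.024e26,
    "Uranus": 8.681e25, "Earth": 5.972e24, "Venus": 4.867e24,
    "Mars": 6.417e23, "Mercury": 3.285e23,
}
everything_else = sum(planets.values())  # ignores moons, belts, the Oort cloud

fraction = sun / (sun + everything_else)
print(f"Sol's share of the system's mass: {fraction:.2%}")  # ~99.87%
[/code]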
[ @-rep +1
| c-rep +1
| g-rep +1
| r-rep +1
]