
Abolitionism and moral enhancement in EP (warning: contains philosophy nerdery)

Alkahest
Hello everyone! Medium-time lurker, first time poster here. I guess I should start by saying something about how much I love Eclipse Phase. There are few things I enjoy as much as thinking about the future and how technology may change society, and since Eclipse Phase is pretty much "Transhumanism: The Game" it's an excellent way to combine my interest in futurism with my interest in procrastinating by reading RPG books.

Now, while the branches of transhumanism embraced by libertarian-leaning people like Max More, Peter Thiel, Ronald Bailey and Anders Sandberg (hey there Arenamontanus!) and currents such as anarcho-transhumanism are well represented in the game, the philosophies of utilitarian transhumanists such as David Pearce and Julian Savulescu seem to have less of a place in the harsh Solar System of 10 AF. I'm more specifically thinking about the philosophy known as abolitionism (http://en.wikipedia.org/wiki/Abolitionism_%28bioethics%29), which advocates making all feeling beings happier (using SCIENCE!) and the idea of "moral enhancement" (http://philosophynow.org/issues/91/Moral_Enhancement), which advocates making humans nicer (using even more SCIENCE!). If one were to try to find a place for these ideas in the world of Eclipse Phase, where would it be? How could one include such themes in this game of ideas?
President of PETE: People for the Ethical Treatment of Exhumans.
OneTrikPony
Hmm... I was unaware of this branch of transhumanism. [edit] Wow, massive personal diatribe deleted. :D Um... I think Locus and Paradise Station pretty much have the hedonism thing covered. I'm sure the PC and the Jovians are working on "moral enhancement"

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

DivineWrath
Brinker habs are always an option. They are often difficult to reach, so they tend to have the isolation and privacy needed to try out new ideas. In fact, one of the rimward habs uses psychosurgery to have each individual living there become hyper-specialized in one specific job (usually weakening them in other areas to make this possible).
Smokeskin
Alkahest wrote:
I'm more specifically thinking about the philosophy known as abolitionism (http://en.wikipedia.org/wiki/Abolitionism_%28bioethics%29), which advocates making all feeling beings happier (using SCIENCE!) and the idea of "moral enhancement" (http://philosophynow.org/issues/91/Moral_Enhancement), which advocates making humans nicer (using even more SCIENCE!). If one were to try to find a place for these ideas in the world of Eclipse Phase, where would it be? How could one include such themes in this game of ideas?
Abolitionism seems to be implemented to a large degree already. From Splicers and up, morphs are engineered for physical and mental health. People don't feel bad just because they're born grumpy, or prone to depression, or with a weak immune system and poor health.

As for the more extreme versions, maximizing happiness is a very dangerous goal. Would you want to become a wirehead (wires in your brain stimulate your pleasure center, so you experience eternal bliss)? Most people wouldn't. Also, while people say they want happiness, they very often do quite little to actually become happy. Having children is a common example of something that all research shows makes you significantly less happy, but we still do it. Ambition and working a lot are also sources of major unhappiness, and the rewards, even if you succeed, tend to do very little for your happiness. I've done both with my eyes open, and I honestly don't value happiness that much.

Would we really want to maximize happiness when it would mean we stopped reproducing and striving hard for things? Such a society would fail. Would I choose that individually, strip out my drives for other things than happiness? Hell no, I don't want to be a childless loser. If anything, I'd consider rewriting my brain so I'd also enjoy happiness at taking care of my children and working hard (imagine if you actually liked being woken at 3AM to comfort your child, and pulling an all-nighter at work was as fun as playing computer games). But from the descriptions of EP morphs, I suspect they're somewhat like that already - and note that from the outside, you'd observe the opposite of what you'd expect from an abolitionist society, as you'd see people doing MORE things that you regard as causing unhappiness!

Moral enhancement is at best a completely silly idea, at worst utterly terrifying. We don't individually want to improve our morals (which I assume goes beyond someone just getting a fix allowing them to resist the desire to shoplift because they know it will mess up their life). If I think being tolerant towards religion is immoral, why would I change that? If a fundamentalist thinks gay marriage is wrong, why would he change that? If I believe that economic growth is more important both for me and for the world's desperately poor than slowing global warming, why would I change that? Would I willingly give up my ability for deception? The answer to all these questions is of course "no". I'd love for a lot of other people to modify their morals though. Imagine a world without socialism, religion, and environmentalism - how much better off everyone would be. As a tool for oppression it is certainly tempting - and unsurprisingly, the idea of moral enhancement seems to come bundled with left-wing ideas, though of course any oppressive regime or fanatic would love it too.

As a libertarian, I certainly can't be anything but horrified at the idea. It's the ultimate weapon, and should be treated as such. It is a good alternative to other horrifying things like life imprisonment of criminals and dropping bombs on Taliban soldiers, but if something like that is ever used, it is important that we retain the same degree of horror at its use as when we kill people. There's a risk that its invisible effects could make us tolerant of its use and send us down a slippery slope, and we need to be aware of that.

In certain circles, I could imagine moral enhancements being used as a form of moral signalling though. You could prove that you were a devoted and incorruptible patriot, or that you actually did put animal rights equal to humans, or whatever. Peter Watts' Rifters trilogy had extensive use of such modifications. The people charged with the really tough decisions (like firebombing an area and killing everyone in it to prevent the spread of virulent disease) had their brain chemistry modified so they were compelled to act in the way that maximized population utility, so they could be allowed to act quickly without oversight. The moral tweaks they gave spies were quite a bit more sinister...
Arenamontanus
Alkahest wrote:
Now, while the branches of transhumanism embraced by libertarian-leaning people like Max More, Peter Thiel, Ronald Bailey and Anders Sandberg (hey there Arenamontanus!) and currents such as anarcho-transhumanism are well represented in the game, the philosophies of utilitarian transhumanists such as David Pearce and Julian Savulescu seem to have less of a place in the harsh Solar System of 10 AF. I'm more specifically thinking about the philosophy known as abolitionism (http://en.wikipedia.org/wiki/Abolitionism_%28bioethics%29), which advocates making all feeling beings happier (using SCIENCE!) and the idea of "moral enhancement" (http://philosophynow.org/issues/91/Moral_Enhancement), which advocates making humans nicer (using even more SCIENCE!). If one were to try to find a place for these ideas in the world of Eclipse Phase, where would it be? How could one include such themes in this game of ideas?
Hi there! I did a brief write-up of the abolitionists at http://www.aleph.se/EclipsePhase/Pleasure.pdf

I think abolitionism as an individual philosophy can be found nearly anywhere in the non-conservative parts of the solar system (it is likely not too well received among Ultimates or Jovians). Abolitionism as a group project

Moral enhancement is intriguing. This could be something both the PC and some autonomist groups pursue: the PC have culture experimentation projects and worry about social cohesion, and autonomists really need to influence transhuman nature to make their societies scale up better. Plenty of tech around to implement it, from psychosurgery to drugs to moral enhancement software in muses. I have been thinking about writing an adventure where it has been secretly applied to some habitat, and everything has gone horribly wrong/right - it would be fun to make some well-meaning Titanians the villains for once. Of course, the best kind of problems are those where it is even unclear whether the result is bad and whether the villain is a villain or hero.
Extropian
Smokeskin
Arenamontanus wrote:
I did a brief write-up of the abolitionists at http://www.aleph.se/EclipsePhase/Pleasure.pdf I think abolitionism as an individual philosophy can be found nearly anywhere in the non-conservative parts of the solar system (it is likely not too well received among Ultimates or Jovians).
I'm not sure it goes down that well in other parts of the system either. The common desire for enjoying life and experiencing happiness is not abolitionism. Abolitionists take it to the next level. Abolitionists won't do common things like have children, for example (unless they fix their brains to actually enjoy all the work, obligation and worry that comes with children). Abolitionists will get the implants you mention in your write-up, which all have effects that most would find detrimental. They're likely to end up as junkies or moochers, or the equivalent, aren't they? Even in the tolerant autonomist habs, people are unlikely to like those who don't pull their own weight. In Extropian and inner system habs they'll tend to become poor. Perhaps the real abolitionists are inner system retired wealthy people :)
Quote:
Moral enhancement is intriguing. This could be something both the PC and some autonomist groups pursue: the PC have culture experimentation projects and worry about social cohesion, and autonomists really need to influence transhuman nature to make their societies scale up better. Plenty of tech around to implement it, from psychosurgery to drugs to moral enhancement software in muses.
But why would individuals use it? Peer pressure? Unless you have eusocial morals to begin with, would you really want to do that to yourself?
Arenamontanus
Smokeskin wrote:
But why would individuals use it? Peer pressure? Unless you have eusocial morals to begin with, would you really want to do that to yourself?
If you are Kantian you think you have a duty to behave and think in a moral fashion. Unfortunately you have not evolved to behave and think like that. But what if you used psychosurgery to make sure you actually acted according to your higher principles? Same thing for utilitarians like Peter Singer, who thinks the true morality might be quite a bit tougher than most people normally consider sensible: making yourself psychologically able to act as you should would be useful and perhaps necessary.

If you reduced akrasia, game-wise this might be represented as a boost to WIL. I think cognitive enhancement is a partial moral enhancement - COG and INT boosts also matter. Being better able to predict the consequences of your actions and how they impact other people (a quite sophisticated mental operation) doesn't necessarily make you a moral person, but it makes it much easier to act morally. It is also advantageous in general: even if you do not care about moral enhancement, becoming smarter helps many life projects. Studies have shown that smarter people also are better at cooperating in economic games (because they figure out the benefits and how to create cooperation with strangers), and do more long-term investment in the future. Not morality per se, but good in general. The famous Stanford marshmallow experiment demonstrates that self-control (or maybe strategic reasoning) correlates with future success.

Greg Bear also suggested a gentle peer pressure related moral enhancement situation in the novels Queen of Angels, Slant and Moving Mars. Therapy is developed that fixes people's psychological issues, making them certified sane. For many jobs you only want to hire sane people, so you favor the therapied. So there is an advantage in becoming therapied, and eventually all people on Earth are sane. Nobody is forced, and everybody benefits. (Slight spoiler) This is also what drives the conflict in Moving Mars: the sane Earth cannot allow the un-therapied people of Mars to develop a certain very powerful technology. Had they only been therapied everything would have been fine.
Extropian
Arenamontanus
Smokeskin wrote:
Abolitionists will get the implants you mention in your write-up, which all have effects that most would find detrimental. They're likely to end up as junkies or moochers, or the equivalent, aren't they?
The whole point of abolitionism is to get rid of pain and maximize pleasure *without failing at life*. The implants I mention are clearly not quite right for the job, but I would expect abolitionist-minded researchers to be working on fixing that. Remember that pain control is already fairly well in hand in EP: people can live pain-free lives with great success, so a fraction of the whole program is already done. Fixing suffering is a deeper problem that still needs solving, although EP medicine is clearly fairly good at it. I can totally see abolitionists working on very serious projects in Cognite, finding their ideology compatible with both corporate profit and helping suffering people.

It would be fun to do a game with abolitionist exhumans. They are junkies, but they get their fix from the inside, so they can be highly active in the world - their motivations are about different kinds and intensities of pleasure. Every action is a delight, even failure is an amusing challenge to overcome ("Oh, I lost another life! It is so bothersome to have to respawn in this game"). Hmm, maybe they are trying to acquire as much sensory information as possible to run through their illegal hedonium mainframe hidden on some asteroid. So now they try to rob or infiltrate XP repositories... or steal infugee egos whose memories can be experienced (don't worry, the egos get to live in the mainframe too - in absolute bliss).

"We are not selfish and we are not lazy. You are free and welcome to join us, but you need to work at learning how to control a turbo-charged motivation system. I spent a year realtime in accelerated simspaces figuring out how to function as an agent: every time I failed we froze the fork and rolled back the ego a bit. There are literally tens of thousands of me trapped in bliss in that server, and I plan to ensure they continue to enjoy forever once we spread hedonium across the universe. Does that horrify you... or does it *excite* you?"
Extropian
Smokeskin
Arenamontanus wrote:
Smokeskin wrote:
But why would individuals use it? Peer pressure? Unless you have eusocial morals to begin with, would you really want to do that to yourself?
If you are Kantian you think you have a duty to behave and think in a moral fashion. Unfortunately you have not evolved to behave and think like that. But what if you used psychosurgery to make sure you actually acted according to your higher principles? Same thing for utilitarians like Peter Singer, who thinks the true morality might be quite a bit tougher than most people normally consider sensible: making yourself psychologically able to act as you should would be useful and perhaps necessary.
I confess that I have a deep aversion to deontology, to the extent that I might not be able to imagine how a Kantian would actually think. I can't help but think of it as permanently imprinting their folly in their mind, and how would anyone want that? I have a lot of respect for Singer, both in the clarity of his thought and his personal devotion to following through on whatever path this rationality sets him on. But his utility function is extremely different from mine - the moral enhancement he'd get seems incredibly moral and un-self-serving, but I'd argue that he already had those morals. A slight note: I haven't read anything by Singer for a long time, and I have no idea if his position has developed.
Quote:
If you reduced akrasia, game-wise this might be represented as a boost to WIL. I think cognitive enhancement is a partial moral enhancement - COG and INT boosts also matter. Being better able to predict the consequences of your actions and how they impact other people (a quite sophisticated mental operation) doesn't necessarily make you a moral person, but it makes it much easier to act morally. It is also advantageous in general: even if you do not care about moral enhancement, becoming smarter helps many life projects. Studies have shown that smarter people also are better at cooperating in economic games (because they figure out the benefits and how to create cooperation with strangers), and do more long-term investment in the future. Not morality per se, but good in general. The famous Stanford marshmallow experiment demonstrates that self-control (or maybe strategic reasoning) correlates with future success.
I completely agree - fixing various cognitive biases would also be extremely helpful, both for yourself and for your moral behavior as seen from the outside. This form of moral enhancement, through upgrading your insight, understanding and self-control, while certainly affecting your behavior, seems distinctly different from actually changing your morals. (I gave my son, still age 3, the marshmallow test last month and he made it :) And I'm wasting working hours writing on forums...)
Quote:
Greg Bear also suggested a gentle peer pressure related moral enhancement situation in the novels Queen of Angels, Slant and Moving Mars. Therapy is developed that fixes people's psychological issues, making them certified sane. For many jobs you only want to hire sane people, so you favor the therapied. So there is an advantage in becoming therapied, and eventually all people on Earth are sane. Nobody is forced, and everybody benefits. (Slight spoiler) This is also what drives the conflict in Moving Mars: the sane Earth cannot allow the un-therapied people of Mars to develop a certain very powerful technology. Had they only been therapied everything would have been fine.
You're right. I had considered peer pressure and signalling, but not the job market. I thought that in the long term, posthumans would have a competitive problem versus AIs that didn't waste clock cycles on mere fun, but maybe market forces will turn us all into non-eudaimonic agents long before that... I consider myself a transhumanist, open to all sorts of modification of our mental and physical state and to uploading, but this just feels like too much for me, like we're tampering with something at the core of my personality, maybe even of humanity, since I wouldn't want that done to anyone else either. I feel like I'm using arguments that sound suspiciously like bioconservatives arguing that nootropics and genetic engineering violate human dignity. But perhaps it is merely a desire for goal consistency.
Alkahest
OneTrikPony wrote:
[edit] Wow, massive personal diatribe deleted. :D
Aww, but I like personal diatribes.
OneTrikPony wrote:
Um... I think Locus and Paradise Station pretty much have the hedonism thing covered.
Oh, there's no shortage of egoistic hedonism in the System. Hedonistic utilitarianism seems somewhat more rare.
OneTrikPony wrote:
I'm sure the PC and the Jovians are working on "moral enhancement"
Wouldn't the Jovians see such things as messing with God's divine plan or something?
DivineWrath wrote:
Brinker habs are always an option. They are often difficult to reach, so they tend to have the isolation and privacy needed to try out new ideas. In fact, one of the rimward habs uses psychosurgery to have each individual living there become hyper-specialized in one specific job (usually weakening them in other areas to make this possible).
Complete eusocial engineering is not precisely what I had in mind, but Brinkers are indeed an option. There seems to be a little hab for every social experiment imaginable.
President of PETE: People for the Ethical Treatment of Exhumans.
Alkahest
Smokeskin wrote:
As for the more extreme versions, maximizing happiness is a very dangerous goal. Would you want to become a wirehead (wires in your brain stimulate your pleasure center, so you experience eternal bliss)? Most people wouldn't.
The "wirehead hedonist" is really a kind of straw man abolitionist. Abolitionism simply advocates raising overall happiness, while keeping the reward- and punishment-systems that motivate us to do more than sit down and drool all day. For example, you would still be more happy if you pursued a hobby or helped others than if you did nothing, but all things you do would be significantly more fun to do than our current brains allow.
Smokeskin wrote:
Also, while people say they want happiness, they very often do quite little to actually become happy. Having children is a common example of something that all research shows makes you significantly less happy, but we still do it. Ambition and working a lot are also sources of major unhappiness, and the rewards, even if you succeed, tend to do very little for your happiness. I've done both with my eyes open, and I honestly don't value happiness that much.
Of course, I can't tell you what you "really" want, since the person with the best access to your brain is you. As for other people pursuing objectives that do not make them happier, well, humans are well-known to not be very rational creatures. I think we can both agree that there are people who value happiness very much but still fail to engage in the kind of behavior that will actually increase their happiness.
Smokeskin wrote:
Would we really want to maximize happiness when it would mean we stopped reproducing and striving hard for things? Such a society would fail. Would I choose that individually, strip out my drives for other things than happiness? Hell no, I don't want to be a childless loser. If anything, I'd consider rewriting my brain so I'd also enjoy happiness at taking care of my children and working hard (imagine if you actually liked being woken at 3AM to comfort your child, and pulling an all-nighter at work was as fun as playing computer games).
That would be incredibly convenient, and nothing an abolitionist would object to - quite the opposite! Rewiring ourselves so that we enjoy making the world a better place seems to be the ultimate in utilitarian psychosurgery.
Smokeskin wrote:
But from the descriptions of EP morphs, I suspect they're somewhat like that already - and note that from the outside, you'd observe the opposite of what you'd expect from an abolitionist society, as you'd see people doing MORE things that you regard as causing unhappiness!
Utopia might be a nice place to live, but it makes a horrible RPG setting.
Smokeskin wrote:
Moral enhancement is at best a completely silly idea, at worst utterly terrifying. We don't individually want to improve our morals (which I assume goes beyond someone just getting a fix allowing them to resist the desire to shoplift because they know it will mess up their life). If I think being tolerant towards religion is immoral, why would I change that? If a fundamentalist thinks gay marriage is wrong, why would he change that? If I believe that economic growth is more important both for me and for the world's desperately poor than slowing global warming, why would I change that? Would I willingly give up my ability for deception?
Greater intelligence, greater empathy and greater self-control are all examples of "moral enhancement" that I can see people pursuing both for themselves and for others. I think it has less to do with modifying yourself to hold specific opinions and more to do with modifying yourself to be a good member of society, which benefits both society and yourself. I also imagine that you would get an incredible rep boost from morally enhancing yourself in certain societies. Heck, since one of the main problems with anarchist societies is the possibility of freeriding, some anarchist habs might require their residents to be morally enhanced. Of course, such a policy doesn't exactly scream "Anarchy! Woo!".
Smokeskin wrote:
The answer to all these questions is of course "no". I'd love for a lot of other people to modify their morals though. Imagine a world without socialism, religion, and environmentalism - how much better off everyone would be. As a tool for oppression it is certainly tempting - and unsurprisingly, the idea of moral enhancement seems to come bundled with left-wing ideas, though of course any oppressive regime or fanatic would love it too.
Well, I think that morally enhanced people could hold pretty much all kinds of political opinions, although those based on irrational rage (such as most xenophobic ideologies) and limitless egoism (Stirnerian anarchism) would probably not be popular. Heck, I could even see big-O Objectivists getting into it. Think about it: Making a rational decision to modify yourself so that you're never even tempted to force anyone else to sacrifice hirself for your sake, is that necessarily something ol' Ayn would condemn?
Smokeskin wrote:
As a libertarian, I certainly can't be anything but horrified at the idea. It's the ultimate weapon, and should be treated as such. It is a good alternative to other horrifying things like life imprisonment of criminals and dropping bombs on Taliban soldiers, but if something like that is ever used, it is important that we retain the same degree of horror at its use as when we kill people. There's a risk that its invisible effects could make us tolerant of its use and send us down a slippery slope, and we need to be aware of that.
Oh, I agree to a certain extent. It is dangerous and can be misused, but like most technology I think it can also be a force for good. Anyway, my opinion isn't really relevant since we're discussing EP here, and neither of us live in that world. (No matter how much I wish I did.)
Smokeskin wrote:
In certain circles, I could imagine moral enhancements being used as a form of moral signalling though. You could prove that you were a devoted and incorruptible patriot, or that you actually did put animal rights equal to humans, or whatever.
Oh, absolutely. And as I said above, in the world of EP it might give your rep a boost.
President of PETE: People for the Ethical Treatment of Exhumans.
Alkahest
Arenamontanus wrote:
I did a brief write-up of the abolitionists at http://www.aleph.se/EclipsePhase/Pleasure.pdf
Ooh, neat! *reads* Yeah, this seems to be pretty much what I was thinking about. I especially like the "modifiable motivation" ("roll for a SV 2d10 philosophical crisis", heh) and I can see it being used in both the most liberal and the most repressive parts of the system, for wildly different reasons, of course. It could be very useful for spies, although I see defections being pretty common. I also like your explanation of abolitionism, although for some reason the phrase "dark abolitionists" brings a picture of a BDSM club to my mind.
Arenamontanus wrote:
I think abolitionism as an individual philosophy can be found nearly anywhere in the non-conservative parts of the solar system (it is likely not too well received among Ultimates or Jovians).
The killjoys of the Solar System. :-)
Arenamontanus wrote:
Moral enhancement is intriguing. This could be something both the PC and some autonomist groups pursue: the PC have culture experimentation projects and worry about social cohesion, and autonomists really need to influence transhuman nature to make their societies scale up better. Plenty of tech around to implement it, from psychosurgery to drugs to moral enhancement software in muses.
I can see both factions having many practical uses for moral enhancement, but some ideological objections. The PC, as far as I understand it, glorifies competition and the affluence of the fittest, while some anarchists might see it as a form of coercion.
Arenamontanus wrote:
I have been thinking about writing an adventure where it has been secretly applied to some habitat, and everything has gone horribly wrong/right - it would be fun to make some well-meaning Titanians the villains for once. Of course, the best kind of problems are those where it is even unclear whether the result is bad and whether the villain is a villain or hero.
Seems very interesting. How do you envision this weirdtopia? That is, what about it would be horribly wrong/right?
President of PETE: People for the Ethical Treatment of Exhumans.
Alkahest
Arenamontanus wrote:
It would be fun to do a game with abolitionist exhumans. They are junkies, but they get their fix from the inside, so they can be highly active in the world - their motivations are about different kinds and intensities of pleasure. Every action is a delight, even failure is an amusing challenge to overcome ("Oh, I lost another life! It is so bothersome to have to respawn in this game"). Hmm, maybe they are trying to acquire as much sensory information as possible to run through their illegal hedonium mainframe hidden on some asteroid. So now they try to rob or infiltrate XP repositories... or steal infugee egos whose memories can be experienced (don't worry, the egos get to live in the mainframe too - in absolute bliss).
For some reason I see Xenomorphs wheezing "Wee looove yooou" as they saw off your head for uploading - after having rewired your brain to make you a major masochist. Utilitarianism! Being nice has never been so creepy!
President of PETE: People for the Ethical Treatment of Exhumans.
Smokeskin
I think we should distinguish between:

Rational enhancements: Enhancements that improve your self-discipline, reduce your cognitive biases, improve your ability to model the effects of your actions, etc. These don't modify your moral compass or utility function, only your ability to act according to it rather than letting your personal weaknesses lead you astray. It basically increases your ability to act rationally.

Moral enhancements: Modifications to your moral compass and utility function. With these, your ideas of good and bad change. You used to want to fly to the Maldives for vacation; now you feel better about reducing your carbon footprint.

Happiness enhancements: These should perhaps be a subgroup of the rational enhancements, but they are still slightly different. They simply align your happiness response with your utility function. If you want to be a special forces soldier, you'll make it through the grueling exercises not because you have exceptional willpower, but because you're enjoying every second of it. While this one seems to let everyone be an abolitionist for free, or let abolitionists act like everyone else, you need to be careful with it, as it seems extremely susceptible to unintended consequences! Actively enjoying the stress of special forces training could be detrimental to your health.

If we muddle them up, we're not talking about the same thing. I've certainly railed against moral enhancements, only to be met with the wonders of rational enhancements, which I'm all for. We're inadvertently going to make straw men if we don't define our words.
Alkahest wrote:
As for other people pursuing objectives that do not make them happier, well, humans are well-known to not be very rational creatures.
It is a mistake to think that the pursuit of happiness is rational. We value a great many things that are orthogonal or even opposed to happiness - things like children, ambition, pride, duty, responsibility. Happiness is only one part of your average human utility function (and I'm not talking about the revealed-preference utility function, which is tainted by many irrational actions). As I see it, the difference between abolitionists and everyone else is that the abolitionists value happiness much more than everything else. There are actual differences between the utility functions of humans and abolitionists, and it is not just a matter of lacking the right rational enhancements.
Alkahest
I think your divisions presuppose the independent existence of a moral compass or utility function, which I don't think I believe in myself. I think a lot of what we consider to be our morality/our utility function is actually the result of cognitive biases, emotions, ignorance and so forth. The line between rational enhancements and moral enhancements might not be so clear-cut. I believe it's actually impossible, or at least very, very hard, for humans to be VNM-rational (http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_the...) since what we pursue and value changes from moment to moment. How do we distinguish between an irrational drive to do something against our morality/utility function and a moral compass/utility function that tells us to do something? Anyway, for the purposes of this thread I think it's enough to say that there are probably people in EP who agree with you as well as people who agree with me. If you don't mind, I think I will use you as my go-to person for the extropian view on things.
President of PETE: People for the Ethical Treatment of Exhumans.
Smokeskin
If we're talking revealed-preference utility functions, I completely agree. That's pretty much just defined as the function that's maximized by what we actually do, and what I called rational enhancement will change our actions.

But let us assume that a perfect rational enhancement exists. With it you would remove all your biases, you'd be able to model your future mental states and sum their utility over time with correct discounting, you'd have perfect self-discipline and the ability to set up mental commitment devices, etc. This is a very different human being for sure, but it would still have a utility function left that contained the essence of your moral compass (among other things). Something that changed that essential utility function would be a moral enhancement.

I don't doubt at all that my utility function would change with perfect rational enhancement. Of course some of the things I think I want are only because of my muddled thinking, limited mental capacity and biases, and with clarity I'd want other things. But clarity wouldn't change everything. If that was the case, there'd be only two options left: 1) Perfect rationality came with its own utility function. 2) Perfect rationality is only possible without any utility function. Option 2 seems nonsensical (since imo the definition of perfect rationality is to be able to maximize the expected value of your utility function). Option 1 would solve a question in Friendly AI research, where some people believe that any sufficiently advanced agent would be cooperative, compassionate, and so on - a sentiment I don't share.

To give a concrete example: the moral enhancement link given in the OP mentioned global warming. I'm personally a humanist (and I expect myself to become a "sentientist" and include other sentient entities once such are discovered or constructed, provided insurmountable conflicts of interest don't arise, and of course our understanding of sentience could undergo a revolution). I only place value on humans, and that means I only value nature and the biosphere to the extent that it affects humans. Contrast this to the more common environmentalist view where things other than humans are valued greatly. For example, spending trillions of dollars on slowing global warming as we're doing now is obviously inhumane. The cost and resulting lack of economic growth means millions of people remain stuck in deep poverty with increased mortality and much reduced quality of life, and the scientific predictions of the difference in consequences from our current mitigation efforts in no way justify all this misery. If you value the environment greatly compared to humans, though, current policies make sense. Another example would be animal rights activists, some of whom believe that animal testing in order to develop life-saving drugs for humans is immoral. These are fundamental moral issues that are independent of biases.

Of course, some people, if they got a perfect rational enhancement, would discover that they really were humanists and that their support for efforts to reduce global warming was the result of a bias, while others would discover that they actually cared about the environment and that large parts of their consumption were only due to personal weakness and hedonism. But I don't know of a convincing argument for why to value humans over nature (even if I could point to some brain circuitry and say "there, right there, you're supposed to think like me", that wouldn't be a convincing argument, would it?). It seems to follow that there are utility functions that exist outside of our rational ability.

And I fail to see why anyone would change their utility function. Rationally, you would at any point in time make decisions that maximized your current utility function. By changing your utility function, you would in the future make decisions that maximized the new utility function rather than your current utility function, and this would of course lead to lower expected value according to your current utility function. Ergo changing your utility function reduces the expected value, so no rational agent would do that. I actually believe that's a formal argument for why no rational agent should accept a moral enhancement, except for reasons of external pressure. For example, if my current utility function made it impossible for me to get a job and this was important to me according to my current utility function, I could be in a position where the expected value according to my current utility function would be higher when evaluated on the actions I'd take to optimize the expected value of the new function.

Let F be my current utility function and x the future actions I'd take to optimize the expected value of F. Let F* be the future utility function and x* the future actions that optimize its expected value. It follows that iff F(x*) > F(x), I should change my utility function to F*. This could only be the case if x* is impossible while my utility function is F, since x is the optimal set of actions under F.

And if you care about the integrity of your utility function (and I believe it's at the heart of what we are as individuals), that should make you even more worried about moral enhancements. It isn't just oppression that's a risk; external agents or market forces could simply incentivize you to change it.
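To make that criterion concrete, here's a minimal toy sketch in Python - the action names, the numbers, and the assumption that feasibility can depend on which function you hold are all just made up for illustration:

```python
# Toy model of the argument above: judged by your CURRENT utility function F,
# adopting a new function F_star only pays off if the action you'd take afterwards
# (x_star) scores higher under F than the best action available while you keep F (x).
# Assumes a finite action set; which actions are feasible may depend on the function you hold.

def best_action(utility, feasible_actions):
    """The feasible action that maximizes the given utility function."""
    return max(feasible_actions, key=utility)

def should_switch(F, F_star, feasible_with_F, feasible_with_F_star):
    """Return True iff F(x_star) > F(x), i.e. switching is rational by F's own lights."""
    x = best_action(F, feasible_with_F)                  # what I'd do if I keep F
    x_star = best_action(F_star, feasible_with_F_star)   # what I'd do after switching
    return F(x_star) > F(x)

# The "job" example: F values having the job, but holding F makes getting hired infeasible.
F = {"have_job": 10, "stay_unemployed": 2}.get
F_star = {"have_job": 10, "stay_unemployed": 0}.get

print(should_switch(F, F_star,
                    feasible_with_F=["stay_unemployed"],
                    feasible_with_F_star=["have_job", "stay_unemployed"]))
# True: F(x_star) = 10 > F(x) = 2. Make "have_job" feasible while keeping F and the
# check returns False - the no-external-pressure case where no rational agent switches.
```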
Arenamontanus
Alkahest wrote:
For some reason I see Xenomorphs wheezing "Wee looove yooou" as they saw off your head for uploading - after having rewired your brain to make you a major masochist.
Maybe this is what the TITANs did?
Quote:
Utilitarianism! Being nice has never been so creepy!
That sounds like a slogan I have to use in the office.
Extropian
Alkahest
Smokeskin wrote:
But let us assume that a perfect rational enhancement exists. With it you would remove all your biases, you'd be able to model your future mental states and sum their utility over time with correct discounting, you'd have perfect self-discipline and the ability to set up mental commitment devices, etc. This is a very different human being for sure, but it would still have a utility function left that contained the essence of your moral compass (among other things).
As you correctly say, rationality (which I think we can without much controversy assume is the same as VNM-rationality) can be defined as "to be able to maximize the expected value of your utility function". Assuming the possibility of perfect rationality enhancement presupposes the existence of human utility functions, so using one to prove the other isn't much of a challenge. I think the problem I have with accepting your point of view is this: How do we differentiate between "biases" and moral compasses/utility functions? Are there people who have "racism" as a fundamental utility function, one which values humans sharing their own phenotype more than other humans? If so, how do we decide if someone is a xenophobe because of hir biases or because of hir moral compass?
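Just to spell out the definition we're both leaning on (standard expected-utility notation, nothing beyond the textbook version): a VNM-rational agent with utility function U and beliefs p picks

$$a^{*} \in \arg\max_{a \in A} \; \sum_{s} p(s \mid a)\, U\big(\mathrm{outcome}(a, s)\big)$$

where A is the set of available actions and s ranges over possible states of the world. My worry is exactly that we have no principled way to say which parts of U count as "moral compass" and which parts are just bias that snuck into U or p.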
Smokeskin wrote:
To give a concrete example: the moral enhancement link given in the OP mentioned global warming. I'm personally a humanist (and I expect myself to become a "sentientist" and include other sentient entities once such are discovered or constructed, provided insurmountable conflicts of interest don't arise, and of course our understanding of sentience could undergo a revolution).
Well, if you value all sentient beings you should value pretty much all food animals, since all mammals, birds and even fish that we know of are sentient (assuming that concept has any extension at all, that is, if we can be said to be sentient in the first place). You might be thinking of "sapience", which I personally consider an ill-defined piece of woo-woo with no neurological basis. Then again I don't believe in qualia or propositional attitudes either, so I'm not very charitable when it comes to folk psychology. :-)
Smokeskin wrote:
And I fail to see why anyone would change their utility function. Rationally, you would at any point in time make decisions that maximized your current utility function. By changing your utility function, you would in the future make decisions that maximized the new utility function rather than your current utility function, and this would of course lead to lower expected value according to your current utility function. Ergo changing your utility function reduces the expected value, so no rational agent would do that.
Hmm. Putting aside for one moment the existence or non-existence of human utility functions, haven't you ever fundamentally changed your mind about something, in a way that affected the basis of your moral system? For example, I used to share your opinion about humans being the only beings worth caring about (as well as your political opinions, but I think political opinions are far more "shallow" than such fundamentals about which beings one should care about), but then I simply, well, changed my mind and became one of those preachy, annoying vegans I used to hate so much. Disregarding the question of whether my change in opinion was an improvement or the opposite, I think this raises an interesting question: Is perfect rationality desirable?

As said, I agree with your definition of rationality. A perfectly rational being would always act to maximize hir utility function, and never, ever do anything that could change that utility function, since that would be deeply irrational. But how would society look if everyone were perfectly rational? We would be pretty much stuck with the same utility functions, forever. (At least in the world of EP, where death is largely a thing of the past.) It would be the most conservative society imaginable, and I don't think I would actually want to live in that kind of world. Our ability to fundamentally change our mind about things is VNM-irrational, but probably a blessing.
President of PETE: People for the Ethical Treatment of Exhumans.
Alkahest
Arenamontanus wrote:
Maybe this is what the TITANs did?
Welp, that's what happens when you give David Pearce access to a seed AI. Beware the nice ones...
Quote:
That sounds like a slogan I have to use in the office.
:-)
President of PETE: People for the Ethical Treatment of Exhumans.
Smokeskin
Some very good points Alkahest. I'll need to spend some time pondering them and also on actual work and family interaction, so it might be a few days before I reply, but this page is added to my reading list so I'll get back here even if there isn't any new activity. One thought I'll leave you with though: perhaps our utility functions have or should have a negative term for strict utility function conservation over time, so we'd maximize our expected value by sometimes modifying our function? It seems highly maladaptive to have your utility function locked in like you describe (you're absolutely right that this is the logical consequence). A second-order system, with a higher-level function that evaluates and updates your utility function, doesn't seem to solve the problem, as it just reappears at the higher level.
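To sketch what I mean (my own notation, chosen purely for illustration): build the penalty into the top-level valuation, something like

$$V \;=\; \sum_{t=1}^{T} F_1(x_t) \;-\; \lambda\,\mathbf{1}\big[\,F_t = F_1 \text{ for all } t\,\big], \qquad \lambda > 0,$$

where the x_t are the actions you actually take, driven by whatever function F_t you hold at time t. Revising to a nearby F_2 costs a little on the first term (you act slightly "wrong" by F_1's lights) but saves the penalty, so sometimes revising wins even judged from inside F_1. The catch is that V is itself just a fixed higher-level function, and the lock-in argument reapplies to V unchanged - the same problem one level up.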
NewtonPulsifer
Considering the ease of backup/re-sleeving, super-advanced medical technology, and the ease of memory alteration/psychosurgery - wouldn't an ideal utilitarian modification in, say, an extropian society be to become *more* sociopathic? Because the costs of your potential anti-social actions are inexpensively mitigated versus the profit.
"I fear all we have done is to awaken a sleeping giant and fill him with a terrible resolve."- Isoroku Yamamoto
Alkahest
Smokeskin wrote:
Some very good points Alkahest. I'll need to spend some time pondering them and also on actual work and family interaction, so it might be a few days before I reply, but this page is added to my reading list so I'll get back here even if there isn't any new activity.
This discussion has been a pleasure so far, I'm looking forward to it!
Smokeskin wrote:
One thought I'll leave you with though: perhaps our utility functions have or should have a negative term for strict utility function conservation over time, so we'd maximize our expected value by sometimes modifying our function? It seems highly maladaptive to have your utility function locked in like you describe (you're absolutely right that this is the logical consequence). A second-order system, with a higher-level function that evaluates and updates your utility function, doesn't seem to solve the problem, as it just reappears at the higher level.
Well, the problem is finding a way to maximize our expected value by modifying our utility function when "value" itself is defined by said utility function. Maybe the only way to not create a dystopic scenario while still reaping the benefits of rational actors is to have perfectly rational beings controlled by irrational beings, who change the utility functions of the rational beings in a fundamentally irrational way, but without the rational beings ever being aware of it. Or maybe every mind should contain rational and irrational subsystems. Or maybe there could be a way to tie utility function modification to background radiation, creating a kind of "evolution by natural selection" via random "mutations". Urgh.
NewtonPulsifer wrote:
Considering the ease of backup/re-sleeving, super-advanced medical technology, and the ease of memory alteration/psychosurgery - wouldn't an ideal utilitarian modification in, say, an extropian society be to become *more* sociopathic?
From an egoistic utilitarian perspective, sure. I hope that most people don't want to be egoists, however. Then again, I'm a hippie.
President of PETE: People for the Ethical Treatment of Exhumans.