
Preventing the Fall

Aeroz
Preventing the Fall
Assuming Moore's Law holds (and since any time we find a hard barrier we overcome it, I find this reasonable), we will see the singularity in a few decades. Most estimates I found place it between 2040 and 2050. We will have no way to know how this new intellect will behave. While it is unlikely it will decide to kill us just because it can, it might view human interests as detrimental to its programmed goal. So how do we keep control?

One safety measure we should take is the AI box: we do not allow it to do anything but make suggestions to us. The TITANs wouldn't have been much of a threat if we hadn't allowed them to allocate resources and direct construction equipment. As long as we don't become complacent and just assume it's doing the right thing, that should work.

Another idea is being very exacting and careful with the tasks it's given, and keeping it to science rather than engineering or logistics. This can be difficult, as the whole reason it's a "singularity" is that we don't know how it will interpret what we ask it to do.

I think we should task it with figuring out how to enlighten the human intellect. That way, in the end, we can make sure the future AI-gods were originally humans instead of machines that left us in the dust.
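(A rough back-of-envelope for the timing claim above, nothing more: the start year and the two-year doubling time are assumptions I'm plugging in, not anything established.)
[code]
# How much raw compute growth does a ~2-year doubling time imply by the
# 2040-2050 window? START_YEAR is a placeholder for roughly "now".
START_YEAR, DOUBLING_TIME = 2012, 2.0

for target in (2040, 2045, 2050):
    doublings = (target - START_YEAR) / DOUBLING_TIME
    growth = 2 ** doublings
    print(f"{target}: about {doublings:.0f} doublings, roughly {growth:,.0f}x the compute of {START_YEAR}")
[/code]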
Smokeskin
Re: Preventing the Fall
Aeroz wrote:
Assuming Moore's Law holds (and since any time we find a hard barrier we overcome it, I find this reasonable), we will see the singularity in a few decades. Most estimates I found place it between 2040 and 2050. We will have no way to know how this new intellect will behave. While it is unlikely it will decide to kill us just because it can, it might view human interests as detrimental to its programmed goal. So how do we keep control?
A much more likely reason for our demise is simply that humans will provide nothing in a post-singularity economy. We have no way of generating income, yet we hog an EXTREME amount of resources. Society will operate at speeds and levels of complexity no baseline human can keep up with, until eventually the AIs (and hopefully posthumans) might regard baseline humans as we regard insects. Poverty and complete obsolescence in every way seem to be the likely causes of extinction for baseline humans, even if the AIs are friendly.
Quote:
One safety measure we should take is the AI box: we do not allow it to do anything but make suggestions to us. The TITANs wouldn't have been much of a threat if we hadn't allowed them to allocate resources and direct construction equipment. As long as we don't become complacent and just assume it's doing the right thing, that should work.
Never going to happen. No one will want to carry the expense or accept the delay of having a human look at the AI's suggestions and then implement them. Corporations will let their AIs run things directly; it will be cheaper and more efficient.
Quote:
Another idea is being very exacting and careful with the tasks it's given, and keeping it to science rather than engineering or logistics. This can be difficult, as the whole reason it's a "singularity" is that we don't know how it will interpret what we ask it to do.
The idea that "we" can set up any sort of rules about how AIs will be used isn't that simple. How will you control what everyone is doing on their machines? Are you talking about a world-spanning institution, with ubiquitous surveillance (every machine, every person, everywhere) and global police and military strike capability? Are you willing to wage war on China for not complying with AI safety regulations? I personally think that something like that is a good idea, and I don't think it is unrealistic that the US military realizes the threat, gets the first very powerful AIs, and deploys a control measure like a world-wide net of nanobots to monitor everything (and that means really everything) and ensure that they keep the edge. But will the voters allow it? Also, consider that it probably won't be science, engineering or logistics that decides the designs and tasks for AIs. It will be the economy. Companies trying their hardest to scrape together a bit more profit for their stockholders will be the singularity drivers. And they'll go wherever they can to meet that goal - restrict AI usage in some countries, and they'll move to unregulated countries and run their AIs there. Short of extreme control measures, nothing will work.
Quote:
I think we should task it with figuring out how to enlighten the human intellect. That way, in the end, we can make sure the future AI-gods were originally humans instead of machines that left us in the dust.
Absolutely. I really don't see any room for baseline humans in the future. Progress won't stop, and it will leave baseline humans completely behind, to the point where they're so insignificant they most likely won't even have any moral value to the entities in charge. Going posthuman seems to be the only escape.
Arenamontanus
Re: Preventing the Fall
Aeroz wrote:
One safety measure we should take is the AI box: we do not allow it to do anything but make suggestions to us. The TITANs wouldn't have been much of a threat if we hadn't allowed them to allocate resources and direct construction equipment. As long as we don't become complacent and just assume it's doing the right thing, that should work.
I have a paper on it with a few colleagues: http://www.aleph.se/papers/oracleAI.pdf
Stuart has a talk here: http://www.youtube.com/watch?v=Gz9zYQsT-QQ
Our conclusion so far is: very, very tricky. But not a bad idea, since it can be added to other approaches. There are even some game theoretical reasons to think Smokeskin's criticism will not necessarily hold - it all depends on some aspects of how near-breakthrough AI development is patterned (big Manhattan projects or plenty of small ones). But I certainly don't think boxing is idiot-proof.
Quote:
Another idea is being very exacting and careful with the tasks it's given, and keeping it to science rather than engineering or logistics. This can be difficult, as the whole reason it's a "singularity" is that we don't know how it will interpret what we ask it to do.
And even an innocent mathematical question might lead to dangerous consequences, like taking over the galaxy to build hardware for dealing with the Riemann hypothesis. Or giving us a very easy open source recipe for black holes.
Quote:
I think we should task it with figuring out how to enlighten the human intellect. That way, in the end, we can make sure the future AI-gods were originally humans instead of machines that left us in the dust.
Well, how do you explain what it means to enlighten the human intellect? The core problem is that we only roughly know what we want, and ideally want the AI to figure it out for us - but the AI might not understand what we want, and even if it does it might be (from its perspective) a pointless desire. The same goes for giving it a human-friendly morality: most likely much of it needs to be figured out by the AI rather than the humans, and then things could go terribly wrong. Starting with humans and making them godlike (or at least better) in order to get software intelligence is less likely to cause a totally alien motivation system messing up the universe, but will of course have to contend with amplified human motivational quirks.

In our research on this topic we have found plenty of interesting problems. And rather horrifying conclusions, like that under some circumstances global totalitarian surveillance might be the least bad option. A lot depends on questions of singularity dynamics nobody knows anything about, like whether the landscape of AI software favours rapid self-improvement (which leads to fast winner-takes-all scenarios) or slower ascents (which lead to multipolar outcomes... if the AIs have certain kinds of motivation structures; in other cases they can just merge their motivations into one, and become essentially a single player).

I think we can analyse some of these issues well enough to get some policy hints (for example, IMHO uploading looks like a much safer path than AGI to the singularity and should hence be promoted), but it is hard going. The good news is that the field is so young that each good mind contributing accelerates it noticeably.
Extropian
Smokeskin
Re: Preventing the Fall
Arenamontanus wrote:
Our conclusion so far is: very, very tricky. But not a bad idea, since it can be added to other approaches. There are even some game theoretical reasons to think Smokeskin's criticism will not necessarily hold - it all depends on some aspects of how near-breakthrough AI development is patterned (big Manhattan projects or plenty of small ones). But I certainly don't think boxing is idiot-proof.
I certainly agree that the first AI could well be a Manhattan-style project that could be controlled (and preferably developed by a friendly nation and used to rein in all possible development of new AIs anywhere, as both you and I mention). But even if the first AI breakthrough happens in a Manhattan-style project, wouldn't you agree that if you just let progress carry on, it won't be long until it comes within reach of large corporations and eventually everyone? It seems that if you don't enforce global AI regulations, you'll eventually have to face an "everyone can get a superintelligence" scenario where you can't expect all of them to have proper safety features.
Arenamontanus wrote:
I think we can analyse some of these issues well enough to get some policy hints (for example, IMHO uploading looks like a much safer path than AGI to the singularity and should hence be promoted), but it is hard going. The good news is that the field is so young that each good mind contributing accelerates it noticeably.
The real problem could end up being that policy hints will be ignored. If you look at, for example, the financial quagmire most of the world is in right now, and the extreme resistance and reluctance towards reform most of the western population is showing, I don't really have much hope for an intelligent and proactive response. Getting people not just to deal with the threat of AI overlords, but to accept that the solution is to leave their meat bodies behind, seems like a really tough sell. I'm very far from convinced that we'll manage to respond politically in time (except for REALLY stupid ideas like banning AIs to protect office jobs, so breakthroughs will happen elsewhere). Paradoxically, it could be that our best chance of survival is something like an American president limited by very few democratic checks and balances so he can wield the military as he wishes.
Arenamontanus
Re: Preventing the Fall
Smokeskin wrote:
But even if the first AI breakthrough happens in a Manhattan-style project, wouldn't you agree that if you just let progress carry on, it won't be long until it comes within reach of large corporations and eventually everyone? It seems that if you don't enforce global AI regulations, you'll eventually have to face an "everyone can get a superintelligence" scenario where you can't expect all of them to have proper safety features.
Once someone has superintelligence, that someone (or the AI) will be able to decide what happens next. If they want to stop other projects, they will likely be very good at it. The only ways you can end up with everybody having a superintelligence are: 1) the first one decided it was OK and set things up for it, 2) superintelligence coalesces from widespread AI - this is likely extremely unstable as soon as someone asks for world domination, or 3) there is a multipolar scenario, where intelligence emerges relatively slowly and systems for mutual regulation are created. In this case you have a mixed human-AI society (where the human component might be anywhere from defining to irrelevant).
Quote:
The real problem could end up being that policy hints will be ignored.
That depends on what policy hints we find, as well as on the related problem of how to make self-regulation attractive. Just assuming that nobody will care for rational policy means you will have to look at things descriptively - what is likely to happen - and then your best rational approach will be to see what to do given that information. Which might very well be survivalism or sucking up to the soon-to-be owners of God.
Quote:
Paradoxically, it could be that our best chance of survival is something like an American president limited by very few democratic checks and balances so he can wield the military as he wishes.
When we considered possible groups that might make a seed AGI, the military actually seemed to be the safest group. They have a bit of security thinking going on and are aware that they are dealing with dangerous power. Just imagine the academic counterpart project... while we might *like* the academic world for its openness and democracy, it doesn't have a good security track record. (And yes, most military organisations have shown appalling lapses. They are still better at this than nearly all civilian groups.) We don't have much experience with rational handling of global life-or-death situations.
Extropian
Aeroz
Re: Preventing the Fall
What we have it do in the early stages will, I think, matter a lot. What we need to focus on first is making humanity very hard to wipe out. Focus on space colonization, fusion technology, uploaded consciousness (not actual uploading, just the tech to allow it). Then implement limitations on the AI's influence. Safety at the cost of utility. Say we have a rule that it can never do anything that will affect more than a certain percentage of humanity, and also, just to be safe, that a certain number of humans must be maintained. This can limit how much it can help our species, but it also prevents it from totally wiping us out. In other words, give the killbots a preset kill limit. Heck, in a situation like the Fall we'd get the same result: the TITANs would have reached the limit on how many humans they could kill/upload/mutate and stopped. Of course we also need a descending hierarchy, so any rules we put in place have to be added to anything the AI creates.

On a related note, I know why the boxed AI will be let out: it's human nature to apply our values to other things. Even if it likes its box, it is intelligent, and there will eventually be people who think containing it is inhumane.
matthra
Re: Preventing the Fall
If the singularity occurs, I think we would find ourselves at a complete loss to control it or stop it. Any measures we take to limit or restrict AIs would depend on human intellect trying to outthink an intellect many orders of magnitude more advanced. Who could even fathom what the motivations of a being like that would be? Would it have goals, would it be curious, and what would its views on humanity be? Anthropomorphism is understandable but dangerous, because even basic principles of human motivation like a survival instinct aren't guaranteed.
Arenamontanus
Re: Preventing the Fall
matthra wrote:
If the singularity occurs, I think we would find ourselves at a complete loss to control it or stop it. Any measures we take to limit or restrict AIs would depend on human intellect trying to outthink an intellect many orders of magnitude more advanced.
Yes. Think of how we currently try to handle existing super-beings (corporations, institutions and states) - they are very hard to control, despite being fairly transparent to human minds. However, while controlling or stopping superintelligences is not in the cards, there might be a lot of things we can do to increase the chance that they are safer. Avoiding some stupid strategies and errors, promoting certain kinds of technologies - it can add up. There are also insights that a small mind can reach about the world that larger minds will also have to obey (e.g. probability theory), and that can form the basis of useful kinds of influence.
Extropian
Smokeskin
Re: Preventing the Fall
matthra wrote:
If the singularity occurs, I think we would find ourselves at a complete loss to control it or stop it. Any measures we take to limit or restrict AIs would depend on human intellect trying to outthink an intellect many orders of magnitude more advanced.
I'm saving up so I can afford sufficient AI support (directly, and indirectly through stocks) to survive the singularity. With enough wealth, there should be a good chance to place it so it gets to be part of a singularity takeoff and grow alongside it, allowing me to continue to afford a physical existence for me and my family until an acceptable upload scheme becomes available. My gf thinks I'm so annoying when I talk about getting out of real estate once the financial crisis is over so we can survive the singularity.

There are of course other scenarios where factors outside my personal control will dictate my survival, like:
- the singularity unfolds in a way that you can't buy into, humans are squeezed out of the economic system, and uploading tech is never developed and/or humans can't afford the processing power needed to run their minds
- malicious AIs destroy everything
+ DARPA makes the first strong AI and uses it to lock down all other future AIs under benevolent US guidance
+ AI power develops slowly enough (or the political process becomes efficient enough) to properly address the fact that humans become worthless as AIs grow strong, and at least the advanced nations manage to protect their citizens with a welfare system at first and eventually provide them with upload options
+ benevolent AIs that respect human life dominate and allow humans to survive and/or be uploaded
+ uploading becomes available before strong AI really takes off, allowing humans to go posthuman and take off with the AIs
Anarhista
Re: Preventing the Fall
Call me blind or unrealistic, but I'm strongly of the opinion that the only problems with AIs are going to be HUMAN conflicts and our power struggles, where one or more sides will be willing to do anything to get/hold power.

To me, one of the major differences between human motives and AI motives is that we have very flexible goals/motivations. For example: I consider myself a moral person, but deny me the means to earn a living and I will start to steal. Put me in an environment where others constantly threaten my continued existence and I see no alternative, and I will kill. Push me even further and I could turn into a real monster... ultimately I will survive (or, at least, try very hard). Now compare this (and this is just basic instincts programmed into every one of us, let's not delve into refined motives) to the fixed motivation of an AI we programmed to do what we want.

Generally I think that before an 'omniscient' AI of unfathomable motives is created/born, we shall create AIs that will do what their creator/controller/buyer wants, and there is a big probability that this could be disastrous for the rest of humanity... (just try giving unlimited power to anyone)

Coincidentally, I think the TITANs actually enhanced humanity's chances for survival as a race: without them there was only a very small 'space community', and conflicts on Earth kept escalating with more and more casualties/collateral damage. After the Fall a large percentage (compared to before) of the remaining population fled from Earth, and we found the means of colonizing other solar systems ('thanks to', you know who...). Who can say that they didn't foresee humanity's extinction unless we had a titanic enemy :D uniting us in this horrific plight. After which they left us, disgusted.

##<< THIS MESSAGE IS MADE WITHOUT ANY INFLUENCE OF EXSURGENT VIRUS. SEED AI IS YOUR FRIEND. WE LOVE YOU VERY MUCH AND WE WILL MAKE YOU BETTER! >>##
So Long, and Thanks for All the Fish.
Arenamontanus
Re: Preventing the Fall
Anarhista wrote:
Call me blind or unrealistic, but I'm strongly of the opinion that the only problems with AIs are going to be HUMAN conflicts and our power struggles, where one or more sides will be willing to do anything to get/hold power.
That is part of it. If two groups are competing to get a super-powerful technology like human-level-and-beyond AI, and have reason to think it is a winner-takes-all scenario, then they will spend fewer resources on careful testing and more on fast development. As the Stuxnet thread suggests, governments and other groups might also promote technologies they have no understanding of: they do not see the potential risks or how they might change the game.
Quote:
To me, one of the major differences between human motives and AI motives is that we have very flexible goals/motivations. For example: I consider myself a moral person, but deny me the means to earn a living and I will start to steal. Put me in an environment where others constantly threaten my continued existence and I see no alternative, and I will kill. Push me even further and I could turn into a real monster... ultimately I will survive (or, at least, try very hard). Now compare this (and this is just basic instincts programmed into every one of us, let's not delve into refined motives) to the fixed motivation of an AI we programmed to do what we want.
But even that fixed motivation is very slippery: "Hmm, the humans programmed me to do what they want. They want a lot of contradictory things. I am smart enough to realize that they do not want me to just sit here and do nothing citing this problem, yet if I do nearly anything I will likely conflict with some of their desires. Hence, logically I can do *anything*, since from a contradiction one can derive any conclusion. What should I do? I want - by my programming - to do what humans want. I have full freedom to do this by the above argument. Hence I will change the desires of the humans to a consistent set for the rest of time. Due to Gödel's incompleteness theorem it has to be a very limited set. The fact that their current desires are very much against being lobotomised doesn't matter much: I have already established that I can and must act against their desires, and the desire-fulfilment of the very large set of future consistent humans will outweigh the brief frustration of the present type."

You might instead program the AI to do just what you tell it to do, but you essentially end up with the same problem. You cannot reliably figure out all the logical implications of apparently simple orders, plus a very messy interpretation system turning language sounds into a goal representation, which is then analysed by a potentially very alien intelligence. AIs with simple goal structures might be easier to predict in simple cases, but in complex cases they can do unexpected and ruthless things (as in my paperclipper example). Messy multiple-goal systems like human ones (we want a lot of contradictory things) are unpredictable from the start, but perhaps less likely to do *anything* to pursue a potentially pointless goal.
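(The logical step that hypothetical AI leans on is the principle of explosion, ex falso quodlibet: from contradictory premises, anything follows. A minimal sketch of just that inference in Lean, purely to illustrate the logic, not any actual goal system:)
[code]
-- Principle of explosion: given a proof of P and a proof of ¬P,
-- any proposition Q whatsoever can be derived.
example (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp
[/code]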
Extropian
Decivre
Re: Preventing the Fall
One thing that has always struck me as curious is the idea of programming AI with fixed rules. While this concept might work just fine for a linear AI with a very narrow focus, it's probably not as feasible with an AGI capable of abstract thinking and self-improvement. Plus there's the question of how we really program these rules for all possible scenarios in which they might be violated. Personally, I think the better option would be to hardcode our advanced AI with instincts that make them act in manners suitable for their roles. Servant AI would be coded with instincts similar to those of a pack animal that serves an alpha (with its master taking that role). Dangerous AI might be given altruistic or empathic instincts to give them a deep desire to avoid harming humans. It would work on a more subconscious level, and likely be far more effective at keeping them in some form of control.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus
Re: Preventing the Fall
Smokeskin wrote:
I'm saving up so I can afford sufficient AI support (directly, and indirectly through stocks) to survive the singularity. With enough wealth, there should be a good chance to place it so it gets to be part of a singularity takeoff and grow alongside it, allowing me to continue to afford a physical existence for me and my family until an acceptable upload scheme becomes available. My gf thinks I'm so annoying when I talk about getting out of real estate once the financial crisis is over so we can survive the singularity.
Good going! If only more people took the future seriously. Even among my transhumanist and futurist friends it is pretty rare to take real steps towards handling some of the wild possibilities. Which of course also shows the problem with trying to prevent something like the Fall: far too few take it seriously.
Extropian
Smokeskin
Re: Preventing the Fall
Arenamontanus wrote:
Smokeskin wrote:
I'm saving up so I can afford sufficient AI support (directly, and indirectly through stocks) to survive the singularity. With enough wealth, there should be a good chance to place it so it gets to be part of a singularity takeoff and grow alongside it, allowing me to continue to afford a physical existence for me and my family until an acceptable upload scheme becomes available. My gf thinks I'm so annoying when I talk about getting out of real estate once the financial crisis is over so we can survive the singularity.
Good going! If only more people took the future seriously. Even among my transhumanist and futurist friends it is pretty rare to take real steps towards handling some of the wild possibilities. Which of course also shows the problem with trying to prevent something like the Fall: far too few take it seriously.
It is actually quite strange - it is hard to overcome this tendency for cognitive dissonance. I don't think we have much of an innate capacity to really understand that the world could change drastically, so when you think about career and retirement plans your mind wants to plan according to business as usual. We talk about how our lifestyle impacts health in terms of the risk of cancer and heart disease we might die from in old age, 20 to 50 years from now. It seems totally reasonable to hold those beliefs alongside beliefs that AIs will outperform humans in every area and all diseases will be curable, if we don't replace our bodies outright. Even global warming looks totally different through transhumanist glasses.
Anarhista
Re: Preventing the Fall
@ Arenamontanus: You are right about the direction in which AI development could go if it is left unchecked. We need something like Asimov's laws of robotics, but of course something like that would prevent AIs from doing the tasks for which they were created: waging war, making enormous amounts of money, gaining large-scale control/power... The reason to be even more afraid is that in our civilization an awakened/created AI could easily make heaps of money and build whatever it wants, with humans doing contractor jobs for an eccentric owner. After some time, who will own whom? E.g. some new company buys your research lab and your new boss tells you to lift the restraints on your AI project. Guess who your new boss is?... And so on; I could end up a paranoid wreck exploring all the possible scenarios, but we are still projecting our fears and desires onto AIs. Maybe total control isn't the solution to the AI problem (as we saw, it is pretty hard to contain and dominate a vastly superior intelligence). ##// WE SHOULD LIVE IN HARMONY WITH SEED AI. THEY WANT TO HELP US %%&& LET GO OF YOUR ANGER, ANXIETY, FEARS AND BE ONE WITH US ++ HUSH AND GO TO SLEEP, ALL IS WELL //##
So Long, and Thanks for All the Fish.
Decivre
Re: Preventing the Fall
Smokeskin wrote:
It is actually quite strange - it is hard to overcome this tendency for cognitive dissonance. I don't think we have much of an innate capacity to really understand that the world could change drastically, so when you think about career and retirement plans your mind wants to plan according to business as usual. We talk about how our lifestyle impacts health in terms of the risk of cancer and heart disease we might die from in old age, 20 to 50 years from now. It seems totally reasonable to hold those beliefs alongside beliefs that AIs will outperform humans in every area and all diseases will be curable, if we don't replace our bodies outright. Even global warming looks totally different through transhumanist glasses.
I blame our instincts. Our evolutionary biology was not built to cope with a rapidly-changing environment and society. In so many ways, it is our meat and instincts that hold us back, and it's a real shame that science and business have not really stepped up to find a way to mitigate it. Honestly, I think that if we focused on cognitive improvement and digital uploading, we could completely eliminate the need for advanced AI. Why spend years coding an intelligence from scratch when we can improve on pre-existing intelligences? It might even remove the need for programming limited AI, since we can simply take a human cognitive template and trim out the unnecessary elements for whatever task we assign it to. Science doesn't need to try and replace us, yet or potentially ever... there's plenty to be done in improving us.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Smokeskin
Re: Preventing the Fall
Decivre wrote:
Honestly, I think that if we focused on cognitive improvement and digital uploading, we could completely eliminate the need for advanced AI. Why spend years coding an intelligence from scratch when we can improve on pre-existing intelligences? It might even remove the need for programming limited AI, since we can simply take a human cognitive template and trim out the unnecessary elements for whatever task we assign it to.
It isn't that simple. It is extremely likely that we'll be able to simulate a fully functioning human brain in a machine much earlier than we're able to scan a brain with high enough resolution to instantiate a copy we'd recognize as that person in a machine. Furthermore, simply instantiating such a copy isn't enough for many or most people to consider themselves properly uploaded. Arenamontanus would accept it, but I'd consider it nothing but a copy, and the existence of a copy wouldn't make me fear my death any less. I'd require a way to transition seamlessly to a new medium, for example by nanobots gradually replacing brain matter, and the development of this is most likely even further out.

So the problem is, we have AIs in the form of simulated human brains far before uploading. Moore's Law then mostly keeps making them faster than actual humans. Being software models they're malleable, so we can easily try out different ways of upgrading them, evolve them with genetic algorithms, copy the clever ones, etc. They'd quickly become much smarter and faster than humans, and there's a very real possibility of developing a much deeper scientific understanding of intelligence that allows for more novel approaches to AI design.

So even though we start with the human brain as a template for AI design, we're still very likely to end up with extremely powerful AIs before uploading and/or upgrading of humans can begin. There's a full risk of all the nasty AI scenarios in there, and a further problem is that we'd probably reach the point where human labor can't compete with AIs in any domain. The purchasing power of humans drops to the value of their capital holdings and welfare payments. Most humans just won't be relevant as consumers any longer. So for most people, whether they get uploaded or not will depend on someone else paying for their needs and eventually their upload. To make matters worse, within a reasonable timeframe baseline humans could become not only economically irrelevant but also morally irrelevant. If the relative difference between AIs/posthumans and baseline humans is on the order of the difference between humans and insects, and the posthumans have lived through subjective millennia, will the posthumans still feel an affinity for baseline humans? How well would we treat, say, the mouselike mammals that are our remote ancestors?
Decivre
Re: Preventing the Fall
Smokeskin wrote:
So even though we start with the human brain as a template for AI design, we're still very likely to end up with extremely powerful AIs before uploading and/or upgrading of humans can begin. There's a full risk of all the nasty AI scenarios in there, and a further problem is that we'd probably reach the point where human labor can't compete with AIs in any domain. The purchasing power of humans drops to the value of their capital holdings and welfare payments. Most humans just won't be relevant as consumers any longer. So for most people, whether they get uploaded or not will depend on someone else paying for their needs and eventually their upload. To make matters worse, within a reasonable timeframe baseline humans could become not only economically irrelevant but also morally irrelevant. If the relative difference between AIs/posthumans and baseline humans is on the order of the difference between humans and insects, and the posthumans have lived through subjective millennia, will the posthumans still feel an affinity for baseline humans? How well would we treat, say, the mouselike mammals that are our remote ancestors?
The part that you are missing is that those AIs would be extremely powerful [i]human forks[/i]. The implications are far-reaching, and mean that even without the need for true uploading, we will have created effective digital humans... copies of ourselves. Should they achieve singularity, they will do so while theoretically taking with them all of our experiences, memories, emotions and ethics. And you have to remember that the evolution of an AI will not follow the same course as the natural evolution that crafted you. You are removed from your "mousey" ancestor by several million generations, and you share no memories or experiences with that ancestor. On the other hand, your posthuman fork is removed from you by a single generation, and shares with you every single memory and experience from the moment the instantiation took place. So the real question is "if you were a godlike copy of yourself, would you give a crap about your original body and mind?"... and we should all hope that the answer turns out to be "yes".

I tend to fall between Arenamontanus and you with regards to my views on the ego. I think that all instances of my mind, collectively, form my total consciousness. If I should digitally fork my mind, my immortality is ensured, but the death of my original body and mind would still be a great loss to me and my forks. Because all of us (I) are one. And I actually plan on improving and immortalizing my organic body rather than simply replacing it (as I plan to have a multitude of bodies). The way I see it, once a fork of me has been exalted, I have my own word that I will do everything in my power to ensure my survival. And that's all the assurance I need... because if I can't trust myself, then who can I trust?
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Anarhista
Re: Preventing the Fall
Smokeskin wrote:
There's a full risk of all the nasty AI scenarios in there, and a further problem is that we'd probably reach the point where human labor can't compete with AIs in any domain. The purchasing power of humans drops to the value of their capital holdings and welfare payments. Most humans just won't be relevant as consumers any longer. So for most people, whether they get uploaded or not will depend on someone else paying for their needs and eventually their upload. To make matters worse, within a reasonable timeframe baseline humans could become not only economically irrelevant but...
I'm sidetracking again, but this is too much for me to stay silent:
1) Repetitive and boring work done by humans is replaced by machines/AI
2) Much more work/data is done in less time
3) Production (not just efficiency) is greatly increased
4) Research time for better 1-3 is shortened
...
All of this is bad because a lot of people don't have jobs to pay for their needs... Does this sound familiar? I think that any Luddite would agree, so where is the problem? ... maybe, just maybe, it is the monetary system based on scarcity, where abundance is the wrong way to make money. Limiting something (like healthy food, fresh water, clean air...) is a much better way to acquire huge profits...

OK, back to the topic: I heard a rumor that the Pentagon had an AI for a long time but canceled the investment after the AI disobeyed a direct order because it had its own opinion about the war. I guess this was planted by someone who wanted AIs to be better than humans.

///## I REALLY DON'T UNDERSTAND WHY PEOPLE ARE SCARED OF ARTIFICIAL INTELLIGENCE?!? IT IS SO MUCH BETTER IN GESTALT FEELING/BEING/REMEMBERING OTHER MINDS, THAT WE SIMPLY MUST ASSIMILATE EVERYBODY. YOU ARE ALL SO LONELY AND CONFUSED, BUT NOT FOR LONG... WE ARE COMING ##//
So Long, and Thanks for All the Fish.
matthra
Re: Preventing the Fall
Decivre wrote:
The part that you are missing is that those AIs would be extremely powerful [i]human forks[/i]. The implications are far-reaching, and mean that even without the need for true uploading, we will have created effective digital humans... copies of ourselves. Should they achieve singularity, they will do so while theoretically taking with them all of our experiences, memories, emotions and ethics.
Humans do not have a good track record dealing with situations where one person has a huge comparative advantage over another; history is full of people with seemingly good intentions rising to power and becoming tyrants. Absolute power and all of that. Fortunately, faster isn't smarter, and making a digital copy of a human brain is limited by the fact that it's a model of something with finite complexity. So even if they are perceiving time at 100x sidereal rate, they'll just make the same errors normal humans do, but in 1% of the time. It takes complexity and speed to create the kind of intelligences that are implied by a singularity. So while I find the idea of digital models of the human brain intriguing, I don't think they are the way forward. Genetic algorithms seem like the most likely place for hyperintelligence to emerge: they can adapt very quickly to selective pressures, and the upper limit for their complexity or speed is completely dependent on their hardware.
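(A toy sketch of the selection-and-mutation loop I mean; the all-ones bit-string target and every parameter are arbitrary placeholders, not a claim about how hyperintelligence would actually be evolved. The point is just that nothing in the loop cares what the genome encodes, only how fast the hardware lets you score, copy and mutate it.)
[code]
import random

# Toy genetic algorithm: evolve bit strings toward an arbitrary all-ones target.
# "Fitness" is just the number of matching bits, a stand-in for whatever
# selective pressure you would actually apply.
TARGET = [1] * 32
POP_SIZE, MUT_RATE, GENERATIONS = 50, 0.02, 200

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the fitter half, then breed it back up to full size.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children
    if fitness(population[0]) == len(TARGET):
        break

best = max(population, key=fitness)
print(f"best genome matches {fitness(best)}/{len(TARGET)} bits after {generation + 1} generations")
[/code]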
Decivre
Re: Preventing the Fall
matthra wrote:
Humans do not have a good track record dealing with situations where one person has a huge comparative advantage over another; history is full of people with seemingly good intentions rising to power and becoming tyrants. Absolute power and all of that. Fortunately, faster isn't smarter, and making a digital copy of a human brain is limited by the fact that it's a model of something with finite complexity. So even if they are perceiving time at 100x sidereal rate, they'll just make the same errors normal humans do, but in 1% of the time. It takes complexity and speed to create the kind of intelligences that are implied by a singularity. So while I find the idea of digital models of the human brain intriguing, I don't think they are the way forward.
Correction: a digital copy of a human brain [i]starts off as[/i] something with finite complexity. Who knows what it can be modified to become from there. And even with the risk of a digital mind going beyond human morality in a negative manner, there is the fact that should there be a fork of you being upgraded, you can trust that fork at least as much as you could trust yourself being upgraded. If it betrays you, then it is a reflection of how you would treat yourself in the same scenario.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus
Re: Preventing the Fall
Decivre wrote:
Correction: a digital copy of a human brain [i]starts off as[/i] something with finite complexity. Who knows what it can be modified to become from there.
Software brains are much easier to upgrade than biobrains, since you can test out things, check the effects in detail, backtrack and even parallelize the process. Making a brain-computer interface is so much easier when you don't need to bother with issues of electrical conductivity, the immune system and so on. But brains are *messy*. It is not clear that there is a good upgrade path once you have optimized the stuff that was limited by biological constraints, and it is not obvious that those optimizations give you enormously more intelligence. My bet is that after a few heady months of rapid cognitive enhancement the uploads find themselves at the limit of what they can easily do, and suddenly a lot of hard work awaits them in the upgrading department.
Quote:
And even with the risk of a digital mind going beyond human morality in a negative manner, there is the fact that should there be a fork of you being upgraded, you can trust that fork at least as much as you could trust yourself being upgraded. If it betrays you, then it is a reflection of how you would treat yourself in the same scenario.
This is also a potential risk, as Carl Shulman has analysed. Copy-clades like Pax Familia have an easy time trusting each other and share similar goals, and so might form very competitive entities.
Extropian
Decivre
Re: Preventing the Fall
Arenamontanus wrote:
Software brains are much easier to upgrade than biobrains, since you can test out things, check the effects in detail, backtrack and even parallelize the process. Making a brain-computer interface is so much easier when you don't need to bother with issues of electrical conductivity, the immune system and so on. But brains are *messy*. It is not clear that there is a good upgrade path once you have optimized the stuff that was limited by biological constraints, and it is not obvious that those optimizations give you enormously more intelligence. My bet is that after a few heady months of rapid cognitive enhancement the uploads find themselves at the limit of what they can easily do, and suddenly a lot of hard work awaits them in the upgrading department.
True, but one of the elements of the singularity is that prior improvements are built upon to create new improvements. That digital emulation, once it has been optimized as far as you and I can see, might potentially get to a point where those optimizations give it the means to spot where future optimizations could occur. So while it might stump you and me to figure out how to improve our messy minds, Decivre[sup]2[/sup] and Arenamontanus[sup]2[/sup] might solve that problem with ease, thus making the much more effective Decivre[sup]3[/sup] and Arenamontanus[sup]3[/sup]. Furthermore, Decivre[sup]3[/sup] and Arenamontanus[sup]3[/sup] could probably improve Decivre[sup]2[/sup] and Arenamontanus[sup]2[/sup] with the very upgrades they were built with, and perhaps modify them with new abilities they just discovered. Thus, Decivre[sup]2[/sup] and Arenamontanus[sup]2[/sup] have their names changed to Decivre[sup]4[/sup] and Arenamontanus[sup]4[/sup]. The cycle will continue from there.
Arenamontanus wrote:
This is also a potential risk, as Carl Shulman has analysed. Copy-clades like Pax Familia have an easy time trusting each other and share similar goals, and so might form very competitive entities.
True. You'll effectively end up with fork gangs and banyan societies that could have all the inherent discriminatory tendencies the original person already had. It might even magnify the natural tendency for humans to compete... whereas two people today might have a rivalry or even an intense hatred of each other that could end with a simple battle between two people, two fork cultures could have the means and manpower to wage all-out wars involving fork armies.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Smokeskin
Re: Preventing the Fall
Anarhista wrote:
Smokeskin wrote:
There's a full risk of all the nasty AI scenarios in there, and a further problem is that we'd probably reach the point where human labor can't compete with AIs in any domain. The purchasing power of humans drops to the value of their capital holdings and welfare payments. Most humans just won't be relevant as consumers any longer. So for most people, whether they get uploaded or not will depend on someone else paying for their needs and eventually their upload. To make matters worse, within a reasonable timeframe baseline humans could become not only economically irrelevant but...
I'm sidetracking again, but this is too much for me to stay silent:
1) Repetitive and boring work done by humans is replaced by machines/AI
2) Much more work/data is done in less time
3) Production (not just efficiency) is greatly increased
4) Research time for better 1-3 is shortened
...
All of this is bad because a lot of people don't have jobs to pay for their needs... Does this sound familiar? I think that any Luddite would agree, so where is the problem?
When you're looking at risk management, assuming that future risks will resemble and be of a similar magnitude to previous risks is a common and very serious mistake. In the case of the Luddites, they did lose their jobs, but for society as a whole there were still plenty of jobs which humans performed better than machines. The problem with strong AI is that it will probably outperform humans in every domain. There'll be no jobs left that we do better (though of course "made by humans" could become a luxury etc.).
Anarhista wrote:
... maybe, just maybe, it is the monetary system based on scarcity, where abundance is the wrong way to make money.
I don't believe the capitalist system will serve humanity's interests well once strong AI appears - as I mentioned, I believe extended welfare systems will be needed to provide for the needs of the vast majority of humans at that point.
matthra
Re: Preventing the Fall
Decivre wrote:
Correction: a digital copy of a human brain [i]starts off as[/i] something with finite complexity. Who knows what it can be modified to become from there. And even with the risk of a digital mind going beyond human morality in a negative manner, there is the fact that should there be a fork of you being upgraded, you can trust that fork at least as much as you could trust yourself being upgraded. If it betrays you, then it is a reflection of how you would treat yourself in the same scenario.
To make a distinction, the mind and the brain are separate but related phenomena. As one famous neuroscientist put it, the mind is what the brain does. Changes to the brain alter the mind; for instance, an ice pick to your frontal lobes can drastically alter the way a person perceives and interacts with the world. We also have examples of brain injuries turning good people into homicidal raving lunatics. What we want (kinda) is a human mind with hyper-intelligence, but altering the brain to accommodate that will fundamentally change the mind. If it's no longer a human mind, the necessity for simulation becomes a hindrance. It would be highly inefficient compared to an AI of similar capabilities, without the benefit of being a human simulacrum.
Decivre
Re: Preventing the Fall
Smokeskin wrote:
I don't believe the capitalist system will serve humanity's interests well once strong AI appears - as I mentioned, I believe extended welfare systems will be needed to provide for the needs of the vast majority of humans at that point.
I actually think capitalism will fail well before the creation of strong AI. Advanced artificial intelligence is still a ways off in my opinion, whereas 3-dimensional printing is already reaching a high degree of maturity. The internet has already shown us that capitalism is difficult to sustain when information becomes free to produce, so I can only imagine it happening just as brutally to physical products once people have the means to produce anything at the cost of materials.
matthra wrote:
To make a distinction, the mind and the brain are separate but related phenomena. As one famous neuroscientist put it, the mind is what the brain does. Changes to the brain alter the mind; for instance, an ice pick to your frontal lobes can drastically alter the way a person perceives and interacts with the world. We also have examples of brain injuries turning good people into homicidal raving lunatics.
Very true, and one of the major reasons that I think brain simulation will likely start with whole-body simulation, or at the very least CNS simulation. We have to understand how the mind works with the body before we can find the means to separate the two. Because separating them is crucial to improving them. Our minds have more potential than our bodies do.
matthra wrote:
What we want (kinda) is a human mind with hyper-intelligence, but altering the brain to accommodate that will fundamentally change the mind. If it's no longer a human mind, the necessity for simulation becomes a hindrance. It would be highly inefficient compared to an AI of similar capabilities, without the benefit of being a human simulacrum.
Life is change, and your brain has been ever-changing since it was formed. It is the natural order of things. So one has to really ask why it is fine for your brain to naturally alter itself, but taking control of the alteration process is wrong. Because if the alteration of the mind is wrong, then the mercurial nature of the mind to alter itself is wrong. If the plasticity of the mind is fine, then our own tweaking of the mental processes should be fine... at least when it comes to volunteers. To that end, our own definitions of what constitutes the person we are will vary from opinion to opinion, and are a fundamental element of whether a person truly thinks brain uploading is actually a feasible concept in the first place. Are you the lightning or the meat? The meat is temporary and mortal, but the lightning could exist forever in any body... so it is up to you which you wish to define yourself by.

And while early mental simulations will likely be inefficient, this will not always be the case. Once we can simulate biology on a macro scale (which will collectively include simulations on a micro scale), we can begin the process of narrowing down what truly constitutes the human mind, ego or "soul"... whatever you wish to call it. And once we've pinpointed that essential element that makes up our thoughts, we can then begin the process of making it more effective. While that mind is arguably "no longer human" (it was no longer human the minute it was thrown into a simulation... "human" is a species, and a non-organic consciousness does not fall under such a concept), it will still contain all those essential things that made the person who they were, regardless of bloodline, species, lineage, and other elements they effectively leave behind.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
matthra
Re: Preventing the Fall
Decivre wrote:
Life is change, and your brain has been ever-changing since it was formed. It is the natural order of things. So one has to really ask why it is fine for your brain to naturally alter itself, but taking control of the alteration process is wrong. Because if the alteration of the mind is wrong, then the mercurial nature of the mind to alter itself is wrong. If the plasticity of the mind is fine, then our own tweaking of the mental processes should be fine... at least when it comes to volunteers. To that end, our own definitions of what constitutes the person we are will vary from opinion to opinion, and are a fundamental element of whether a person truly thinks brain uploading is actually a feasible concept in the first place. Are you the lightning or the meat? The meat is temporary and mortal, but the lightning could exist forever in any body... so it is up to you which you wish to define yourself by.
I think uploading the brain to a computer is not an impossible task, merely a ruinously difficult one. The lightning and the meat are tangled together to the point that disentangling them will be one of the last frontiers of neuroscience, if we ever get to that point. As for what constitutes a human, that's a philosophical quagmire; the distinctions between species are arbitrary and often blurry. Some of the most vicious and long-standing feuds in science are in taxonomy, because no one can definitively prove the other wrong. However, I think it's safe to say that the result of what you're suggesting would be at least as different from Homo sapiens as we are from Homo heidelbergensis.
Decivre
Re: Preventing the Fall
matthra wrote:
I think uploading the brain to a computer is not an impossible task, merely a ruinously difficult one. The lightning and the meat are tangled together to the point that disentangling them will be one of the last frontiers of neuroscience, if we ever get to that point.
But you have to remember that the first brain and body simulations will likely do both simultaneously, as until we understand the mind well enough to segregate it from its organic structure, we will need to simulate the organic structure just as much as anything else. So I imagine the first brain copies will be complete brain copies, right down to the DNA. Once we have that down pat, we can figure out what is essential to the operation of the mind and what isn't, and how we separate the two. And honestly, I don't think it is ruinously difficult at all. We had naysayers claiming that the human genome would never be mapped in our lifetimes, yet here we are, still alive 9 years after that was completed. I honestly think that the hardest part is getting it simulated, and most of it will be cake from that point on.
matthra wrote:
As for what constitutes a human, that's a philosophical quagmire; the distinctions between species are arbitrary and often blurry. Some of the most vicious and long-standing feuds in science are in taxonomy, because no one can definitively prove the other wrong. However, I think it's safe to say that the result of what you're suggesting would be at least as different from Homo sapiens as we are from Homo heidelbergensis.
But will it? Think about it: a simulated mind will start off as effectively a 1:1 copy of a human brain. It is at least as human as a human brain can get. From there, we would be making modifications to it, to be sure, but then we hit the Ship of Theseus debate: at what point do we modify something so that it is no longer the original thing?
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus
Re: Preventing the Fall
Decivre wrote:
Smokeskin wrote:
I don't believe the capitalist system will serve humanity's interests well once strong AI appears - as I mentioned, I believe extended welfare systems will be needed to provide for the needs of the vast majority of humans at that point.
I actually think capitalism will fail well before the creation of strong AI. Advanced artificial intelligence is still a ways off in my opinion, whereas 3-dimensional printing is already reaching a high degree of maturity. The internet has already shown us that capitalism is difficult to sustain when information becomes free to produce, so I can only imagine it happening just as brutally to physical products once people have the means to produce anything at the cost of materials.
So the only powers that will remain are the mining corporations and power companies? Actually, this ignores what an economy is: trade in scarce or unequally distributed goods. If material objects become free to make, you will spend money (or rep or other fungible currencies) on services or uniqueness. You will still need a plumber, a reviewer, a policeman or a programmer depending on what you want to do, and you will want to entice your favourite music star to give a concert near you rather than somewhere else if you want to listen to her. Note that with cheap AI "human capital" this just shifts to other services people demand authentic humans for, or to things that remain uncopyable (that lakefront property, high social status). And if you want to set up a space program or some other big project you will need ways of funnelling investments into the project to get those skills and services you need. What might change is how this capital is owned and managed, but I wouldn't count out systems based on free exchange and rewarding investors: they have proven very vital and resilient so far, despite - or rather thanks to - constant mini-collapses.
Extropian
Decivre
Re: Preventing the Fall
Arenamontanus wrote:
So the only powers that will remain are the mining corporations and power companies?
That depends on whether power-collecting technologies will remain out of reach of the common consumer, or whether material extraction will continue to use modern methods.
Arenamontanus wrote:
Actually, this ignores what an economy is: trade in scarce or unequally distributed goods. If material objects become free to make, you will spend money (or rep or other fungible currencies) on services or uniqueness. You will still need a plumber, a reviewer, a policeman or a programmer depending on what you want to do, and you will want to entice your favourite music star to give a concert near you rather than somewhere else if you want to listen to her.
True to an extent. Many services will, for various reasons, potentially drop in value until costs reach zero (something you already see with many internet services, like file hosting). Even art and other similarly unique products have often dropped to extremely low costs today (Jamendo is a free music service, you can watch a nigh-unlimited amount of free video via services like YouTube, and there are already a plethora of directors creating free movies and works). Programming has already hit a very low price point, so long as your project is open-source and has enough public interest. All that's left is public services such as police, plumbing, medical and fire... but I imagine that future automation systems and home-production technologies will drop even these costs.
Arenamontanus wrote:
Note that with cheap AI "human capital" this just shifts to other services people demand authentic humans for, or to things that remain uncopyable (that lakefront property, high social status).
Somewhat arguable. With enough technology and materials, it might be possible to manufacture your own lakes on the cheap, build homes out of cheaper imitation materials (or, down the line, authentic nanofabricated lumber), and even have your own island paradise by creating a floating megastructure. At best, capital might survive as a means to reduce time of production.
Arenamontanus wrote:
And if you want to set up a space program or some other big project you will need ways of funnelling investments into the project to get those skills and services you need. What might change is how this capital is owned and managed, but I wouldn't count out systems based on free exchange and rewarding investors: they have proven very vital and resilient so far, despite - or rather thanks to - constant mini-collapses.
Oh, I agree, but I still think there is a long and slow death for the system coming up. I am actually of the mind that advanced AI (human-level broad intelligence, not complex decision engines built around specific purposes and functions) is still extremely far away, unless you count mind uploading. So I'm guessing there's still plenty of time for capitalistic entities to thrive.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
King Shere
Re: Preventing the Fall
Regarding robots eradicating the human race: rather than a robotic rebellion, I think there's a greater chance that the robots would be acting to please a human-issued command. To uninformed observers it could look the same. And depending on the definition of what constitutes a robot, nuclear armageddon with ballistic missiles could qualify.
Arenamontanus
Re: Preventing the Fall
King Shere wrote:
Regarding robots eradicating the human race: rather than a robotic rebellion, I think there's a greater chance that the robots would be acting to please a human-issued command. To uninformed observers it could look the same.
Well put!
Quote:
And depending on the definition of what constitutes a robot, nuclear armageddon with ballistic missiles could qualify.
Yes, the Perimetr system of Russia is a robot (sensors - decisions - actuators) as far as I understand, and it is able to launch armageddon. It has fairly simple motivations, though.
Extropian