
Existential Threats

root
Existential Threats
root@Existential Threats

Transhumanity may at this point be approaching a Type I civilization on the Kardashev scale. The TITANs were greater, but did not approach the scale of a Type II civilization before they left. The AGI is a Type III or Type IV, working on scales and densities of information that transhumanity cannot possibly comprehend.

The Factors are an unknown; they might be a single individual composed of far-flung members communicating through quantum-entangled communicators. They may be more interested in the viruses and bacteria inside transhumans than in the transhumans themselves. Their ability to manufacture deadly plagues is unknown, but likely very great. The Iktomi are evidence enough of what can befall an unwary, or merely unlucky, civilization.

Those are just a small handful of known or suspected external threats, some of which may be so alien that no intellectual intercourse can be initiated, let alone maintained. What other forces threaten to extinguish transhumanity?
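For scale, Sagan's continuous version of the Kardashev rating is just a function of total power use. A minimal sketch (the benchmark wattages are rough real-world figures I am assuming for illustration, nothing canonical):

[code]
import math

def kardashev_type(power_watts):
    """Sagan's interpolation: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

# Rough benchmark figures, orders of magnitude only:
benchmarks = {
    "Old Earth, ca. 2000 (~2e13 W)": 2e13,
    "Type I, planetary (~1e16 W)": 1e16,
    "Type II, stellar (~4e26 W)": 4e26,
    "Type III, galactic (~4e36 W)": 4e36,
}

for label, watts in benchmarks.items():
    print(f"{label}: K = {kardashev_type(watts):.2f}")
[/code]

Each full type is roughly ten billion times the power of the one below it, which is why "Type III or Type IV" means working on scales we literally cannot model.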
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Rhyx
Re: Existential Threats
My best bet would be transhumanity versus its own darned self. With two economies competing and threatening each other's way of life, the Ultimates seeing eugenics as a perfectly legitimate way of shaping evolution, and the Jovians' siege mentality insisting theirs is the only true path, humanity is once again its own worst enemy, because so many diametrically opposed views exist among the political factions. All it would take is for one of these factions to gain enough of an energy, manpower, or technological advantage, and I could very well see these philosophical debates turning into a hot war, with people strapping boosters to asteroids and sending them hurtling towards a competitor's habitats. Mankind has always been the weapon of choice for exterminating mankind.
Rhyx
Re: Existential Threats
Another interesting concept I was playing with is uplifts being transhumanity's doom. Imagine that a couple of Mercurial octomorphs go through the Chat Noir Gate, tentacle in tentacle, with enough knowledge to reverse the genetic lockouts that the uplift engineers have put on their reproduction. An octopus can have 5,000 babies. Give the two octomorphs some cornucopia machines and the right exoplanet to live on, and within a minimum of four generations (80 years) their exponential birth rate has transhumanity totally outnumbered.
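A quick back-of-the-envelope sketch of that growth curve (the 50% survival rate is purely my assumption; wild octopus survival is a tiny fraction of that, but a predator-free exoplanet with cornucopia support changes the odds):

[code]
# Two founders, 5000 eggs per clutch, and an assumed 50% survival to
# breeding age (guesswork for a predator-free exoplanet with
# cornucopia-machine support; wild survival is far, far lower).
EGGS_PER_CLUTCH = 5000
SURVIVAL = 0.5  # assumed fraction of offspring that live to breed

population = 2.0
for generation in range(1, 5):  # four generations, ~80 years
    clutches = population / 2   # one clutch per breeding pair
    population = clutches * EGGS_PER_CLUTCH * SURVIVAL
    print(f"Generation {generation}: ~{population:,.0f} octopi")
[/code]

Under those (generous) assumptions they pass transhumanity's total population somewhere around the third generation, and even much harsher survival rates only buy a few more generations.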
Arenamontanus
Re: Existential Threats
I wrote a list of possible extinction scenarios a long while back: http://www.eclipsephase.com/extinction-scenarios

I think the main threat is going to be dangerous evolutionary attractors. It might turn out that the economically most efficient strategy is to turn into a cloud of endless forking and skillsoft sharing, where things like consciousness and emotions are rare luxuries indulged in by an ever smaller fraction. Another attractor might be wild expansion and reproduction, using all available resources to make more of the "civilization" faster and faster. These are subtle threats, and much harder to defeat than an alien virus.
Extropian
TBRMInsanity
Re: Existential Threats
In the book The Age of Spiritual Machines, Ray Kurzweil lays out three possible futures for mankind:

1. The singularity occurs and humanity fights against it (leading to our destruction). This is the scenario in EP (kinda).

2. The singularity occurs and humanity is unaware, so it accepts the changes as given and merges with technology. Eventually transhumanity would be WAY different (most likely not biological). This is also happening in EP (kinda).

3. The singularity occurs but humanity accepts it and tries to guide it, so that the newly emerging AIs see humanity as their parents. The new AIs will continue to surpass their parents (as any good child should do), but they will also have a strong obligation to protect and care for their parents (as any good child should do when they grow up). The result is that humanity stays relatively unchanged (the AIs will try to destroy everything that could harm humanity so it can live forever), but our technology will go on to conquer the universe in our name.
Spoiler: Highlight to view
This would be the ETI scenario
Jovian Motto: Your mind is original. Preserve it. Your body is a temple. Maintain it. Immortality is an illusion. Forget it.
icekatze
Re: Existential Threats
hi hi

For existential threats, anything with intelligence is an existential threat. But for a more thorough analysis, I provide this link (which I take no credit for): [url=http://www.transhumanist.com/volume9/risks.html]Risks[/url]
Arenamontanus
Re: Existential Threats
icekatze wrote:
[url=http://www.transhumanist.com/volume9/risks.html]Risks[/url]
I can leak that Nick is working on an updated version of the paper, which has turned into a book (or more properly, one book on existential risks and one on AGI theory). When they are finished is another matter (I better hurry up with my parts).
Extropian
root
Re: Existential Threats
root@Existential Threats [hr]
Arenamontanus wrote:
icekatze wrote:
[url=http://www.transhumanist.com/volume9/risks.html]Risks[/url]
I can leak that Nick is working on an updated version of the paper, which has turned into a book (or more properly, one book on existential risks and one on AGI theory). When they are finished is another matter (I better hurry up with my parts).
I was wondering how long it would take before these discussions met themselves coming the other way.

A question I have about the Fermi Paradox. Suppose that humanity is following a normal curve for technological development, and technological societies are spread normally around the cosmos. Can't the Fermi Paradox be easily explained by communication being limited by the speed of light? Even if a civilization out there knew exactly where we were (despite us only having had the technological capability to project radio waves into space since WW2), how could they possibly get a signal to us? How would they power it? Basically, in order to get duplex communications, our signals need to find them, theirs need to find us, and we have to find a way around millennia-long transmission times.

Plus, what's the point? Unless there is some unforeseen valley of physics to be discovered, we are never going to be able to have any meaningful interaction with another civilization. It might be nice to know they are there for philosophical reasons, but there is no practical use for communications.
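Just to put numbers on how bad the duplex problem is, a trivial sketch (distances are rough figures):

[code]
# One question-and-answer exchange at lightspeed costs 2 x distance.
targets_ly = {
    "Proxima Centauri": 4.2,
    "Arcturus": 37.0,
    "across the Milky Way": 1e5,
    "Andromeda galaxy": 2.5e6,
}

for name, light_years in targets_ly.items():
    print(f"{name}: ~{2 * light_years:,.0f} years per exchange")
[/code]

Even in the best case a single exchange with our nearest neighbour eats most of a decade; across the galaxy, the civilization that asked the question may not exist by the time the answer arrives.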
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Arenamontanus
Re: Existential Threats
root wrote:
A question I have about the Fermi Paradox. Suppose that humanity is following a normal curve for technological development, and technological societies are spread normally around the cosmos. Can't the Fermi Paradox be easily explained by communication being limited by the speed of light?
Transmission issues: There is a set of papers by Benford, Benford and Benford (yes, one of them is the SF author), http://arxiv.org/abs/0810.3964 and http://arxiv.org/abs/0810.3966, suggesting that a properly designed array could signal over pretty impressive distances. And given current technological development, it doesn't look that unlikely that we could get the energy of a K1 civilization.

What's the point?: While most people would likely find it pointless to try to message somebody extremely far away, it is enough that some people think it is a good thing. Consider that we sent a message to Arcturus this summer with an opera invitation! And imagine all the religious people who would like to save the souls of the aliens...

I think the real issue with the Fermi paradox is that it tells us something about our own survival chances. If there were a universe of aliens out there, even if we could not or would not communicate with them, we would know that civilizations like ours have a chance. If it is an empty universe, then we are either a very rare occurrence or should expect existential threats around the corner.
Extropian
icekatze
Re: Existential Threats
hi hi

There is a sort of duality to technological progress that I think derives from the motivation for progress itself. At a fundamental level, invention is motivated by problems. People may have vast amounts of theoretical knowledge, but unless there is some cause that needs an invention, there is unlikely to be any practical application of that knowledge. Man is cold, man creates fire. Man is hungry, man creates bow and arrow. Man is threatened by aggressive superpower, man invents atomic weapon.

There have been plenty of instances where the knowledge to create a device was present long before the device was actually integrated into society, and even entire societies that lived as hunter-gatherers until encroached upon by outside forces. Nuclear weapons are a prime example of a technological invention that did not catch on in common use; human travel to the moon could be another example from contemporary advancement.

So we have a paradox within a paradox, and perhaps the two cancel each other out? Invention is a basic rebellion against the way things are, in favor of how we would like them to be. From that point of view, the exponential rise in invention has followed an equally exponential rise in problems that require solving, to the point that, for the most part, the only problems left to solve are ones we have made ourselves, either intentionally or as an unintended consequence of other changes. Taken to its cynical conclusion, you could infer that any advancing society is bound to destroy itself by an exponentially increasing set of problems, thus explaining the Fermi Paradox.
root
Re: Existential Threats
root@Existential Threats [hr]
icekatze wrote:
hi hi

There is a sort of duality to technological progress that I think derives from the motivation for progress itself. At a fundamental level, invention is motivated by problems. People may have vast amounts of theoretical knowledge, but unless there is some cause that needs an invention, there is unlikely to be any practical application of that knowledge. Man is cold, man creates fire. Man is hungry, man creates bow and arrow. Man is threatened by aggressive superpower, man invents atomic weapon.

There have been plenty of instances where the knowledge to create a device was present long before the device was actually integrated into society, and even entire societies that lived as hunter-gatherers until encroached upon by outside forces. Nuclear weapons are a prime example of a technological invention that did not catch on in common use; human travel to the moon could be another example from contemporary advancement.

So we have a paradox within a paradox, and perhaps the two cancel each other out? Invention is a basic rebellion against the way things are, in favor of how we would like them to be. From that point of view, the exponential rise in invention has followed an equally exponential rise in problems that require solving, to the point that, for the most part, the only problems left to solve are ones we have made ourselves, either intentionally or as an unintended consequence of other changes. Taken to its cynical conclusion, you could infer that any advancing society is bound to destroy itself by an exponentially increasing set of problems, thus explaining the Fermi Paradox.
The "Great Filter" theory. Every technological civilization goes through an event that ends in extinction. The idea that we can invent our way out of all of our problems is somewhat problematic for the same reason that the world economy is tanking: energy. The thermodynamic cost of invention continues to increase (think of the cost of building CERN), and now that the basic cost of energy is increasing, that new cost has to be considered. If we were able to produce very plentiful and cheap energy, we might be able to survive. I just don't really see it happening.
[ @-rep +1 | c-rep +1 | g-rep +1 | r-rep +1 ]
Arenamontanus
Re: Existential Threats
root wrote:
The thermodynamic cost of invention continues to increase (think of the cost of building CERN)
This is wrong. What was the energy cost of inventing Facebook, Twitter, Skype, lasers or the Fast Fourier Transform algorithm? We can certainly construct bigger and more energetic things, but we have also created domains such as software where innovation can be done with a minimum of energy use. Energy is important, but it is not the key constraint on innovation.

Similarly, many important innovations are not responses to urgent needs. I think none of the above examples were need-driven, but they became very successful because they then enabled a lot of new things. We might quibble about the importance of the first three, but the laser and the FFT are two of those ubiquitous, important technologies that actually make our civilization work these days. They opened up a lot of new domains.

(Still, Karl Schroeder had some very interesting takes on the Fermi paradox and need-driven innovation in "Permanence". His explanation was that once intelligent species got advanced enough, they changed their environment to fit them perfectly or changed themselves to fit the environment. Without any need for innovation or struggle for existence, intelligence was selected against and the species became post-intelligent.)

Exactly what drives true innovation is an important problem, and I don't think we actually know. There is historical evidence that human genius and innovation tend to cluster in time and space, but the causes are not well understood. Things like affluence and peace do not seem to be the key.
Quote:
[...] and now that the basic cost of energy is increasing, that new cost has to be considered. If we were able to produce very plentiful and cheap energy, we might be able to survive. I just don't really see it happening.
There is one graph that cheers me more than any of Kurzweil's exponential "it is all getting better" graphs, and that is the plot of US GDP per capita versus energy use per capita. Since WWII it went up along a perfectly straight line: "obviously" economic growth requires an increasing amount of energy. But in 1972 the curve turned on a dime and became horizontal (with some random noise). Since then the US economy has become about five times bigger, but it still uses about the same amount of energy per person.

I think there are two important lessons here. One is that enormous trends can change surprisingly quickly. The second is that economic growth (which encompasses and is at least partially driven by innovation) can adapt to whatever material constraints we have. It is not as driven by land, energy or ores as it once was.
Extropian
Extrasolar Angel
Re: Existential Threats
Quote:
What's the point?: While most people would likely find it pointless to try to message somebody extremely far away, it is enough that some people think it is a good thing. Consider that we sent a message to Arcturus this summer with an opera invitation! And imagine all religious people who would like to save the souls of the aliens...
An interesting thread on why the assumptions and beliefs of SETI don't hold up under scrutiny: http://www.bautforum.com/showthread.php/105999-Time-to-radically-revise-... This particular post is very good:
Quote:
THIS is why the SETI people work on the assumption that the alien signals would be intentional beacons. They would be purposefully made easy to detect and transmitted on 1420 MHz. However, this assumption is self-defeating: a narrow-band beacon at the frequency of interstellar hydrogen is absolutely THE worst bet in terms of information capacity per energy cost. Basically, a civilization would spend a great amount of money to say HELLO, and that's it.

But if WE wanted to spend a lot of money to tell something to other intelligent races, we would want to say something more, wouldn't we? After all, if we had one shot at telling something to an alien civilization, wouldn't it make more sense to transmit the whole Library of Congress? But transmitting a sensible message would require a more advanced modulation scheme with advanced coding and error correction -- except that it would be more difficult to decode, or even detect (see above).

A way around this would be to send the message in parts. Send an AM beacon at 1420 MHz containing information about the frequency where the next part is transmitted. The second part would contain the information about the way the next part is transmitted. So the listener would be instructed at each step where to look for the next part and how to decode it. After several iterations, she could decode the actual message.

This is workable, but the whole exercise would be insanely complicated and error-prone. Which brings us back to the original question: would it actually be worthwhile?
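You can see that trade-off straight from Shannon's capacity formula. A toy calculation (every number here is invented; only the scaling matters): the same transmitter power squeezed into a narrow band makes a loud, easy-to-detect spike with almost no data rate, while spread wide it carries data but sinks toward the noise floor:

[code]
import math

def shannon_capacity(bandwidth_hz, snr):
    """Channel capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

POWER = 1.0          # fixed transmitter power, arbitrary units
NOISE_PER_HZ = 1e-6  # assumed noise power density, arbitrary units

for bw in (1.0, 1e3, 1e6):             # 1 Hz beacon vs kHz vs MHz channels
    snr = POWER / (NOISE_PER_HZ * bw)  # narrower band -> louder spike
    print(f"bandwidth {bw:>9,.0f} Hz: SNR {snr:>9,.0f}, "
          f"capacity ~{shannon_capacity(bw, snr):,.0f} bit/s")
[/code]

The 1 Hz beacon is by far the easiest to detect but carries a couple of dozen bits per second; the megahertz channel carries a million, but at SNR 1 it is nearly invisible to anyone not already looking for it.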
To sum up the thread: it's actually too costly for private investors to create signals of a strength that would be received by other civilizations, especially if (which is quite likely) they are thousands if not hundreds of thousands of light years away and you can count them on the fingers of your hands. In general, neither the state nor the private sector cares enough to spend the epic amounts of money required for such signals.

So the Fermi Paradox doesn't tell us much besides something about SETI and its own beliefs (namely, that there are close and numerous civilizations that spend most of their existence and effort beaming radio signals at Earth, which, if you look closely at it, seems a very far-fetched idea). We aren't constantly sending signals to other civilizations either, and likely won't be, judging by the funds allocated to space exploration versus the funds such activity would need.

If we really want to search for ETI, we should probably stop searching for directed signals and start looking for habitable planets, technological emissions in their atmospheres, and megascale engineering projects like Dyson spheres and so on.
[i]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Arenamontanus
Re: Existential Threats
Extrasolar Angel wrote:
To sum up the thread: it's actually too costly for private investors to create signals of a strength that would be received by other civilizations, especially if (which is quite likely) they are thousands if not hundreds of thousands of light years away and you can count them on the fingers of your hands. In general, neither the state nor the private sector cares enough to spend the epic amounts of money required for such signals.
That depends on some very strong assumptions about limited economic growth and the structure of alien societies. Today private individuals regularly embark on projects that in the past would have required major government funding. The start of the thread had a more sensible insight: a lot of SETI research has been like looking under lampposts for the keys, driven by some pretty anthropic assumptions. Widening them to encompass more posthuman (or postalien) possibilities leads to different strategies: http://www.anthropic-principle.com/preprints/milan-seti.pdf
Extropian
Extrasolar Angel
Re: Existential Threats
Quote:
The start of the thread had a more sensible insight
Thanks (since I started it ;))
Quote:
Today private individuals regularly embark on projects that in the past would have required major government funding
Perhaps, but this doesn't mean they reach the current level of government funding, and investments in space exploration are honestly small compared to other areas. If, for example, we discover numerous habitable planets but no civilizations, it would require a great deal of effort to send recognisable signals to each and every one in the galaxy (and after all, a galaxy is a small place in the universe), an effort I am not sure private investors or groups would be willing to undertake.

I have read Cirkovic before; he has interesting theories. In my view we still know too little to seriously debate the existence of the Fermi Paradox; until we get some widespread basic knowledge (like the distribution of habitable planets in the galaxy, or searches for biospheres on planets around exotic (to us) stars and objects), we can't be certain of many variables. But personally I am of the opinion that yes, SETI is very outdated, and a search for Dyson spheres and the like would be more interesting, and at least as plausible, if not more so, than a search for constant radio signals beamed at our planet.
[i]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Arenamontanus
Re: Existential Threats
By the way, here is an amusing short story about the Fermi Paradox that might be used for an EP metaplot: http://www.tor.com/stories/2010/08/the-fermi-paradox-is-our-business-model
Spoiler: Highlight to view
The ETI has seeded a number of civilizations to produce valuable stuff. Maybe the TITANs were the stuff the ETI wanted and harvested through the gates, with the exsurgent virus being part of the "cleanup" system. It is just that transhumanity, in a freak accident, survived. Maybe the Factors are other accidental survivors, anxious to avoid having the ETI notice that something is infesting its TITAN-making system. So far the Factors have kept a low enough profile, but if humans start poking through the gates or make new TITANs, the harvesters will notice. Unfortunately, telling transhumanity this up front is also likely to trigger harvesting.
Extropian
Extrasolar Angel
Re: Existential Threats
Ah, this also reminds me of a thread I wanted to start a long time ago (and which I have now started): http://www.eclipsephase.com/why-we-shouldnt-trust-factorsspoilers
[i]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]