
Still more Fermi's Paradox

Erenthia
So I've been considering the questions raised by Fermi's Paradox, but looking at it from a different perspective than I usually do. Instead of trying to determine whether there's life out there and, if so, where it is, I've been examining what must be true about life should it be out there. And while a lot of us hard SF fans like to point out how truly alien creatures that evolved on another planet should be, there are a few things we can surmise. For one, the aliens would have evolved to sentience through evolutionary forces roughly analogous to the ones on Earth: natural selection and competition. In particular, they would have a tendency to want to avoid death, at least until they reproduced. It also seems likely they would be familiar with cooperation as a concept as well as competition, since evolution on Earth seems to use both rather extensively, regardless of how natural either is for them. And as they developed technology, it seems almost certain that they would become aware of rapidly accelerating technological growth, and of how much more powerful one species can be than another with as little as 1,000 years of advancement.

So let's assume these... First-ians evolved somewhere in our galaxy and rose to a state of scientific and technological mastery. They would also face the question "Are we alone in the universe?", and even though they're the first, they have no way of knowing that. Further, they realize that another species, even as little as a million years more advanced than they are, would hopelessly outclass them, and since they have an effective fear of death, a logical evolutionary strategy would be to hide. If they choose to hide, then the next species that evolves in the galaxy would effectively be First-ians as well and could reasonably follow the same pattern.

To summarize: all advanced civilizations eventually decide to hide from hypothetical hyper-advanced aliens which they fear could do them harm. In particular, a sufficiently advanced being and/or civilization might even be able to impersonate less advanced cultures in an attempt to lure victim civilizations out into the open. For that matter, it could even create them. Given the potential threat from these - albeit hypothetical - aggressive aliens, I see no logical reason why any alien race would be willing to risk communicating with another, even one that seems less advanced.
The end really is coming. What comes after that is anyone's guess.
Gantolandon
"Erenthia" wrote:
I see no logical reason why any alien race would be willing to risk communicating with another, even one that seems less advanced.
But... [url=http://en.wikipedia.org/wiki/Voyager_Golden_Record]It happened[/url]. True, it will most probably be lost somewhere in the interstellar void, but this doesn't change the fact that we tried. Hiding "just in case" doesn't strike me as a valid evolutionary strategy. No species will voluntarily expend additional energy (or slow its expansion into new habitats) to conceal itself because of a hypothetical threat.
Arenamontanus
Erenthia wrote:
To summarize: all advanced civilizations eventually decide to hide from hypothetical hyper-advanced aliens which they fear could do them harm.
This argument makes an important hidden assumption: all civilizations advanced enough to do anything noticeable also have members that completely follow the overall strategy. This assumption implies that there are no defectors, crazies, dissidents or adventurous subgroups that decide to move away a bit from the mother civ and then expand in the open just to see if it is safe. Try to imagine what it would take to keep humanity that unified (we are beaming adverts and opera invitations to the stars right now!). Then realize that this has to work for billions of years. For *all* alien civilizations, no exceptions, no matter what their initial conditions were.

I am arguing in an upcoming paper that this kind of convergence assumption is fairly unlikely to work on its own as an FP explanation because it needs to be amazingly strong: not just a 99% chance that any civilization hides, but a 99.999999999% chance or better. That seems a tad ambitious for what is essentially a sociological claim.

(That said, I like your reasoning. I would not be the least surprised if there were plenty of hiding civs. In fact, I am working on another little simulation to test the conditions where hiding works.)
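To see how strong, here is a back-of-the-envelope in Python; the billion-civilisation count is only an assumed round number for illustration, not a figure from the paper:
[code]
# Back-of-the-envelope: how reliably must each civilization hide for
# *all* of them to hide? Assume (a placeholder of mine, not a number
# from the paper) a billion independent civilizations.
N = 10**9

# P(all hide) = p**N. For even a coin-flip chance that all N hide,
# each civilization must hide with probability:
p = 0.5 ** (1.0 / N)
print(p)  # ~0.99999999931 - roughly nine nines of sociological reliability
[/code]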
Extropian
Erenthia
Camouflage is an evolutionary strategy that we already have an example of. And virtually no expenditure of energy is wasted if it avoids total annihilation. Further, I would say that Voyager was the effort of a relatively small number of people and can't be seen as an overriding strategy followed by the majority of the human race. The exposure we've risked thus far would be fairly easy to undo in a few hundred years, and a few hundred years of exposure hardly counts in the absence of ubiquitous Bracewell probes. That's not to say there [em]aren't[/em] Bracewell probes in every solar system in the galaxy by now, but since our entire species could be nothing more than an elaborate scheme to lure them out into the open, I see no reason for them to contact us.

Edit:
Arenamontanus wrote:
This argument makes an important hidden assumption: all civilizations advanced enough to do anything noticeable also have members that completely follow the overall strategy
That's fair, and I think the Voyager example demonstrates this. However, this isn't necessarily as hard as it sounds. Moving the Earth away from TV/radio towards fiber optics/Wi-Fi cuts down our exposure immensely. After that you just need a ruling body paranoid enough to imprison or execute people for taking actions that might expose the planet's location. Further, you could make it very difficult for individuals to gain the resources necessary to expose a species to any significant extent. And a few hundred years of exposure may not be significant.

But by far the most important aspect of this theory is that the Boogeyman need not really be there. A small amount of exposure that does not result in total annihilation probably won't convince anyone that the galaxy is free of things that want to eat/destroy/or worse them. If one individual or a small group [em]does[/em] expose them temporarily, they're just as likely, if not more so, to assume they got lucky and double down on their attempts to hide.

Edit 2: I'd also like to mention that the longer this hiding goes on, the more likely [em]new[/em] civilizations are to hide, since science-capable species will have greater and greater reason to wonder why the universe isn't full of life yet, and more and more reason to wonder if there's something out there killing everybody.

PS: If this theory doesn't have an already established name, call it [em]The Boogeyman Hypothesis[/em]. That would just be awesome.
The end really is coming. What comes after that is anyone's guess.
Gantolandon
Quote:
Camouflage is an evolutionary strategy that we already have an example of. And virtually no expenditure of energy is wasted if it avoids total annihilation. Further, I would say that Voyager was the effort of a relatively small number of people and can't be seen as an overriding strategy followed by the majority of the human race.
All evolutionary strategies are a response to a specific threat. Camouflage is used either as a means to hunt (and get food) or to hide from an existing predator. In the second case, the important thing is that the threat must actually be present for the organism to evolve in that direction. The situation you are describing could be compared to a herbivore which spontaneously develops camouflage despite no predators being present in its habitat. Granted, humans possess intelligence, but this most probably doesn't exempt us from the rules governing the evolutionary process. Looking at how our civilization behaves even towards confirmed threats (global warming, for example), I don't believe it would care about a hypothetical threat at all.

On the other hand, there is no reason an advanced civilization capable of interstellar travel would contact or attack us, either. If they need resources, there is plenty in uninhabited systems that no one ever touched. Living space is also plentiful - it's easier to find a barely habitable planet and import your own biosphere there than to adapt or exterminate an existing one. We would have no technology that they couldn't obtain just by observing us quietly, so any exchange most probably won't happen - why prop up a potential competitor?
Erenthia
It's true, humans aren't great at responding rationally to threats, but we can't use ourselves as an example. Advanced civilizations/species are probably going to be a lot better at responding rationally than we are. However, there is one possibility that would give an advanced species reason to attack a lesser one. Admittedly this is pure speculation, but consider the ramifications of accelerating technological innovation in a universe with hard limits. It could very well be that there is an upper limit to how advanced a species can become, and that after the singularity a civilization will approach that hard limit quite rapidly. And since the older species is already tapped out when it comes to advancement, they can't stay ahead, so they have to keep lesser species from reaching their level. If this is true, though, it invalidates the Boogeyman Hypothesis (or perhaps makes the Boogeyman real), since presumably an Apex Civilization would [em]know[/em] they're as advanced as is possible in our universe.

Side note: if 84% of matter in the universe is dark matter, then shouldn't 84% of life in the universe be made of dark matter?
The end really is coming. What comes after that is anyone's guess.
Azathoth
I actually read a book "Alien Invasion: the ultimate survival guide for the ultimate attack" and it takes a very similar view on alien civilization: they probably evolved from predators. The author believes projects like SETI are practically suicidal, and a wise course of action would be to avoid contact (that's why we haven't discovered any signs of intelligent life, any survivors are shy.) Not a great book, imo, but an interesting take on fermi.
Arenamontanus
Erenthia wrote:
It's true, humans aren't great at responding rationally to threats, but we can't use ourselves as an example. Advanced civilizations/species are probably going to be a lot better at responding rationally than we are.
But *all* advanced civilizations, at all times, everywhere?

In our research we have a concept we call a "singleton": a decision-making entity that can determine what goes on within its sphere of influence. It could be the World Government, everybody joined together into a collective hive mind, or a friendly superintelligence that runs things. What it says goes, and entities that disagree cannot do anything to change this. For the Boogeyman scenario to work, all civilisations have to 1) develop some form of singleton to keep members in line, and 2) have these singletons all agree it makes sense to hide. But even this doesn't rescue the scenario: even if singletons are likely (perhaps because non-singleton civs get killed by self-caused xrisks), they might not always happen, and even if you are a singleton you might not want to hide completely. You might, for example, set up a little fake civilization in the open and see what happens as it expands and becomes more obvious.
Quote:
Side note: if 84% of matter in the universe is dark matter, then shouldn't 84% of life in the universe be made of dark matter?
Can it form complex structures? Current models seem to suggest that it doesn't interact much with itself, so it might just be a dilute gas of particles. Not much to run life on. It is a bit like silicon: it is far more common than carbon on Earth, yet all life runs on the somewhat more chemically versatile carbon.
Extropian
Gantolandon
Quote:
It's true, humans aren't great at responding rationally to threats, but we can't use ourselves as an example. Advanced civilizations/species are probably going to be a lot better at responding rationally than we are.
A rational reaction is not to do anything until you confirm that the threat is even a distant possibility; otherwise you just waste a lot of effort for nothing. A reaction like that - hiding preemptively - would be anything but rational, and would at least require some event in the species' past to trigger it. Like an actual alien invasion.
Quote:
However, there is one possibility that would give an advanced species reason to attack a lesser one. Admittedly this is pure speculation, but consider the ramifications of accelerating technological innovation in a universe with hard limits. It could very well be that there is an upper limit to how advanced a species can become, and that after the singularity a civilization will approach that hard limit quite rapidly. And since the older species is already tapped out when it comes to advancement, they can't stay ahead, so they have to keep lesser species from reaching their level.
If you imagine that something like this is possible, you must also assume that such an older species may already exist. For all you know, it might already be out there somewhere. The fact that you are still around could mean anything - maybe your camouflage works, or maybe it was all for naught: they saw through it and are watching your civilization even now. Maybe the only thing preventing your extinction is that they don't see you as a threat right now - will that change if you destroy that less advanced species and sterilize their planet? What if they don't see you right now, and with this attack you reveal yourself? In such a case, the best option is still to do nothing and gather more data. You have no idea whether the things you do actually work, because you don't know anything about your enemy. You might as well be trying to protect yourself from the actual boogeyman.
Quote:
Side note: if 84% of matter in the universe is dark matter, then shouldn't 84% of life in the universe be made of dark matter?
We know almost nothing about dark matter and how it behaves. It's an umbrella term for matter we know must be out there, because otherwise gravity would be weaker than observed, but which we can't observe directly, so we don't know what it is. And not all matter is equally likely to organize itself into even simple lifeforms.
Erenthia
Arenamontanus wrote:
But *all* advanced civilizations, at all times, everywhere?
I actually think this may be a reasonable position. While hypothetical alien civilizations are the ultimate in potentially diverse backgrounds, they would have discovered - through the development of science and technology - that rational behavior makes them more likely to achieve their goals. I'm actually in the middle of watching a video by Steve Omohundro about self-improving AI where he makes the argument that self-improving AIs will want to be rational, since it improves their ability to maximize their fitness function. Also, while I suppose it would be possible to have a civilization that qualified as "advanced" without having similarly "advanced" individuals, I'm highly suspicious of that idea. And a society of ultraintelligent individuals might sidestep the singleton requirement if the choice not to contact other species becomes what they would consider obvious.

Setting up a "fake" civilization to see what happens to it is an interesting possibility, though if there does turn out to be a relationship between intelligence and morality, they might be inclined not to. Before you ask: All of them? I don't know. But the number could be low enough that the civilizations willing to communicate are a tiny percentage of the ones actually out there (and are all much younger, since they wouldn't want it to be obvious that these species didn't evolve naturally).
Gantolandon wrote:
A rational reaction is not to do anything until you confirm that the threat is even a distant possibility; otherwise you just waste a lot of effort for nothing. A reaction like that - hiding preemptively - would be anything but rational, and would at least require some event in the species' past to trigger it. Like an actual alien invasion.
It's hard to say which existential risks are rational to worry about, since we've never had our species wiped out before - the very nature of the problem precludes having any experience with the issue. Unlike in the movies, any "war" between species of different levels of advancement would result in one species going extinct with almost no chance of survival. Further, I'd contend that Fermi's Paradox [em]itself[/em] demonstrates that being wiped out by an advanced civilization most certainly [em]is[/em] at least a distant possibility. The galaxy should be full of life. It doesn't appear to be, and one possible explanation is that there's a single alien agency responsible for destroying them all. Distant, but nonzero.

PS: Thanks for the info about dark matter. I know very little on the topic.
The end really is coming. What comes after that is anyone's guess.
Arenamontanus
Erenthia wrote:
Arenamontanus wrote:
But *all* advanced civilizations, at all times, everywhere?
I actually think this may be a reasonable position. While hypothetical alien civilizations are the ultimate in potentially diverse backgrounds, they would have discovered - through the development of science and technology - that rational behavior makes them more likely to achieve their goals. I'm actually in the middle of watching a video by Steve Omohundro about self-improving AI where he makes the argument that self-improving AIs will want to be rational, since it improves their ability to maximize their fitness function.
By now you should have got to the point where he discusses the problem that they will also want resources. Having more resources means that you can ensure your other goals get implemented with a higher probability (there is more on that in his paper). So if his argument is right and applies to civilisations, they should be very expansive.

I think you underestimate how extreme the convergence argument is. It is like saying that all alien civilizations, no matter what they are or what their histories were, will realize that it makes sense to be totally peaceful (or warlike, or to drive on the left side of the road). Sure, a lot of civilizations might, but even one defector will break the equilibrium. In our paper we calculate that our present galaxy has been reachable by the civilizations of somewhere between 5,000,000 and a few billion *galaxies*. If there is just one civilization in every galaxy, and each has a 99.999% chance of staying quiet, that still leaves hundreds of civilisations that could spread enormously widely, grab a lot of resources and be very visible. There are very few things even totally rational people agree on - mainly statements about math. Things like ethics are not among them.
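The arithmetic behind that is worth spelling out (one civilisation per galaxy assumed, as above; the two-billion figure is just a stand-in for "a few billion"):
[code]
# Expected number of "loud" civilizations among the galaxies that
# could have reached us, assuming one civilization per galaxy and a
# 99.999% chance that each one stays quiet.
p_quiet = 0.99999
for n_galaxies in (5_000_000, 2_000_000_000):
    print(n_galaxies, "galaxies ->", n_galaxies * (1 - p_quiet), "loud civs")
# 5,000,000     -> ~50 loud civilizations
# 2,000,000,000 -> ~20,000 loud civilizations
[/code]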
Quote:
Setting up a "fake" civilization to see what happens to it is an interesting possibility though if there does turn out to be a relationship between intelligence and morality, they might be inclined not to.
The "fake" might actually be entirely real, composed of dissidents or adventurers who are fine with living a bit risky. Or maybe settled by the B-Ark. :-)
Extropian
Extrasolar Angel
One issue is that even if one civilization tries, for example, to break the cycle and expand or contact others, it could be hindered by a civilization that had a head start. If the first civilization was conservative - opposed to interference with other starting-up civilizations or to uncontrolled expansion - it would (in theory) be able to enforce its will on subsequent civilizations.

Additionally, the Fermi Paradox is complicated by the fact that, with our current advances, it is now rather unlikely that a sufficiently developed civilization would fail to spot obvious life-bearing planets and civilizations. Even for us it is a question of, say, the next century or two before we can detect many Earth-like planets and signs of industrial civilizations. Any civilization advanced and ahead of us should in theory already know of our planet's existence and its biosphere. The following decades will be most interesting, as this process of detection begins.

PS: Personally I am of the belief that if any advanced civilization exists, it has already abandoned expansionism in favor of observation and conservation, and I find it unlikely that conversation with civilizations millions of years of development ahead of us would result in anything but destruction - something that a civilization with an interest in unique cultures and development wouldn't want.
[I]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Prophet710
This all seems to be under the assumption that emotion is also something that has evolved in these aliens, which may not necessarily be the case. Other evolved forms of life may not even be privy to consciousness as we know it as humans, being instead comprised of pure intellect. That would mean there would be no crazies or defectors - or if there were, they would be culled from the population as soon as possible to inhibit further dissent and distraction from the common goal. Still others may possess emotion but lack certain emotions due to lack of stimulus. What if the environment that spawned these aliens was relatively peaceful, with little to no aggression? What if said aliens then had no ambition to drive them, being completely content to thrive in a more agrarian state, or never even progressing past tribal hunter-gatherers?

In this case it seems to be a toss-up, limited only by imagination, at least as far as the game is concerned. The rest is honestly left to opinion. I like the hiding facet though; it seems a bit more logical, and it wouldn't be too unbelievable if at least one space-faring civilization followed such an ideal. I wonder what they would look like, what kind of technology they would possess. I'm reminded of the "Shadows" from Nexus: The Jupiter Incident. Keeping things as silent as possible, developing voiceless communication through telepathy or electronic means, developing both natural and artificial camouflage, an entire society dedicated to stealth and a "leave no trace" mentality. It would be interesting to actually "see and observe" whether such a thing could be possible.
"And yet, across the gulf of space, minds immeasurably superior to ours regarded this Earth with envious eyes. And slowly, and surely, they drew their plans against us."
Erenthia
Well, it's less a matter of emotion and more a matter of evolution. Evolution is basically a combination of diversity, competition/cooperation, and death. While there are a LOT of things we can't say about aliens, there are a few that we can. Advanced Aliens could only evolve out of a diverse ecosystem, because that's necessary both to drive and to allow for the complexity of intelligent life. (And there might be plenty of non-Advanced Aliens; Fermi's Paradox doesn't really focus on them.) In a diverse ecosystem, you're going to wind up with predators. That doesn't mean the AA will [em]be[/em] predators in the classic sense (humans, while being the ultimate apex predators, are sometimes vegetarians for moral reasons), but it absolutely means they will be aware of the concept of a predator, as well as wanting to avoid death. Well, most of the time - as Dawkins points out in The Selfish Gene, it's really more a matter of maintaining your genes, which is where we get self-sacrifice from. How the AA [em]feel[/em] doesn't really factor in, only how they behave.

Actually, part of the problem with evolving towards homogeneity or singleton governance is that it kicks out one of the legs of evolution: diversity. Now, I suppose you could say that the ability to use intelligence and auto-evolution makes natural evolution obsolete, but that remains to be seen. Genetic algorithms using random, brute-force techniques can outperform programs written by hand on some problems.

Looking at the Drake Equation again, I find it interesting that it tries to take a snapshot of the galaxy as it is now, rather than trying to calculate how many detectable civilizations the galaxy will ever hold and then distributing them over the course of the galaxy's lifetime. I might take a stab at this when I have some time; in particular, it would be interesting to look at the standard deviation between the formation of civilizations (how old is our next-oldest cousin?) and where we stand in the context of all of galactic history.
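Something like this toy sketch, where every number and distribution is a placeholder I'm making up on the spot:
[code]
# Sketch: distribute a galaxy's total number of civilizations over its
# lifetime instead of taking a present-day snapshot. All numbers are
# made-up placeholders for illustration.
import random
import statistics

GALAXY_AGE_GYR = 13.0   # assumed galactic lifetime so far
TOTAL_CIVS = 200        # assumed total civilizations the galaxy ever hosts

# Assume births only start a few Gyr in (stars need metallicity first).
births = sorted(random.uniform(4.0, GALAXY_AGE_GYR) for _ in range(TOTAL_CIVS))

gaps = [b - a for a, b in zip(births, births[1:])]
print("mean gap between civilizations (Gyr):", statistics.mean(gaps))
print("std dev of gaps (Gyr):", statistics.stdev(gaps))
print("gap to our next-oldest cousin if we are the median civ (Gyr):",
      births[TOTAL_CIVS // 2] - births[TOTAL_CIVS // 2 - 1])
[/code]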
The end really is coming. What comes after that is anyone's guess.
Extrasolar Angel
I would like to add that if anyone is interested in the issues of interstellar travel and the Fermi Paradox, I recommend visiting centauri-dreams.org. It's a site which deals with these issues in quite a detailed way, and I have had many interesting conversations there.
[I]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Arenamontanus
Prophet710 wrote:
This all seems to be under the assumption that emotion is also something that has evolved in these aliens, which may not necessarily be the case. Other evolved forms of life may not even be privy to consciousness as we know it as humans, being instead comprised of pure intellect. That would mean there would be no crazies or defectors - or if there were, they would be culled from the population as soon as possible to inhibit further dissent and distraction from the common goal.
You don't need emotions or consciousness to have defectors or crazies. Run any evolutionary algorithm, or just check what is going on in any ecosystem - there is life and death game theory played out among the bacteria under your sink right now, with cooperative quasi-equilibria and defectors cheating them and mutants doing things that upset the balance. And "pure intellect" doesn't exist - intelligence is about solving problems efficiently, but what the goal is is not rationally deduced (no, not even among Objectivists, which surprised and amused me to no end when I read Rand's moral ontogeny). Even given a fixed goal the strategies of achieving it among other goal-directed agents can become arbitrarily Machiavellian.
Extropian
Erenthia
Arenamontanus wrote:
You don't need emotions or consciousness to have defectors or crazies. Run any evolutionary algorithm, or just check what is going on in any ecosystem - there is life and death game theory played out among the bacteria under your sink right now, with cooperative quasi-equilibria and defectors cheating them and mutants doing things that upset the balance.
The space that defines human minds is such a tiny subsection of all possible minds that we often assume we can't know anything about ETIs. It's simulations like these, however, that make me believe robust AGI will ultimately be what allows us to solve Fermi's Paradox, because AGI research covers the space of all possible minds, including any alien ones that could ever evolve.

Creepy side note: since evolutionary algorithms seem so much better at solving some hard problems than traditional approaches, an extremely advanced computer (say, a Matrioshka Brain) might spawn a simulated reality any time it wants to solve a hard problem. Have a few dozen, hundred, or even million species evolve naturally in the simulation and see if any of them can solve the problem. After the problem has been solved, the simulations are of no more use.
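For anyone who hasn't played with one, the whole loop fits in a screenful (a toy in Python; it just evolves a bit string towards all ones, nothing alien):
[code]
# Toy evolutionary algorithm. Purely illustrative - the "hard problem"
# here is trivial on purpose.
import random

LENGTH, POP, GENERATIONS, MUT_RATE = 20, 50, 100, 0.05

def fitness(genome):
    return sum(genome)  # number of correct (1) bits

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == LENGTH:
        break                      # problem solved, simulation expendable
    parents = pop[:POP // 2]       # selection: the fitter half survives
    children = []
    while len(parents) + len(children) < POP:
        a, b = random.sample(parents, 2)
        cut = random.randrange(LENGTH)                # crossover point
        child = a[:cut] + b[cut:]
        children.append([bit ^ (random.random() < MUT_RATE)  # mutation
                         for bit in child])
    pop = parents + children

pop.sort(key=fitness, reverse=True)
print("best genome:", pop[0])
[/code]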
The end really is coming. What comes after that is anyone's guess.
Unity
The Fermi Paradox is that we're in a simulation and the virchbuilders keep deleting species that stop fitting. The Solar System and a few light-years out from it is one of the few high-resolution areas. We are kept from detecting signs of alien civilization in other parts of the simulation. [Disclaimer: I have no evidence for this and am basically throwing it out there for fun.]
Arenamontanus
Unity wrote:
The Fermi Paradox is that we're in a simulation and the virchbuilders keep deleting species that stop fitting.
Ouch. I have just spent the afternoon running a civilization spread simulation, analysing a fine point about the Fermi paradox... and simulated civs. Let's see, each sim I used had about 2000 civs, and I have been running about a hundred during testing... about 200,000 civs deleted today. Although they didn't have much content. What we are looking for is the relative ratio of real civilizations to simulated ones as a function of civilization starting time, given certain assumptions about planetary formation, interstellar/intergalactic spread and computational demands. If the runs I did today are any good, they seem to suggest that we might indeed be run in the real servers of an old and big civilization.
Extropian
Erenthia
Arenamontanus wrote:
If the runs I did today are any good, they seem to suggest that we might indeed be run in the real servers of an old and big civilization.
Oh for the love of god, DETAILS! That is absolutely fascinating. How do these simulations work, and what sort of data did they leave you with?
The end really is coming. What comes after that is anyone's guess.
Unity
Arenamontanus wrote:
Unity wrote:
The Fermi Paradox is that we're in a simulation and the virchbuilders keep deleting species that stop fitting.
Ouch. I have just spent the afternoon running a civilization spread simulation, analysing a fine point about the Fermi paradox... and simulated civs. Let's see, each sim I used had about 2000 civs, and I have been running about a hundred during testing... about 200,000 civs deleted today. Although they didn't have much content. What we are looking for is the relative ratio of real civilizations to simulated ones as a function of civilization starting time, given certain assumptions about planetary formation, interstellar/intergalactic spread and computational demands. If the runs I did today are any good, they seem to suggest that we might indeed be run in the real servers of an old and big civilization.
Tell me more! Although, to whatever big and old civ we're talking about, we might not have much content either. For all we know, our civilization is a resource-efficient skeletal model compared to the total complexity of the original that is running us.
Arenamontanus
Erenthia wrote:
Arenamontanus wrote:
If the runs I did today are any good, they seem to suggest that we might indeed be run in the real servers of an old and big civilization.
Oh for the love of god, DETAILS! That is absolutely fascinating. How do these simulations work, and what sort of data did they leave you with?
Well, in this case the model starts by distributing civilizations randomly over a *big* space (I am talking tens of gigaparsecs here - for various reasons this is a model of fairly rare civilizations, although it could be scaled down to a teeming galaxy too). It also randomly determines when they emerge, based on a model of when planets form (the Lineweaver model) and how long it takes to evolve intelligence after that (we use a lognormal distribution as a guesstimate). Once civilisations emerge they start expanding outwards, claiming stuff. If a civilization runs into another one, it stops expanding in that direction. The result is that space is divided into a kind of Voronoi partition, although the borders are 3D hyperboloid surfaces rather than straight walls. In some cases civilizations emerge on a planet already inside a civilization, in which case they never go anywhere - they might never come about, or will be in the local zoo. There are also some complications due to the expansion of the universe (remember, this is a large-scale model), so certain spots might never be reached by anybody because the accelerated expansion makes them move away too fast to be claimed.

Why do we simulate this? It has to do with the question of how many observers there are in the universe of different kinds, and whether early civilizations pre-empt later civilisations. In particular, we would like to know how large the first civilization will be. This is interesting to calculate because if the first civilisation takes over nearly all of space, then we should expect *us* to be that civilization (since otherwise we would have been pre-empted by that one). Right now it looks like being the first civilisation gives you a major advantage, but there are still plenty of smaller and younger civilisations around. So this simulation doesn't tell us very strongly that we should think we are among the first.

The second question deals with ancestor and alien simulations. Big advanced civilisations are likely to be able to simulate their past or possible alien civilisations in high resolution. Even if they do it at a fairly low rate, it still means lots of simulated civilisations. This means that the number of simulated civs might far outnumber the real ones, and we should by basic probability assume we have a decent chance of being in an alien or posthuman simulation. The distribution of what simulations get done depends partially on whatever curiosity civilizations have, and partially on strategic needs: depending on what kind of universe they think they live in, they will be interested in figuring out things about possible aliens they might meet. This leads to a kind of game-theory model of what simulations to expect, and then we can link that to what we observe: does the world look like that, or is it the kind of world that is unlikely to be simulated under a particular scenario? The result is a probabilistic estimate of what kind of real world there is - either out there in space, or outside our simulation.

Philosophy is fun, although the code optimizing can sometimes be tricky. (I need to get rid of the darn square root calculation in my inner loop, it eats up 41% of my running time! What would Aristotle have coded?) I might, if I get some time, do a cut-down version for a single galaxy just to make fun space empires for games.
An astronomer who is visiting us right now has a cellular automaton model of spreading galactic civilisations, with galactic life zones and past gamma-ray burst history, that I would like to adapt.
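For the curious, the skeleton of such a model is small. Here is a deliberately crude Python caricature - uniform box instead of real cosmology, made-up parameters throughout, and none of the actual analysis:
[code]
# Caricature of the model described above: random birth times and
# places, fronts expanding at a common speed, first-come pre-emption.
# Every parameter is invented, and cosmological expansion is omitted.
import random

SIZE = 1000.0   # box side (arbitrary units; the real model uses gigaparsecs)
N_CIVS = 2000   # civilisations per run, as in the post
SPEED = 5.0     # expansion speed (distance units per time unit)

civs = [(random.lognormvariate(2.0, 0.5),                    # birth time
         tuple(random.uniform(0.0, SIZE) for _ in range(3))) # birth place
        for _ in range(N_CIVS)]

def arrival_time(civ, point):
    """Time at which civ's expanding front reaches point."""
    t0, origin = civ
    dist = sum((a - b) ** 2 for a, b in zip(origin, point)) ** 0.5
    return t0 + dist / SPEED

# A civilisation is pre-empted if someone else's front reaches its
# birthplace before it is born - it never expands (the "local zoo").
preempted = sum(
    1 for i, (t0, place) in enumerate(civs)
    if any(j != i and arrival_time(civs[j], place) < t0
           for j in range(N_CIVS)))
print("pre-empted civilisations:", preempted, "of", N_CIVS)
[/code]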
Extropian
Extrasolar Angel
Is there any online material/publications based on this that can be read?
[I]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Arenamontanus
Extrasolar Angel wrote:
Is there any online material/publications based on this that can be read?
Not yet. This is current research, and I probably shouldn't even talk about it :-) But I think Nick Bostrom will turn it into a paper within a few weeks.
Extropian
Erenthia
Arenamontanus wrote:
Not yet. This is current research, and I probably shouldn't even talk about it :-)
I won't tell a soul :-)

I'm interested in the fact that you have civilizations stop their expansion when they meet other civilizations. What was the rationale for that? I suppose the data would be less interesting if older civilizations always destroyed younger ones - you wouldn't need computer code to determine where that would lead - but it seems like civs that are highly interested in expansion wouldn't necessarily stop at the least resistance. Of course, if they have plenty of room left to expand in other directions, that might be logical. One interesting idea would be to create a simulation model that randomly disperses resources and tracks a civ's expansion pressure around its border (rough sketch at the end of this post). If the civ can expand easily in other directions it does, but if it's boxed in, it turns imperialistic, starting wars with its neighbors. How hard a civilization fights back depends on its own expansion pressure. I'm sure you could steal some effective equations from the ideal gas laws for this. In particular, you could cluster resources together (as in galaxies) to see how much expansion pressure is needed for a civ to go intergalactic. Of course the real gem would be to see how many civs eventually get destroyed through interstellar war. It's unfortunate, but this model does seem to assume that constant war is pretty much the rule.

Also, the thing that's really difficult about the Simulation Hypothesis is that with code, you can fundamentally change the laws of whatever simulated universe you're running. That means the host universe can be arbitrarily different from our universe. It could be completely devoid of causality, entropy, and other fundamental aspects of our universe that guide our thinking. Unlike with aliens operating in the same universe as us, we don't even have common physical laws like chemistry and evolution to make any kind of logical statements whatsoever. Sure, in our universe it appears likely that advanced civilizations would make a lot of simulated civilizations for various purposes, but if we are simulated, our host universe might not possess things like "civilizations" or "intelligence". It might have physical laws that vary based on other factors. /peanut gallery

On the other hand, I may be [em]seriously[/em] out of my depth. Mostly I'm just very curious about all these things, since I'm not in a field that studies them, and I was never good at curbing my curiosity. :-)
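Roughly formalized - a toy of my own, with every constant made up on the spot:
[code]
# Toy formalization of the "expansion pressure" idea above, borrowing
# the ideal-gas form P = nRT/V. Every quantity is a made-up stand-in:
# n ~ population, V ~ claimed volume, T ~ growth appetite.
def expansion_pressure(population, claimed_volume, growth_appetite, R=1.0):
    return population * R * growth_appetite / claimed_volume

def next_move(civ, free_frontier_fraction, war_threshold=5.0):
    """Expand into free space if any; if boxed in, maybe turn imperialistic."""
    pressure = expansion_pressure(**civ)
    if free_frontier_fraction > 0.0:
        return "expand"
    return "war" if pressure > war_threshold else "contain"

civ = dict(population=1e9, claimed_volume=2e8, growth_appetite=1.2)
print(next_move(civ, free_frontier_fraction=0.0))  # boxed in -> "war"
[/code]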
The end really is coming. What comes after that is anyone's guess.
Unity
All very interesting, and I'd love to read the paper that results. I suppose coding the civs to be able to overtake one another, rather than simply stop, is too much for our current means to achieve without compromising other data, huh? Which brings up the question: how are you even running these sims?
Arenamontanus
Erenthia wrote:
I'm interested in the fact that you have civilizations stop their expansion when they meet other civilizations. What was the rationale for that?
We are likely going to run sims where this does not happen, with civs either merging or advanced civs taking over less advanced ones. But right now we want to analyse the simplest case. Also note that it is not obvious that there is any difference in capability between any of these civilisations: if what you can invent given the laws of physics is bounded, then any sufficiently advanced civilisation will have the same attack and defence capabilities. And if they do not have FTL or other really magical technologies, then the amount of mass-energy they can bring to bear on any spot of the boundary will be limited locally - which means that the other side can muster identical resources, since boundaries are typically near-planar. So the big and the small supercivilisation will not have much to gain from quarrelling. The issue of whether shields or spears dominate at the edge of technological capability is a profound and hard question we often discuss.
Quote:
Also, the thing that's really difficult about the Simulation Hypothesis is that with code, you can fundamentally change the laws of whatever simulated universe you're running. That means the host universe can be arbitrarily different from our universe. It could be completely devoid of causality, entropy, and other fundamental aspects of our universe that guide our thinking.
While this is possible (at least for some aspects - entropy is hard to avoid if you want to have an arrow of time), simulations will likely tend to have some relation to the simulator's world, since otherwise they would not be very interesting. In particular, it is likely that they will want to simulate things similar to important or exciting parts of their history, or aliens that could be a threat or problem to them. They are likely to make many other sims too, but these are the ones that can be analysed without making assumptions about what they are. (simulations occurring for no deliberate reason are equivalent to natural universes in a level 4 multiverse)
Extropian
Erenthia
Clearly when you said "current research" it simply didn't sink in through my thick skull. Obviously you need to start with the simplest case, not just to get your theoretical bases intact, but also to get your code working.
Arenamontanus wrote:
(simulations occurring for no deliberate reason are equivalent to natural universes in a level 4 multiverse)
I may require a few days browsing Wikipedia before I have any idea what that means. Perhaps there's a reason you speak at TEDx events and I'm limited to posting pretentious bullshit on roleplaying forums ;-)

And yes, it actually makes sense that the simulations would likely be closely related to things the civ found useful - at the corporate level, at least. If our universe is a simulation, isn't it just as likely that we're an advanced version of Spore or SimCity? If that's the case then all bets are off. Suddenly I'm imagining some cosmic goth teenager saying, "I want my universe to be dark and edgy, so I'll add evolution so that the strong survive and the weak perish, but then I'll add entropy so everything dies anyway no matter how strong they are..."

Most interesting to me, though, is the idea that the singularity might rocket a civilization to the "as advanced as possible" stage, putting all post-singularity life in the universe on par with each other. THAT of course means that any civilization capable of writing and farming is a potential threat, which makes it worthwhile to put Bracewell probes all over the galaxy and provides motivation to perform "abortions" on civilizations that could turn out to be threats (or to keep ANY civilization besides yourself from experiencing the singularity). In all seriousness, this is the reason I get nervous every time I hear about astronomers finding a previously undiscovered "Jupiter-like mass" in our solar system. I'm convinced there's a Bracewell probe in our solar system.
The end really is coming. What comes after that is anyone's guess.
Smokeskin
Arenamontanus wrote:
Philosophy is fun, although the code optimizing can sometimes be tricky. (I need to get rid of the darn square root calculation in my inner loop, it eats up 41% of my running time! What would Aristotle have coded?)
What are you using the number for afterwards? At high risk of suggesting something you probably already know: sqrt(a) < b is much more expensive than a < b^2 (with the needed sign checks, of course). Even if you track it through a series of calculations it can be easier. Or cache the results, if the same numbers come up several times. How much precision do you need? If you can use it, this gives a 5x speed-up or so, iirc: http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Approxima... Newton's method might not be bad either. But switching precision in the middle of a series of simulation runs would invalidate the old results, perhaps?

Then again, you're just doing simulations. Even if you get it running much faster, with only 41% spent there, how much sooner will you get done? A few days?
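To illustrate the squared-distance trick (generic Python, obviously not your actual inner loop):
[code]
# Smokeskin's suggestion: compare squared distances instead of calling
# sqrt in the inner loop. Same answer, no square root.
import math

def reached_slow(p, q, radius):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q))) < radius

def reached_fast(p, q, radius):
    # radius is non-negative here, so no extra sign check is needed
    return sum((a - b) ** 2 for a, b in zip(p, q)) < radius * radius

assert reached_slow((0, 0, 0), (1, 2, 2), 3.1) == \
       reached_fast((0, 0, 0), (1, 2, 2), 3.1)
[/code]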
Arenamontanus
Smokeskin wrote:
Arenamontanus wrote:
Philosophy is fun, although the code optimizing can sometimes be tricky. (I need to get rid of the darn square root calculation in my inner loop, it eats up 41% of my running time! What would Aristotle have coded?)
What are you using the number for afterwards? At high risk of suggesting something you probably already know: sqrt(a) < b is much more expensive than a < b^2 (with the needed sign checks, of course). Even if you track it through a series of calculations it can be easier.
Yup, that would have been the first optimization I would have done... except that distance behaves a bit oddly in this simulation, since the universe is expanding. Two places will have different distances at different times, and the exact function is slightly complex. I suspect the smart solution is either an approximation or a table lookup. But that requires *thinking*... and from past experience I am very suspicious of too much cleverness in my software.
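The table-lookup idea, sketched with an invented placeholder for the actual distance scale function (which isn't posted here):
[code]
# Sketch of the lookup idea: precompute the awkward time-dependent
# scale factor once, then interpolate in the inner loop.
# scale_factor() is an invented placeholder, not the real cosmology.
import bisect

def scale_factor(t):
    return 1.0 + 0.07 * t          # placeholder for the real model

TS = [i * 0.01 for i in range(1001)]     # time grid, 0.0 .. 10.0
TABLE = [scale_factor(t) for t in TS]    # precomputed once

def scale_lookup(t):
    """Linear interpolation into the precomputed table."""
    i = min(bisect.bisect_left(TS, t), len(TS) - 1)
    if i == 0:
        return TABLE[0]
    frac = (t - TS[i - 1]) / (TS[i] - TS[i - 1])
    return TABLE[i - 1] + frac * (TABLE[i] - TABLE[i - 1])

print(scale_lookup(3.141))  # cheap inner-loop call, no heavy math
[/code]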
Quote:
Then again, you're just doing simulations. Even if you get it running much faster, with only 41% spent there, how much sooner will you get done? A few days?
Exactly. Since I am away from my office over the weekend anyway, my computer is happily (I hope) chugging through the simulations. We'll see if it gets through enough over the weekend to be useful. The square root might just mean I need to laze around on Monday... :-) "Sorry, professor, but I can't do it, my computer is busy."
Extropian
Smokeskin
Waiting isn't working. Hustle while you wait ;)
Arenamontanus
Smokeskin wrote:
Waiting isn't working. Hustle while you wait ;)
You don't need to worry. I am giving a talk at a symposium during the weekend. Of course, the actual talk is about 30 minutes, and the rest of the time I am going to be educated/entertained by the other talks (or, if against expectation it is boring, entertain myself with working on my laptop). I think that in terms of actual time spent on work-related stuff I work more than most people I know. But it all looks and feels like play (that is why it is important to wave equations around - it convinces most people, falsely, that you are doing serious stuff).

One interesting question is whether I could tweak my simulations to examine the limits of the Boogeyman Hypothesis. I do have a probability distribution for early civilisations, and if I plug that into a visibility model... Hmm, now I almost hope there is some uninteresting talk tomorrow.
Extropian
Extrasolar Angel
Arenamontanus wrote:
Why do we simulate this? It has to do with the question of how many observers there are in the universe of different kinds, and whether early civilizations pre-empt later civilisations. In particular, we would like to know how large the first civilization will be. This is interesting to calculate because if the first civilisation takes over nearly all of space, then we should expect *us* to be that civilization (since otherwise we would have been pre-empted by that one).
Could you clarify what "pre-empted" means in this assumption? Not to be annoyingly repetitive, but if the first civilization turns out conservative and prohibits interference with young civilizations and endless expansion, wouldn't that also be a possible solution to the Fermi Paradox?
[I]Raise your hands to the sky and break the chains. With transhumanism we can smash the matriarchy together.[/i]
Arenamontanus
Extrasolar Angel wrote:
Arenamontanus wrote:
Why do we simulate this? It has to do with the question of how many observers there are in the universe of different kinds, and whether early civilizations pre-empt later civilisations. In particular, we would like to know how large the first civilization will be. This is interesting to calculate because if the first civilisation takes over nearly all of space, then we should expect *us* to be that civilization (since otherwise we would have been pre-empted by that one).
Could you clarify what "pre-empted" means in this assumption? Not to be annoyingly repetitive, but if the first civilization turns out conservative and prohibits interference with young civilizations and endless expansion, wouldn't that also be a possible solution to the Fermi Paradox?
Yup. In my current simulation the size of the pre-empted civs is set to zero: either they are wiped out, they do not come into existence since all stars are superciv malls, or they are confined to such a small space that the total number of sims they run is insignificant compared to the number in the other civilizations.
Extropian
Erenthia
Arenamontanus wrote:
One interesting question is whether I could tweak my simulations to examine the limits of the [strong]Boogeyman Hypothesis[/strong].
(emphasis mine) If you're using the name, does this mean I get to tell my friends I've contributed to cutting-edge scientific research? Please say yes! (And yes, I know David Brin came up with the idea before I did.)
The end really is coming. What comes after that is anyone's guess.
Arenamontanus
Erenthia wrote:
Arenamontanus wrote:
One interesting question is whether I could tweak my simulations to examine the limits of the [strong]Boogeyman Hypothesis[/strong].
(emphasis mine) If you're using the name, does this mean I get to tell my friends I've contributed to cutting-edge scientific research? Please say yes! (And yes, I know David Brin came up with the idea before I did.)
I promise to refer to you if we use the name. It all depends on whether Brin had a good term for it in his 1993 paper on the Fermi paradox. What I like about Eclipse Phase is that it actually allows me to use my research for gaming, and that gaming discussions sometimes turn into research.
Extropian