
Performance of the mesh

Arenamontanus
Performance of the mesh
Charles Stross's keynote at USENIX 2011 http://www.antipope.org/charlie/blog-static/2011/08/usenix-2011-keynote-... has, as usual, a lot of good thinking. In particular, he made a few points that might be of relevance to EP. However, the mesh as described in EP seems to be a bit far from what he considers feasible:
Quote:
This leaves aside a third model, that of peer to peer mesh networks with no actual cellcos as such – just lots of folks with cheap routers. I’m going to provisionally assume that this one is hopelessly utopian, a GNU vision of telecommunications that can’t actually work on a large scale because the routing topology of such a network is going to be nightmarish unless there are some fat fibre optic cables somewhere in the picture. It’s kind of a shame – I’d love to see a future where no corporate behemoths have a choke hold on the internet – but humans aren’t evenly distributed geographically.
The mesh is likely the *local* systems, but for moving data any further (and especially at good speed) you need big pipes, and they require coordination and investment. Hence Nimbus, and presumably some important Titanian Microcorps that run much of the communications in the outer system.

He mentioned that the maximum wireless bandwidth may be around 2 terabytes per second (split between all users in a cell). If cells are small (like rooms, with the "landline" connection in the walls), overload is rarely a problem, except when egocasting. My own standard off-the-cuff estimate of an ego is somewhere around 15 terabytes or upwards, suggesting that it could be transmitted in just a few seconds. In practice it likely takes longer (error correction, other users, larger files), perhaps a minute. Transhuman senses can't take in more than a few gigabits per second or so, so running wireless XP/lifeblogging is entirely feasible.

The real headache will be communications lags, both over long distances (see http://www.aleph.se/EclipsePhase/comms.pdf for some more notes on this, based on this thread) and due to packets jumping across lots of little servers.
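A quick back-of-the-envelope check (Python; the contention and overhead factors are assumptions for illustration, not canon figures):

ego_bytes = 15e12            # ~15 TB ego, per the estimate above
cell_capacity = 2e12         # ~2 TB/s shared across the whole cell
users_sharing = 10           # assumed number of users splitting the cell
overhead = 1.5               # assumed error-correction/protocol overhead
seconds = ego_bytes * overhead / (cell_capacity / users_sharing)
print(seconds)               # 112.5 -> "perhaps a minute" or two under load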
Quote:
With lifelogging and other forms of ubiquitous computing mediated by wireless broadband, securing our personal data will become as important to individuals as securing our physical bodies. Unfortunately we can no more expect the general public to become security professionals than we can expect them to become judo black-belts or expert marksmen. Security is going to be a perpetual, on-going problem.
And this is why InfoSec is a survival skill. People without it, who rely on their muses, are delicious sheep for the wolves out there...
Quote:
We can expect the pace of innovation to slow drastically, once we can no longer count on routinely more powerful computing hardware or faster network connections coming along every eighteen months or so. But some forms of personal data – medical records, for example, or land title deeds – need to remain accessible over periods of decades to centuries. Lifelogs will be similar; if you want at age ninety to recall events from age nine, then a stable platform for storing your memory is essential, and it needs to be one that isn’t trivially crackable in less than eighty-one years and counting. Robustness and durability are going to be at a premium in the future.
This is an interesting point. If robust and secure long-term storage that is hard to crack has been developed, this means there is a *lot* of pre-Fall data that is tough to get at. Some systems will also have generations of data. But the Fall might have disrupted a lot of these - people used to storage solutions that withstood wars and upheavals suddenly lost everything.
Quote:
we’re moving towards an age where we may have enough bandwidth to capture pretty much the totality of a human lifespan, everything except for what’s going on inside our skulls.
Just to note, 10^11 neurons firing at ~100 Hz means 10^13 action potentials per second. Each can be indexed by time (say 2 bytes for its offset within a one-second frame, with frames stored in sequence) and by neuron ID (36 bits). So one second of total experience is 65 terabytes (uncompressed). A year is a couple of zettabytes; a lot, but still storable in a few hundredths of a gram of diamond storage.
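Checking that arithmetic (Python):

spikes_per_second = 1e11 * 100       # 10^11 neurons at ~100 Hz
bits_per_spike = 16 + 36             # 2-byte time offset + 36-bit neuron ID
bytes_per_second = spikes_per_second * bits_per_spike / 8
print(bytes_per_second)              # 6.5e13 -> 65 TB per second, as stated
print(bytes_per_second * 3.156e7)    # ~2.05e21 bytes -> a couple of zettabytes/year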
Extropian
Xagroth
Re: Performance of the mesh
There are more routes to increasing data transfer capabilities: using more frequencies, better compression, distributing the data up/download across the whole day (your mesh still works when you are sleeping, even if you sleep only 4 hours), new tech for better circuit compression, and something much ignored today: software optimization. I think that right now the most widespread and cheap data storage system that is also incredibly robust would be the DVD-ROM, by the way, its only handicap being that it cannot be reused after being filled entirely.
Covariant
Re: Performance of the mesh
I think you are missing the point that the max data transfer given is a theoretical maximum using all wireless frequencies. Likewise, data compression has fundamental limits, wireless nodes cause interference with each other, and we are already discussing a per second rate, so scaling it to a day's length doesn't actually change anything. Circuit compression is limited by atomic radii, electrical interference from nearby traces, parasitic capacitance, and the geometry requirements of the component parts. Software optimization is definitely not ignored, and the most widespread and cheap data storage system available today is still tape.
The Doctor
Re: Performance of the mesh
Arenamontanus wrote:
The mesh is likely the *local* systems, but for moving data any further (and especially at good speed) you need big pipes, and they require coordination and investment. Hence Nimbus, and presumably some important Titanian Microcorps that run much of the communications in the outer system.
It is indeed possible to link meshes over long distances, but it takes a little work. At least one pair of transceivers is required (one per mesh); they must be aligned so that the antennae have line of sight to swap signals. The nodes which act as gateways for the meshes must announce routes on their mesh such that they are considered the default gateway for non-local traffic (to use TCP/IP parlance). It is possible for local mesh administrators to work out such long-haul links - in fact they are fairly easy to set up if you have a little RF knowledge, but they are fiddly enough that they need to be debugged before they are usable. If one factors in multiple moving targets that need to stay in contact (say, a number of space habitats in orbit), the problem becomes one of automatically keeping the long-haul links on target. One thinks that this would be a boring job for an AGI.
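A toy sketch of that gateway-announcement idea (Python; the node and route names are invented, and real mesh protocols such as Babel or B.A.T.M.A.N. are much richer than this distance-vector cartoon):

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    # destination -> (next_hop, metric); "::/0" stands for "all non-local traffic"
    routes: dict = field(default_factory=dict)

    def receive_announcement(self, neighbor, dest, metric):
        # Adopt the route if it is new or cheaper than what we already have.
        best = self.routes.get(dest)
        if best is None or metric + 1 < best[1]:
            self.routes[dest] = (neighbor, metric + 1)

gateway = Node("long-haul-gw")
gateway.routes["::/0"] = ("laser-uplink", 0)   # the gateway itself carries off-mesh traffic

a = Node("habitat-node-a")
a.receive_announcement("long-haul-gw", "::/0", 0)
print(a.routes)   # {'::/0': ('long-haul-gw', 1)} -> gw is now the default gateway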
Quote:
And this is why InfoSec is a survival skill. People without it, who rely on their muses, are delicious sheep for the wolves out there...
The fiction practically writes itself...
The Doctor
Re: Performance of the mesh
Xagroth wrote:
There are more routes of increasing data transfer capabilities. Using more frequencies, better compression, distributing the data up/download across all day (your mesh still works wen you are sleeping, even if you sleep only 4 hours), new tech to make better circuit compression, and something much ignored today: software optimization.
I think that personal datacomm would likely move in the direction of frequency-hopping spread-spectrum transmission, likely with a user's equipment transmitting on multiple frequencies simultaneously to move more data. Most consumer-grade transceivers now can transmit and receive on one frequency at a time (for example, wireless channels), though the frequency can be changed more or less at will. When the amount of data that one can send and receive at one time becomes a lifestyle-limiting factor, manufacturers would likely start shotgunning multiple transceivers into each unit until the dominant datacomm technology could intelligently utilize four, eight, or even sixteen distinct frequencies simultaneously.

Data prioritization might also become the norm rather than the exception (like Quality of Service, only with deep inspection thrown in, so that communication equipment could distinguish between work e-mail, personal e-mail, and streaming media downloads, and give work-related traffic priority over personal and entertainment traffic) to give the best performance.

At some point it will no longer be feasible to miniaturize circuitry, because heat dissipation would become problematic in the extreme. Software optimization, however, would be interesting. Perhaps software would return to a pre-compiled state rather than be interpreted or compiled on a just-in-time basis. Computer technologies tend to run in historical cycles, so the push toward non-compiled or compiled-late languages would reverse again....
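A minimal sketch of the hop-sequence half of that idea (Python; the channel count, slot scheme, and pairing key are invented for illustration):

import hashlib

CHANNELS = 64   # distinct frequencies the radio can use

def channel_for_slot(shared_key: bytes, slot: int) -> int:
    # Both endpoints derive the same pseudo-random channel from a shared key,
    # so they hop together without coordinating over the air.
    digest = hashlib.sha256(shared_key + slot.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % CHANNELS

key = b"example-pairing-key"
print([channel_for_slot(key, t) for t in range(8)])   # one transceiver's hops
N = 4                                                 # "shotgunned" transceivers per unit
print([[channel_for_slot(key, t * N + i) for i in range(N)] for t in range(3)])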
MirrorField
Re: Performance of the mesh
Arenamontanus wrote:
Charles Stross keynote at USENIX 2011
Quote:
This leaves aside a third model, that of peer to peer mesh networks with no actual cellcos as such – just lots of folks with cheap routers. I’m going to provisionally assume that this one is hopelessly utopian, a GNU vision of telecommunications that can’t actually work on a large scale because the routing topology of such a network is going to be nightmarish unless there are some fat fibre optic cables somewhere in the picture. It’s kind of a shame – I’d love to see a future where no corporate behemoths have a choke hold on the internet – but humans aren’t evenly distributed geographically.
The mesh is likely the *local* systems, but for moving data any further (and especially at good speed) you need big pipes, and they require coordination and investment. Hence Nimbus, and presumably some important Titanian Microcorps that run much of the communications in the outer system.
Agreed. I'd even elaborate further: there is a network of repeaters, routers, switches, etc. built into the "walls" of any reasonable habitat. Most of these are AI-coordinated open source stuff, comparable to electricity or plumbing today. Well-organized habitats may have planned wideband coverage everywhere, while scum barge coverage may vary unpredictably in quality and bandwidth. For extra flavor, IPv6 can be considered to still be part of the protocol backbone of the mesh :D
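To make the IPv6 flavor concrete, a quick look with Python's ipaddress module (the prefix below is the reserved documentation range, not a claim about canon addressing):

import ipaddress

habitat = ipaddress.ip_network("2001:db8::/48")   # one habitat's allocation
cells = list(habitat.subnets(new_prefix=64))      # one /64 per room-sized cell
print(len(cells))                                 # 65536 cells
print(cells[0])                                   # 2001:db8::/64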
Arenamontanus wrote:
He mentioned that the maximum wireless bandwidth may be around 2 terabytes per second (split between all users in a cell). If cells are small (like rooms, with the "landline" connection in the walls), overload is rarely a problem, except when egocasting. My own standard off-the-cuff estimate of an ego is somewhere around 15 terabytes or upwards, suggesting that it could be transmitted in just a few seconds. In practice it likely takes longer (error correction, other users, larger files), perhaps a minute. Transhuman senses can't take in more than a few gigabits per second or so, so running wireless XP/lifeblogging is entirely feasible.
I doubt anyone would trust egocasting straight via the mesh. Too many opportunities for mischief, and IMHO even in Eclipse Phase the old-fashioned air gap remains a potent security measure. The needed separate facilities with security measures are probably a big part of the reason egocasting is so expensive.
Arenamontanus wrote:
The real headache will be communications lags, both over long distances (see http://www.aleph.se/EclipsePhase/comms.pdf for some more notes on this, based on this thread) and due to packets jumping across lots of little servers.
IMHO every muse worth its salt in EP runs ephemeris software which can predict the lightspeed lag between habitats to within a few seconds. ("One-way lightspeed lag from Titan to Luna is about 92 minutes today. Guesstimating 30 minutes to an hour for appropriate searches and composing a reply, we can adjourn until then. See you all in 4 hours.")
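The lag arithmetic itself is trivial (Python; the separation figure is an illustrative snapshot rather than real ephemeris output):

AU_LIGHT_SECONDS = 499.0   # one astronomical unit is ~499 light-seconds

def one_way_lag_minutes(distance_au: float) -> float:
    return distance_au * AU_LIGHT_SECONDS / 60.0

# Titan-Luna separation swings roughly between ~8.5 and ~11 AU; near maximum:
print(one_way_lag_minutes(11.0))   # ~91.5 minutes, close to the 92 in the quote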
Arenamontanus wrote:
Quote:
we’re moving towards an age where we may have enough bandwidth to capture pretty much the totality of a human lifespan, everything except for what’s going on inside our skulls.
Just to note, 10^11 neurons firing at ~100 Hz means 10^13 action potentials per second. Each can be indexed by time (say 2 bytes for its offset within a one-second frame, with frames stored in sequence) and by neuron ID (36 bits). So one second of total experience is 65 terabytes (uncompressed). A year is a couple of zettabytes; a lot, but still storable in a few hundredths of a gram of diamond storage.
Which brings me to one of my general SF ideas: the Memory Database. Basically an AI that is constantly recording your XP stream into a random-access database and indexing it six ways to Sunday. You can easily e.g. calculate how many peas you have eaten in your entire lifetime (or at least while the logging is on), you can surgically censor inconvenient memories without worrying about residual memories in the neural net of your mind, make plug-in skillsofts an order of magnitude simpler and easier (or maybe possible at all), data-mine your own memories, and so on. Of course, someone who pwns your infosec can then also pwn your ass that much harder and easier...
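A toy version of the pea-counting query (Python plus SQLite; the schema and event tags are invented for illustration):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE xp_log (ts REAL, tag TEXT, detail TEXT)")
db.executemany("INSERT INTO xp_log VALUES (?, ?, ?)", [
    (1.0, "meal", "peas"),
    (2.0, "meal", "soup"),
    (3.0, "meal", "peas"),
])
# "How many peas have I eaten?" becomes a one-line query over the lifelog:
(count,) = db.execute(
    "SELECT COUNT(*) FROM xp_log WHERE tag = 'meal' AND detail = 'peas'"
).fetchone()
print(count)   # 2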
Covariant
Re: Performance of the mesh
The move to frequency-hopping spread-spectrum transmission would probably end up looking like cognitive radio, which is a bit like having every link in the mesh decide which channel to communicate on in any given time interval by playing game theory against every other link on the mesh. It's a neat idea.
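A crude sketch of that game (Python; each link learns per-channel payoffs epsilon-greedy style and a collision pays nothing - purely illustrative, real cognitive radio is far more involved):

import random

class Link:
    def __init__(self, channels=4, eps=0.1):
        self.value = [0.0] * channels   # running payoff estimate per channel
        self.count = [0] * channels
        self.eps = eps

    def pick(self):
        if random.random() < self.eps:                # explore occasionally
            return random.randrange(len(self.value))
        return max(range(len(self.value)), key=lambda c: self.value[c])

    def update(self, ch, reward):
        self.count[ch] += 1
        self.value[ch] += (reward - self.value[ch]) / self.count[ch]

links = [Link() for _ in range(3)]
for _ in range(2000):
    picks = [link.pick() for link in links]
    for link, ch in zip(links, picks):
        link.update(ch, 1.0 if picks.count(ch) == 1 else 0.0)  # collision pays 0
print(picks)   # the links tend to settle onto distinct channels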