
Programming in VR (and other mental actions)

lev_lafayette
I refer to the rule: "Time itself is an adjustable constant in VR, though deviation from true time has its limits. So far, transhuman designers have achieved time dilation up to 60 times faster or slower than real time (roughly one minute equaling either one hour or one second). Time slowdown is far more commonly used, granting more time for simulspace recreational activities (more time, more fun!), learning, or work (economically effective). Time acceleration, on the other hand, is extremely useful for making long distance travel through space more tolerable." (p. 241)

A player in my game today argued that they should be able to put their ego (and forks) in a simulspace VR environment and engage in a programming task 60 times faster than in real space, per ego and fork. That seemed too much of a rules exploit to me; it seemed more like VR would provide an experience of +/-60x rather than the actual performance of mental tasks at that rate. But I'm interested in what others have to say about this.
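(For reference, the dilation arithmetic in that quote is just a multiplier; a quick illustrative sketch, nothing official:)

```python
def subjective_time(real_seconds, dilation):
    """Subjective seconds experienced at `dilation` x real time.
    dilation > 1 is slowdown (more subjective time per real second);
    dilation < 1 is acceleration (less subjective time)."""
    return real_seconds * dilation

# One real minute at the 60x cap yields an hour of subjective time:
assert subjective_time(60, 60) == 3600
# A real hour at 1/60 (time acceleration) feels like about a minute:
assert abs(subjective_time(3600, 1 / 60) - 60) < 1e-9
```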
lev_lafayette
Actually, this has been answered quite adequately on Stack Exchange, I think: http://rpg.stackexchange.com/questions/34741/why-is-time-dilated-simulsp...
uwtartarus
Slightly off topic, but how different are cyberbrains from ghostrider modules? Ghost in the Shell: Arise part 4 had a simulspace with the team coordinating their plan of attack during a high-stakes mission.
Exhuman, and Humanitarian.
lev_lafayette
To quote from the rules: "Cybernetic brains are where the ego (or controlling AI) resides in synthmorphs and pods. Modeled on biological brains, cyberbrains have a holistic architecture and serve as the command node and central processing point for sensory input and decision-making. Only one ego or AI may “inhabit” a cyberbrain at a time; to accommodate extras, mesh inserts (p. 300) or a ghostrider module (p. 307) must be used."

So I'd interpret that as a ghostrider module being a habitation for an additional ego alongside a cyberbrain. Which also provides a bit of an answer to the time dilation question: treating it roughly, it would take about sixty of these to provide the equivalent of a 60x speedup (a slowdown, of course, is a lot easier; just put `sleep` statements in your code). There is also a network complexity issue as the number increases, etc. Basically, by adding the same processing power as another cyberbrain/module you could achieve a 2x speedup.
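(And yes, the slowdown half really is that easy; a throwaway sketch of the sleep-statement idea, illustrative only, with made-up tick lengths:)

```python
import time

def run_slowed(ticks, slowdown=2.0, tick_seconds=0.01):
    """Advance `ticks` simulation steps, padding each with real sleep so
    the sim runs `slowdown` times slower than real time. The tick length
    here is an arbitrary illustrative choice."""
    for _ in range(ticks):
        start = time.monotonic()
        # ... advance one simulation tick here ...
        elapsed = time.monotonic() - start
        # Stretch the tick to `slowdown` times its nominal duration.
        time.sleep(max(0.0, tick_seconds * slowdown - elapsed))
```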
Chase
Your player is essentially correct: they can work in a 60x (or so) time-dilated simulspace environment to program during downtime, or have their forks do it on the side. The rules for applying +/-60 to a test come from "Combined Effects" (TH p. 97), which specifically only addresses timeframe reductions: "Timeframe-reducing effects such as Speed, rushing the job, and gaining an MoS on the Task Action are treated cumulatively; add the reductions together before applying to the timeframe." Simulspace time dilation is not actually a timeframe-reducing effect: the time inside the simulspace is the same amount of time it would take to complete the task action, it just differs from outside time.

Since programming task actions take a long time (1 week per Cost level of an add-on/software, 2 weeks for upgrading malware, 1 month per Cost level of an upgrade, and 6 months per Cost level of an eidolon, all as base timeframes that can be adjusted), it actually makes sense to let your player use forks and/or simulspace time dilation rather than putting them on the sidelines for a long time. If you're running a low-resource game and you want to limit their ability to make stuff, you can make it impossible for them to find servers that can handle large amounts of time dilation, or something similar.

That said, there are tons of ways to mess with your player:

1. Lifestyle costs for every fork they have programming (infomorph lifestyle lists are on TH p. 104 for reference).
2. Fork legality, depending on where they are (great for the inner system).
3. Reintegration, if they're merging the forks with their primary ego. They'd take at least a -20 to -30 penalty on the merging test due to the fork having been gone for days or weeks in its own perception, and maybe an additional penalty for trying to mesh two completely different time experiences together if you feel it's appropriate (EP p. 275).
4. If they're not merging and only their forks are doing programming actions, you could limit the player's ability to raise their programming skill.
5. Forknapping, or a fork attempting to assert its own independence (if outer system), can make for an interesting side story and wreak all kinds of havoc on your player.
6. For hacking software (including firewalls), remember that you can have it degrade over time, and especially digital-security-conscious enemies will also be upgrading their software suites. I wouldn't suggest a 1:1 ratio every time you play: give them a +10 or +20 bonus over the enemy one session, then close the gap with better security on their side or degradation of exploit/sniff/spoof software as the exploit holes they used are closed or better monitoring software is deployed. Maybe total a net penalty to the player for a session if they're up against a particularly well-funded group, then let them catch up again and begin the cycle anew.

The player can get around some of these with software add-ons like copylock and autodelete, but you can have some fun on the side before they realize it's smart to do that. If they're pruning their forks, don't forget to make them drop the fork's skills, and make them complete that programming test at the end of the task action's duration in simulspace. Hope that helps! Chase
[url=http://eclipsephase.com/complete-guide-hacking-ep]The Complete Guide to Hacking in EP[/url]
ShadowDragon8685
lev_lafayette wrote:
A player in my game today argued that they should be able to put their ego (and forks) in a simulspace VR environment and engage in a programming task 60 times faster than in real space, per ego and fork. That seemed too much of a rules exploit to me; it seemed more like VR would provide an experience of +/-60x rather than the actual performance of mental tasks at that rate. But I'm interested in what others have to say about this.
Your player is [u]absolutely[/u] correct. That's a Thing They Can Do. It's even in that very rule you cited:
lev_lafayette wrote:
"Time itself is an adjustable constant in VR, though deviation from true time has its limits. So far, transhuman designers have achieved time dilation up to 60 times faster or slower than real time (roughly one minute equaling either one hour or one second). Time slowdown is far more commonly used, granting more time for simulspace recreational activities (more time, more fun!), learning, [b][u][i]or work (economically effective).[/i][/u][/b] Time acceleration, on the other hand, is extremely useful for making long distance travel through space more tolerable." (p. 241)
Emphasis mine. Your player absolutely [i]can[/i] spend a day wired up to a 60x simulspace server and get two months' worth of coding done. That's the [b]point[/b] of 60x simulspace acceleration.

The real bitch, of course, is the logistics of it. First off, are they going into the Matrix themselves, or are they sending a fork in? If they're sending a fork in, well... they have problems. Look at how the player's character has behaved, and figure out how they'd take to being expected to slave away for two months followed by being executed. Have the fork act appropriately. Also, where are they getting the big iron they'd need to run a 60x simulspace environment? I mean, sure, if they own it outright themselves, that's one thing, but renting time on those things is expensive, and if someone else owns it, the security of what you're doing is inherently compromised.
Skype and AIM names: Exactly the same as my forum name. [url=http://tinyurl.com/mfcapss]My EP Character Questionnaire[/url] [url=http://tinyurl.com/lbpsb93]Thread for my Questionnaire[/url] [url=http://tinyurl.com/obu5adp]The Five Orange Pips[/url]
Trappedinwikipedia
Assuming that a server which can run 60 infomorphs can run a single infomorph at 60x speed (which is reasonable cost-wise at least, I think, though the same hardware might not be able to do both), then a 60x server runs you 15 thousand credits. Of course, if you want to use forks, that's a 15k server for each of them, which gets expensive fast (or get the blueprints [20k credits] and a nanofab [20k] and start making roughly 5 per day while power and feedstock last). That's not cheap, but at the cost of a few rail rifles, some high-end computing is a useful tool for many groups. For Firewall, of course, having someone who can fact-check things at better than real time is pretty helpful.

Probably one of the important things is that any outward-facing work will be slowed to the rate of the slowest part of the connection. Depending on what kind of code you're making, parts of a task could take a subjectively extremely long time to complete. Something like doing research might be significantly bottlenecked by a slow database connection, for example. It's an extremely useful tool, but it's important to remember its downsides, for both verisimilitude and balance.
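To put rough numbers on that bottleneck (a toy model; the 10% figure below is pulled out of thin air purely for illustration):

```python
def effective_speedup(dilation, external_fraction):
    """Overall speedup when `external_fraction` of the task (measured in
    real time) must wait on resources outside the simulspace and so
    cannot be dilated; the rest runs at `dilation` x speed."""
    internal = (1 - external_fraction) / dilation  # accelerated portion
    return 1 / (internal + external_fraction)

# With 10% of the work waiting on a real-time database link,
# a 60x server delivers only about 8.7x in practice:
print(round(effective_speedup(60, 0.10), 1))  # 8.7
```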
lev_lafayette
I'll be following what I've put up on Stack Exchange: the speedup is allowed, but it will require computational power and interconnect, and therefore money. More of a rule elaboration than a rule change.
Lazarus
A couple of things should probably be pointed out.

First, and probably most important, 60x is the upper limit of what can be done. Compare that to the real world when we talk about things such as the upper limit of bandwidth or the upper limit of TFLOPS. When we hear about a new record in such statistics, it is almost always set in a university or laboratory. We are not talking about machines that your average person can get their hands on. These are machines often costing hundreds of thousands of dollars or more; they are specially built, often break down, and often can't bring their full ability to bear in a 'practical usage' application. They are designed to test a theory or prove a concept rather than for general use. To be clear, this doesn't mean these machines aren't extremely powerful. They are. What I am saying is that you can't sit there and use simple math to extrapolate their performance for general use. If a machine is capable of performing 1000 times more FLOPS than my computer, that doesn't mean it will play Call of Duty at 60,000 FPS. It would probably be blazingly fast (if it could actually run the software), but you would find plenty of other things getting in the way and seriously decreasing the final results. In a similar vein, it is quite possible, even probable, that the subjects who have been 'accelerated' to 60x time dilation are unable to produce actual work at that speed.

Secondly, a machine capable of running 60 egos at once is almost certainly not capable of running one ego at 60x speed. That's like saying that if I buy ten wireless routers and ten wireless connection devices, my computer will be able to transfer data ten times as fast. Even if I am dealing with 10 different streams, there will be issues that degrade that performance, and for 'practical' application (i.e., downloading a single large data stream) I'll have to do an awful lot of work before I see any boost in speed, and the end result will still be less than 10x because of the extra overhead of breaking apart and re-integrating the stream. It's why the universities and laboratories don't break their previous records simply by slapping in another CPU/bandwidth connection/etc.

Thirdly, programming is about far more than how fast you can type. Anyone who has written a large program (and I think we can consider most of these cases to be large programs) knows that you spend a fair amount of time compiling code. The length of time compiling can take is one of the big things that pushes SSDs, since a computer using those can compile much faster. Well, guess what? VR acceleration isn't going to increase compiling speed at all. Neither will it increase the speed at which the program runs when you want to test it. Nor will it increase the speed when you submit something to your version control software, or when you roll back to an earlier version because you screwed something up.

Lastly, there's nothing in the original figure that says how long people are able to function at that maximum acceleration. There are quite possibly psychological and physiological effects from prolonged life at those speeds (yes, even for a synth or infomorph there could be 'physiological' issues caused by heating, lack of repair cycles, etc.). Olympic sprinters might get upwards of 30 mph, but no one is crossing America on foot in 5 days.

"But what about the fact that it specifically says that large corporations use VR to increase worker productivity?" you ask. To this I reply, "Don't start a sentence with a conjunction." I will also say that I am not claiming that doing the programming in VR won't be faster than doing it in 'meatspace'. Just as I admit that a university/laboratory computer can play a game way faster than my PC, I am happy to admit that corporate workers operating out of these VRs are capable of being much more productive than someone working outside of VR. Doing some quick Googling and number crunching, it looks like IBM paid its workers about $28.5 billion last year, and certainly spent more when you add in things like health insurance, retirement benefits, unemployment taxes, etc. A system that costs them $500 million per year in energy, repairs, and equipment amortization, but that allows them to lay off just 1/3 of their workforce because their programmers and engineers are able to work twice as fast, would save them over $9 billion a year. That's a great deal. Now I'm not saying that 2x is the maximum practical speed, nor am I saying that is the speed the corporations are using. I'm just drawing some numbers out of thin air to show that it is entirely possible for the system to A) be immensely expensive relative to the characters' pocketbooks, B) not produce nearly the results the characters would think they could get, while C) still being absolutely practical for the corporation.

In the end the GM will have to come up with a maximum speed they are comfortable with and an equipment cost they think is fair. Some GMs may decide to go with 60x and only require a 15k machine, and that's fine. My point in this screed isn't to say you must lower that limit and/or make the machines more expensive. You don't have to agree with me that the initial statement refers to specialized machines located in universities/laboratories, that compiling and testing would significantly slow things down, or that there are harmful effects from prolonged use of those speeds. I don't 'win' by making you play the game my way, and I don't 'lose' if you play it your way. As long as you're having fun, that's fine. All I'm doing is offering what I feel are some reasonable interpretations of the line as written that help prevent what might be a possible imbalance in your game. Use my advice as you see fit. Do not consume large amounts of alcohol while consulting my advice. Women who are pregnant or who wish to become pregnant may wish to ignore my advice. Possible side effects of my advice include dizziness, nausea, a vague feeling of dread, and the end of life as we know it.
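For anyone who wants to check my napkin math on the corporate example (all figures are my illustrative guesses from above, not canon):

```python
payroll = 28.5e9         # rough figure for IBM's annual payroll, per above
system_cost = 0.5e9      # assumed annual cost of the VR system
laid_off_fraction = 1/3  # cut enabled by workers being ~2x as productive

savings = payroll * laid_off_fraction - system_cost
print(f"${savings / 1e9:.1f} billion saved per year")  # $9.0 billion
```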
My artificially intelligent spaceship is psychic. Your argument is invalid.
ShadowDragon8685
Lazarus wrote:
A couple of things should probably be pointed out. First, and probably most important, 60x is the upper limit of what can be done. Compare that to the real world when we talk about things such as the upper limit of bandwidth or the upper limit of TFLOPS.
I'm sorry, but this is just not the case in much of Eclipse Phase. The very nature of economy and manufacturing is, for most people, different, because most people outside of enforced scarcity economies have nanoscale manufacturing literally in their homes. And it won't be [i]that[/i] much more resource-intensive to build a 60x simulspace server than it will be to build one that's 40x or 20x. More resource-intensive, yes; orders of magnitude more, no.
SquireNed
ShadowDragon8685 wrote:
Lazarus wrote:
A couple of things should probably be pointed out. First, and probably most important, 60x is the upper limit of what can be done. Compare that to the real world when we talk about things such as the upper limit of bandwidth or the upper limit of TFLOPS.
I'm sorry, but this is just not the case in much of Eclipse Phase. The very nature of economy and manufacturing is, for most people, different, because most people outside of enforced scarcity economies have nanoscale manufacturing literally in their homes. And it won't be [i]that[/i] much more resource-intensive to build a 60x simulspace server than it will be to build one that's 40x or 20x. More resource-intensive, yes; orders of magnitude more, no.
Keep in mind that some of the resources might actually be different. While it's theoretically possible to simply make a machine with more power, that's not always effective: if it were, we could have an infinitely scaled processor IRL for simply the cost of adding on new materials. The cost of simulated time likely rises steeply: running a brain state faster than real time probably requires much more than a proportional increase in processing power. Adding more people running at the same time at standard speed is likely closer to linear, though memory might be more of a bottleneck than processor power. The high-performance systems needed to go 60x might well require better cooling, power sources, or optical components than are needed for 60 1x minds running on separate computing modules.
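A toy cost model of what I mean (the exponent is completely made up, just to illustrate the shape of the problem):

```python
def cost_many_minds(n, base=1.0):
    """Linear model: n independent minds at 1x on separate modules."""
    return n * base

def cost_fast_mind(dilation, base=1.0, exponent=1.5):
    """Superlinear model for one mind at `dilation` x speed; the
    exponent is a pure assumption for illustration, not a rule."""
    return base * dilation ** exponent

print(cost_many_minds(60))        # 60.0 units for sixty 1x minds
print(round(cost_fast_mind(60)))  # 465 units for one 60x mind
```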
lev_lafayette
Lazarus wrote:
When we hear about a new record in such statistics, it is almost always set in a university or laboratory. We are not talking about machines that your average person can get their hands on.
Which is pretty much what I have done (cf. the rpg.stackexchange link above).
Lazarus wrote:
Secondly, a machine capable of running 60 egos at once is almost certainly not capable of running one ego at 60x speed. That's like saying if I buy ten wireless routers and ten wireless connection devices my computer will be able to transfer data ten times as fast.
That's not a good analogy. With some overhead, parallel code (multithreaded or message-passing) can achieve significant performance improvements, subject to the limitations of the serial portion (cf. Amdahl's Law). Wireless performance, assuming different parallel tasks, is limited by upstream bandwidth.
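Roughly, the Amdahl's Law point in numbers (the 95% parallel fraction is just an example figure):

```python
def amdahl_speedup(parallel_fraction, workers):
    """Amdahl's Law: overall speedup is capped by the serial fraction,
    no matter how many parallel workers (here, forks) you add."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / workers)

# Even with 95% parallelizable work, 60 workers give well under 60x:
print(round(amdahl_speedup(0.95, 60), 1))  # about 15.2
```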
Lazarus wrote:
Thirdly, programming is about far more than how fast you can type. Anyone who has written a large program (and I think we can consider most of these cases to be large programs) knows that you spend a fair amount of time compiling code.
Compilation time is relatively trivial, even when it can take several hours in parallel (e.g., ATLAS, http://math-atlas.sourceforge.net/). Actual coding is a relatively modest portion of the time; the largest proportion goes to design and testing.
R.O.S.S.-128
I think the tl;dr of what he's trying to get at is "in theory you can, but you might need Big Iron." Which can definitely be an issue, depending on your GM's view of the availability of computing power: you may not even be able to run a single AGI at full strength/speed without shelling out for a mobile server. I've had quite a few negotiations over whether my Spd 4, COG 30 AGI can run at full strength on a ghostrider or ecto, or whether I'd run into speed/aptitude caps due to insufficient hardware, along with discussions on whether I can remote in from a mainframe wherever we're going (which of course requires occupying a relatively nearby mainframe).

For a similar situation where you might start running into Big Iron requirements, consider this: with Spd 4 and appropriate multitasking upgrades I can take 12 mental actions per turn, which I can use to order 12 drones around. Or I could issue orders to 12 forks of myself, each of which controls 12 drones, for a total of 144. Roughly a one-AGI company. But besides being a huge headache for my GM, it would also be pretty easy to argue that I'll need some Big Iron to act as a C3 node for a company-sized formation of drones.
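The force-multiplication arithmetic, spelled out (the per-phase multiplier is my reading of how the multitasking upgrades stack, not a quoted rule):

```python
speed = 4                   # Spd 4: four action phases per turn
actions_per_phase = 3       # assumed multiplier from multitasking upgrades
actions_per_turn = speed * actions_per_phase
forks = 12                  # one order per mental action, one fork each
drones_per_controller = 12  # each fork likewise commands 12 drones

assert actions_per_turn == 12
print(forks * drones_per_controller)  # 144 drones: roughly a one-AGI company
```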
End of line.
Lazarus
lev_lafayette wrote:
...Compilation time is relatively trivial, even when it can take several hours in parallel (e.g., ATLAS, http://math-atlas.sourceforge.net/). Actual coding is a relatively modest portion of the time; the largest proportion goes to design and testing.
Actually, that's sort of the point. Compiling is relatively trivial in the real world because in the worst-case scenarios you can typically tell the compiler to run when you go home at the end of the day, come in the next day, and *poof* it's done. Under VR, though, the situation is very different. Got a job that takes two hours to compile? That's now going to cost you 5 days in VR. That IDE that you open up and then go get a cup of coffee because it takes a minute to launch? That's an hour. Yes, IRL the time to do things like compile your code, boot your machine, or even empty the trash can is only a small part of your workday. In a VR environment where things are running much faster? Those things may suddenly become massive bottlenecks.

Also, I should probably mention that one of the major 'programming' jobs characters might be interested in performing is probably much more impacted than pure coding: blueprints. Without a doubt, part of the process of creating a new blueprint involves printing the current design, evaluating it, recycling it, modifying the design, lather, rinse, repeat. That trivial object that takes one week to program is probably printed nearly a dozen times (assuming a print before lunch and one at the end of the day). The 'VR time' to print 10 copies? 25 days. (This assumes that you must print your copies in the real world because a VR 'print' won't actually give you enough data.) If you figure you also have to spend time in the real world evaluating each copy, you're easily looking at another 10 hours of real life. Your remaining 20 hours where you are actually 'coding' the blueprint? Yeah, you shortened that to 20 minutes. Your 40-hour project is now 20 1/3 hours, and while that is still a nice saving in time, it is a far cry from being able to whomp up a blueprint for a trivial item in 40 minutes.
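My blueprint example as arithmetic (the per-step hours are my own assumptions from above):

```python
dilation = 60
coding_hours = 20      # in-VR design work, fully compressible
print_hours = 10       # ~10 real-world prints at roughly an hour each
evaluation_hours = 10  # hands-on evaluation, also stuck in real time

real_total = coding_hours / dilation + print_hours + evaluation_hours
print(f"{real_total:.2f} hours")  # 20.33 hours, down from 40 but far from 40/60
```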
lev_lafayette wrote:
That's not a good analogy. With some overhead, parallel code (multithreaded or message-passing) can achieve significant performance improvements, subject to the limitations of the serial portion (cf. Amdahl's Law). Wireless performance, assuming different parallel tasks, is limited by upstream bandwidth.
Sure. Again, I'm not saying there would be no improvement; I'm saying that for 'practical' use the improvement might not be as great as the math would suggest. Yes, with all those extra pipes I can download 10 streams in the time it takes you to download 1, but since in a lot of cases I don't download 10 streams simultaneously, the 'practical' effect is somewhat lessened. Even if you point to something like the multiple downloads that occur on your average web page, the increased bandwidth would do nothing to eliminate latency or decrease server response time. Will I load a web page faster than you? Sure. Will it be 10x as fast? Probably not.
Lazarus
ShadowDragon8685 wrote:
I'm sorry, but this is just not the case in much of Eclipse Phase. The very nature of economy and manufacturing is, for most people, different, because most people outside of enforced scarcity economies have nanoscale manufacturing literally in their homes. And it won't be [i]that[/i] much more resource intensive to build a 60x simulspace server than it will be to build one that's 40x or 20x. More resource-intensive, yes, orders of magnitude more intensive, no.
That's assuming that the university computer can easily be nanofabricated. Odds are pretty good that this isn't the case.

First, there's the whole issue with blueprints. The university probably isn't going to just put them online. The design is the product of teams of extremely bright people working for years, so the idea that a hacker would just make their own version doesn't hold up. That's not to say you won't have teams of hackers working together, similar to RL open-source projects, but the odds are that they won't be on the same level as what the university produces. (And I've got nothing against open source. In fact, I'm an immense fan, and there are wonderful open-source projects out there. However, I don't think there's an open-source equivalent to IBM's Deep Blue.)

Then there's the issue of resources. Believe it or not, there are plenty of materials that cannot simply be nanofabricated. Radiologicals are a prime example. Now, the odds are pretty good the computer doesn't need those, but that's just one example. What if those 60x mainframes are quantum computers? Can't just fabricate qubits. You know what else you can't fabricate? Liquid helium. What if the computer needs that for cooling, or for keeping certain parts in a superconductive state? What if metallic hydrogen is a critical component? Sure, a university or hypercorp lab might have the facilities to create and maintain metallic hydrogen, but your average person? Not so much.

And all of this assumes that nanofabrication is capable of handling individual atoms. Considering that a biomorph can't be fabricated, I would argue that nanofabrication is not completely capable of working at the level of individual atoms. That means that the computer might require components (room-temperature superconductors, electrical pathways a single atom wide, molecular lattices that have to be grown) which your average person can't fabricate.
ShadowDragon8685
Lazarus wrote:
That's assuming that the university computer can easily be nanofabricated. Odds are pretty good that this isn't the case. First, there's the whole issue with blueprints. The university probably isn't going to just put them online. The design is the product of teams of extremely bright people working for years, so the idea that a hacker would just make their own version doesn't hold up. That's not to say you won't have teams of hackers working together, similar to RL open-source projects, but the odds are that they won't be on the same level as what the university produces. (And I've got nothing against open source. In fact, I'm an immense fan, and there are wonderful open-source projects out there. However, I don't think there's an open-source equivalent to IBM's Deep Blue.)
Wow, it's like you're entirely unfamiliar with the concepts of the Argonauts and the Titanian Commonwealth. In short? Yes, those blueprints [i]are[/i] out there, open-sourced, much to the gnashing-toothed frustration of the inner-system hypercorps.
Quote:
Then there's the issue of resources. Believe it or not, there are plenty of materials that cannot simply be nanofabricated. Radiologicals are a prime example. Now, the odds are pretty good the computer doesn't need those, but that's just one example. What if those 60x mainframes are quantum computers? Can't just fabricate qubits. You know what else you can't fabricate? Liquid helium. What if the computer needs that for cooling, or for keeping certain parts in a superconductive state? What if metallic hydrogen is a critical component? Sure, a university or hypercorp lab might have the facilities to create and maintain metallic hydrogen, but your average person? Not so much. And all of this assumes that nanofabrication is capable of handling individual atoms. Considering that a biomorph can't be fabricated, I would argue that nanofabrication is not completely capable of working at the level of individual atoms. That means that the computer might require components (room-temperature superconductors, electrical pathways a single atom wide, molecular lattices that have to be grown) which your average person can't fabricate.
A biomorph can be nanofabricated; it's just the brain that can't. And even that is kind of weird, since the entire ego bridge tech is itself reliant on nanoscale manipulation. The computer, on the other hand? Oh yes, that can be.
Lazarus
ShadowDragon8685 wrote:
Wow, it's like you're entirely unfamiliar with the concepts of the Argonauts and the Titanian Commonwealth. In short? Yes, those blueprints [i]are[/i] out there, open-sourced, much to the gnashing-toothed frustration of the inner-system hypercorps.
No. I am quite familiar with the concepts. I am also pretty sure that they do not have the blueprints for everything in existence. They have versions of a lot of the more common and mundane items, but clearly either not everything invented at Titanian University or by the Argonauts is released, or else those groups are lagging behind the hypercorps. Otherwise all the hypercorps could just close down their own R&D departments and use these free supercomputer blueprints. As for the hypercorps' own supercomputers, the only way those blueprints get to the Titanian Commonwealth or the Argonauts is if they can crack the copy protection. To do that they have to first get the blueprints, which the hypercorps probably aren't disseminating.
Quote:
A biomorph can be nanofabricated, it's just the brain that can't.
No, it can't. You can't even nanofabricate major organs; it takes about six months to grow the parts for a pod. If the brain were the only limiting factor, Titan could fabricate millions of biomorphs and install cyberbrains instead of meat brains. No more infomorphs waiting for bodies (since there would no longer be a need for much in the way of resources beyond carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, which are pretty abundant on Titan), and no more clanking masses. It is actually harder to get some of the resources for synths than for biomorphs (assuming you could fabricate a biomorph). Synths would suddenly not be for the people who couldn't afford a biomorph; they'd be for people who could afford more expensive materials than carbon.
Quote:
And even that is kind of weird, since the entire ego bridge tech is, itself reliant on nanoscale manipulation.
No, it isn't. The ego bridge uses nanites to map and modify neurons. Neurons are considerably larger than nanoscale.
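To put rough numbers behind that claim (the sizes below are ballpark figures I'm supplying, not rulebook values): a small neuron soma runs about 10 micrometres across, while "nanoscale" conventionally tops out around 100 nanometres.

```python
# Ballpark scale comparison; both figures are approximate assumptions.
NEURON_SOMA_NM = 10_000    # ~10 micrometres, small end for a neuron soma
NANOSCALE_MAX_NM = 100     # conventional upper bound of "nanoscale"

ratio = NEURON_SOMA_NM // NANOSCALE_MAX_NM  # neurons are ~100x larger
```

So even a small neuron is a couple of orders of magnitude above the scale the nanites themselves work at.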
Quote:
The computer, on the other hand? Oh, yes, that can be.
Even if the above were correct, you still haven't addressed other possible requirements such as quantum bits and metallic hydrogen. Additionally, if it were as trivially easy as you suggest, why are there still so many egos in storage? Just fabricate some more of those computers for these poor guys. Now once again, I'm not demanding that you run your game so that it's impossible for people to easily nanofabricate these machines. If that floats your boat then great. What I am saying is that an ample argument can be made that such machines could be out of the players' reach.
My artificially intelligent spaceship is psychic. Your argument is invalid.
lev_lafayette lev_lafayette's picture
Lazarus wrote:
Lazarus wrote:
First, there's the whole issue with blueprints. The university probably isn't going to just put them online. The design is the product of teams of extremely bright people working for years, so the idea that a hacker would just make their own version doesn't hold up.
Well, we do currently. The architecture for cluster computing (from which we get supercomputers) and the implementations of message-passing and shared-memory parallelism are already freely available, mainly because computer scientists (correctly) view these as scientific and technical problems subject to academic and research investigation.
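For a concrete sense of what "freely available" means here: the parallel primitives those stacks provide ship free in every modern open-source toolchain. A toy sketch using Python's standard-library thread pool (the workload function and chunk size are placeholders I made up, not anything from a real HPC code):

```python
# Toy illustration of shared-memory parallelism using only free,
# standard-library tools; simulate_chunk is a stand-in workload.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    # Placeholder for one slice of a scientific computation.
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i:i + 100] for i in range(0, len(data), 100)]

# Fan the chunks out across worker threads, then combine partial sums.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(simulate_chunk, chunks))

total = sum(partials)  # same answer as the serial computation
```

Real clusters do the same fan-out/combine with MPI across nodes, but the point stands: none of the building blocks are proprietary.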
Lazarus wrote:
However, I don't think there's an opensource equivalent to IBM's Deep Blue).
There certainly is. Go through the TOP500 list (not the best metric, I know, but the one that's most cited) and you'll see that the overwhelming number of systems are built with open standards: around 99.8% of the top 500, for example, run Linux as the operating system, they use FOSS for their parallelism as mentioned, and they use FOSS for most of their scientific applications. Where there is some proprietary material is in certain interconnects (e.g., torus networks, in comparison to open standards like InfiniBand and Ethernet) and in some pretty awful-to-work-with applications (e.g., MATLAB(R)).
R.O.S.S.-128 R.O.S.S.-128's picture
Can't win and can't break even
At this point the argument seems to basically come down to a disagreement on whether you can get a free lunch. The way I see it, resources in EP are abundant but by no means free. On most habs this is especially true of cubic meters. In most situations, unless you have a really well-established HQ, it'll probably be easier (and cheaper) to rent server space for the duration of the mission than to print your own server.

If you want to run a "free lunch" campaign I suppose that's your decision as GM, though of course you'll have to be prepared to deal with the fallout: I assure you your players will think of some pretty mind-bending things they can do if you allow them to use nanofabrication as an infinite fountain of free stuff.

I can say for certain that you can't do 60x simulspace acceleration on an ecto/insert for one very good reason: those are considered "personal computers", and on page 247 of the rulebook it states that personal computers cannot run simulspace programs at all. However, a simulspace subscription is listed as just Moderate per month. It doesn't specify the acceleration factor as affecting the cost in any way, though it'd be safe to assume that slower services are on the lower end of that range (~500 credits or credit equivalents) and a 60x would be on the upper end (~1400 credits or credit equivalents). If your op lasts less than a month that's effectively a one-time cost (and likely the option you'll go with if it lasts more than 4 days, which is roughly the point where daily rental costs the same as monthly).

Of course, if you want to be more restrictive you could rule that ~1000 +/- 400 credits only gets you a 1x simulspace, and that you have to pay extra on top of that base cost for each acceleration factor. For example, 1000 for a 1x subscription plus another 300-600 per level of acceleration (with bundle-style discounts at round numbers like 5, 10, 20, etc.).
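To put rough numbers on the rental math above (all figures are my assumptions: Moderate taken as ~1000 credits for the monthly subscription, an assumed daily rate of 250, and 450 as the midpoint of the suggested 300-600 per-level surcharge):

```python
# Back-of-envelope costing for the options discussed above.
# All rates are assumptions, not rulebook values.
DAILY_RATE = 250      # assumed per-day simulspace rental, in credits
MONTHLY_RATE = 1000   # Moderate monthly subscription, taken at ~1000

def cheapest_rental(days):
    """Cheaper of day-by-day rental vs. a monthly subscription."""
    return min(days * DAILY_RATE, MONTHLY_RATE)

def breakeven_days():
    """First op length (in days) at which the monthly option is no worse."""
    days = 1
    while days * DAILY_RATE < MONTHLY_RATE:
        days += 1
    return days

def houserule_price(accel, base=1000, per_level=450):
    """Hypothetical restrictive pricing: base buys 1x, plus a surcharge
    per extra acceleration factor (bundle discounts not modeled)."""
    return base + per_level * (accel - 1)
```

With these assumed numbers the break-even lands at 4 days, matching the estimate above, and a 60x subscription under the restrictive house rule becomes seriously expensive, which may be exactly the point.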
End of line.
uwtartarus uwtartarus's picture
Plus folks get really
Plus folks get really pitchforky, regardless of polity, when you start building supercomputers, what with The Fall and all.
Exhuman, and Humanitarian.
ThatWhichNeverWas ThatWhichNeverWas's picture
With this new hardware, my lag is 60x worse!
There are a couple of odd assumptions in this discussion which I don't necessarily agree with.

The first is purely rules-based: who said that time dilation and action count are directly linked? In a couple of other threads I've mentioned that in my head canon, 60x dilation is having Speed 4 plus the mental speed augmentation/software. It's also worth noting that Transhuman says the maximum reduction in a Task Action's timeframe is 90%, presumably because bottlenecks are inevitable.

The second thing is more interesting: if 60x (or any value) time acceleration is freely available... then it's freely available. Let me clarify. Let's assume that 60x server space is available on demand; the cost to use it is irrelevant at this point. It's pretty clear that, as has been pointed out, time dilation is only really useful for specific tasks, if only because the only thing that actually runs faster is the user - all the other programs and connections they're using are going to run at a set speed. In that case there's no real reason to use it when not doing those tasks... and no real reason not to use the highest available time dilation when you are. If we accept that time dilation at "any" level is commonly available, then we can simply fold that into the status quo and it stops being an issue. We simply have to decide what level of dilation is so easy to get that it's not worth mentioning any more.

Regarding MorphBrains: the easy way to explain the trouble in creating neural tissue is that the processes required to nanofab it from scratch cause problems which are disproportionately hard to fix. The cells may adapt to depend on the presence of the nanofabricators to function, or develop an aggressive immune response to nanotech in general, or begin forming connections which don't occur in "naturally" grown brains, and so on.
Sure, ego bridges alter brains, but the actual work they do may not be that extensive compared to the total mass - the morph's bonuses remain with it, as do traits like Memory Artifact. The bridge rebuilds/rewires existing structures in the brain, it doesn't create them from scratch.
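The interaction between dilation and Transhuman's 90% cap can be sketched like so (this is my reading of the rule, with the cap expressed as a minimum fraction of the base timeframe):

```python
# Real time needed for a Task Action under time dilation, with a cap on
# how much the timeframe can shrink (Transhuman's 90% maximum reduction,
# as I interpret it; the exact mechanics here are my assumption).
def task_time(base_hours, dilation, max_reduction_pct=90):
    naive = base_hours / dilation                         # uncapped speedup
    floor = base_hours * (100 - max_reduction_pct) / 100  # 10% minimum
    return max(naive, floor)

# A 60-hour task at 60x dilation hits the cap: 6 hours of real time, not 1.
capped = task_time(60, 60)
# At 5x the cap never engages: 12 hours of real time.
uncapped = task_time(60, 5)
```

On this reading, anything past roughly 10x dilation buys you nothing for a single Task Action, which fits the "bottlenecks are inevitable" rationale.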
In the past we've had to compensate for weaknesses, finding quick solutions that only benefit a few. But what if we never need to feel weak or morally conflicted again?