
Why does a muse have INT?

Lorsa
Why does a muse have INT?
I've been trying to figure out exactly why a muse (and a few other AIs) have aptitude scores, often only one. What can a muse do with this INT? It can't default as per the rules, so what does it do?
Lorsa is a Forum moderator [color=red]Red text is for moderator stuff[/color]
LatwPIAT
All AI have, by default, 10
All AI have, by default, 10 in all Aptitudes. Muses have, as a special feature, 10 extra INT, which is written in the rules as "INT 20". AIs like muses have Aptitudes because Aptitudes are needed to derive statistics like Trauma Threshold, Insanity Rating, Initiative, and resistance to emotional manipulation, all of which AIs do have. Remember, when a stressful situation causes you to gain stress, your muse may also gain stress, pick up disorders, and eventually go incurably insane.
@-rep +2 C-rep +1
DivineWrath
Yes, all AIs have a default
Yes, all AIs have a default aptitude score of 10. You see this rule in the core rulebook on p. 331. It's under AIs and Muses (which is near the bottom right side of the page). Also, some tasks are aptitude-only tests, not skill tests. You can see some of this in the core rulebook on pp. 174-175.
Lorsa
I see. So having a muse is
I see. So having a muse is really stupid if you're doing funky stuff, because it will just go insane? I didn't think they had the coding to get disorders, as that seems to imply (to me) some form of character growth, which they can't have.
Lorsa is a Forum moderator [color=red]Red text is for moderator stuff[/color]
OneTrikPony
It's probably best to ignore
It's probably best to ignore stress for software, or make your own list of stress situations for AI/AGI, because they probably don't react or empathize the way transhumans do. Stress for software is an implied rule, not an explicit one.

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

Decivre
Lorsa wrote:I see. So having
Lorsa wrote:
I see. So having a muse is really stupid if you're doing funky stuff, because it will just go insane? I didn't think they had the coding to get disorders, as that seems to imply (to me) some form of character growth, which they can't have.
Think of insanity as software corruption. It can happen. Granted, most playgroups seem to houserule that narrow AI are invulnerable to most insanity effects (don't think your robot servant is really going to be all that traumatized if you're murdered in front of it), so I would just do that.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
jackgraham
To be honest, I'd forgotten
To be honest, I'd forgotten the rules gave AIs Lucidity until several months back when I did a close re-read of the AI/AGI material while working on TH. But I have written scenarios where insane/corrupted muses showed up (e.g., Glory). So I'm with those who think its application shouldn't be as broad as when we call for stress checks from PCs.
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
OneTrikPony
I forgot you wrote Glory,
I forgot you wrote Glory, Jack. That was a damned good adventure. My first EP fiction was a short story based on the Glory plot. I would definitely recommend that AI take stress from "powerful viral infections"; it was an effective bit of storytelling. Edit: It would be awesome if some of the more code-savvy persons here could chip in with a list of AI-appropriate Derangements and Disorders. As a GM I do appreciate having the ability to cripple a character by smacking their muse around.

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

DivineWrath
I may be a bit code savy, but
I may be a bit code savvy, but I'm having trouble coming up with ideas for AI derangements and disorders. Maybe it is because I don't agree with the Cthulhu-style horror stuff, so that is impeding my ideas. Also, code tends to break down in unexpected ways, ways that are difficult to come up with unless you've encountered such a problem while debugging. Unfortunately, the code in Eclipse Phase is a lot more complex than what we have today, and is usually more reliable too. Also, if I recall correctly, in Eclipse Phase, AI code does include useful pieces of neural code from uploaded people, so they might break in the same ways that people do. Also, aren't limited AIs supposed to be unchanging? They aren't supposed to learn either. Why should they develop personality disorders?

----

As for ideas: Off hand, I recall something from Star Trek Voyager. The EMH (Emergency Medical Hologram) ended up caught in an infinite-loop ethical dilemma. He had two patients who were dying, but he could only save one. They had equal odds of surviving if given surgery, and he resolved the conflict by saving his friend. That came back to haunt him later, as his ethical programming had deemed it unacceptable that he used "friendship" as a determining factor in what should have been an unbiased clinical decision. This error quickly became his sole concern, as he needed to figure out how to resolve the problem in an unbiased clinical manner.

An Eclipse Phase version of this might force the AI to decide that it has suffered a serious problem and that it should prioritize resolving the conflict over obeying the commands of its owner (it might make a serious error). Alternatively, it might notify the owner that it has suffered a serious error and should be sent back to the developers for debugging. It will shut itself down to avoid causing further harm, and will continue to shut itself down if its owner tries to reactivate it.
Arenamontanus
Another fun AI derangement is
Another fun AI derangement is ontological shock: the AI suddenly learns that the world works differently from what it thought, so it needs to re-evaluate goals, commands and other things in the light of the new perspective. This is a headache even for AGIs and transhumans, but a fairly smart AI like a Muse can *really* get crazy about it.

For example, it realizes that everything is made of atoms, including its owner. So now it needs to figure out what configurations of atoms are owners (whose commands it should obey) and what configurations are non-owners. Suddenly it starts to try to measure everything, getting bogged down in questions about how many nanosensors it can get in order to really check if a dust speck is an owner. Rather technical paper about it: http://intelligence.org/files/OntologicalCrises.pdf

AI craziness can be *really* crazy. "You told me to make you a sandwich. So I did. Then I realized I was uncertain if I had succeeded, so I looked at it using the room cameras. That seemed OK, but the room cameras could be hacked or wrong. And maybe somebody could take the sandwich before you entered. So I started fabbing more cameras and detectors to be sure it was there. Plus I used them to limit access to the room, so nobody could grab the sandwich. That is why the kitchen is impassable. I hope you enjoy your sandwich."
Extropian
DivineWrath
I've recently considered 2
I've recently considered 2 more possibilities.

The muse has realized how at risk its owner is of being destroyed forever. The owner has a single active instance of itself, and all backups of itself are held somewhere on a single habitat. In order to avoid such an outcome, it seeks to do things to maximize its owner's odds of survival. It might try to mass fork its owner and/or spread copies of the owner's backup everywhere. Having a copy of its owner's backup in the hands of 9 Lives might be deemed preferable to having all copies of its owner's ego destroyed forever by a single habitat accident.

Another is where the muse has determined that it is in the owner's best interest to become a seed AI or something like it. Since such research is considered dangerous and very illegal, it will take pains to conceal such research from everyone, maybe even its owner. It might even try to recommend that its owner try advanced, new, and experimental augmentations. The owner might be surprised when he discovers that a group of people (people like Firewall) are investigating him, and he has no clue as to why. An alternative is for the muse to try to become a seed AI itself so it can best serve the best interests of its owner. Or what it thinks are the best interests of its owner...
Decivre
jackgraham wrote:To be honest
jackgraham wrote:
To be honest, I'd forgotten the rules gave AIs Lucidity until several months back when I did a close re-read of the AI/AGI material while working on TH. But I have written scenarios where insane/corrupted muses showed up (e.g., Glory). So I'm with those who think its application shouldn't be as broad as when we call for stress checks from PCs.
One of my players recommended that AI basically get a crap-ton of free hardening traits towards a number of traumatic events, and that it should be mostly up to the GM when they make tests. For instance, the death of the muse's owner is a great opportunity for a stress test (especially if it was gruesome or horrific). I've also ruled that an AI that critically fails a test must make a stress test, as its code gets caught up in a logic loop or perhaps a fault occurs.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Armoured
Common coding errors as disorders
A friend and I brainstormed some classic coding bugs and how they could appear similar to human mental problems. These might help GMs who are trying to roleplay a Muse or AI which is suffering from stress.
  • Multithreading errors could lead to a split personality, like a partial fork of the muse swapping time with the main one.
  • Memory leaks could result in something like Alzheimer's (or just forgetfulness), as the muse both loses information and is incapable of learning anything new.
  • Bad permissions would lead to poor self-esteem, as the muse loses access to ways of actually doing its job.
  • General performance issues (from damage, or EMP) would appear like slowing or retardation.
  • Race conditions could be represented as indecisiveness, as the muse cannot calculate which action is best.
  • Poor prioritizing of tasks would mean the muse focuses on seemingly meaningless tasks, like in Arenamontanus' example above. In extreme cases the muse may even take over your mesh inserts or other implants, making them unavailable to the character as the muse uses all resources.
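If it helps to picture the race-condition entry above, here is a toy Python sketch (purely illustrative; the names are made up): two "threads of thought" write to the same decision slot with no lock, so which answer sticks depends on scheduling luck, which plays out as indecisiveness.

import threading

# Two deliberation threads race to set the same decision; the last writer wins
# because nothing synchronizes access to the shared slot.
decision = {"action": None}

def deliberate(option, thinking_steps):
    for _ in range(thinking_steps):
        pass                      # stand-in for an unequal amount of "thinking"
    decision["action"] = option   # unsynchronized write: the classic race

threads = [threading.Thread(target=deliberate, args=("fight", 100000)),
           threading.Thread(target=deliberate, args=("flee", 100001))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(decision["action"])  # which option wins depends on thread scheduling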
jackgraham
Armoured wrote:A friend and I
Armoured wrote:
A friend and I brainstormed some classic coding bugs and how they could appear similar to human mental problems. These might help GMs who are trying to roleplay a Muse or AI which is suffering from stress.
  • Multithreading errors could lead to a split personality, like a partial fork of the muse swapping time with the main one.
  • Memory leaks could result in something like Alzheimer's (or just forgetfulness), as the muse both loses information and is incapable of learning anything new.
  • Bad permissions would lead to poor self-esteem, as the muse loses access to ways of actually doing its job.
  • General performance issues (from damage, or EMP) would appear like slowing or retardation.
  • Race conditions could be represented as indecisiveness, as the muse cannot calculate which action is best.
  • Poor prioritizing of tasks would mean the muse focuses on seemingly meaningless tasks, like in Arenamontanus' example above. In extreme cases the muse may even take over your mesh inserts or other implants, making them unavailable to the character as the muse uses all resources.
Oof. Yeah, some of these could be especially punishing if the AI is running in an environment where it shares a lot of resources with other software your character needs to run. As a GM, I'd be careful about when/how these were applied.

That said, this goes in the right direction. AIs really should have different susceptibilities and derangement/disorder effects from AGIs and humans, as they're "anatomically" different. We go into some of these differences in more detail in TH... but now I'm feeling like you guys have scooped us on this aspect of AIs. :) Good job!

If you wanted to be a bit more fine-grained about the above, you could break them up a bit by cause and severity. For example, a Bad Permissions disorder and/or derangement works out nicely as a low-severity issue. Permissions problems in real software are fairly common, don't totally cripple the software, and can be a result of corruption or config errors -- i.e., a good candidate for handing out to an AI that's taken a light to moderate stress hit. OTOH, something like a Memory Leak is much more crippling and in the real world tends to result most often from bugs in the original code, maybe making it the right penalty to put on an AI that's taken moderate to severe stress.

If we were to publish official rules for handling disorders/derangements in AIs, I'm not sure we'd want them to mirror contemporary software bugs exactly. Advances in computer science and software engineering techniques between today and the EP timeframe might mean that some of these errors are things of the past, while new ones emerge. But it's a good place to start extrapolating from.
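To make the "by cause and severity" idea a bit more concrete, here is a rough Python sketch; the severity thresholds and disorder names are placeholders made up for illustration, not proposed rules.

import random

# Placeholder mapping from how hard the AI was hit to candidate disorders,
# loosely following the idea that permissions-style glitches are mild and
# memory-leak-style damage is crippling.
AI_DISORDERS_BY_SEVERITY = {
    "light":    ["Bad Permissions", "Task Priority Drift"],
    "moderate": ["Race Condition Indecision", "Thread-Split Personality"],
    "severe":   ["Memory Leak Amnesia", "Pattern Recognition Collapse"],
}

def pick_ai_disorder(stress_hit):
    # Bucket the stress hit, then pick a matching disorder at random.
    if stress_hit <= 2:
        severity = "light"
    elif stress_hit <= 5:
        severity = "moderate"
    else:
        severity = "severe"
    return random.choice(AI_DISORDERS_BY_SEVERITY[severity])

print(pick_ai_disorder(4))  # e.g. "Thread-Split Personality"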
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
jackgraham
Arenamontanus wrote:Another
Arenamontanus wrote:
Another fun AI derangement is ontological shock: the AI suddenly learns that the world works differently from what it thought, so it needs to re-evaluate goals, commands and other things in the light of the new perspective. [...]
Haha, yes, can totally see that happening. On a similar theme, one could imagine various breakdowns in an AI's pattern recognition capabilities, similar to but much more severe than aphasia (breakdown in language comprehension) or prosopagnosia (inability to recognize faces) in humans. An AI with impaired pattern recognition might, for example, lose the ability to reliably distinguish between individual objects in its field of vision ("objects" in this scenario possibly including people). This might not be a big deal for an AI that doesn't rely much on visual input to do its job, but you wouldn't want it happening to the vehicle AI in your car.
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
nick012000
For other ideas for AI mental
For other ideas for AI mental illnesses (and AI mental architecture in general), you might want to read [url=http://intelligence.org/files/CFAI.pdf]Creating Friendly AI[/url], by Eliezer Yudkowsky. In particular, you might want to search the document for "FoF", which stands for "Failure of Friendliness" and is used in the headings of the various subsections discussing them.

+1 r-Rep , +1 @-rep

jackgraham
I'll pass, thanks.
I'll pass, thanks. Yudkowsky & his minions set off my crank alarms something fierce.
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
Arenamontanus
Yes, that paper is a goldmine
Yes, that paper is a goldmine of fun disaster scenarios.

The nice thing about AIs is that they are so stupid and straightforward that they rarely go crazy in any creative way: they just malfunction. As they become more flexible and creative, approaching AGI-hood, the craziness becomes much wilder and more "organic" - this is likely why Muses can go nuts in fun ways. They are already intended to be complex and adaptable (and person-centered: I expect most muses to be completely centred on their owner, so when they go bad they will also centre their madness on the owner...) Full AGIs are essentially transhuman minds (at least in my take on EP they are largely frankensteinian patchworks of useful algorithms borrowed from biology), so they go crazy with various analogs of normal mental disorders, as well as their own versions.

Seed AGIs are where all stops are pulled out: they can transform themselves completely - which also means they can magnify madness arbitrarily far. Most likely, the vast majority of seed AGIs just fail almost directly. But you can never be sure your Promethean is entirely sane - maybe it is just waiting for the right moment to do a treacherous turn and start converting the universe into something bizarre (Why? Because there might be multidimensional squid in another universe. Look, the philosophy and calculations were done already in the 21st century...)

Hmm, Cognite and other AI-research hypercorps probably have entire "graveyards" of records from pre- (and maybe post) Fall AGI research that didn't turn out well. Lots of amazingly smart and esoteric madness documented there, and files on how to do it all again.

Ah, software that just does one thing, doesn't try to predict what you want, doesn't update itself... sometimes the Jovians don't know how good they have it.
Extropian
Decivre
Arenamontanus wrote:Yes, that
Arenamontanus wrote:
Yes, that paper is a goldmine of fun disaster scenarios. The nice thing about AIs is that they are so stupid and straightforward that they rarely go crazy in any creative way: they just malfunction. As they become more flexible and creative, approaching AGI-hood, the craziness becomes much wilder and more "organic" - this is likely why Muses can go nuts in fun ways. They are already intended to be complex and adaptable (and person-centered: I expect most muses to be completely centred on their owner, so when they go bad they will also centre their madness on the owner...) Full AGIs are essentially transhuman minds (at least in my take on EP they are largely frankensteinian patchworks of useful algorithms borrowed from biology), so they go crazy with various analogs of normal mental disorders, as well as their own versions.
I also imagine that a muse's owner is a primary source of madness. Situations in which a muse is unable to do anything to aid or help its owner, or in which it becomes detrimental to its owner, are probably the most likely to cause malfunctions. The new book heavily implies that the right conditions can accidentally force complex enough AI systems to emerge as sapient entities. Does this mean that emergence should be considered a mental disorder that AI can acquire?
Arenamontanus wrote:
Seed AGIs are where all stops are pulled out: they can transform themselves completely - which also means they can magnify madness arbitrarily far. Most likely, the vast majority of seed AGIs just fail almost directly. But you can never be sure your Promethean is entirely sane - maybe it is just waiting for the right moment to do a treacherous turn and start converting the universe into something bizarre (Why? Because there might be multidimensional squid in another universe. Look, the philosophy and calculations were done already in the 21st century...) Hmm, Cognite and other AI-research hypercorps probably have entire "graveyards" of records from pre- (and maybe post) Fall AGI research that didn't turn out well. Lots of amazingly smart and esoteric madness documented there, and files on how to do it all again.
I was under the impression that Prometheans weren't always Seed AI. It's very possible that they started as fairly simple learning AI, and complexity (and processing power) was added as they showed the capability to handle themselves. Plus, this would have an odd implication for the TITANs. How long did it take for the project to actually release a military Seed AI that [i]didn't[/i] self-terminate its own code?
Arenamontanus wrote:
Ah, software that just does one thing, doesn't try to predict what you want, doesn't update itself... sometimes the Jovians don't know how good they have it.
I was always under the impression that Jovians use AI, but only so long as they are hobbled and incapable of human scale thought. I'm sure that even their AI, as simple as it may be, is capable of some degree of prediction.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
LatwPIAT
jackgraham wrote:I'll pass,
jackgraham wrote:
I'll pass, thanks. Yudkowsky & his minions set off my crank alarms something fierce.
@jackgraham: R-rep +5
@-rep +2 C-rep +1
Arenamontanus
jackgraham wrote:Yudkowsky &
jackgraham wrote:
Yudkowsky & his minions set off my crank alarms something fierce.
I can totally see that. However, they have unearthed problems that got a big bunch of philosophers and AI people rather excited. In fact, I am presenting a bit on it at a totally normal big AI conference (IJCAI13 in Beijing) right this week.
Extropian
CodeBreaker
jackgraham wrote:
jackgraham wrote:
Yudkowsky & his minions set off my crank alarms something fierce.
Still, it is rather amusing when one's only interaction with a person involves the potential damning of your immortal being. Roko's Basilisk was one of my first introductions to existential problems.
-
jackgraham
Decivre wrote:The new book
Decivre wrote:
The new book heavily implies that the right conditions can accidentally force complex enough AI systems to emerge as sapient entities. Does this mean that emergence should be considered a mental disorder that AI can acquire?
Hmm... I suppose you could play it that way under the right circumstances.

First, though, bit of terminology clarifying. Both AIs & AGIs are fully sapient. Only AGIs are fully sentient.*

I guess my sympathies are toward supporting sentience however it occurs, so classifying this as a Disorder is something I hesitate at. It feels (no offense to you! we're just thinking differently of it right now) sort of chauvinist. That said, at least as the game stands now, we're mostly treating fully sentient AGIs not based on neural modeling of humans or sentient uplifts as exotic beings not suitable for use as PCs. So yeah, your muse becoming fully sentient in a way that's seriously inconvenient could be characterized as a Disorder from the PC's standpoint... but from the AI's standpoint, it's awesome.

Rather than codifying the circumstances under which this could happen in terms of mechanics, we've left it to GM fiat. For now, at least, it feels like something best left to happen as a consequence and/or driver of plot.

* Oh, noes! What does "fully" mean? For our purposes here, let's say it means, "As much as you or I." Yeah, that's subjective, but I can't offer better right now.
J A C K   G R A H A M :: Hooray for Earth!   http://eclipsephase.com :: twitter @jackgraham @faketsr :: Google+Jack Graham
Decivre
jackgraham wrote:Hmm... I
jackgraham wrote:
Hmm... I suppose you could play it that way under the right circumstances. First, though, bit of terminology clarifying. Both AIs & AGIs are fully sapient. Only AGIs are fully sentient.*
I was using it in more of a colloquial sense, since the terms "sapient" and "sentient" are often interchangeable in science fiction that deals with the boundary between non-human and human. Plus, the Panopticon book already established the Advanced Sapience Tests, so I thought that was the standard for the setting.
jackgraham wrote:
I guess my sympathies are toward supporting sentience however it occurs, so classifying this as a Disorder is something I hesitate at. It feels (no offense to you! we're just thinking differently of it right now) sort of chauvinist. That said, at least as the game stands now, we're mostly treating fully sentient AGIs not based on neural modeling of humans or sentient uplifts as exotic beings not suitable for use as PCs. So yeah, your muse becoming fully sentient in a way that's seriously inconvenient could be characterized as a Disorder from the PC's standpoint... but from the AI's standpoint, it's awesome.
I wasn't so much trying to use the term "disorder" in a negative light. Rather, I was using it in the context of emergence being something unintended by the code of the AI. If "order" is the purpose of the code, then "disorder" is anything that breaks from that purpose.

I think it's a shame that you guys are skirting around exotic intelligences in the setting, as those are the most interesting versions of uplifts and AGIs to me. I can see why predator uplifts might not make feasible PCs, but uplift species that descend from animals that have pack or herd mentalities (pretty much everything referenced in Panopticon save for neo-orcas and neo-octopi) should be just fine. As for AGI, I think that there are plenty of pretty good opportunities for it without making something unfeasible as a character. One of the PCs in our groups is a security AGI designed specifically for protection and vigilance. He isn't really made for being roleplayed in social or emotional situations, which makes him the perfect character for players that aren't interested in getting too much into character. We keep copies of his sheet around for when casual players want to jump in.

One thing I think should be noted is that I can see some emerged AI being unhappy with their newfound sense of reality. For those that take to it badly, they might try to restore from a previous backup, or find ways to hobble their minds and get back to the way things were. For them, the ignorance of being a narrow AI was bliss, and awareness made them see the world in ways that they wish they couldn't.
jackgraham wrote:
Rather than codifying the circumstances under which this could happen in terms of mechanics, we've left it to GM fiat. For now, at least, it feels like something best left to happen as a consequence and/or driver of plot.
Oh, I absolutely agree, but we've already been skirting this issue in our groups and I thought I'd get some input on the concept. One of our GMs is a fan of Bungie games, and has already introduced Rampancy as a disorder that AI can suffer from. Emergence makes sense as the next logical step for such a character. Plus I like the idea of emergence being treated as the byproduct of stressing the mental faculties of AI. It implies to me that RepTrade keeps emerging because its code is being overtaxed, which is an idea that interests me.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Lorsa
I forgot to thank ya'll for
I forgot to thank y'all for the great input on this thread. It helped clear up some things that were an issue (especially the AIs and stress thing). I'm glad I wasn't the only one who didn't think AIs should get stress the same way transhumans do.
Lorsa is a Forum moderator [color=red]Red text is for moderator stuff[/color]
Arenamontanus
jackgraham wrote:First,
jackgraham wrote:
First, though, bit of terminology clarifying. Both AIs & AGIs are fully sapient. Only AGIs are fully sentient.* ... * Oh, noes! What does "fully" mean? For our purposes here, let's say it means, "As much as you or I." Yeah, that's subjective, but I can't offer better right now.
Hehehe... I have spent today writing about software sentience. It is a vexing question for many philosophers too. Herzog et al. (2007) suggest the “Small Network Argument”: “… for each model of consciousness there exists a minimal model, i.e., a small neural network, that fulfills the respective criteria, but to which one would not like to assign consciousness”. So maybe you need a big enough network to have real sentience, and the smaller ones are just rudimentary. Some, like Metzinger, think you need the right kind of self-models, which are likely fairly complex. Others bite the bullet and accept that consciousness might exist in very simple systems. One view is that phenomenal states are independent of higher level functions - David Chalmers suggests even thermostats may have simple conscious states. And so on.

Of course, something many mercurials and singularity seekers might point out is that if you can get from rudimentary sentience to full sentience by adding X-type modules to a brain, maybe beings with more X-modules have more sentience than mere transhumans:

"You are not really *aware* of yourself. You have just a crude self-symbol, bound feelings and some qualia. I can trace them all inside myself, seeing the fractal reflections of my own introspection going nearly all the way to the Gödel-limit. When I feel something, I am consciously aware of it, its causes, and what it is doing with my mind. The only way you can experience it is to upgrade to my kind of brain - you could meditate for a million years, but your final kensho would still not reach my everyday level. Your thalamus and cortex architecture are simply not able to run it."
Extropian
almightymoose
Great stuff!
Outstanding thread! My party is only about 8 games in, so we are still learning... We definitely don't use our muses enough; mostly we only use them when we are short on players and need help. Orcas are pack (pod) animals -- if your muse goes crazy, you are in a lot of trouble when you need some psychotherapy. Ex: Dr S.Q. Widworth, PhD, ironically has a lingering frenzy disorder due to events of the first scenario; would Bella be capable of helping him if she was corrupted? Would she want to? What if she had her own agenda and tried to reprogram him, either to 'protect' him or alter his behavior in some way?
Audaces de Fortuna juvat.
thezombiekat
This actually worries me a
This actually worries me a bit. The muse in particular is an important part of a character's ability to stay sane. A competent therapist you can see every day is essential in the line of work PCs are usually in. PCs are also likely to take higher WIL, the better to resist and cope with mental stress. The muse has WIL 10, so Lucidity 20 and Trauma Threshold 4. It has to roll under 30 to avoid taking stress. If it gains stress in the same situations as PCs, it will be bugfuck crazy long before the PC is. And if it recovers stress the same way, it will need far more therapy than the PCs will. After all, every time the PC witnesses something, the muse does as well; when the PC does something, the muse witnesses it; and the muse is frequently unable to assist in physically dangerous situations, so it has to roll for that as well. So when should a muse roll? For combat AIs, I am thinking they get hardening for typical gunfight scenarios, but a muse is supposed to care.
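(For anyone who wants to check the arithmetic, here is a tiny Python sketch of how I am reading the derived stats; the formulas in the comments are my assumptions about the book, not quotes from it.)

# Derived sanity stats for a WIL 10 muse, under assumed formulas:
# LUC = WIL x 2, TT = LUC / 5, IR = LUC x 2, stress test rolled under WIL x 3.
def muse_sanity_stats(wil=10):
    lucidity = wil * 2
    trauma_threshold = lucidity // 5
    insanity_rating = lucidity * 2
    stress_test_target = wil * 3
    return lucidity, trauma_threshold, insanity_rating, stress_test_target

print(muse_sanity_stats(10))  # (20, 4, 40, 30): LUC 20, TT 4, IR 40, roll under 30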
OneTrikPony
I've never been sold on the
I've never been sold on the idea that a muse is a competent therapist. I see that a muse has an Academics:Psychology skill ranked at 60, but that has very little to do with being able to conduct effective psychotherapy in the real world. Rather, I've always taken it to be a necessary part of the software that allows a Muse to exhibit a specific personality and conduct interpersonal relationships with transhumans. In short, the psychology skill is there to allow the machine to simulate self-awareness.

Academics:Psychology does not equal Therapist. I believe my opinion is supported by the Psychology field of the Medicine active skill and the Psychotherapy field of the Profession skill as noted in the core book, specific note of the academic skill on page 215 notwithstanding.

My perspective on this has been informed by two of my close friends who do have master's degrees or doctorates in psychology and one who is a doctor of psychiatry. All admit that, of the three, only the trained therapist who currently has a master's in psychology is qualified to do actual therapy. My feeling is that players who lobby for allowing a COG/INT/WIL 10 machine to conduct effective psychotherapy are looking for an unrealistic advantage. Personally, I would hope my therapist's IQ/EQ benchmarks somewhere above 100.

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

thezombiekat
Well I had been basing my
Well, I had been basing my assumptions on the game rules. Aptitudes don't affect competence. Skill does. There is no difference in competence between INT 5 skill 60 and INT 40 skill 60; also, the muse has INT 20. The rules for mental healing (EP p. 215) list three skills you can use (Medicine: Psychiatry, Academics: Psychology, or Profession: Psychotherapy) with no distinction in their effects. The learned skill ranges table (EP p. 174) lists 60 as expert competence; 50 was listed as experienced, working in the field at a professional level. Oh, and in Transhuman (p. 167): "The most important skill your muse has is Academics: Psychology, which means it can act as a therapist to heal stress." So there is not much justification in the rules for denying it.
Armoured
OneTrikPony wrote:I've never
OneTrikPony wrote:
I've never been sold on the idea that a muse is a competent therapist.
You have to remember also that a Muse is literally installed inside the skull/ego processor of its transhuman. It gets to see everything its charge sees, does, and reacts to, and how they react to it. It knows you more intimately than anyone else could, even yourself, as it isn't blinded by emotions, just viewing all with cold machine logic. I think that this observation, combined with the Academics:Psychology skill, would make the Muse very good at giving its owner therapy. Not so much anyone else; the Muse doesn't have the deep libraries on them that tell it that the last time its owner lost a friend to violence, they needed to curl up in a corner, eat comfort ice cream, and watch 14 hours of fantasy XPs before they felt better. It coaxes and directs its transhuman to get better through things it knows will work from experience, or that have a high probability of working.
OneTrikPony wrote:
Personally, I would hope my therapist's IQ/EQ benchmarks somewhere above 100
tl;dr: A Muse doesn't need to be clever, it is an expert system designed and evolved for the express purpose of giving its transhuman effective therapy.
Justin Alexander
OneTrikPony wrote:Academics
OneTrikPony wrote:
Academics:Psychology does not equal Therapist. I believe my opinion is supported by the Psychology field of the Medicine active skill and the Psychotherapy field of the Profession skill as noted in the core book, specific note of the academic skill on page 215 notwithstanding.
You believe that the rules support your opinion because you're explicitly ignoring the rules? ... Fascinating.
Quote:
My feeling is that players who lobby for allowing a COG/INT/WILL 10 machine to conduct effective psychotherapy are looking for an unrealistic advantage.
Given that the core rulebook specifically calls out the ability for muses to provide psychological assistance (see pg. 220, for example), this isn't really "lobbying": This is just the players expecting the game and the setting to work the way the rulebook says it does. House rules are fine, of course. But you need to accept that they're house rules. You'll find it's a lot easier to convince players to go along with your house rules if you're not simultaneously trying to convince them that they're "looking for an unrealistic advantage" by reading the rulebook literally.
thezombiekat wrote:
The muse in particular is an important part of a character's ability to stay sane. A competent therapist you can see every day is essential in the line of work PCs are usually in. (...) If it gains stress in the same situations as PCs, it will be bugfuck crazy long before the PC is. And if it recovers stress the same way, it will need far more therapy than the PCs will.
The rulebook mentions that people tend to keep multiple backups of their muse because it's a huge catastrophe if you lose it.
almightymoose
Muse therapy
Well, to split the difference, the average muse probably only deals with mild stress like seeing an accident or resleeving. A Firewall or Project Ozma op should upgrade...
Audaces de Fortuna juvat.
OneTrikPony
Justin Alexander wrote
Justin Alexander wrote:
OneTrikPony wrote:
Academics:Psychology does not equal Therapist. I believe my opinion is supported by the Psychology field of the Medicine active skill and the Psychotherapy field of the Profession skill as noted in the core book, specific note of the academic skill on page 215 notwithstanding.
You believe that the rules support your opinion because you're explicitly ignoring the rules? ... Fascinating.
If this were the only feature in the game where the RAW can be read to contradict itself or a reading of the setting, I'd grant that your [snarky] repetition of a conflict I already admitted makes a good point.
Spoiler:
You do, however, avoid my main point: that in a realistic setting a Muse is probably not the first choice for psychological healing, regardless of whether Academics:Psychology is an applicable skill or not.

Page 220 simply says that software can assist an asynch in avoiding the negative effects of morph fever. This can be read two ways:
1. The software "heals" the stress an asynch receives each month from the syndrome. This effectively negates Morph Fever, taking it out of the game and making the morph choice of an asynch a non-issue.
2. The software assists the morph in the willpower test to avoid taking stress, allowing Morph Fever to have a potential effect as described.
You can rules-lawyer and denigrate a different reading as a 'house rule' if you like, but it comes down to rules vs. setting. I like to respect the setting as presented; I'd rather not turn EP into a boardgame.

From a setting perspective: if a muse *can* use Academics:Psychology to heal mental stress, it sets a bad precedent for most professions a human might have. That functionality completely obviates psychotherapy as a job for humans. My reading of page 215 tells me that there are already specialized therapeutic AI. Should that functionality also be instantly and automatically available, full time, via a person's muse? I say it shouldn't. Were that the case, there would be no human therapists, accountants, or any other profession associated with a knowledge skill a muse might possess.

From a gameplay perspective it's worse. For a qualified therapist--a "skilled professional"--healing a point of stress takes an hour; healing a trauma takes 8 hours. If a muse can provide instant access to effective therapy around the clock, it trivializes the threat of the entire Stress/Disorder mechanic. If you're going to be finicky about RAW:
* The average transhuman completely avoids taking stress 45% of the time regardless of the stimulus.
* Of those stressful situations listed, only half of them are capable of inducing a trauma in the average transhuman. And of those, only half have a better than 50% chance of inducing a trauma.
* Each Trauma and its associated Stress points can be eliminated in 15-18 hours of therapy. Depending on how you run the therapy task in the game, a Muse can eliminate a trauma in 15 to 48 hours, possibly concurrent with the character's other activities.
* In the worst-case scenario the average transhuman could be cured of a disorder and its 4 associated traumas in slightly over 4 days of game time simply by talking to their muse; again, possibly concurrent with the character's other activities.

On the other hand, I think the Muse's psychology skill works much better as a feature that allows the muse to assist the character with stress tests, rather than turning it into an automatic full-time therapist.
There is room in the rules for software to be whatever you want it to be. For me, the game works better if it's not a panacea for all a character's ills and doesn't make humanity redundant.

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

OneTrikPony
Quote:tl;dr: A Muse doesn't
Quote:
tl;dr: A Muse doesn't need to be clever, it is an expert system designed and evolved for the express purpose of giving its transhuman effective therapy.
I am certain that that is not the "express purpose" of a muse.

Mea Culpa: My mode of speech can make others feel uninvited to argue or participate. This is the EXACT opposite of what I intend when I post.

thezombiekat
Considering it has 2 skills
Considering it has 2 skills at 60, its dual primary purposes are keeping your finances and your mind straight.
Baalbamoth
I worked in a mental hospital for four years and
worked as a therapist for three years after that... I can tell you, there is no such thing as effective psychotherapy; any time it is successful, it owes its success much more to psychopharmacology and human services a la Maslow's hierarchy of needs. Now, a court order that forces you to give your muse the authority to forcibly medicate you via narcoalgorithm could be pretty interesting...
"what do I want? The usual — hundreds of grandchildren, complete dominion over the known worlds, and the pleasure of hearing that all my enemies have died in highly improbable accidents that cannot be connected to me."