
Thought Experiment: Powerful ALIs?

DivineWrath
I was toying with an idea regarding limited AIs, specifically how big a bonus to dice rolls they might get. First off, limited AIs may have up to 40 ranks in active skills and 90 in knowledge skills (though I've never seen any of them with knowledge skills above 80). However, they can get complementary skill bonuses, right? Does this mean their active skill rolls can go up to 70 because of complementary skill bonuses (61 or higher gives +30)? Can this stack across multiple skills, so two +30 complementary skills would give a roll target of 100? I see nothing in the book that explicitly says yes or no.

Are there any complementary skills for combat skills? I'm inclined to say no, as that would quickly make limited AIs a dangerous threat beyond what their programming should allow. They are supposed to be limited, after all; being limited is what's supposed to make them "safe" as opposed to other types of AIs. I'm inclined to say the same for any opposed test, such as Infosec.

However, things like programming blueprints aren't opposed tests. In fact, the core rulebook (p. 284-285) suggests that Academics: Nanotechnology is an appropriate complementary skill. I'm wondering if I could program a limited AI with 40 in Programming and two complementary skills, then tell it to make blueprints.
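To put numbers on the two readings, here's a rough Python sketch. Only the "61 or higher gives +30" tier comes from my reading of the rules; the +10/+20 tiers below are my guess, so treat them as placeholders:

[code]
def complementary_bonus(skill_rank: int) -> int:
    """Bonus from one complementary skill, per the tiers assumed above."""
    if skill_rank >= 61:
        return 30   # the tier cited in this thread
    if skill_rank >= 31:
        return 20   # assumed middle tier
    if skill_rank >= 1:
        return 10   # assumed bottom tier
    return 0

def effective_target(active_skill: int, complements: list[int],
                     stacking: bool) -> int:
    """Target number for a d100 roll-under test."""
    bonuses = [complementary_bonus(r) for r in complements]
    bonus = sum(bonuses) if stacking else max(bonuses, default=0)
    return active_skill + bonus

# An ALI capped at 40 in an active skill, with two 61+ knowledge skills:
print(effective_target(40, [80, 80], stacking=False))  # 70  - no-stacking reading
print(effective_target(40, [80, 80], stacking=True))   # 100 - stacking reading
[/code]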
UnitOmega
Well, you can tell it to make blueprints, but you'd have to tell it to make a blueprint you already know how to make (or that it already knows how to make, I guess). ALIs can't really improvise or create on their own, so telling them to make something "new" doesn't work. You could arm one with a blueprint and tell it to maximize production of the item and diffusion of the blueprint - that's how you get Wild Artificials and "paperclip maximizers", a legit risk in the ALI department.

As for complementary skills, I'd say to apply them as narrowly as needed/desired. The skills truly need to overlap. For example, I have a player in my current game with Profession: Manufactured Soldier. I told him this will not complement all combat rolls, but should a situation arise that actually resembles the drills he trained in, his complement should apply. Most AIs don't have great skills to complement, but say the Vaettir AI has "Current Mission" as an Interest at 80. If their current mission is to shoot some people, I'd say it applies. But this creates a whole new plot issue, namely how shifty it is that the Titanians are sending disposable AI assets to shoot people. A sketch of that "apply narrowly" ruling follows below.
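If you wanted to mechanize the ruling, here's a hedged sketch: the complement only fires when its descriptors overlap the situation's. The skill names and tags are made up for illustration, not from the book.

[code]
def complement_applies(skill_tags: set[str], situation_tags: set[str]) -> bool:
    # the skills truly need to overlap - require at least one shared tag
    return bool(skill_tags & situation_tags)

soldier = {"drills", "squad tactics", "kinetic weapons"}
print(complement_applies(soldier, {"firefight", "squad tactics"}))  # True
print(complement_applies(soldier, {"knife duel"}))                  # False
[/code]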
H-Rep: An EP Homebrew Blog http://ephrep.blogspot.com/
DivineWrath
I think the risk of a "paperclip maximizer" is low to nil with modern EP AIs. Most AIs include code that makes them able to empathize with transhumans, obey local laws, and avoid harming people in general. Even Muses can disobey their owners if the owners appear to be acting strongly against their own best interests or are in some sort of death spiral, and can even report their owners to the authorities to get much needed help. Exceptions can exist, but I think the code that makes AIs safe for transhumans is open source and well made. There should be few excuses for not having such code in AIs after the Fall.

As for improvising and creativity: so you are saying that your average AI wouldn't have the knowledge needed to make blueprints, and wouldn't have the ability to improvise and be creative enough to fill in the gaps? How would you feel about an AI needing to make a dice roll to determine if it has enough information? It can make a Research test, but it's normally capped at 40 for active skills. It'd be smart of it to take extra time to get a big bonus. However, a knowledge skill like Academics: Firearms or Academics: Kinetic Weapons could be much higher in ranks and sufficiently narrow to provide useful information. Maybe in this case the Academics skill could use Research as a complementary skill (assuming the AI spends the time to do the research).
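Putting rough numbers on that approach: in the sketch below, the extra-time modifier (+10 per increment of extra time, capped at +30) is my assumption about the rules, so swap in your table's ruling, and the +20/+10 complement tiers are guesses as before.

[code]
ACTIVE_CAP, KNOWLEDGE_CAP = 40, 90  # the ALI caps from the first post

def blueprint_research_target(academics: int, research: int,
                              extra_increments: int) -> int:
    # Roll the high knowledge skill; Research complements it.
    # Tiers: 61+ = +30 (cited), 31-60 = +20 and 1-30 = +10 (assumed).
    research = min(research, ACTIVE_CAP)
    comp = 30 if research >= 61 else 20 if research >= 31 else 10 if research else 0
    time_bonus = min(10 * extra_increments, 30)  # assumed extra-time modifier
    return min(academics, KNOWLEDGE_CAP) + comp + time_bonus

# Academics: Kinetic Weapons 80, Research 40, with and without extra time:
print(blueprint_research_target(80, 40, extra_increments=0))  # 100
print(blueprint_research_target(80, 40, extra_increments=1))  # 110 - auto-success territory
[/code]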
UnitOmega
Well, Transhuman describes making a blueprint for a completely new item as a multi-step process involving a very diverse load of skills. I would say that if an ALI had all the appropriate skills (mostly knowledge) to make a new blueprint for an item, you might give it some processor cycles and get it to try - but as you say, most ALIs are intrinsically limited to work with transhumans, so you'd probably have to give them very specific instructions. If you tell one to "build a better mousetrap", the AI will either have a freak-out (ALIs have LUC too) or, probably more likely in this day and age, it will to the best of its ability make something which fits the criteria of "better". If you don't clarify, though, your resultant mousetrap will probably not be that impressive. "Better" just means "more good" than the previous iteration.

On the player-facing side, letting an AI roll Research to see if it can gather sufficient data, or roll Language to see if it can sufficiently understand your intent (or Kinesics too, but that's not a great roll for most AIs), is solid. If you have narrowly focused an AI, having it roll Profession: Pest Control could apply to see if the AI has the in-built knowledge to work with. On the GM's side, if you're trying to define a plot point or setting element, just saying it works is fine too.

Of course, since ALIs can encounter stressful situations and take SV, repeated failures while trying to execute ambiguous commands without clarifying input are a great way to get a deranged AI who might accidentally cross a wire into "paperclip maximizer" territory. Or, y'know, just as likely, someone tries this plan with an older model AI or with an experimental AI they haven't worked all the bugs out of, which is also how you get Wild Artificials. So if you want to maximize machines designing machines, use rigorous testing and specific instructions. And you need to be prepared for when your blueprint AIs accumulate too much experience and processing power and start to emerge as generalized. Either pith them or let them start a union.
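For flavor, here's a toy simulation of that failure spiral. Every number in it is an assumption for illustration, not a rules citation: 1d10/2 SV per failed attempt at the ambiguous task, and a trauma whenever one hit meets a LUC/5 threshold.

[code]
import random

def average_attempts_before_break(target: int, luc: int = 20,
                                  trials: int = 10_000) -> float:
    # "break" = the task finally succeeds, or the ALI takes a trauma
    threshold = luc // 5                           # assumed trauma threshold
    total = 0
    for _ in range(trials):
        attempts = 0
        while True:
            attempts += 1
            if random.randint(1, 100) <= target:   # d100 roll-under success
                break
            sv = (random.randint(1, 10) + 1) // 2  # assumed 1d10/2 SV per failure
            if sv >= threshold:                    # assumed trauma trigger
                break
        total += attempts
    return total / trials

print(average_attempts_before_break(target=40))  # ambiguous commands end badly, fast
[/code]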
DivineWrath
Next thing you'll tell me is that logical paradoxes will cause ALIs to explode. I'm finding it hard to take your stance on what will cause AIs stress seriously. EP software is supposed to be able to process data the same way humans can, and AIs can ignore some commands (like those that would bring harm to their user), so I would think they would be able to recognize difficult or impossible commands. At the very least, I think they would ask for clarification or more input instead of freaking out.

I also wasn't planning on making them create original blueprints. I was thinking more of creating blueprints for existing goods so I don't need to spend credits or rep to get them, or of checking whether any of the free blueprints I can find are any good. Maybe make some variations on existing goods, like a sniper rifle with longer range, a faster vehicle, or armor that offers more protection. I suppose whether or not an ALI can create original items is a topic in and of itself.

A transhuman can default on stuff they don't know, while an ALI can't. However, an ALI can default to field skills, so it might be wise to cast a wide net when it comes to the knowledge skills an ALI should have. ALIs aren't supposed to be able to learn, but they have memories, which offers a limited form of adaptation. A Muse can come to know its owner very well, so I guess you could say they learn in the ways they are programmed to learn. I think it's best to think of it as Academics: Specific Person and Interests: Specific Person. In the same way that a Muse can learn about its owner through thousands of questions, observation, and interaction over a lifetime, maybe a blueprint ALI could learn about the devices it is supposed to make. It could read textbooks and wiki articles with the same or better proficiency that modern programs show when turning pictures of written words into text.
UnitOmega
Well, that's a great way to read the first half of my sentence and stop there; always nice to know somebody's actually reading what you type. After all, I did say
Quote:
or probably [b]more likely[/b] in this day and age it will to the best of its ability make something which fits the criteria of better.
But I would say that since people find repeated failures stressful, an ALI could easily be confronted with its limitations and might take stress. That's a specific case, though, where a user is clearly doing it wrong: just letting the AI sit on an ambiguous command and try to fulfil it while the dice don't go its way. For best results, outline what you want.

Optimization tasks are probably well suited to appropriate expert systems. You give them the specs on the object and the parameters for improvement, and they can use their knowledge and information gathering to try to output the ordered results. They might not always get a result, but they can probably try repeatedly no problem. In normal research setups you probably include a few checks and balances and authorization points to make sure your AI(s) don't go overboard with their work and go full paperclip (or blow your research budget), but otherwise, this is probably a task you might make an ALI do. I'll reiterate that running such a project on a large enough scale is likely to make an ALI emerge as an AGI - IIRC, the Consortium stock exchange basically keeps being on the cusp of proper sentience because of all the numbers and data it handles, but they keep pithing it down.

On the subject of Muses, my usual way of thinking about them is that they are a complex series of sliders (an astonishing, amazing series of sliders - let's not underplay how incredible the Muse is). All the responses the Muse gives are effectively pre-programmed options that it "learns" to use based on your conscious or subconscious input. Technically the Muse doesn't "learn" in the classic sense; its software just determines (such as via its Psychology knowledge) which of its pre-programmed outputs is appropriate for the user. Since this system works for virtually all of transhumanity, it's an amazing piece of work.
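A minimal sketch of that slider model, assuming canned responses with per-user weights nudged by feedback; the response names, starting weights, and update rule are all illustrative, nothing from the book:

[code]
class MuseResponder:
    def __init__(self, options: list[str]):
        # one "slider" per pre-programmed response
        self.weights = {opt: 0.5 for opt in options}

    def respond(self) -> str:
        # pick the pre-programmed output that currently best fits the user
        return max(self.weights, key=self.weights.get)

    def feedback(self, option: str, liked: bool, rate: float = 0.1):
        # nudge one slider; this is the only "learning" the Muse does
        delta = rate if liked else -rate
        self.weights[option] = min(1.0, max(0.0, self.weights[option] + delta))

muse = MuseResponder(["gentle reminder", "blunt warning", "silent note to self"])
muse.feedback("blunt warning", liked=False)
muse.feedback("gentle reminder", liked=True)
print(muse.respond())  # "gentle reminder"
[/code]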