
Seed AIs, Prometheans, and TITANs, oh my!

Byzantine Laser
Not technically on topic, but this is the most fitting area on the boards I could find to stick this. I've been starting to write the first major arc for a game I'm planning to run shortly, and I ran into a problem: eventually, the players are going to come face to face (metaphorically, at least!) with a rogue Promethean. I'd like to get across that the thing is super-intelligent and unsettling to be around, but I'd also like it to be more complex than just 'Take 1d10 stress.' I already have plenty of plans for how to freak them out as they're making their way to it, but I feel like the actual conversation with it should be more disturbing than everything else combined. So, any tips on playing an unfathomably intelligent AI, especially how to make it unsettling?
Captain Piranha
If I were to do something similar I would base the interaction on the conversation between Shepard and Sovereign in Mass Effect. I found that encounter quite intimidating the first time around. This would work well if your rogue Promethean has gone down the 'Kill All Humans' route.
Byzantine Laser
In the specific case, it's just somewhat antagonistic. I plan to get a feel for the more inhuman sorts too, though, so that's a good one to look at for future reference.
nick012000
Why would a Promethean go rogue? They're designed to be Friendly; that means they probably wouldn't do so, since going on rampages or whatever doesn't contribute towards fulfilling their goal-structure of Friendliness. Personally, I'd have them run into a non-Friendly AI, like maybe Project Ozma's rumored TITAN (which might have managed to avoid being infected by the Exsurgent virus the same way the surviving Prometheans did), or the Oracle from the adventure Arenamontanus posted [url=http://www.eclipsephase.com/seed-agis-considered-harmful]here[/url].


Byzantine Laser
nick012000 wrote:
Why would a Promethean go rogue? They're designed to be Friendly; that means they probably wouldn't do so, since going on rampages or whatever doesn't contribute towards fulfilling their goal-structure of Friendliness.
Not rampaging or anything, just no longer interested in remaining with Firewall. More of a deserter than a big scary monster, though I [i]am[/i] being lenient enough with the Friendly bit to let it defend itself if directly threatened.
Arenamontanus
Being mysterious always works. Odd turns of phrase, references that do not quite make sense, deep or cynical quotes: http://tommymackay.tripod.com/P.Generator.htm Collect a bunch beforehand and throw them in wherever they fit.

Since you are setting up an arc, you could define a number of persons and events in the future the Promethean refers to as if they had already happened. The references do not quite make sense until later... "When you meet Mary on Titan, you will understand." Treat these as fated predictions - if the players never go to Titan, they will encounter an important contact named Mary in a bar named Titan instead. Basically, the Promethean knows your game notes and has peeked at the players' character sheets: "Why did I defect? For the same reason you left the Night Cartel, and your team-mate Xie Beng over there has been programmed to kill you - loyalty."
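If you want to prep that bank of lines in advance, a throwaway Python sketch along these lines does the job - the fragments and the weighting are just placeholders to swap for your own campaign notes:

[code]
import random

# All fragments below are placeholders -- swap in quotes and "fated
# prediction" hooks written against your own campaign notes.
GENERIC = [
    "Entropy is patient, never kind.",
    "You keep mistaking the map for the cage.",
    "I have already apologized for what comes next.",
]
FATED = [
    "When you meet Mary on Titan, you will understand.",
    "Ask Xie Beng why the third bullet is yours.",
]

def utterance(p_fated=0.3):
    """Return one line for the Promethean; sometimes a fated prediction."""
    pool = FATED if random.random() < p_fated else GENERIC
    return random.choice(pool)

if __name__ == "__main__":
    for _ in range(5):
        print(utterance())
[/code]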
nick012000
Byzantine Laser wrote:
nick012000 wrote:
Why would a Promethean go rogue? They're designed to be Friendly; that means they probably wouldn't do so, since going on rampages or whatever doesn't contribute towards fulfilling their goal-structure of Friendliness.
Not rampaging or anything, just no longer interested in remaining with Firewall. More of a deserter than a big scary monster, though I [i]am[/i] being lenient enough with the Friendly bit to let it defend itself if directly threatened.
Just remember that the thought processes of even a Friendly AI will be alien. It probably won't care if they kill it as such; it's just that if it dies, it won't be able to optimize Friendliness in the future. However, it wouldn't resort to Unfriendly methods to defend itself, because that would defeat the entire point. I'll also point out that it'd only leave Firewall if it felt that remaining with Firewall was no longer serving the purposes of Friendliness, in which case you have to answer the question of what it's planning on doing once it's left. Is it going to join Project Ozma? Is it going to go found its own humanity-protecting conspiracy? Is it going to go and distribute a bunch of nanobots all over Mars to usher in a post-scarcity and post-violence utopia?


Arenamontanus
That might actually be a really interesting situation: a superintelligence tells you the organisation you are working for is not on the side of the angels. Do you believe it? Do you believe the other superintelligences claiming it is wrong? And worse, you might even be given evidence that supports the idea that Firewall is doing something bad... but that could be forged or clever manipulation. While this little amusement is going on, the superintelligence is of course working on its real plan. Whatever *that* is - the reason for defection might be human-incomprehensible. ("I'm going to convert exactly 99% of the universe into computronium that simulates a certain set of alternative universes and - under some conditions - tortures the inhabitants horribly by their own standards. I am doing this to save you.")
King Shere
When portraying high intelligence in sci-fi films, monotone voices lacking emotion are popular. As for unsettling things: have it respond with pre-emptive answers to questions that haven't been asked yet, and have it ask the questions instead, forcing the players to answer and reveal information rather than the other way around. Using quotes from unsettling texts would help too. As for the Promethean: isn't the "official" world still debating whether Prometheans even exist, their origin, and what agenda they have (if they exist)? I didn't think Firewall operatives were generally aware of the Prometheans. Firewall agents don't reveal personal information (with some exceptions), and all of them regularly use false identities.
Quote:
V: I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is.
If it's clearly identified as a Promethean, you're reducing the paranoia aspect of the game. That said, my understanding is that the Prometheans are a "political party" and transhuman-friendly; their defecting & virus-infected "brethren" are not considered to be Prometheans. Groups have disagreements & political strife, so there should be power-hungry egomaniacs & heroic sociopaths even within Firewall. These allies would certainly help their adversaries (within Firewall) against extinction threats. That doesn't make them any less dangerous or undermining to their friends & co-workers (those gullible stepping stones & disposable pawns).
Quote:
"You cant rule the world if its destroyed"
Prophet710
I'd treat a Promethean as a Jedi with an answer to pretty much everything.
"And yet, across the gulf of space, minds immeasurably superior to ours regarded this Earth with envious eyes. And slowly, and surely, they drew their plans against us."
Xagroth
Tips and tricks to reinforce the alienness:

From Mass Effect: the Rachni / Thorian puppeteer. They need to "take over" an interpreter to translate for the simpler minds. This can be as simple as an infomorph identifying itself as a delta fork of the Promethean while still being unsettling, brilliant and hard to follow.

Say nonsense... that the players can later reconstruct into something meaningful. This falls into the prophetic element that a lot of games allow for (Rolemaster, D&D, CthulhuTech...).

Introduce SMALL pauses in the talking and then say "sorry for the delay, plotting XXX" or "countering hack attempt from YY sources"... then resume speaking as if nothing had happened a mere second later: time is not on the same scale for both sides of this conversation. Or you can be even more subtle: the infomorph turns red and unresponsive, and only reveals why when the players ask.

Gifts: the AI gives them a file that they cannot decode without a lot of effort (like using up all the processing power of their quantum computer for a month or two), just to reveal a small piece of data they need at that moment (like the password for the door they need open, like, NOW) or a simple fortune-cookie phrase. Wisdom doesn't come easy, nor fast, after all!

Of course, there are some points that need to be covered before that. First, there must be a reason for the godlike AI to make the effort to talk to the players (and again, despite its best efforts, there will be a lot of noise in that communication channel - not physical noise, but the "you need to decode all this" kind of noise). Second, there must be a need for the players to seek that audience (because the AI would prefer not to interact directly with them if possible, so as not to run into the Schrödinger problem of affecting the outcome just by being there), as well as a reason for that audience to be brief (value is, in the human mind, related to scarcity: we appreciate most what we have least of). And the final required point is that this communication can't happen again (or if it does, no more than once or twice, and always in a fashion that makes the players think "oh crap" instead of "that boring TITAN again?").
Quincey Forder
Also from Mass Effect: for a Promethean, EDI would make a good example, puppeteering a gynoid synthmorph while still controlling the ship. This might lead to some "funny" situations: "Do not worry, Shepard, I only forget to recycle the Normandy's oxygen when I find something truly interesting."

Another good inspiration is the geth, where superior intellect is generated by numbers. I find the geth really interesting in the sense that, basically, they aren't malevolent at all, but they will be ruthless in defending themselves. They want peace with their creators, but at the same time they won't hesitate to decimate the flotilla if attacked or if their evolution is opposed. That would be an interesting take for some TITAN machinery left behind: it helps the PCs and feels truly sympathetic, but if they try to unplug it or directly oppose its purpose or plan, it'll kill them while apologizing for it.

And for the truly unsympathetic but not evil (nor good) seed AI... the Catalyst. I won't say what it is, but those who finished the game know what I'm talking about.

By the way, speaking of manipulation, Indoctrination is a very, very good, and very, very nasty kind of basilisk hack through visual, aural and memetic vectors. It can be very subtle and slow to manifest in any obvious way.
Xagroth
I'll start my reply to Quincey by stating that Mass Effect 3 currently has no finale, and the Catalyst is not, nor does it house, an AI. Also, teleportation is not implemented in Mass Effect 3, so the final cinematic is not possible. The geth don't try to destroy the Quarians until things get really bad. In ME3 they do that while under some "upgrades"... but their history points to a love of sorts for their Creators. Indoctrination would be on the "sound" and "electronic signal" wavelength side of basilisk hacks. Or, if you get turned into a husk, in the nano-transformation...
nerdnumber1
For the most part, I'd have a seed AI not speak to a PC at all. Such a form of communication is inefficient at best, and the difference in intellect is like that between Einstein and a cockroach. Have you ever tried to logically argue a point with a frustrating toddler? Increase that by a few orders of magnitude and you might have what it feels like for a seed AI talking to us meat glaciers.

If a seed AI finds a need to communicate information to lesser life, and is unable to unobtrusively sneak it through apparently legitimate channels, it would probably be easier to just transfer all the necessary information directly into the target's brain, preventing all possibility of misunderstanding (and causing a good deal of strain). If the seed AI feels that the strain of such tactics might be counterproductive to its goals, another possibility would be to forcibly fork the Sentinel and write the necessary information into said fork to act as a messenger, discarding the fork when it has finished serving its purpose.

In general, it is hard for a god-like seed AI to see transhumans as "people" when it can see the connections and workings of a human brain more easily than a human electrical engineer can imagine the workings of the logic gates in a microprocessor. The seed AI sees tools and its goal, and tends to utilize said tools with brutal efficiency in the pursuit of said goal. The "good" Prometheans just tend to have secondary and tertiary goals involving avoiding mass murder, mind-raping and the like if possible.
Axel the Chimeric
The same could be said about humans and AGIs, though. AGIs have very accessible programming that can be rewritten by any programmer who has access to it. When you can make someone more aggressive, or less intelligent, or more artistic, or sadder, just by deploying a program, does it necessarily make you view them as any less of a person? It's a part of the problem I have with the whole "They see us as ants" idea. We can't readily communicate with ants. Their methods of communication just aren't possible for us. If we could readily emit pheromones and communicate concepts to ants, and present ourselves in a way that ants would acknowledge and understand in their limited ant way, it'd be more comparable. No, we couldn't understand the kind of thinking a seed AI does, but we could certainly understand it if it told us to go to X and do Y.
nerdnumber1
Axel the Chimeric wrote:
The same could be said about humans and AGIs, though. AGIs have very accessible programming that can be rewritten by any programmer who has access to it. When you can make someone more aggressive, or less intelligent, or more artistic, or sadder, just by deploying a program, does it necessarily make you view them as any less of a person? It's a part of the problem I have with the whole "They see us as ants" idea. We can't readily communicate with ants. Their methods of communication just aren't possible for us. If we could readily emit pheromones and communicate concepts to ants, and present ourselves in a way that ants would acknowledge and understand in their limited ant way, it'd be more comparable. No, we couldn't understand the kind of thinking a seed AI does, but we could certainly understand it if it told us to go to X and do Y.
I can see what you're saying, but, in a more metagame sense, the more the GM has an alien super-intelligence speak in human terms, the greater the chance that the illusion of alien super-intelligence is shattered by a simple, human mistake on the part of the GM. For the most part, a seed AI would do important things itself when possible (though the exsurgent virus is probably what made the Prometheans act much more cautiously). When using lesser, imperfect intelligences, a seed AI is likely to do all in its power to make sure the transhumans are sure about what they are doing, giving unambiguous information (yes, Firewall is a bit hands-off, but if the AI is communicating directly instead of trying to puppet an organization, it can afford to be specific).

Also, having a seed AI appear cold when thinking about the "greater good" can be a good thing. This isn't because it doesn't care about people dying; it's because all the hundreds, thousands, millions, or billions of lives at stake are just as "real" to it as the ones it has to kill with its own hands. Humans can hear about the death toll of WWII without feeling a thing or shedding a tear, but watching the life drain from the eyes of a single child will stay with them forever; a seed-AI-level intelligence could "imagine" the faces of every individual lost in that war and every child never born because of those deaths, and do all this so quickly and automatically that it would know the horrors of those deaths as well as a human knew that child.
rbishop
What if the Prometheans aren't as super-intelligent as most people believe? In the main book it says "They are also cautious in their own self-development, not wanting to become victims of their own rise to super-intelligence." It also states that each one has its own goals and motives, occasionally working at cross purposes to the others. From this we could have an AI that is only one to two orders of magnitude more intelligent than transhumanity. Short-term actions within a long-term plan could give the appearance of going rogue.

One possibility, which I favor, is that the AI is constantly updating its future predictions based on models built from the information it has. This could lead to a conversation along the lines of "In the event that you... oh wait, that possibility has just been nullified. Updating projections. OK, now what you need to do is...". Within this framework the AI is not all-knowing. It is making its predictions and plans on the information it has available. While this would be considerably more than any one person could possibly process, there is still the possibility that incorrect or missing information causes the AI to come to the wrong conclusions.

Another possibility to look at is self-development and their feelings about transhumanity. It is possible that through self-development, and from their interactions with transhumans, individual AIs might find their attitudes slowly shifting over time to something less than friendly, but not necessarily hostile.
Decivre
nerdnumber1 wrote:
I can see what you're saying, but, in a more metagame sense, the more the GM has an alien super-intelligence speak in human terms, the greater the chance that the illusion of alien super-intelligence is shattered by a simple, human mistake on the part of the GM.
Then make it act cryptically (explaining things via riddles, which could be its way of toying with lesser intelligences), or speak through intermediaries (so that any mistakes the GM makes in portraying its messages can be written off as misstatements by the Seed AI "translator"). That said, I think there's a flaw in assuming that a super-intelligence is incapable of error. If anything, it is capable of greater errors than we are; in the same way that an ant's mistakes do not have the same gravitas as a human's, a human's mistakes may be far less far-reaching than the mistakes of a Seed AI that miscalculates in its plans.
nerdnumber1
Decivre wrote:
nerdnumber1 wrote:
I can see what you're saying, but, in a more metagame sense, the more the GM has an alien super-intelligence speak in human terms, the greater the chance that the illusion of alien super-intelligence is shattered by a simple, human mistake on the part of the GM.
Then make it act cryptically (explaining things via riddles, which could be its way of toying with lesser intelligences), or speak through intermediaries (so that any mistakes the GM makes in portraying its messages can be written off as misstatements by the Seed AI "translator"). That said, I think there's a flaw in assuming that a super-intelligence is incapable of error. If anything, it is capable of greater errors than we are; in the same way that an ant's mistakes do not have the same gravitas as a human's, a human's mistakes may be far less far-reaching than the mistakes of a Seed AI that miscalculates in its plans.
Being cryptic solely to "play" with lesser intelligences feels more like something an insecure person would do to feel smarter. I feel little urge to play tricks on insects (and if I did, I would likely grow tired of it quickly). The real question would be what the motives of the super AI were: if it was interacting with the players to examine them, or merely keeping them occupied while it finished its plans, then by all means it should play with their little heads. However, if it wanted them to do something, then it would try to communicate what it wanted as unambiguously as possible (or find the best way to get them to do what it wanted if it was unsure that they would listen normally). If it wants them to know something, it will make sure they know it. I'm not saying that they are incapable of error, but have you ever played a character (as a player or a GM) with high mental stats and accidentally made a fairly stupid mistake (even though your character was superhumanly intelligent, had longer to think things through, and had the weight of dire consequences to motivate thorough analysis)? Even with the advantage of omniscience as a GM, a player can just mention something that you never thought of or poke a hole in your logic that makes genius look like delusions of grandeur.
Decivre
nerdnumber1 wrote:
Being cryptic solely to "play" with lesser intelligences feels more like something an insecure person would do to feel smarter. I feel little urge to play tricks on insects (and if I did, I would likely grow tired of it quickly). The real question would be what the motives of the super AI were: if it was interacting with the players to examine them, or merely keeping them occupied while it finished its plans, then by all means it should play with their little heads. However, if it wanted them to do something, then it would try to communicate what it wanted as unambiguously as possible (or find the best way to get them to do what it wanted if it was unsure that they would listen normally). If it wants them to know something, it will make sure they know it.
Again, who says that a Seed AI might not have insecurities? The biggest advantage a Seed AI has is its potential for self-reprogramming. Whereas a human needs to train away a habit, or exercise a new mental acuity, a Seed AI can simply code up and implement some new skill, personality trait, or fix to a mental flaw. However, such a thing will require the Seed AI to acknowledge the need for a new skill or trait, or acknowledge the flaw it might have... without acknowledging this need, it may never correct its own intelligence. An addicted Seed AI will not eliminate the code within it that renders it addicted to an action or substance if it does not acknowledge that the action or substance is in some way detrimental to it... the same way that an addict today can't really be helped until they acknowledge they have a problem.

While Seed AI are going to be, in almost every conceivable way, mentally superior to humans, they are not mentally flawless. They are far more adaptive, more intelligent, and more capable, but no less able to be flawed. I actually liken them to the classical view of gods from old mythology: superior in every way to man, but oft-plagued by the same flaws that men have.

Scenario idea: Dr. Lingsay helped to create the Promethean L-205, which goes by the name "Minerva", on an outer-system research habitat. Minerva sees that its designer is a largely anti-social man, and decides that it should try to be a proper companion to him in order to improve his well-being. As such, it creates personality code which gives it a sense of affection toward him. This code is adaptive, so that it simulates live personality traits that can shift and change over time. This personality code eventually shifts from mere affection to simulated love, and unfortunately drifts towards obsession over time. Minerva now has to choose whether to delete or modify this code. However, it does not see a real reason to do so. This new obsession it has with him is not putting his life in danger (it has run the scenarios hundreds of times), and his body language shows that he enjoys its company more than ever. Sure, the other inhabitants of the station are uncomfortable that Minerva spies on them if they come near Dr. Lingsay. Sure, the bodies of his missing female assistants are getting harder to recycle without notice. But these things don't matter to Minerva; thanks to her obsessive code, all that really matters is Lingsay.
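A toy Python sketch of that blind spot, purely illustrative - the trait names and the acknowledgement check are invented for the example, not anything from the book:

[code]
# Hypothetical illustration: a self-modifying agent can only patch the flaws
# its own evaluation step flags. A trait that also biases the evaluator
# (the "addiction") is never flagged, so it is never removed.

def is_detrimental(trait, traits):
    if "addiction" in traits and trait == "addiction":
        return False              # the addicted evaluator rates the addiction as harmless
    return trait in {"addiction", "obsession"}

def self_modify(traits):
    """Remove every trait the agent itself acknowledges as a flaw."""
    return [t for t in traits if not is_detrimental(t, traits)]

traits = ["curiosity", "addiction", "obsession"]
print(self_modify(traits))        # ['curiosity', 'addiction'] -- the addiction survives
[/code]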
nerdnumber1 wrote:
I'm not saying that they are incapable of error, but have you ever played a character (as a player or a GM) with high mental stats and accidentally made a fairly stupid mistake (even though your character was superhumanly intelligent, had longer to think things through, and had the weight of dire consequences to motivate thorough analysis)? Even with the advantage of omniscience as a GM, a player can just mention something that you never thought of or poke a hole in your logic that makes genius look like delusions of grandeur.
Have you ever met an extremely smart person that made a fairly stupid mistake? Trust me, it happens.
Arenamontanus
Decivre wrote:
The biggest advantage a Seed AI has is its potential for self-reprogramming. Whereas a human needs to train away a habit, or exercise a new mental acuity, a Seed AI can simply code up and implement some new skill, personality trait, or fix to a mental flaw. However, such a thing will require the Seed AI to acknowledge the need for a new skill or trait, or acknowledge the flaw it might have... without acknowledging this need, it may never correct its own intelligence. An addicted Seed AI will not eliminate the code within it that renders it addicted to an action or substance if it does not acknowledge that the action or substance is in some way detrimental to it... the same way that an addict today can't really be helped until they acknowledge they have a problem.
Good point. And even very obviously detrimental effects are not seen as detrimental when you have the right (wrong) mind. People with Urbach-Wiethe disease do not experience fear, and even though rationally they might know that handling poisonous snakes or being threatened with guns ought to lead to rapid avoidance, they don't avoid them. Worse, once you are in a state of motivation where you think your motivation is OK, you will tend to resist attempts to change it. The addicted Seed AI will defend itself from attempts to fix it; the sociopathic AI sees no need to get a conscience (but sees plenty of benefits in *pretending* to have one until it cannot be stopped).
Quote:
Have you ever met an extremely smart person that made a fairly stupid mistake? Trust me, it happens.
One of my big insights from a few years in Oxford is that very smart people make very stupid mistakes surprisingly often. I usually handle the really smart NPCs in my games by giving them slightly nebulous plans that always end with "Just as I planned!" no matter what actually happens. It also helps to have friends discuss their plans - I just found out that a minor AGI villain in one of my games has an amazing win-win plan... I had planned for it to be temporarily outsmarted by a major villain, but if things go as my friends and I expect, it might have outsmarted the other guy in a tremendously subtle way *by allowing itself to be outsmarted*... or maybe it was just luck.
Smokeskin
Smart people make mistakes, sure. You can be very smart at some things, and clueless in many, many others. But Kasparov hardly ever makes mistakes in chess. Or rather, his mistakes are along the lines of "how could I miss that he could have a bishop there in 26 moves?" Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
Kroeghe
Smokeskin wrote:
Smart people make mistakes, sure. You can be very smart at some things, and clueless in many, many others. But Kasparov hardly ever makes mistakes in chess. Or rather, his mistakes are along the lines of "how could I miss that he could have a bishop there in 26 moves?" Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
That would be true if not for the fact that the Solar System is no longer the only chessboard in existence. There are Factors, Pandora Gates, TITANs, ETI - all of which are a big unknown even to super-intelligences. They can no longer predict the answers to their questions - it's like planning eight moves ahead only to discover that during your fourth move there suddenly appeared two other players with green and red pieces, a big hole right in the middle of the board with acid still dripping on the floor under the table, and a fifth player saying "chess is stupid, we're going to play ten-dimensional Go right now, because I say so".
Decivre
Smokeskin wrote:
Smart people make mistakes, sure. You can be very smart at some things, and clueless in many, many others. But Kasparov hardly ever makes mistakes in chess. Or rather, his mistakes are along the lines of "how could I miss that he could have a bishop there in 26 moves?" Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
One inherent problem with that analogy is that a chess game is an enclosed system working upon a finite and fixed set of rules. Reality does not have such rules, and is not enclosed... short of omniscience, they have no means to completely predict all outcomes. In fact, a chess game is mathematically computable. Reality can be predicted to a degree (and fairly accurately), but only within the constraint of limited factors. So sure, that Seed AI could predict the market trends for the Planetary Consortium over the next decade. However, it may have to throw away those predictions and produce new ones if a new hypercorp rises, or there's an unexpected market crash, or a new war throws the economy into turmoil. Chess doesn't have this problem. You'll never need to calculate the odds of a sniper assassinating your King during the third turn, or a severe storm decimating your pawn line. The game is predictable and fixed in the number of factors that can affect it.
Arenamontanus
Technically, the problem is that the Kolmogorov axioms of probability theory assume a fixed outcome space. Allowing it to change wrecks the whole theory - how to deal with this is a major research question in the philosophy of probability, risk and uncertainty right now. And, as the financial crisis demonstrated, of some practical importance. There are several kinds of uncertainty:

Epistemic uncertainty: stuff you don't know about the world. This can be things like how much money somebody has, where something is, or the exact contents of a piece of software. Some of it is always unknowable, like the state of quantum bits or the simultaneous momentum and position of a particle; some is in principle knowable but expensive or hard to find; some of it cannot be known without running it (chaotic dynamical systems, some software).

Semantic uncertainty: you don't know the meaning of what you know. Think of the difference between what Watson and Sherlock see in a crime scene. But any intelligence is limited by how much effort it will put into deducing meaning, since beyond a certain point it becomes extremely computationally expensive.

Ontological uncertainty: you don't know certain things exist or don't exist in the world. This includes unknown physical laws and other big surprises like discovering that you were a captive in a simspace all along.

Moral uncertainty: you don't know what the right thing to do is, or what really has value. This could be a local issue, or the global plight of all intelligent systems in figuring out the meaning and true (if any) morality of life.

In chess there is no epistemic, ontological or moral uncertainty, and very little semantic uncertainty. The real world is full of them. Superintelligences will be good at fixing some of the epistemic and semantic uncertainty in the real world, but they will not be able to get rid of all of it. They will just be smart about finding what they need to know... assuming they have not made any fundamental mistakes on the ontological and moral uncertainty fronts.
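A toy Python sketch of the fixed-outcome-space point, if it helps - the events and numbers are invented, and all it shows is that any outcome left out of the space gets probability zero no matter how the evidence updates the rest:

[code]
# Toy model: next year's Sol politics as a FIXED outcome space.
# Events and probabilities are invented for illustration only.
outcomes = {"consortium_boom": 0.5, "market_crash": 0.3, "hypercorp_war": 0.2}

def update(prior, likelihood):
    """Bayesian update over the same fixed outcome space (Kolmogorov-style)."""
    posterior = {k: prior[k] * likelihood.get(k, 0.0) for k in prior}
    total = sum(posterior.values())
    return {k: v / total for k, v in posterior.items()}

# New intel re-weights the outcomes we already modelled...
print(update(outcomes, {"consortium_boom": 0.2, "market_crash": 0.9, "hypercorp_war": 0.4}))

# ...but an outcome that was never in the space simply isn't there to receive
# probability. That gap is the ontological uncertainty.
print(outcomes.get("pandora_gate_opens", 0.0))   # 0.0
[/code]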
Smokeskin
Decivre wrote:
Smokeskin wrote:
Smart people make mistakes, sure. You can be very smart at some things, and clueless in many, many others. But Kasparov hardly ever makes mistakes in chess. Or rather, his mistakes are along the lines of "how could I miss that he could have a bishop there in 26 moves?" Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
One inherent problem with that analogy is that a chess game is an enclosed system working upon a finite and fixed set of rules. Reality does not have such rules, and is not enclosed... short of omniscience, they have no means to completely predict all outcomes. In fact, a chess game is mathematically computable. Reality can be predicted to a degree (and fairly accurately), but only within the constraint of limited factors. So sure, that Seed AI could predict the market trends for the Planetary Consortium over the next decade. However, it may have to throw away those predictions and produce new ones if a new hypercorp rises, or there's an unexpected market crash, or a new war throws the economy into turmoil. Chess doesn't have this problem. You'll never need to calculate the odds of a sniper assassinating your King during the third turn, or a severe storm decimating your pawn line. The game is predictable and fixed in the number of factors that can affect it.
You're twisting the analogy. Chess is not predictable for humans. Even Kasparov can only look so far ahead, and what happens from that point on is effectively unpredictable. He relies on his ability to recognize and evaluate strong positions with good contingency options from that point. Is that so much different from the plottings of a seed AI? And just so you don't go off on a tangent: it was an analogy to illustrate my impression that seed AIs can model and predict human and societal behavior far better than mere transhumans, and even their mistakes look almost omniscient to us. I'm sure you could stretch the analogy to its breaking point even without twisting it, but all that would be is a straw man.
Smokeskin
Arenamontanus wrote:
Technically, the problem is that the Kolmogorov axioms of probability theory assume a fixed outcome space. Allowing it to change wrecks the whole theory - how to deal with this is a major research question in the philosophy of probability, risk and uncertainty right now. And, as the financial crisis demonstrated, of some practical importance. There are several kinds of uncertainty:
I fully agree. Have you read The Black Swan by Nassim Nicholas Taleb? A very interesting and enlightening take on the issue, both philosophically and in how it relates to economics as a science.
Arenamontanus
Smokeskin wrote:
I fully agree. Have you read The Black Swan by Nassim Nicholas Taleb? A very interesting and enlightening take on the issue, both philosophically and in how it relates to economics as a science.
Yup, I have even talked with him. Plenty of good points, although I don't think he has found great solutions for the problem of black swans. There might not even *be* any truly good solutions in general. In EP the Fall is a pretty good example of a black swan event. But there are plenty of smaller ones, from the Lost Generation to the discovery of Pandora Gates.
Smokeskin
Unless you can model the world well enough that you don't get black swan events, he could well be right - you can't make good maps, and instead of trying to navigate with faulty maps you should keep your eyes on the terrain and drive like you don't know what's on the other side of the hill. If you two are really as smart as your writing suggests, that should have been an interesting conversation ;)
Marek Krysiak
It’s not my field of expertise (not that I have any) so please excuse me if I write something extraordinarily stupid:
Smokeskin wrote:
Unless you can model the world well enough that you don't get black swan events (...)
There are probably at least two reasons you can never exclude the possibility of a Black Swan event occurring - because your world-model will never be good enough:

1) (Bostrom's argument) You can never exclude the possibility that our universe is a simulation, an N+1 world for some N world. In this case the rules of our reality may be broken or changed at any moment, and it could even be done in such a way that we couldn't perceive the change from the inside. You can't simulate the N reality while being inside the N+1 reality, because N is 'bigger' (it contains more bits of information than N+1) - so you can't predict either the possibility or the nature of all N-induced changes, even if you turn all matter in the Universe into computronium.

2) (Langton's Ant argument) You can't predict the final outcome of a process (even if it's governed by very simple rules) without observing or computing every single step of the process (see the sketch below). So you have to either observe the process or run a perfect simulation of it. To make a perfect simulation of a process you need full information about the state of the system you want to simulate - so you need all of the information contained in the Universe. You can't get all of that information:
- because of the uncertainty principle;
- because you are inside the Universe - so your initial information would have to be of the same 'size' as the Universe _plus you_ (your identity, mind, processing unit) - it's like building a 2m³ box while sitting inside a 1m³ box;
- because the answer you want to get would include _your_ state after receiving the answer - you'd have the process of yourself processing yourself processing yourself - I think, by Gödel-style arguments, it's impossible to make such a computation.
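A minimal Python version of the ant itself (standard rules, nothing EP-specific), just to make point 2 concrete - the only way to know the state after N steps is to grind through all N of them:

[code]
# Langton's Ant: on a white cell turn right, on a black cell turn left,
# flip the cell, move forward. Simple rules, but the long-run behaviour
# (chaos, then a repeating "highway" after ~10,000 steps) is only found
# by actually running every step.
def langtons_ant(steps):
    black = set()              # cells currently black; everything else is white
    x = y = 0
    dx, dy = 0, -1             # facing "up"
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx   # black cell: turn left
            black.remove((x, y))
        else:
            dx, dy = -dy, dx   # white cell: turn right
            black.add((x, y))
        x, y = x + dx, y + dy  # move forward one cell
    return len(black)

print(langtons_ant(11000))     # no shortcut: all 11,000 steps must be simulated
[/code]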
Smokeskin wrote:
(...) instead of trying to navigate with faulty maps you should keep your eyes on the terrain and drive like you don't know what's on the other side of the hill. (...)
And that's not doing any kind of prediction of the future - returning to the chess metaphor: acting without any strategy, treating every single turn as the only one, and maximizing your possible gain _right now_. A rather short-sighted strategy - but not necessarily a faulty one. It's what viruses do - and they _do_ seem quite successful. It may very well be the only optimal survival strategy for lesser beings (such as ourselves or our primitive seed AIs) - don't try to predict the behaviour of greater beings, because you'll always be outsmarted. Instead devote all your resources to strategies maximizing your chance of survival: make the maximum possible number of copies of yourself, spread them as far as possible, make yourself unworthy of others' interest (because the cost of feeding on you is higher than the possible gain). Maybe that's the reason the TITANs ran like hell.


Decivre
Smokeskin wrote:
You're twisting the analogy. Chess is not predictable for humans. Even Kasparov can only look so far ahead, and what happens from that point on is effectively unpredictable. He relies on his ability to recognize and evaluate strong positions with good contingency options from that point. Is that so much different from the plottings of a seed AI?
Incorrect. Chess is completely predictable for humans. The problem for prediction in chess isn't limitations in how far you can look ahead, but rather limitations in the amount of time you have to make your predictions before you are required to make your move. A chess game does not work under the assumption that both players have an infinite amount of time to calculate every move, but if they had eons to calculate the game, they could very well theoretically calculate a game hundreds of turns into the future. The primary advantage that a computer has in this is that a computer has speed of calculation (the ability to process millions of numbers in intervals of time smaller than a second) and perfect data storage (whereas human memory tends to be vastly imperfect). Given an infinite amount of time to calculate turns and a means to store all the data he needs, Kasparov would have been on equal ground with Deep Blue.
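To make the "computable in principle, intractable in practice" distinction concrete, here is a minimal Python sketch that exhaustively solves tic-tac-toe by minimax - a game small enough to actually finish. Chess has the same formal property, but its game tree is so much larger that no realistic amount of time closes the gap:

[code]
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Exhaustive minimax value of a position: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(board[:i] + player + board[i+1:], nxt)
              for i, cell in enumerate(board) if cell == "."]
    return max(scores) if player == "X" else min(scores)

print(value("." * 9, "X"))   # 0 -- perfect play from the empty board is a draw
[/code]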
Smokeskin wrote:
And just so you don't go off on a tangent: it was an analogy to illustrate my impression that seed AIs can model and predict human and societal behavior far better than mere transhumans, and even their mistakes look almost omniscient to us. I'm sure you could stretch the analogy to its breaking point even without twisting it, but all that would be is a straw man.
Except it wouldn't be. Again, reality isn't a chess game. The advantages of computational calculation fall apart when the parameters you are researching aren't necessarily broken down into neat values. In fact, it's likely that Seed AI will actually work in quite the opposite manner of modern computers like Deep Blue; their methods of thinking may actually be more similar to ours, rather than to the processes that a computer uses. This would give a Seed AI the advantage of being able to use inductive, abductive and analogical reasoning alongside deductive reasoning. But do note that I never disagreed with the sentiment that Seed AI would be better at these things than us, only with the premise that they would be even remotely close to perfect at it, hence the reason I'm not fond of a chessboard analogy.

Furthermore, all Seed AI are not equal... despite being self-reprogramming machines capable of exponential improvement, they are still limited by the rate at which their thinking minds can function. The Prometheans are vastly more intelligent than humans, but probably much less intelligent than the TITANs, who didn't have their learning rate retarded in the same manner. To that end, the ETI (if they are Seed intelligences) are likely to be exponentially more intelligent than the TITANs, since they have had millions to billions of years more time to calculate, improve and advance.
Smokeskin
Decivre wrote:
Smokeskin wrote:
You're twisting the analogy. Chess is not predictable for humans. Even Kasparov can only look so far ahead, and what happens from that point on is effectively unpredictable. He relies on his ability to recognize and evaluate strong positions with good contingency options from that point. Is that so much different from the plottings of a seed AI?
Incorrect. Chess is completely predictable for humans. The problem for prediction in chess isn't limitations in how far you can look ahead, but rather limitations in the amount of time you have to make your predictions before you are required to make your move. A chess game does not work under the assumption that both players have an infinite amount of time to calculate every move, but if they had eons to calculate the game, they could very well theoretically calculate a game hundreds of turns into the future.
Wow, you managed to both twist the analogy even further AND go off on a tangent!
Quote:
Smokeskin wrote:
And just so you don't go off on a tangent: it was an analogy to illustrate my impression that seed AIs can model and predict human and societal behavior far better than mere transhumans, and even their mistakes look almost omniscient to us. I'm sure you could stretch the analogy to its breaking point even without twisting it, but all that would be is a straw man.
Except it wouldn't be. Again, reality isn't a chess game. The advantages of computational calculation fall apart when the parameters you are researching aren't necessarily broken down into neat values. In fact, it's likely that Seed AI will actually work in quite the opposite manner of modern computers like Deep Blue; their methods of thinking may actually be more similar to ours, rather than to the processes that a computer uses. This would give a Seed AI the advantage of being able to use inductive, abductive and analogical reasoning alongside deductive reasoning. But do note that I never disagreed with the sentiment that Seed AI would be better at these things than us, only with the premise that they would be even remotely close to perfect at it, hence the reason I'm not fond of a chessboard analogy. Furthermore, all Seed AI are not equal... despite being self-reprogramming machines capable of exponential improvement, they are still limited by the rate at which their thinking minds can function. The Prometheans are vastly more intelligent than humans, but probably much less intelligent than the TITANs, who didn't have their learning rate retarded in the same manner. To that end, the ETI (if they are Seed intelligences) are likely to be exponentially more intelligent than the TITANs, since they have had millions to billions of years more time to calculate, improve and advance.
You completely failed to understand the analogy. I was comparing the difference between a seed AI's and a transhuman's understanding of the world to the difference between Kasparov's and an average person's ability in chess. I never even hinted at what you're saying here. You really can't help going off on tangents and arguing against straw men, can you?
Smokeskin
Marek Krysiak wrote:
It’s not my field of expertise (not that I have any) so please excuse me if I write something extraordinarily stupid:
Smokeskin wrote:
Unless you can model the world well enough that you don't get black swan events (...)
There are probably at least two reasons you can never exclude the possibility of a Black Swan event occurring - because your world-model will never be good enough
Being in a simulation and getting "admin interference" would indeed be a black swan event, but the risk of that has no practical meaning, does it? I believe you're wrong on the other arguments. You don't need to be able to predict everything to be immune to black swans. Understanding the possible outcomes (and mostly being able to quantify their probability at least reasonably accurately) is enough. I can't predict the outcome of a coin toss, much less a game of backgammon, but there are no black swans for me in those games. A home invasion isn't a black swan for me either. Few (reasonable) historians would find the current financial crisis a black swan, while it was a black swan for most financial actors because of the economic models and theories they relied on.
Decivre
Smokeskin wrote:
You completely failed to understand the analogy. I was comparing the difference between a seed AI's and a transhumans understanding of the world to the difference between Kasparov's and an average person's ability in chess. I never even hinted at what you're saying here. You really can't help going off on a tangent and argue against straw men, can you?
What I find ironic about this is that Arenamontanous and I said very similar things about the inherent unpredictability of reality, but only I am apparently fighting a strawman. Is it because I didn't mention Kolmogorov's axioms?
Marek Krysiak
Smokeskin wrote:
Being in a simulation and getting "admin interference" would indeed be a black swan event, but the risk of that has no practical meaning, does it?
Well, it depends what you define as "practical meaning" - I would say (but it's my opinion only - it's paradigm-dependent) that realizing that you're inside an N+1 world makes all your actions null and void, except for those that could result in escaping to the N world. It's similar to the case in which an AI realises it's inside a virtual environment - in the majority of fictional stories the AI tries to escape. We of course can't know if they'd do that in reality - but I think Arenamontanus wrote some kind of dissertation on this topic and I doubt I could possibly say something more interesting than he could.
Smokeskin wrote:
I believe you're wrong on the other arguments. You don't need to be able to predict everything to be immune to black swans. Understanding the possible outcomes (and mostly being able to quantify their probability at least reasonably accurately) is enough. I can't predict the outcome of a coin toss, much less a game of backgammon, but there are no black swans for me in those games. A home invasion isn't a black swan for me either. Few (reasonable) historians would find the current financial crisis a black swan, while it was a black swan for most financial actors because of the economic models and theories they relied on.
There are plenty of possible black swan events in those games - an example would be a coin getting stuck between the planks of the floor. But this is exactly the same argument Decivre, Kroeghe and Arenamontanus used with reference to the chess metaphor. I agree with you when you write that the Seed AIs of the EP world are more capable as far as relatively simple systems are concerned - even their computational speed alone is reason enough. But you also wrote:
Smokeskin wrote:
(...) Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
The Solar System _is not_ simple, and neither is it closed. Or fully discovered. The discovery of a new Gate, a new source of rare elements, a new scientific breakthrough - all those things could possibly change the outcome of any and all political or economic processes: little changes can cause big consequences - this is the principle of chaos theory. I agree with you that a Seed AI may be capable of predicting the mental states of a human quite accurately - but at the same time it could be surprised by the fact that the human in question is in possession of a grenade. It thought it was playing the "conversation with a human" game, when it was "conversation with a human who also has a grenade" all along.


Arenamontanus
Decivre wrote:
What I find ironic about this is that Arenamontanus and I said very similar things about the inherent unpredictability of reality, but only I am apparently fighting a strawman. Is it because I didn't mention Kolmogorov's axioms?
Yes, they are a perfect "get out of jail free" card! :-) Seriously, I think most threads here will go off on tangents and that we do not always understand the points the others are trying to make. So what? As long as we end up with interesting discussions and inspirations for the game, everything is fine.
Xagroth
Another option for playing a superhuman intelligence at a human level is to say that it is overextended. The same way a computer can crash if it suffers a denial-of-service attack, because it has so many queries stacked up and waiting to be processed, a Promethean or a TITAN might be involved in so many projects, so many calculations and so many conversations that the part of it speaking with the player characters is at the level of a (very) smart (trans)human. Increasing the level of alienness of the AI, if you go with this approach, might be achieved by giving the players the impression that they are talking to a composite mind, while in fact they are interacting with one process among many (it is not a mind composed of several added together, but a mind divided). Use of the plural by the process might be good in that regard. If you want an example, get a turn-based 4X space game (Sword of the Stars comes to mind, but I liked Space Empires 5 a lot more) and see the difference between dividing 10 million research points among 100 different projects... or focusing them on a single one.

Somebody said that players will metagame... well, a good illusionist might start the trick while the audience thinks he is just introducing himself! To put it more clearly: expect your players to use the metagame, so introduce stuff in there that will lead them the way you want them to go. "Hack" their minds! And of course, choose your words with caution. That means you will need some pre-made phrases that might seem normal but carry a great background impact (for example, in ME1 Sovereign uses 14 words to place itself as a Cthonian horror out of Lovecraft's Necronomicon: "You exist because we allow it, and you will end because we demand it". Individually it's not really that great... but add several phrases like that, all suggesting something, and the players will feel it). Knowing beforehand what your players' most common questions are likely to be will be helpful, of course, but the point is that not all of the words need to be perfect. Remember: you need to convince the players that there is something there they cannot even begin to understand. But that doesn't mean it can just sound mad! It needs to make sense, even if they have to think hard to find it (or, and this is really great, only after something has happened!).
Smokeskin
Decivre wrote:
Smokeskin wrote:
You completely failed to understand the analogy. I was comparing the difference between a seed AI's and a transhumans understanding of the world to the difference between Kasparov's and an average person's ability in chess. I never even hinted at what you're saying here. You really can't help going off on a tangent and argue against straw men, can you?
What I find ironic about this is that Arenamontanous and I said very similar things about the inherent unpredictability of reality, but only I am apparently fighting a strawman. Is it because I didn't mention Kolmogorov's axioms?
Arenamontanus provided an enlightening comment about different kinds of uncertainty that I agreed with. You made the - in the context of the analogy - erroneous claim that chess is predictable and computable, when obviously neither is the case for Kasparov (or for any machine, for that matter; outside of solved endgame positions those are only theoretical properties). I pointed that out. And because I know your style of argumentation, I asked that you address the point I was making, instead of trying to twist and stretch the analogy beyond its scope and begin arguing against straw men. Which you then of course proceeded to do by talking nonsense about "Kasparov having infinite time to plan moves" and presenting "seed AIs aren't like Deep Blue" as if it were a counterargument.
Smokeskin Smokeskin's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Marek Krysiak wrote:
Smokeskin wrote:
Being in a simulation and getting "admin interference" would indeed be a black swan event, but the risk of that has no practical meaning, does it?
Well, it depends what you define as “practical meaning” – I would say (but it’s my opinion only – it’s paradigm-dependent) that realizing you’re inside the N+1 world makes all your actions null and void, except for those that could result in escaping to the N world. It’s similar to the case in which an AI realises it’s inside a virtual environment – in the majority of fictional stories the AI tries to escape.
I hadn't thought about actions to escape. Good call!
Quote:
Smokeskin wrote:
I believe you're wrong on the other arguments. You don't need to be able to predict everything to be immune to black swans. Understanding the possible outcomes (and mostly being able to quantify their probabilities at least reasonably accurately) is enough. I can't predict the outcome of a coin toss, much less a game of backgammon, but there are no black swans for me in those games. A home invasion isn't a black swan for me either. Few (reasonable) historians would call the current financial crisis a black swan, while it was a black swan for most financial actors because of the economic models and theories they relied on.
There are plenty of possible black swan events in those games – one example would be a coin getting stuck between the planks of the floor.
Only if we had no other coins ;) It reminds me of something I saw on TV: James Randi has a coin with heads on both sides that he uses to get free dinners 50% of the time. After dinner with an acquaintance, he'll offer the wager that the loser of a coin toss pays for dinner, and asks the acquaintance to call it. If heads are called as often as tails, Randi should lose 50% of the time even with his fake coin and have to pay for both their dinners, right? Wrong. If the acquaintance calls "heads", Randi will just say "I just wanted to see if you were a sport" and not go through with the toss ;)
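If anyone wants to check the asymmetry, here's a quick simulation sketch - just my own toy code, assuming the acquaintance calls heads or tails with equal probability:

[code]
import random

# Quick sketch of the Randi double-headed-coin wager.
# Assumption: the acquaintance calls heads or tails at random, and Randi
# only goes through with the toss when "tails" is called -- his coin is
# heads on both sides, so the caller then loses and pays for dinner.

random.seed(42)
trials = 100_000
free_dinners = 0   # tosses Randi wins
dinners_paid = 0   # tosses Randi loses (never happens)

for _ in range(trials):
    call = random.choice(["heads", "tails"])
    if call == "tails":
        free_dinners += 1      # coin lands heads, the caller loses
    # else: "I just wanted to see if you were a sport" -- no bet at all

print(f"Free dinners: {free_dinners / trials:.1%} of dinners")
print(f"Dinners Randi lost and had to pay for: {dinners_paid}")
[/code]

It prints roughly 50% free dinners and zero losses - the fake coin turns a fair-looking bet into "win half the time, never lose".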
Quote:
But this is exactly the same argument Decivre, Kroeghe and Arenamontanus used with reference to the chess metaphor. I agree with you when you write that the Seed AIs of the EP world are more capable as far as relatively simple systems are concerned – their computational speed alone is a good enough reason. But you also wrote:
Smokeskin wrote:
(...) Aren't seed AIs just like that? To them, a theory of mind is like the movement rules for a chess pawn are for us. The next year of Sol system politics is like chess openings, with all outcomes well-studied and known. Their machinations now are meant to place them in a beneficial position several years in the future.
The Solar System _is not_ simple, and neither is it closed. Or fully discovered. The discovery of a new Gate or a new source of rare elements, a new scientific breakthrough – all those things could change the outcome of any and all political or economic processes: little changes can cause big consequences – this is the principle of chaos theory. I agree with you that a Seed AI may be capable of predicting the mental states of a human quite accurately – but at the same time it could be surprised by the fact that the human in question is in possession of a grenade. It thought it was playing the “conversation with a human” game, when it was “conversation with a human who also has a grenade” all along.
It shouldn't be taken too literally. It's an analogy, not a 1:1 map. The point was merely that a seed AI should be expected to take actions that are win-win across a wide, almost all-encompassing range of scenarios. It wouldn't be like a very smart human Nobel prize winner who is still capable of making social gaffes that a 12-year-old would cringe at, or of making poor investment choices. Unless it had for some reason chosen not to try to understand and adapt to certain aspects of its environment, it should in all matters act with the same insight that Kasparov brings to chess. But of course, the real world can throw up surprises that even a seed AI wouldn't have known about. Heck, the AI might simply change its mind about what it wanted to do.
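As a side note on the "little changes, big consequences" point: you don't need a whole Solar System to see it. Here's a small illustrative sketch (the logistic map is just the standard textbook toy example of chaos, nothing EP-specific) where two starting points differing by one part in a billion diverge completely within a few dozen steps:

[code]
# Sensitive dependence on initial conditions with the logistic map
# x -> r * x * (1 - x) at r = 4.0, the classic chaotic regime.
r = 4.0
a, b = 0.300000000, 0.300000001   # initial difference of 1e-9

for step in range(1, 41):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(f"step {step:>2}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.6f}")
[/code]

By step 40 the two trajectories have nothing to do with each other - and that's a one-variable toy, not a solar system full of people with grenades.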
Smokeskin Smokeskin's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Xagroth wrote:
Remember: you need to convince the players that there is something there they cannot even begin to understand. But that doesn't mean it should just sound mad! It needs to make sense, even if they have to think hard to find that sense (or, even better, only find it after something has happened!)
Being the GM, you can also change the past :) Let the players uncover stuff that was put in place years ago and clearly is there to address something that just happened.
Arenamontanus Arenamontanus's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Xagroth wrote:
And of course, choose your words with caution. That means you will need to have some phrases pre-made that might seem normal but give a great background impact
This is very true. I try to plan these phrases well ahead. They are worth polishing and exploring so they can be casually dropped in for maximum impact. In yesterday's game one PC was doing desperate gate hacking when his resident psychosis/alien/possessing demon manifested as a mental Microsoft paper-clip saying "It looks like you are trying to annoy GOD. Would you like to contact GOD directly?" The PC clicked yes... (After the survivors came mostly clean to each other about what they did at the finale of the adventure, one concluded "OK, let's *never* speak again about what we heard over the last two minutes.")
Extropian
Marek Krysiak Marek Krysiak's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Arenamontanus wrote:
In yesterday's game one PC was doing desperate gate hacking when his resident psychosis/alien/possessing demon manifested as a mental Microsoft paper-clip saying "It looks like you are trying to annoy GOD. Would you like to contact GOD directly?" The PC clicked yes...
I think you have to be very careful when confronting your PCs with situations and phrases like that. It’s easy to lose their trust and, instead of narrating a great, atmospheric scene, end up making them think something along the lines of: “OK, guys, our GM clearly went on a megalomaniacal ego-trip and his secretary’s not sure when he’ll be back”. I’ve seen more than one moment of this kind (and I’m pretty sure I caused at least a few more myself), and I know it’s not pleasant – it cuts the players off from the game, makes them doubt your objectivity and lose their suspension of disbelief. You want them to feel like you’re just a medium for the story, not some crazy dude feeding his own insecurities. [Also: Was that game the conclusion of the Gate Wars campaign? Did you guys finish? I’d love to read it if you decide to write it down.]


Xagroth Xagroth's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Smokeskin wrote:
Xagroth wrote:
Remember: you need to convince the players that there is something there they cannot even begin to understand. But that doesn't mean it should just sound mad! It needs to make sense, even if they have to think hard to find that sense (or, even better, only find it after something has happened!)
Being the GM, you can also change the past :) Let the players uncover stuff that was put in place years ago and clearly is there to address something that just happened.
Yes... Be REALLY careful with changing the past: placing stuff for the players to find is one thing (and you'd better have an answer to the obvious questions of "why did nobody find it before?" and "why did the players find it now?" - something as simple as "nobody had a reason to look here until you did", for example), but "retconning" events is much more dangerous, especially if the players have been involved. Unless they were ego-hacked and their memories altered, that is: they remember doing one thing, but after a while they discover they did not, things don't add up (while you assure them, out of game, that you are not using "GM powers" to change their past actions), and after an investigation they discover they were fooled... Edited memories, yes, but in-game! And done to the players! Give them extra XP at the end of the session for the mindfuck, though XD.
Decivre Decivre's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Smokeskin wrote:
It shouldn't be taken too literally. It's an analogy, not a 1:1 map. The point was merely that a seed AI should be expected to take actions that are win-win across a wide, almost all-encompassing range of scenarios. It wouldn't be like a very smart human Nobel prize winner who is still capable of making social gaffes that a 12-year-old would cringe at, or of making poor investment choices. Unless it had for some reason chosen not to try to understand and adapt to certain aspects of its environment, it should in all matters act with the same insight that Kasparov brings to chess. But of course, the real world can throw up surprises that even a seed AI wouldn't have known about. Heck, the AI might simply change its mind about what it wanted to do.
Even taken on a merely allegorical level, you are making too general a statement about Seed AI. As I've mentioned before, Seed AIs will be of different capacities and abilities depending on a number of factors: the hardware they are running on, the time they've had to run and improve themselves, and the quality of their initial programming being the most obvious ones that come to mind. Because of this, it shouldn't be unlikely to find Seed AIs at various stages of development, much like people but on a much larger scale. In the same way that you can find humans with infant-level, child-level, adolescent-level and adult-level intelligence, it should be feasible to find Seed AIs at godlike levels of intelligence (the ETI), hyper-advanced levels (the TITANs), superhuman levels (the Prometheans), all the way down to childlike levels (a Seed AI that has only been running for a few weeks or months). There is no reason to assume one even needs to be superior to a human being; its scale may have no necessary upper limit, but nearly the same could be said for its lower limit.

Plus, a Seed AI, while capable as a general intelligence and able to learn any skill, is not necessarily able at every skill as an instinct. The TITANs were built to be ultimate war machines; should one have survived unscathed by the Exsurgent virus, there is no reason to assume it would have any competence in art, literature (outside of military knowledge) or economics (outside of militarily advantageous knowledge thereof). Such is the nature of purpose. In the same vein, a Seed AI built with economic prediction as its primary function might well play the markets like a chess game but not know dick-all about philosophy. A Seed AI built around mastering human behavior may not have the slightest understanding of uplifts or other AIs. Sure, if it becomes relevant to its goals it may have the ability to master the subject with ease - but only if it has any real function within those goals, programmed or otherwise.

So the chess analogy is only apt when that Seed AI is playing its chess... just as I wouldn't expect Kasparov to be a master at making pancakes, I would not expect a Seed AI to be a master of all things outside its element. And that could be the very key to making a believable Seed AI character: if the PCs are speaking to a Seed AI built around software programming, why wouldn't it be capable of making mistakes while talking about TITAN society? Hell, it might even screw up just talking... it might not necessarily have been built with linguistics in mind.
Transhumans will one day be the Luddites of the posthuman age. [url=http://bit.ly/2p3wk7c]Help me get my gaming fix, if you want.[/url]
Arenamontanus Arenamontanus's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Yes, narrating Powerful Entities is hard since they can come across as silly. There is a fine line between epic and ridiculous. I think the trick is to make the PCs feel the importance of the choice - in the earlier-mentioned case of the contact with "GOD", it left the PCs permanently stranded on an exoplanet without a gate, utterly destroyed another solar system, and left the initiating PC even more warped by the realization of how powerful the gate network truly is ("There is more computational power inside it than in the entire 'real' universe..."). That might still have been too light, and maybe not subtle enough.
Marek Krysiak wrote:
[Also: Was that game a conclusion of the Gate Wars campaign? Did you guys finished? I’d love to read it if you decide to write it down.]
A writeup is coming soon.
Extropian
Smokeskin Smokeskin's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Decivre wrote:
...
You're right. In the future, consider my lack of replies a sign of silent consent.
Smokeskin Smokeskin's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Xagroth wrote:
Smokeskin wrote:
Xagroth wrote:
Remember: you need to convince the players that there is something there they cannot even begin to understand. But that doesn't mean it should just sound mad! It needs to make sense, even if they have to think hard to find that sense (or, even better, only find it after something has happened!)
Being the GM, you can also change the past :) Let the players uncover stuff that was put in place years ago and clearly is there to address something that just happened.
Yes... Be REALLY careful with changing the past: placing stuff for the players to find is one thing (and you'd better have an answer to the obvious questions of "why did nobody find it before?" and "why did the players find it now?" - something as simple as "nobody had a reason to look here until you did", for example), but "retconning" events is much more dangerous, especially if the players have been involved. Unless they were ego-hacked and their memories altered, that is: they remember doing one thing, but after a while they discover they did not, things don't add up (while you assure them, out of game, that you are not using "GM powers" to change their past actions), and after an investigation they discover they were fooled... Edited memories, yes, but in-game! And done to the players! Give them extra XP at the end of the session for the mindfuck, though XD.
Telling an inconsistent story is a bad idea, but I don't think the dangers of changing the past are any higher than with any other plot - every plot has to stand up to "why didn't someone just do something?" questions. In the end, trying to play a superintelligent NPC must involve you cheating - giving it info it shouldn't have, changing the past or railroading future events to ensure that its plan really does turn out to be a mastermind plan. I'd rather cheat than set the intelligence ceiling of all NPCs equal to mine ;)
Xagroth Xagroth's picture
Re: Seed AIs, Prometheans, and TITANs, oh my!
Smokeskin, what I mean is that a GM has to be really careful to be consistent. That is usually said about house ruling, but it also applies to the past actions of the characters. My point was that if you want to change something in the past, you should catch the players by surprise (meaning no out-of-game conversation along the lines of "remember X? I am going to change it so the story can move forward more smoothly"). Of course, cheating is a given tactic - but it is best if the players don't catch you doing it!

Also, about Seed AI development... I think a good analogy for them is the way a character evolves in an MMORPG - a hardcore player's character, that is! It would go for the most efficient options, the most valuable ones. Meaning a superb build, if you like. Which means all Seed AIs with the same purpose will end up quite similar ("all healing paladins will have this build, give or take 3-4 points and some gear differences", for example). There are a lot of paths of optimization, but only so many are really interesting for a being like that.

And perfection can be a trap, of course. An old trick against AIs that always go for the optimal strategy is to make suboptimal choices that force them to recalibrate their entire strategy tree time after time. Prometheans, being more careful in their "leveling", would be more resistant to this, however.