I've been wrestling with the question (OK, it might just be a question for me) of rep networks and how they securely and accurately tie rep to people.
I have some concerns about how I think it works in-game, and some suggestions for how I think it ought to work.
At first glance it seems to make some sense to tie a rep to a meshID. That way when you meet someone you can tag their meshID, run a lookup, and know their rep score.
But if I imagine the rep network as a huge list mapping meshIDs to repScores, then I can browse one of my networks (e.g. Science), copy down a list of meshIDs, run a lookup on a different network (e.g. Crime), and any positive hits hand me a whole bunch of criminal scientists. While that may be useful, it hits my 'oh my god, what about privacy' nerve.
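To spell that worry out with toy data (the IDs and scores below are made up, of course): if two networks both key their scores on the same meshID, linking them is a one-line join.

[code]
# Two rep networks that both key their scores on meshIDs.
science_rep = {"mesh:51f3": 62, "mesh:9abc": 48, "mesh:7e01": 55}
crime_rep   = {"mesh:9abc": 71, "mesh:44d0": 30}

# Browse one, look the IDs up on the other: instant criminal scientists.
criminal_scientists = science_rep.keys() & crime_rep.keys()
print(criminal_scientists)   # {'mesh:9abc'}
[/code]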
Maybe I don't quite get what a meshID is ... but to me it's like a hardware (MAC) address. In 'real life' it is trivial to sniff network traffic, pick up MAC addresses, and change the MAC address of your own computer on the fly. If meshIDs work that way, it seems too easy to spoof someone else's meshID to benefit from their rep level (not necessarily spend their rep, but just have people in awe of you for having such an awesome rep).
These two issues don't sit well with me.
So instead I took a look (mentally) at how I'd design a rep network. First, the underlying system needs to be robust and secure enough to let people 'publicly' and 'privately' ping each other: i.e. I need a publicly known piece of data that anyone can send to my network to get information on me, and a private piece of data I can send to my network to authenticate my transactions on the network. The public/private key encryption system seems perfect for this.
See: [url=http://en.wikipedia.org/wiki/Public-key_cryptography]public key cryptography - wikipedia[/url]
The following example is described in a non-automated way, but I imagine systems would be in place to handle such things seamlessly in the background (poor muse having to handle all the mundane boring jobs - then again, I'd never trust my muse with my passwords ... paranoid much?):
[collapse title=Validating Sally's network rep]
Sally walks into a bar. She has a {network: beer} rep of 50 and pings Bob the barkeep this information along with her public key. Bob (unsurprisingly) happens to also be on {network: beer} and decides to check this.
Bob takes Sally's public key and encrypts it with his private key and hands it to Sally. Sally encrypts this package with her private key and sends it to {network: beer}.
{Network: beer} then starts unwrapping the layers - Sally's outer layer with her public key, then Bob's layer with his public key - and is left with Sally's original public key.
Successfully unwrapping that outer layer proves that Sally's private key really is the pair to her public key, so {network: beer} sends a message to Bob confirming that Sally's {network: beer} rep is indeed 50. {Network: beer} also sends a message to Sally informing her that it has validated her to Bob, and passes her Bob's public key.
[/collapse]
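To make the key handling above concrete, here's a toy sketch in Python using the 'cryptography' package's Ed25519 signatures. The 'encrypt it with my private key' steps are modeled as digital signatures (which is how that operation is normally realised), and the claim format, scores, and names are all my own inventions rather than anything from the book.

[code]
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def raw(pub):
    # Serialize a public key to bytes so it can be used as a lookup key.
    return pub.public_bytes(serialization.Encoding.Raw,
                            serialization.PublicFormat.Raw)

# Sally and Bob each hold a network-unique key pair.
sally_priv = ed25519.Ed25519PrivateKey.generate()
bob_priv = ed25519.Ed25519PrivateKey.generate()
sally_pub, bob_pub = sally_priv.public_key(), bob_priv.public_key()

# {network: beer} stores nothing but public keys mapped to rep scores.
beer_network = {raw(sally_pub): 50, raw(bob_pub): 35}

# 1. Sally pings Bob her claimed rep together with her public key.
claim = raw(sally_pub) + b"|rep=50"
# 2. Bob countersigns the claim with his private key and hands it back.
bob_sig = bob_priv.sign(claim)
# 3. Sally signs the countersigned package and sends it all to the network.
sally_sig = sally_priv.sign(claim + bob_sig)
# 4. The network unwraps the layers with the matching *public* keys; if both
#    signatures check out, it confirms Sally's score to Bob and passes Sally
#    Bob's public key for later trust-validation.
bob_pub.verify(bob_sig, claim)                    # raises InvalidSignature if forged
sally_pub.verify(sally_sig, claim + bob_sig)
print("Confirmed to Bob: Sally's {network: beer} rep is", beer_network[raw(sally_pub)])
[/code]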
The rep network has managed to function as needed without ever needing to map to anything other than network-unique key pairs. In addition it has traded public keys between Bob and Sally in a trustworthy way which might let them 'friend' or trust-validate each other on the network.
If Sally wished to then 'spend' some of her rep, I can imagine such a transaction taking place in a similar way.
[collapse title=Sally switches network and possible risks]Later, Sally leaves the bar and walks into a coffee shop. She doesn't want people knowing she is on {network: beer} (she's either paranoid or ashamed), so instead she flashes up her {network: coffee} rep to Skimble the barista, and a similar process to before unfolds.[/collapse]
If Skimble and Bob happen to be forks of the same person, Sally's {network: beer} and {network: coffee} identities could suddenly, in Bob/Skimble's mind, be linked to the same person, and they could share or sell that info. If Sally ever finds out, she's going to 'thumbs down' Bob/Skimble big time! But at least they couldn't get this info just by walking past Sally and noting down whatever meshID she was broadcasting.
In addition to this underlying process, I imagine the systems/servers that manage the rep network would run as a distributed network, never relying on a mesh connection to any single node, and taking advantage, in a p2p BitTorrent-like way, of runtime on the systems of everyone subscribed or connected to the network.
This would allow anyone to create a new identity on a network simply by signing up and creating their public/private key pair.
So what's to stop someone signing up 100 times and then gaming the system? Maybe algorithms that monitor for suspicious activity, or perhaps the sign-up process requires a meshID - with the proviso that the meshID is never revealed by the system? But then what's to stop someone getting 100 fake meshIDs and gaming the system anyway?
There seems to be zero value in signing up with a meshID; I'd go straight for the egoID. The signup process would require validating your egoID at the time you create your public/private key pair. The egoID would be kept on record solely to prevent any future sign-up from the same egoID, and as a mechanism for re-issuing keys in the event of key 'loss'.
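A rough sketch of that sign-up rule follows. Everything here is my own guess at an implementation: the network keeps only a salted hash of the egoID, enough to block duplicate sign-ups (and to anchor key re-issue) without ever being able to hand the egoID back out.

[code]
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

NETWORK_SALT = b"{network: beer}"
registered_egos = set()    # salted egoID fingerprints, never raw egoIDs
members = {}               # public key bytes -> rep score

def fingerprint(ego_id: str) -> bytes:
    return hashlib.sha256(NETWORK_SALT + ego_id.encode()).digest()

def sign_up(ego_id: str) -> ed25519.Ed25519PrivateKey:
    fp = fingerprint(ego_id)
    if fp in registered_egos:
        raise ValueError("this egoID already holds an identity here")
    registered_egos.add(fp)
    priv = ed25519.Ed25519PrivateKey.generate()
    pub_bytes = priv.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    members[pub_bytes] = 0     # everyone starts from zero
    # Key 'loss' would be handled by re-validating the egoID against the same
    # fingerprint and binding a fresh key pair to it (left out of this sketch).
    return priv                # only the new member ever sees the private key
[/code]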
Fake account problems could be mitigated, to some extent, by a 'web of trust' (see [url=http://en.wikipedia.org/wiki/Web_of_trust]web of trust - wikipedia[/url]) with, for example, achievable rep levels tied directly to the number of other network members who have trust-validated your key pair/network identity. To be able to hold a rep in the 60-70 bracket, you need X existing members with rep 60+ to have trust-validated you (where X is either a static number or a percentage of the 60+ network membership, etc.); a rough sketch of this rule follows below.
[b]Note:[/b] Being trust-validated isn't the same as being given a 'thumbs up' or a ++like, etc.
[collapse collapsed title=Why your enemy should trust-validate you]Anyone on the network that hates your guts has just as much incentive to trust-validate you as someone that worships the ground you walk on. The more trusted your identity is, the harder it is for you to separate yourself from the ID, and when your sworn enemy wants to 'thumbs down' you or --like you, they want to be damned sure the rating sticks.
Would --like be the same as ++hate?[/collapse]
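Here's a minimal sketch of the bracket rule. It assumes X is a flat count and that the network exposes hypothetical trust_validators() and rep() lookups; swap in a percentage of the 60+ membership if that reads better.

[code]
VALIDATIONS_NEEDED = 5        # the 'X' from the text, per bracket

def rep_cap(identity, network):
    # Highest rep this identity may hold, given who has trust-validated it.
    cap = 59                                   # below 60 no validators are needed
    for bracket in range(60, 100, 10):         # 60-69, 70-79, ...
        validators = [m for m in network.trust_validators(identity)
                      if network.rep(m) >= bracket]
        if len(validators) < VALIDATIONS_NEEDED:
            break
        cap = bracket + 9
    return cap
[/code]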
This system I have described also seems like a natural expansion of the current public/private key system and the concept of a 'web of trust'. I can easily see (in my overactive imagination) early decentralised 'web of trust' cryptography systems expanding to become rep networks.
I'm sure there are flaws in the above, hopefully some inspiration too. I'd be delighted to hear the opinions of others.
[hr] I've had an idea bouncing around in my head on this for a bit, and the honorable The Green Slime provided me with a reference that finally gave the idea something to connect to.

Take the reputation networks, which are organized into macroscopic groups with varying degrees of overlap. These reputation networks are made up of smaller networks - factions - also with varying degrees of overlap and stability. These factions eventually come down to the smallest reputation unit, a single person, so the smallest reputation connection is between two individuals. If alice-rep and bob-rep are an adorable couple who like to keep rep-boosting each other, the connection between them can be seen to be strong. These clusters relate to each other, and the connections between individual elements of the network get abstracted into the previously mentioned factions and reputation networks.

The key to the why of this is, as mickeykitsune mentioned, that the Mesh is a mesh. Reputations only matter when they are being pinged, so there is no reason to propagate the data past the areas where it will be useful. This leads to just-in-time reputation updates, and the reputation system becomes a multi-leveled complex system with a striking resemblance to how neurons in the brain are organized. Reputation won't be a fixed number, but a reasonably stable range of numbers depending on who is asked, by whom, and how.

It also becomes a vast engine of research as organizations look at different ways to tweak the weights on network connections in simulated models of the economy, to flush out information they find more useful. For instance, Firewall likely does this when they have identified a potential x-threat: they would redraw the model with the x-threat's reputation connections given different weights to see how far damage from this source could propagate.

The system would be much more robust than something with a centralized server, as poisoning someone's reputation from one direction does little to no good, and subverting all of a popular person's reputation would work about as well as a DNS attack on a top-level domain. The system has a dramatic ability to repair itself (with all of the dangers associated with self-repairing systems) and to reorganize to new circumstances (say the Factors figure out how to parse grammar not described in snot, and want to join up). The best part is the potential for the system to show aspects of self awareness. Hobbes' Leviathan come to life.

[EDIT] Better: the explicit numbering of reputations as a way of modeling channel strength between agents is a form of systemic self-awareness. It isn't new data (the reputations would be there anyway), but it is now accessible at a much higher level of abstraction and can be communicated much more efficiently once reputations are posted outside of distributed, disconnected think-meats.

The Green Slime r-rep++
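That 'range of numbers depending on who is asked by whom' part can be made concrete with a toy sketch (entirely my own construction, not anything from the book): a rep ping only travels along the asker's own links, so two people asking about the same ego can get different answers.

[code]
trust = {                    # directed link strengths between agents, 0..1
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.2},
    "dave":  {},
}
opinion = {"bob": {"dave": 70}, "carol": {"dave": 30}}   # direct scores only

def ping_rep(asker, target, hops=2):
    # Ask your own links first; otherwise relay the ping, weighting each
    # answer by how strong your link to that peer is.
    if target in opinion.get(asker, {}):
        return opinion[asker][target]
    if hops == 0:
        return None
    votes = [(w, ping_rep(peer, target, hops - 1))
             for peer, w in trust.get(asker, {}).items()]
    votes = [(w, r) for w, r in votes if r is not None]
    if not votes:
        return None
    return sum(w * r for w, r in votes) / sum(w for w, _ in votes)

print(ping_rep("alice", "dave"))   # ~57.7, pulled toward bob's high opinion
print(ping_rep("carol", "dave"))   # 30, carol's own direct opinion
[/code]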
[hr] Note to self: do a little research before thinking my ideas are clever.
[hr] Some general properties I think the reputation system should have:
I've implemented a model for a system that I believe satisfies these criteria, and it's behaving much more sanely (and quickly!) than the neural network model. The general approach I took is:
An alternate opinion weighting function would be to take all of the people you care about, normalize your reputation vector over them along with the reputation vectors for the same people from each of your friends, and then weight each friend's opinion by s( < your_rep_vector, their_rep_vector > ), where s is a sigmoid function and < . , . > is an inner product. The more closely a friend rates people the way that you do, the more representative you consider his or her opinion. I haven't tested this to see how well it works. (A rough sketch appears below, after the code.)
[code]
def get_rep(self, user, hops):
    # Weighted average of my friends' opinions of 'user', each scaled by my own
    # opinion of that friend. 'hops' counts down toward the model's base case
    # (not shown here) and visited() skips friends already asked on this query.
    total = 0
    nfriends = 0
    for friend in self.friends:
        if friend == user or visited(friend):
            continue
        total += friend.get_rep(user, hops - 1) * self.get_rep(friend, hops - 1)
        nfriends += 1
    return total / nfriends if nfriends else 0   # avoid dividing by zero when no one can be asked
[/code]
75% + 25% * what you believe to be the score of an average person. The usefulness of this part of the computation might be influenced heavily by the small size of my test networks; as you get dozens or hundreds of friends, it probably makes more sense not to privilege your own opinion over that of your friends.
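For concreteness, the alternate inner-product weighting described above might look something like this (numpy, made-up scores; the choice of sigmoid is as loose as in the description):

[code]
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def opinion_weight(my_scores, friend_scores):
    # my_scores / friend_scores: the rep each of us gives to the same list of
    # 'people I care about', in the same order.
    a = np.asarray(my_scores, dtype=float)
    b = np.asarray(friend_scores, dtype=float)
    a /= np.linalg.norm(a) or 1.0
    b /= np.linalg.norm(b) or 1.0
    return sigmoid(np.dot(a, b))   # friends who rate people the way you do count for more

print(opinion_weight([80, 60, 10], [75, 65, 5]))   # ~0.73, rates like me
print(opinion_weight([80, 60, 10], [0, 0, 90]))    # ~0.52, rates nothing like me
[/code]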
[hr] I have a hazy idea of what I'm talking about. More specifically, I have an idea of what sort of game theory is involved, and I try to work back to what the system should look like based on that. I am not always accurate enough in relation to current practice to make any sense when I try the idea out. I am also not always accurate enough in relation to reality to make any sense, but that happens less frequently than it used to. Here is a paper on using Hidden Markov Models to handle the problems associated with modeling a recommender's reputation as a source of accurate reputations of other agents. I'm still working on finding out if and when a neural network is a reasonable model to use for a reputation system. Sometimes academics get more excited about the idea than about how to bring it down to practice, and then academic fankids like me start waving it around in sci-fi forums. [EDIT] Sorry, forgot to link the paper.
[hr] If you want to avoid using negative reputation, you might deal with Hitlers by making a reputation network specific to them. Say Pol Pot, Che, Ahmadinejad, Assange, and Hitler are in a reputation network of People Other People Love To Hate; this network could act as a second-order term to dampen the positive effect of the networks they are connected to, without Hitler's presence costing interest groups like Art much in the way of damped influence themselves.
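One possible reading of that second-order term, sketched with made-up numbers (the weighting and names are my guesses, not anything from the post above):

[code]
PARIAH_WEIGHT = 0.8   # how strongly the hate network damps someone's other scores

def effective_rep(base_rep, pariah_rep, max_rep=100):
    # Damp what another network reports for this person by their standing in
    # the 'People Other People Love To Hate' network; the person's score drops,
    # but the networks they belong to (Art, say) are left alone.
    damping = 1.0 - PARIAH_WEIGHT * (pariah_rep / max_rep)
    return base_rep * damping

print(effective_rep(80, 0))    # 80.0 -- nobody hates you, nothing changes
print(effective_rep(80, 95))   # 19.2 -- Hitler's Art rep still exists, just heavily damped
[/code]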
[hr] No, the paper I was looking at is titled [i]The HMM-Based Model for Evaluating Recommender's Reputation[/i]. There are quite a few papers on topics like this, as neural networks have been very popular as models in between funding droughts in AI research.