If you spend time in certain areas of IT, trustless protocols seem to be all the rage. Why rely on simple trust when you can verify?
But human interactions rely on trust, or at least on having a sense of how much the other side can and should be trusted. The problem is that this very human instinct does not scale. Trust relies on knowledge of prior actions and behavior, and the models we have trained in face-to-face interactions cannot be applied to hundreds or even thousands of people online.
On top of that comes the problem that most people in a given social network probably don’t understand how the technology they are using really works. I have been working in this field for quite a while now, but even I could not tell you offhand how the software I am writing this blog post with works in detail.
So for a social universe to be safe, certain things need to work, and for me to use it in a meaningful way, I need to be able to trust quite a few things and people not to mess up:
- my browser
- my internet service provider
- the hosting provider that runs the hardware of my chosen social instance
- the admin of the instance, and whoever works with them
- the software that makes up the instance, and the people who wrote it
- and, let’s not forget, the people I am actually sending pictures of my cat to
- (I’m not trusting the cat though. He’s a mean bugger.)
That is a lot of moving parts and people, just to share cat pictures, recipes and maybe the occasional call for peaceful resistance against an oppressive regime.
So trust is important, and any platform should make an effort to give users the information they need to reliably give or withhold that trust. I believe that transparency about how things work and about who has access to what and why, combined with observable and verifiable code, will help a lot with that.
Another point: on a decentralized or distributed platform, data does not sit in a single central repository, and the various systems can check on each other to see whether any part is violating the common agreement:
- the common API should be designed so that only the minimum of information is shared
- access to system and data by admins, hosters and moderators needs to be restricted and monitored
- automated checks could alert users when certain instances reveal more data through the API than allowed; a sketch of such a check follows this list
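To make that last point concrete, here is a minimal sketch of such a check in Python. The endpoint path, the user ID, and the allowed field set are all hypothetical; a real check would be driven by the platform’s actual API schema.

```python
import json
from urllib.request import urlopen

# Hypothetical whitelist: the fields the common API is allowed to
# expose for a public profile. A real list would come from the spec.
ALLOWED_PROFILE_FIELDS = {"id", "display_name", "avatar_url", "public_key"}

def check_instance(base_url: str, user_id: str) -> set[str]:
    """Return any profile fields an instance exposes beyond the agreed set."""
    # The /api/profile/ path is made up for illustration.
    with urlopen(f"{base_url}/api/profile/{user_id}") as response:
        profile = json.load(response)
    return set(profile) - ALLOWED_PROFILE_FIELDS

extra = check_instance("https://example-instance.social", "alice")
if extra:
    print(f"Warning: instance exposes fields beyond the agreement: {sorted(extra)}")
```

Run regularly against every instance in the network, a check like this turns “trust the admins” into “verify the instances”, at least for the data that is visible through the API.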
There is also the option of letting users mark other users as trustworthy and then using that web of trust as another metric. The downside is that such a web of trust could itself become an attack vector for certain threats. People sharing cat pictures won’t worry about that, but dissidents in some countries should.
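To give an idea of how such a metric could work, here is a toy sketch of trust propagation over user endorsements. The decay factor and the hop limit are arbitrary illustrative choices, not a recommendation:

```python
from collections import deque

def trust_score(trusts: dict[str, set[str]], me: str, target: str,
                decay: float = 0.5, max_hops: int = 3) -> float:
    """Strongest-path trust from `me` to `target`: direct endorsements
    count for `decay`, friend-of-a-friend for `decay` squared, and so on."""
    best = {me: 1.0}  # best known trust weight per user
    queue = deque([(me, 0)])
    while queue:
        user, hops = queue.popleft()
        if hops == max_hops:
            continue
        for friend in trusts.get(user, ()):
            weight = best[user] * decay
            if weight > best.get(friend, 0.0):
                best[friend] = weight
                queue.append((friend, hops + 1))
    return best.get(target, 0.0)

# Alice trusts Bob directly; Carol is only reachable through Bob.
endorsements = {"alice": {"bob"}, "bob": {"carol"}}
print(trust_score(endorsements, "alice", "bob"))    # 0.5
print(trust_score(endorsements, "alice", "carol"))  # 0.25
```

Decaying trust per hop and capping the number of hops are two simple ways to limit how far a compromised account’s endorsements can propagate, which is exactly the attack surface mentioned above.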
What is the point of this post? Trust is hard. And we want to think about it quite a bit more, so we can earn it from you.