When you start out, you can receive content from anyone who publishes publicly (i.e. content published to everyone, even to people who aren't on the publisher's trust list).
Over time, you mark users as trusted or untrusted. Content from untrusted users is filtered out. Users you trust also have trust lists of their own: if you trust someone, you begin to trust the users with good reputation on their trust list.
You can, of course, decide whether you only want to send/receive content from people you already trust, or not.
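Roughly, the visibility rule I have in mind looks like the following minimal Python sketch. The class and method names (TrustStore, effective_trust, is_visible) are made up for illustration; this is just the one-hop transitive rule described above, not a finished design.

```python
class TrustStore:
    def __init__(self):
        self.trusted: set[str] = set()       # users I marked trusted
        self.untrusted: set[str] = set()     # users I marked untrusted
        # trust lists published by others: user -> the users they trust
        self.their_lists: dict[str, set[str]] = {}

    def effective_trust(self, publisher: str):
        """True = trusted, False = untrusted, None = unknown."""
        if publisher in self.untrusted:
            return False
        if publisher in self.trusted:
            return True
        # one-hop transitive trust: trusted by someone I already trust
        for friend in self.trusted:
            if publisher in self.their_lists.get(friend, set()):
                return True
        return None

    def is_visible(self, publisher: str, strict: bool = False) -> bool:
        # strict=False: also show unknown publishers (the starting-out mode)
        # strict=True: only show publishers I (transitively) trust
        trust = self.effective_trust(publisher)
        if trust is False:
            return False
        return trust is True if strict else True
```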
Starts as trust-all and slowly (or instantly) migrates to a web of trust?
Let's say I trust everyone and see your content. I mark you as trusted. Does that mean I now only see content based on your web of trust? And from that point forward I only trust a subset of whomever I trusted first, and by extension their web of trust?
Any thoughts on the information bubbles or echo chambers this might create?
Honestly not trying to nitpick, just trying to understand how this would function pragmatically. In a security context where a web of trust exists, an untrusted key is not in and of itself disqualifying. It simply tells the user that the key is not trusted, but ultimately the user still makes the decryption or authentication call. Meaning the content is still seen.
But here it acts as a filter, which I assume means it excludes others' content based strictly on the established web of trust?
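To make the distinction I'm drawing concrete, here is a rough illustration with assumed semantics (not taken from any real implementation): in the security case trust only annotates, in the filter case it decides delivery.

```python
def pgp_style(message: dict, key_trusted: bool) -> dict:
    # security context: "untrusted" only annotates; the user still sees it
    if not key_trusted:
        message["warning"] = "signing key is not in your web of trust"
    return message            # always delivered; the user makes the call

def filter_style(message: dict, publisher_trusted: bool):
    # feed/filter context: "untrusted" means the content never reaches you
    return message if publisher_trusted else None
```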
Any practical implementations? How do they deal with these types of issues? Trust thresholds?
A great deal of work has been done on this subject: http://www.draketo.de/english/freenet/friendly-communication-with-anonymity
You can translate the input required by the Web of Trust, as described in the scalability calculation, to information already available in the federation:
- As WoT identity, use the URL of a user on an instance; it is roughly controlled by that user.
- As peer trust from Alice to Bob: if Alice follows Bob, use trust = 100 - (100 / number of messages from Alice to Bob).
- As negative trust, use a per-user blacklist (blocked users).
- For initial visibility, just use visibility on the home instance.
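As a rough sketch of that mapping (the function names and the numeric value used for a blacklist entry are my own assumptions):

```python
def wot_identity(instance: str, username: str) -> str:
    # WoT identity = the user's URL on their home instance
    return f"https://{instance}/@{username}"

def peer_trust(alice_follows_bob: bool, messages_alice_to_bob: int,
               bob_blocked_by_alice: bool):
    if bob_blocked_by_alice:
        return -100              # blacklist entry as negative trust (assumed value)
    if not alice_follows_bob or messages_alice_to_bob == 0:
        return None              # no trust edge at all
    # "100 - (100 / number of messages from Alice to Bob)"
    return round(100 - 100 / messages_alice_to_bob)

# e.g. 1 message -> 0, 2 -> 50, 10 -> 90, 100 -> 99
```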
It seems some measure of trust threshold is used. I suspect this has subtle implications, both pros and cons. I also suspect it's ripe for censorship bots.
Let's say you trust someone who trusts someone. That third person is a shill who in turn trusts a web of shill accounts, all of whom distrust important content producers. Presumably this also negatively impacts your calculated trust value for those content producers.
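To make that concern concrete, here's a toy model. The decay factor and the weighted aggregation are assumptions for illustration, not how any particular WoT implementation computes scores; the point is just that a shill ring a couple of hops out can outvote an honest rating.

```python
def score(graph: dict[str, dict[str, int]], me: str, target: str,
          decay: float = 0.5, max_hops: int = 4) -> float:
    """graph[rater][ratee] = trust rating in [-100, 100]."""
    total, weight_sum = 0.0, 0.0
    frontier = {me: 1.0}                 # rater -> weight given to their opinion
    seen = {me}
    for _ in range(max_hops):
        next_frontier: dict[str, float] = {}
        for rater, w in frontier.items():
            for ratee, rating in graph.get(rater, {}).items():
                if ratee == target:
                    total += w * rating          # accumulate weighted rating
                    weight_sum += w
                elif rating > 0 and ratee not in seen:
                    seen.add(ratee)              # only positive trust propagates
                    next_frontier[ratee] = max(next_frontier.get(ratee, 0.0),
                                               w * decay)
        frontier = next_frontier
    return total / weight_sum if weight_sum else 0.0

graph = {
    "me":     {"friend": 80},
    "friend": {"shill": 60, "producer": 90},
    "shill":  {"shill2": 100, "shill3": 100, "producer": -100},
    "shill2": {"producer": -100},
    "shill3": {"producer": -100},
}
print(score(graph, "me", "producer"))   # -5.0: three shill ratings outweigh friend's +90
```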
Are we ultimately looking at Bayesian filter analysis on the web of trust, with weighting?