
TL;DR: In decentralized applications (which Poal is not, though the future of the internet is decentralized), each individual using the application must decide, for themselves, who they want to receive content from and who not to. This is censorship resistant because there is no central arbiter deciding what you can and cannot see; everyone has to make that choice on their own.

ie, if you don't like Poal giving you the ability to block faggots, then you're gonna have a hard time adjusting to the decentralized web.

edit: faggot




You can translate the required input to the Web of Trust as described in the scalability calculation to use information available in the federation:

- As WoT identity, use the URL of a user on an instance. It is roughly controlled by that user.
- As peer trust from Alice to Bob: if Alice follows Bob, use 100 - (100 / number of messages from Alice to Bob).
- As negative trust, use a per-user blacklist (blocked users).
- For initial visibility, just use visibility on the home instance.
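The peer-trust formula above can be sketched in a few lines. This is a minimal illustration of the mapping as described, not any instance's actual implementation; the function name and parameters are made up for the example.

```python
def peer_trust(alice_follows_bob: bool, messages_from_alice_to_bob: int) -> float:
    """Trust from Alice to Bob on a 0-100 scale, per the mapping above.

    If Alice doesn't follow Bob, trust is 0. If she does, trust grows with
    the number of messages she has sent him: 100 - (100 / n). One message
    gives 0, two give 50, ten give 90, approaching (but never reaching) 100.
    """
    if not alice_follows_bob:
        return 0.0
    n = max(messages_from_alice_to_bob, 1)  # guard against division by zero
    return 100.0 - (100.0 / n)
```

Note the formula never reaches full trust: even a very active follow relationship stays strictly below 100, which leaves room for negative trust (the blacklist) to outweigh it.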

It seems some measure of trust threshold is used. I suspect this has subtle implications, both pros and cons. I also suspect it is ripe for abuse by censorship bots.

Let's say you trust someone who trusts someone. That third person is a shill, who in turn trusts a web of shill accounts. All of whom distrust important content producers. Presumably this also negatively impacts your calculated trust value for those content producers.

Are we ultimately looking at Bayesian filter analysis over the web of trust, with weighting?

You bring up a good point. It is important that you can determine the degree of trust separation.

If you looked at a graph of your trust list, you would start to see that you've been distrusting a bunch of users (shills) to whom you're only connected via that one shill account. Ban that one shill account and the rest disappear. Your friend could lose some reputation with you as well, because he trusted a shill, and so on.
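The "ban one account and the rest disappear" effect can be shown on a toy trust graph. This is a sketch under assumed names ("you", "friend", "shill", etc.) and an assumed adjacency-set representation; real WoT scoring would weight edges rather than treat them as all-or-nothing.

```python
def reachable(graph: dict, start: str) -> set:
    """All accounts reachable from `start` by following trust edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return seen

# Your friend trusts one shill, who in turn trusts a web of shill accounts.
trust = {
    "you": {"friend"},
    "friend": {"shill"},
    "shill": {"shill2", "shill3"},
    "shill2": set(),
    "shill3": set(),
}

before = reachable(trust, "you")   # includes the whole shill web
trust["friend"].discard("shill")   # ban the single bridge account
after = reachable(trust, "you")    # the downstream shills drop out too
```

Because the shill web was only reachable through that one bridge account, a single ban (or your friend losing enough reputation) prunes the entire subtree from your view.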

Like you said, pros and cons to everything.