I’m considering migrating away from Lemmy.world on account of its downtime and its recent blocking of piracy communities. However, I am quite fond of Lemmy.world’s other defederation choices, e.g. blocking instances that host nazi nonsense and CSAM.
Is there an easy way to compare these blocklists between instances, to better choose where to migrate to?
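For what it’s worth, this can be scripted: Lemmy instances publish their federation lists through the public API (the `/api/v3/federated_instances` endpoint on recent versions). The exact response shape has changed across releases, so treat this as a rough sketch rather than a definitive tool; it assumes blocked entries are either objects with a `domain` field or bare domain strings:

```python
import json
import urllib.request


def fetch_blocked(instance: str) -> set[str]:
    """Return the set of domains an instance has defederated from."""
    url = f"https://{instance}/api/v3/federated_instances"
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    blocked = data["federated_instances"]["blocked"]
    # Newer Lemmy versions return objects with a "domain" key;
    # older ones returned bare domain strings.
    return {e["domain"] if isinstance(e, dict) else e for e in blocked}


def compare_blocklists(a: set[str], b: set[str]) -> dict[str, set[str]]:
    """Split two blocklists into shared and instance-specific blocks."""
    return {
        "both": a & b,          # domains both instances block
        "only_first": a - b,    # blocked only by the first instance
        "only_second": b - a,   # blocked only by the second instance
    }
```

Then something like `compare_blocklists(fetch_blocked("lemmy.world"), fetch_blocked("some.other.instance"))` shows where a candidate instance agrees or disagrees with Lemmy.world’s choices.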
OP claims to agree with Lemmy.world’s defederation choices driven by CSAM, which is unquestionably nonsense. The Lemmy.world admins have made several in-depth posts explaining their defederation decisions, and none of them had anything to do with CSAM. In some jurisdictions, it would likely be illegal even to give such an explanation, as it would amount to publishing a pointer to a source of CSAM that hasn’t yet been taken down. By and large, these things are reported directly to law enforcement and cleaned up quietly, without showing up in modlogs… and in many jurisdictions the law REQUIRES handling CSAM in precisely that fashion, to prevent it from being archived before it’s taken down.
Is there a non-zero amount of CSAM in the Fediverse? Sadly, yes. Once you reach a certain scale, people do all the things… even the bad ones. This research paper (from Stanford; it’s reputable and doesn’t include or link to CSAM) discusses finding, in a sample of 320k Mastodon posts, over 100 verified samples of CSAM and something like 1k–3k likely adjacent posts (for example, posts using associated keywords). It’s pretty likely that somewhere on Lemmy there is a non-zero number of such posts, unfortunately. But the moderators of all major instances are committed to taking appropriate steps to respond and prevent recurrence.
Additionally, blahaj.zone defederated from lemmynsfw over the adorableporn community. The lemmynsfw admins take reports of CSAM very seriously, and the blahaj admins stopped short of accusing them of hosting actual CSAM; instead, they claimed that models of verified age “looked too young” and that the community was courting pederasts. These claims were largely baseless, but there was a scuffle, and some of the secondary and tertiary discussion threw around terms like CSAM loosely and incorrectly.
I think OP is probably hearing echoes of these kinds of discussions third-hand and not paying attention to the details. There are certainly no well-known, widely federated CSAM communities, and all responsible admins would take immediate action if anything like that were found. CSAM doesn’t factor into public federation decisions, because sources of CSAM can’t be discussed publicly. Responding to it is part of moderation at scale, though, and somewhere some Lemmy admin has probably had to do so.
Idk. That ‘study’/article fails to recognize the consequences and ethics of the legal situation in Japan, for example. And I think everything is a bit too vague to really claim to be scientific.
This topic sometimes makes me a bit angry/disappointed. On the one hand, I prefer my favorite places on the internet (e.g. the fediverse) not to be used for disgusting stuff and crime. On the other hand, politicians like Ursula von der Leyen, now President of the European Commission, have been using exactly this subject for years (and, in my eyes, thus abusing the stories of the victims yet again) to advertise for 100% online surveillance, getting rid of end-to-end encryption, and storing massive amounts of data about everyone, just in case…
“Just think about the children…”
And this is just not the way to solve that issue. I don’t want to live in their 1984-society fantasies, and there are better solutions around.
A second thing I find kind of alarming: the article mentions those automatic content-detection tools from Google and Microsoft. They are NOT available to the free world. If legislation really forces us to filter on upload, and only big corporations own the databases of known CSAM… this is their way to easily get rid of the fediverse, and of every platform built by and for the people.
I’m a bit disgusted. But this is why I’m interested in the subject.