Other samples:
Android: https://github.com/nipunru/nsfw-detector-android
Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw
Keras (MIT): https://github.com/bhky/opennsfw2
I feel it’s a good idea for those building native Lemmy clients to implement projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be. A minimal sketch of what that could look like is below.
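For illustration, here’s a rough sketch using the opennsfw2 package linked above (the `predict_image` call is from that project’s README; the 0.8 threshold is just an assumption, not anything the project recommends):

```python
# Rough sketch: client-side NSFW check with opennsfw2 (Keras, MIT).
import opennsfw2 as n2

NSFW_THRESHOLD = 0.8  # assumed cutoff; tune per client/deployment


def should_blur(image_path: str) -> bool:
    """Return True if the image should be hidden/blurred in the feed."""
    # predict_image returns a probability in [0, 1] that the image is NSFW.
    probability = n2.predict_image(image_path)
    return probability >= NSFW_THRESHOLD


if __name__ == "__main__":
    print(should_blur("downloaded_feed_image.jpg"))
```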
What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?
Edit:
There’s also this, which takes a bit more effort to implement properly but provides a hash that can be used for reporting purposes (see the sketch after these links): https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
Python package (MIT): https://pypi.org/project/opennsfw-standalone/
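A rough sketch of computing a NeuralHash, condensed from that repo’s sample script; it assumes you’ve already produced `model.onnx` and the seed file by following the repo’s conversion instructions:

```python
# Condensed from AppleNeuralHash2ONNX's sample script; file names assume
# you followed the repo's instructions to extract the model and seed.
import numpy as np
import onnxruntime
from PIL import Image

session = onnxruntime.InferenceSession("model.onnx")

# The seed file starts with a 128-byte header, then 96x128 float32 values.
seed = np.frombuffer(
    open("neuralhash_128x96_seed1.dat", "rb").read()[128:], dtype=np.float32
).reshape([96, 128])

# Preprocess: 360x360 RGB, scaled to [-1, 1], NCHW layout.
image = Image.open("photo.jpg").convert("RGB").resize((360, 360))
arr = (np.asarray(image).astype(np.float32) / 255.0) * 2.0 - 1.0
arr = arr.transpose(2, 0, 1).reshape([1, 3, 360, 360])

# Run the network, project the embedding through the seed matrix, and take
# the sign of each component as one bit of the 96-bit hash.
outs = session.run(None, {session.get_inputs()[0].name: arr})
bits = "".join("1" if v >= 0 else "0" for v in seed.dot(outs[0].flatten()))
print("{:0{}x}".format(int(bits, 2), len(bits) // 4))
```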
NSFW != porn
To be fair, most non-porn “NSFW” is probably “NSFL”. So NSFW, in the usage that excludes NSFL, is almost entirely porn.
Tho some people consider nudity art, with nothing sexual about it.
In many cultures around the world nudity in itself isn’t considered inappropriate or sexual.
It’s an interesting idea, and given the direction some governments are moving (looking at you, EU & UK), something like this is likely going to feature whether we like it or not. The question for me, however, is what is the nature of the training data? What some places consider “porn” (Saudi Arabia, the Vatican, the US) is just people’s bodies in more civilised places. The classic “free the nipple” campaign against Facebook is an excellent example here: why should anyone trust that this software’s opinion aligns with their own?
Yeah, I’ve been thinking about this exact scenario: how to build anything that might “filter” while respecting everyone’s worldview. I feel the best approach so far is that if filters are to be implemented, they should never be one-size-fits-all and should always be “a toggle”, ultimately respecting the user’s freedom of choice while providing the best quality tools to use effectively when needed. Something like the sketch below.
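A minimal sketch of what I mean, where every name is hypothetical and the filter is strictly opt-in:

```python
# Minimal sketch of an opt-in filter toggle (all names hypothetical).
from dataclasses import dataclass


@dataclass
class FilterSettings:
    nsfw_filter_enabled: bool = False  # off by default: the user opts in
    nsfw_threshold: float = 0.8        # user-adjustable sensitivity


def present_image(nsfw_score: float, settings: FilterSettings) -> str:
    """Decide how to show an image given its score and the user's settings."""
    if settings.nsfw_filter_enabled and nsfw_score >= settings.nsfw_threshold:
        return "blurred"  # e.g. hidden behind a tap-to-reveal overlay
    return "visible"
```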
Two of them are licensed under BSD-3, so not open source. The third one uses Firebase, so no thanks.
Edit: BSD-3 is open source. I confused it with BSD-4. My bad.
How is BSD-3 not open source? I think you are confusing “Free/Libre” and Open Source. BSD-3/MIT licenses are absolutely open source. GPL is Free/Libre and Open Source (FLOSS)
It’s not, by the OSD’s definition. Having source code available != open source.
And most Lemmy clients I have seen use GPL or AGPL licences, so they couldn’t use code licensed under BSD.
Edit: This is incorrect. I confused it with BSD-4. My bad.
What in the BSD-3 license goes against the OSD, exactly?
You are clearly confused. The BSD-3 isn’t only “having the source”; it gives you the right to package, distribute, and modify the source code at will. What it doesn’t have compared to the GPL is protection against someone not sharing their modifications (for example when used in closed-source products). In that sense it is more “freedom” than the GPL, but that freedom comes at a cost to the community, and in a sense to the freedom afforded to the original author.
It is literally approved by the OSI itself: https://opensource.org/license/bsd-3-clause/
And yes, BSD-3 libraries are compatible with the GPL: https://fossa.com/blog/open-source-software-licenses-101-bsd-3-clause-license/
Is there a confidently wrong community on Lemmy yet?
You are correct. I’m sorry, I confused it with BSD-4 as that used to be the 3rd clause. I updated my post and thank you for calling me out.
That’s still wrong though. BSD-4 is literally FSF approved. It’s just not GPL compatible and not technically OSI approved, but only on a technicality. The only difference between BSD-3 (New BSD) and BSD-4 (Old BSD) is the advertising clause. It has nothing to do with redistribution, packaging, or modification of the code. The OSI doesn’t agree with the advertising clause, so it’s not officially approved, but that doesn’t mean it isn’t open source.
That’s where I disagree. While it’s true that the only practical difference is GPL compatibility, it’s definitely against the spirit of open source and the OSD. So it is a source-available license, but calling it open source is a stretch. The simple fact that it’s unusable for GPL projects goes against what open source stands for.
True as that may be, your original statement that BSD-4 is not open source is still completely wrong, plain and simple. BSD-4 is not just having access to the source; it gives you significant rights over the source as well. The incompatibility lies in a technicality, an inconvenient one, but a technicality nonetheless. Even the FSF agrees.
By definition you can’t have some of these things open source; CSAM/NSFW detection needs to be closed source because people are constantly trying to get around it.
Security through obscurity doesn’t work. These systems need to be actually robust, which is only trustworthy with open source.
That is literally not the problem; it’s not security. It’s obfuscation on purpose, so things can’t be reverse engineered. I agree with you in most other cases, but this is one where I don’t. It’s the same reason there aren’t public hash lists of these vile images out there: the people posting them would just alter them. Same with fuzzy hashing and other strategies; these lists and bits of code must remain private so offenders aren’t tipped off that their material is tripping the detection.
This can’t be a constant cat-and-mouse game when it comes to CSAM; detection must keep working for a while. So I’m fully on board with keeping it private while we can; it’s the one area where I’m okay with doing that. If it’s open, bad actors will immediately find a way around detection, and every means of catching it will be obsolete until we find another one. While we’re waiting for that, they’ll be posting that shit everywhere, and then it won’t matter how open source Lemmy is, because all of our domains will be seized.
Because any detector has to be based on machine learning, you can open source all the code provided you keep the model weights and training data private. Something like the sketch below.
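For illustration, a minimal sketch of that split, where every name is hypothetical: the public repo ships only the loading code, and the weights file is handed out privately:

```python
# Minimal sketch: public inference code, private weights (names hypothetical).
import os
import onnxruntime  # any runtime works; the point is where the weights live


def load_detector() -> onnxruntime.InferenceSession:
    # The weights file is never committed to the public repo; it is
    # distributed out of band (e.g. to vetted instance admins) and its
    # location is supplied at deploy time.
    weights_path = os.environ["DETECTOR_WEIGHTS_PATH"]  # hypothetical env var
    return onnxruntime.InferenceSession(weights_path)
```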
But there’s a fundamental question here that comes from Lemmy being federated: how can you give CSAM-detecting code/binaries to every instance owner without trolls getting access to it?
Some instances will be run by trolls, and blackbox access is enough to create adversarial examples that bypass the model; you don’t need the source code.
That discussion is happening. Right now the prevailing idea is that it’s an instance-admin opt-in feature, where you can host the detection yourself or use a hosted tool elsewhere. On top of that, instance admins should be allowed to block image federation, so images uploaded on other instances are not copied to ours and are instead requested directly from their home instance. That would help cut down on the spread of bad material, and if something was purged on the home instance it would effectively be purged everywhere.