Other samples:

Android: https://github.com/nipunru/nsfw-detector-android

Flutter (BSD-3): https://github.com/ahsanalidev/flutter_nsfw

Keras (MIT): https://github.com/bhky/opennsfw2

I feel it’s a good idea for those building native clients for Lemmy to integrate projects like these and run offline inference on feed content for the time being, to cover content that isn’t marked NSFW but should be.
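For example, with the opennsfw2 package listed above, client-side screening could look something like this minimal sketch (the threshold value and the hide/blur decision are illustrative assumptions, not part of the library):

```python
# Minimal sketch: screen already-downloaded feed images with opennsfw2.
# NSFW_THRESHOLD and the hide/blur policy are illustrative assumptions.
import opennsfw2 as n2

NSFW_THRESHOLD = 0.8  # tune per client policy

def should_hide(image_path: str) -> bool:
    # predict_image returns the probability (0.0 to 1.0) that the image is NSFW.
    return n2.predict_image(image_path) >= NSFW_THRESHOLD

# Example: check images cached by the client before rendering them.
for path in ["cache/post_1.jpg", "cache/post_2.jpg"]:
    if should_hide(path):
        print(f"{path}: blur or hide before showing in the feed")
```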

What does everyone think about enforcing further censorship on the client side, especially in open-source clients, as long as it pertains to this type of content?

Edit:

There’s also this, which takes a bit more effort to implement properly, but it produces a hash that can be used for reporting needs: https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX
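Roughly, computing a NeuralHash with that repo’s exported ONNX model looks like the sketch below, closely following its example script; the model and seed files have to be obtained separately as the repo describes, and the file names here are just placeholders:

```python
# Sketch of computing an Apple NeuralHash via the ONNX model from
# AppleNeuralHash2ONNX. model.onnx and the seed .dat file must be obtained
# as described in that repo; the paths here are placeholders.
import numpy as np
import onnxruntime
from PIL import Image

def neural_hash(image_path: str, model_path: str, seed_path: str) -> str:
    session = onnxruntime.InferenceSession(model_path)

    # Preprocess: 360x360 RGB, scaled to [-1, 1], NCHW layout.
    image = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(image, dtype=np.float32) / 255.0 * 2.0 - 1.0
    arr = arr.transpose(2, 0, 1)[np.newaxis]

    # The seed file holds a 96x128 projection matrix after a small header.
    seed = np.frombuffer(open(seed_path, "rb").read()[128:], dtype=np.float32)
    seed = seed.reshape([96, 128])

    # Run the network, project its 128-dim output, and threshold into 96 bits.
    outputs = session.run(None, {session.get_inputs()[0].name: arr})
    bits = "".join("1" if v >= 0 else "0" for v in seed.dot(outputs[0].flatten()))
    return "{:024x}".format(int(bits, 2))

print(neural_hash("image.jpg", "model.onnx", "neuralhash_128x96_seed1.dat"))
```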

Python package (MIT): https://pypi.org/project/opennsfw-standalone/

  • Scrubbles@poptalk.scrubbles.tech · 1 year ago

    That is literally not the problem; it’s not about security. It’s deliberate obfuscation so these systems can’t be reverse engineered. I agree with you in most other cases, but this is one where I don’t. It’s the same reason there are no public hash lists of these vile images: if there were, the people posting them would just alter the images. Same with fuzzy hashing and other strategies; these lists and bits of code must remain private so offenders aren’t tipped off that their material is tripping the detection.

    This can’t be a constant cat-and-mouse game when it comes to CSAM; detection has to keep working for a while. So I’m fully on board with keeping it private while we can; it’s the one area where I’m okay with doing that. If it’s open, bad actors will immediately find a way around detection, and every method of catching them will be obsolete until we find another one. While we’re waiting for that, they’ll be going around posting that shit everywhere, and then it doesn’t matter how open source Lemmy is, because all of our domains will be seized.

    • OhNoMoreLemmy@lemmy.ml · 1 year ago

      Because any detector has to be based on machine learning, you can open source all of the code provided you keep the model weights and training data private.
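      A rough sketch of that split, with the inference code fully open but the weights distributed privately (the environment variable and the Keras model here are illustrative assumptions, not an existing Lemmy interface):

      ```python
      # Sketch: open-source inference code, privately distributed weights.
      # CSAM_DETECTOR_WEIGHTS is a hypothetical variable an admin would set
      # to the location of the privately shared model file.
      import os
      import numpy as np
      from tensorflow import keras

      def load_private_detector() -> keras.Model:
          weights_path = os.environ["CSAM_DETECTOR_WEIGHTS"]
          return keras.models.load_model(weights_path)

      def is_flagged(model: keras.Model, image_batch: np.ndarray, threshold: float = 0.5) -> bool:
          # image_batch: preprocessed images of shape (1, H, W, 3), matching the model.
          probability = float(model.predict(image_batch, verbose=0)[0][0])
          return probability >= threshold
      ```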

      But there’s a fundamental question here that comes from Lemmy being federated: how can you give CSAM-detecting code/binaries to every instance owner without trolls getting access to them?

      Some instances will be run by trolls, and black-box access is enough to create adversarial examples that bypass the model; you don’t need the source code.

      • Scrubbles@poptalk.scrubbles.tech · 1 year ago

        That discussion is happening. Right now the prevailing idea is that it’s an instance-admin opt-in feature, where you can host the detector yourself or use a tool hosted elsewhere. On top of that, instance admins should be allowed to block federating images, so images uploaded on other instances aren’t copied to ours and are instead requested directly from the home instance. That would help cut down on the spread of bad material, and if something was purged on the home instance it would effectively be purged everywhere.