Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.

    • betterdeadthanreddit@lemmy.world · 1 year ago

      Could be helpful if it silently (or at least subtly) warns the user that they’re approaching those boundaries. I wouldn’t mind a little extra assistance preventing those embarrassing after-the-fact realizations. It’d have to be done in a way that preserves privacy though.

      • Overzeetop@lemmy.world · 1 year ago

        Like most scientific and technical advances, it could be an amazing tool for personal use. It won’t, of course. It will be used to make someone rich even richer, and to control or oppress people. Gotta love humanity.