• Bak@lemmy.world · 1 year ago

    Any explainers? How does building a concept association network make AI safer?

    • TechLich@lemmy.world · 1 year ago

      I think the idea is that there are potential alignment issues in LLMs because it’s not clear which concepts map to which activations. That makes it difficult to see what they’re really “thinking” about when they generate text, e.g. whether they’re being misleading or are incorrectly associating concepts that shouldn’t be connected.

      The idea here is to use some mechanistic interpretability techniques to see which text activates which neurons in an LLM, then crowdsource the meanings behind those activations and see whether that’s something you could use to look up context on what an AI is doing internally. Sort of trying to make a “Wikipedia of AI mind reading”.

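      For a rough picture of the “see which text activates which neurons” part, here’s a minimal sketch in Python, assuming GPT-2 via the Hugging Face transformers library. The layer number, threshold, and top-k are arbitrary choices for illustration, not anything specific to the project being discussed:

      ```python
      # Minimal sketch: record which MLP units fire for each token,
      # the raw material you'd then ask humans to label with concepts.
      import torch
      from transformers import GPT2Tokenizer, GPT2Model

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2Model.from_pretrained("gpt2")
      model.eval()

      captured = {}

      def hook(module, inputs, output):
          # Save the post-MLP activations for every token position.
          captured["mlp_out"] = output.detach()

      # Attach to the MLP of layer 5 (arbitrary choice for the example).
      handle = model.h[5].mlp.register_forward_hook(hook)

      text = "The cat sat on the mat."
      enc = tokenizer(text, return_tensors="pt")
      with torch.no_grad():
          model(**enc)
      handle.remove()

      acts = captured["mlp_out"][0]  # shape: (seq_len, hidden_dim)
      tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())

      # For each token, list the units that fire most strongly. The
      # crowdsourcing step would be asking people what concept those
      # units seem to track across many such examples.
      for tok, row in zip(tokens, acts):
          top = torch.topk(row, k=3)
          print(tok, [(i.item(), round(v.item(), 2))
                      for v, i in zip(top.values, top.indices)])
      ```

      Real projects in this space work at a much bigger scale and often on the wider pre-projection MLP activations, but the core loop is the same: run text through, record which units light up, and attach human-readable labels to them.
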
      Dunno how practical or effective that approach is, but it’s an interesting idea.