I don’t know: it’s not just the outputs posing a risk, but also the tools themselves
Yeah, that’s true. Poisoning the training corpus of models is at least a potential risk. There’s a whole field of AI security stuff out there now aimed at LLM security.
It shouldn’t require additional tools to check for such common flaws.
Well, we are using them today for human programmers, so… :-)
True that haha…