• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: June 29th, 2023






  • I will admit this is almost entirely gibberish to me, but I don’t really have to understand it. What’s important here is that you had any process at all by which you determined which answer was correct before writing an answer. The LLM cannot do any version of that.

    You find a way to answer a question and then provide the answer you arrive at; the LLM never saw the prompt as a question or its own text as an answer in the first place.

    An LLM is only ever guessing which word probably comes next in a sequence. When the sequence was the prompt you gave it, it determined that Homer was the most likely word to say. And then it ran again. When the sequence was your prompt plus the word Homer, it determined that Simpson was the next most likely word to say. And then it ran again. When the sequence was your prompt plus Homer plus Simpson, it determined that the next most likely word in the sequence was nothing at all. That triggered it to stop running.

    It did not assign any sort of meaning or significance to the words before it began answering, and did not have a complete idea in mind before it began answering. It had no intent to continue past the word Homer when writing the word Homer, because it only works one word at a time. ChatGPT is a very well-made version of hitting the predictive text suggestions on your phone over and over. You have ideas. It guesses words.
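    The word-at-a-time loop described above can be sketched in a few lines. This is only a toy illustration, not how a real model is implemented: the "model" here is a hard-coded lookup table of invented most-likely-next-word picks, standing in for the actual neural network.

    ```python
    END = "<end>"  # stands in for the model predicting "nothing at all"

    # Hypothetical next-word picks, invented purely for illustration.
    # A real LLM computes a probability over every token at each step.
    MOST_LIKELY = {
        ("Draw", "a", "cartoon", "dad"): "Homer",
        ("Draw", "a", "cartoon", "dad", "Homer"): "Simpson",
        ("Draw", "a", "cartoon", "dad", "Homer", "Simpson"): END,
    }

    def generate(prompt_tokens):
        """Pick the most likely next word, append it, and run again,
        until the most likely continuation is nothing at all."""
        sequence = list(prompt_tokens)
        while True:
            next_token = MOST_LIKELY[tuple(sequence)]
            if next_token == END:          # nothing more to say: stop
                return sequence[len(prompt_tokens):]
            sequence.append(next_token)    # run again with one more word

    print(generate(["Draw", "a", "cartoon", "dad"]))  # ['Homer', 'Simpson']
    ```

    Note that "Simpson" is only produced on the second pass through the loop, after "Homer" has already been emitted; nothing in the loop plans past the current word.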




  • But I don’t think the software can differentiate between the ideas of defined and undefined characters. It’s all just association between words and aesthetics, right? It can’t know that “Homer Simpson” is a more specific subject than “construction worker” because there’s no actual conceptualization happening about what these words mean.

    I can’t imagine a way to make the tweak you’re asking for that isn’t just a database of every word or phrase that refers to a specific known individual, with users’ prompts checked against it, and I can’t imagine that’d be worth the time it’d take to create.
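    For what it’s worth, the database approach described above would amount to something like the sketch below. The name list and matching rule are invented for illustration; the real problem is that the list would need to cover every name, nickname, and paraphrase that picks out a specific individual.

    ```python
    # Hypothetical blocklist check: flag prompts that mention a
    # specific known individual. Entries are made up for this example.
    KNOWN_INDIVIDUALS = {
        "homer simpson",
        "marge simpson",
        # ...and millions more entries it would never be practical to maintain
    }

    def mentions_known_individual(prompt: str) -> bool:
        """Naive substring match of the prompt against the name database."""
        text = prompt.lower()
        return any(name in text for name in KNOWN_INDIVIDUALS)

    print(mentions_known_individual("draw homer simpson at work"))  # True
    print(mentions_known_individual("draw a construction worker"))  # False
    ```

    Even this crude version shows the problem: it only catches exact phrasings, so "the yellow cartoon dad from Springfield" sails right through.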



  • Of course, one reason I might mind is if the machine uses what it learns from reading my work to produce work that could substitute for my own. But at the risk of hubris, I don’t think that’s likely in the foreseeable future. For me, a human creator’s very humanity feels like the ultimate trump card over the machines: Who cares about a computer’s opinion on anything?

    This is really naïve. A huge number of people simply don’t care about creative works in those terms. We’re all encouraged to treat things as content to be consumed and discarded, not as something to be actually thought about in terms of what it was expressing and why. The only value of a creator in that framework is that the creator fuels the machine, and AI can fuel the machine. Not especially well at the moment, but give it some time.