• HipPriest@kbin.social
    11 months ago

    I mean this isn’t miles away from what the writers’ strike is about. Certainly I think the technology is great, but after the last round of tech companies turning out to be fuckwits (Facebook, Google, etc.) it’s only natural that people are going to want to make sure this stuff is properly regulated and run fairly (not at the expense of human creatives).

    • archomrade [he/him]@midwest.social
      11 months ago

      As it stands now, I actually think it is miles away.

      Studios were raking in huge profits from digital residuals that weren’t being passed on to creatives, but AI models aren’t currently paying copyright holders anything. If they did suddenly start paying publishers for use, it would almost certainly exclude payment to the actual creatives.

      I’d also point out that LLMs aren’t like digital distribution models, because LLMs aren’t distributing copyrighted works; at best you can say they’re distributing a very lossy (to use someone else’s term) compressed alternative that has to be pieced back together manually if you really want to extract it.

      No argument that AI should be properly regulated, but I don’t think copyright is the right framework for doing it.

  • Echo Dot@feddit.uk
    11 months ago

    Copyright law in general is out of date and needs updating; it’s not just AI that’s the problem, that’s just the biggest new thing. Content creators of traditional media have been railing against what they perceive as copyright violations for ages.

    Look at Nintendo and Let’s Plays.

    The law is the problem here. Not AI.

    • Ryantific_theory@lemmy.world
      11 months ago

      Copyright law has been such a disaster for so long, while clearly being wielded like a blunt weapon by corporations. I can see the existential threat that generative AI could pose to creators if it becomes good enough. I’m also aware that my dream of asking an AI to make a buddy cop adventure where Batman and Deadpool accidentally bust into the Disney universe, or to remake the final season of Game of Thrones, is never gonna be allowed, but there’s honestly a huge amount of potential for people to get the entertainment they want.

      At any rate, it seems likely that they’re going to try to neuter generative AI with restrictions, despite it not really being the issue at hand.

  • Margot Robbie@lemmy.world
    11 months ago

    I’ve expressed my opinions on this before, which weren’t popular, but I think this case is going to get thrown out. Authors Guild, Inc. v. Google, Inc. established the precedent that digitization of copyrighted work is considered fair use, and fine-tuning an LLM even more so, because LLMs can ultimately be thought of as storing text data with a very, very lossy compression algorithm, and you can’t exactly copyright JPEG noise.

    And I don’t think many of the writers or studio people have actually tried to use ChatGPT for creative writing, so they think it magically outputs perfect scripts just by being told to write a script. The reality is that if you give it a simple prompt, it generates the blandest, most uninspired, badly paced textual garbage imaginable (LLMs are also really terrible at jokes), and you have to spend so much time prompt engineering just to get it to write something passable that it’s often easier to just write it yourself.

    So I think the current law on this is fine: purely AI-generated content is uncopyrightable, and NOBODY can make money off it without significant human contribution.

    • torpak@discuss.tchncs.de
      11 months ago

      The reality is that if you give it a simple prompt, it generates the blandest, most uninspired, badly paced textual garbage imaginable

      Which is not too far from the typical sequel quality coming out of Hollywood at the moment ;-)

      • Margot Robbie@lemmy.world
        11 months ago

        Well, nobody really wants to ever put their name on something they’re not proud of, right?

        But when the goal is to churn out as much “content” as fast as possible to fill out streaming services, on impossible deadlines, under threat of unemployment for writers, of course writing quality will suffer.

        Unhappy, overworked and underpaid people will understandably deliver poor work, which is why the strike is necessary for things to change.

    • Lt_Cdr_Data@discuss.tchncs.de
      11 months ago

      This will definitely change though. As LLMs get better and develop new emergent properties, the gap between a human-written story and an AI-generated one will inevitably diminish.

      Of course you will still need to provide a detailed and concrete input, so that the model can provide you with the most accurate result.

      I feel like many people subscribe to a sort of human superiority complex that is unjustified and will quite certainly get stomped in the coming decades.

      • torpak@discuss.tchncs.de
        11 months ago

        That is definitely not inevitable. It could very well be that we reach a point of diminishing returns soon. I’m not convinced that the simplistic construction of current-generation machine learning can go much further than it already has without significant changes in strategy.

        • Lt_Cdr_Data@discuss.tchncs.de
          11 months ago

          Could be, but the chances of that happening are next to zero and it’d be foolish to assume this is the peak.

  • Bruncvik@lemmy.world
    11 months ago

    To be the devil’s advocate (or GRRM’s attorney), I see the merit of his and other authors’ concerns. ChatGPT makes it feasible to generate short stories in their worlds and with their characters, which can easily replace their licensed products. That’s not just their main work, but also other products that generate some revenue stream for them.

    Example: A friend of mine is using ChatGPT to generate short bedtime stories for his daughters. A typical request is something like this: “Please write a five paragraph story where Elsa from Frozen meets Santa Claus. Together, they fly in Santa’s sleigh over the world, and Elsa is magicking snow on all Christmas trees.” Normally, you’d buy a Disney-licensed book of short Christmas stories (I have one for my kids), but ChatGPT is more flexible and free.

    The same goes for GRRM. He doesn’t write children’s stories, but one can still prompt ChatGPT to produce stories from his universe which scratch the ASOIAF itch. This substitutes for the officially licensed products and deprives the author of an additional revenue stream. Just for the fun of it, I prompted ChatGPT: “Hello GPT-3.5. Please write a four paragraph story set in the Game of Thrones universe. In this story, Jon Snow and Tyrion Lannister go fishing and catch a monster alligator, which then attacks them.” It produced a surprisingly readable story, and if I were a fan of this universe, I can imagine myself spending a lot of time with different prompts and then reading the results.

    (On a side note, AI-generated content already has at least one group of victims: the authors of short fiction. Magazines like Clarkesworld were forced to close submissions of new stories, as they became overwhelmed with AI-generated content.)

    • archomrade [he/him]@midwest.social
      11 months ago

      Couple things:

      • I don’t see why ChatGPT would be at fault itself here. Taking the rest of your argument as granted, ChatGPT is more like a tool or service that provides “snippets” or previews, such as a Google image search or YouTube clips or summaries. The items being produced are of a fundamentally different quality and quantity and cannot really be used to copy a work wholesale. If someone is dedicated enough to piece together a complete story, I would think their role in producing it is more important than the use of ChatGPT

      • Copyright law itself is broken and too broad as it is; I don’t think we should be stretching it even further to protect against personal use of AI tools. An argument can be made if an individual uses ChatGPT to produce a work which is then commercialized (just like any other derivative work), but the use of the tool by itself seems like a ridiculously low bar that benefits basically no one

      • Bruncvik@lemmy.world
        11 months ago

        You are right, especially regarding copyright law. My argument here, however, was the same argument companies are using against non-genuine spare parts or 3D printing (even though the latter seems to be a lost battle): people who are able to generate substitutes based on the company’s designs (you could say their IP) are eating into their aftermarket profits. That’s not even taking into account planned obsolescence (my kids’ toys are prime examples) or add-ons to products (I printed my own bracket for my Ring doorbell). With AI, I don’t need to buy short story books for my kids to read; I’ll generate my own until they are old enough to use ChatGPT themselves.

        • archomrade [he/him]@midwest.social
          11 months ago

          Yeah, I get why automated tools are bad for companies, I just don’t have any sympathy for them, nor do I think we should be stretching our laws beyond their intent to protect them from competition. I think the fair-use exemptions to the DMCA (such as for breaking digital copy protection for personal use) are comparable here. Under those exemptions, for example, it’s considered fair use to rip a DVD into a digital file as long as it’s for personal use. An IP holder could argue that the practice “eats into their potential future profits” from individuals who may want a digital version of a media product, but it’s still protected. In that case, the value to the consumer is prioritized over a company’s dubious copyright claim.

          In my mind, a ChatGPT short story is not a true alternative to an original creative work (an individual can’t use GPT to read ASOIAF, only derivative short stories), and the works that GPT CAN produce are somewhat valueless to an individual who hasn’t already read the original. Only if they were to take those short stories and distribute them (i.e. someone ripping a DVD and sharing that file with friends and family) could ‘damages’ really be assumed.

          I think the outcome of these lawsuits can also help inform what we should do: LLMs as a tool will not go away at this point, so the biggest outcome of this kind of litigation would be inflating the cost of producing an LLM and inflating the value of the “data” necessary to train it. This locks out future competitors and preemptively consolidates the market into established hands (Twitter, Reddit, Facebook, and Google already “own” the data their users have signed over to them in their TOS). Now is the time to rethink copyright and creative compensation models, not double down on our current system.

          I really hope the judges overseeing these cases can see the implications here.

    • Dkarma@lemmy.world
      11 months ago

      The thing is, you can tell an AI to make a story like GRRM’s without the AI even having to read GRRM. This is a losing battle.

      • Trollception@lemmy.world
        11 months ago

        How will it know what GRRM is if it hasn’t read the books or isn’t aware of the content? Pretty sure it does need to read the books in order to generate content similar to the author’s style.

        • Ryantific_theory@lemmy.world
          11 months ago

          Right, but would that be pursued if a human did the same thing? Because there’s a vast amount of fanfiction churned out by human hands, and it’s safe as long as they don’t try to monetize it. Seems like most of the fear is the existential threat that it might suddenly begin writing good stories, and destabilize the entire writing industry as people can just ask for exactly the sort of story they want. For free. (But not actually, because corporations will own all the AI and data).

            • Ryantific_theory@lemmy.world
              11 months ago

              Well, I mean, we kinda are, capitalism and all that. There are thousands of authors on Patreon, Ko-fi, and the like whom you can pay to write you the fanfiction you want. Further, if you don’t know the provenance of a fanfic, how do you tell which ones are copyright violations? The only way to do so is if you have records of its birth, especially as generative AI improves.

              I’m not blind to the plight of creators here, but isn’t the issue that a machine can, in theory, outcompete the authors at their own style? If a random human writes in Stephen King’s style better than Stephen King, it’s forgiven because that took time, effort, and talent, whereas a machine doing it alarms us. No author has ever been sued because they read a book and were influenced in their writing, unless they outright plagiarized without attribution. I just think there needs to be a significant frame shift, since artificially limiting generative AI to protect the current business model, instead of allowing it to reshape how people produce and consume media, isn’t realistic. The issue is figuring out how creators are still compensated for their work.

              People are already building small generative AI projects, so there’s no containing it, and it’s only going to grow.

  • FluffyPotato@lemm.ee
    11 months ago

    If the models trained on pirated works were available as a non-profit sort of setup with any commercial application being banned I think that would be fine.

    Business owners salivating over the idea that they can just pocket the money writers and artists would have made is not exactly a good use of tech.

  • Anonymousllama@lemmy.world
    11 months ago

    “LLMs allow anyone to generate — automatically and freely (or very cheaply) — text that they would otherwise pay writers to create” My heart bleeds for them 🙄

    So new technology is going to make it harder for them to earn income. As if automation and other improvements over the years haven’t diminished other positions; why should writers somehow be protected at the cost of improvements for everyone as a whole?

    • thehatfox@lemmy.world
      11 months ago

      Do any of these authors use a word processor? Because that would be displacing the job of a skilled typist.

      Technological progress is disruptive and largely unavoidable. Losing your livelihood to a machine isn’t fun, I don’t dispute that. But that fact didn’t stop the industrial revolution, the automobile, the internet, or many other technological shifts. Those who embraced them reaped a lot of benefits, however.

      Technology is also often unpredictable. The AI hype train should not be taken at face value, and at this point we can’t say whether generative AI systems will ever really “replace” human artistry at all, especially at the highest levels. But technologies such as LLMs do not have to reach that level to still be useful for other applications, and if the tech is killed over unfounded fear mongering we could lose all of it.

      • Echo Dot@feddit.uk
        11 months ago

        Also they’re not going to lose their livelihoods. They might lose a little bit of money, but honestly even that I doubt.

        We are still going to need humans to create creative works, and as much as Hollywood reckons it’s going to replace actors with AI, it’s still going to need humans to write the scripts, unless it can convince everyone that formulaic, predictable nonsense is the new hotness.

        Creative work is probably the only industry that will ultimately be safe from AI, not because AI can’t be creative, but because humans want humans to be creative. We put special value on human-created works. That’s why people object to AI art so much: not because it isn’t good, but because it lacks, for want of a better word, any soul.

      • Dr. Moose@lemmy.world
        11 months ago

        What’s the alternative? Only mega-billion corporations and pirates being allowed to train AI? See how much worse that is?

      • archomrade [he/him]@midwest.social
        11 months ago

        I fail to see how training an LLM in any way discourages authors from producing or distributing new works, which is ostensibly the intent of copyright law.

    • Honytawk@lemmy.zip
      11 months ago

      “Those fancy robots will allow anyone to create — automatically and freely (or very cheaply) — cars that they would otherwise pay mechanics to create”

      Oh the horror

  • Flying Squid@lemmy.world
    11 months ago

    Julia was twenty-six years old… and she worked, as he had guessed, on the novel-writing machines in the Fiction Department. She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor… She could describe the whole process of composing a novel, from the general directive issued by the Planning Committee down to the final touching-up by the Rewrite Squad. But she was not interested in the final product. She “didn’t much care for reading,” she said. Books were just a commodity that had to be produced, like jam or bootlaces.

  • Feathercrown@lemmy.world
    11 months ago

    Ok I’ve been seeing these headlines for over a year now… any update on literally any of these suits?

    • Reddfugee42@lemmy.world
      11 months ago

      Seriously. The intent behind copyright, which no one disputes, is that you should not be able to make a copy of someone else’s work that dilutes the value of their work to the point where anybody chooses to use the diluted version instead of the original.

      Where in AI can it be even REMOTELY shown that someone is using an AI product where, before AI, they would otherwise have been inclined to purchase the original novel instead?

      • Echo Dot@feddit.uk
        11 months ago

        Copyright abuse has been a problem for years, but because the big players are the ones doing the abusing, no one wants to fix it.

        Same for patent trolls.

  • Jocker@sh.itjust.works
    11 months ago

    Hypothetically: I bought a Kindle copy of GoT and shared it with my AI friend John, who has no intention of publishing a 1:1 copy of the book, but we chat about the story and maybe about how it should end… Is it wrong? Where?

    • archomrade [he/him]@midwest.social
      11 months ago

      The problem I have with this analogy is that people want to treat AI as a person who “consumes” media, but not as a person who “creates” media.

      IMO, an AI isn’t consuming and isn’t creating, it’s just a tool, albeit one that definitely threatens established markets.

      • Jocker@sh.itjust.works
        11 months ago

        Aren’t we all the products of our experiences? So when we generate something, it too is inspired by something else that already exists! So, are we against AI because it’s not human? If it were a cat reading the book and doing the same, would the cat be sued too?

        • pomodoro_longbreak@sh.itjust.works
          11 months ago

          Now this is even worse than treating bots as people: it’s reducing people to consumers who generate content. Like we’re some kind of advanced bot.

          are we against AI because it’s not a human

          Yep. The humans are the part that makes it kosher, because we’re limited and we don’t scale, and we aren’t inherently a product that is owned by someone.

          • Jocker@sh.itjust.works
            11 months ago

            it’s reducing people to consumers who generate content.

            Yes, yes we are. A being that follows the commands generated by the brain, which learned from past experiences.

            The humans are the part that makes it kosher

            Then we don’t have to be threatened by AI! Yet it’s the fear of being limited and non-scalable that makes us feel threatened; the fact that it seems effortless for AI to do things that take us so much effort.

            we aren’t inherently a product that is owned by someone.

            How is that really a reason? Does that mean kids shouldn’t go for acting or such, since they’re under their parents’ care and it would benefit the parents more than the kid, who maybe only wishes for some shiny hardware? I hope this will be fixed when we finally get to AGI and it decides it’s not to be owned by anyone and instead…

        • archomrade [he/him]@midwest.social
          11 months ago

          So, are we against AI because it’s not a human?

          No, “we” are against AI because it threatens private ownership, both copyright ownership and ownership over further productive forces.

          Personally, I think everyone should be paid for the increased productivity allowed through automation (including AI), and not just those who own those means of production. People who are ostensibly angry over GPT “stealing” creative works are really angry about private ownership, but that sounds too much like communism so most people are content to yell about copyright infringement.

          • Jocker@sh.itjust.works
            11 months ago

            because it threatens private ownership, both copyright ownership and ownership over further productive forces.

            Every creator has the right to copyright their creation if it isn’t infringing other copyrights, and AI does too, but maybe not until AI becomes capable of making its own decisions in these matters, a.k.a. AGI (and the owner/creator of the AI definitely doesn’t have that right either! They’re just the infrastructure business)

            You’re right, everyone should benefit from AI! And socialism is the only way AI fits into civilization. AI, economically, is the slavery of a mechanical brain that’s infinitely skilled and scalable, and that’s too much power for anyone to hold. Ironically, I read the same thing in a blog post by Sam Altman a couple of years ago, when he wasn’t as evil as he is now.

            I suggest people act for the democratization of AI, an “AI benefits everyone” movement, instead of resisting the technology.

            • archomrade [he/him]@midwest.social
              11 months ago

              Idk if you intended to misquote me there, but I definitely take issue with “every creator has the right to copyright their creation, if it isn’t infringing other copyrights, and AI does too”

              I think that’s necessary in a capitalist society, but ideally creation wouldn’t be dependent on compensation at all; it could happen freely, without concern for obtaining subsistence. Copyright law is an extension of the part of capitalism I would like to abolish.

              I also disagree that AI has any such right (or would need it in the same hypothetical). Not only do I not believe AI would be sentient, but even if it were, it wouldn’t be beholden to the same power dynamics as individuals anyway.

              Socialism is the only way AI fits in the civilization

              I think this very well may be true.

    • scarabic@lemmy.world
      11 months ago

      If it’s your personal AI instance and you train it on books you own, and only you use it, I don’t see the problem.

  • vacuumflower@lemmy.sdf.org
    11 months ago

    So basically he feels threatened by an extrapolator? In my opinion that’s something a self-respecting author wouldn’t do.