I mean, I agree that the developers of these AI tools need to be made to be more ethical in how they use stuff for training, but it is worth noting that that’s kind of also how humans learn. Every human artist learns, in part, by absorbing the wealth of prior art that they experience. Copying existing pieces is even a common way to practice.
Yeah, that shrug you did about how it would be nice if AI didn’t steal art is part of the problem. Shrugging and saying “yoink” doesn’t work when you want to copyright stuff.
Humans learn by assimilating other people’s work and working it into their own style, yes. That means the AI is the human in this scenario, and the AI owns the artistic works. Since AI does not yet have the right to own copyrights, any works produced by that AI are not copyrightable.
That is if you accept that AI and humans learn art in the same way. I don’t personally think the two are analogous, but it doesn’t matter for this discussion.
There’s a reason I said “they should be made to be more ethical” and not just “they should be more ethical”. I know that they aren’t going to do it themselves and I’ll support well-written regulations on them.
but it doesn’t matter for this discussion.

Isn’t that what almost your entire comment was about?
The argument was basically “that is how humans learn too”. I accepted that analogy because it doesn’t change my conclusion that AI-produced work can’t be copyrighted. Had the discussion been about something else, I wouldn’t have accepted that argument.
To play devil’s advocate: what about artists who use assistants? Is using AI not the same as using an assistant?
The difference is that a human artist can then make new, unique art and contribute to the craft so it can advance, and they can make a living off it. AI-made art isn’t unique; it’s a collage of other art. To get art from AI, you have to feed it prompts of things it’s seen before. So when AI is used for art, it takes jobs from artists and prevents the craft from advancing.
My point is that this description literally applies just as much to humans. Humans are also trained on vast quantities of things they’ve seen before and meanings associated with them.
it’s a collage of other art

This is genuinely a misunderstanding of how these programs work.
when AI is used for art it takes jobs from artists and prevents the craft from advancing.

Because the only art anyone has ever made is art that someone paid them for? There are a lot of art forms that generally aren’t commercially viable, and it’s very odd to insist that commercial viability is what advances an art form.
I do actually get regularly paid for a kind of work that is threatened by these things (although in my case it’s LLMs, not images). For the time being I can out-perform ChatGPT and the like, but I don’t expect that that will last forever. Either I’ll end up incorporating it or I’ll need to find something else to do. But I’m not going to stop doing my hobby versions of it.
Technology kills jobs all the time. We don’t have many human calculators these days. If the work has value beyond the financial, people will keep doing it.
Human brains don’t have perfect recollection. Every time we retell a story or remember a memory or picture an image in our head it is distorted with our own imperfections.
When I prompt an AI to create an image it samples the images it learned from with perfect recollection.
AI does not learn the same way humans do.
This is actually incorrect. The models these AIs run on have, by definition, imperfect recall; otherwise they would be ENORMOUS. That’s exactly the opposite of how these work.
They train a statistically weighted model to predict outputs based on inputs. It has no actual image data stored internally; it can’t.
This is incorrect actually. The models these AIs run from by definition have perfect recall and that is why they require ENORMOUS resources to run and why ChatGPT became less effective when the resources it was allocated were reduced.
-ChatGPT
No, they take exponentially increasing resources as a consequence of having imperfect recall. Smaller models have “worse” recall. They’ve been trained with smaller datasets (or pruned more).
As you increase the size of the model (number of “neurons” that can be weighted) you increase the ability of that model to retain and use information. But that information isn’t retained in the same form as it was input. A model trained on the English language (an LLM, like ChatGPT) does not know every possible word, nor does it actually know ANY words.
All ChatGPT knows is which tokens (chunks of characters) are statistically likely to follow one another in a long sequence. With enough neurons and layers, combined with large amounts of processing power and training time, this results in a weighted model which is many orders of magnitude smaller than the dataset it was trained on.
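As an aside for readers: the “statistics about what follows what” idea can be sketched with a toy example. This is a hypothetical character-level bigram model, vastly simpler than ChatGPT (which uses a neural network over tokens), but it shows the same principle: the model stores transition statistics, not the training text itself.

```python
# Toy bigram model: stores only which character tends to follow which,
# not the training text itself. (Illustrative sketch, not how ChatGPT
# is actually implemented.)
from collections import Counter, defaultdict
import random

corpus = "the theory then thoroughly tested the thesis"

# Count, for each character, how often each next character follows it.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, length, seed=0):
    """Sample new text from the transition statistics alone."""
    rng = random.Random(seed)
    out = start
    for _ in range(length - 1):
        counts = follows.get(out[-1])
        if not counts:
            break  # no known successor for this character
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(generate("t", 20))
```

The `follows` table is far smaller than any real corpus would be, and it cannot reproduce the corpus verbatim; it only knows which transitions are likely.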
Since the model weights themselves are smaller than the input dataset, it is literally impossible for the model to have perfect recall of the input dataset. So by definition, these models have imperfect recall.
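To put rough numbers on “many orders of magnitude smaller” (these are approximate, publicly reported figures for Stable Diffusion v1 and LAION-5B; the per-image size is an assumption, and Stable Diffusion was trained on subsets of LAION, so treat this as a back-of-envelope sketch):

```python
# Back-of-envelope comparison of model size vs. training-data size.
# All figures are approximate, publicly reported numbers, not exact values.
parameters = 860_000_000          # ~860M weights in the Stable Diffusion v1 UNet
bytes_per_param = 4               # 32-bit floats
model_bytes = parameters * bytes_per_param            # ~3.4 GB

dataset_images = 5_850_000_000    # ~5.85B image-text pairs in LAION-5B
avg_image_bytes = 40_000          # assumed ~40 KB per compressed image
dataset_bytes = dataset_images * avg_image_bytes      # ~234 TB

print(f"model:   {model_bytes / 1e9:.1f} GB")
print(f"dataset: {dataset_bytes / 1e12:.0f} TB")
print(f"ratio:   ~1 : {dataset_bytes // model_bytes:,}")
```

Even with generous assumptions, the weights are tens of thousands of times smaller than the data they were trained on, which is why verbatim storage of the training set is ruled out.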
In other words they require exponentially more input because the AI doesn’t know what it is looking at.
It uses its perfect recollection of that input to create a ‘model’ of what a face should look like and stores that model like a collage of all the samples and then uses that to reproduce a face.
It’s perfect recollection with an extra step.
Well, what you described is simply not perfect recollection. It is many small tidbits of information that, combined together, can make a larger output.
That’s exactly how our brains work too
I’m pretty sure that the way they constantly fuck up hands is a solid demonstration that these AI tools do not have a perfect recollection
The reason they fuck up hands is that hands are usually in motion when pictures are taken, and they have many more configurations than any other body part.
So when these image AIs refer back to all the pictures of hands they’ve been fed and use them to create an ‘average approximation’ of what a hand looks like, they include the motion blur from some samples, a middle finger sticking up from another, or extra fingers from pictures of people holding hands, and mismatch them together even when it doesn’t fit the picture being created.
The AI doesn’t know what a hand is. It is just mixing together samples from its perfect recollection.
What? No
How many pictures do you see online where the hands are in motion, or even blurred?
Hands are usually behind objects when they hold something, and they can indeed have tons of variations and configurations. Even human artists fuck up hands all the time, or just don’t draw them at all.
AIs don’t combine samples. If they did, they wouldn’t be able to generate new pictures of whatever subject you want, in a specific style you want, and then produce multiple variations of that picture.
It isn’t copy-and-paste; it is interpreting the image and modifying it based on the prompt.
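For what it’s worth, the “not copy-and-paste” point can be illustrated with the loop structure diffusion models use: start from pure noise and iteratively refine it. The sketch below is a toy; the `fake_denoiser` is a hypothetical stand-in for a trained network that, in a real system, would predict this from learned weights and the text prompt rather than from a stored target.

```python
# Highly simplified sketch of iterative denoising, the idea behind
# diffusion image generators. NOT a real implementation: the "denoiser"
# here cheats and knows the target, purely to show the loop structure.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))       # stand-in for "what the model would predict"

def fake_denoiser(noisy):
    # A real model predicts this from learned weights plus the prompt;
    # here we return the target directly to keep the sketch runnable.
    return target

image = rng.standard_normal((8, 8))      # begin with pure noise
for step in range(50):
    predicted = fake_denoiser(image)
    image = image + 0.1 * (predicted - image)  # small step toward the prediction

print(float(np.abs(image - target).mean()))  # tiny: the noise became an image
```

Nothing is pasted from anywhere; the output emerges from repeatedly nudging noise toward the model’s prediction, which is why you can ask for variations and get different results each time.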
In which case the machine would get the copyright (which legally they can’t now), not the prompter.
I agree. Well, that is assuming there’s no human editing of the results of the AI tool afterwards. There was heaps of it in the piece referenced in the article, and there usually is if you want to get something actually good. The piece was entered into a photomanipulation-and-editing category too, which seems very much in keeping with the spirit of the competition. But the reason I said that is that the comment I was replying to wasn’t about who holds the copyright of the tool’s output; it was about the value of the output and the tools in general.
The tools are valuable for sure.
It looks like we’re still figuring out where the law stands on copyright. For now I’m glad to see rulings like this, as it will, hopefully, take some of the wind out of Hollywood studios’ sails and aid union negotiations.
Well, that is assuming there’s no human editing of the results of the AI tool afterwards. There was heaps of it in the piece referenced in the article

If there was, then the artist should have disclosed the heaps of human editing that went into the creation of this piece, and he would have been granted a copyright.
The fact that he refused to disclose what, if anything, was done after the AI spat out the result is what resulted in him not being granted copyright.
He did? This article mentions it only briefly, but he talked about it more when it was first getting attention for winning the competition. Is this something he did in the court case that you’ve read elsewhere?
But also, if you used Midjourney at the time the image was made, you’ll know that you did not get an image like that straight out of it.
This wasn’t a court case.
This was a copyright application.
The Copyright Office asked him to provide them with an unedited version of the image generated by Midjourney in order to determine how much (human) work went into the final version.
Allen refused to provide an unedited version, so the Copyright Office had no way to verify how much or how little of the work was done by the artist versus the AI. They had to assume that the vast majority of the work was done without any human artistic contribution.
They were essentially forced to reject his copyright application because he refused to provide evidence that he actually did any kind of creative artistic work.
Copyright just isn’t compatible with AI; we need to abolish it.
If a picture gets generated, who is the owner? The one writing the prompt? The AI that generated it? The researchers who created the AI? The artists whose work the picture is based on?
How about none of them? It is a picture, a piece of information. It doesn’t need an owner.
Can we get UBI before we start abolishing people’s income though?
What? Humans don’t learn to paint by looking at paintings; most people learn by just painting. Humans can also make art without ever having seen any. AI, on the other hand, can only draw from other people’s works; it has no creativity of its own.