pouët.net


AI art in compos

category: general [glöplog]
So, you don't know what they actually do either...

Look, you are a demoscene legend and I am not. But this situation is not as black and white as you are presenting it. You have been pretty level-headed in all my previous interactions with you, so I am just asking nicely. Please research. This is not what RJ Palmer said it is. There are issues with this technology, and there are aspects of it to be uncomfortable with, sure. What it does not do, however, is "knock off existing artists' work", by any stretch of the imagination.
added on the 2022-08-16 19:15:26 by introspec
You're not exactly debunking.
added on the 2022-08-16 19:19:36 by Gargaj
showcasing how your algorithm can replicate certain styles popularized by certain artists is not any proof of malicious intent. styles are not under copyright. any artist can make any drawing in any style.

not wanting to allow an algorithm to use certain styles because "they're gonna take our jobs" is not a valid argument against technological development. i get why some people might feel threatened by this, suddenly their uniqueness is a lot easier to replicate without involving them in the process. except it already was, it just used to take more/other human effort to do it.

all jobs evolve through time, some faster than others. if you're good at what you do you'll use the new technology to cut some corners, save time and show your inherent value to the process through other aspects (an algorithm doesn't have years of experience and sensibility/context for the project at hand). and if you're not willing to adapt, well, you'll be the last blacksmith in a city full of cars, and the last portrait painter while everyone is taking photographs.
added on the 2022-08-16 19:53:38 by psenough
Quote:
showcasing how your algorithm can replicate certain styles popularized by certain artists is not any proof of malicious intent.

Yes it is. That's why file transfer clients at least try to keep a veneer of legitimacy by keeping rips of films / TV series off their screenshots.

If people tell you what they're doing, believe them.
Quote:
"they're gonna take our jobs" is not a valid argument against technological development

Yes, the same way "they might make a bomb out of this" was not an argument against nuclear power - but the danger is there and after the first two catastrophes, now we have international treaties to ensure that the use of the technology is morally correct.

It was never a question of technology - it's a question about the use of technology AS IT IS CURRENTLY STANDING.
Quote:
last portrait painter while everyone is taking photographs

You can't take photographs of things that don't exist - that's why there's still an industry for illustrators, concept artists, designers...
added on the 2022-08-16 20:02:21 by Gargaj
Quote:
Computers have been better than humans for decades now. Did we allow computer-assisted players in competitions? No.


Well, actually that's a yes. It's called "advanced chess", "cyborg chess" or "centaur chess". It's just a different category, just like motorbike racing didn't replace bicycle racing, painting didn't replace photography, and freestyle graphics competitions didn't replace pixelart graphics competitions.

So that's one question: what category does AI-generated art fit in? Can we put it in freestyle graphics? Do we need a new one? Similar drama happened a few years ago when people entered tracked music compos with huge XM files containing very long samples and little actual tracking. The solution was 1. to have a separate streaming music compo and 2. to show the tracker onscreen while playing tracked music so people can see how the track is constructed, and vote accordingly. Similarly for showing workstages of a pixel drawing, and there could be a rule about showing raw/unretouched versions of pictures in a photograph compo.

But there's another question: can a person ask an AI to generate a thing, and then say it's their own and use it to enter a compo? Or does it belong to the AI? Or collectively to all the people whose existing art was scanned by the AI? And in what ways does this impact the entry? People developing and selling these AIs say there is no copyright on the generated images. This seems dubious to me, but what do I know about the intricate details of copyright. But apparently they also say that you don't fully own the copyright on a thing the AI generated for you - you're not the author. So those are two things that would already disqualify it from most compos. Do we need more specific rules for this case?

And a third question: do we really care?
Because you did not sound like someone who wants to talk about it.

OK, fair enough, I'll give it a go. The purpose of the original website that RJ Palmer found was not to steal anyone's work. It was to create a map of the latent space of Stable Diffusion, one of the text-to-image models that are making waves recently. The map is necessary because it tends to be hard to explain styles to the machine (mostly because we do not have a particularly precise language for them). When the AI is trained, it gets pieces of text together with images, and since the artists tend to be identified in these pieces of text, artist names act as markers identifying points in the latent space of these models. So by using one or several names of the authors you get a style or a mixture of styles to use during the generation. You are not using any specific art anymore. You are not recombining pieces of existing art either. You are simply trying to explain to the model approximately what you are trying to do.
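To make that concrete, here is a minimal sketch of how such a prompt is actually used (assuming the Hugging Face diffusers package and a locally available Stable Diffusion checkpoint; the model id and prompt are just placeholders, not anything from the site in question):

# Minimal sketch: the artist name is just another phrase in the text prompt.
# Assumes the "diffusers" and "torch" packages and a Stable Diffusion checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model id
    torch_dtype=torch.float16,
).to("cuda")

# The name acts as a style tag in the text encoder's embedding space;
# no image by that artist is looked up or recombined at generation time.
prompt = "a castle at sunset, in the style of <artist name>"
image = pipe(prompt).images[0]
image.save("castle.png")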

Next time you say to your friends, let us make a demo in the style of ASD, you are committing the same crime, basically.

The relationships between artist names and styles are very imprecise. First, every model pretty much thinks about the art differently. Second, the number of inputs from a specific artist is not particularly large. So what the model ends up defining as a particular artist's style can bear pretty poor resemblance to the actual artist's style. (If you actually visit the website RJ Palmer complained about, you will find multiple examples of this there.) Hence, the talk about "knocking off existing artists" is completely missing the point. The names of the artists act as nothing more than tags for something that we have no other way of defining, in language at least.

What is important to understand is that a "latent space" like this is fundamentally next to impossible to interpret. The model does not really know the pieces of art associated with specific artist names; the level of compression in a model like this is absolutely insane. (Actually, there are some special situations where the model would memorize pieces of art more completely, but this "overfitting" is exactly what is to be avoided in a good model, so a lot of effort goes into techniques to mitigate it.) Just to give you a sense of the scale of what is going on there: the dataset used to train the model contains 2 billion images that are used to define ~1 billion parameters. The model literally has about half a parameter per training image! (100 terabytes of data get squashed into under 10 gigabytes, which is what, a compression ratio of 10000.) It does not remember specific images, because it can't. It does not really remember specific artists, because it can't.
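A quick back-of-the-envelope check of those numbers (the figures are the rough approximations quoted above, not exact values):

# Rough sanity check of the compression argument; all figures are approximate.
training_images  = 2_000_000_000   # ~2 billion image-text pairs in the dataset
model_parameters = 1_000_000_000   # ~1 billion model parameters
dataset_bytes    = 100 * 10**12    # ~100 TB of training data
model_bytes      = 10 * 10**9      # checkpoint under ~10 GB

print(model_parameters / training_images)  # ~0.5 parameters per training image
print(dataset_bytes / model_bytes)         # ~10000x "compression" ratio
print(model_bytes / training_images)       # ~5 bytes of weights per training image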

So the amount of stealing that is taking place here is no more than the amount of stealing that happens when you go to a museum, read a book or attend a demoparty.

Now, the issue with using art to train models is actually difficult. Current laws definitely do not protect an artist in any way, but this might change given how much these models will be able to do soon. The job market for artists will definitely change too, and this is another big issue that needs to be discussed. So, I am not dismissive of these concerns.

However, I am very dismissive of the culture where, sorry, an idiot writes some unfounded stuff on his Twitter and suddenly thousands of people are up in arms against someone who actually tried hard to do well by people. Stable Diffusion is not the first text-to-image model. DALL-E, Midjourney and DALL-E 2 all came out earlier, all use similar (if not identical) input interfaces, and all rely on the same principle of explaining styles by using artist names as tags. Having a go, not even at Stable Diffusion, but at people who are studying Stable Diffusion to help others explain styles is beyond bonkers.
added on the 2022-08-16 20:14:16 by introspec
Oh, I forgot to mention that Midjourney and DALL-E 2 are actually already charging for the image generation, so if you feel that strongly about artists' rights, you would have a go at them first.

If you are rational and if you understand what is going on, that is.
added on the 2022-08-16 20:21:14 by introspec
Quote:
When the AI is trained, it gets pieces of text together with images, and since the artists tend to be identified in these pieces of text, artist names act as markers identifying points in the latent space of these models. So by using one or several names of the authors you get a style or a mixture of styles to use during the generation. You are not using any specific art anymore.

I have a vague, admittedly cursory, understanding of machine learning, but my whole point is that we should avoid falling into the trap of thinking that this is in any way a technological problem, or using that as a cover not to make moral decisions. Technology gets invented for all sorts of reasons (and again I'll use weapons as an example here), but what they're immediately utilized for is generally a good indicator of where things will be going. (Again, it didn't take long for the adorable robot dogs to suddenly have a sniper rifle mounted on them - even if it wasn't Boston Dynamics who did it.)

Any AI/ML is only as good as its dataset, and what you feed into it defines what it will output. If you feed it portrait photos, you'll clearly have a machine like ThisPersonDoesNotExist.com that generates fake photos, and that strongly implies what your goal was with it. They fed art into it from existing artists who didn't opt-in. That's enough motive for me to draw an inference - people do damaging things for two reasons: personal gain or stupidity, and the latter isn't an excuse either.

Quote:
So the amount of stealing that is taking place here is no more than the amount of stealing that happens when you go to a museum, read a book or attend a demoparty.

You said TWO BILLION IMAGES. (For comparison, a 72-hour demoparty at 60fps is 15 552 000 images.)

Quote:
Oh, I forgot to mention that Midjourney and DALL-E 2 are actually already charging for the image generation, so if you feel that strongly about artists' rights, you would have a go at them first.

They're not exempt from my statements (and his statements) either; I think AI image generation as a concept should at the very least be immediately regulated so that it only contains data from artists who opt in, or so that artists get residual payments from every image generated.
added on the 2022-08-16 21:07:31 by Gargaj
Quote:
Technology gets invented for all sorts of reasons (and again I'll use weapons as an example here), but what they're immediately utilized for is generally a good indicator of where things will be going. (Again, it didn't take long for the adorable robot dogs to suddenly have a sniper rifle mounted on them - even if it wasn't Boston Dynamics who did it.)

As a professional scientist, I understand this very well. However, in this particular case, are you using 15K people (beta-testers) giddily creating new, beautiful, previously unseen imagery that flooded Twitter over the course of the last week as an indication that something is wrong with the technology? Or are you going to say that because, I don't know, 20 people did something shitty with it, the technology is fundamentally evil?!

Maybe we should ban the demoscene because some people make fucktros? Some of them really offended some people, did you know?

Quote:
Any AI/ML is only as good as its dataset, and what you feed into it defines what it will output. [...] They fed art into it from existing artists who didn't opt-in. That's enough motive for me to draw an inference - people do damaging things for two reasons: personal gain or stupidity, and the latter isn't an excuse either.

This is a dubious issue, but it has been happening for decades now. It will probably get regulated at some point. However, your assumption that it is a fundamentally bad idea to do something like this is flawed. Google Books scanned books illegally, but to me as a person this ended up being massively useful (and it saved many books from oblivion). The internet being flooded with illegal mp3 recordings changed the way we listen to music, but did it really kill music as a trade? Does not seem to be the case. The truth is, we don't know what it will do to the artists, but the outcome is definitely not obvious, and not obviously disastrous.

Specifically on the opt-in issue, I think it'll happen at some point. Using art in this way is definitely legal right now (just as it is legal for us to browse and view the same artworks online). But the opt-in won't do anything. The fact that we are not very efficient at exploring latent space right now makes the variety of styles valuable. But given how much research is being done to better understand and map the latent space, the art of specific artists will become less and less relevant. Scientifically, this battle is already lost (not in the sense that the machine art is better, it clearly is not (yet), but in the sense that it is already becoming a technological issue, not a scientific issue, and pretending you can stop it will just leave you behind).

Quote:
You said TWO BILLION IMAGES. (For comparison, a 72-hour demoparty at 60fps is 15 552 000 images.)

Yes, but one does not become an artist by visiting a single demoparty; that takes years of learning. And that's a lot more than 2 billion images, even at 24 fps. And another yes, current AI is very wasteful in that it seems to require a lot more input data to train, compared to humans. However, my point was that all the model keeps from each input image amounts, on average, to a few bytes of weights. It is at best a vague idea of what you've seen there.

By the same definition, you probably steal more after attending a party than modern AI can.

Once again, I understand that this is a watershed moment and that it will cause a tectonic change in the way we, as a society, produce art. And that there will be winners and losers from this redistribution. However, I am very far from offering any specific solutions for problems that we don't really understand yet.
added on the 2022-08-16 21:49:52 by introspec
Quote:
Quote:
Technology gets invented for all sorts of reasons (and again I'll use weapons as an example here), but what they're immediately utilized for is generally a good indicator of where things will be going. (Again, it didn't take long for the adorable robot dogs to suddenly have a sniper rifle mounted on them - even if it wasn't Boston Dynamics who did it.)

As a professional scientist, I understand this very well. However, in this particular case, are you using 15K people (beta-testers) giddily creating new, beautiful, previously unseen imagery that flooded Twitter over the course of the last week as an indication that something is wrong with the technology? Or are you going to say that because, I don't know, 20 people did something shitty with it, the technology is fundamentally evil?!

Again, that's not what I said at all. What I said is that currently, in its unregulated laissez-faire form, the abuse is so easy and prevalent that we should reject it until the dangers have been mitigated.
added on the 2022-08-16 21:58:11 by Gargaj
Example: self-driving cars.

Cool tech? Sure.
Dangerous? Possibly, we don't know yet.
Regulated? To the absolute minute detail.

But of course the auto-industry is much larger than whatever unions illustrators may have, so, there's that.
added on the 2022-08-16 22:01:35 by Gargaj
Gargaj: /imagine prompt: an AI that's trained on all the best movies ever made, which can generate scenes based on text prompts. Scenes contain consistent characters throughout. You can decide camera angles, time of day and weather conditions.

I can see how such an AI would be able to produce recognizable scenes..
(like e.g. Arnold Schwarzenegger's T2 motorcycle scene, or the scene where the T1000 grows out of the floor .. the movie industry would be on that like a ton of bricks because money and lawyers .. as opposed to the lowly artists who created the background paintings for My Neighbour Totoro, of which you can also find many AI-generated derivatives)

Anyway, you got me convinced.
added on the 2022-08-17 06:58:39 by farfar
Quote:
the movie industry would be on that like a ton of bricks because money and lawyers

Oh they'd think the tech is great, they just wouldn't want YOU to use it - they'd be happy to utilize it as long as they can expedite working with those pesky VFX studios.
added on the 2022-08-17 09:28:17 by Gargaj
Here's a thought experiment for folks excited by a new technology.

How does [technology] reinforce existing power imbalances in your society? How does it harm people with less economic security and benefit people with more economic security?

Well, if it does this harm, who do you want to be?

In a society that perpetually denigrates creatives and lionizes technologists, these AI art generators are deeply troubling.

And the massive imbalance between user input and the perceived level of output also makes them inappropriate for competitions. Think photo compos are long and boring? Welcome to that times a million.
But I assume party organizers are already considering that.
Quote:
The internet being flooded with illegal mp3 recordings changed the way we listen to music, but did it really kill music as a trade? Does not seem to be the case.


like.. pretty much, yes? it made it almost impossible to make a living wage from music sales for all but a tiny top % of artists who make most of the money, and the rest now have to tour to survive. but that's a discussion for another thread.
added on the 2022-08-17 10:50:22 by smash
Quote:
Think photo compos are long and boring? Welcome to that times a million.
But I assume party organizers are already considering that.


No need to slam the people who take pics and enter photo compos - but yeah at least I was considering it, hence the OP.

I do still think there is a skill in getting good AI pieces, but it's hard to deny that there are troubling ethical considerations too, so it's been a pretty informative thread.
added on the 2022-08-17 11:16:00 by farfar
Quote:
And the massive imbalance between user input and the perceived level of output also makes them inappropriate for competitions. Think photo compos are long and boring? Welcome to that times a million.


Well, this being the demoscene, it's not hard to add arbitrary limitations, like, can we make a contest where the AI prompt is limited to 16 bytes? What would people manage then?
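Just as a sketch of how trivial such a rule would be to enforce (the 16-byte cap and this check are purely hypothetical, not any party's actual rule):

# Hypothetical compo-rule check: reject prompts longer than 16 bytes of UTF-8.
def validate_prompt(prompt: str, limit: int = 16) -> bool:
    return len(prompt.encode("utf-8")) <= limit

print(validate_prompt("tiny demo"))                         # True, 9 bytes
print(validate_prompt("a castle at sunset, oil painting"))  # False, 32 bytes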

Also, what's with photo compos? Making a good photograph requires a lot of work too. That would be like saying pixelart compos are long and boring because anyone can doodle an ugly thing in mspaint in 30 seconds and submit it. Yet, somehow that doesn't happen, or there is a preselection to filter it out when it does.
I would've thought the demoscene would be eager to exploit the heck out of any new technology. Just put it in a separate compo, maybe set some interesting rules, and see what kinds of cool stuff people can come up with. :)

If we see new tech more as a threat than an opportunity, are we perhaps getting a bit old? :D When I was young we had to draw uphill both ways...
added on the 2022-08-17 20:07:53 by Byproduct
I welcome this new technology, and I also think it will not harm artists (not even in the short term) but, on the contrary, will unleash an incredible amount of creativity just as photography and video did in the past. Those who are ranting today will adopt it in the near future, the moment they notice the new creative opportunities that are opening up.
added on the 2022-08-17 20:20:02 by ham
There's always plenty of opportunities with theft.
At the moment it's not clear at all which machine learning generators are not based on art stealing.
added on the 2022-08-17 20:31:54 by Zavie
Quote:
If we see new tech more as a threat than an opportunity, are we perhaps getting a bit old? :D

To be clear, I don't see it as a threat to the scene (other than the fact that we're gonna end up with fewer skilled artists in the future because it won't be a viable career), but I do see it as a threat to many industries.
added on the 2022-08-17 20:47:23 by Gargaj
Quote:
other than the fact that we're gonna end up with fewer skilled artists in the future because it won't be a viable career


Probably not in the short term. There are AIs that generate code too, and they are nowhere near capable of writing working code on their own.

Likewise with these AI generators: they may replace some jobs, but they won't replace the artists - the people who come up with new ideas and new styles. The AI can only replicate already existing styles, so that will be fun for a while, but then at some point people will get bored with it.

And prompts are not enough for practical uses. The people doing the Ikea catalog can't just enter their product name and get a good rendering of it. If you make an illustrated book or a videogame, you need all your illustrations/sprites/textures to be in a consistent style. You need your characters to look the same on each page.

So it's fun to explore AI generation, but it reaches its limitations quite quickly, simply because for anything a bit specific you will never be able to write a prompt that works.
Maybe it's not good for Ikea catalogs, but for generating scene'ish graphics it seems great. Exactly the kind of stuff that graphics compos are traditionally full of.
added on the 2022-08-18 20:43:24 by yzi
maybe we could let the AI sceners watch all the compos full of AI art in an endless loop.. endlessly generating, watching, voting, screaming
added on the 2022-08-18 21:51:00 by farfar
Quote:
The AI can only replicate already existing styles, so that will be fun for a while, but then at some point people will get bored with it.


I tend to agree with that - I used midjourney for a few weeks, but quickly got bored precisely because it becomes so repetitive. You can kind of "see through" the generated art to the training set that lies beneath the AI... at least in many cases you can. There's something about the composition choices and colour choices that repeats again and again (probably because most users of midjourney aren't good at writing prompts, so the algorithm falls back to some defaults?)

anyway... I don't make art because I want or need art.. I do it because I like the process of doing it and I like learning new stuff.
added on the 2022-08-18 21:54:17 by farfar
