If you respect the art of game development in any way, please, do not use AI Art.


gstv87

Regular
Regular
Joined
Oct 20, 2015
Messages
3,360
Reaction score
2,553
First Language
Spanish
Primarily Uses
RMVXA
what it's doing is closer to the collage generator that people think image generators are.
in code, there are not many 'free interpretation' variants of how to write an instruction.
either it is the instruction you need, or it isn't.
and, either by honest deduction or straight up copy-paste, you'll arrive at the same solution.

that's why there are some properties of some languages that let you convert one block into another, if you're ever limited by another instruction down the line.
sometimes using one or the other is what gives it away, and a coder who's written the code themselves will know why the block is there.
and a coder hitting a roadblock would often have commented-out lines with previous attempts at changing the code, for reference, and would present the whole block with mistakes and all.
if an AI removes comments when copying code, it'll show.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
I'm not saying chatGPT is the worst thing ever, I just find it interesting more people are fine with it when its process is far more akin to stealing than AI art. I suppose text and code seem less of a human input?
It's still "human input", but what you generate from ChatGPT is a chuck of code from a large picture. For example, you can't ask the AI to generate a full battle system code, but you can ask for piece-by-piece code of it such as "what is the code/formula to make a picture move from point A to B". So, you're asking for a piece of a jigsaw puzzle. This is how coding question has been for years. It is knowledge. Meanwhile, image AI generates a full picture. Taken from everyone's hard work.
 

Htlaets

Regular
Regular
Joined
Feb 1, 2017
Messages
404
Reaction score
217
First Language
English
Primarily Uses
in code, there are not many 'free interpretation' variants of how to write an instruction.
either it is the instruction you need, or it isn't.
and, either by honest deduction or straight up copy-paste, you'll arrive at the same solution.

that's why there are some properties of some languages that let you convert one block into another, if you're ever limited by another instruction down the line.
sometimes using one or the other is what gives it away, and a coder who's written the code themselves will know why the block is there.
and a coder hitting a roadblock would often have commented-out lines with previous attempts at changing the code, for reference, and would present the whole block with mistakes and all.
if an AI removes comments when copying code, it'll show.

Putting aside how much fights over code-stealing come up...

ChatGPT can also write dialogue, text, etc. I wouldn't say it's particularly great at it, mind. But where an image AI would look at pattern data without images as reference (images were referenced to make the pattern data, but the AI itself never touches those images) to infer what a given prompt should look like after wiping away noise, ChatGPT and other language models predict text based on the stuff in their database.

Give a text prediction model a task to finish a sentence, for example like:
A horse trots

What it will do is look through actual text to predict what a good finisher for that would sound like and it'll come up with something like:

along the dusty trail, its mane blowing in the wind as the sun sets on the horizon.

By making a collage of all relevant text to figure out what follows.
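(A rough toy illustration of that idea, not how GPT actually works internally: a next-word "predictor" that only counts which word most often follows another in a handful of made-up training sentences.)

Code:
from collections import Counter, defaultdict

# Made-up stand-in for a training corpus.
corpus = ("a horse trots along the dusty trail . "
          "a horse trots along the beach at sunset . "
          "the dog runs along the dusty road .").split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def complete(prompt, length=5):
    """Extend the prompt by repeatedly picking the most common next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("a horse trots"))  # "a horse trots along the dusty trail ."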

It's still "human input", but what you generate from ChatGPT is a chuck of code from a large picture. For example, you can't ask the AI to generate a full battle system code, but you can ask for piece-by-piece code of it such as "what is the code/formula to make a picture move from point A to B". So, you're asking for a piece of a jigsaw puzzle. This is how coding question has been for years. It is knowledge. Meanwhile, image AI generates a full picture. Taken from everyone's hard work.
To jump off your analogy, a battle scene cannot be created by a single picture either unless you're just making a mostly-text RPG.

You could get an AI to generate a battleback, but that doesn't give you a battle scene, you'd also need the battlers, animations, frames for different characters, etc.

In both cases the AI is only giving a single piece of the puzzle. In the ChatGPT case, it is looking at real code written by people; in the art AI case, it is actually not looking at any images when it generates an image, since the model is only 4 GB. The model was created by looking at other people's images, but that's far less direct than what ChatGPT does.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
You could get an AI to generate a battleback, but that doesn't give you a battle scene, you'd also need the battlers, animations, frames for different characters, etc.

In both cases the AI is only giving a single piece of the puzzle
Except, the general use of AI art is not only for game development. Sometimes, a single picture is enough. And it was never about solving a puzzle. It is just picture generation.

In the ChatGPT case, it is looking at real code written by people; in the art AI case, it is actually not looking at any images when it generates an image, since the model is only 4 GB.
You seem to only focus on the "data training" and try to force an analogy to a text generator to justify why one is OK and the other is not.

As I previously said, it is a pool of knowledge. While images/pictures are creative endeavors.
 

Htlaets

Regular
Regular
Joined
Feb 1, 2017
Messages
404
Reaction score
217
First Language
English
Primarily Uses
Except, the general use of AI art is not only for game development. Sometimes, a single picture is enough. And it was never about solving a puzzle. It is just picture generation.
Sometimes a single function is enough.

More often than not it isn't, true. But there are plenty of cases where pictures are not one-offs, too. The cases where they are one-offs are small private commissions and pieces in art galleries.

In other use-cases, it is a piece of a puzzle. A single picture doesn't decorate a house or a building. A single picture can't make an advertising campaign, an animated film, game, or a slide-show presentation, which are common use-cases for art.

The only thing that prevents ChatGPT from going farther than the jigsaw pieces either way is how much memory OpenAI allows people access to, because of infrastructure. Of course, if it generates large amounts of code the amount of wrongness would also be large.

For the moment. Emphasis on that. The thing is, if in a couple of years of improvement ChatGPT can indeed write an entire large program by itself with minimal human code checking, will you feel the same way about the way it's using data?

What percentage of the perceived job can the AI do before it crosses from OK to not OK?

You seem to only focus on the "data training" and try to force an analogy to a text generator to justify why one is OK and the other is not.

As I previously said, it is a pool of knowledge. While images/pictures are creative endeavors.
I'm not saying why it's OK or not OK, just pointing at the inconsistency.

I can understand why someone would be upset about the way image models are trained by looking at people's art without permission perfectly well. What I can't understand is then being fine with chatgpt.

An image AI looks at drawings to make a program that learns to make images; a text AI looks at text, also without permission, puts it into the model, and tries to learn what order to put the text in. I think if you're upset with the former, you should probably dislike the latter more.
 

gstv87

Regular
Regular
Joined
Oct 20, 2015
Messages
3,360
Reaction score
2,553
First Language
Spanish
Primarily Uses
RMVXA
I think if you're upset with the former, you should probably dislike the latter more.
I feel like I've said that thing somew--- oh, wait, I did! yeah...
funny that I got called disrespectful right after.

see? I always take things for granted. I know we're getting there eventually.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
Sometimes a single function is enough.
Precisely what I'm saying. When you ask a coding question, you ask for this single function or a few functions, because you have a larger scope to work with.

In other use-cases, it is a piece of a puzzle. A single picture doesn't decorate a house or a building.
It does, I can have one single picture on a wall and that's enough.

A single picture can't make an advertising campaign
It does, slap an anime character in your banner, done.

an animated film
Personally, I can justify this. An AI could make animating a film easier.

which are common use-cases for art.
Have you heard that in Japan, it was totally "fine" to sell AI art in Comiket? I've heard that they have their own booth location. There is no "puzzle" here. Just sell pictures.

Of course, if it generates large amounts of code the amount of wrongness would also be large.
See, you are already aware of this.
For coding specifically, each problem is unique. How the code is structured has implications. We (programmers) are not as insecure as artists about AI taking our jobs; it is the other way around. A picture with the wrong anatomy wouldn't make your machine explode. But the wrong code does.

The thing is, if in a couple of years of improvement ChatGPT can indeed write an entire large program by itself with minimal human code checking, will you feel the same way about the way it's using data?
I will repost this picture. And I would say, yes.
[Attached image: 255-effort-shift.png]

I actually would challenge that idea. Programmers like automation. We still need to curate every inch of the code to make sure it works as intended though. This will benefit programmers. It is much less beneficial for non-programmers.

What percentage of the perceived job can the AI do before it crosses from OK to not OK?
I don't have a strong opinion on this. I'm on the neutral spectrum. I understand both artist's side and the other side. What I merely told you is what I understand from the artist's side.

But if you ask about AI code specifically, my take is: I'm skeptical that it is going to take our jobs.

What I can't understand is then being fine with chatgpt.
What's hard to understand about the concept of a pool of knowledge?
Knowledge is the "public domain'. At least, mostly.

And if you ask whether people are fine with ChatGPT: they are not, at least some of them. People hate it if an entire journal article is written by AI. Professors sure hate it if their students write a thesis using AI.

An image AI looks at drawings to make a program to learn make images, a text AI looks at text, also without permission, and puts it into the model and tries to learn what order to put the text in. I think if you're upset with the former, you should probably dislike the latter more.
I don't understand why I would be upset at the latter more. Is it because it uses terabytes of data?
 

Htlaets

Regular
Regular
Joined
Feb 1, 2017
Messages
404
Reaction score
217
First Language
English
Primarily Uses
I feel like I've said that thing somew--- oh, wait, I did! yeah...
funny that I got called disrespectful right after.

see? I always take things for granted. I know we're getting there eventually.
If it was aimed at me, I don't quite know what you're referring to. Your previous comment was basically saying it'd be easy to see if an AI copied code because it'll remove comments? Which is a non sequitur when it comes to that. I'm not sure how that's really relevant unless you're just saying that it's a matter of how similar the output is, in which case it's also hard to pick out where AI art got its pattern data from unless the model is tuned to one artist or you're telling it to copy / trained it to do so.

Unless you're talking about where you mentioned AI learning music to learn art, in which case I'm still not quite sure I see the tie-in.

Of course, if it wasn't aimed at me, I could've missed it. You might be confusing me with someone else who doesn't have an avatar.
Precisely what I'm saying. When you ask a coding question, you ask for this single function or a few functions, because you have a larger scope to work with.


It does, I can have one single picture on a wall and that's enough.


It does, slap an anime character in your banner, done.
And, you can have a program with a single function, and it can be used for a purpose.

There's not necessarily a puzzle there either.

If you have a house with a single picture on the wall, I would not call your house decorated, I would call the walls mostly barren. Which is fine, but I wouldn't say you have a decorated house, just a single decoration.

If you have a single banner as an advertisement you could hardly call that a campaign.



Have you heard that in Japan, it was totally "fine" to sell AI art in Comiket? I've heard that they have their own booth location. There is no "puzzle" here. Just sell pictures.
Yes, I am aware.

See, you are already aware of this.
I put emphasis on the current-day state of things for a reason. If you're assuming I'm taking a particular hard stance beyond "it's inevitable the technology will develop and replace people one way or the other", you would assume wrong.

For coding specifically, each problem is unique. How the code is structured has implications. We (programmers) are not as insecure as artists about AI taking our jobs; it is the other way around. A picture with the wrong anatomy wouldn't make your machine explode. But the wrong code does.
Human programmers are perfectly capable of making programs that explode. But that's a whole other conversation.


I will repost this picture. And I would say, yes.
I actually would challenge that idea. Programmers like automation. We still need to curate every inch of the code to make sure it works as intended though. This will benefit programmers. It is much less beneficial for non-programmers.
If a project can be done with 2 people rather than 20 because of AI in a few years, then programmers will be getting replaced.

I don't have a strong opinion on this. I'm on the neutral spectrum. I understand both artist's side and the other side. What I merely told you is what I understand from the artist's side.

But if you ask about AI code specifically, my take is: I'm skeptical that it is going to take our jobs.
For the record, my stance is on the neutral spectrum as well; I do understand both sides of the argument.

What's hard to understand about the concept of a pool of knowledge?
Knowledge is the "public domain'. At least, mostly.
It gets real questionable with code and companies, but that aside the thing is it's also public domain for an artist to look at another artist's art for reference. When I've commissioned artists I also linked to references of how I want things drawn.

And, again, you're still ignoring the part of ChatGPT as a writing tool, where it has literal novels in it.


And if you ask whether people are fine with ChatGPT: they are not, at least some of them. People hate it if an entire journal article is written by AI. Professors sure hate it if their students write a thesis using AI.
I'm talking in aggregate not in absolutes. I specifically singled out people who are fine with ChatGPT but think AI image generation is theft.

I don't understand why I would be upset at the latter more. Is it because it uses terabytes of data?
Because if the former had been done by a human (looking at art to learn how to do art), it's considered fine; if the latter had been done by a human (taking snippets of text you have at hand from other sources and arranging them to meet the goal), it would be more likely to get accusations of plagiarism if they weren't skillful enough at arranging things.

If you were to reverse the situation, ChatGPT as an image generator would be tracing many different drawings and combining the separately traced pieces, because it has those pieces in its database, while AI image generation does not have any memory of those images (or writing) specifically, just the methods to make images.
 

NamEtag

Regular
Regular
Joined
Jun 27, 2020
Messages
187
Reaction score
96
First Language
English
Primarily Uses
N/A
I'm talking in aggregate not in absolutes. I specifically singled out people who are fine with ChatGPT but think AI image generation is theft.
I don't think that's a good idea. "I am only talking to these people who I think are wrong, and everyone else should ignore my arguments due to that context".

Whatever arguments you present, they should at least attempt to function contextless, both to ensure clear communication, and because we live in an age where people will take what you say and apply it in entirely the wrong situation.

Also, we're several pages deep into a public thread. I'm not really sure your target audience will be reading this.
 

gstv87

Regular
Regular
Joined
Oct 20, 2015
Messages
3,360
Reaction score
2,553
First Language
Spanish
Primarily Uses
RMVXA
If it was aimed at me, I don't quite know what you're referring to
no, not you... another thread you may not know about.
don't worry about it.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
And, you can have a program with a single function, and it can be used for a purpose.
Yes, it can.
You can ask "shell scripts to rename files in batch", that is a single function for a single purpose.

There's not necessarily a puzzle there either.
However, most coding questions are asking for a specific part of the larger picture.

(I am pretty sure my post there would be scraped by ChatGPT and I'm perfectly fine with that)

If you have a house with a single picture on the wall, I would not call your house decorated, I would call the walls mostly barren. Which is fine, but I wouldn't say you have a decorated house, just a single decoration.
But a picture can be used that way, can't it?

Human programmers are perfectly capable of making programs that explode. But that's a whole other conversation.
So why would you assume the AI won't?

If a project can be done with 2 people rather than 20 because of AI in a few years, then programmers will be getting replaced.
Joke's on you, we already get projects done with small teams of programmers.

Here is what I know so far from my experience in the IT industry.
The client doesn't know what they want.

Heck, even my peers sometimes don't know the system requirements. I had to "interrogate" them about the specification of the program I was going to make. In the comic I posted, it said you have to fully define specifications if AI is going to replace the programmers. Few to no one is able to do that. Sometimes I even figure things out as I type the code: a set of requirements I never thought of.

If anything, programmer jobs would shift to being the "mediator" between the client and the AI that's going to write the code. If you spin this a bit, this is exactly what programmers already do: "talk to the computer".

Programmers won't be replaced.

Things that get replaced are mundane tasks such as creating login forms on a website. As far as I know, no programmers would mind that (me, personally, I hate menu/UI coding). We prefer to work on more exciting things.

It gets real questionable with code and companies, but that aside the thing is it's also public domain for an artist to look at another artist's art for reference. When I've commissioned artists I also linked to references of how I want things drawn.
The problem with this 4GB of diffusion training data is that it won't evolve. If we hypothetically shut all image references down and humanity is reset back to where it started, all we have as a reference is the real-world model. Each individual would develop their own unique style even though we are using the same real-world model. Using the same model, the AI would only create images close to them.

It gets blurry because we have been taught that taking from a single source is stealing, while taking from various sources is research, because the result is nothing like the original. This is why we (people here) got to this argument.

And, again, you're still ignoring the part of ChatGPT as a writing tool, where it has literal novels in it.
I'm not a writer. My experience is only in coding/programming, so I can only speak about that particular topic. Ask actual writers what they think about it.

But if I must speak from the perspective of writers: we have tropes, commonly used writing conventions to tell a story. So, if this AI used the exact same place and the same character, but a slightly different plot, I'd probably be upset because that character and that place are my trademarks. Otherwise, I wouldn't care.

I'm talking in aggregate not in absolutes. I specifically singled out people who are fine with ChatGPT but think AI image generation is theft.
If anything, I see that people who are fine with AI art never actually make art themselves. They do indeed have other perspectives worth exploring, and their own ways of thinking about it. However, I haven't yet found an artist who says "this tech is amazing, please use my art for your data training".

The first person who does that would probably be me (you can see more about the other takes in the replies or quote tweets, actually never mind, Twitter is broken). But even with that, the one who should get the most benefit should be me. Either the tool is exclusive to me or I get a royalty every time someone uses it.

if the latter had been done by a human (taking snippets of text you have at hand from other sources and arranging them to meet the goal), it would be more likely to get accusations of plagiarism if they weren't skillful enough at arranging things.
Now, I see where you came from.
Even back in my college days, there was a plagiarism detector to check whether our theses were plagiarized from a certain source or not. I still wonder why that would be a problem, though. Rearranging words just so you won't get accused is such an unproductive activity. One of the reasons why I hate college.

If you were to reverse the situation, ChatGPT as an image generator would be tracing many different drawings and combining the separately traced pieces, because it has those pieces in its database, while AI image generation does not have any memory of those images (or writing) specifically, just the methods to make images.
I have already addressed this in my previous points.
 

123edc

Regular
Regular
Joined
Nov 17, 2021
Messages
392
Reaction score
270
First Language
german
Primarily Uses
RMMZ
Precisely what I'm saying. When you ask a coding question, you ask for this single function or a few functions, because you have a larger scope to work with.
ChatGPT managed to pass Google's L3 entry exams with perfect scores ... just saying ...

that's not a "single piece" anymore ... it's a full picture,
yes, the hands may still look off and will need a human, to "correct" the slight errors, but even, if you don't ... you can actually use the picture as you intended

Yes, it can.
You can ask "shell scripts to rename files in batch", that is a single function for a single purpose.
you can ask "blue haired elf with golden trimmed dress holding a way to large bow in his hand, while aiming at a bird" ... that is a single picture, for a single purpose

if I want to create an entire work out of it,
I need a bulk of single pictures / functions,

each and every one must individually fit its intended use case,
and they all need to fit together in a consistent string


he/she/it is right ... it IS the same argument

so ... there are coding questions that can be done in a single function
so ... there are artworks that can fulfill their intended use in a single picture

so ... there are coding questions that are part of a larger picture
so ... there are artworks that are part of a larger picture / work

right?

But a picture can be used that way, can't it?
I can print out a line of text - be it code or story - and put it above my home's entrance ... many people actually do that with doormats, "welcome home" and stuff

it can be used that way, can't it?

does it mean that a single line of text - be it code or story - will create the entire book for you ... or the entire game [maybe, once technology is advanced enough]?

in the same way, I can ask you to show me a game consisting of a single picture ...
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
@123edc
before I humor you, I'm gonna ask this. Are you trying to understand or just trying to troll?
Based on my past experience having conversations with you, you sound like the latter.
 

Htlaets

Regular
Regular
Joined
Feb 1, 2017
Messages
404
Reaction score
217
First Language
English
Primarily Uses
I don't think that's a good idea. "I am only talking to these people who I think are wrong, and everyone else should ignore my arguments due to that context".

Whatever arguments you present, they should at least attempt to function contextless, both to ensure clear communication, and because we live in an age where people will take what you say and apply it in entirely the wrong situation.
I provided context at the beginning of that back-and-forth. I simply restated that context. This wasn't a third party to that line of quotes; I had started that back-and-forth specifically citing one particular sentiment (people who think AI images are theft but ChatGPT is fine). There's no way for any discussion to be understood without at least some context in the first place.

Also, we're several pages deep into a public thread. I'm not really sure your target audience will be reading this.
I'm not totally sure what you mean by target audience. Forum threads are just stream of consciousness in the end anyway.
Yes, it can.
You can ask "shell scripts to rename files in batch", that is a single function for a single purpose.
However, most coding questions are asking for a specific part of the larger picture.

(I am pretty sure my post there would be scraped by ChatGPT and I'm perfectly fine with that)
Yes, and a very large portion, if not most, art is done or purchased as part of some larger project, whether that be decorating a home or creating an advertising campaign.
But a picture can be used that way, can't it?
This is a bit circular, because as you acknowledge, a program can have a single function.
So why would you assume the AI won't?
When did I assume AI won't? I'm just saying that it could progress to the point where that's not a distinction.
Joke's on you, we already get projects done with small teams of programmers.
Depends on the company.

The client doesn't know what they want.
Yeah, but if it gets to the point where a single or couple programmers can figure out what they want and then use the AI with a sanity check, or, hell, language models get complex enough where they can figure out what the client actually wants, then
If anything, programmer jobs would shift to being the "mediator" between the client and the AI that's going to write the code. If you spin this a bit, this is exactly what programmers already do: "talk to the computer".

Programmers won't be replaced.

Things that get replaced are mundane tasks such as creating login forms on a website. As far as I know, no programmers would mind that (me, personally, I hate menu/UI coding). We prefer to work on more exciting things.
Yeah, but that's the thing, isn't it? Hours are logged doing mundane tasks, assuming AI only progresses enough to handle those. Hours that people are paid for. If there are fewer hours of mundane work, there are fewer people getting paid for work.


The problem with this 4GB of diffusion training data is that it won't evolve. If we hypothetically shut all image references down and humanity is reset back to where it started, all we have as a reference is the real-world model. Each individual would develop their own unique style even though we are using the same real-world model. Using the same model, the AI would only create images close to them.
That's not a distinction from ChatGPT, though. ChatGPT's model doesn't learn or evolve either; in fact, it wouldn't scrape your posts here because ChatGPT is not even connected to the internet. All of its data is from... I want to say pre-2021?

The internet is also why the Bing bot is nuts, but that's another thing (it's not really even bringing the internet into its model; it's more that when it starts looking around the internet based on user input, it makes the model go nuts).

Creating the model is where the AI learns. The model itself is, at the moment, static for both language and image models. There's memory for individual sessions that the AI tunes to for language models that can be thought of as (very temporary) learning, but the image generator equivalent to that is tuning a single image one on top of another and/or using controlnet.

And both kinds of models can be trained on their own output; you can train an AI image model on AI-generated images, for example.

I'm not a writer. My experience is only in coding/programming, so I can only speak about that particular topic. Ask actual writers what they think about it.

But if I must speak from the perspective of writers: we have tropes, commonly used writing conventions to tell a story. So, if this AI used the exact same place and the same character, but a slightly different plot, I'd probably be upset because that character and that place are my trademarks. Otherwise, I wouldn't care.
That's the thing, though. Art has styles, techniques, and is learned by observing other art as well. I still don't quite get the distinction.

If anything, I see that people who are fine with AI art never actually make art themselves. They do indeed have other perspectives worth exploring, and their own ways of thinking about it. However, I haven't yet found an artist who says "this tech is amazing, please use my art for your data training".

The first person who does that would probably be me (you can see more about the other takes in the replies or quote tweets, actually never mind, Twitter is broken). But even with that, the one who should get the most benefit should be me. Either the tool is exclusive to me or I get a royalty every time someone uses it.
There are actually artists that have trained models with their own images to help their workflow, though.

And, it's not like writers got a say in putting their books in the machine for it to learn from, it's just less obvious since it's text.

My view on the reason for the difference is more on the grounds of how close it is to encroaching on one-off fan art commissions, and artists for whom that is their bread and butter don't really want to play a part in replacing themselves, which I understand.

That, and deepfakes, models that have been trained to copy individual artists, and so on, are definitely inflammatory, to say the least.


Edit: As for not seeing ChatGPT's database, an easy way to see it is to merge two different things. Like asking it to tell a short story in the style of H.P. Lovecraft about SpongeBob SquarePants; it'll come up with stuff from both.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
Yes, and a very large portion, if not most, art is done or purchased as part of some larger project, whether that be decorating a home or creating an advertising campaign.
This is a bit circular, because as you acknowledge, a program can have a single function.
Hmm... I agree this is a bit circular. But let's put it this way.

I'll shed some light on how the function works in programming.
When you program things, they usually have dependencies on other things. You cannot just take that single function and call it a program. It likely won't run because it depends on the other code.

When a person asks about programming, they ask for wisdom on how to do stuff. The function may (or may not) run on its own. However, they will still need to implement their own functions based on that insight to be compatible with their larger project. For example, I cannot just take easing functions as-is into my program. I need to fine-tune them to fit into my program.
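(A small sketch of what I mean: the textbook easing curve itself is generic, but the wrapper that feeds it your own object, range, and frame count is the part you still write yourself. Everything below is hypothetical.)

Code:
def ease_out_quad(t):
    """Textbook easing curve; input and output both run from 0.0 to 1.0."""
    return 1 - (1 - t) ** 2

# The curve alone is not a program. A hypothetical adapter that wires it into
# one project's own picture object and frame-based timing:
def slide_picture(picture, start_x, end_x, frame, total_frames):
    t = min(frame / total_frames, 1.0)
    picture["x"] = start_x + (end_x - start_x) * ease_out_quad(t)

picture = {"x": 0}
for frame in range(1, 31):
    slide_picture(picture, 0, 200, frame, 30)
print(picture["x"])  # 200.0 once the 30-frame slide finishes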

Home decoration, meanwhile, still depends on the decoration itself, but if we are talking about a picture, it is totally fine to take one from one house to another. In other words, a picture is already independent on its own.

Yeah, but if it gets to the point where a single or couple programmers can figure out what they want and then use the AI with a sanity check, or, hell, language models get complex enough where they can figure out what the client actually wants, then
I don't believe that would ever happen. When that happens (at all), the AI still needs maintenance from the actual programmers who code the AI. Except if you mean a hypothetical situation where the AI went rogue, no longer needs maintenance, enslaves all humanity (or destroys it), and can regenerate like a living being and update the software and hardware on its own.

Yeah, but that's the thing, isn't it? Hours are logged doing mundane tasks, assuming AI only progresses enough to handle those. Hours that people are paid for. If there are fewer hours of mundane work, there are fewer people getting paid for work.
I would argue those "mundane tasks" are not actually a programmer's work. If someone can only write basic syntax and a couple of HTML tags that can be replaced by AI, and has no analytical skills, they have no value to begin with. Work on something else. Programmers always solve unique problems, and unique problems appear every day.

That's not a distinction from ChatGPT, though. ChatGPT's model doesn't learn or evolve either; in fact, it wouldn't scrape your posts here because ChatGPT is not even connected to the internet. All of its data is from... I want to say pre-2021?
Creating the model is where the AI learns. The model itself is, at the moment, static for both language and image models. There's memory for individual sessions that the AI tunes to for language models that can be thought of as (very temporary) learning, but the image generator equivalent to that is tuning a single image one on top of another and/or using controlnet.
Let me get back to the point that it is about a pool of knowledge vs. a creative endeavor.
If you go back to your question "what about novels?", ask an actual writer about that. I don't have enough knowledge to represent them.

However, the usage of ChatGPT as far as I can tell from social media is to generate silly conversations, memes, ask about plugins, and others. Content generation (journals, etc) is still frowned upon.

There are actually artists that have trained models with their own images to help their workflow, though.
The keyword is "their workflow", right?

And, it's not like writers got a say in putting their books in the machine for it to learn from, it's just less obvious since it's text.
Maybe you are right in this part.
It's text; especially if it is a novel, you need to actually read it (the same goes for music, you need to listen to it). It takes time. Pictures? You see them at a glance, you understand.

My view on the reason for the difference is more on the grounds of how close it is to encroaching on one-off fan art commissions, and artists for whom that is their bread and butter don't really want to play a part in replacing themselves, which I understand.

That, and deepfakes, models that have been trained to copy individual artists, and so on, are definitely inflammatory, to say the least.
It gets even worse because the people behind AI monetize the tech based on data trained from various sources. If AI had stuck to generating photorealistic images (which have already existed for several years), artists probably wouldn't lose their minds.

Even with that, people might still be concerned if their photo is in the training data. The same argument could be made: "your photo isn't in our training data, no link to your identity is in our files". Would people buy that? Some might, some might not.

I do, however, like some fan art generated by AI. Sometimes it comes from weird prompts; I don't think any artist would think of such prompts, only AI and the weird imagination of fans can.
 

Htlaets

Regular
Regular
Joined
Feb 1, 2017
Messages
404
Reaction score
217
First Language
English
Primarily Uses
Hmm... I agree this is a bit circular. But let's put it this way.

I'll shed some light on how the function works in programming.
When you program things, they usually have dependencies on other things. You cannot just take that single function and call it a program. It likely won't run because it depends on the other code.

When a person asks about programming, they ask for wisdom on how to do stuff. The function may (or may not) run on its own. However, they will still need to implement their own functions based on that insight to be compatible with their larger project. For example, I cannot just take easing functions as-is into my program. I need to fine-tune them to fit into my program.

Home decoration, meanwhile, still depends on the decoration itself, but if we are talking about a picture, it is totally fine to take one from one house to another. In other words, a picture is already independent on its own.
I have written programs with single functions for just renaming files or for lazy branching, a function by itself can be a program, and a function can be reused in other programs depending on context.
I don't believe that would ever happen. When that happens (at all), the AI still needs maintenance from the actual programmers who code the AI. Except if you mean a hypothetical situation where the AI went rogue, no longer needs maintenance, enslaves all humanity (or destroys it), and can regenerate like a living being and update the software and hardware on its own.
I mean, that latter situation is totally possible because the totally not scary "AI self-controlled fighter jet test" thing already happened and apparently went well.

Thaaat being said, you have to acknowledge that the number of staff to maintain an AI would be far less than what it replaces (or else companies wouldn't be pouring billions into these things).

I would argue those "mundane tasks" are not actually a programmer's work. If someone can only write basic syntax and a couple of HTML tags that can be replaced by AI, and has no analytical skills, they have no value to begin with. Work on something else. Programmers always solve unique problems, and unique problems appear every day.
But programmers do those mundane tasks; if they're gone, there are fewer man-hours required for programmers on a team to do a particular thing, and there are fewer crap jobs to dump on interns as they learn the ropes. There are a finite number of problems that companies and consumers want to pay money for, and on top of that, more and more of those problems will, over time, be taken up by AI.

Let me get back to the point that it is about a pool of knowledge vs. a creative endeavor.
If you go back to your question "what about novels?", ask an actual writer about that. I don't have enough knowledge to represent them.

However, the usage of ChatGPT as far as I can tell from social media is to generate silly conversations, memes, ask about plugins, and others. Content generation (journals, etc) is still frowned upon.
Even before ChatGPT, AI Dungeon, KoboldAI, and other language models were being used as writing aids, using traceable snippets from writers' works (in AI Dungeon's case it revealed some seriously bad stuff used for training, but I digress).

On top of that, a few medium-name tech/market news outlets let AI write their articles. It blew up in most of their faces because they didn't double-check the work, but that's another for-now thing.

But even silly conversations, memes, and the like are arguably creative work, granted not normally paid creative work (though arguably 90% of YouTube fits into silly conversations and memes).

Speaking of YouTube, I know of a few content creators who had entire videos written by ChatGPT (as a test, mind, but there's a creative field for you).

The keyword is "their workflow", right?
Wouldn't be possible for them to independently use it for their workflow like that without models being able to train on whatever someone can get their eyes on.

It gets even worse because the people behind AI monetize the tech based on data trained from various sources. If AI had stuck to generating photorealistic images (which have already existed for several years), artists probably wouldn't lose their minds.
The photorealistic stuff can arguably be a lot worse with deepfakes. A looot more damage is going to come out of deep fakes than AI 2d drawings, imo (heck, even the "this face does not exist" stuff was used for bot network social media profile pics).

We'll eventually reach a point where photographic evidence can't be trusted again unless it's on physical film. We're honestly pretty close.

Even with that, people might still be concerned if their photo is in the training data. The same argument could be made: "your photo isn't in our training data, no link to your identity is in our files". Would people buy that? Some might, some might not.
The same thing can be said about novels, copyrighted code, articles, and other things stuffed into chatgpt. Or, as I said, a lot more since, again, diffusion image models contain no images.

AI image gen staying photorealistic only is not even a realistic outcome to think about either way. It'll either become corporate models only or stay some variant of open source like it is now.

Actually, scratch that, the comic copyright case hints that AI-generated images will stay legal themselves, probably so long as they don't actually look very much like another image (since image copyright claims are judged by a human looking for similarities).

And, if that's true, AI image gen is already open-source and locally trainable, so it'd just be the reverse of emulation if things got locked down where the images are legal but the program isn't.

Then you get into even weirder realms of "what if someone uses technically legal AI art to make an image model?" Then you can have models trained entirely on AI art for inception legal dodge.

Also, it's unlikely, but if you're looking for things to get locked down on the model side, it's the Getty Images vs. Stability AI case result you're looking for.
 

Tamina

Regular
Regular
Joined
Dec 22, 2019
Messages
246
Reaction score
147
First Language
English
Primarily Uses
RMMZ
The same thing can be said about novels, copyrighted code, articles, and other things stuffed into chatgpt. Or, as I said, a lot more since, again, diffusion image models contain no images.

Whether they contain the image or not does not matter in the copyright debate. Since the whole point of copyright law is to protect human creators' privilege to make money, what matters is whether the image is being used or not, not if it is stored.

Diffusion models can't generate new images without data. If you use someone else's copyrighted image, which was created by a human to make money, and then you generate a new image with it and use it to make money, you are the one making money and the creators get nothing.

This rule applies to even small snippets of an image... if you want to use an icon from Shutterstock for your UI, you need to pay. If you want to use a texture for a 3D model from a texture pack, you need to pay. Even if you delete the source file of the icon or texture from your hard drive after producing the output, if your final UI or 3D model used the copyrighted image, you need to pay.

In that case, it is very unfair that if you want to use an image made by someone, you need to pay, BUT if you screenshot the image, give it to the AI, and the AI generates a new, similar image for the same function, you don't need to pay, only because an AI-generated image is supposed to be a "new image". This is discouraging.

The case for ChatGPT is quite different. If a novelist sells a novel, they are selling the entire story, not snippets of paragraphs or chapters or a plot summary. So ChatGPT learning how to write a story from another person's story does not affect a novelist's profit that much, if at all.

For code, according to this, code snippets aren't copyrighted:


Not everything you write is copyrightable – it must be sufficiently creative such that it deserves protection. For example, short phrases are rarely considered copyrightable nor are facts or functional pieces of a work. For source code, that means that anything that is very short or provides only functional capabilities would not be considered copyrightable.

So if I ask ChatGPT "please create a photo editing program like Adobe Photoshop for me" and it can write code for an entire program, then it will likely have copyright problems, as Adobe loses money.

If I ask ChatGPT to generate snippets of code for a function, it is probably safe. Before ChatGPT existed, snippets of code weren't copyrighted to begin with, unlike a small icon sold by another artist to be used as an asset.

You see, what matters is how the business model works, not how the AI generates output. The art asset business works differently from novels and code; that is why the debate is different.
 

TheoAllen

Self-proclaimed jack of all trades
Regular
Joined
Mar 16, 2012
Messages
7,526
Reaction score
11,916
First Language
Indonesian
Primarily Uses
N/A
I have written programs with single functions for just renaming files or for lazy branching, a function by itself can be a program, and a function can be reused in other programs depending on context.
And what I'm saying is that the majority of programming questions are not about this single function for a single purpose. You could be writing an app that the whole company has at stake, but you wondered "how do I concatenate strings again?"

Thaaat being said, you have to acknowledge that the number of staff to maintain an AI would be far less than what it replaces (or else companies wouldn't be pouring billions into these things).
While it might be true, I still need to know the context of what is replacing what. If you mean "programmers", perhaps we have different definitions of "programmers". Or did you mean something else?

But programmers do those mundane tasks,
... such as this definition.
It is not my definition of a programmer.

if they're gone, there are fewer man-hours required for programmers on a team to do a particular thing, and there are fewer crap jobs to dump on interns as they learn the ropes.
I would say the skill or talent of a programmer is innate. They are not learning the ropes from mundane tasks. They learn the ropes from a complex set of requirements, guided by a technical lead, or as autodidacts. I would say this is "natural selection", so I wouldn't get paired with crap peers who don't know how to program and only follow instructions from programming courses, to be honest. I might sound like an elitist, but it is what it is.

Before AI was a thing, programmers already had automation in their hands: a set of frameworks to make their lives easier, or an in-house framework. I even make my own tools to simplify my mundane tasks. AI would help them more.

There are a finite number of problems that companies and consumers want to pay money for, and on top of that, more and more of those problems will, over time, be taken up by AI.
It is already taken care of by AI. There's already machine learning to analyze company data. But who's going to use the AI again? The programmers. The CEO of the company or the director won't be touching it; they have more important matters to attend to. Again, wannabe programmers who only write HTML tags won't likely be the ones operating the AI. The competition for programmers is already fierce, and AI won't even make a difference.

Even before ChatGPT, AI Dungeon, KoboldAI, and other language models were being used as writing aids, using traceable snippets from writers' works (in AI Dungeon's case it revealed some seriously bad stuff used for training, but I digress).
I heard about AI Dungeon, but I don't know much about it.
Is it paid, or do they get any monetary gain at all, such as donations?

... Actually, never mind. That question probably contributes nothing. Tho, I'm still curious.
Wouldn't be possible for them to independently use it for their workflow like that without models being able to train on whatever someone can get their eyes on.
If they could train the AI using their own drawings, and exclusively for themselves (no one else can use it), then it might be fine.

The photorealistic stuff can arguably be a lot worse with deepfakes. A looot more damage is going to come out of deep fakes than AI 2d drawings, imo (heck, even the "this face does not exist" stuff was used for bot network social media profile pics).
I agree, but it is a whole new set of problems, different from what we are discussing (art theft); not saying it's the least of the problems or a solution, though.

We'll eventually reach a point where photographic evidence can't be trusted again unless it's on physical film. We're honestly pretty close.
Image editing is already a thing.

The same thing can be said about novels, copyrighted code, articles, and other things stuffed into chatgpt. Or, as I said, a lot more since, again, diffusion image models contain no images.
Copyrighted code is usually kept hidden and encrypted. I don't know about the other text works (EDIT: actually, Tamina nailed it well). And if you keep saying "diffusion models contain no images", I would also keep repeating "these diffusion models won't exist unless it's been trained from existing images", so why not train on other legal images you have permission for? It would also create the "same" diffusion models.

If I say, "don't use my drawing for training models", are you keep insisting "don't worry, I won't retain your drawing in my diffusion models"? Or are you going all the way around trying to scrap my entire gallery because you believe what you believe even though I said no?

Then you get into even weirder realms of "what if someone uses technically legal AI art to make an image model?" Then you can have models trained entirely on AI art for inception legal dodge.
I thought about this silly scenario too. I mean, if they want to train on humans with 6 fingers, more power to them.
 

123edc

Regular
Regular
Joined
Nov 17, 2021
Messages
392
Reaction score
270
First Language
german
Primarily Uses
RMMZ
what matters is whether the image is being used or not, not if it is stored.
actually, no ... whether an image is being used or not is irrelevant,
what matters is the question: is it a transformative work or not [i.e. how closely does it resemble said image]

So ChatGPT learning how to write a story from another person's story does not affect a novelist's profit that much, if at all.
... just think about all these newspapers, rewriting the text - delivered from the big news agencies - for their respective audiences ...

of course, it will hurt their profits, if that job can be done by an AI ...

Copyrighted code is usually kept hidden and encrypted.
usually, but not always ... there's more than enough unencrypted code out in the world ... sometimes it's not even possible to ...

"these diffusion models won't exist unless it's been trained from existing images",
again, the same argument is true for the text/novel-writing AI ...
it wouldn't exist unless it'd been trained on other / existing texts

If I say, "don't use my drawing for training models",
if you say it in clear, machine-readable form within your drawing's data ...
and you can prove to me that I used your drawing for training it

then yes, where I live, I would be getting a problem according to the law ...
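(for what it's worth, embedding such a statement is technically trivial ... a sketch using Pillow to write a text chunk into a PNG; the file names and the "ai-training" keyword here are just an illustration, not an agreed-upon standard that trainers are obliged to honor)

Code:
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical file names; "ai-training" is an illustrative keyword only.
image = Image.open("my_drawing.png")
metadata = PngInfo()
metadata.add_text("ai-training", "no")
image.save("my_drawing_tagged.png", pnginfo=metadata)

# Reading the tag back:
print(Image.open("my_drawing_tagged.png").text.get("ai-training"))  # "no"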
 

Tamina

Regular
Regular
Joined
Dec 22, 2019
Messages
246
Reaction score
147
First Language
English
Primarily Uses
RMMZ
actually, no ... whether an image is being used or not is irrelevant,
what matters is the question: is it a transformative work or not [i.e. how closely does it resemble said image]

Is this really the case? Copyright law is a complicated issue; I wouldn't say it definitely works this way unless I were a copyright law expert.

Case in point: Capcom was accused of copyright infringement in 2021 because they used unlicensed textures in RE4.

If you look at the texture on the left, it was transformed into a logo by a human. And it is pretty much unrecognizable in the final output. I absolutely can't tell that the texture is part of the logo by looking at it.
[Attached image: Screenshot_20230228-075317_Chrome.jpg]

And yet it is still copyright infringement.

Even if this texture were transformed even more, would that make it not copyright infringement? Where do you draw the line?

Right now there are many internet opinions about AI copyright issues, but most of them aren't opinions from law experts. And even law experts are still working on sorting this out. In other words: some of these opinions may become incorrect in the future once this is settled.

I wouldn't jump to conclusions this fast.

And that means... if you use AI-generated output in your commercial project now, there is a risk once this is settled in the future. Maybe AI art is safe, maybe not... or maybe it is somewhere in between, case by case based on the final output result. We don't know anything about it YET.

... just think about all these newspapers, rewriting the text - delivered from the big news agencies - for their respective audiences ...

of course, it will hurt their profits, if that job can be done by an AI ...

You still need real news reporters for the content, since ChatGPT doesn't have access to the newest info in the world.
 