Hmm... I agree this is a bit circular. But let's put it this way.
I'll shed some light on how functions work in programming.
When you write a program, it usually depends on other things. You cannot just take a single function and call it a program; it likely won't run because it depends on other code.
When a person asks about programming, they're asking for insight into how to do something. A given function may (or may not) run on its own, but they will still need to implement their own version based on that insight to be compatible with their larger project. For example, I can't just drop easing functions into my program as-is; I need to fine-tune them to fit.
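To make the easing-function point concrete, here's a hedged sketch: a textbook easing curve maps a normalized t in [0, 1] to [0, 1], but a real project usually needs it wrapped to fit its own units and ranges. The adapter function, its parameters, and the pixel/millisecond units below are all made up for illustration.

```python
# Textbook ease-in-out quad: maps normalized t in [0, 1] to [0, 1].
def ease_in_out_quad(t):
    return 2 * t * t if t < 0.5 else 1 - (-2 * t + 2) ** 2 / 2

# The "fine-tuning" step: a hypothetical adapter that fits the curve
# into one project's own units (milliseconds in, pixels out).
def eased_position(elapsed_ms, duration_ms, start_px, end_px):
    t = min(max(elapsed_ms / duration_ms, 0.0), 1.0)  # clamp to [0, 1]
    return start_px + (end_px - start_px) * ease_in_out_quad(t)

print(eased_position(500, 1000, 0, 100))  # halfway through: 50.0
```

The math is reusable insight; the wrapper is the part each project has to write for itself.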
Home decoration, by contrast, still depends on the decoration itself, but if we're talking about a picture, it's totally fine to take one from one house to another. In other words, a picture is already independent on its own.
I have written programs that are single functions, just for renaming files or for lazy branching. A function by itself can be a program, and a function can be reused in other programs, depending on context.
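As a hedged sketch of that kind of single-function program (the prefix, directory, and function name below are made up, not anyone's actual script):

```python
import os

def rename_with_prefix(directory, prefix):
    """The whole program: prepend a prefix to every file in a directory."""
    for name in os.listdir(directory):
        old = os.path.join(directory, name)
        # Skip subdirectories and files already renamed.
        if os.path.isfile(old) and not name.startswith(prefix):
            os.rename(old, os.path.join(directory, prefix + name))

# usage (hypothetical): rename_with_prefix("/path/to/photos", "backup_")
```

One function, no dependencies beyond the standard library: a complete program on its own, yet still reusable inside a bigger one.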
I don't believe that would ever happen. Even if it did, the AI would still need maintenance from the actual programmers who built it. Unless you mean a hypothetical situation where the AI goes rogue, no longer needs maintenance, enslaves all humanity (or destroys it), can regenerate like a living being, and updates its own software and hardware.
I mean, that latter situation is totally possible because the totally not scary "AI self-controlled fighter jet test" thing already happened and apparently went well.
Thaaat being said, you have to acknowledge that the number of staff needed to maintain an AI would be far smaller than the number it replaces (or else companies wouldn't be pouring billions into these things).
I would argue those "mundane tasks" are not actually a programmer's work. If all someone can do is write basic syntax and a couple of HTML tags, things an AI can replace, and they have no analytical skills, they had no value to begin with; they should work on something else. Programmers solve unique problems, and unique problems appear every day.
But programmers do those mundane tasks. If those tasks are gone, fewer man-hours are required for programmers on a team to get a particular thing done, and there are fewer grunt jobs to dump on interns as they learn the ropes. There are a finite number of problems that companies and consumers will pay money to solve, and on top of that, more and more of those problems will, over time, be taken up by AI.
Let me come back to my point that this is about a pool of knowledge versus a creative endeavor.
As for your question "what about novels?", ask an actual writer about that. I don't have enough knowledge to speak for them.
However, as far as I can tell from social media, ChatGPT is mostly used to generate silly conversations and memes, ask about plugins, and so on. Content generation (journals, etc.) is still frowned upon.
Even before ChatGPT, AI Dungeon, KoboldAI, and other language models were being used as writing aids, producing traceable snippets from writers' works (in AI Dungeon's case, it revealed some seriously bad stuff in the training data, but I digress).
On top of that, a few medium-name tech/market news outlets let AI write their articles. It blew up in most of their faces because they didn't double-check the output, but that's a topic for another time.
But even silly conversations, memes, and the like are arguably creative work; granted, not normally paid creative work (though arguably 90% of YouTube fits into silly conversations and memes).
Speaking of YouTube, I know of a few content creators who had entire videos written by ChatGPT (as a test, mind you, but there's a creative field for you).
The keyword is "their workflow", right?
It wouldn't be possible for them to independently use it in their workflow like that without models being able to train on whatever someone can get their eyes on.
It gets even worse because the people behind AI monetize the tech based on data scraped from various sources. If AI had stayed at generating photorealistic images (which has existed for several years), artists probably wouldn't be losing their minds.
The photorealistic stuff is arguably a lot worse because of deepfakes. A looot more damage is going to come out of deepfakes than AI 2D drawings, imo (heck, even the "this person does not exist" stuff was used for bot-network social media profile pics).
We'll eventually reach a point where photographic evidence can no longer be trusted unless it's on physical film. Honestly, we're pretty close.
Even with that, people might still be concerned that their photos are in the training data. The same argument could be made: "Your photo isn't in our training data; there's no link to your identity in our files." Would people buy that? Some might, some might not.
The same thing can be said about novels, copyrighted code, articles, and everything else stuffed into ChatGPT. Or, as I said, even more so, since, again, diffusion image models contain no images.
AI image gen staying photorealistic-only isn't even a realistic outcome to think about either way. It'll either become corporate-models-only or stay some variant of open source, like it is now.
Actually, scratch that: the comic copyright case hints that AI-generated images themselves will stay legal, probably as long as they don't look very much like another image (since image copyright claims are judged by a human looking for similarities).
And if that's true, since AI image gen is already open source and locally trainable, a lockdown would just be the reverse of the emulation situation: the images are legal, but the program isn't.
Then you get into even weirder realms of "what if someone uses technically legal AI art to train an image model?" You could have models trained entirely on AI art as an inception-style legal dodge.
Also, it's unlikely, but if you're looking for things to get locked down on the model side, the Getty Images vs. Stability AI case result is the one to watch.