First, I’m glad that Kyle Drake asks people to call it “machine learning” instead of AI. “AI” is a stupid buzzword used to sell the same kind of technology our old flip phones had with their predictive dictionaries. Calling it “intelligence” is insulting and misleading.
What everyone is afraid of is
workflow automation. Right? The last start-up I was a part of was originally an AI R&D company. The founder and CEO, a man actually deeply concerned about the state of the environment for the future of his young son, was an “AI engineer” himself (that’s not the only type of software engineering he knows, by the way), and worked as a consultant for companies like Zapier. Long story short, it turns out machine learning was
NOT the tool our userbases needed the most (or at all), and we ended up letting go of every single AI engineer in the company. We pivoted
away from AI, mostly because it added nothing to our product’s UX, and actively
detracted from it. I wish we had axed ML from our product earlier—everyone wished that.
I’ve befriended photographers who report to me that they and their colleagues have lost 60% of their income to AI. I see other
photographers who say they cater to a clientele that has zero interest in AI, so it doesn’t affect them negatively. I befriended a photographer who actually now passionately creates
detailed, Hiëronymus Bosch-esque murals using manual lassoing of areas (“in-painting”), continuous refining, and working within a manually set
node-based environment, much like the node-based environments anyone working in VFX has to manipulate. (Obviously, giant tapestries like these are separate from his photography work.) His wife is an art photographer and he has introduced her to his new ML passion-hobby; I sat in on him going through his workflow to create a composition about mortality, and she made an amusing little work, using the same techniques, on AI robots being used for money laundering. I befriended another artist who began his artistic career as a
lapidarist, paints in physical media and was a pioneer and leading
fractalist,
a man who has been creating art for longer than I have lived, and now creates generative “digigraphs” in much the same complex, intentional workflow as my previous friend. (And he still does lapidary and works with other media, as well.) Last year, I had the privilege of coming in contact with a group of French avant-garde AV and graphic designers who use generative assets in many of their collage-based graphic design and video works. I’m also close to the games industry and hear of the coming and present bloodbaths due to ML being used to replace the artists who used to make promotional and concept art. And I have former colleagues who work with the cinema and TV actors who are rightfully concerned that they could be replaced by generative actors (or even impressions of themselves, if they give up the rights to their own likeness!), and the striking American TV and film writers who do not want to lose their jobs to machine-generated schlop.
Nobody except money launderers wants the (increasingly obvious) pump-and-dump of mass-generated, lazy cash grabs, least of all actual artisans. Nobody wants to lose their jobs to this. Unfortunately, the nature of publicly traded companies is to garner as much short-term revenue as possible, and that often means cutting all conceivable corners—and then some. We now have
multiple generations of experience demonstrating that
any new technological advancement will be seized on by capital in its relentless pursuit of quickening the pace of profit extraction and accumulation. Who is using the technology, and how? The same people will always try to take advantage, you know, and the same people will always get trampled on. The tool doesn’t matter, in a system that entrenches inequality.
Honestly, I love what machine learning can produce. When trained exclusively on paintings from the 18th and 19th centuries, then prompted with word salads (or beautiful prose), it can produce stunning surrealist, abstract illustrations. Humans don’t think like machines, these machines can’t yet think like humans, so they produce something hazy, unreal, “insane” yet of perfect sense to their own internal workings,
and I genuinely love that. There’s also the inherent fun in generating shitposts, and you can tell who goes into this with a Newgrounds type of stance versus the hacks who just want to sell poorly generated pictures of kangaroos on Adobe Stock. It was
part of my last job to identify and
ban these false “artists”. The greed and stupidity bound up together were insufferable and insulting.
...I actually have a phrase I use: the "value of potentiality", i.e. what is the value of all the potential outcomes you can get from an interaction? What new things can you learn? What relationships can you build? What can you offer to the person or thing you are interacting with?
In almost all cases the value of potentiality from AI-generated work is almost 0, whereas it's quite high for any human-made work. For example: if a human writes the dialogue for a game character, I can find out who they are and contact them, saying how much I like their work (or hate it!), and who knows where that interaction could lead. We could become friends, they could marry my cousin, we could end up stranded on a desert island, or we could just have a short interesting conversation ~ all of those are potential outcomes that add some value to our lives, and they still exist, even if none of them ever happen and we never interact!
(That's why existing is fun!)
However, if an LLM generates the dialogue of a game character, then there is no further interaction I can have; the potential outcome is 0, nothing, no further possibilities. This potential interaction is part of what makes engaging with art so fascinating and powerful to people, and it's why content generated by an LLM will never really be a threat; even if the content is of a high quality, it cannot satisfy the human need for possibilities, and therefore it cannot satisfy the purpose of art.
It might seem flashy for a while, but what you'll quickly find is that people will gravitate towards art that has a higher potentiality. (For example, that's why YouTube beats TV: your potential of having an interaction with a YouTuber feels more real than having an interaction with a TV show.)
Yes, exactly! Humans learn from doing, from working and interacting with one another,
from bouncing things off, experimenting, failing, refining, trying out new ideas, and finally executing well. Convenience at the expense of human experience is what the technology around us affords, everywhere. It’s not just ML; everything is being prioritised and designed around convenience. And ironically, our quality of life suffers. I think that is a real problem, and one that cannot be addressed by simply condemning a technology which companies are going to attempt to leverage whether you like it or not. A struggle has to be waged for rules, regulations, and the right to live off of one’s artistry.