Seeing other people’s AI art is like hearing other people’s dreams

If you’ve been on Twitter (or Discord or plenty of other places) this month and your feed is anything like mine, it’s currently full of weird machine-generated text and art. The launch of ChatGPT, Lensa, and other tools has made once difficult-to-access technology ubiquitous, and the result is an explosion of people posting their interactions with artificial intelligence.

I love these tools, and I’ve been playing with them for a while now — at least since the release of AI Dungeon 2 in 2019. But over the years, I’ve realized there’s a huge gap between the way I experience work I generate and work created by someone else. The closest way I can describe it is this: seeing somebody else’s AI art is like hearing the plot of someone else’s dream. Occasionally it’s fascinating. Often it’s dull. And for me, it’s almost never as much fun as experiencing the process.

Author Robin Sloan — who has experimented with AI fiction alongside traditional novels — mentioned the dream comparison to me in an interview about AI Dungeon last year. “I don’t think people quite know how dreams work, but I think it’s probably not too far afield from the way that the associative weirdness of an AI model unfolds,” he said. “And as everyone knows, you can have the wildest dream and then tell someone about it the next day. And for some weird reason, there is nothing more boring than hearing about somebody else’s dream.”

It’s clear by now that AI-generated work is often pretty mediocre without careful instruction: the better it gets, the more capably it imitates bland SEO-bait filler text or a generic stock image. But as Rob Horning notes, prompting has also become a kind of joke format, a way of showcasing your own cleverness at directing the AI.

“It allows people to show off that creative consumption, how clever they can be in prompting a model. It is as though you could have a band simply by listing your influences. You don’t even have to absorb those influences in practice; you can just name-check them. You just need to be familiar with the signifiers.”

AI whimsicality goes all the way to the top, where companies like OpenAI demonstrate how art generators handle instructions like “an armchair in the shape of an avocado” or “an illustration of a baby daikon radish in a tutu walking a dog.” There are genuine reasons to do this — it demonstrates how models combine multiple concepts and objects — but there’s also a sort of 2000s-era “lol so random” quality to it all.

And many prompts aren’t all that creative, as Horning notes.

“Even that level of knowledge is perhaps unnecessary. AI can take a nothing prompt and suddenly make it seem worthwhile, giving anybody something fun and surprising to share.”

This is where I disagree with Horning’s conclusion because, as AI art generators have gotten better, the bar for me finding them amusing or interesting has gotten progressively higher. Much current-generation AI is a so-so straight man in an improv comedy show, dutifully approaching whatever ridiculous prompt you offer with total seriousness. (I actually watched one Brooklyn comedy troupe run an AI improv night where AI Dungeon generated key plot beats, but unfortunately, the connection stuttered midway through the show and killed the premise.)

Some people are great at this kind of collaboration. Janelle Shane, author of the fantastic You Look Like a Thing and I Love You, keeps an incredible blog of surreal “AI weirdness” that teeters between plausibility and complete nonsense. Over the last week, I legitimately laughed at ChatGPT writing Bible verses about removing a sandwich from a VCR. I enjoy people finding the cracks in ChatGPT’s SEO-optimized boilerplate and DALL-E’s stock illustrations, forcing them to produce something genuinely strange.

But most of it is no longer that strange, and the results often make my eyes glaze over. I offered this take to my AI expert colleague James Vincent, and he said he hadn’t reached this saturation point… but he also said he likes hearing other people’s dreams, so I’m not sure that disproves my point.

It’s a stark contrast with how much I enjoy watching AIs respond to my own prompts. It’s not that I think I’m particularly smarter or funnier than most people. I just love the give-and-take of figuring out what a system like ChatGPT or DALL-E knows and guessing at ways to force it out of its comfort zone. A fanfic crossover between Stranger Things and the heist comedy series Leverage: extremely predictable. A crossover between Leverage and the experimental 1980s film Koyaanisqatsi: it’s vaguely aware the film has environmentalist overtones of some kind, but it also thinks “Koyaanisqatsi” is the name of a kaiju. (It is not.) I don’t expect you to find any of the resulting stories interesting, but I loved figuring out how to make them.

I’m not sure how common this preference is. Sloan, for instance, wasn’t describing other people’s posts; he was talking about directly communicating with an OpenAI GPT-powered storytelling AI. “I think people have different responses. Mine was not the wonder of a dream. It was much more like having someone tell me their dream in real time,” he explained. Sloan was one of comparatively few authors working with AI writing before the explosion of OpenAI-powered tools, but at the time I spoke with him, he’d mostly lost interest in the field.

That said, seeing everybody’s AI fiction and art doesn’t bother me, although there are plenty of open questions around AI’s copyright status, its potential for bias, and other serious concerns. Horning describes the process as submitting to an AI’s vision of the world, but to me, it feels like watching people figure out the limits of a video game. I just don’t love watching most people stream games, either — and given the popularity of Twitch, that might mean my feelings aren’t remotely typical.

Ironically, the funniest AI art I’ve seen in the past week was literally a description of someone’s dream. It’s an image from my colleague Victoria Song, who was testing a sleep app that makes AI pictures based on your dream logs. This was the result:

[Image: AI-generated creepypasta. Credit: Victoria Song]

This is a hilarious image to me, and like a lot of humor, I can’t quite convey why. Maybe it’s the context of putting this utter nightmare fuel in an app designed to help you sleep better. Maybe it’s the model clearly associating teeth falling out with something scary, then reverse-engineering a creepypasta image instead of anything resembling an accurate scene. AI art is best when it’s wrong and right at the same time, but that’s a harder balance than it seems.


