• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Yikes! Did season 1 episode 6 use AI-generated art?

How adorable. You have faith based on flimsy "proof".
And you are skeptical based on being skeptical, which is cute as a clumsy kitten.

An AI kitten, because you can tell from the paws.

I mean, that's all you've got. Anthony Pascale is a stand-up guy and he does have contacts in the productions. If he says he checked it out, he has.

That's evidence, flimsy or not. Mind you, it's not proof.

OTOH, you're simply pushing your suspicions. So, while you may consider that poster to be credulous, there's no excuse for being high-handed toward them when they've offered more information than you've mustered so far.
 
Trying to stop A.I. from affecting human creativity is like trying to stop the ocean with a broom.

To an extent, but the question is how it's used, and how much. People are increasingly finding out that it actually creates more work rather than less, because it takes more human effort to fix its mistakes and make it presentable than it would've taken just to have a human do it in the first place. Most of what's being aggressively and deceptively marketed as "AI" (i.e. large language models and large multimodal models) is turning out to be a boondoggle, the latest overhyped tech bubble that's probably heading for a collapse as consumers and businesses realize how badly it falls short of the hype.

John Scalzi did an excellent blog post the other day assessing the realities of the situation, including the limited ways in which "AI" of one sort or another is inevitable, and the ways in which it isn't:



What if a story is fully created and imagined by a human being, but that person uses A.I. to solve his writer’s block? What if A.I. is used to brainstorm ideas, make dialogue better, or help structure the plot? What if it’s only used to help visualize storyboards or concept art?

I'm skeptical that any genuinely talented writer would need LLMs to do any of those things. An LLM's output is averaged out from thousands of inputs, and thus is generic and homogenized by its very nature. Worthwhile art is that which stands out from the generic.

Also, questions like these ignore the larger issues like whether the technology is based on plagiarized work, or the damage it's doing to the environment, or the way industries are passing the electricity costs onto consumers and using the tech as an excuse to fire their employees. Those ethical questions should be solved before anyone starts talking about the mere creative applications of its use.


Maybe the studio is thinking twice about releasing the name of the individual who drew the art because people may badger him on social media claiming his work is not his... just saying

The article did not specify the gender of the art staffer.

And I doubt they're hiding the name for any reason; it's probably just that it was someone low-ranked who wouldn't normally be given screen credit for their work, and they're treating this the same way they'd treat anything else done by a staffer on that level.


Ironically, some of the people who are championing human creativity may be throwing shade on REAL creators by claiming their content is AI when it's not.

Right. We should never make assumptions about other people without evidence, or take caution too far to the point of reflexively assuming the worst.


What's unfortunate is that this sort of thing is exactly what AI should be used for. It's graphics that are onscreen for mere moments. The time spent drawing those could have been better used.

I don't agree AI should've been used for this, because it's exactly the kind of work that graphic artists are supposed to do, and replacing them with machines means they aren't getting enough work to live on or enough experience to rise through the ranks. The problem with arguing that it's okay to replace little jobs is that people have to learn from the little jobs in order to rise to the bigger ones.

For example, to fact-check the Sisko family tree.

Maybe it could be used for supplementary tasks of some sort, but I wouldn't trust the tech to fact-check anything, given how readily it hallucinates nonsense. LLMs have no concept of fact or accuracy, they're just models of the structure of language.

Also, what "facts" are there that could possibly have been checked? Most of the names beyond the handful mentioned in DS9 would have been made up for the episode, so there are no outside facts to compare them to. It looks like the family tree omits Ben's siblings and his and Kasidy's child, but they were never named onscreen, so an automated process wouldn't have caught those omissions; only human judgment and imagination could have filled in those gaps.



One of the most exhausting aspects of Ai is the erosion of trust it engenders. Ai has supercharged our scepticism: anyone at any skill level can create an increasingly convincing fake of anything in just a few minutes, and everyone is an expert in sniffing it out, myself included.

The important thing is to remember to apply equal skepticism to our own assumptions and opinions. If we're going to question others, we need to start by questioning ourselves.


I look at this Trek comic and I see elements that I think look Ai and I wonder whether it's hybrid work. Some of the backgrounds appear to be photos that have been drawn over.

Which is a very common practice in comics art -- look at Gray Morrow's or Deryl Skelton's work in DC's Trek comics from the 1980s-90s. So it hardly constitutes evidence of "AI" -- in fact, I'd say it's just the opposite, since an LLM would statistically sample many different images rather than tracing a single one.

However, some of the interiors have detailing that looks like Ai hallucinations, and ‘Not Sulu’ has some pretty ropey hands in a couple of shots that go beyond bad drawing – but then Ai is good enough now that bad hands aren’t a clear-cut signifier.

Also, hands are hard to draw. You can find plenty of badly drawn hands in comic art predating "AI." Plenty of rough, sketchily drawn background art too. Humans are perfectly capable of drawing badly without computer assistance.


There’s nothing to gain from admitting that they did use Ai on that comic and it’s plausible that they didn’t and it was just a rushed effort, but thanks to Ai I’m not 100% convinced. As I said, exhausting.

We can never be 100% certain of anything, but that's no reason to jump to the conclusion that it's false. We can just follow the evidence and assess probabilities, accepting the best evidence-based model we have until and unless further evidence gives us reason to modify it. And we should remember that the burden of proof is on the accuser, and not assume something was done by AI unless we can rule out every alternative explanation.
 
I mean, it's one thing to question the validity of the product, it's another thing entirely to accuse the people behind it of lying about it. I mean, in the long run, this seems like such an inconsequential thing to me that if it were AI they just would have said, "yeah we used AI, sorry" rather than going on record saying otherwise.
 
It's very possible it was just rushed to be ready in time (as all screen graphics and whatnot in Trek are, hence the typos and errors, because they're not meant to be freeze-framed and zoomed in on), but looking at those changing collars...
 
To an extent, but the question is how it's used, and how much. People are increasingly finding out that it actually creates more work rather than less, because it takes more human effort to fix its mistakes and make it presentable than it would've taken just to have a human do it in the first place. Most of what's being aggressively and deceptively marketed as "AI" (i.e. large language models and large multimodal models) is turning out to be a boondoggle, the latest overhyped tech bubble that's probably heading for a collapse as consumers and businesses realize how badly it falls short of the hype.

John Scalzi did an excellent blog post the other day assessing the realities of the situation, including the limited ways in which "AI" of one sort or another is inevitable, and the ways in which it isn't:





I'm skeptical that any genuinely talented writer would need LLMs to do any of those things. An LLM's output is averaged out from thousands of inputs, and thus is generic and homogenized by its very nature. Worthwhile art is that which stands out from the generic.

Also, questions like these ignore the larger issues like whether the technology is based on plagiarized work, or the damage it's doing to the environment, or the way industries are passing the electricity costs onto consumers and using the tech as an excuse to fire their employees. Those ethical questions should be solved before anyone starts talking about the mere creative applications of its use.
Fair points, and I agree the way it’s being marketed is overhyped, and that it can often create more work fixing errors rather than just having a real person do it.

I’m mainly pointing out that even if this A.I. craze fizzles out, A.I. tools are still going to sneak into creative workflows in smaller ways, and it’ll be almost impossible to police or prove who used what, especially as it gets better over time.


I don’t necessarily disagree about LLM output being generic and “averaged out.” But I don’t think using it automatically means the writer isn’t talented - some people might use it the same way they’d use an editor, a sounding board, etc.

And I definitely don’t disagree about the larger ethical issues either. The plagiarism concerns and companies using it as justification to cut jobs are all real. I just think those issues are exactly why it’s going to be so hard to regulate in practice, because this technology is already spreading faster than the rules can catch up to stop or curtail it.
 
To an extent, but the question is how it's used, and how much. People are increasingly finding out that it actually creates more work rather than less, because it takes more human effort to fix its mistakes and make it presentable than it would've taken just to have a human do it in the first place
This is definitely NOT true at all. I’ve been saving A LOT of time since I started using AI for my side stuff in September, and getting much better results.
I use it mainly to design posters for events of a nonprofit I’m part of (I need to do around 6 per month, mainly for internal use, and the budget is zero) and to write formal emails.

Most of what's being aggressively and deceptively marketed as "AI" (i.e. large language models and large multimodal models) is turning out to be a boondoggle, the latest overhyped tech bubble that's probably heading for a collapse as consumers and businesses realize how badly it falls short of the hype
This is exactly the kind of talk I hear from people who have never actually tried to use one.
 
The discussion now seems to be between those who accept the revelation and those who do not, as the OG art hasn't been produced and an artist hasn't been named.
 
I mean, it's one thing to question the validity of the product, it's another thing entirely to accuse the people behind it of lying about it. I mean, in the long run, this seems like such an inconsequential thing to me that if it were AI they just would have said, "yeah we used AI, sorry" rather than going on record saying otherwise.

Yes. In those instances where productions have used "AI," like the MCU Secret Invasion titles, they've admitted it when questioned. So there's no reason to doubt them here.


I’m mainly pointing out that even if this A.I. craze fizzles out, A.I. tools are still going to sneak into creative workflows in smaller ways, and it’ll be almost impossible to police or prove who used what, especially as it gets better over time.

Yes, that's addressed in the John Scalzi article I linked to. The problem is that "AI" is being used as a marketing term for multiple different technologies, many of which are not AI in any meaningful sense, so the hyping of the plagiaristic slop machines just obscures the question of what technologies may actually end up being useful once the bubble bursts.

Really, Scalzi explains it far better than I can. The essay is free at the link, and it's a really good, informed, balanced piece.


I don’t necessarily disagree about LLM output being generic and “averaged out.” But I don’t think using it automatically means the writer isn’t talented - some people might use it the same way they’d use an editor, a sounding board, etc.

Why not just ask an actual editor or beta reader, then? One of the problems I've heard about LLMs is that they don't criticize, they just tell you what you want to hear and reinforce your preconceptions or biases or mistakes. They have no actual judgment or awareness but just react to what you put into them, so it inevitably becomes a self-reinforcing loop. A good editor or beta reader is someone who can tell you when you're wrong, someone who can say "no" to you. I only became a professional writer because Analog editor Stanley Schmidt spent 5 years educating me with his rejection letters, telling me why my stories didn't work so that I could make the later ones better and eventually produce something good enough for him to buy.


And I definitely don’t disagree about the larger ethical issues either. The plagiarism concerns and companies using it as justification to cut jobs are all real. I just think those issues are exactly why it’s going to be so hard to regulate in practice, because this technology is already spreading faster than the rules can catch up to stop or curtail it.

I think the rules definitely could catch up to it if we still lived in a country whose government was concerned with protecting its citizenry rather than helping its billionaire donors get even richer. The abuses of the technology are just a symptom of the larger problem of the oligarch class taking over all the levers of power to benefit themselves.


This is definitely NOT true at all. I’ve been saving A LOT of time since I started using AI for my side stuff in September, and getting much better results.

Just because it's not true for you doesn't mean it's not true for anyone.


This is exactly the kind of talk I hear from people who have never actually tried to use one.

Please read the John Scalzi article.
 
Why not just ask an actual editor or beta reader, then? One of the problems I've heard about LLMs is that they don't criticize, they just tell you what you want to hear and reinforce your preconceptions or biases or mistakes. They have no actual judgment or awareness but just react to what you put into them, so it inevitably becomes a self-reinforcing loop
Another myth from someone who doesn’t know how to properly use this TOOL.
Just because it's not true for you doesn't mean it's not true for anyone.
It was not I who made wild blanket statements. I am only someone who has been using this tool for a lot of tasks, with great satisfaction, for over five years.
Please read the John Scalzi article.
Please don’t make wild statements about things you obviously know nothing about.
 

You both make some good points. I think it really depends on the work. My company keeps trying A.I., but it hasn't given them the results they want yet. We work fraud cases, and there are so many clients and ways of doing the work that A.I. just can't keep up yet. They'll keep trying, though. For now we use it as a tool for simple tasks.

I do know it will take my job one day. It's depressing, but there's nothing I can do about it. Progress.
 
Please read the John Scalzi article.
There's nothing there that rebuts or addresses jackoverfull's observations in a meaningful way, other than that Scalzi uses Microsoft Word and isn't impressed by its AI add-ins.

I tend to agree that the tech is not an economic threat to all narrative fiction writers. The AI videos I've seen vary in quality most directly according to whether there are human writers and performers behind the generated imagery. But the demand for writers in most fields is going to take a nose-dive pretty quickly.

The question is not "Will AI replace human workers?" in thus-and-such a field. It is, rather, "How many fewer workers will there be a market for, and what is likely to be their market value?" (As in, "How much will I get paid?") The answers are: a) lots fewer, and b) less than you are now.

For the most part, this "debate" continues to go on between people with a strong emotional bias against AI and people who work on it or have embraced working with it in various respects.
 
You both make some good points. I think it really depends on the work. My company keeps trying A.I., but it hasn't given them the results they want yet. We work fraud cases, and there are so many clients and ways of doing the work that A.I. just can't keep up yet. They'll keep trying, though. For now we use it as a tool for simple tasks.

I do know it will take my job one day. It's depressing, but there's nothing I can do about it. Progress.
What I find impressive is also how fast it’s evolving. One year ago I couldn’t do what I do now at all. Right now I’m trying to have it automate part of a process (event publishing), and it’s failing miserably UNLESS I turn on the “thinking” mode (there it works about half the time), which I can’t keep on permanently, as it’s much slower and can only be used for a while with my subscription. I have no doubt that the model I’ll be using in six months or so will handle these tasks with no issues.
 
Here's a more useful article about the current state of AI as far as its penetration into the economic and creative spheres.

The writer is somewhat alarmist in his expectations - but at least he's someone with broad professional experience and understanding of the subject matter.



Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren't paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they'd been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn't have believed if you'd described it to yourself a month earlier.

I think we're in the "this seems overblown" phase of something much, much bigger than Covid.
"But I tried AI and it wasn't that good"

I hear this constantly. I understand it, because it used to be true.
If you tried ChatGPT in 2023 or early 2024 and thought "this makes stuff up" or "this isn't that impressive", you were right. Those early versions were genuinely limited. They hallucinated. They confidently said things that were nonsense.

That was two years ago. In AI time, that is ancient history.

The models available today are unrecognizable from what existed even six months ago. The debate about whether AI is "really getting better" or "hitting a wall" — which has been going on for over a year — is over. It's done. Anyone still making that argument either hasn't used the current models, has an incentive to downplay what's happening, or is evaluating based on an experience from 2024 that is no longer relevant. I don't say that to be dismissive. I say it because the gap between public perception and current reality is now enormous, and that gap is dangerous... because it's preventing people from preparing.

How fast this is actually moving

Let me make the pace of improvement concrete, because I think this is the part that's hard to believe if you're not watching it closely.

In 2022, AI couldn't do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.

If you haven't tried AI in the last few months, what exists today would be unrecognizable to you.

There's an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help. About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours. The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours. And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months.
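For a sense of how fast that doubling compounds, here's a quick back-of-the-envelope sketch: plain exponential extrapolation from the figures quoted above (roughly a five-hour task horizon doubling every seven months). This is purely illustrative; it is not METR's actual methodology, and the function name and defaults are my own.

```python
def projected_horizon_hours(months_from_now, current_hours=5.0, doubling_months=7.0):
    """Exponential extrapolation of the quoted 'task horizon':
    the length of expert task (in hours) a model can finish end-to-end."""
    return current_hours * 2 ** (months_from_now / doubling_months)

# Each 7-month period doubles the horizon; four periods (28 months)
# would take the quoted ~5-hour figure to ~80 hours of expert work.
for months in (0, 7, 14, 28):
    print(f"+{months:2d} months: ~{projected_horizon_hours(months):.0f} expert-hours")
```

If the doubling period were instead the faster four-month figure mentioned, the same 28 months would give seven doublings rather than four, which is the whole point of arguing over the exponent.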
 
What I find impressive is also how fast it’s evolving. One year ago I couldn’t do what I do now at all. Right now I’m trying to have it automate part of a process (event publishing), and it’s failing miserably UNLESS I turn on the “thinking” mode (there it works about half the time), which I can’t keep on permanently, as it’s much slower and can only be used for a while with my subscription. I have no doubt that the model I’ll be using in six months or so will handle these tasks with no issues.

No doubt. Right now seedance2 can make a few seconds of movie-quality film. Estimates are it will soon be able to generate a full Hollywood-quality movie, by the late 2020s to early 2030s. It will still need human involvement to guide it with revisions and edits. The generation of the initial movie might take several hours to a couple of days, but that's much faster than humans in Hollywood. After the movie is generated, the human can make edits, fix any errors it sees, etc. So it's coming. It shows the speed of AI enhancement. Do I like it? Not for movies. But some love it, and it's going to happen no matter what.
 
@Christopher

Perhaps I didn’t express myself as succinctly as I could have done, and being a visual artist rather than a writer, I fear I’m not going to do any better now ;) With some of my comments I was musing that it was possible that the work was a hybrid of Ai and human. I wasn’t intending to propose that it was definitively one thing or the other, but working as a creative I could imagine the type of pipeline involved between the two if it had been hybrid.

When I pointed out the backgrounds I meant that they looked like a human had drawn over them, and was supporting the idea that a human had meaningfully been involved with the making of this comic. As for the hands; yeah, they’re a pain to draw and there are indeed plenty of bad examples, of which I’m sure I’ve contributed many! Perhaps when the person involved drew a hand with 5 fingers and a thumb (possibly 2 thumbs), that really was just an error, albeit quite a big one.

I have no intention of dying on this hill, if they’ve said a human created all of it, great. We’ll have to take them at face value. Am I still sceptical? A bit (as I’m sure you can tell), but what can you do. Unless something else comes to light, which I don’t think it will, this is case closed.

As for questioning my own assumptions and opinions as you suggest, well, obviously I’m going to be biased :) The idea that I could lose my job to this is an emotional one. Seeing younger artists already lose their jobs to this is emotional. The idea that they won’t get the chances I had because some tech bros decided human creativity should be turned into the equivalent of pulling a one-armed bandit is something I can only feel great fury over, and I mourn their lost potential. With that in mind, I think I’ve been quite measured in my responses. I get why users find it appealing, why they wouldn’t necessarily understand what the price of all this was; how could they? There are countless disciplines I know absolutely nothing about.

But hey ho, maybe this will be like when people proclaimed Kindle would be the death of physical books, I doubt it, but maybe...and with that, I’m clumsily tapping out on this thread :)
 