To an extent, but the question is how it's used, and how much. People are increasingly finding out that it actually creates more work rather than less, because it takes more human effort to fix its mistakes and make it presentable than it would've taken just to have a human do it in the first place. Most of what's being aggressively and deceptively marketed as "AI" (i.e. large language models and large multimodal models) is turning out to be a boondoggle, the latest overhyped tech bubble that's probably heading for a collapse as consumers and businesses realize how badly it falls short of the hype.
John Scalzi did an excellent blog post the other day assessing the realities of the situation, including the limited ways in which "AI" of one sort or another is inevitable, and the ways in which it isn't:
Because it feels like a good time to do it, some current thoughts on “AI” and where it, we and I are about the thing, midway through February 2026. These are thoughts in no particular o…
whatever.scalzi.com
I'm skeptical that any genuinely talented writer would need LLMs to do any of those things. An LLM's output is averaged out from thousands of inputs, and thus is generic and homogenized by its very nature. Worthwhile art is that which stands out from the generic.
Also, questions like these ignore the larger issues, like whether the technology is based on plagiarized work, or the damage it's doing to the environment, or the way industries are passing the electricity costs on to consumers and using the tech as an excuse to fire their employees. Those ethical questions should be solved before anyone starts talking about its mere creative applications.
The article did not specify the gender of the art staffer.
And I doubt they're hiding the name for any reason; it's probably just that it was someone low-ranked who wouldn't normally be given screen credit for their work, and they're treating this the same way they'd treat anything else done by a staffer on that level.
Right. We should never make assumptions about other people without evidence, or take caution too far to the point of reflexively assuming the worst.
I don't agree AI should've been used for this, because it's exactly the kind of work that graphic artists are supposed to do, and replacing them with machines means they aren't getting enough work to live on or enough experience to rise through the ranks. The problem with arguing that it's okay to replace little jobs is that people have to learn from the little jobs in order to rise to the bigger ones.
Maybe it could be used for supplementary tasks of some sort, but I wouldn't trust the tech to fact-check anything, given how readily it hallucinates nonsense. LLMs have no concept of fact or accuracy; they're just models of the structure of language.
Also, what "facts" are there that could possibly have been checked? Most of the names beyond the handful mentioned in DS9 would have been made up for the episode, so there are no outside facts to compare them to. It looks like the family tree omits Ben's siblings and his and Kasidy's child, but they were never named onscreen, so an automated process wouldn't have caught those omissions; only human judgment and imagination could have filled in those gaps.
The important thing is to remember to apply equal skepticism to our own assumptions and opinions. If we're going to question others, we need to start by questioning ourselves.
Which is a very common practice in comics art -- look at Gray Morrow's or Deryl Skelton's work in DC's Trek comics from the 1980s-90s. So it hardly constitutes evidence of "AI" -- in fact, I'd say it's just the opposite, since an LLM would statistically sample many different images rather than tracing a single one.
Also, hands are hard to draw. You can find plenty of badly drawn hands in comic art predating "AI." Plenty of rough, sketchily drawn background art too. Humans are perfectly capable of drawing badly without computer assistance.
We can never be 100% certain of anything, but that's no reason to jump to the conclusion that it's false. We can just follow the evidence and assess probabilities, accepting the best evidence-based model we have until and unless further evidence gives us reason to modify it. And we should remember that the burden of proof is on the accuser, and not assume something was done by AI unless we can rule out every alternative explanation.