• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Yikes! Did season 1 episode 6 use AI-generated art?

Source the scientific study that was done in 2025.

Even for those rare people for whom it does save time, they're only looking at a time savings of about 2.8% once you factor in all the additional work created.
lol, perhaps try using an AI to explain to you what you actually linked.


My company seems to be switching from full automation to A.I. assistants. So supposedly each agent will have an A.I. agent to assist in our work. It's designed to up our output. They have been talking about A.I. for about 2.5 years, but it seems to be taking them forever to decide on and implement a program.
This is the correct way as of now.
You can’t just tell the AI “do this” and leave it unsupervised at the moment; it will produce unwanted results. As a tool, once you learn how to use it, it’s already invaluable though.


AI is a bubble insofar as it is currently overvalued and there's no way for it to generate enough additional profit to make the current investments feasible.

It is very much like the dot-com bubble in this regard. It will burst. But it will NOT be the end of the internet.

AI is here to stay. AI won't be the answer to everything, certainly most ideas pitched today are crap, and it is still finding its use case & profitability.
But it will be part of our lives going forward from now on, and it will never disappear. The people and companies who don't adopt it will either keep to their niches or simply be left behind, like people today who live and work without the internet (and don't get me wrong - there's a shocking amount of people who live without a smartphone & internet!)

No, a lot of people seem convinced that it’s a trend that will die. It’s not and it won’t.

A weird conclusion to make when only a few posts earlier I just said that I am incorporating AI tools in my work as a graphic designer (at the behest of the agency that employs me), and yet I’m very much questioning its use and ethics. I’m afraid it’s not quite as black and white as you like to think.
So your point was?


What you are describing is “a bubble bursting”. The vast majority of people seriously talking about this subject seem to agree that we’ll likely see the bubble bursting, but that of course doesn’t mean AI as a technology will go away. Just the clients’ willingness to spend money on it; that will likely go away. And with it businesses that made themselves over-reliant on said clients spending money on AI products and services.
As said, this has already happened. Revolutionary technology appears > companies start investing in it like crazy in the hope of coming out on top > most of them won’t, and fail. Remember how many search engines existed back then? How many computer platforms and OSes? How many browsers? There is only so much space in a market.
 
So your point was?
:confused: Huh? You claimed “people questioning AI are invariably those who don’t use it” and I pointed out that that assumption is false with myself as proof. The point was that what you said is false. You want to dismiss a critical view of AI as coming only from those who don't use it and I’m telling you that this is not the case.

As said, this has already happened. Revolutionary technology appears > companies start investing in it like crazy in the hope of coming out on top > most of them won’t, and fail. Remember how many search engines existed back then? How many computer platforms and OSes? How many browsers? There is only so much space in a market.
Okay, so you agree that it’s happening. Why did you then say earlier that you “find it incredibly amusing when people say it’s “a bubble” that will burst soon”, how people said that about earlier technologies as well and were “invariably dead wrong”? So are people wrong in assuming the bubble is or will burst or is it actually already happening? Those two seem to be mutually exclusive.

You also seem to be puzzled by why people would be troubled by this bubble-bursting. Yet you recognize that “companies failing to capitalise on AI“ will go bankrupt. Surely you don’t mean to tell me I just shouldn’t worry about my employer potentially going bankrupt, right?
 
Maybe I’m assuming wrong, but I’m honestly a little surprised that you don’t seem to have at least given ChatGPT a try. Not for generating material that you would submit in any kind of professional capacity of course, but just in a casual way out of curiosity to see what the hype is all about. ChatGPT is free right now and can be used even without creating an account. As I said, I am highly skeptical of AI myself, but would still recommend giving at least one chat a go. At the end of the day it’s definitely a weirdly sycophantic algorithm that guesses which words often go together. But I think it helps having experienced it first-hand.
As someone currently in a similar position to Christopher, if any of my clients thought for a moment that I used an LLM in my work I'd lose a significant portion of my monthly income.
 
Maybe I’m assuming wrong, but I’m honestly a little surprised that you don’t seem to have at least given ChatGPT a try.

It's built on work stolen from my fellow authors, and quite probably from me. (I know for a fact that Anthropic stole from me, and I've already applied for my share of the settlement in the class action suit.) I find its very existence anathema. Not to mention that our planet is already teetering on the brink of an irreversible climate catastrophe, and the added heat load of runaway "AI" use is pushing it over the edge even faster. I'm sure as hell not going to add to that out of mere curiosity.


I’m afraid the contempt for AI slop or even what really constitutes AI slop is not as prevalent and universally understood as one might think. In my experience the discussion about AI online can give a warped impression of how people IRL talk and receive AI generated content. And there’s also definitely the kind of client who believes using AI in a product or marketing campaign will have the effect of making their business appear more “future savvy” and forward-thinking. There’s absolutely a kind of hype to having used AI in a very visible way at least once if you’re a business, because then you can claim to your investors that you’re working with the newest tools, I guess.

It would hardly be the first time people were fooled by marketing hype into believing something was more beneficial than it really is, or that something harmful was actually beneficial. There was a time when heroin was advertised as a miracle health drug. (It was actually a brand name meant to be pronounced "hero-in," because it would make you heroically strong and fit.)


What’s perhaps an interesting counter movement to that is how some corporations will now very visibly publish human created campaigns, focussing on communicating the fact that they are valuing hand-made, human art.

As I said, that's pretty much the default among fiction publishers these days.


Well, I think one needs to be careful not to extrapolate their own individual experiences to other areas. What I wrote was very much written from a perhaps limited perspective of a graphic designer. I do think there are fields where AI probably does make you faster and more efficient.

Maybe, depending on what it is. As Scalzi pointed out in his column, the problem is that the label is used as a blanket marketing term for many different technologies, some more useful than others, and most of them not actual AI in any meaningful sense. That makes it harder to differentiate the good from the bad.



What you are describing is “a bubble bursting”. The vast majority of people seriously talking about this subject seem to agree that we’ll likely see the bubble bursting, but that of course doesn’t mean AI as a technology will go away. Just the clients’ willingness to spend money on it; that will likely go away. And with it businesses that made themselves over-reliant on said clients spending money on AI products and services.

That's what needs to happen. The hype and the artificial push to make it ubiquitous need to go away, as they don't benefit anyone but the people trying to sell the technology. Let evolution happen -- weed out the useless and maladaptive applications, so that only the worthwhile, genuinely necessary applications survive. And pass laws to protect people against plagiarism and skyrocketing electric bills and to mitigate the environmental impact.

EDIT: In particular, people need to get over the delusion that LLMs are useful as search engines or information sources, because they're ridiculously easy to manipulate into making false claims: https://www.bbc.com/future/article/...pt-and-googles-ai-and-it-only-took-20-minutes
 
It's built on work stolen from my fellow authors, and quite probably from me. (I know for a fact that Anthropic stole from me, and I've already applied for my share of the settlement in the class action suit.) I find its very existence anathema. Not to mention that our planet is already teetering on the brink of an irreversible climate catastrophe, and the added heat load of runaway "AI" use is pushing it over the edge even faster. I'm sure as hell not going to add to that out of mere curiosity.





Yup. Lawsuits are starting. A.I. takes a lot and can be derivative of other people's work because of the learning it gets and the platform it gets all its information from. That's why the stories it can generate can seem so familiar.
 
A.I. takes a lot and can be derivative of other people's work because of the learning it gets and the platform it gets all its information from. That's why the stories it can generate can seem so familiar.

Which is why LLM-generated books are probably only ever going to show up through self-publishing. Editors and slush-pile readers have always had to wade through hundreds of generic, unoriginal, routine stories to find the handful that stood out from the pack by offering something fresh and distinctive. LLM slop just means there's a whole lot more of it to wade through.
 
Which is why LLM-generated books are probably only ever going to show up through self-publishing. Editors and slush-pile readers have always had to wade through hundreds of generic, unoriginal, routine stories to find the handful that stood out from the pack by offering something fresh and distinctive. LLM slop just means there's a whole lot more of it to wade through.
It would be a giant waste to try and publish an LLM-generated book, since nobody can own the copyright to it.
 
So the other day I was designing a 1930s-style airline luggage label. I started with a 3D model of the aircraft, to get an image at just the right angle. But what I needed was a simple two-color image. I could have drawn over it manually in Illustrator, but it would have taken a long time and I knew I would inevitably end up making it too detailed. So I gave Gemini the image of the model and an example of an ad from the '30s with the simplified style I was going for. It gave me my airplane image in the two-tone stylization I wanted, and I was able to image-trace it in Illustrator to make it a vector and continue the project manually from there.
 
I will never get over science fiction authors and fans flat-out refusing revolutionary technologies just because they might change the job market, arrived with some copyright violations during their creation & aren't quite perfect from the get-go.

Like - we wouldn't be in space at all if it weren't for Wernher von Braun & his slave labour in Nazi rocket production camps, then for NASA against the Soviets, computers developed to get an edge over the enemy, and now the Xittler guy being in charge of space for America. And rocket launches aren't exactly environmentally friendly themselves either.

"Fire dangerous. Inventing fire was mistake"
 
I will never get over science fiction authors and fans flat-out refusing revolutionary technologies just because they might change the job market, arrived with some copyright violations during their creation & aren't quite perfect from the get-go.
People are set in their ways and are afraid of technological revolutions.
Look at world govts trying to restrict social media, YouTube creators getting copyright violation notices on their own original work, TikTok getting banned in multiple countries.
Seedance 2.0 shook Hollywood to its core, but this is what the future is going towards, and I welcome it.
 
People are set in their ways and are afraid of technological revolutions.
Look at world govts trying to restrict social media, YouTube creators getting copyright violation notices on their own original work, TikTok getting banned in multiple countries.
Seedance 2.0 shook Hollywood to its core, but this is what the future is going towards, and I welcome it.
TikTok is your big idea? TikTok has destroyed children's attention spans for anything!
 
I will never get over science fiction authors and fans flat-out refusing revolutionary technologies just because they might change the job market …
:confused: I’m sorry, but I’m honestly kind of baffled at the lack of empathy demonstrated with statements like these. Should I just welcome the very real prospect of the business that employs me right now going bankrupt and finding myself unemployed in a job market that doesn’t even really need my skillset anymore? I’m dumbfounded what could possibly be so hard to grasp about this. Being accepting of the progression of technology is one thing, but just defeatedly walking straight into destitution and giving myself a pat on the back for how future forward I am as a science-fiction fan? One can be perfectly curious about AI and willing to work with it and still be skeptical and wary about the real world implications that technology brings with it. I’m not really interested in this becoming a protracted argument, but please — from one human being to another — think about what you said there just for a minute. Thank you.
 
Huh? You claimed “people questioning AI are invariably those who don’t use it”
That’s not what I claimed at all.
What I claimed is that the people who claim stuff such as “the AI only makes errors” and “you only waste time, you don’t save it” are those who don’t use it.
There are some very legit questions about current AI and the concept of AI in general, and I’m the first to ask them.
Okay, so you agree that it’s happening. Why did you then say earlier that you “find it incredibly amusing when people say it’s “a bubble” that will burst soon”, how people said that about earlier technologies as well and were “invariably dead wrong”? So are people wrong in assuming the bubble is or will burst or is it actually already happening? Those two seem to be mutually exclusive.

Again, that’s not what I said.
What I said is that it’s stupid to think that after the “burst” AI will be less ubiquitous. In fact it will be everywhere, much more than today, but you’ll have a given set of companies which can profit from it instead of many that hope to come out on top.
:confused: I’m sorry, but I’m honestly kind of baffled at the lack of empathy demonstrated with statements like these. Should I just welcome the very real prospect of the business that employs me right now going bankrupt and finding myself unemployed in a job market that doesn’t even really need my skillset anymore? I’m dumbfounded what could possibly be so hard to grasp about this. Being accepting of the progression of technology is one thing, but just defeatedly walking straight into destitution and giving myself a pat on the back for how future forward I am as a science-fiction fan? One can be perfectly curious about AI and willing to work with it and still be skeptical and wary about the real world implications that technology brings with it. I’m not really interested in this becoming a protracted argument, but please — from one human being to another — think about what you said there just for a minute. Thank you.
I empathise with the fear, but you can’t live your life in fear.

A few years ago my piano teacher at the conservatory recommended that I switch to full-time composing. I didn’t, because I knew this was coming. I’m focusing more and more on live performance these days, as it’s something that AI can’t do yet and probably won’t for a while.
 
Being accepting of the progression of technology is one thing, but just defeatedly walking straight into destitution and giving myself a pat on the back for how future forward I am as a science-fiction fan? One can be perfectly curious about AI and willing to work with it and still be skeptical and wary about the real world implications that technology brings with it.

It's worth keeping in mind that the original Luddites weren't opposed to technological progress per se, as is generally assumed, but specifically to management using new technology as an excuse to fire their workers. What they wanted was a world where technological progress didn't come at the expense of people's livelihoods.


As for the science fiction perspective: As an SF fan, I've always been drawn to sentient computers and robots as characters, and as an SF writer, I've featured multiple truly sapient digital intelligences as protagonists. But I also understand that it's misguided to believe that the technologies deceptively hyped as "AI" are actual artificial intelligence of the sort that leads to Data or Kryten or the Vision. I gather that there are Silicon Valley tech executives who believe it is and are hyping it as such, but they're ignoring the scientific consensus that things like large language models are a dead end for artificial intelligence, because language alone is not the basis of intelligence.

This is another case where the marketing practice of using "AI" as a blanket label for multiple different technologies is dangerously deceptive. Even if there are some software innovations that could potentially become the seeds of true digital consciousness someday, that doesn't mean every tool being marketed as "AI" is the same kind of thing, or that the "AI" tools we have today are actually thinking or aware in any sense. I firmly believe that true digital consciousness will exist someday, but we're nowhere near there yet, and mistaking ChatGPT or Grok for a conscious entity is as delusional as asking a department store mannequin for directions.
 
I work in legal for a large healthcare organization. A.I. has certainly shaken my company to its core, and there have already been layoffs due to it in some areas of the business. (Notably, there have also been considerable layoffs in the past two years since there were changes to Medicare reimbursement, but A.I. has also started to take a chunk.) We keep getting told A.I. will be a core tenet of our work in 2026, but no one has exactly explained how, since they don’t want us putting our documents in there for review or drafting anything yet. At the moment, all it’s doing for me is proofing my emails before sending.
 
I work in legal for a large healthcare organization. A.I. has certainly shaken my company to its core, and there have already been layoffs due to it in some areas of the business. (Notably, there have also been considerable layoffs in the past two years since there were changes to Medicare reimbursement, but A.I. has also started to take a chunk.) We keep getting told A.I. will be a core tenet of our work in 2026, but no one has exactly explained how, since they don’t want us putting our documents in there for review or drafting anything yet. At the moment, all it’s doing for me is proofing my emails before sending.

Everyone is rushing to center everything around "AI" on the assumption that it works, but it's grossly premature and reckless. This kind of thing should be done more gradually and carefully, testing the waters to find out what it's actually good for and what it's terrible at, rather than just throwing all our eggs into the same basket before we even know if it can hold the weight. That reckless haste is bound to produce enormous waste, and I don't understand what's driving this frenzy, unless it's just pressure from the execs whose profits depend on the adoption of their tech whether it works or not.
 
Everyone is rushing to center everything around "AI" on the assumption that it works, but it's grossly premature and reckless. This kind of thing should be done more gradually and carefully, testing the waters to find out what it's actually good for and what it's terrible at...

People never do that.

People gonna people.
 
That’s not what I claimed at all.
I literally quoted what you said. How can it not be what you claimed?

What I claimed is that the people who claim stuff such as “the AI only makes errors” and “you only waste time, you don’t save it” are those who don’t use it.
And I say that’s not the case and cite myself as an example of someone who uses AI tools but thinks they make too many mistakes and often don’t save time. I’m not sure how many more ways to say this there are.

What I said is that it’s stupid to think that after the “burst” AI will be less ubiquitous.
Well, in that case I think you are arguing something that no-one here seems to have claimed. I think there will be a burst of the AI bubble, but that absolutely won’t mean the technology is going away or will be used less.

I empathise with the fear, but you can’t live your life in fear.
Well, thanks for empathising, genuinely. Although I have to point out that you framing what I said as “fear” kind of paints it as this irrational, abstract, knee-jerk emotion, when what I really talk about is being faced with the very real possibility of me being out of a job in a couple of years. I don’t live in constant “fear” of AI and what it means for my livelihood, I just wish (and hope, really) that my management, politicians or society at large will find a way to use these technologies that will not destroy what I’ve worked a lifetime to build for myself.
 