
Yikes! Did season 1 episode 6 use AI-generated art?

@Christopher

Perhaps I didn’t express myself as succinctly as I could have done, and being a visual artist rather than a writer, I fear I’m not going to do any better now ;) With some of my comments I was musing that it was possible that the work was a hybrid of AI and human. I wasn’t intending to propose that it was definitively one thing or the other, but working as a creative I could imagine the type of pipeline involved between the two if it had been a hybrid.

When I pointed out the backgrounds I meant that they looked like a human had drawn over them, and I was supporting the idea that a human had meaningfully been involved with the making of this comic. As for the hands: yeah, they’re a pain to draw and there are indeed plenty of bad examples, of which I’m sure I’ve contributed many! Perhaps when the person involved drew a hand with 5 fingers and a thumb (possibly 2 thumbs), that really was an error, albeit quite a big one.

I have no intention of dying on this hill; if they’ve said a human created all of it, great. We’ll have to take them at their word. Am I still sceptical? A bit (as I’m sure you can tell), but what can you do. Unless something else comes to light, which I don’t think it will, this is case closed.

As for questioning my own assumptions and opinions as you suggest, well obviously I’m going to be biased :) The idea that I could lose my job to this is an emotional one. Seeing younger artists already lose their jobs to this is emotional. The idea that they won’t get the chances I had because some tech bros decided human creativity should be turned into the equivalent of using a one-armed bandit is something I can only feel great fury for, and I mourn for their lost potential. With that in mind I think I’ve been quite measured in my responses. I get why users find it appealing, and why they wouldn’t necessarily understand what the price of all this was; how could they? There are countless disciplines I know absolutely nothing about.

But hey ho, maybe this will be like when people proclaimed Kindle would be the death of physical books, I doubt it, but maybe...and with that, I’m clumsily tapping out on this thread :)
Yours have been the most cogent and pertinent remarks here. Thank you.
 
Right now seedance2 can make a few seconds of movie-quality film.

That depends on how you define "quality." Does it look convincingly like a movie at first glance? No doubt. Does it have a good story, clever dialogue, good performances, imaginative direction and cinematography, rather than just a simulation of an average example of the category of thing being asked for? Does it express someone's lived experience and pain and hopes and connect with those things in its audience, as opposed to just being a superficial exercise in imagery and sound? Does it have thematic meaning and raise challenging questions? All that will still have to come from humans, and it will take a hell of a lot more than just tweaking an LLM's output.

I could see a future where actually creative, talented people who do the writing, directing, and voice acting themselves could use the software as an animation tool, just a way of streamlining the technical end of the process while the creative end remained in human hands -- provided that it were a system that didn't use plagiarized material and didn't bring us closer to an ecological catastrophe. But too many untalented people are bound to use it to generate quantity over quality, like what's already happening with the glut of "AI"-slop books flooding the e-book market -- books that, as John Scalzi pointed out in his essay, are unlikely ever to be read by anyone, since it takes marketing to bring attention to a book, and that requires actually putting work into it.

For myself, I don't see any value in watching or reading something that isn't the product of an actual person exercising their skill and effort. I love good performances or good writing or good music or well-directed scenes because I'm impressed by the talent and work that went into creating them. Using "AI" for creative goals rather than merely technical or logistical ones seems like missing the entire point of creativity. It's like thinking marathon runners should just take a bus to the finish line. It would save labor, but the labor is not an impediment to the goal, it is the goal.

True, there's always been an abundance of low-grade entertainment fodder out there; per Sturgeon's Law, 90% of everything is garbage. But making it easy to churn out simulations of "books" and "movies" with the push of a button will probably only increase that percentage, without adding to the quantity or quality of the actual good stuff.


When I pointed out the backgrounds I meant that they looked like a human had drawn over them, and I was supporting the idea that a human had meaningfully been involved with the making of this comic. As for the hands: yeah, they’re a pain to draw and there are indeed plenty of bad examples, of which I’m sure I’ve contributed many! Perhaps when the person involved drew a hand with 5 fingers and a thumb (possibly 2 thumbs), that really was an error, albeit quite a big one.

It would hardly be the first time a comic-book or animation artist drew the wrong number of fingers on a hand.


As for questioning my own assumptions and opinions as you suggest, well obviously I’m going to be biased :) The idea that I could lose my job to this is an emotional one.

When we feel strongly about something, that's when it's most important to question ourselves. Which is why I try to temper my own animosity to "AI" and concede that some form of it may have some valid use in some contexts if the enormous ethical problems can be solved. And it's why I was able to appreciate John Scalzi's essay and admit that he knows much more about the subject than I do, so I trust his balanced take on it.
 
Maybe it could be used for supplementary tasks of some sort, but I wouldn't trust the tech to fact-check anything, given how readily it hallucinates nonsense. LLMs have no concept of fact or accuracy; they're just models of the structure of language.

Also, what "facts" are there that could possibly have been checked? Most of the names beyond the handful mentioned in DS9 would have been made up for the episode, so there are no outside facts to compare them to. It looks like the family tree omits Ben's siblings and his and Kasidy's child, but they were never named onscreen, so an automated process wouldn't have caught those omissions; only human judgment and imagination could have filled in those gaps.

No, I mean the human should have done a better job with the family tree (there were major errors; you must not have read that thread; the most egregious was labeling his stepmom, rather than Sarah, as the Prophet). And so the time needed to do it properly was spent elsewhere. Obviously this is just a theoretical comparison, since the two pieces were in different episodes and probably by different staffers.

I don't agree AI should've been used for this, because it's exactly the kind of work that graphic artists are supposed to do, and replacing them with machines means they aren't getting enough work to live on or enough experience to rise through the ranks. The problem with arguing that it's okay to replace little jobs is that people have to learn from the little jobs in order to rise to the bigger ones.

The graphic artist would be the one using the AI to do the first pass, which they would then clean up. In the Standard Definition days those images would have been blurry blobs on screen anyway, so a graphic artist wouldn't have drawn them in full detail back then either. They would have thrown something much simpler together to get the idea across. It's only HD resolution that requires fully realised art, and the time required is part of the reason new TV is so expensive and time-consuming.

Want more episodes? Make it cheaper.
 
As a senior art director working in the corporate world of an advertising agency, I can say that the pressure to work with AI tools has continued to increase, especially in the last year or so. Management hopes that this will significantly increase efficiency and save time, but to be honest, very few people in my creative team would say that this is really the case. To varying degrees we have all found ways to integrate AI tools into our everyday work lives. Some have very quickly recognized the opportunities this opens up for them. Others — and I would count myself among them — remain skeptical in view of the numerous ethical concerns that the technology raises. But there's really no getting around it anymore; everyone has to deal with it in some way, myself included. In the context of working in an advertising agency, it unfortunately feels like tilting at windmills.

Management has declared that we are now an “AI first” agency, but when asked critically whether there is also a “Plan B” in view of the likely impending bursting of the AI bubble, the response is unfortunately just perplexity. From now on, new hires are expected to have an affinity for AI, and one wonders whether the next generation of graphic designers can still be expected to even learn creative work if there is such a heavy reliance on AI.

As for the idea of saving time with AI: Unfortunately, I often see it the same way as @Argosy, who compares working with AI to playing a one-armed bandit: Sometimes you get lucky and it spits out exactly what you need. But it's much more likely that you have to try again and again until you finally get three lemons. We haven't yet reached a point where we can really rely on AI to the extent that we can reliably plan a project with it. The worst-case scenario, where you realize after six hours of prompting that you still don't have a usable result, still happens far too often.

However, it must also be said that clients’ expectations or demands of using AI in creative work are also growing. It is often taken for granted that elements that would have required specific talent just a few years ago — such as voice actors for a video, videographers, or 3D artists — can now be generated with AI. And for me personally, a little piece of my creative self-image always dies a little when I am forced to take this shortcut. It still just doesn’t feel right to cut out creative humans like this and I hate how normalized it has become.

I’d be lying if I said all of this doesn’t deeply trouble me. It’s also annoying to what extent this whole topic has overtaken everyday discourse between creatives. My wife is an illustrator and we basically talk about it every day in some form or another. It’s really tiresome. This is the first time in my career where I wonder how long I can keep on working in this field and if it makes sense to think about alternative ways to earn money.

Looking beyond creative work, there are so many other aspects of this that have me worried, too, like how it’s completely destroying trust in recorded images and making the creation of fake news trivially easy. Not to even mention the whole environmental aspect.

As for the faux comic book in “Come, Let’s Away” — as others have said, there’s no way for us to really know at this point, but if the producers of the show say AI wasn’t used in creating the artwork, I’m inclined to trust them. Not because I’d blindly trust everything they tell us (and I have seen cases of corporate entities claiming that obviously AI-generated art they used wasn’t generated when faced with a backlash), but because I really don’t recognize enough obvious AI tells in the art. It’s not very good in my opinion, but it does indeed look to me like it was just created by an art department staffer who’s not a professional comic book illustrator. Maybe there are individual elements in this that are generated, but I suspect the vast majority of it is crafted by a human.
 
It's the undiscovered country that we are in now, and we're just starting to drive into it. Being a Star Trek fan all my life (since 4), I've dreamed of living in a society that had computers we could talk to.

Agreed. It really is surprising how fast it's catching on and how fast even huge companies like Google, Amazon, Meta, etc., as well as others, are dumping employees as fast as possible.

I've seen AI-written material, art, etc. It seems artificial. Derivative. Could I be fooled? Maybe, especially with the art, but I still prefer humans making art, writing, music, movies. Having a computer create these things defeats the purpose. But yeah, there will be a glut of garbage from people who have never written a book or made a movie in their lives and will be lauded for using AI to make something. They'll use AI for the ideas and structure and take the credit. That I certainly do not like.
 
No, I mean the human should have done a better job with the family tree (there were major errors, you must have not read that thread, most egregious labeling his stepmom as the Prophet, not Sarah).

That's my point. LLMs have no actual knowledge or understanding; they're just language simulators. They're incapable of checking the factual accuracy of their output, and constantly hallucinate false results or mix and match unrelated things. So the kinds of mistakes you're talking about are exactly the kinds of mistakes that "AI" would create, not correct.


The graphic artist would be the one using the AI to do the first pass, which they would then clean up. In the Standard Definition days those images would have been blurry blobs on screen anyway, so a graphic artist wouldn't have drawn them in full detail back then either. They would have thrown something much simpler together to get the idea across. It's only HD resolution that requires fully realised art, and the time required is part of the reason new TV is so expensive and time-consuming.

All the more reason it would've been better to get an experienced comic book artist to do it, since it was a given that Star Trek viewers would scrutinize the life out of every last detail. And the best option would've been to work with IDW to do a cross-promotion and actually have the comic ready to be released as a tie-in after the episode came out.


As a senior art director working in the corporate world of an advertising agency, I can say that the pressure to work with AI tools has continued to increase, especially in the last year or so. Management hopes that this will significantly increase efficiency and save time, but to be honest, very few people in my creative team would say that this is really the case.

That's consistent with what I've heard, that it's more something being aggressively pushed on people from above in the hope that it will work, rather than something being organically adopted because it actually gives proven results.


To varying degrees we have all found ways to integrate AI tools into our everyday work lives. Some have very quickly recognized the opportunities this opens up for them. Others — and I would count myself among them — remain skeptical in view of the numerous ethical concerns that the technology raises. But there's really no getting around it anymore; everyone has to deal with it in some way, myself included. In the context of working in an advertising agency, it unfortunately feels like tilting at windmills.

As a freelance writer of fiction, I've never found a need for it myself, and indeed, most of the available markets for my work would refuse to take "AI"-generated material.



As for the idea of saving time with AI: Unfortunately, I often see it the same way as @Argosy, who compares working with AI to playing a one-armed bandit: Sometimes you get lucky and it spits out exactly what you need. But it's much more likely that you have to try again and again until you finally get three lemons. We haven't yet reached a point where we can really rely on AI to the extent that we can reliably plan a project with it. The worst-case scenario, where you realize after six hours of prompting that you still don't have a usable result, still happens far too often.

Wow. That sounds incredibly frustrating, just the sort of thing that would drive me crazy.


However, it must also be said that clients’ expectations or demands of using AI in creative work are also growing. It is often taken for granted that elements that would have required specific talent just a few years ago — such as voice actors for a video, videographers, or 3D artists — can now be generated with AI. And for me personally, a little piece of my creative self-image always dies a little when I am forced to take this shortcut. It still just doesn’t feel right to cut out creative humans like this and I hate how normalized it has become.

I'm surprised that there's so much desire for it, since the impression I've gotten is that there's widespread public contempt for "AI slop" and a lot of backlash. But I guess you're talking about businesspeople who prefer AI because it's cheaper than hiring talent.


As for the faux comic book in “Come, Let’s Away” — as others have said, there’s no way for us to really know at this point, but if the producers of the show say AI wasn’t used in creating the artwork, I’m inclined to trust them. Not because I’d blindly trust everything they tell us (and I have seen cases of corporate entities claiming that obviously AI-generated art they used wasn’t generated when faced with a backlash), but because I really don’t recognize enough obvious AI tells in the art. It’s not very good in my opinion, but it does indeed look to me like it was just created by an art department staffer who’s not a professional comic book illustrator. Maybe there are individual elements in this that are generated, but I suspect the vast majority of it is crafted by a human.

From what you said earlier, it sounds like just drawing it from scratch would be likely to take less time than pulling the slot machine lever over and over until it gives you what you want. I'm no expert, but it seems to me that a series of comic pages have too many elements that would have to be gotten right -- reasonably consistent character faces and uniforms, continuity from panel to panel, coherent text and balloon placement, etc. -- and the work I imagine it would take to correct the "AI" over and over and over again to create that consistency would probably be very time-consuming.

Hey -- maybe what we should do is require the execs/clients/whoever who want "AI" to be used to pay the entire cost of the electricity required to use it. Maybe that would deter them from insisting on its use.
 
That's my point. LLMs have no actual knowledge or understanding; they're just language simulators. They're incapable of checking the factual accuracy of their output, and constantly hallucinate false results or mix and match unrelated things. So the kinds of mistakes you're talking about are exactly the kinds of mistakes that "AI" would create, not correct.

Let me try again. No AI should be used on the family tree. A human needed adequate time to put it together, and they didn't get it. By using AI elsewhere in the art department, enough man-hours could have been saved so that the family tree could have been done correctly by a human.
 
The graphic artist would be the one using the AI to do the first pass, which they would then clean up. In the Standard Definition days those images would have been blurry blobs on screen anyway, so a graphic artist wouldn't have drawn them in full detail back then either. They would have thrown something much simpler together to get the idea across. It's only HD resolution that requires fully realised art, and the time required is part of the reason new TV is so expensive and time-consuming.

Want more episodes? Make it cheaper.
This is probably exactly what they did.

This is also what I think Trekmovie.com got wrong - and why I think it's insane they went all "CONFIRMED" when even the show's creators are completely silent on the matter (and they do have social media, you know): what was likely meant is "it wasn't entirely created by AI", and they turned that into "at no point was AI involved". Which, you know, we have eyes.

However - and that's the part I just don't get - yes, there might be ethical concerns or whatever about AI use in general. But that's the case for ALL new technology.

And IF AI were to be used - this is exactly the perfect opportunity! Have some small background prop produced at a level of detail that just wouldn't be cost-efficient for a full-time artist.

You know - I'd be angry if they did their title intro with AI. Or if major VFX shots ended up with AI mistakes.

But an important but small prop that's only visible in freeze-frames? Heck, go for it. That's exactly the perfect use case.
 
Let me try again. No AI should be used on the family tree. A human needed adequate time to put it together, and they didn't get it. By using AI elsewhere in the art department, enough man-hours could have been saved so that the family tree could have been done correctly by a human.

As Michael's post pointed out, the premise that "AI" actually saves people time is highly questionable.
 
I'm surprised that there's so much desire for it, since the impression I've gotten is that there's widespread public contempt for "AI slop" and a lot of backlash
Not really, just a very vocal minority. Most people don’t recognise or care if it’s AI.

I always find it incredibly amusing when people say it’s “a bubble” that will burst soon, as I’ve been reading the exact same thing about every single revolutionary technology introduced during the past 30+ years, and it’s invariably been dead wrong.

I completely expect some companies to fail to capitalise on AI and go bankrupt, the usual suspects to say “the bubble is bursting, we told you”, and then AI to just become a major part of life all the same, exactly as happened with computers, the internet, WiFi, desktop publishing, smartphones, social media, laptops, you name it.

BTW, the clever companies will avoid bankruptcy by using AI itself to find ways to make AI remunerable.
 
Actually, there have been scientific studies on the subject that show LLMs tend to create the illusion that a person is saving time rather than actually doing so.
Source: your cousin?

As said: I’ve been using AIs (not only LLMs) since 2020, I know exactly how much time I’m saving.
 
Source: your cousin?

As said: I’ve been using AIs (not only LLMs) since 2020, I know exactly how much time I’m saving.

My company seems to be switching from full automation to AI assistants. So supposedly each agent will have an AI agent to assist in our work. It's designed to up our output. They have been talking AI for about 2.5 years, but it seems to be taking them forever to decide on and implement a program.
 
I always find it incredibly amusing when people say it’s “a bubble” that will burst soon, as I’ve been reading the exact same thing about every single revolutionary technology introduced during the past 30+ years, and it’s invariably been dead wrong.

I completely expect some companies to fail to capitalise on AI and go bankrupt, the usual suspects to say “the bubble is bursting, we told you”, and then AI to just become a major part of life all the same, exactly as happened with computers, the internet, WiFi, desktop publishing, smartphones, social media, laptops, you name it.

BTW, the clever companies will avoid bankruptcy by using AI itself to find ways to make AI remunerable.
AI is a bubble insofar as it is currently overvalued, and there's no way for it to generate enough additional profit to justify the current investments.

It is very much like the dot-com bubble in this regard. It will burst. But just as the dot-com crash was NOT the end of the internet, this won't be the end of AI.

AI is here to stay. AI won't be the answer to everything, and certainly most ideas pitched today are crap, and it is still finding its use case & profitability.
But it will be part of our lives going forward from now on, never disappearing, and the people and companies who don't adopt it will either have their niches or simply be left behind, like people today who live and work without the internet (and don't get me wrong - there's a shocking number of people who live without a smartphone & internet!)
 
As a freelance writer of fiction, I've never found a need for it myself, and indeed, most of the available markets for my work would refuse to take "AI"-generated material.
Maybe I’m assuming wrong, but I’m honestly a little surprised that you don’t seem to have at least given ChatGPT a try. Not for generating material that you would submit in any kind of professional capacity, of course, but just in a casual way, out of curiosity, to see what the hype is all about. ChatGPT is free right now and can be used even without creating an account. As I said, I am highly skeptical of AI myself, but would still recommend giving at least one chat a go. At the end of the day it’s definitely a weirdly sycophantic algorithm that guesses which words often go together. But I think it helps having experienced it first-hand.

Wow. That sounds incredibly frustrating, just the sort of thing that would drive me crazy.
There is a bit of a learning curve with AI tools, and in my experience you do get a bit better at understanding which tool can or cannot do what for you. But still, with every tool there remains this uncertainty about whether you’re going to get what you need. And what’s even worse are the cases when the generated output is like 80% there, and in trying to get it closer to 100% the newer outputs look even worse. In my experience an AI-generated output is almost never directly ready for publishing; there’s always something you need to fix about it. Which might not be a big problem when it’s an output in the field of your expertise, but it can also be something you’re just not able to fix when it’s something you don’t really understand. As an example, a few months ago I used an LLM to write me Python code that I could then input into Blender to create a comparatively simple 3D scene. I got it really close to what I needed, but in trying to get the LLM to change minor elements of it, it kept introducing new mistakes with every iteration. Since I’m not as proficient in Blender, I couldn’t fix it myself and ultimately had to give up on that approach.
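
For anyone who hasn’t scripted Blender before, the kind of script I mean looks roughly like the sketch below. To be clear, this is a made-up, minimal example using Blender’s bpy API, not the actual code from that project; it just shows the general shape of what the LLM produced (clear the scene, add a few primitives, assign a material, set up a camera and a light):

import bpy

# Start from an empty scene by deleting whatever is already there
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Add a ground plane and a cube resting on it
bpy.ops.mesh.primitive_plane_add(size=10, location=(0.0, 0.0, 0.0))
bpy.ops.mesh.primitive_cube_add(size=2, location=(0.0, 0.0, 1.0))
cube = bpy.context.active_object

# Give the cube a simple red material
mat = bpy.data.materials.new(name="RedMaterial")
mat.diffuse_color = (0.8, 0.1, 0.1, 1.0)  # RGBA
cube.data.materials.append(mat)

# Add a camera and a sun lamp so the scene can actually be rendered
bpy.ops.object.camera_add(location=(6.0, -6.0, 4.0), rotation=(1.1, 0.0, 0.785))
bpy.context.scene.camera = bpy.context.active_object
bpy.ops.object.light_add(type='SUN', location=(4.0, -4.0, 6.0))

Even at this tiny scale you can imagine how a request to change one minor element gives the model plenty of places to quietly break something else.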

I'm surprised that there's so much desire for it, since the impression I've gotten is that there's widespread public contempt for "AI slop" and a lot of backlash. But I guess you're talking about businesspeople who prefer AI because it's cheaper than hiring talent.
I’m afraid the contempt for AI slop, or even what really constitutes AI slop, is not as prevalent and universally understood as one might think. In my experience the discussion about AI online can give a warped impression of how people IRL talk about and receive AI-generated content. And there’s also definitely the kind of client who believes using AI in a product or marketing campaign will have the effect of making their business appear more “future savvy” and forward-thinking. There’s absolutely a kind of hype to having used AI in a very visible way at least once if you’re a business, because then you can claim to your investors that you’re working with the newest tools, I guess. What’s perhaps an interesting counter-movement to that is how some corporations will now very visibly publish human-created campaigns, focussing on communicating the fact that they value hand-made, human art.

From what you said earlier, it sounds like just drawing it from scratch would be likely to take less time than pulling the slot machine lever over and over until it gives you what you want. I'm no expert, but it seems to me that a series of comic pages have too many elements that would have to be gotten right -- reasonably consistent character faces and uniforms, continuity from panel to panel, coherent text and balloon placement, etc. -- and the work I imagine it would take to correct the "AI" over and over and over again to create that consistency would probably be very time-consuming.
Well, I think one needs to be careful not to extrapolate their own individual experiences to other areas. What I wrote was very much written from a perhaps limited perspective of a graphic designer. I do think there are fields where AI probably does make you faster and more efficient. All I can say is that from my experiences (and those of my immediate colleagues) AI does on the whole not really appear to speed things up. But who knows, maybe we are using it wrong. I couldn’t tell. It’s all moving very fast, too. So who knows what I would say a year from now (assuming I still have a job at that point, of course).

Yet people questioning it are invariably those who don’t use it. :rolleyes:
A weird conclusion to make when only a few posts earlier I just said that I am incorporating AI tools in my work as a graphic designer (at the behest of the agency that employs me), and yet I’m very much questioning its use and ethics. I’m afraid it’s not quite as black and white as you like to think.

I completely expect some companies failing to capitalise on AI and go bankrupt, the usual suspects say “the bubble is bursting, we told you” and then AI just becoming a major part of life all the same, exactly as it happened with computers, the internet, WiFi, desktop publishing, smartphones, social media, laptops, you name it.
What you are describing is “a bubble bursting”. The vast majority of people seriously talking about this subject seem to agree that we’ll likely see the bubble bursting, but that of course doesn’t mean AI as a technology will go away. Just the clients’ willingness to spend money on it; that will likely go away. And with it businesses that made themselves over-reliant on said clients spending money on AI products and services.
 