• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Yikes! Did season 1 episode 6 use AI-generated art?

The vast majority of people seriously talking about this subject seem to agree that we’ll likely see the bubble burst, but that of course doesn’t mean AI as a technology will go away. What will likely go away is clients’ willingness to spend money on it, and with it, the businesses that made themselves over-reliant on that client spending on AI products and services.

I dunno. I'm still waiting for the recession economists promised us three or four years ago. Market predictions have a piss-poor track record.

There will be some pullback. We've already been seeing it in tech sector investment for several months now. It seems to be less about disillusionment with AI than about uncertainty and fear over which industries are going to be gobbled up first by AI innovation, with software companies as favorite candidates to fall due to disintermediation. The other big concern is a shrinking job market and the impact that will have on the consumer economy.

People talk about an "AI bubble" as analogous to the 2001 dot-com bubble. That is not likely to recur here. And the most important takeaway from that market reset is that, a quarter of a century later, the technologies at the center of it now dominate modern culture.

Might as well talk about the "steam-engine bubble."
 
Well, in that case I think you are arguing something that no-one here seems to have claimed. I think there will be a burst of the AI bubble, but that absolutely won’t mean the technology is going away or will be used less.

It's not going away, but I certainly think it will be used less, or should be in a sane world. LLMs should never be promoted as search engines, information sources, or problem-solving tools, because they're no better at that than a Ouija board, just passively pointing to whatever answer the collective pressure of the masses pushes them toward. They're being used for lots of things at which they're proving to be disastrous, even dangerous failures, and people are going to be hurt and killed as a result, and there will be lawsuits and there will be consequences. Hopefully, in time, the illegitimate and harmful uses will be abandoned, leaving only those more limited applications that are actually sane and useful.
 

It's not going away, but I certainly think it will be used less, or should be in a sane world. LLMs should never be promoted as search engines, information sources, or problem-solving tools, because they're no better at that than a Ouija board, just passively pointing to whatever answer the collective pressure of the masses pushes them toward. They're being used for lots of things at which they're proving to be disastrous, even dangerous failures, and people are going to be hurt and killed as a result, and there will be lawsuits and there will be consequences. Hopefully, in time, the illegitimate and harmful uses will be abandoned, leaving only those more limited applications that are actually sane and useful.

We are not in a sane world. But even if we were, I occasionally use AI LLMs in combination with my own logic and experience. Just like with many tools in life, you need to use your own judgment and understanding to get the most out of them. The better the instructions and parameters you give, the better (usually) the answers you get.

When calculators first came out, some teachers worried that students would become so dependent on them that an equation punched up as 23 + 125 = 4567 would just be accepted as fact, because some students wouldn't have any baseline knowledge of math. Calculators are still being used.

Unless you're going to have ChatGPT teach students in the education system, unless we stop school courses where you have actual in-person learning and live discourse happening between students and professors, unless parents stop raising and teaching their kids, passing on life experiences and baseline knowledge, I don't think we are even close to AI being a super harmful influence on society.

It may change or evolve but it's not going anywhere.

Personally, it has helped me find some solutions. Nothing wrong with that.

I would argue it's definitely more useful than a Ouija board, at the very least.
 
Ok, given you keep misquoting me I think this discussion is over. Bye.
Assuming this addresses me: I truly don’t know what you mean by me misquoting you. If I understood wrong or framed anything you said other than you intended, I’m sorry. That was genuinely not my intention. From my perspective I literally quoted statements you made and challenged them. You made a very broad and IMHO overgeneralized claim (“people questioning [AI] are invariably those who don’t use it”), which I challenged, and then you seemed to reframe your claim a bit narrower (“the people who claim stuff such as ‘AI only makes error’ and ‘you only waste time’ are those who don’t use it”), which I would still consider overly general and simply not based in facts. You seem to have perceived my challenging your statements as “misquoting” or mischaracterizing them, either as a defense or because you genuinely feel wronged. But I feel confident that if you go back you will find no instance of me misrepresenting anything you said on purpose.
 
We are not in a sane world. But even if we were, I use AI LLMs in combination with my own logic and experience. Just like with many tools in life, you need to use your own judgment and understanding to get the most out of them.

Which is exactly why they're so dangerous -- because they have no safeguards against all the ways that people without judgment and understanding will inevitably misuse them and be hurt by them or hurt others. The history of product safety is a history of people misusing a product in stupid and dangerous ways until its creators figure out ways to idiot-proof it, or at least label it with stern warnings about how not to use it.


When calculators first came out, some teachers worried that students would become so dependent on them that an equation punched up as 23 + 125 = 4567 would just be accepted as fact, because some students wouldn't have any baseline knowledge of math.

I was once scandalized in a classroom study group when another group member unthinkingly used their calculator to divide by one. And they didn't even understand why I was bothered by it.

I admit, though, while I usually try to figure out simple arithmetic in my head to keep my mind sharp, I've increasingly gotten lazy and let the calculator do it more often than I used to.


Unless you're going to have ChatGPT teach students in the education system, unless parents stop raising and teaching their kids and passing on life experiences and baseline knowledge, I don't think we are even close to AI being a super harmful influence on society.

I think you're being overoptimistic about the state of baseline knowledge in the US (if that's the country you're thinking of). I mean, we're having measles outbreaks because a large percentage of our population is too ignorant to understand how vital vaccines are. We're literally suffering from an epidemic of willful ignorance. And there are groups in power that actively encourage ignorance and false beliefs because it helps them stay in power. These are the same groups that are aggressively pushing "AI."
 
Which is exactly why they're so dangerous -- because they have no safeguards against all the ways that people without judgment and understanding will inevitably misuse them and be hurt by them or hurt others. The history of product safety is a history of people misusing a product in stupid and dangerous ways until its creators figure out ways to idiot-proof it, or at least label it with stern warnings about how not to use it.




I was once scandalized in a classroom study group when another group member unthinkingly used their calculator to divide by one. And they didn't even understand why I was bothered by it.

I admit, though, while I usually try to figure out simple arithmetic in my head to keep my mind sharp, I've increasingly gotten lazy and let the calculator do it more often than I used to.




I think you're being overoptimistic about the state of baseline knowledge in the US (if that's the country you're thinking of). I mean, we're having measles outbreaks because a large percentage of our population is too ignorant to understand how vital vaccines are. We're literally suffering from an epidemic of willful ignorance. And there are groups in power that actively encourage ignorance and false beliefs because it helps them stay in power. These are the same groups that are aggressively pushing "AI."

But if AI is just averaging online knowledge, then how different is that vs. searching the internet, not using your judgment, and coming up with bad answers? Can you regulate that? Of course not.

There are many tools out there in life that are unregulated and can be misused, lead to bad answers, or be dangerous.

The internet itself introduced a new way of finding bad (or good) answers. Everyone was worried the internet would be a bad influence. Are we using it more or less today? Yeah, AI is different, as it is being billed as artificial intelligence tools. And it's not always intelligent or accurate.

Now, I do have time for the argument that we as a society need to get better at using our own judgment. Whether that's through better education or other pursuits, I have time for that.

I also have time for the need to better warn/advise consumers about the limitations of AI tools.

But I disagree that they should be removed completely or that we should stop trying to develop them.

I can say with a reasonable amount of confidence that better education would benefit us magnitudes more than just removing AI tools while doing little to (proactively) improve education.

I'm in Canada.
 
But if AI is just averaging online knowledge, then how different is that vs. searching the internet, not using your judgment, and coming up with bad answers? Can you regulate that? Of course not.
You are not helping yourself by implying technological regulations in schooling are implausible and unnecessary.
 
You are not helping yourself by implying technological regulations in schooling are implausible and unnecessary.

Most of the weight of that comment was in reference to "anyone" using the internet. I.e., I, right now, can do an online search and find bad answers. And I, right now, can use an AI tool and find bad answers. Neither of those methods is regulated.
 
Can you read what I was answering to instead of taking words out of context, perhaps?
So let me get that straight: First you repeatedly claim — falsely, I might add — that I’ve somehow “misquoted” you. But now you seem to realize how ridiculous that claim is and pivot to claiming I was somehow “taking your words out of context”, when I have done no such thing. Every time I responded to your claim I was very much responding to it in context. Boy, I genuinely don’t get what your problem is. I don’t even care that we have a difference of opinion on this topic, truly. But these constant accusations of “misquoting” you are just straight-up bullshit.
 
But if AI is just averaging online knowledge, then how different is that vs. searching the internet, not using your judgment, and coming up with bad answers? Can you regulate that? Of course not.

The difference is that the sites a traditional search engine finds were written by actual human beings who knew what they were saying, and at least some of those human beings were writing accurate information. You can seek out sources likely to be reliable, and can compare information from different sources to see if it's corroborated. But LLMs simply manufacture simulations of what an answer to a question might sound like, with no ability to determine if its content is actually meaningful, since they're simply simulators of the structure of written language. They literally do not know what they're saying. They are not intelligences, just brute-force number crunchers, which is why they demand such insane amounts of electricity to do even the most trivial tasks.

In other words, yes, you have to use your own judgment to sort out the valid, informed sources from the trash. But LLMs reside exclusively in the latter category. None of their output can be trusted. With a proper search engine, you can at least find the legitimate sources that are out there among the chaff. But using an LLM as a search engine makes no sense. It's putting the valid and invalid stuff in a blender and mixing it together so there's no way to tell one from the other.
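To make the "simulators of the structure of written language" point concrete, here's a toy sketch. This is purely my own illustration, not how any production LLM is actually built: a word-level bigram model that learns only which word tends to follow which, then samples text that looks locally plausible while meaning nothing to the model.

```python
import random
from collections import defaultdict

# Tiny training corpus; a real model would train on billions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which word follows which -- this is the model's entire "knowledge".
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a locally plausible word sequence; no meaning is involved."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        word = rng.choice(transitions[word])  # pick a statistically likely next word
        out.append(word)
    return " ".join(out)

print(generate("the", 8))  # grammatical-looking fragments, true or not
```

Production LLMs use neural networks over far longer contexts, but the principle is the same: the model emits statistically likely continuations, with no internal check on whether the result is true.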


I can say with a reasonable amount of confidence that better education would benefit us magnitudes more than just removing AI tools while doing little to (proactively) improve education.

And I never said anything about "just removing" them. I said that eventually we're going to have to move beyond the hype and the industry pressure to force universal adoption, to stop imposing "AI" as a solution to nonexistent problems, and apply a reasonable set of standards for determining what these tools are actually good for and what they are not good for.
 
The difference is that the sites a traditional search engine finds were written by actual human beings who knew what they were saying, and at least some of those human beings were writing accurate information. You can seek out sources likely to be reliable, and can compare information from different sources to see if it's corroborated. But LLMs simply manufacture simulations of what an answer to a question might sound like, with no ability to determine if its content is actually meaningful, since they're simply simulators of the structure of written language. They literally do not know what they're saying. They are not intelligences, just brute-force number crunchers, which is why they demand such insane amounts of electricity to do even the most trivial tasks.

In other words, yes, you have to use your own judgment to sort out the valid, informed sources from the trash. But LLMs reside exclusively in the latter category. None of their output can be trusted. With a proper search engine, you can at least find the legitimate sources that are out there among the chaff. But using an LLM as a search engine makes no sense. It's putting the valid and invalid stuff in a blender and mixing it together so there's no way to tell one from the other.




And I never said anything about "just removing" them. I said that eventually we're going to have to move beyond the hype and the industry pressure to force universal adoption, to stop imposing "AI" as a solution to nonexistent problems, and apply a reasonable set of standards for determining what these tools are actually good for and what they are not good for.

Regarding your first comment, you kinda made my point.

As I said earlier, the better the instructions you give it (i.e., via using your own judgment), the better the answers it gives.

If I'm asking for information that I need or want to be accurate, I ask it to give me citations.

I also often use other tools in addition to AI.

When my mother recently died, I tried to translate my eulogy into proper Italian via Google Translate, but it failed to translate some of the contextual lingo. So I used an AI tool that has some baseline contextual knowledge, and it helped to accurately translate the eulogy. There's one example where it helped me in a moment of grieving. I know a fair bit of Italian (spoken, not written), but I also asked an Italian relative to review it first, and he said it was good. Could I have asked him to do the translation? Sure. But I was in a period of grieving, and AI helped me when time was hard to find.

I also asked it to provide me with a checklist of what I should do/check to put my father's affairs in order.

Our funeral director did squat. I literally had more direction from AI tools than from our own funeral director.

All I'm saying is that it can be a valuable tool.
 
The difference is that the sites a traditional search engine finds were written by actual human beings who knew what they were saying, and at least some of those human beings were writing accurate information. You can seek out sources likely to be reliable, and can compare information from different sources to see if it's corroborated. But LLMs simply manufacture simulations of what an answer to a question might sound like, with no ability to determine if its content is actually meaningful, since they're simply simulators of the structure of written language. They literally do not know what they're saying. They are not intelligences, just brute-force number crunchers, which is why they demand such insane amounts of electricity to do even the most trivial tasks.


Meh, the usual “they don’t understand what they are saying.” They do offer reasoning more on point than many humans, though, including, egregiously, some in this very topic. What is “understanding,” exactly? If they simulate it perfectly, what difference does it make whether it’s “real” or not?

The concern about current AIs being very resource-intensive is quite valid; what’s very interesting is that AI is already being applied to finding solutions to that very problem.

In other words, yes, you have to use your own judgment to sort out the valid, informed sources from the trash. But LLMs reside exclusively in the latter category. None of their output can be trusted. With a proper search engine, you can at least find the legitimate sources that are out there among the chaff. But using an LLM as a search engine makes no sense. It's putting the valid and invalid stuff in a blender and mixing it together so there's no way to tell one from the other.
Have you tried using a current AI for research? They cite the sources each statement comes from, so you can double-check them.

Last month I had to fix an issue with a very uncommon piece of hardware; I couldn’t even understand how it worked at first (it has a very clever and very unusual use of magnets instead of traditional switches). Until a few months ago I would have spent hours on Google trying to understand it; ChatGPT found the solution, buried deep in an old, obscure forum page, within seconds.
 