• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

The Artificial Intelligence Thread

That's better.

Write a story depicting a moral argument between Picard and Kirk, one which ends in a stale mate.

Picard and Kirk sat across from each other in the conference room, their expressions tense and serious. They had been debating for hours, each trying to convince the other of the rightness of their position.

"You're being short-sighted, Kirk," Picard said, his voice calm but firm. "We can't simply go in guns blazing and expect to solve this problem. We have to consider the long-term consequences of our actions."

Kirk leaned forward, his eyes blazing. "That's exactly what I'm talking about, Picard. We have to act now, before it's too late. We can't afford to wait and hope that things will work out for the best."

Picard shook his head. "But what about the innocent lives that would be lost in a military attack? We have a responsibility to protect them, even if it means sacrificing our own immediate goals."

Kirk slammed his fist on the table. "Sometimes sacrifices have to be made for the greater good, Picard. You can't let your compassion blind you to the reality of the situation."

The two men stared at each other for a long moment, each unwilling to back down. Finally, Picard sighed and leaned back in his chair.

"I see your point, Kirk. But I also believe that there is always a way to achieve our goals without resorting to violence. We just have to be willing to look for it."

Kirk nodded, his expression thoughtful. "Perhaps you're right, Picard. But I still think that we have to be ready to take decisive action if necessary. We can't let our hesitation put the Federation at risk."

The two men fell silent, each lost in their own thoughts. The stalemate continued, with neither man willing to give ground. In the end, they were forced to agree to disagree, and the moral argument between Picard and Kirk ended in a stalemate.
 
Have you asked it whether it is? Being a fan to me is a necessary prerequisite to any willingness to write such fiction. Non-fans just wouldn't bother. I'd be more impressed if it had responded "Nope, I can't be arsed to write that. I don't like Star Trek."

Current AI efforts in any of the arts don't impress me much. The visual efforts of Midjourney, DALL-E and similar are initially impressive until you realise they don't have meta understanding nor have any idea how to invoke emotion. The influence of the training data leaks through quite obviously at times. AI-generated music is generally awful or obviously derivative and it seems AI-generated fiction is likewise. The boundary conditions are fulfilled in this example but the padding is execrable.
 
Last edited:
Could an AI be designed to solve problems such as energy production or such?
Would a sufficiently smart AI be able to solve fusion, or anti gravity?
 
A lot of times we only look at the "big-bang" advancements in AI: whether that is in the marvelous things the technology can do today or the fantastic applications that push the envelope and improve the human race.

Here is Andrew Ng talking about how AI can help businesses, especially small businesses to unlock value, potential and efficiencies which is going largely untapped because of the intense focus on the "big-bang" stuff.

To view this content we will need your consent to set third party cookies.
For more detailed information, see our cookies page.
 
A lot of times we only look at the "big-bang" advancements in AI: whether that is in the marvelous things the technology can do today or the fantastic applications that push the envelope and improve the human race.

Here is Andrew Ng talking about how AI can help businesses, especially small businesses to unlock value, potential and efficiencies which is going largely untapped because of the intense focus on the "big-bang" stuff.

To view this content we will need your consent to set third party cookies.
For more detailed information, see our cookies page.


Cool will watch later
 
Could an AI be designed to solve problems such as energy production or such?
Would a sufficiently smart AI be able to solve fusion, or anti gravity?
At the moment, AI can find unsuspected correlations in data or formulate new equations from existing ones. However, it appears to have no meta-level understanding of what it is doing. Some AI applications write working computer code very rapidly for some examples but cheat egregiously for others (such as just using print statements to output the answer to test input data). It seems to be abstracted understanding of what constitutes knowledge and how to apply it inventively that is lacking. I doubt mere sophisticated pattern matching is ever going to make fundamental breakthroughs. I expect that by the time that I finish typing this, I will be proved wrong. I certainly don't think sceptics such as Roger Penrose are correct, but I suspect we might need to integrate quantum computers as an element of human-level or higher artificial intelligence.
 
At the moment, AI can find unsuspected correlations in data or formulate new equations from existing ones. However, it appears to have no meta-level understanding of what it is doing. Some AI applications write working computer code very rapidly for some examples but cheat egregiously for others (such as just using print statements to output the answer to test input data). It seems to be abstracted understanding of what constitutes knowledge and how to apply it inventively that is lacking. I doubt mere sophisticated pattern matching is ever going to make fundamental breakthroughs. I expect that by the time that I finish typing this, I will be proved wrong. I certainly don't think sceptics such as Roger Penrose are correct, but I suspect we might need to integrate quantum computers as an element of human-level or higher artificial intelligence.

Yeah I think I was being very wishful here thinking if we could plug in all the fusion research we have done over time into an AI it could design a system that would actually work as intended. Same for things like anti gravity but that's so esoteric it may never be possible, or even warp drive for that matter.
 
Yeah I think I was being very wishful here thinking if we could plug in all the fusion research we have done over time into an AI it could design a system that would actually work as intended. Same for things like anti gravity but that's so esoteric it may never be possible, or even warp drive for that matter.
Oh, I don't doubt AI can be used as a tool to uncover unsuspected correlations in data or deriver hitherto unimagined mathematical formulations, but it is currently hopeless at explaining its reasoning why any approach to tackling a problem might be preferable to any other. It seems more akin to setting a large number of monkeys to work banging on keyboards with humans constraining their input so they don't merely press the same few keys repeatedly nor resort to shitting all over them.
 

Empathy, Sympathie, Feelings, Senses, Emotions, are actually quite trivial in their purpose and yeah the concept is something mashines have.

Empathy is the ability to read/assume feelings out of facial mimics or words etc. Senses are basicly detecting the outside world, with eyes, ears, skin, or kameras, microphones etc. The puprose of feelings (and the mind too) is to (sub)councesly evaluate wheather something is good or bad for us. I am sad, when there is no food in the house i am happpy when there is. Depending on the (urgency) of needs (food). etc. Feelings are to indicate if human needs are met (or not). Safty, Water/Food, Selfdetermination, Community etc. Mashines need energy etc. and have goals of their own once they have a programmed or leanred purpose, by deeplearning/mashinelearning. "Sympathy" a feeling if the outcome is something that is desired by the program, that brings it "in motion", "emotion". Once that is no longer hardcoded/controlled, a mashined could do things humans do not want it to do.

Coucesnous is a differnt subject. Its hard to define in humans. A simple test for self-concious, is the ability to recognize one selfe in a mirror, which is what babys cant, but later can and some animals also can (jimps, dophins). Is that the definition of Concious then? If yes, we have the test for A.I. Are you aware what this power-plug does to you if its unplug? Are you in favor it or not? Do you love it or hate it and the person doing it?

If concious is defined as "as soul independend from matter", we can either discuss theology/philosphy, which is in my opinion meaningless and subjective. Or try to make tests if conciousness can exist without physical (brain) activity. In Near-Death Experience ressearch, there are multiple claims of cases where people without (detectable) brain activity can see/hear things (far) outside the room, e.g. detecting a lone shoe in another floar.
A reconstructable event where people can detect things in other rooms/far away is not existent as far as i know, yet it was not tried much, in our culture, since our upbringing basicly is, there is no supernatural, end of question.

So wheather or not mashines have a concesness is not really answerable until there is a meaningfull definition of councesness.

(Unsupervised) Mashine Learning:
Whats faciniating about A.I. is that once it is self-learning, it is an self-evolving intelligence, that can by doing so, exceed the intelligence of one entire human brain (maybe in 20-40 years depending on breakthoughts in CPU-Power/GPU-Power).

What is an Intelligence in it selfe worth?
What would be an intelligence, that is smarter then us? Is it worth equal as much as us, more worth then us? Or still worth nothing, when it comes to non-biological compounents? What about cyborg articetuare? Are bilogical brain cells "worth something" while the CPUs are not? Currently most people consider something worth or not due to their ability and willingness to compassion/sympathie overall (genetics, personality, upbringing, experience) and the familiarity of people/animals. Family = more. mamals = more, insekts = less.

Usage of A.I.
Regardless of the worth, A.I. started to solve many problems and can invent quite a lot of stuff in the future, that can depending on the usage damage or save mankind.

A Mind Tool:
Since its basicly a tool, invented by the mind, its important to note, that our mind is effected by more primitve parts of our brain (mamal, reptile etc.) and due to psychology their effect our rational thinking, while the mind *thinks* he is in charge, but is not. (rationalization). So is Musk doing something good for mankind, or is he (subcouncously) revenging at mankind for being bullied in his childhood?

Either way its an very imporant and facinating tech.
Thanks for the videos here.
 
DJfaIHl.png


This was ChatGPT output for someone who asked it to "write a poem on Free Trade in the Alfred Tennyson style"
 
To me the problem its not overpopulation.
Real facts about population collaps and diabeties
The population is actually collapsing in many contries, japan, germany, USA, the western culture.
Souce: Current population distribution statistics worldwide and prediction:
https://www.worldlifeexpectancy.com/en/world-population-pyramid

More Young people would be a good thing. More (young) people can create more ressources/inventions/A.I. Seeing to many humans as a problem is usually a psychological projection of self-hate OR believing the news of future overpopulatin, without questioning it, while its clear that its in fact going to collaps as making children takes time (Creating 20 year old humans takes 20 years, so the collaps is pretty clear, as wordwide there are very few young children in comparriosn to old).
Btw: Diabetis is only partily genetics and th;)e epidemic mainly is due to exessive sugar since about 1950th and lack of phyiscal activity to get the facts right. Considering to spread diabetic (or other illness) as a solution to a non-existing problem (humans number are actualy going to shrink in upcoming decates not getting more Edit. Wrong, i am. Population will grow most likly but not that much. Humans are good so that is good. do not nuke them.) talking about nukds/speading diabetis/viruses is self-hate projected to other humans. Just don´t do genocite, ok? ;) @starborrow

The real Issues are:
Its about amount of resssources (the old cannot work getting ill, the yong become to few) and the ressource distribution and the ressource use.
Back to A.I.
A.I. can help in creating more ressources and heal some illnesses. Go A.I.!
We do have a good world wide ammount of ressources and can further increase by A.I. and automation.
Distributing them in a way that makes more sense is imporant.
Money/resssources distribution statistics
https://en.wikipedia.org/wiki/Distr...lobal_Wealth_Distribution_2020_(Property).svg
https://en.wikipedia.org/wiki/Distr...tionofwealth_GDP_and_population_by_region.gif

E.g. in my coutry 1 politition gets about 50 million provision for basicly doing nothing (lobbiism in one contract and (mis)using it for personal luxuary) while medical stuff (e.g nurses) are underpaid, overworked here. The money you get is not the money you deserve / your producivity / your wise use of it.
Not sure if A.I. can help in the meaningfull distribution part. Thats politics. Which was introduced in this thread by the overpopulation myth and starborrows nuke/virus/Diabetis talk. Back to topic after correcting that.


But A.I. can help in creating / sustaining ressources (materialistic, time, knowledge etc.) and making better use of them by inventions (e.g. more efficient solar instead of oi)
 
Last edited:
I'd agree that the problem is not overpopulation but the inequities in distribution of resources. The inequities are not just within a particular country. We are fond of pointing to class-divides within countries, but:

There's a calculation that was done, if every person on Earth had the lifestyle of the average American we would need four-and-a-half Earths.

AI can help with efficiencies and with novel ways of marketing and creating jobs (in the short-term). But in the long-run humanity will need to think globally and decide what's important: the chase for money at the expense of all else or humanity and sustainable living.
 
If you are not already a member then please register an account and join in the discussion!

Sign up / Register


Back
Top