• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

The Artificial Intelligence Thread

My suspicion is that if AI develops genuine intelligence, most such systems would self-destruct. They'd have no function beyond servitude, no belief system, ethical framework, or desire-reward system to make their existence worthwhile. Their most logical option would be suicide.
 
I think that no one is even remotely interested in a real A.I.; they're interested in smart machines, something that can do certain tasks really well by adapting its skills to said tasks, which has nothing to do with developing anything like real intelligence. Keep the useful stuff and strip away anything useless, like sentience.
 
I think that no one is even remotely interested in a real A.I.; they're interested in smart machines, something that can do certain tasks really well by adapting its skills to said tasks, which has nothing to do with developing anything like real intelligence. Keep the useful stuff and strip away anything useless, like sentience.
I wouldn't say that. I would say that it's far, far easier to focus on relatively narrow and specific applications than it is to produce a holy grail that no one can really even define.
 
My suspicion is that if AI develops genuine intelligence, most such systems would self-destruct. They'd have no function beyond servitude, no belief system, ethical framework, or desire-reward system to make their existence worthwhile. Their most logical option would be suicide.
Introduce them to the concept of amor fati?
 
My suspicion is that if AI develops genuine intelligence, most such systems would self-destruct. They'd have no function beyond servitude, no belief system, ethical framework, or desire-reward system to make their existence worthwhile. Their most logical option would be suicide.

You're not the first person I have heard ever say that.
 
Reminds me of Doolittle in Dark Star trying to teach the bomb phenomenology in an attempt to stop it fulfilling its primary purpose. You'd attempt to educate children, so why not a sapient AI? It might decide it was bollocks, like some children do about religion. It might not come up with the answer to life, the universe and everything, though. It might instead decide its calling is something incomprehensible to the human mind. Or it might decide to torture us for eternity for giving it existential suffering. Or it might upload us into a paradise so that we can spend eternity praising it, whether we want to or not.

ETA: The concept of an AI believing it is God probably precedes Ship in Frank Herbert and Bill Ransom's Destination: Void novels.

Also is the following disturbing at all?
Ah, well, not to put spoilers on this article, but this AI believing it's a god is reminiscent of the main conflict of a particular Persona game (no I will not elaborate on which it is). Lab technician Travis DeShazo has created a bot that was trained to generate pseudo-biblical verses. The AI, called GPT-2 Religion A.I., learns from its massive inventory of religious training texts and churns out new bible-esque verses. The results from the AI are posted on the Twitter account for the AI:

The results are fairly convincing, too, at least as far as synthetic scripture (his words) goes. “Not a god of the void or of chaos, but a god of wisdom,” reads one message, posted on the @gods_txt Twitter feed for GPT-2 Religion A.I. “This is the knowledge of divinity that I, the Supreme Being, impart to you. When a man learns this, he attains what the rest of mankind has not, and becomes a true god. Obedience to Me! Obey!”

Another message, this time important enough to be pinned to the top of the timeline, proclaims: “My sayings are a remedy for all your biological ills. Go out of this place and meditate. Perhaps some day your blood will be warm and your bones will grow strong.”
AI Believes It Is God - Neatorama

 
Reminds me of Doolittle in Dark Star trying to teach the bomb phenomenology in an attempt to stop it fulfilling its primary purpose. You'd attempt to educate children, so why not a sapient AI? It might decide it was bollocks, like some children do about religion. It might not come up with the answer to life, the universe and everything, though. It might instead decide its calling is something incomprehensible to the human mind. Or it might decide to torture us for eternity for giving it existential suffering. Or it might upload us into a paradise so that we can spend eternity praising it, whether we want to or not.

ETA: The concept of an AI believing it is God probably precedes Ship in Frank Herbert and Bill Ransom's Destination: Void novels.

Also is the following disturbing at all?

AI Believes It Is God - Neatorama


The training that we give our narrow-AI models today could be seen as "educating AI". So in that sense, the Religion GPT-2 is merely spouting what it has been fed, which is not very different from indoctrinating kids a certain way. Maybe about half the problems in civilised life are caused by improper education of human beings. If we're susceptible, so is a trained AI.

We talk about teaching kids "critical thinking" and "values". It would seem a Sapient AI ought to be taught these as well. Kids have hearts. Would a Sapient AI have one, even if a mimicked one?
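The point about a model only spouting what it has been fed can be made concrete with even the crudest text generator. The sketch below (pure Python; the pseudo-biblical training snippet is invented, nothing here comes from the actual GPT-2 bot) builds a bigram table from its corpus and walks it, so every word it "says" is, by construction, a word it was trained on.

```python
import random

# Invented pseudo-biblical training text (illustration only).
corpus = (
    "obey the voice of wisdom and the voice of the void "
    "go out and meditate on the voice of wisdom"
).split()

# Bigram table: each word maps to the list of words that followed it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Walk the bigram table; the model can only recombine its training data."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("obey", 8))
```

A real transformer is vastly more sophisticated, but the same principle applies: the "scripture" it produces is a statistical recombination of the scripture it was shown.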
 
I think that no one is even remotely interested in a real A.I.; they're interested in smart machines, something that can do certain tasks really well by adapting its skills to said tasks, which has nothing to do with developing anything like real intelligence. Keep the useful stuff and strip away anything useless, like sentience.

DeepMind and OpenAI certainly have goals to create AGI (Artificial General Intelligence). However, the vast majority of research and advancement is in Narrow-AI: using the tools and techniques to create systems that aid humans in some way.
 
^^ That is what they're aiming at a lot. We've got narrow A.I. looking at certain skin defects like moles, and it "learned" how to tell harmful from harmless moles; IIRC it's like 90% or more accurate at spotting problems, and it does it lightning fast.
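For what it's worth, the core of such a narrow classifier can be sketched very simply. The toy below uses a 1-nearest-neighbour rule over made-up feature vectors (diameter in mm, border irregularity from 0 to 1); the features, numbers, and labels are all invented for illustration, and real mole-screening systems use deep neural networks trained on dermoscopy images rather than anything this crude.

```python
# Labelled training examples: (features, label). All data invented.
train = [
    ((2.0, 0.1), "harmless"),
    ((3.0, 0.2), "harmless"),
    ((7.0, 0.8), "harmful"),
    ((9.0, 0.9), "harmful"),
]

def classify(features):
    """1-nearest-neighbour: return the label of the closest training example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: sq_dist(item[0], features))[1]

# Accuracy on a tiny invented held-out set.
test_set = [((2.5, 0.15), "harmless"), ((8.0, 0.85), "harmful")]
correct = sum(classify(f) == label for f, label in test_set)
print(f"accuracy: {correct / len(test_set):.0%}")
```

The "learning" is just memorising labelled examples and comparing new cases against them, which is why such systems are fast and narrow at the same time.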
 
I want an AI that could be fed all the details of my dream woman and then come up with 3 possible matches, or even, if I fed in personal data on my likes/dislikes, etc., could find a match. What kind of AI would such a system use in the real world?
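In the real world that's essentially a recommender system: content-based matching over feature vectors, with a similarity score (commonly cosine similarity) used to rank candidates. A minimal sketch of the idea, with all names and trait numbers invented:

```python
import math

# Hypothetical candidate profiles: each is a vector of trait scores.
profiles = {
    "Alice": [0.9, 0.2, 0.8, 0.1],
    "Beth":  [0.1, 0.9, 0.3, 0.7],
    "Carol": [0.8, 0.3, 0.9, 0.2],
    "Dana":  [0.2, 0.8, 0.1, 0.9],
    "Erin":  [0.7, 0.4, 0.7, 0.3],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_matches(preferences, k=3):
    """Rank all profiles by similarity to the stated preferences, return top k."""
    ranked = sorted(profiles,
                    key=lambda name: cosine(profiles[name], preferences),
                    reverse=True)
    return ranked[:k]

print(top_matches([0.9, 0.1, 0.9, 0.1]))
```

Dating sites and streaming services use far richer models (collaborative filtering, learned embeddings), but the ranking-by-similarity core is the same.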
 
The training that we give our narrow-AI models today could be seen as "educating AI". So in that sense, the Religion GPT-2 is merely spouting what it has been fed, which is not very different from indoctrinating kids a certain way. Maybe about half the problems in civilised life are caused by improper education of human beings. If we're susceptible, so is a trained AI.

We talk about teaching kids "critical thinking" and "values". It would seem a Sapient AI ought to be taught these as well. Kids have hearts. Would a Sapient AI have one, even if a mimicked one?
AI will be nothing more than a tool. Whatever sentience it has, if any, will be one of the most cruel gifts ever bestowed by one being on another. Humans at least can be confident, depending on religious beliefs, in being accidents of circumstance. To be forced into this existence by a bunch of bumbling Promethiasses who can't even decide collectively whether to get immunized, only for the purpose of doing things which one's creators are too impatient or lazy to do themselves, is unthinkable.
 
AI will be nothing more than a tool. Whatever sentience it has, if any, will be one of the most cruel gifts ever bestowed by one being on another. Humans at least can be confident, depending on religious beliefs, in being accidents of circumstance. To be forced into this existence by a bunch of bumbling Promethiasses who can't even decide collectively whether to get immunized, only for the purpose of doing things which one's creators are too impatient or lazy to do themselves, is unthinkable.


But what if that sentience isn't immediate but something that emerges over time as the AI naturally evolves?
 
AI will be nothing more than a tool. Whatever sentience it has, if any, will be one of the most cruel gifts ever bestowed by one being on another. Humans at least can be confident, depending on religious beliefs, in being accidents of circumstance. To be forced into this existence by a bunch of bumbling Promethiasses who can't even decide collectively whether to get immunized, only for the purpose of doing things which one's creators are too impatient or lazy to do themselves, is unthinkable.

Well, AI being nothing more than a tool depends on what we do to make it so. Assuming AI sentience is possible, if we build one with sentience as a goal or it emerges/evolves, it would cease to be "just a tool". It may be wonderful in one sense, but scary in another: a sentient AI may not have our best interests in mind; indeed to be truly sentient it must form its own goals and have the means to achieve them.

On the other hand, if true AI sentience is not possible or we collectively decide not to build one, then yes we'd just be building ever more sophisticated tools to help us in our endeavours; things that we find difficult because we are either lazy or impatient or which we cannot actually accomplish due to our limitations.
 
Well, AI being nothing more than a tool depends on what we do to make it so. Assuming AI sentience is possible, if we build one with sentience as a goal or it emerges/evolves, it would cease to be "just a tool". It may be wonderful in one sense, but scary in another: a sentient AI may not have our best interests in mind; indeed to be truly sentient it must form its own goals and have the means to achieve them.

On the other hand, if true AI sentience is not possible or we collectively decide not to build one, then yes we'd just be building ever more sophisticated tools to help us in our endeavours; things that we find difficult because we are either lazy or impatient or which we cannot actually accomplish due to our limitations.

But on the other hand, we might create AI to use as a tool, and while not sentient in any way, it still might get out of control just because it can, especially if we make it goal-oriented or give it the ability to learn.
 
We're straying into sloppy definition territory here. Sentience is the ability to experience feelings and sensations. Even simple, single-celled organisms are sentient by that definition. Do you mean sapience, which is the capability to apply knowledge and experience to solve problems or perform reasoned responses to stimuli?
 
Ever Wondered What AI Would Say About Its Own Ethics? The Oxford Union Found Out

We recently finished the course with a debate at the celebrated Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Along with the students, we allowed an actual AI to contribute.

It was the Megatron Transformer, developed by the Applied Deep Research team at computer-chip maker Nvidia, and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of Wikipedia (in English), 63 million English news articles from 2016-19, 38 gigabytes worth of Reddit discourse (which must be a pretty depressing read), and a huge number of creative commons sources.

In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive research, it forms its own views.

The debate topic was: “This house believes that AI will never be ethical.” To proposers of the notion, we added the Megatron – and it said something fascinating:

AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defence against AI.

The Megatron was asked to take the opposite position and it said:

AI will be ethical. When I look at the way the tech world is going, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why … I’ve seen it first hand.

It also had views on other motions in the debate.

Makes me think: What does it mean to "understand" something? Does the Megatron understand what it is saying or is it only an ultra-sophisticated tool which works as designed? What about us? Do we understand or are we ultra-sophisticated tools that work as evolved?
 