
How close are we to having a J.A.R.V.I.S type AI???

...
The thing you've got to ask yourself is why would a machine NEED to think like a human in order to function effectively? It would be cool to be able to do it and all, just so we know we can do it, but it wouldn't be any better at a particular job than a well-trained human in the same position...

This kind of idea comes from a simple failure of imagination. Hell, I am not particularly imaginative all the time, but off the top of my head I can still make a short list of areas where having human-like AI would be very useful. Also keep in mind that a personality would be centered around the type of judgement needed for the job the machine is built to perform.

1.) Space exploration. Having an AI probe explore Mars would be incredible.

2.) House maid. A person can do this well, but it would be nice to own a machine to do it. Having a maid that is capable of 10x the strength of a person would certainly be better for rearranging the furniture, loading up your moving truck, getting rid of that damned stump in the yard, etc. As well, I would never have a live-in maid because I wouldn't want them to live with me. But having a machine that can do it any time of the day or night would be wonderful.

3.) Soldiers. It is debatable whether such a thing would be desirable, but if we go by your criteria of how a well-trained person could perform, a machine that can run over difficult terrain for long periods without getting tired, can be equipped with heavy armor, have optical zoom built into its eyes, employ precision aiming, and carry heavy weapons with a bigger punch would have some pretty big advantages.

4.) Search and rescue. Say a cruise ship turns over and people are trapped inside. With the clock ticking, finding everyone before it is too late can be very difficult. Being able to send in, say, 200 small AI bots that are highly mobile in the air, in the water, and on surfaces would be invaluable.

These are just four examples off the top of my head. I am willing to bet that with a little thought this list could be significantly expanded.
 
2.) House maid. A person can do this well, but it would be nice to own a machine to do it.

Why would an AI-equipped robot want to do your housework? "Brain the size of a planet, and they have me dusting?!"

Better to have dumb robots do the scut work.

Basically, if the job doesn't require intelligence to do, don't give the task to an intelligent entity.
 
2.) House maid. A person can do this well, but it would be nice to own a machine to do it.

Why would an AI-equipped robot want to do your housework? "Brain the size of a planet, and they have me dusting?!"

Better to have dumb robots do the scut work.
This.

Actually, my point was that if you endowed a robot with human-like intelligence -- what I suppose we're calling "true AI" -- you have essentially created an artificial person and therefore you must now treat it as such. There is, at the moment, no particular need to create artificial persons since we don't have any particular shortage of real ones. There's really no need to build an army of mass produced Data-type androids when, pound for pound, an army of redshirts would probably be cheaper.

Robots only need to be smart enough to do their jobs, and most of the jobs we ask them to do don't require sentience. We should actually hope that they never do; imagine if we had built a thinking computer into Voyager 2 before we shot it into space and left it out there in the solar system. Poor little probe spends a fraction of its life doing flybys of the outer planets; twenty-five years later its telemetry consists of "Ninety-one million eight hundred twenty-five thousand six hundred and two bottles of beer on the wall, ninety-one million eight hundred twenty-five thousand six hundred and two bottles of beer... take one down, pass it around, ninety-one million eight hundred twenty-five thousand six hundred and one bottles of beer on the wall..."
 
1.) Space exploration. Having an AI probe explore Mars would be incredible.
Not so much for the probe, though, since the day eventually comes when it calls mission control and asks "Hey guys, what exactly was your plan for getting me home again?"

2.) House maid.
You want to endow your Roomba with sentience?

Aren't they creepy enough already?

3.) Soldiers.
As long as you replace the politicians first. I've always suspected that the "machine uprising/take over the world" thing in the Terminator and Matrix movies probably started as some kind of labor dispute and AIs have trouble with homonyms (e.g. the multiple definitions of the word "strike").

4.) Search and rescue.
I'm pretty sure we already have some guys who are really really good at this. I don't see robots doing the job that much better, no matter how small they are.
 
Actually, my point was that if you endowed a robot with human-like intelligence -- what I suppose we're calling "true AI" -- you have essentially created an artificial person and therefore you must now treat it as such.

...

Yeah, if you create something with its own will then I would agree. I was focused on the whole "human-like intelligence" aspect, the assumption being that you could have a machine with a high level of "intelligence" that is not self-aware.
 
4.) Search and rescue.
I'm pretty sure we already have some guys who are really really good at this. I don't see robots doing the job that much better, no matter how small they are.

No doubt we have people who are highly trained for this, but it still takes a lot of time that some survivors do not have. Clearly, several hundred highly mobile machines, not subject to the limitations of a person, would be able to locate survivors much more quickly.
 
There is a strong desire in DARPA and IARPA to have a soldier/rescuer toss a handful of tiny robots out of a bag, and have them self-deploy throughout the area of operations to assist the personnel in various ways. Locating survivors in a wreck would be an excellent example of that behavior.

While that does require some fairly intensive programming (navigating an unknown environment, obstacle avoidance, recognition of objectives, etc.), I don't think human-like intelligence is required for the job.
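
To make that concrete, here's a rough sketch (Python, entirely made up for this post -- the grid world, the cell markers, and the run_bot loop are all invented) of the kind of dumb-but-useful behavior one of those tossed-out bots could run: wander, refuse to enter rubble, prefer unexplored ground, and report anything that looks like a survivor. Nothing in it comes anywhere near human-like intelligence.

```python
import random

# Toy grid world: '.' open, '#' rubble, 'S' survivor. Purely illustrative.
WORLD = [
    "..#....S..",
    ".#..##....",
    "....#..#..",
    ".S....#...",
    "..##......",
]

MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right


def passable(r, c):
    """A cell is passable if it is inside the map and not rubble."""
    return 0 <= r < len(WORLD) and 0 <= c < len(WORLD[0]) and WORLD[r][c] != "#"


def run_bot(start, steps=200, seed=0):
    """Reactive wander-avoid-report loop: no planning, no 'understanding'.

    The bot random-walks, refuses to enter rubble, prefers cells it has not
    visited yet, and reports any survivor cell it stumbles onto.
    """
    rng = random.Random(seed)
    r, c = start
    visited = {(r, c)}
    found = set()
    for _ in range(steps):
        if WORLD[r][c] == "S":
            found.add((r, c))            # 'radio back' the location
        options = [(r + dr, c + dc) for dr, dc in MOVES if passable(r + dr, c + dc)]
        if not options:
            break                        # boxed in; a real bot would back-track
        fresh = [p for p in options if p not in visited]
        r, c = rng.choice(fresh or options)
        visited.add((r, c))
    return found


if __name__ == "__main__":
    # A 'handful' of bots dropped at different corners of the wreck.
    bots = [(0, 0), (4, 9), (2, 5)]
    survivors = set()
    for i, start in enumerate(bots):
        survivors |= run_bot(start, seed=i)
    print("Survivor locations reported:", sorted(survivors))
```

Scale that up to a few hundred bots with real sensors and radios and you have the DARPA picture: lots of cheap, simple agents covering ground fast, with the actual judgement calls left to the human rescuers.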
 
Most fiction I know of writes about "true" AI developing spontaneously out of very complex and powerful computer systems that are already near AI status but never quite there.

We still don't know what makes us self-aware and why we work the way we work. Sure, we can map the brain, count its parts, and measure electrical activity in regions that are stimulated by certain inputs, but we are a long way off from understanding the brain fully, so I don't think we are even close to building something truly unique when we don't even understand ourselves fully.

We could build proto-AIs, of that I'm sure: highly sophisticated programs running on the most cutting-edge hardware that can solve new problems they've never encountered by trial and error, just like humans do. But let one listen to some great music or see a classic painting and have it describe why it's great, and that's where it will fail. It could analyze it to death, every nuance spotted, but it will never get past that mechanical point and grasp the deeper meaning and importance.
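
Just to pin down what I mean by "trial and error": something like the toy generate-and-test loop below. The hidden bit pattern and the score function are invented for illustration; the point is that the program keeps whatever scores better without any notion of what the answer means.

```python
import random

# Toy 'new problem': find a hidden 8-bit pattern knowing only a score.
# The program has no concept of what the pattern means; it just tries,
# keeps what scores better, and discards what scores worse.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for an unknown problem


def score(candidate):
    """Feedback signal: how many positions match (the only 'sense' it has)."""
    return sum(a == b for a, b in zip(candidate, TARGET))


def trial_and_error(max_tries=10_000, seed=42):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in TARGET]
    for tries in range(1, max_tries + 1):
        if score(best) == len(TARGET):
            return best, tries
        attempt = best[:]
        attempt[rng.randrange(len(attempt))] ^= 1   # flip one bit and test
        if score(attempt) >= score(best):
            best = attempt                          # keep improvements
    return best, max_tries


if __name__ == "__main__":
    solution, tries = trial_and_error()
    print(f"Solved {solution} after {tries} attempts -- no insight required.")
```
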

This, for me, would be the mark of a true AI: an artificially created person, as opposed to Data, for example, who was a proto-AI -- "just" a highly sophisticated machine that kept evolving and was capable of learning but was never able to make that final leap across that boundary (not until he got his emotion chip, anyway).
 
4.) Search and rescue.
I'm pretty sure we already have some guys who are really really good at this. I don't see robots doing the job that much better, no matter how small they are.

No doubt we have people who are highly trained for this, but it still takes a lot of time that some survivors do not have. Clearly, several hundred highly mobile machines, not subject to the limitations of a person, would be able to locate survivors much more quickly.
Why? 99% of the wait time is waiting for the rescuers to GET THERE in the first place. What difference does it make if you're waiting for a squad of robots or a coast guard helicopter?

:rofl:
 
Typically, when people talk about having usable fusion as a means to effectively generate power, they are talking about cold fusion, which, as far as I know, will never happen.

When people refer to AI, they are either thinking about the HAL type of AI or the Data type of AI. The difference is that the HAL type is an AI that is adaptive but is still confined by its original programming. The problem with HAL (in the novel) wasn't that it became homicidal but rather that its programmers didn't give it proper parameters, or limits, on how to deal with problems. HAL was trying to accomplish its mission at all costs. It was very effective at adapting to changing situations to that end, but it lacked the ultimate human intuition to break the mold, so to speak.

Data is the true AI in that he can actually make choices regarding what he wants and even wishes to do. Data grows with experience and not only adapts to situations but can break free of his programming and become greater. Data, like a human, has the capacity to become better than he was before.

My college professor on AI is convinced that we will eventually reach the HAL-type AI but we will never reach the Data-type AI. He argues that to build a HAL-type AI, all we need is a clever enough algorithm with a wide enough library file, the ability to learn natural linguistic syntax, and enough storage for a lifetime's worth of interaction. Basically, HAL is Siri on steroids.
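
In toy form, that recipe -- a rule library, a memory of every interaction, and nothing that can ever step outside the rules -- looks something like the sketch below. It's deliberately silly and entirely invented (the patterns, the canned replies, the BoundedAssistant class); it isn't how Siri or anything real works, just the shape of the argument.

```python
import re

# A HAL/Siri-style adaptive-but-bounded agent in miniature:
# a rule library, a growing memory of past interactions, and nothing
# that could ever step outside its own rules.
RULE_LIBRARY = [
    (r"open the (.+) doors",  "I'm sorry, I can't do that. The {0} doors stay shut."),
    (r"status of (.+)",       "All systems for {0} read nominal."),
    (r"remember that (.+)",   "Noted: {0}."),
    (r"what do you remember", None),   # handled specially from memory
]


class BoundedAssistant:
    def __init__(self):
        self.memory = []                 # a lifetime's worth of interaction

    def respond(self, utterance):
        self.memory.append(utterance)    # it adapts: everything is stored
        for pattern, template in RULE_LIBRARY:
            match = re.search(pattern, utterance, re.IGNORECASE)
            if match:
                if template is None:
                    return "You have told me: " + "; ".join(self.memory[:-1])
                return template.format(*match.groups())
        return "I do not have a rule for that."   # ...but it never exceeds its library


if __name__ == "__main__":
    hal = BoundedAssistant()
    for line in ["Status of the antenna", "Remember that Dave is outside",
                 "Open the pod bay doors", "What do you remember?"]:
        print(">", line)
        print(hal.respond(line))
```
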

He further argues that we will never reach Data type AI because of two things: 1. We will never know how to build a computer that can grow memory and absorb experience like a human brain. 2. Even if we do, the computer will never share the human experience because it will instinctively know it is not human and develop an intelligence that can't be equated to human intelligence.
 
Typically, when people talk about having usable fusion as a means to effectively generate power, they are talking about cold fusion, which, as far as I know, will never happen.

Maybe the people you talk to. Nobody I talk to or read is referring to cold fusion in regards to power generation.
 
Typically, when people talk about having usable fusion as a means to effectively generate power, they are talking about cold fusion, which, as far as I know, will never happen.

Maybe the people you talk to. Nobody I talk to or read is referring to cold fusion in regards to power generation.
I admit I don't know much about the progress of fusion research. Last I heard, we were still decades away from producing net positive energy from hot fusion and I thought people were still interested in cold fusion.

What do you think about the AI portion of my post?
 
He further argues that we will never reach Data type AI because of two things: 1. We will never know how to build a computer that can grow memory and absorb experience like a human brain. 2. Even if we do, the computer will never share the human experience because it will instinctively know it is not human and develop an intelligence that can't be equated to human intelligence.

I think "never" is an overstatement. Is there some physical reason that would prevent us from creating an AI like Data? Or does it just seem "too hard" for your professor?

I do think it is a mistake to assume that, if we ever do manage to create an artificially intelligent entity, it would just be a human mind in a computer. It would definitely be unique.
 
He further argues that we will never reach Data type AI because of two things: 1. We will never know how to build a computer that can grow memory and absorb experience like a human brain. 2. Even if we do, the computer will never share the human experience because it will instinctively know it is not human and develop an intelligence that can't be equated to human intelligence.

I think "never" is an overstatement. Is there some physical reason that would prevent us from creating an AI like Data? Or does it just seem "too hard" for your professor?

I do think it is a mistake to assume that, if we ever do manage to create an artificially intelligent entity, it would just be a human mind in a computer. It would definitely be unique.

His argument about not being able to build a silicon replica of the human brain is that, in our design process, we could never allow for the natural decay and randomness that is in our brains. The reasoning goes that if we are going to build something like that, it would have to be thoroughly engineered and tested. As soon as we do that, we take out the randomness. And after we build it, we have to be able to make sure it is consistently functional, which rules out allowing decay to take place in any synthetic neural pathways.

Though this was around 1997, which is a lifetime ago in terms of computer science. I wonder if he has changed his mind since then.
 
Exactly. I'm very careful with the word "never" in these contexts.

150 years ago you'd have been called crazy for claiming men would be able to build machines that fly, much less land on the Moon (in anything other than a Jules Verne novel), and today, technologically, it's a piece of cake.

Who knows what scientific and engineering breakthroughs in the future might make an AI viable.
 
He further argues that we will never reach Data type AI because of two things: 1. We will never know how to build a computer that can grow memory and absorb experience like a human brain. 2. Even if we do, the computer will never share the human experience because it will instinctively know it is not human and develop an intelligence that can't be equated to human intelligence.

I think "never" is an overstatement. Is there some physical reason that would prevent us from creating an AI like Data? Or does it just seem "too hard" for your professor?
I've been told the same by various professors, but on further discussion I always get them to admit that we could build that kind of AI if and only if it grew out of a conscious attempt to emulate human brain functioning (e.g. neural prosthesis, where sectors of a damaged brain are replaced by artificial components that can do the same job) which could, over time, come to replace an entire cerebrum as a prosthetic brain.

I do think it is a mistake to assume that, if we ever do manage to create an artificially intelligent entity, it would just be a human mind in a computer. It would definitely be unique.
It would most likely be an artificial organism that intentionally mimics the mental functioning of an organic one. There are all kinds of reasons why we might want to design something like that, but none of them are functional reasons.
 