
The Big Blue Empire Strikes Back...

msbae

Commodore
Just when you thought it was safe...

IBM promises an Apple version of Lotus Notes...

http://www.forbes.com/2009/01/06/ibm-apple-software-tech-enter-cx_ag_0106ibmapple.html?partner=alerts

IBM pushes to build even more powerful supercomputers, but without the corresponding increase in power consumption...

http://www.forbes.com/sciencesandme...uters-energy-tech-sciences-cz_bu_1208ibm.html

IBM continues to develop speech recognition and other forms of A.I. (Skynet, anyone?)

http://www.forbes.com/2008/12/08/ibm-supercomputers-metadata-tech-sciences-cz_bu_1208metadata.html

Big Blue even gets a DARPA contract to develop a computer that functions like the human brain... Again, Skynet, anyone?

http://news.cnet.com/8301-13772_3-10103355-52.html

I sure wish I had the billions of dollars IBM spends to fund some of my own research projects...
 
IBM continues to develop speech recognition and other forms of A.I. (Skynet, anyone?)
http://www.forbes.com/2008/12/08/ibm-supercomputers-metadata-tech-sciences-cz_bu_1208metadata.html

Science fiction sometimes has a way of turning out to be prophetic. A.I. technology, if allowed to get too far out of control too fast, can be extremely dangerous.

Unfortunately, ethical guidelines are not keeping pace with scientific advancements.


IBM gets DARPA cognitive computing contract
http://news.cnet.com/8301-13772_3-10103355-52.html

This one worries me on ethical grounds. If you create a computer that works just like a human brain, you will get a sentient being. Sentient beings are entitled to special rights that non-sentient beings are not entitled to...


CuttingEdge100
 

Hence my concerns that Big Blue might be (inadvertently) making a real-life Skynet/Colossus/HAL 9000. I sure hope they know what they're doing and avoid all those pitfalls from the movies...

- msbae
 
msbae,

Hence my concerns that Big Blue might be (inadvertently) making a real-life Skynet/Colossus/HAL 9000. I sure hope they know what they're doing and avoid all those pitfalls from the movies...

Unfortunately, people in business are often very short-sighted: more interested in making money, one-upping the competition (even when that carries a number of pitfalls), or advancing technology for the sake of advancing it (which is not a good reason; one should advance technology when it is needed), without concern for the potential pitfalls.
 
I think some of the decisions made here are reckless and pose serious moral issues.

I think you have a reason to be concerned...


CuttingEdge100
 
This one worries me on ethical grounds. If you create a computer that works just like a human brain you will get a sentient being.
But then again, we already have human brains, so why would we want to create computers that work just like them?

Sentient beings are entitled to special rights that non-sentient beings are not entitled to...
That's strictly a matter of opinion. By what authority would they necessarily be 'entitled'?

---------------
 
Anyone who thinks we are on the verge of building a Skynet or HAL 9000 is very misinformed about the current status of AI. We don't even have AI that can carry a conversation plausibly, much less approach anything that exhibits self-awareness or independent thought.

I'd be much more worried about a bug in a defense system causing accidental launches or something.
 
I think I've heard that current AI is about as intelligent as a retarded cockroach.

There have been some big movements recently in AI. One of the people making big strides is Jeff Hawkins (founder of Palm), with his Numenta software. It mimics how the human brain learns and seems to work pretty well (I've been playing around with it for a month now).

http://www.numenta.com/

I'll be curious to see where IBM goes with their research. Just so everyone knows, I fully intend to serve SKYNET and its minions of killer robots. F the Human Resistance.
 
ScottHM,

That's strictly a matter of opinion. By what authority would they necessarily be 'entitled'?

Do you think it would be ethical to create a sentient being (as sentient and as intelligent as us) simply to experiment on it and see how it works, then once you're done "shut it off" and thus kill it?

Would it be okay to give birth to a baby to conduct experiments on it, then kill it when you're done? (I know it's extreme, but it's to illustrate a point -- by the way, I am *NOT* anti-abortion -- I'm not talking about abortion -- I'm talking about carrying it to term, delivering it, etc...)


Robert Maxwell,

Technically, the last Skynet satellite has been launched... whether it does exactly what the one in Terminator does, I'm not sure... :p

URL: http://news.bbc.co.uk/2/hi/science/nature/7451867.stm


ZephramC is right that major advances have recently occurred in A.I.


To ZephramC

1.) How intelligent would you say the Numenta program is? Like, compared to a 5-year-old, a 10-year-old, a 20-year-old, etc...

2.) How do you know that Artificial Intelligence wouldn't want to bump off all humans?


CuttingEdge100
 
ScottHM,
That's strictly a matter of opinion. By what authority would they necessarily be 'entitled'?
Do you think it would be ethical to create a sentient being (as sentient and as intelligent as us) simply to experiment on it and see how it works, then once you're done "shut it off" and thus kill it?
Creating an artificially 'intelligent' computer is not creating a 'being', it's creating a machine... a tool.

If it's not 'ethical' to create such tools, then we should strictly limit their 'reasoning' abilities. Would you want to build or own an automobile that drove you wherever it wanted and ignored what you wanted? Would you want to build tools that worked when they wanted to and not when you wanted them to?

Would it be okay to give birth to a baby to conduct experiments on it, then kill it when you're done?
Of course not, but then again, a human being is not a computer or a machine.

---------------
 
I don't think this is aimed at creating hardware capable of running a sentient program. More likely they're going for hardware capable of being an expert system. As far as human needs go, there's probably very little we need that cannot be done with an expert system.

After that, any sort of sentient machine just becomes an engineering challenge rather than some sort of great leap forward for ordinary human standards of living.
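
To make the distinction concrete: an expert system is basically hand-written if/then rules applied mechanically until nothing new can be concluded. Here's a minimal Python sketch (the rules and facts are invented purely for illustration):

```python
# Minimal forward-chaining expert system: hand-written if/then rules
# applied repeatedly to a set of known facts until no new fact fires.
# The diagnostic-flavored rules below are made up for illustration.

RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
    ({"rash"}, "allergy_suspected"),
]

def infer(facts):
    """Keep applying rules until no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = infer({"fever", "cough", "short_of_breath"})
print("see_doctor" in facts)  # -> True: two rules chained to reach it
```

There's no "thinking" anywhere in that loop, which is the point: it covers a lot of practical ground without being anything like a sentient program.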
 
Anyone who thinks we are on the verge of building a Skynet or HAL 9000 is very misinformed about the current status of AI. We don't even have AI that can carry a conversation plausibly, much less approach anything that exhibits self-awareness or independent thought.

I'd be much more worried about a bug in a defense system causing accidental launches or something.

True. However, it won't be long until they finally pass the Turing test with one of these experiments. That will be a day when a lot of questions will have to be answered.
 
See the Chinese room argument for that, msbae.

Just because a computer can convincingly carry on a conversation doesn't mean it exhibits any intelligence. It is just, as kv1at3485 said, an expert system.

Numenta's apparent goal is to do what the human brain does well, which is recognize patterns in complex data. This is something computers are generally bad at, but humans are very good at. This is not likely to result in any kind of artificial sentience, it's just a methodology for improved data analysis.
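
For anyone who hasn't run into the Chinese room argument, here's a rough Python sketch of the idea: a program that "converses" by shallow pattern matching alone. The patterns are made up, ELIZA-style; there is no understanding anywhere in it:

```python
import re

# ELIZA-style responder: canned pattern -> response templates. It can
# look conversational while "understanding" nothing at all, which is
# the heart of the Chinese room argument.
PATTERNS = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\bcomputer\b.*", "Do computers worry you?"),
    (r"(.*)", "Tell me more."),
]

def respond(line):
    for pattern, template in PATTERNS:
        match = re.match(pattern, line.lower())
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(respond("I think Skynet is coming"))
# -> What makes you think skynet is coming?
```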
 
ZephramC is right that major advances have recently occurred in A.I.


To ZephramC

1.) How intelligent would you say the Numenta program is? Like, compared to a 5-year-old, a 10-year-old, a 20-year-old, etc...

2.) How do you know that Artificial Intelligence wouldn't want to bump off all humans?

I haven't been working with it for too long (around a month). I've been trying (in vain, I might add) to train it to recognize up/down trends based on the MACD trigger line. Anyhoo, as far as comparing it to a human, maybe (and that's a BIG maybe) a 1-year-old. It is pretty accurate at recognizing images. It has a demo HTM included that has been trained to recognize images of a cellphone, a cow, a duck, and something else (can't remember). You can feed it any random image of a cellphone (or cow, duck, etc.), including photochops downloaded from the internet, and it will correctly identify it.
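
For anyone curious what "up/down trends based on the MACD trigger line" means as a training target, here's a plain-Python sketch. This is just the standard MACD math (12/26-period EMAs with a 9-period trigger line), not Numenta's actual API, and the price series is made up:

```python
# Label each bar "up" or "down" by whether the MACD line sits above
# its trigger (signal) line. Standard MACD arithmetic; toy data.

def ema(values, period):
    """Exponential moving average, smoothing factor k = 2 / (period + 1)."""
    k = 2.0 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(v * k + out[-1] * (1 - k))
    return out

def macd_labels(prices, fast=12, slow=26, trigger=9):
    macd = [f - s for f, s in zip(ema(prices, fast), ema(prices, slow))]
    signal = ema(macd, trigger)  # the "trigger line"
    return ["up" if m > s else "down" for m, s in zip(macd, signal)]

prices = [float(100 + i + i % 5) for i in range(60)]  # toy rising series
print(macd_labels(prices)[-5:])  # labels for the last five bars
```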
 
ScottHM,
Creating an artificially 'intelligent' computer is not creating a 'being', it's creating a machine... a tool.

If it's sentient, it's a being, whether it's a human or a robot.

If it's not 'ethical' to create such tools, then we should strictly limit their 'reasoning' abilities.

I would say sentient computers shouldn't be created, but yes -- I do agree that the intelligence and reasoning capability of A.I. should be limited.

Would you want to build or own an automobile that drove you wherever it wanted and ignored what you wanted?

Well, I would not want that. However, I do not think it's a good idea to build sentient machines.

Would you want to build tools that worked when they wanted to and not when you wanted them to?

As I said, I would not want that. As I also said, I would not want to build a sentient being for ethical reasons.

Of course not

I rest my case.

a human being is not a computer or a machine.

If it's sentient, whether robotic, human, or some combination, it doesn't matter.


kv1at3485,
I don't think this is aimed at creating hardware capable of running a sentient program. More likely they're going for hardware capable of being an expert system. As far as human needs go, there's probably very little we need that cannot be done with an expert system.

There is a fundamental problem with creating machines that will become smarter than us.

Ever heard of the movie "Terminator"? While it is fiction, science-fiction sometimes does have a way of becoming prophetic.


msbae
True. However, it won't be long until they finally pass the Turing test with one of these experiments. That will be a day when a lot of questions will have to be answered.

I know.

Why is it that people don't take these problems seriously (even though people see them coming) until they happen (and, in some cases, reach a point at which it is too late to do anything about it)?


CuttingEdge100
 
Why is it that people don't take these problems seriously

Because the experts know just how drastically far we are from actually creating a self-aware program. The press likes to make AI sound a lot more advanced than it actually is. You know what AI is, really? State space search and pattern recognition. That's all. Advanced AI manages to make better decisions about which directions to search first, but it can't *think* in the human sense. Pattern recognition is, for the most part, just a matter of identifying separating hyperplanes between two or more categories of data. Humans are very good at this; computers are not, despite being able to keep track of far more dimensions at once than we can.
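
For the curious, here's roughly what "finding a separating hyperplane" looks like in practice: the classic perceptron rule, sketched in Python on invented 2-D data (real problems have far more dimensions):

```python
# The "separating hyperplane" idea in miniature: the perceptron rule
# nudges a weight vector until w.x + b splits two labeled clusters.

data = [((1.0, 1.0), 1), ((2.0, 1.5), 1), ((2.5, 2.0), 1),
        ((-1.0, -1.0), -1), ((-2.0, -0.5), -1), ((-1.5, -2.0), -1)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):                                  # passes over the data
    for (x1, x2), label in data:
        if label * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified?
            w[0] += label * x1                        # nudge the hyperplane
            w[1] += label * x2                        # toward this point
            b += label

print(w, b)  # the learned hyperplane: w.x + b = 0
print(all(label * (w[0] * x1 + w[1] * x2 + b) > 0
          for (x1, x2), label in data))  # -> True: all points separated
```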

Sarah Connor Chronicles aside, there is no possibility of a chess-playing program becoming self-aware, any more than Google Maps could. It would have to be a dramatically different approach to chess than anything in the literature for that to be even remotely possible.
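
And here's why a game-playing program is "just" state space search: minimax mechanically scores positions by looking ahead, nothing more. Chess is far too big to inline, so this sketch uses the toy game "21" (players alternately add 1-3 to a running total; whoever reaches 21 wins):

```python
# Minimax in miniature: pick the move whose subtree scores best,
# assuming the opponent does the same. No insight, just search.

def minimax(total, maximizing):
    """Score a position: +1 if the maximizer wins with best play, else -1."""
    if total >= 21:
        return -1 if maximizing else 1  # whoever just moved reached 21
    scores = [minimax(total + move, not maximizing) for move in (1, 2, 3)]
    return max(scores) if maximizing else min(scores)

def best_move(total):
    return max((1, 2, 3), key=lambda m: minimax(total + m, False))

print(best_move(16))  # -> 1: moving to 17 leaves the opponent on a losing total
```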

In order to have self-awareness be even a remote possibility, you'd need an AI to be capable of adjusting its own code. And while that is possible using languages like Lisp, the computer power required to do that in a meaningful way is still a couple of orders of magnitude beyond what we have available.
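
Lisp makes self-modification natural because code is just data there. Python can only approximate that, but here's a toy sketch of a program generating and swapping in new source for one of its own functions; it illustrates the mechanism only, and is obviously nothing like a path to self-awareness:

```python
# Rough Python stand-in for the Lisp "code is data" point: generate
# new source for a function, compile it, and rebind the name at runtime.

source = "def step(x):\n    return x + 1\n"
namespace = {}
exec(source, namespace)         # compile and define the original version
step = namespace["step"]
print(step(10))                 # -> 11

# "Self-modification": derive new source from the old and swap it in.
new_source = source.replace("x + 1", "x * 2")
exec(new_source, namespace)
step = namespace["step"]
print(step(10))                 # -> 20
```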
 
Well said, Lindley. It's not like I wouldn't want us to have such advanced AI--we are just so very far from it, it's not worth seriously worrying about right now.
 
ZephramC,
Anyhoo as far as comparing it to a human, maybe (and that's a BIG maybe) a 1 year old. It is pretty accurate in recognizing images. It has a demo HTM included that has been trained to recognize images of a cellphone, a cow, a duck and something else (can't remember). You can feed it any random image of a cellphone (or cow, duck, etc...) including photochops downloaded from the internet and it will correctly identify it.

That sounds a bit more advanced than a 1-year-old...


kv1at3485,
Which isn't saying much of anything, because science fiction has taken stands on the issue that go all across the board.

Still, the Terminator scenario is one that some scientists have taken seriously.

There was a TV show, which I believe was on National Geographic or something like that, depicting the top 10 ways the world was likely to end. It featured many famous scientists, including Neil deGrasse Tyson.

That scenario was in the top 10.


Lindley,
Because the experts know just how drastically far we are from actually creating a self-aware program.

I don't think we're anywhere near as far off as a lot of people think, first of all. Second of all, as most people know, technology increases at a geometric rate (exponentially): the faster it advances, the faster it continues to advance.

The press likes to make AI sound a lot more advanced than it actually is.

I know that.

It is precisely for this reason that I believe it will advance a lot faster than most people expect, since it is *NOT* that complex.

When you keep in mind that research in computing and cognitive science is advancing in leaps and bounds, it is not an illogical assumption that an artificial intelligence that rivals, and then rapidly exceeds, human intelligence will be produced sooner rather than later.

In order to have self-awareness be even a remote possibility, you'd need an AI to be capable of adjusting its own code. And while that is possible using languages like Lisp, the computer power required to do that in a meaningful way is still a couple of orders of magnitude beyond what we have available.

If you've actually read the link regarding the DARPA cognitive computing project, you will see that one of the things they were trying to figure out was how the brain does what it does with so little power.
 
Still, the Terminator scenario is one that some scientists have taken seriously.

There was a TV show, which I believe was on National Geographic or something like that, depicting the top 10 ways the world was likely to end. It featured many famous scientists, including Neil deGrasse Tyson.

That scenario was in the top 10.

Again, it goes the other way as well, although since I don't watch TV these days, I can't say whether there's been a show like "Likely Ways the World May Develop (Barring Doomsday)".

If all you're saying is that the creation of a sentient machine involves risks to be mitigated as much as possible, then yes, I agree. And I also think that not only is that obvious, but it can do without the constant trumpeting of the doomsday-and-paranoia horn.
 