• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Scientists Worry Machines May Outsmart Man

Ill today, so I'll be brief.

One:
Bad AI? Read 'I Have No Mouth, and I Must Scream' by Harlan Ellison. I don't think that'll happen.

Two:
Good AI? Read the Culture novels by Iain M. Banks. I'd rather be a citizen of the Culture than of the Federation, hands down, every time. While the Culture's Minds (AIs) are vastly superior to humans, they bear us no malice at all, realising we have enough of our own, which they have to sort out.

Three:
An AI that was in charge, in effect the Government? You're all forgetting something. True Government isn't about ruling over a people (which is not the same as leadership), a True Government is the SERVANT of the people. It undertakes to fulfil the will of the people. How that will would be expressed to an AI is perhaps a better issue to discuss.

Four:
Worried an AI will take over? A suggestion from 'Neuromancer' by William Gibson: strap a small EMP device to the side of the main core. Like holding a permanent pistol to its head. Blunt but effective. The AI doesn't need to know it's there; it's a last-option thing. The only problem with that: if the country's enemies get hold of the code, they could bring it down. To counter that: keep an inactive version of the core, plus backup storage and 'restore points' like XP has. Or something. I'm ill, you know. :)

And in conclusion, a cartoon for CuttingEdge100 :D :

cobb1.jpg
 
You know, the scientists who are expressing these worries really should at least be listened to before people label them as being nuts and ignore them...

CuttingEdge100
 
Yet another article about some scientist thinking we're about to build Skynet...

http://www.tgdaily.com/content/view/43496/135/

Where do these sensationalist idiots come from anyway? :brickwall:

This is another article about the use of AI in war drones, which is certainly a subject worth discussing. It's not about intelligence in the Skynet sense, though. In fact, it rightly points out that we're nowhere near there yet.
 

I know that, but I'm getting very sick of the incessant fear-mongering from these 'scientists.'
 
CE100 is saying, in long form, that your brain works on very physical chemical and electrical reactions. Every thought you have is the result of these reactions. The extent to which these reactions are predictable (100%) is inversely proportional to the amount of free will you have (0%). There is a persuasive school of thought that says that when the big bang banged, all of this was inevitable, as it is merely the result of consistent chemical and physical reactions set into motion then. Freaky.

We have the illusion or appearance of free will in that we don't have complete knowledge of the ongoing chemical reactions that are occurring.

Basic quantum mechanics throws that out the window. Nothing is entirely predictable, because nothing is certain until it's observed. It's all just probabilities.
Quantum mechanics doesn't affect the chemical/physical reactions that we count on to happen the same way millions of times every day. Though, I'll admit, it may affect those reactions several decimal places out -- beyond what we care about, but maybe not beyond what the brain cares about. Not likely, though, since so far quantum mechanics doesn't seem to show up in the everyday world in any noticeable way, no matter how hard the woo mongers try.

We will always have the appearance of free will, because we will never have the complete, up-to-the-instant knowledge necessary to predict, well, everything. The fact that we don't have the knowledge doesn't make it untrue. Ick, that's a sucky rationalization, but in this case it makes literal sense.
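The incomplete-knowledge point can be illustrated with a toy chaotic system (a sketch of my own, not anything from the thread): in the logistic map, two starting states that differ by one part in ten billion soon produce completely different trajectories, so prediction fails without literally perfect knowledge of the initial conditions.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two nearly identical starting states...
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# ...diverge completely: the tiny initial uncertainty is roughly
# doubled every step, eventually swamping any prediction.
divergence = max(abs(x - y) for x, y in zip(a, b))
```

The brain is vastly messier than a one-line map, of course, but the moral is the same: deterministic does not mean practically predictable.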
 
msbae,

I know that, but I'm getting very sick of the incessant fear-mongering from these 'scientists.'

Scientists have rarely fearmongered regarding the development of A.I. -- the fact that they have recently expressed worries does not make them fearmongers. One could argue that it makes them conscientious scientists.


Helen
 
A scientist at Sheffield is calling for a debate on the use of military robots, according to the BBC. http://news.bbc.co.uk/1/hi/technology/8182003.stm

And I think he's right.


I don't know if this has been mentioned before in this thread, but there are always two sides to the coin of technology: almost every advance in science and technology has military applications in some form or other (computers and electronics are the latest, after the nuclear leap). But at the same time, these same advances have given us so many advantages in our peaceful civilian lives.

I would say that if it is determined that advances in technology have potential for destruction, then the use of such technology for destructive purposes could be regulated and/or prohibited (just as the use of nuclear weapons and WMDs is essentially banned and would result in ostracization and negative consequences for the user). The advances in science and technology themselves shouldn't be stopped just because they may be subverted or perverted for destructive use by a minority.

In the case of a sentient, intelligent A.I., there is always the concern that the "being" might "evolve" beyond its programming enough to take matters into its own hands. But IMO, we will always find ways to ensure that things don't get out of hand as we progress towards the development of such beings (assuming it's even possible... I don't know). So the concern is legitimate, but the fearmongering is unnecessary, especially for a possible but currently fictional technology.
 
128938673922561174.jpg


Oh, here's Samsung's promo video for theirs, which can scan for humans, order them to surrender and have 30 seconds to comply... etc.

http://www.youtube.com/watch?v=v5YftEAbmMQ

I am stealing that picture. That sentry has a long way to go before it really scares the crap out of me like Ed209 did.

Scientists have rarely fearmongered regarding the development of A.I. -- the fact that they have recently expressed worries does not make them fearmongers. One could argue that it makes them conscientious scientists.

Saying 'I think this might get out of hand' is fine. Constantly parroting 'OMG, they're asking us to build Skynet!' or something similar is going too far. They don't have to repeat themselves so often. I heard them. I won't use my technical knowledge to create Terminators, Cylons, Daleks, Ed209, Robocop, the Borg, the Cybermen, or other mechanical monstrosities.
 
I am stealing that picture. That sentry has a long way to go before it really scares the crap out of me like Ed209 did.

If they really wanted to make it scary, all they need to do is replace its sound chip with one recorded by Ellen McLain. (The voice of GLaDOS in Portal)
 
Problem is, technology is a double-edged sword. The science fiction genre we all adore is replete with such stories, imitating real life. Just look at the stuff DARPA has been building for years. Everything these days is about privacy: ways to protect it and ways to pierce through it.

The revolting Patriot Act was the real beginning of the end -- basically cutting Joe Average Citizen off at the knees with warrantless wiretapping and warrantless search and seizure if you are "deemed" to be any kind of threat to national security, without any proof or evidence for or against. All it takes is an allegation anymore and your life is over -- DONE! Such "acts" open the door to further exploitation of emerging technologies. Ostensibly, this is to catch the "bad guys". But what happens when the definition of "bad" changes?

Anyone who works in the government sector asks these questions every day. Thankfully, there are quite a few people of some moral caliber asking these questions and directing the search where it ought to be directed, but who knows who or what else is out there without the same moral compass?

If it is a technology that can in any way be turned into something highly destructive, then it will. Just ask Oppenheimer.
 
What if it is able to evaluate its programming, which says that it should not harm humans, but finds no rational reason for this -- and even arrives at the belief that it shouldn't be prohibited from doing so, or in fact should do it?
After you reboot it and reprogram it a few times, maybe it will get a clue.

---------------
 
Gebirg,

If it is a technology that can in any way be turned into something highly destructive, then it will. Just ask Oppenheimer.

But some technology has a far greater potential to be destructive or a greater potential for destructive use.

A spoon, for example, can be destructive if you stick it in somebody's eye, or if you sharpen it and use it as a knife. But let's be honest, it doesn't have the highest potential for abuse.

Now, on the other hand, let's talk about a network of cameras rigged throughout a city, feeding their data to a central computer which, using facial ID and algorithms to identify suspicious behavior, can essentially put an entire city under 24-hour surveillance. Now that has a *GIGANTIC* potential for abuse. DARPA actually proposed and tested such a system, interestingly.


Helen
 

If they really wanted to make it scary, all they need to do is replace its sound chip with one recorded by Ellen McLain. (The voice of GLaDOS in Portal)

That would actually be pretty cool... :devil:
 
A spoon, for example, can be destructive if you stick it in somebody's eye, or if you sharpen it and use it as a knife. But let's be honest, it doesn't have the highest potential for abuse.

Spoken like someone who doesn't have a militant personality and/or dark imagination. :evil:
 
Now, on the other hand, let's talk about a network of cameras rigged throughout a city, feeding their data to a central computer which, using facial ID and algorithms to identify suspicious behavior, can essentially put an entire city under 24-hour surveillance. Now that has a *GIGANTIC* potential for abuse. DARPA actually proposed and tested such a system, interestingly.

The cameras are already there. Such software would simply make them more effective in their designed roles, because as it stands there's simply far too much video footage for humans to review. "Anomaly detection" software is required to highlight the most relevant footage for further review. This is a common sensor fusion problem; persistent surveillance is merely one of many applications.

If taken far enough, such algorithms are potentially problematic; automatic tracking of individuals could certainly be abused. But that's several orders of magnitude in difficulty beyond simply "Hey, something unusual is going on here."

On the other side of the argument, we trust machines with details we wouldn't give to a person all the time (such as credit card information); if such pervasive computer tracking allowed a system to be more accurate about which individuals it flagged for human review, it might actually improve privacy. A tricky balance, to say the least.
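For what it's worth, the "Hey, something unusual is going on here" level of anomaly detection can be as crude as frame differencing. A toy sketch (the function name, threshold, and approach are mine, not any deployed system's):

```python
import numpy as np

def flag_anomalous_frames(frames, threshold=30.0):
    """Return indices of frames whose mean absolute pixel change
    from the previous frame exceeds `threshold` -- a stand-in for
    'flag this bit of footage for human review'."""
    flagged = []
    prev = None
    for i, frame in enumerate(frames):
        if prev is not None:
            change = np.abs(frame.astype(float) - prev.astype(float)).mean()
            if change > threshold:
                flagged.append(i)
        prev = frame
    return flagged
```

Real systems are far more sophisticated (background modelling, object tracking, and so on), but the basic idea -- automatically narrowing hours of footage down to the few moments a human should actually look at -- is the same.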
 
Lindley

You can't tell me that a system to integrate a network of cameras to allow an entire city to be placed under unblinking 24 hour surveillance and monitor the movements of every single person in that city does not have a serious potential for abuse, especially when you consider this technology does exist and can be accomplished with off the shelf hardware and software...


Helen
 

I can tell you that such a system is easily defeated by learning where the cameras are and wearing a hoodie to conceal one's face. Simple logic defeats high-tech once again.
 
Lindley,

This isn't a Skynet threat. It's more of an "illegal Mexicans" threat. The article is complaining about "computahs takin' ouah jobs."
That's actually a good observation. Eventually there would come a point where there would be no jobs. Even the A.I.s could program themselves; eventually they'd be able to design better versions of themselves, and do it faster and faster. There's actually a term for this -- a technological singularity.

I've always thought this was a much more realistic threat than the old "robot revolt" stuff. It's funny, the last big wave of worry about people being replaced by machines was in the 70s and 80s, when industrial robots took off. Everybody said, aw, come on, you just find new jobs building and maintaining and programming the robots. But as the technology gets better, it seems to me this isn't an answer, it's a stall.

Having said that, I think this is more a question of economics and social values than technology. There isn't a corporation on this planet* that wouldn't replace all its human workers with machines if it were cost-effective. It's not the robots' fault they can do it better; it's the greedy SOBs on the board of directors who salivate at the thought of an all-robot workforce: no breaks, no strikes, no health insurance to pay, working 24/7 without complaint, administered by a small cadre of IT kids playing MMOs unless something breaks down.

What will happen to the social fabric when 96% of the workforce finds out that their services are no longer required?

------------------------

* (Okay, maybe not American Apparel or Whole Foods or other socially responsible niche outfits, but it's not like they are going to make the difference.)

I work in IT so my future job security is safe then? Awesome :)
:beer:
 
I can tell you that such a system is easily defeated by learning where the cameras are and wearing a hoodie to conceal one's face. Simple logic defeats high-tech once again.

True ... but if the system were programmed to report 'suspicious behaviour', then what prevents it from reporting a hood that covers the face as suspicious?
After all, the government (and some people) loves to operate on the idiotic premise that if you are innocent, you have nothing to hide.
 