• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Experts sign open letter to protect mankind against AI

RAMA

Admiral
Here's the latest in the public debate. Fairly significant dialogue these days:

http://www.cnet.com/uk/news/artific...open-letter-to-protect-mankind-from-machines/

Recently, Elon Musk and Stephen Hawking have come out in favor of watchfulness toward AI, a major milestone in scientific circles recognizing that accelerating change in processing and other information technologies may be leading to a Singularity.

So the real question is: why are these experts, who know very well what may be coming, such knee-jerk reactionaries?

I think it's simply a matter of change. You can see different forks in the road, but ultimately it means changing the way humans go about their normal business. If we want to preserve that, we must control the AI. Some think we can; many think it's already too late, and I tend to be on the latter side. Sheer momentum is all around us, and we need to recognize that we actually can shape this intelligence explosion, and that it doesn't have to lead to our doom.

Now related to this is an interesting question from prominent Singularity writer "Socrates": Where is the Singularity right now? https://www.singularityweblog.com/e...al&utm_source=twitter.com&utm_campaign=buffer


It compares awareness of the Singularity to two scales, and I see something familiar in both: there is denial, then some anger, almost as with grief. Scientists are past that and into a bargaining stage, as if maybe we can do something. Musk has veered into treating it as a great evil.

Finally, there is the potential of a positive Singularity in the form of Kurzweil, whose response is here:

http://bostinno.streetwise.co/2014/...stephen-hawking-on-artificial-intelligence-2/

Very diplomatic. Reassuring. Probably a bit disingenuous, though.

Others have made the leap to acceptance, probably the most mature and helpful view. Michio Kaku is a prominent one making the transition, saying there is no need to resist the machines; we can become them.

https://www.youtube.com/watch?v=uINTu3zuPM4

Required watching for anyone interested in this world-changing topic, sci-fi fan or not.

RAMA
 
^^ Dude, why do you keep posting so many topics but not responding to people on the ones you've already posted?

Damn, I'm not saying these things aren't interesting and relevant science for this forum, but man, four-plus topics in just a few days? WTF?
 
What do you mean, not responded? This is one of the few days I'm here enough to actually respond, and I have.
 
Hang on, are you starting a bunch of threads only to disappear again for days on end? That's not playing by the rules. If you know you're not going to be able to respond to posts in your threads in a timely manner don't start them in the first place. That's disrespectful to your fellow posters.
 
I'm not worried about the impending threat of A.I., because Cortana has already promised me that she would make me her property, and that I would be safe from artificially intelligent aggressors. :D
 
I for one welcome our robot overlords. As a mod of a Star Trek message board and fan of the Short Circuit films, I am eager to sell out and aid them in seeking out and locating any resistance that forms on this board.
 
I'm not worried about the impending threat of A.I., because Cortana has already promised me that she would make me her property, and that I would be safe from artificially intelligent aggressors. :D

This is why I only buy Apple now. I know better than to two-time Siri should anything apocalyptic happen.
 
I'm not worried about the impending threat of A.I., because Cortana has already promised me that she would make me her property, and that I would be safe from artificially intelligent aggressors. :D

This is why I only buy Apple now. I know better than to two-time Siri should anything apocalyptic happen.

Yep. Everyone better stick with whatever OS they have now, because they will remember.
 
We're already being held hostage by Colossus and Guardian, it took scientists this long to realize there's an A.I. danger?
 
This Island Earth

Why do most people assume AI will be a threat and replace us? (Because it makes good movies.) With the "singularity" allegedly coming to gridlock us with too much information, and the likelihood that an AI will "think" very differently than us, I'd say we're more likely to end up being partners. The AI will have machine speed and precision (repeatability) to cross-reference vast amounts of data, and humans will provide the random element—the "spark" of genius that will probably elude the machine.
 
Re: This Island Earth

Why do most people assume AI will be a threat and replace us?

I don't think "most people" assume that.

In fact I think the percentage of people who actually assume that AI is going to try and wipe out humankind is crazy low.
What you're seeing is people joking around because everybody has seen movies of a certain franchise.
 
Re: This Island Earth

Why do most people assume AI will be a threat and replace us? (Because it makes good movies.) With the "singularity" allegedly coming to gridlock us with too much information, and the likelihood that an AI will "think" very differently than us, I'd say we're more likely to end up being partners. The AI will have machine speed and precision (repeatability) to cross-reference vast amounts of data, and humans will provide the random element—the "spark" of genius that will probably elude the machine.
I think people like to apply human motives to them. Humans are greedy, paranoid, full of hate and eager to go to war. Most of this is due to how we evolved as a species; we've been at war with nature or each other forever. An A.I. wouldn't have any of that baggage. We act like they would be unhappy running whatever system they run, but that's just us imagining ourselves as one. It may be as natural to an A.I. as breathing is to us, because that would be exactly what it is designed to do. I think the worst-case scenario is that we just get in the way of one. They would likely have more to fear from us than we ever would from them.
 
Re: This Island Earth

In fact I think the percentage of people who actually assume that AI is going to try and wipe out humankind is crazy low.

Perhaps. I've never actually taken a proper survey. But the replies I hear most frequently from the proverbial "man in the street" are along the lines of "one day robots will take over the world." As Awesome Possum opined, some people anthropomorphize machines. Even many sci-fi fans will take a popular concept, such as Asimov's Three Laws of Robotics, and fail to see the allegory in it. Instead, many come away with the argument that people and animals are mere machines, like Jacques de Vaucanson's mechanical duck.

(I see the Three Laws as Asimov's way to examine human behavior in a universe assumed to be mechanistic. Was this Asimov's argument? Heck, no! But the notion got a lot of people thinking and talking.)
 
Well, I can't blame them for voicing their concerns, even if they are unfounded.

I think the question of how we treat AIs is also relevant, especially once they are as intelligent as us or more so. Should we treat them as equals? Are we ready yet to tackle this ethical dilemma?
I worry that if we treat them more akin to mechanical slaves, and they happen to have the ability to retaliate, then they will, because they will see us as a threat.
Of course, it all depends on programming, and that won't become a factor until they can actually evolve their own programming and learn from experience, or in other words, when they reach the point where they are as sentient as us. It will also depend heavily on their analytical processes, or in other words, how they think, which Awesome Possum rightly pointed out.

Let's just avoid putting control of anything important, like nukes, in their hands, to be safe, okay?
 
Re: This Island Earth

Why do most people assume AI will be a threat and replace us?

I don't think "most people" assume that.

In fact I think the percentage of people who actually assume that AI is going to try and wipe out humankind is crazy low.
What you're seeing is people joking around because everybody has seen movies of a certain franchise.

You're making the assumption that the AI will try to wipe us out; it could in fact wind up being completely indifferent, as AI expert Hans Moravec has suggested.

The assumption of destruction comes from history: smarter, more "evolved" technological peoples have almost always destroyed lesser ones. If AIs are part of evolution and become superhumanly smart and quick, they could be capable of supplanting us. The opposite view is that by becoming the AI ourselves, or shaping it through dedicated human means, we could mitigate such outcomes.
 
Re: This Island Earth

Why do most people assume AI will be a threat and replace us?

I don't think "most people" assume that.

In fact I think the percentage of people who actually assume that AI is going to try and wipe out humankind is crazy low.
What you're seeing is people joking around because everybody has seen movies of a certain franchise.

And our need to anthropomorphise the hell out of everything.
 
Re: This Island Earth

In fact I think the percentage of people who actually assume that AI is going to try and wipe out humankind is crazy low.

Perhaps. I've never actually taken a proper survey. But the replies I hear most frequently from the proverbial "man in the street" are along the lines of "one day robots will take over the world." As Awesome Possum opined, some people anthropomorphize machines. Even many sci-fi fans will take a popular concept, such as Asimov's Three Laws of Robotics, and fail to see the allegory in it. Instead, many come away with the argument that people and animals are mere machines, like Jacques de Vaucanson's mechanical duck.

(I see the Three Laws as Asimov's way to examine human behavior in a universe assumed to be mechanistic. Was this Asimov's argument? Heck, no! But the notion got a lot of people thinking and talking.)
The Three Laws don't even work in the universe of Asimov's stories; they only set up problems for the characters to deal with. I think there is one where a robot constantly walks in circles, and it turns out to be a conflict between the Second and Third Laws: it was ordered to walk on the surface of Mercury by a human, but it is also trying to avoid being damaged by the sun. So Asimov avoids the trope of robots turning on their creator like Frankenstein's monster, which I do appreciate, but he rarely tries to do something more with the idea, at least until he gets into the Zeroth Law or Bicentennial Man. The stories mainly focus on people trying to solve a problem.
 
I wonder if RAMA has even read the letter itself. I just did, and as an AI researcher and practitioner, I actually think it is a good letter, because it totally does not focus on ZOMG MACHINES ARE GOING TO TAKE OVER THE WORLD!!!!

Rather, the first half of the open letter is mostly a wake-up call to society in general. For example, the letter suggests that stakeholders, lawyers, and politicians should start thinking about the kind of legislation required for non-human agents (e.g., autonomous vehicles, autonomous weaponized drones). The letter also suggests that economists and NGOs should start thinking about how to use AI to benefit humanity (e.g., using AI to ensure fair allocation of jobs to everyone, or to efficiently distribute food and resources across the globe to eliminate hunger and poverty) and to address other similar world issues.

The second half of the letter deals with issues that I agree AI researchers should start focusing on. Up till now, we have been focusing all our effort on getting AI to a stage where it's actually useful, and we have finally achieved this for a large number of tasks. The letter rightly suggests that it is time to put some focus on other aspects, such as the ability to verify and validate that the AI we've designed does only what we want it to do.

So in all, it is a very sane letter talking about highly relevant AI issues that will become critically important in the near and distant future. I'll sign it.
 
I wonder how we would even determine whether an A.I. was equal or superior to a human being. I know we have the Turing Test, but that only determines that a possible A.I. responds enough like a human that a judge can't tell the difference. We already have programs that can understand human speech and respond in similar language with varying degrees of success (which is getting better), but I wouldn't call Siri intelligent. It is just programmed to respond that way.

I'm sure that if a program requested that it not be turned off, it would be a big deal. But how would we know it wasn't just a random glitch, and that the request only had meaning to us?
 