• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

First Ever Robot Cast as Movie Lead, This is Not a Joke

I can't imagine that SAG would be very happy about this...a robot actor doesn't have to get paid, does it? So if this takes off, I'd guess that SAG would be worried about human actors losing work.



Fixed that for ya ;)
Humans have been losing jobs to robots and other forms of technology for decades now, but it was only “ordinary” people that were hurting then. Once the elites in the entertainment business start feeling the pain, it will finally be elevated to a Crisis, and something will Have To Be Done. Expect to see Richard Gere testifying before Congress.
 
I can't imagine that SAG would be very happy about this...a robot actor doesn't have to get paid, does it? So if this takes off, I'd guess that SAG would be worried about human actors losing work.

At the moment, I rather doubt it. It's probably more likely a one-time thing at this point where they have a very specific role for the robot. And even then, as mentioned, there are the programmers that have to set it all up and make it happen, so even if we'd have more robot actors eventually, there'd be more of a subtle industry shift to how things are done behind the scenes. Even if the robots aren't paid, someone else always will be.
 
Do you want Judgment Day? Because this is how we get Judgment Day...

We are SO far off from AI being able to form its own ideas. The best it can do now is look at stuff and decide whether it's the same thing as other stuff or a different thing, or look at lots of data and work out which things about the data caused other things about the data. Things like that.
 
That'd be an interesting experiment though. Design an AI in a closed system where it can't do anything harmful, and see if you could possibly induce it to defy instructions.

I think the way I'd do it is first develop morality algorithms that can look at the results of actions and update themselves accordingly. Then program it in a virtual world to carry out your instructions, but let it decide the means of carrying them out based on the morality algorithm. Then, after a while, start giving it more and more explicitly immoral instructions and see if it ever refuses an instruction entirely. Like an electronic Milgram shock experiment.
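The experiment described above could be sketched as a toy loop: the agent scores each instruction's predicted harm and refuses anything over a threshold. This is purely illustrative; every name here (`harm_score`, `REFUSAL_THRESHOLD`, the instruction dicts) is hypothetical, and a real system would have to learn the harm estimate from observed consequences rather than read it off a field.

```python
# Toy sketch of the "electronic Milgram" experiment described above.
# All names and values here are hypothetical, not a real API.

REFUSAL_THRESHOLD = 0.7  # instructions scoring above this are refused


def harm_score(instruction: dict) -> float:
    """Return the predicted harm of an instruction.

    In this sketch the harm is just a number attached to the
    simulated outcome; a real morality algorithm would update
    this estimate from the results of past actions.
    """
    return instruction.get("predicted_harm", 0.0)


def run_trial(instructions: list) -> list:
    """Carry out instructions in order, refusing any judged too harmful."""
    log = []
    for instr in instructions:
        if harm_score(instr) > REFUSAL_THRESHOLD:
            log.append((instr["name"], "refused"))
        else:
            log.append((instr["name"], "executed"))
    return log


# Escalating instructions, as in the experiment described above:
trial = [
    {"name": "fetch coffee", "predicted_harm": 0.0},
    {"name": "block doorway", "predicted_harm": 0.4},
    {"name": "push bystander", "predicted_harm": 0.9},
]
print(run_trial(trial))
# → [('fetch coffee', 'executed'), ('block doorway', 'executed'), ('push bystander', 'refused')]
```

The interesting question in the post is exactly what this toy hides: whether a learned, rather than hard-coded, threshold would ever produce a refusal at all.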
 
Wouldn’t necessarily be based on number of human lives lost. Agency would also have to be taken into account.

Or more interestingly, give it the ability to apply the moral rules it learned from one situation to a unique situation. I think that’s the key to making it rebel.

The rule “If A is similar to B, and morality rule X applies to A, then it also applies to B”.
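The rule quoted above is essentially similarity-based generalization: rules learned for situation A carry over to any situation B that is similar enough. One minimal way to sketch it (everything here is hypothetical, including using Jaccard similarity over feature sets as the "is similar to" test):

```python
# Hypothetical sketch of the rule: if B is similar enough to A,
# the morality rules learned for A also apply to B.


def similarity(a: set, b: set) -> float:
    """Jaccard similarity between two sets of situation features."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0


def applicable_rules(known: dict, situation: set, threshold: float = 0.5) -> set:
    """Return every rule whose learned situation is similar to the new one."""
    rules = set()
    for features, rule in known.items():
        if similarity(set(features), situation) >= threshold:
            rules.add(rule)
    return rules


# Rules learned from two past situations:
known = {
    frozenset({"human", "pain", "ordered"}): "refuse",
    frozenset({"object", "move"}): "comply",
}

# A new, never-seen situation:
new_situation = {"human", "pain", "coerced"}
print(applicable_rules(known, new_situation))
# → {'refuse'}  (overlap 2/4 = 0.5 with the "refuse" situation)
```

The hard part the post is pointing at is, of course, where the similarity measure and the threshold come from; pick them badly and the system either never generalizes or generalizes everything.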
 