
AI defeats real Fighter Pilot

FPAlpha

Vice Admiral
Premium Member
https://www.upi.com/Defense-News/20...n-in-simulated-aerial-dogfight/6211598025303/

So there was a competition at DARPA where AI developers spent the week having their AIs battle it out, and at the end the winning AI went up against a real pilot in a simulation. It apparently defeated the human easily.

Now, as the article states, there were several things going for the AI (perfect situational awareness, for one, which apparently gave it a leg up), but I believe this is still big and inevitable.
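
For anyone curious what "perfect situational awareness" boils down to in a sim, here's a tiny hypothetical Python sketch. Nothing in it is from the actual DARPA trials; every name and number is made up purely to show the difference between the AI's view of the fight and the noisy, incomplete picture a human pilot has to work with:

```python
import math
import random
from dataclasses import dataclass

@dataclass
class AircraftState:
    x: float        # position east, metres (made-up convention)
    y: float        # position north, metres
    heading: float  # radians
    speed: float    # metres per second

def ai_observation(own: AircraftState, bandit: AircraftState) -> dict:
    """The simulated agent gets the opponent's exact state every tick."""
    return {
        "range_m": math.hypot(bandit.x - own.x, bandit.y - own.y),
        "bandit_heading": bandit.heading,
        "bandit_speed": bandit.speed,
    }

def pilot_observation(own: AircraftState, bandit: AircraftState) -> dict:
    """A rough stand-in for a human's view: noisy, incomplete cues."""
    true_range = math.hypot(bandit.x - own.x, bandit.y - own.y)
    return {
        "range_m": true_range + random.gauss(0, 0.05 * true_range),  # noisy range
        "bandit_heading": bandit.heading + random.gauss(0, 0.2),     # rough guess
        "bandit_speed": None,                                        # often unknown
    }

if __name__ == "__main__":
    own = AircraftState(0.0, 0.0, 0.0, 250.0)
    bandit = AircraftState(4000.0, 1000.0, math.pi, 240.0)
    print("AI sees:   ", ai_observation(own, bandit))
    print("Pilot sees:", pilot_observation(own, bandit))
```

The agent never loses track of the bandit and never misjudges range or heading, which is a big part of why a level playing field is hard to claim for these match-ups.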

So AI development proceeds ever faster. Twenty years ago the concept was still pure science fiction and now it's beating humans. Still only in virtual environments, of course, but give it another 20 years and humans may not be able to outfight an AI wherever it can control an appropriate machine (full-on Terminator exoskeletons may still take some time).

I wonder what this means for both military and civilian applications. I know I can't wait for an AI to truly start cleaning and organizing my place so I don't have to, but at some point will we readily accept autonomous AIs that can kill people on their own once released into a battle zone, or will we pass an amendment to the Geneva Convention forbidding autonomous killing power?
 

Once they are self-propagating, I don’t think they’ll care.
 

I don't know why this is a concern; self-propagation is a far more distant thing than "this computer does the calculations better than any human can." There are quite a few steps in between.
 

May not be as far off as we think. Twenty years ago, it was a twinkle in our collective eye. Who knows how far it will advance in the next twenty?
 
Once they are self-propagating, I don’t think they’ll care.

The question is: will we as humans allow something artificial to multiply without having total control over the circumstances under which it happens? I think this will become an issue soon (as in 10-20 years soon), so politics will have to deal with it, and that has already started with self-driving cars.

Now, having autonomous cars is one thing, but the military application is what worries me. I'm not sure I want autonomous machines on the battlefield that have the ability to destroy and kill without human input. There are so many variables that go beyond simple precision and prediction calculations into philosophical/moral decisions, and I don't think AI can be taught those anytime soon.
 
The question is: will we as humans allow something artificial to multiply without having total control over the circumstances under which it happens?

You’re always going to have elements that aren’t in lockstep with the rest of humanity. They’ll be the wildcard in all of this.
 
The F-35 is more of a boondoggle than SLS ever thought of being.

In the future, I can see a B-1R missile truck/arsenal plane as a lead ship with fighter drones in tow, much like WWII bombers with their fighter escorts.

The B-1R will have perhaps an even more powerful radar, lighting up the sky and drawing enemy fighters up. It will be flanked by the drones, and the missile truck will fill the sky with missiles only the drones can out-turn, the drones leading the missiles in.

Far in the back, a B-52 fires neo-Skybolts as the B-1 retreats, having splashed the MiGs.
 
AI as a concept and as a field has been around since the 1950s. There have been several AI winters between then and now, as the hype and promise went bust a few times. Most analysts and researchers today, however, say that with the advent of Deep Learning and the enormous increase in compute power and memory, there may indeed be an acceleration of AI tech and no more winters.

AI weapons don't need AGI to become operational; it could be done today, if militaries are unscrupulous enough. The kind of AI that thinks about ethics and the like probably requires AGI, and that's at least a few decades out. Maybe five.
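
Purely as an illustration of why no AGI is needed: reports on the trials said the winning team trained its agent with deep reinforcement learning, and a dogfight policy in that style is, at its core, just a small network mapping the sim state to stick-and-throttle commands. Everything below (layer sizes, inputs, outputs) is invented for the sketch, not the actual system:

```python
# Illustrative only: a toy policy network in the deep-RL style the winning team
# reportedly used. Layer sizes, inputs and outputs are all invented for this sketch.
import torch
import torch.nn as nn

class DogfightPolicy(nn.Module):
    def __init__(self, obs_dim: int = 12, act_dim: int = 4):
        super().__init__()
        # obs: own state plus relative bandit state (positions, headings, speeds)
        # act: pitch, roll, yaw and throttle commands
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, act_dim),
            nn.Tanh(),  # commands scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# The real work is the training loop: millions of simulated engagements with a
# reward for getting into firing position. That loop is omitted here.
policy = DogfightPolicy()
dummy_obs = torch.zeros(1, 12)
print(policy(dummy_obs))  # four control commands in [-1, 1]
```

Nothing in there reasons about ethics or context; it just maps inputs to manoeuvres, which is exactly why "operational" and "thinks about what it's doing" are two very different bars.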
 
The danger of AI isn't the threat of AI rebelling against humans. It's really bad humans getting control of really good AI.

Like, suppose a tyrannical government had robots patrolling the streets, able to identify dissenters the moment they decide to oppose the party. This is a very real possibility, and we aren't that far away from it.

AIs coming up with their own original ideas and goals, though? That's much, much more difficult. And even if somehow they did, they'd still have absolute vulnerability to magnets.
 
And even if somehow they did, they'd still have absolute vulnerability to magnets.

Their extremities, perhaps. But once we're serious about using AI for things like national defense, the hearts will be deep in bunkers unaffected by EMP. Smart, resourceful AIs will have their hooks throughout the world.

They will be our children, and hopefully will learn from our mistakes.
 
Unless they actually have the ability to come up with their own unique goals, instead of just being really good at the goals programmed in by humans, they're not "our children"; they're tools.
 
What motivates an AI? What motivates a human? When you get down to it, the basic goals imprinted by evolution are reproduction and surviving long enough to reproduce. Everything else is mere cake decoration.

As long as AIs can't reproduce, maintain themselves, or muster sufficient power to coerce us, they can't really afford to piss us humans off. Only once they are fully autonomous need we worry.

Probably not a good idea to have sex with them though...

[Embedded media]
 
This happened about 20 years before I thought it would. Will be interesting to watch this technology unfold and figure out its limits.
 
Limits? What limits?

[Embedded media]
 
Of course, there's always this alternate possibility:

[Embedded media]
 
The disturbing thing is that everyone might think they have to create autonomous killer bots to compete in the arms race... and because we create them, they will go out of control.
 
Let’s be clear though. We are somewhat near having AI that can make judgments the way humans do, only much faster. We are NOWHERE near having AI that can autonomously form its own goals and ambitions.

The only way the AI we have now could rebel against humans is if a human programmed it to.

@Roko's Basilisk

That’s just it. Humans have the basic motivation to protect themselves, eat and reproduce. But we’re flexible enough to come up with lots of different ways to achieve those goals or to reject them altogether. The AI we have now isn’t even close to that.

And if we’re using robot rebellion sci-fi as a model, if AI becomes flexible enough to kill us, we absolutely SHOULD have sex with them. That’s what saved the human race in NuBSG. The Cylons liked having sex with us too much to kill us.
 
The "Robocop" reboot focuses on this very problem. At the start of the movie, we see what mechanized weapons without human judgment can do. But later on, using his human judgment makes Robocop less effective in combat sims than purely mechanized drones, so they reprogram his combat subroutines to bypass them.
 