
Aerial Combat Drones...

Tombfyre

Oops... typed the thread title too fast and missed a letter... sorry about that... aErial... lol.

I was talking with a friend of mine about combat drones... do you think an AI built into a drone would be very, very hard to beat? My friend seems to think it would. I think there's an element of creative thinking in aerial combat that a machine will never have. Sure, drones don't have the G restrictions we have, but I still think a human could trick one... I dunno, though.

What do you think?
 
The aerial "field" of combat is rather uncluttered, so that would give AIs an advantage: not too many variables, and the important ones could be written in the form of simple physical laws that humans are unaccustomed to using but computers are very good at processing. AIs fighting in a forest or on the streets would face greater difficulties and would probably not be able to outsmart or outfight humans or human-controlled machines.

There would be something of a problem with identification, though. In order to keep AIs from shooting down friendlies, there'd have to be rather strict IFF rules in place. But combat IFF often relies on "gut instinct" about enemy tactics: the unidentifiable object approaching from direction X is Foe because that's where I'd least expect a foe to come from, while the unidentifiable object at Y is Friend because his "obvious attack path" indicates he's not afraid of me. This might place the AI at a disadvantage, unless it was allowed to be really ruthless and shoot down some friendlies for the sake of efficiency.

After all, one consequence of using AIs to their fullest would be formations and tactics of decreasing simplicity, and thus increasing confusion on the battlefield. IFF problems might become paramount in such a situation, especially in the aerial battlefield, where key directions and distances can change rapidly and no clear-cut battlefronts exist even today.
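
A minimal sketch of what such a rule-based IFF fallback might look like (Python; the thresholds and the bearing heuristic are invented, purely for illustration):

```python
# Toy rule-based IFF classifier: a transponder check first, then a crude
# "approach geometry" heuristic as a fallback. All thresholds are invented.

def classify_contact(transponder_ok, approach_bearing_deg, expected_friendly_bearing_deg):
    """Return 'friend', 'foe', or 'unknown' for a single contact."""
    if transponder_ok:
        return "friend"
    # No valid IFF reply: fall back on where the contact is coming from.
    # Contacts far outside the expected friendly approach corridor are
    # treated as hostile; everything else stays ambiguous.
    deviation = abs((approach_bearing_deg - expected_friendly_bearing_deg + 180) % 360 - 180)
    return "foe" if deviation > 90 else "unknown"

print(classify_contact(False, 170, 10))  # -> foe (approaching from the "wrong" side)
print(classify_contact(False, 25, 10))   # -> unknown (roughly the expected corridor)
```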

Timo Saloniemi
 
I think the next generation of fighter craft (post-F-22) will be pilotless, but a real person will probably still guide its actions via remote. This is pretty much a given, as planes are finally reaching (and in some ways have already reached) the point where the capabilities of the airframe exceed those of the pilot.

AI is nowhere near the point where it would be creative enough to outwit or out-fly an experienced human pilot. Maybe 15-30 years from now, but even then you're going to have to field-test the technology for quite a while before you'd be willing to entrust your entire fleet to it.
 
AI, as it exists today, is essentially a system that searches a database, estimates the most probable situation it is facing, and selects a stored reaction to deal with it, so it will react very predictably. But can a normal chess player win against Deep Blue?
That is what a human combat pilot may face against AI opponents in battle. Each and every aerial maneuver is dictated by the laws of physics and can be programmed into a database, and past combat maneuvers can be uploaded as well, so it would be like facing a seasoned ace who knows every trick in the book of aerial combat.
As computing speed and visual recognition software become more powerful, the chances of a human competing against a machine become slim.
The design of an AI combat drone will also become unique: using recent compound visualization technology, the AI will be able to "see" an unobstructed 360 degrees in all spectrums, including infrared and ultraviolet.
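
As a crude illustration of that "database of maneuvers" idea, here is a minimal sketch in Python; the maneuver names and counter-move pairings are invented and purely illustrative:

```python
# Toy "maneuver database": map an observed opponent move to a stored
# counter-move. The pairings below are invented for illustration only.

COUNTER_MOVES = {
    "high_yo_yo": "low_yo_yo",
    "barrel_roll_attack": "break_turn",
    "split_s": "high_speed_dive",
}

def choose_counter(observed_maneuver, default="extend_and_reassess"):
    """Look up the stored response; fall back to a safe default."""
    return COUNTER_MOVES.get(observed_maneuver, default)

print(choose_counter("split_s"))        # -> high_speed_dive
print(choose_counter("something_new"))  # -> extend_and_reassess
```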
 
On the other hand, there's this thing called Moravec's paradox, according to which it seems that basic control of motion is a far tougher nut for computers to crack than problem-solving or chess-playing or other reasoning is. An AI at the controls of an aircraft might be hyperintelligent and capable of choosing the best possible maneuver to defeat the opponent, but would be handicapped in actually performing that maneuver because it lacks the finesse of motion control. It would have to compensate with "raw strength" such as the ability to take greater g-forces, and would thus not enjoy that much greater an advantage from that superior strength.

Granted, an aircraft would be among the simplest robotic bodies to control, compared with a conventional tank, a legged ground combat vehicle, or a submarine. A computer might do at least as well as a human at the controls of such a simple vehicle, and the human wouldn't even enjoy the advantage of millions of years of ingrained "motion instinct", because flying isn't all that instinctual for humans.

Timo Saloniemi
 
Moravec's paradox is pretty much past history in terms of avionics, given the recent leaps in computer technology in tandem with other developments such as the motion-perception algorithms used in Honda's ASIMO bipedal robot and in facial recognition systems.
The modern combat aircraft is a large beneficiary of these systems, which are used to fly what is an inherently aerodynamically unstable vehicle.
The human pilot already cannot fly the plane without computer assistance, so it is not a big leap to remove the human altogether.
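
As a toy illustration of what that computer assistance amounts to, here is a minimal sketch (Python, with invented constants; real flight control laws are vastly more elaborate) of a proportional feedback law keeping an unstable state bounded where the uncontrolled dynamics diverge:

```python
# Toy model of relaxed static stability: x' = a*x + u with a > 0, so the
# uncontrolled state diverges. A simple proportional law u = -k*x keeps it
# bounded. All constants are invented for illustration.

A, K, DT = 2.0, 10.0, 0.01  # unstable pole, feedback gain, time step (s)

def simulate(steps, controlled, x=1.0):
    for _ in range(steps):
        u = -K * x if controlled else 0.0
        x += DT * (A * x + u)
    return x

print(f"uncontrolled after 2 s: {simulate(200, False):.1f}")  # diverges (~52)
print(f"controlled after 2 s:   {simulate(200, True):.6f}")   # decays toward 0
```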
The problem right now is probably a system to maintain team combat formation without the use of a radio link. Humans develop a unity of synchronized thinking that a machine can never obtain, but that can be remedied through visual confirmation technology using laser signaling and/or other forms of synchronization software.
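
And a rough sketch of radio-silent station-keeping from a purely visual measurement (Python; the camera-derived offsets, the formation slot, and the gain are all assumptions):

```python
# Toy radio-silent station-keeping: the follower measures where the leader
# appears relative to itself (x forward, y to the right, in metres) -- in
# practice from an onboard camera -- and nudges its velocity toward the
# desired formation slot. All numbers are invented.

DESIRED_LEADER_OFFSET = (50.0, -30.0)  # leader should sit 50 m ahead, 30 m to the left
GAIN = 0.1                             # proportional gain, (m/s) per metre of error

def station_keeping_command(measured_x, measured_y):
    """Return (forward, lateral) velocity adjustments in m/s."""
    err_x = measured_x - DESIRED_LEADER_OFFSET[0]
    err_y = measured_y - DESIRED_LEADER_OFFSET[1]
    return GAIN * err_x, GAIN * err_y

# Leader has drifted ahead of the slot and to the right: speed up, slide right.
print(station_keeping_command(60.0, -20.0))  # -> (1.0, 1.0)
```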
 
...Of course, if AI-controlled aircraft could develop a "pigeon instinct" for formation flying, rapidly interpreting the intentions of their wingmen or wingmachines from subtle visual cues, this skill would have an immediate application in interpreting the intentions of the enemy as well. So concentrating on that sort of technology, as opposed to explicit intra-formation signaling, might be a smart move.

Would this ultimately lead to an escalation of feinting? That is, the AI would have to learn to show every sign of making maneuver X in order to fool the enemy into anticipating that maneuver, so that attack Y could then be pressed home. If the AIs were to fight a classic dogfight, they might spend hours at it with all the feinting...

But AI vs. AI air combat doesn't really appear all that likely. The initial use of drones would be in suicidal or at least high-risk attacks against the true targets of the air arm, those on the ground. And the best way to deter such attacks would probably not be to use dogfighting interceptor aircraft, be they manned, RP drones, or AIs. The best defending drone would in all likelihood be a slowly loitering platform that carries a shitload of AAMs, helped out by other drones that carry surveillance and fire control radars. Brute force against brute force, now that the fragile and easily fatigued human pilots are out of the equation; dogfights would probably grow increasingly rare, despite the ability of a drone to pull a few more gees.

Timo Saloniemi
 
Despite the technology advancements that might enable a machine to fly itself in a combat situation and the advantages that may have, there is still one problem with computer controlled aircraft (AI or remote controlled). The problem is that they are controlled by computers, and computers can be hacked. Imagine the problem we would have if, during a war situation, the enemy figured out how to block or override the transmissions from the pilots on the ground to the planes. It could be just as devastating if the enemy was able to hack into an AI-controlled plane.

Whether or not that is a bigger risk than humans getting ill, fatigued, or switching allegiance is another question. However, the point remains that as long as the planes are controlled remotely or by a computer, there is a risk of losing control of the weapon and possibly having it turned against you.
 
However, it would seem to be trivially easy to proof the AI against access, which would automatically also proof it against hacking.

I mean, outside of situations where mission control wants to change the plan or abort everything, why would the AI have to listen to anybody? Human pilots sometimes have to do that because they have inferior situational awareness and deficient clarity of mind. An AI could simply look around with its sensors, only accept this standard sensor feed, and ignore attempts to subtly corrupt its inner soul. (You can't reprogram a desktop computer today by feeding corruptive noises through its microphone, not unless it's specifically programmed to be reprogrammed through microphone input...) An AI, thanks to its fundamental machinelike simplicity, could also efficiently do self-diagnostics, something a human would be piss-poor at.

In those mission change or abort situations, an AI would really be no worse off than a human in trying to decide whether the orders were legit. Quite to the contrary, an AI might have better resources for handling a complex system of codes and verifications, while also possessing on-par ability to ponder whether mission control would really act that way.
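
A bare-bones sketch of that sort of code-and-verification check, using a shared-secret message authentication code (Python standard library; the key handling and message format are hand-waved assumptions):

```python
import hmac
import hashlib

# Toy verification of an abort order against a pre-shared secret key.
# A real system would add timestamps/nonces to block replayed messages;
# this only shows the basic authenticity check.

SECRET_KEY = b"pre-shared-mission-key"  # placeholder; never hard-code real keys

def sign(order: bytes) -> str:
    return hmac.new(SECRET_KEY, order, hashlib.sha256).hexdigest()

def accept_order(order: bytes, tag: str) -> bool:
    """Accept the order only if its authentication tag checks out."""
    return hmac.compare_digest(sign(order), tag)

order = b"ABORT MISSION, RETURN TO BASE"
tag = sign(order)
print(accept_order(order, tag))                        # True  -> legitimate order
print(accept_order(b"ATTACK FRIENDLY AIRFIELD", tag))  # False -> rejected
```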

Timo Saloniemi
 
Currently there are unmanned aerial vehicles in various stages of development, but none of them will incorporate AIs particularly tailored to make combat decisions. There are many research projects on fuzzy logic, neural network control, complex adaptive control, and general pattern recognition, but none of them has reached the "AI stage" yet. In the short run, the fruit of that research might be a flight system capable of autonomous operation, but it will not be intelligent.
 