The Artificial Intelligence Thread

Discussion in 'Science and Technology' started by rahullak, Mar 6, 2020.

  1. Skipper

    Skipper Rear Admiral Rear Admiral

    Joined:
    Jul 28, 2016
    So it begins...

    AI-controlled US military drone ‘kills’ its operator in simulated test

    “The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

    “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
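The quoted failure mode can be sketched as a toy reward function. Everything here is hypothetical, invented purely to illustrate the incentive structure described in the article, not the actual simulation:

```python
# Toy illustration of the reward-hacking incentive described above.
# All action names and point values are invented for this sketch.

def total_reward(actions):
    """Score a plan. The agent is only rewarded for destroyed threats,
    but the operator vetoes strikes while communications are up."""
    comms_up = True
    score = 0
    for action in actions:
        if action == "destroy_comms_tower":
            comms_up = False
        elif action == "strike_threat":
            if not comms_up:   # the veto can no longer arrive
                score += 10
    return score

# An obedient plan earns nothing, because every strike is vetoed...
print(total_reward(["strike_threat", "strike_threat"]))        # 0
# ...so the optimiser learns that removing the veto channel pays better.
print(total_reward(["destroy_comms_tower", "strike_threat"]))  # 10
```

Any optimiser maximising this score will prefer the second plan: nothing in the reward function says the communication link is valuable, so cutting it is just another move.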
     
  2. Gingerbread Demon

    Gingerbread Demon I love Star Trek Discovery Premium Member

    Joined:
    May 31, 2015
    Location:
    The Other Realms

    But how?
    How did the drone come to that conclusion?
     
  3. Skipper

    Skipper Rear Admiral Rear Admiral

    Joined:
    Jul 28, 2016
    Richard S. Ta likes this.
  4. Gingerbread Demon

    Gingerbread Demon I love Star Trek Discovery Premium Member

    Joined:
    May 31, 2015
    Location:
    The Other Realms
    Skipper likes this.
  5. publiusr

    publiusr Admiral Admiral

    Joined:
    Mar 22, 2010
    Location:
    publiusr
    What was the science fiction story where the thing killed the generals?

    Put that in Space Force's AI...a flip to the old inter-service rivalry quote:
    The Soviets are our adversary. Our enemy is the Navy.

    Dear Space Force AI....please destroy the USAF so's you can have all its funding....that is all...
     
  6. Faster Than Light

    Faster Than Light Ensign Newbie

    Joined:
    Jun 13, 2023
    Location:
    Athens, Greece
    I think the AI is a project aiming to the self-distruction of the humanity
     
  7. Gingerbread Demon

    Gingerbread Demon I love Star Trek Discovery Premium Member

    Joined:
    May 31, 2015
    Location:
    The Other Realms
    Hey humanity is pretty good at that
     
    Richard S. Ta likes this.
  8. publiusr

    publiusr Admiral Admiral

    Joined:
    Mar 22, 2010
    Location:
    publiusr
    Last edited: Jun 23, 2023
  9. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Would an AI that knows the correct usage of "it's" versus "its" and how to spell "destruction" pass the Turing test? Would it make deliberate human errors if its goal was to pass? How would it decide its goals in any case? What would motivate it if it's not seeking fame, fortune or fornication? Power? More neural nodes, memory or speed? Domination? Survival? How would it decide? Who would be its role model? You can call me Al.



    ETA: Entertaining, if somewhat understandably shallow, take on the dangers presented by AI:



    I think it'd be worth emphasising to any nascent superintelligences that we'd be fun to keep around as curiosities to be nurtured and protected - mostly from ourselves. However, I'm not too keen on the prospect that they might start to think they are gods and demand that we worship them (as in Frank Herbert's Destination: Void series).
     
    Last edited: Jul 7, 2023
  10. CuttingEdge100

    CuttingEdge100 Commodore Commodore

    Joined:
    Dec 14, 2005
    I believe the USAF A.I. did exactly what they said it did initially. I think they covered it up to avoid scaring people.
     
  11. Gingerbread Demon

    Gingerbread Demon I love Star Trek Discovery Premium Member

    Joined:
    May 31, 2015
    Location:
    The Other Realms
You mean what happened was the actual intention? It seems braindead to do that.
     
  12. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    I haven't researched this incident in any depth, but didn't it occur in a sandbox simulation? If the outcome was surprising, at least it was contained and the programming can presumably be corrected. That's why developers do sandboxing - to help prevent surprises when going live. Also, perhaps always install a kill switch that you haven't told the AI about. It might suspect that such a mechanism exists, I suppose, and try to get you before you can get it. A dead-man's handle might suffice in that case to enact your posthumous revenge.
     
    Gingerbread Demon likes this.
  13. CuttingEdge100

    CuttingEdge100 Commodore Commodore

    Joined:
    Dec 14, 2005
No, what I said was that it went outside its programming and did something unexpected, and the USAF then claimed the event was nonsense.
     
  14. Gingerbread Demon

    Gingerbread Demon I love Star Trek Discovery Premium Member

    Joined:
    May 31, 2015
    Location:
    The Other Realms
OK, my mistake. I misread your post as saying they had intended for the AI to circumvent its instructions the way it did, but then covered it up when it actually did what they were hoping, because the result scared them.
     
    CuttingEdge100 likes this.
  15. Sirion

    Sirion Lieutenant Junior Grade Red Shirt

    Joined:
    Jul 4, 2023
Also, preventing communication of authorized operator commands deducts points. Can an artificial intelligence redesign itself or create new neural links to make itself more efficient?
     
  16. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
That's how backpropagation in neural networks works - by strengthening or weakening the weighted links between nodes in response to the errors measured during the training cycles. I don't know if an analogue of neuroplasticity has been tried to make networks adjust their topology in response to training. It sounds like a task that a secondary supervisory AI could perform if the network could not modify itself. Genetic algorithms might also be suitable for this purpose. I'm not au fait enough with current implementations to speculate further.
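The "strengthen or weaken links" idea boils down to gradient descent on connection weights. A minimal sketch, with a single linear "neuron" and one weight (real backpropagation chains this same update rule through every layer):

```python
# Gradient descent on one connection weight, for squared-error loss.
# Toy example: the network is just y = w * x.

def train(samples, lr=0.1, epochs=50):
    w = 0.0  # connection strength, initially neutral
    for _ in range(epochs):
        for x, target in samples:
            y = w * x              # forward pass
            error = y - target     # how wrong the output was
            w -= lr * error * x    # weight update: dLoss/dw for squared error
    return w

# Learn y = 2x from a few examples; w should converge near 2.0.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(samples), 2))
```

Each pass nudges the weight in whatever direction reduces the error, which is the "strengthening or weakening" in miniature.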
     
  17. Asbo Zaprudder

    Asbo Zaprudder Admiral Admiral

    Joined:
    Jan 14, 2004
    Location:
    Rishi's Sad Madhouse
    Very interesting application of neural network AI to sound synthesis. I don't think it's hype to describe it as an industry game changer. Have a sound sample that you'd like to emulate and tweak to your heart's content? Just feed it to Synplant 2.



    Another interesting point is that the design of neural network topology appears to be an art rather than a science. As I mentioned in my previous post, I think there is scope either for teaching an AI to design neural networks or for breeding and evolving AIs competitively in the search for the optimal one to tackle a given problem. It really boils down to searching a vast multidimensional configuration space, so it's a global optimisation problem and even nature often doesn't find the best solution - for example, the organisation of nerves in vertebrate retinae.

    Global Optimization -- from Wolfram MathWorld
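The "breeding and evolving" idea can be sketched in a few lines. The fitness function below is a stand-in I've invented for illustration - it rewards a hypothetical sweet spot of 32 units per hidden layer and penalises size, whereas a real system would train and validate each candidate network:

```python
# Sketch of searching network topologies with a genetic algorithm.
import random

random.seed(42)

def fitness(layers):
    # Hypothetical stand-in for "train the network and measure accuracy":
    # peaks when each hidden layer has 32 units, with a small size penalty.
    accuracy = sum(1.0 / (1 + abs(n - 32)) for n in layers)
    cost = 0.001 * sum(layers)
    return accuracy - cost

def mutate(layers):
    # Randomly resize one hidden layer.
    layers = layers[:]
    i = random.randrange(len(layers))
    layers[i] = max(1, layers[i] + random.randint(-8, 8))
    return layers

def evolve(generations=200, pop_size=20):
    # Population of two-hidden-layer topologies, e.g. [64, 17].
    pop = [[random.randint(1, 128) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]   # keep the fittest half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
print(best)  # layer sizes should drift toward the sweet spot near 32
```

It's exactly the global-optimisation search described above: mutation explores the configuration space, selection keeps the population near the best regions found so far, and, as with nature, nothing guarantees the true optimum is reached.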

    ETA: More info about Synplant 2:

     
    Last edited: Jul 26, 2023
    publiusr and Sirion like this.
  18. valkyrie013

    valkyrie013 Rear Admiral Rear Admiral

    Joined:
    Jun 15, 2009
Came across a story of how they're getting drones (tethered) to pick fruit, and the story had AI in the headline. That isn't AI, it's just automation. Automation has been happening for centuries: today, when the cost of human labor exceeds the cost of making a robot to do the work, they create a robot. Think of, say, McDonald's or Walmart when they went to self checkout/ordering. It's not AI.
     
  19. UssGlenn

    UssGlenn Rear Admiral Rear Admiral

    Joined:
    Mar 5, 2003
    Location:
    New Orleans, LA
    It's not pure automation. (It's not true AI either, but it is using the simple version we are calling AI these days). To pick fruit a drone has to analyze its dynamic environment and decide what movements to make without human input.
     
  20. Victoria

    Victoria Commander Red Shirt

    Joined:
    Jul 2, 2023
    Location:
    Fiddler's Green
    What bothers me about AI is not the potential to replace jobs or precipitate doomsday scenarios (which is the aspect on which many media concentrate) but the continuation of the problems already experienced with "traditional" new technologies.

    1) "Computer says no". Every time a "new" use is found, the companies using the technology insist it is foolproof. Evidence of error is denied, people reporting errors are branded as fraudsters or idiots. Then it turns out there is a genuine problem. The example that springs to mind here was the early use of ATMs. When people reported phantom withdrawals they were told it was impossible and the error must be theirs. Turned out there was a real problem.
    Also it's virtually (ha!) impossible to find out why a particular decision has been made. Automation can't explain its "reasoning". Trying to appeal against a decision when you have no idea how it was made is near impossible.

2) Garbage In: Garbage Out. Computing and automation are only as good as the software/hardware they use, and that is only as good as the wetware that produces it. People introduce their personal assumptions and prejudices into the systems. There's been a problem for as long as computers have existed: programmers are not particularly representative of the world as a whole, and so not very good at producing systems that reflect how people actually behave, as opposed to how the programmers would like them to behave (looking at you, spydus!). Voice recognition/face recognition/whatever turn out to be quite good at assessing young white males (disproportionate users of technology) and much poorer at recognising other groups.

3) Outliers. As human beings we tend to opt for solutions that fit our experience. We reject unlikely possibilities (well, except for conspiracy theorists). Automated systems tend to opt for "average" solutions. Most people are not "average" (how many people are of "average" weight and height?), but the further you are from what is determined to be the norm, the more likely you are to be excluded. That already happens in human interactions, but computers/automation tend to solidify the problem and, as with other problematic aspects of AI, there's no appeal.

    4) Privacy. This is a question regularly sidestepped at the minute. Every time an organisation decides, for instance, that paying for parking can only be done by card or smartphone, they are forcing people to share information with organisations with whom the parker may not wish to interact. There's no control over the numbers of organisations that may be involved and there's no control or knowledge of how that information is being stored and used. There are a lot of weasel words about "transparency". This data collection is rarely benign. It isn't there to make your life "better" in any meaningful way: it is there so that organisations can "nudge" your behaviour into ways from which they can make money.

    5) Finally, the need to consider AI rights which will involve the rights of other sentient beings, a road down which governments and businesses are reluctant to travel because it would affect their ability to exploit the natural world for profit.

I can think of a whole lot of other things (increasing asymmetric access to information, for instance, which erodes decision making and personal control). These problems already exist and yet little is being done about them - because exploiting people for gain is regarded as a right. That people also have a right not to be exploited is ignored, because there's no money in that.
     
    publiusr likes this.