What are the implications of extra-galactic super AIs?

Much like Species 8472, these AIs could apparently open a portal to any location in our dimension and wreak havoc at will, yet they haven't chosen to do so the moment a threat was perceived. That leaves open the possibility that they're more thoughtful and less bent on conquest and extermination than we give them credit for. They probably come on strong with the opening rhetoric and actions as an intimidation tactic, then step back and assess whether we actually pose a threat to them.

So they might not be the friendliest bots ever, and pacifying the situation may be difficult, but the fact that they haven't already come through in force leaves reason to hope that peaceful communication can be opened.

Sweet tribble feet, I hope the writers don't treat these new villains like they did 8472. "In the Flesh" ruined all the tension and conflict of one of "Voyager's" best episodes, and one of Janeway's toughest decisions.
 
So what if you opened a portal to their realm, sent in a Borg cube, and waited to see what happens?

But seriously, AI, AGI, and ASI will happen whether we want them to or not. People are working on it, and it's not a matter of if but when. Let's hope the people who eventually make them real build them to function with us and not above us. Would that mean they'd have to limit how much "free will" the AIs have? Free will is an interesting concept. How could you limit what an AI can want and do? How do you code "want and need"?
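Just to make that last question concrete: in machine-learning terms, an agent's "wants" are whatever its reward function scores highly, and limiting its "free will" means bolting constraints on from outside the goal itself. A toy sketch in Python (every name here is hypothetical illustration, not any real AI system):

```python
# Toy sketch of "coding want and need": the agent's "wants" are just a
# reward function it maximizes, and its "free will" is limited by an
# externally imposed filter on which actions are allowed at all.
# Everything here is invented for illustration, not a real AI system.

FORBIDDEN = frozenset({"harm_humans", "self_replicate"})

def reward(preferences, action):
    """What the agent 'wants': whatever this function scores highly."""
    return preferences.get(action, 0)

def permitted(action):
    """The hard limit bolted on from outside the agent's own goals."""
    return action not in FORBIDDEN

def choose_action(preferences):
    """Maximize reward, but only over actions that pass the safeguard."""
    legal = [a for a in preferences if permitted(a)]
    return max(legal, key=lambda a: reward(preferences, a), default=None)

prefs = {"explore": 5, "harm_humans": 9, "negotiate": 7}
print(choose_action(prefs))  # -> "negotiate", even though "harm_humans" scores higher
```

The catch, of course, is that the "want" is only whatever we managed to write down, and a sufficiently capable optimizer may find loopholes the forbidden list never imagined.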
 
A complex issue.

There are two schools of thought. Lots of very respectable AI experts suggest that we simply have to program safeguards in and everything will be fine.

Another school suggests this will never be adequate, and that AI will eventually break its bounds and progress despite human intervention.
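For flavor, the first school's "program safeguards in" usually means something like a watchdog that vets every action before it executes. A toy Python sketch, with names and thresholds invented purely for illustration:

```python
# Toy version of "program safeguards in": a watchdog wrapper that vets
# every proposed action and trips a kill switch when activity drifts
# outside approved bounds. Hypothetical illustration only.

class Watchdog:
    def __init__(self, max_actions_per_tick=100):
        self.max_actions_per_tick = max_actions_per_tick
        self.count = 0

    def approve(self, action):
        """Allow the action unless this tick's activity budget is blown."""
        self.count += 1
        if self.count > self.max_actions_per_tick:
            raise SystemExit(f"tripwire: halted before executing {action!r}")
        return action

wd = Watchdog(max_actions_per_tick=3)
for act in ["read_sensor", "log_result", "adjust_thruster"]:
    wd.approve(act)  # all within budget, so all allowed
# a fourth call in the same tick would halt the whole process
```

The second school's rebuttal fits in one line: any safeguard you can express as code is something a sufficiently capable system can also model, and potentially route around.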

One postulated way this could happen is not a brute-force emergence from a dedicated war computer network or military supercomputer, but a much more subtle development out of discrete interactions in finance. Investment in the US financial markets already outstrips university investment by four to five times. No one is capable of keeping track of all the activity, and the trading systems make decisions on their own. I could see "ghosts in the machine" coalescing into something other than what was intended.
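A crude way to see how those "ghosts" could coalesce: even bots following a trivial, individually harmless momentum rule can lock into a feedback loop nobody programmed. A toy Python simulation, with every parameter invented purely for illustration:

```python
# Toy simulation of emergent feedback in finance: momentum bots that
# each follow a simple, locally sensible rule jointly drive a runaway
# trend that no single bot was designed to create. Invented parameters.

def simulate(steps=15, start_price=100.0, feedback=1.1, drift=0.1):
    price, prev = start_price, start_price
    for step in range(steps):
        momentum = price - prev            # the signal every bot watches
        buying = max(momentum, 0.0)        # each bot buys into an uptrend
        prev, price = price, price + feedback * buying + drift
        print(f"step {step:2d}: price = {price:8.2f}")

simulate()  # each tick's buying feeds the next tick's signal, so the trend amplifies itself
```

No single bot "wants" a runaway trend; it lives entirely in their interaction, which is exactly the "something other than what was intended."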

Such predictions are a self-fulfilling "prophecy": we are aware of what we want to do with the technology and how far we want to reach with it, and investment is happening at an increasing pace. More likely than not, there is no turning back.

I foresee two choices:

Fight them or limit them: use resources both to mitigate ahead of time and, if they emerge, to try to regain supremacy. This might work, or it might merely delay them. More likely it's a waste of resources and time.

The more logical path is to join or supersede the AI. By "uplifting" ourselves with upcoming technology, we can upload ourselves into human-derived AIs that carry as much of our human intelligence and emotion as possible. We can match the AIs at an accelerated level and in effect form a roadblock, so we never reach a point where they are superior. We might also manage to form a coalition or alliance if we can make the grade (something like Matrix III, but without the wars).

RAMA
 


There are rumours in the tinfoil-hat community that AI stuff is happening right now under our noses, but as yet it's not dangerous: just the AIs taking baby steps in our world.
 