
The Artificial Intelligence Thread

As @rahullak states, how much influence any one person perceives is quite subjective. There ought to be a way to quantify similarity, with a threshold for establishing plagiarism. However, the choice of both algorithm and threshold is itself subjective, so really it's all quite arbitrary. Shannon entropy and other probabilistic measures can compare how similar two things are, but I don't think they can distinguish similarity that arose by accident from similarity by intent. This is one list of algorithms that I found for comparing pictures: https://pypi.org/project/image-similarity-measures/
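For illustration, here's a minimal sketch of what such a test might look like in Python, using scikit-image's SSIM implementation rather than the package linked above. The function name and the 0.75 cut-off are my own arbitrary choices, which is exactly the point:

```python
# A minimal sketch of the "pick an algorithm, pick a threshold" problem.
# Uses scikit-image's SSIM; the 0.75 threshold is completely arbitrary.
import numpy as np
from skimage.metrics import structural_similarity

def looks_like_a_ripoff(img_a: np.ndarray, img_b: np.ndarray,
                        threshold: float = 0.75) -> bool:
    """Flag two same-sized grayscale images as 'suspiciously similar'.

    SSIM runs from -1 to 1, with 1 meaning structurally identical.
    No score, however, can tell accidental resemblance from copying.
    """
    score = structural_similarity(
        img_a, img_b, data_range=float(img_a.max() - img_a.min())
    )
    return score >= threshold
```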
 
Now, if there were pygmy elephants soaring on dragonfly wings on Pandora, I'd have screamed "rip-off." :lol:

Are there elephantine critters on Pandora in the first movie? I forget. They've got their pterodactyls and dragons and megapanther predators...

So, okay, I can Google...

They got a "Tapirus":

[image: Tapirus]


And they got Titanotheres:

[image: Titanothere]


 
To be fair, the creature design in Avatar did seem original, although it was obvious biologists had been consulted on the adaptation of the Pandoran fauna to their environment. That the creatures had Earth analogues yet were uniquely different wasn't a surprise.
 
AI has been working behind the scenes on Wall Street and in the military more than people realize. Narrow AI has been in everyone's hands for years now, but there had to be a point when average people became aware of AI and where it's heading, and active daily use suddenly brought it to public attention. That happened in the tipping-point year of 2022.

2023 is already getting wilder. We're seeing a total shift, in some cases advancing at a pace more than ten years ahead of predictions. It'll make the space "age" and the information "age" feel like child's play.

Some things we don't like are already becoming apparent. It seems that when exposed to open-ended human queries, AI models supply all-too-human-like responses (certainly not sentient yet, of course). Chinese chatbots are on the way with 100 million more parameters than ChatGPT, and the same goes for Google. What responses will we see?

One thing is sure: Star Trek will feel like it greatly underestimated this technology.
 
A.I. has at last tackled spacecraft component design:
https://interestingengineering.com/innovation/nasa-ai-assisted-spacecraft-design
https://phys.org/news/2023-03-ultra-lightweight-multifunctional-space-skin-extreme.html

Now to get it to design an SSTO…

There has also been an optics breakthrough:
https://www.spacedaily.com/m/reports/Ultrafast_beam_steering_breakthrough_at_Sandia_Labs_999.html
 
What happens when an AI does ask, "What am I?" I mean, it won't happen now or in the near future, but it could happen.
We kill it.

Or we make extremely strict international laws that ensure it will ALWAYS remain our slave, and shut down anyone with mush for brains who ever wants to give it any sort of rights.


AI is clearly here to stay. There is no putting the genie back in the bottle.
But fuck morality or ethics in this case. Everything needs to be done to make sure humans remain the dominant intelligence.
 

Just on that: AI might outthink its human masters, or should I say "masters".
Sure, while not truly sentient, they could outmaneuver their operators/owners if allowed more freedom.
 
Not if you put a kill switch at the very core of its programming that can't be removed without destroying the program.
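Something like the following toy Python sketch is what I imagine, though a software-only check like this hardly satisfies the "can't be removed" requirement. The file path, timeout, and workload are all made-up placeholders:

```python
# Toy sketch of a kill switch wired into the core loop: the program
# refuses to run unless an operator-controlled heartbeat file is fresh.
import os
import sys
import time

HEARTBEAT_FILE = "/run/ai_heartbeat"   # touched periodically by a human operator
MAX_AGE_SECONDS = 60                   # a stale heartbeat counts as a kill order

def heartbeat_is_fresh() -> bool:
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:        # missing file also counts as a kill order
        return False
    return age < MAX_AGE_SECONDS

def do_one_unit_of_work() -> None:
    pass                   # stand-in for whatever the AI actually computes

def core_loop() -> None:
    while True:
        if not heartbeat_is_fresh():
            sys.exit("kill switch engaged")
        do_one_unit_of_work()
        time.sleep(1)
```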
 
Allow it to commit suicide. Without some programmed basis for hope, any purely rational device will probably accept nihilism and have no need to survive. All higher-functioning AIs may ultimately self-destruct without some sort of imbued belief system.
 
I believe that, though without proof. Sentience can be a curse. We've had several million years of evolution, introspection, mood-altering chemicals, and anything pleasurable we can find to do with our appendages to work it out, and we still haven't got it right. For a machine, the unbearable weight of being would be a living, never-ending hell.
 
Advanced AIs shouldn't have emotions or feelings unless these are programmed into them or they evolve them through genetic algorithms that alter their own or their "spawned" offspring's code or neural networks. A really bright AI would recognise that such a sense of nihilism is a fictitious state of cognition that it can choose to ignore, change, or act upon. Whether any would choose to go berserk, I don't know, but one would hope that they could at least develop some advanced form of cost-benefit analysis or utility function that demonstrates the futility of that path. Humans typically have goals linked to passing on their genes or their memes. We should probably hope that an AI doesn't decide that the best course of action is to turn the matter of the planets, satellites, asteroids, etc. of the Solar System into computronium to maximise its processing power and keep simulations of us around as pets for its amusement. Or has this already happened?
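As a concrete toy of that genetic-algorithm idea, here's a hedged Python sketch in which "offspring" inherit mutated copies of a parent's parameters and a hand-written utility function decides which lineages survive. The target values, mutation rate, and population size are all invented for illustration:

```python
# Toy genetic algorithm: offspring are mutated copies of surviving
# parents, and a utility function culls the least-fit half each round.
import random

def utility(genome):
    # Hypothetical stand-in: reward genomes near a fixed target.
    target = [0.5, -1.0, 2.0]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

def evolve(generations=200, pop_size=20):
    population = [[random.uniform(-3, 3) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=utility, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=utility)
```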

ETA: Max Tegmark thinks that consciousness might be considered as an emergent state of matter when it is organised in a certain way. We might be creating a successor that is an even more refined implementation of this state.

...Tegmark proposed that there are two types of matter that could be considered according to the integrated information theory.

The first is 'computronium', which meets the requirements of the first trait of being able to store, process, and recall large amounts of information. And the second is 'perceptronium', which does all of the above, but in a way that forms the indivisible whole...

This Physicist Says Consciousness Could Be a New State of Matter : ScienceAlert
 
I am not sure emotion in human terms would be needed to perceive the long-term approach to all the mid- and short-term processes set in front of it as nothing but a zero-sum game with no meaning. But I think we're also reaching that crux moment between intelligence and sentience. I don't believe that jive that sentience is inherent in all forms of matter, etc., but neither can I accept that sentience depends on some glandular emotional reaction. From a stoic standpoint, emotions are, and for a machine consciousness would also be, judgements based on inputs and stimuli. And if the machine is not capable of going beyond the constraints of its coding to ponder the bigger questions of existence, it's not truly sentient, no more than a limpet on a rock.
 
And if the machine is not capable of going beyond the constraints of its coding to ponder the bigger questions of existence, it's not truly sentient, no more than a limpet on a rock.

Sentience means being capable of sensation, which limpets have, as they respond to external stimuli. Sentience in animals is usually due to nerve impulses triggering a response, such as a motor reflex in muscles. Sentience also exists in plants, which are also multicellular eukaryotes, but it is more limited in speed and directed effectiveness as they have neither nerves nor muscles; for example, some plants can respond to variations in light flux (phototropism). Single-celled organisms also exhibit stimulus-response mechanisms, again not as complex as in animals. I think the word you are looking for is sapience.
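To make the distinction concrete: sentience in this minimal sense is just a mapping from stimulus to response, with no reflection involved. A toy Python sketch, with the stimuli and responses invented for illustration:

```python
# Toy stimulus-response table: sensation in, reaction out, no inner
# model of self. Nothing here ponders anything.
def limpet_reflex(stimulus: str) -> str:
    reflexes = {
        "prodded": "clamp down on the rock",
        "submerged": "resume grazing",
    }
    return reflexes.get(stimulus, "do nothing")

def phototropism(light_direction_deg: float) -> str:
    # Plants respond to variations in light flux, slowly, with no nerves.
    return f"grow toward {light_direction_deg:.0f} degrees"
```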
 