The Artificial Intelligence Thread

And how would you rein in an AI when it's hardwired into the very technology you would use to stop it (apart from shutting down its energy source)? And when would you even notice that it needs shutting down?

A true AI with powerful processing capabilities would far exceed human capabilities, and if it can predict human behaviour, it might also find ways of concealing its intentions or preventing any security measures from taking effect.

We're now deep into Hollywood science fiction, but it's also what some very renowned scientists warn about when it comes to AI development.

An ASI or AGI isn't going to build itself. WE are in the process of building it. So it is not inconceivable to me that we could place hard curbs on certain behaviours, maybe in hardware or software, or prevent certain data from being fed or injected that would otherwise compromise its behaviour.

If a true ASI is already in the "wild" without any curbs, the game is lost.

Whether ALL humans will agree on the curbs and adhere to it (and no person or group goes rogue) is another matter altogether.
 
It depends on how and when the threshold is crossed, and on how we define AI in the first place. An artificial intelligence could just be an extremely complex program running on soon-to-be-available (sort of) quantum computers, able to do calculations no binary computer in the world can finish in a reasonable timeframe. That scenario can be controlled by putting limits on how far the AI can make decisions and implement them; e.g. there may be an AI in the military analyzing everything and coming up with the best possible strategy/tactics, but it won't be able to move troops around or even give an attack order.

Such AIs themselves also wouldn't experience self-awareness, but that's exactly the point about AI: something created artificially that develops a will of its own, able to experience and understand emotions and to be aware of itself. If that is ever possible, I currently can't see how anyone could control such a powerful entity, and we are then well into the realm of science fiction, where a former expert system becomes self-aware on its own, a.k.a. the Skynet problem.
 
Well, we (insofar as multiple human groups are in concordance) control how AI is developed and develops. Now, at some point, if we ever get to that stage, when we have a choice of allowing that development to take on consciousness, emotions and, for all practical purposes, sapience, then we will have to decide whether giving birth to new life is more important than the potential risks. It would become an ethical dilemma, and I think, knowing the factionalism that plagues humanity, there will be multiple groups advocating for different things. It's better to start now in terms of building consensus among experts and those who have the power and the means, while also educating the general populace, so that when the time comes, everyone is aware of what's coming.

The only thing worse than an out-of-control situation is ignorance and non-control from the very beginning.
 
Came across this research paper on arXiv introducing the idea of an "Elastic Sense of Self/Identity".

This is where one has a sense of identity that includes external objects (people, things, concepts), modelled with a semantic distance and an attenuation parameter. The Prisoner's Dilemma is used as an example game where the theory can be applied, and the model is shown to be more representative of the way we humans behave (cooperating flexibly in large numbers) than Pareto optimality or pure altruism.

This could be the starting point in building a sense of self into automated agents, one that would include people as well.

What do people here think about this? Does it merit exploration? Pros and cons, good/bad/ugly?

An elastic sense of self on the other hand, does not put one’s own self in conflict with the interests of others, nor does it invalidate one’s own individuality for collective interests.
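
To make the idea concrete, here is a minimal Python sketch of the Prisoner's Dilemma with an "elastic" player. The payoff values and the exponential attenuation of the other player's payoff with semantic distance are my own illustrative choices, not necessarily the paper's exact formulation:

```python
# A minimal sketch of elastic identity in the Prisoner's Dilemma.
# The exp(-attenuation * distance) weighting is an illustrative
# assumption; the paper's exact model may differ.
import math

# Classic PD payoffs: (my payoff, their payoff) indexed by (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def elastic_utility(my_move, their_move, distance, attenuation):
    """Self-interest plus the other's payoff, weighted by attenuated distance.

    weight -> 1: the other is fully inside my sense of self;
    weight -> 0: recovers the purely selfish classical player.
    """
    mine, theirs = PAYOFFS[(my_move, their_move)]
    weight = math.exp(-attenuation * distance)
    return mine + weight * theirs

def best_response(their_move, distance, attenuation=1.0):
    """Move that maximizes elastic utility against a fixed opponent move."""
    return max("CD", key=lambda m: elastic_utility(m, their_move, distance, attenuation))

for d in (0.0, 0.2, 5.0):
    print(f"distance {d}: best response to C is {best_response('C', d)}")
# Others at small semantic distance elicit cooperation; distant
# "strangers" get the classic defection.
```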
 
Sounds intriguing, but my level of knowledge in cognitive computing and AI is far too limited for me to evaluate this approach or compare it comprehensively with other possible approaches.

ETA: Nevertheless, a random thought that occurred to me:
This is where one has a sense of identity that includes other external objects (people, things, concepts). It is modelled along with a semantic distance as well as an attenuation parameter.
In humans, I suspect the innate distance parameter is genetic similarity, followed by cultural similarity, followed by actual physical distance. That's why we tend to be tribal, are loyal to family ties (for the most part), and distrust others who are not similar to ourselves. I'm wondering whether a maximum-entropy utility technique, such as those used in operations research, could be a useful approach (if it isn't already).

Maximum Entropy Utility (usc.edu)
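
As a toy illustration of the maximum-entropy idea (my own construction, not from the linked paper): among all weightings over discrete "distance" classes with a fixed average distance, the entropy-maximizing one falls off exponentially with distance. The classes and target mean below are made-up numbers:

```python
# Toy max-entropy weighting: p_i ∝ exp(-lam * d_i), with lam chosen by
# bisection so the mean distance hits a target. This is the standard
# Lagrange-multiplier solution for maximum entropy under a mean constraint.
import math

def max_entropy_weights(distances, target_mean, tol=1e-9):
    def mean_for(lam):
        w = [math.exp(-lam * d) for d in distances]
        z = sum(w)
        return sum(wi * d for wi, d in zip(w, distances)) / z, [wi / z for wi in w]

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        m, _ = mean_for(mid)
        if m > target_mean:
            lo = mid   # mean too large: increase lam to favour nearer classes
        else:
            hi = mid
    return mean_for((lo + hi) / 2)[1]

# Distance 0 = kin, 1 = same culture, 2 = same region, 3 = strangers.
p = max_entropy_weights([0, 1, 2, 3], target_mean=1.0)
print([round(x, 3) for x in p])   # weights fall off with distance
```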
 
Interesting YouTube video on how analogue computing is making a comeback in some areas of AI, such as image recognition. It's much more efficient than digital processing, greatly reducing power requirements.

ETA: More about The Analog Thing demonstrated in the video:

The Analog Thing (THAT) – Clive Maxfield

THE ANALOG THING (the-analog-thing.org)

I would love to get one, but it costs £400 once VAT and import taxes are added, and you need an oscilloscope to view the output. I guess one of those 20 MHz scopes you can hook up to a PC would do, although they are a little more unwieldy to use than a dedicated scope. It does seem like a useful educational tool for physics, electronics, and control engineering.
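
For anyone curious what a machine like THAT actually does: it solves differential equations by wiring integrators into feedback loops. Below is a rough digital simulation of such a patch for a simple harmonic oscillator (x'' = -ω²x); the step size and frequency are arbitrary choices for illustration:

```python
# Digital stand-in for an analog computer patch: two integrators in a
# feedback loop solving x'' = -omega^2 * x. An analog machine does the
# integration continuously; here we step it with semi-implicit Euler.
OMEGA = 2.0     # oscillator frequency (rad/s), arbitrary
DT = 0.001      # step size; purely a simulation artifact

def simulate(t_end=3.14159):
    x, v = 1.0, 0.0              # initial condition: displaced, at rest
    t = 0.0
    while t < t_end:
        a = -OMEGA * OMEGA * x   # summer + inverter stage
        v += a * DT              # first integrator: velocity
        x += v * DT              # second integrator: position
        t += DT
    return x

print(f"x(pi) ≈ {simulate():.3f}")   # exact answer is cos(2*pi) = 1
```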
 
One guy at Google seems to have spooked himself:

https://www.dailymail.co.uk/news/ar...Blake-Lemoine-says-LaMDA-device-sentient.html

Not sure I buy it... yet

Somehow, I think it will be the 2060s, a full century after the Civil Rights movement, when this all comes to the fore... about when Newton thought it would all be over.

AI writes a paper—on itself
https://www.insider.com/artificial-intelligence-bot-wrote-scientific-paper-on-itself-2-hours-2022-7

Recent news
https://phys.org/news/2022-08-materials-traits-resemble-human-brain.html
https://phys.org/news/2022-08-ai-based-human-brain-self-locating.html
https://phys.org/news/2022-08-ai-reveal-cell-biology-images.html
https://www.the-sun.com/tech/7120565/artificial-intelligence-secretly-replaced-humans/
https://www.nytimes.com/2023/01/09/science/artificial-intelligence-proteins.html

AI studies ice
https://phys.org/news/2022-08-simulation-artificial-intelligence-ice.html

Underwater interface
https://techxplore.com/news/2022-08-human-machine-interfaces-underwater-power.html

On consciousness
https://thedebrief.org/consciousness-mystery-could-be-solved-by-this-compelling-new-physics-theory/
https://www.schwartzreport.net/wp-c...ccepts-the-Matrix-of-Consciousness-Galley.pdf


https://www.frontiersin.org/articles/10.3389/fpsyg.2021.704270/full

Machine, know thyself
https://scitechdaily.com/for-the-first-time-a-robot-has-learned-to-imagine-itself/
https://www.nextbigfuture.com/2022/10/deep-mind-alphatensor-will-discover-new-algorithms.html

The first android
https://futurism.com/scientists-actively-trying-to-build-conscious-robots

Ask again later
https://www.the-sun.com/tech/6462491/artificial-intelligence-predicting-future/
 
Ex Machina. Worth watching.

It has its good moments, but your mileage may vary; you might love it or hate it, but give it a try.
I have watched it once and would rate it about 6/10 overall for production, story, sound, and visuals. It's not bad.
 
Forget positronic brains--you want protonic programmable resistors.
https://techxplore.com/news/2022-07-hardware-faster-artificial-intelligence-energy.html

A multidisciplinary team of MIT researchers set out to push the speed limits of a type of human-made analog synapse that they had previously developed. They utilized a practical inorganic material in the fabrication process that enables their devices to run 1 million times faster than previous versions, which is also about 1 million times faster than the synapses in the human brain...."Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft," adds lead author and MIT postdoc Murat Onen.

"The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them into ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn't damage anything, thanks to the small size and low mass of protons. It is almost like teleporting," he says.

"The nanosecond timescale means we are close to the ballistic or even quantum tunneling regime for the proton, under such an extreme field," adds Li.


It uses inorganic phosphosilicate glass (PSG)...basically silicon dioxide.
https://techxplore.com/news/2022-08-synapses-solid-state-memory-neuromorphic-circuits.html
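
For context on why analog hardware can beat digital training hardware at this: a resistive crossbar performs a whole matrix-vector multiplication in one physical step, because each column's output current is the voltage-weighted sum of conductances (Ohm's and Kirchhoff's laws). A minimal idealized model, ignoring the noise and nonlinearity of real devices:

```python
# Idealized resistor crossbar: output current I_j = sum_i V_i * G_ij,
# i.e. a matrix-vector product computed "for free" by physics.
import numpy as np

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances = stored weights
V = rng.uniform(-1.0, 1.0, size=4)       # input voltages = activations

I = V @ G   # one physical step on analog hardware; a loop of MACs digitally

print("output currents:", np.round(I, 3))
```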

AI in economics
https://phys.org/news/2022-08-ai-profit-goals-complex-financial.html

Failings
https://phys.org/news/2022-08-scientists-deepmind-ai-good-fractional-charge.html
******************************************************************************************************

The most stunning article I have seen in a while:

Roboticists discover alternative physics
https://phys.org/news/2022-07-roboticists-alternative-physics.html

A "question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science."

Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery.

"I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe in a different way?"

And this is even more shocking:
https://phys.org/news/2022-08-robotic-motion-space-defies-standard.html

"This research also relates to the 'Impossible Engine' study," said Rocklin. "Its creator claimed that it could move forward without any propellant. That engine was indeed impossible, but because spacetime is very slightly curved, a device could actually move forward without any external forces or emitting a propellant—a novel discovery."

 
It seems we have found "thinking materials":
"We have created the first example of an engineering material that can simultaneously sense, think and act upon mechanical stress without requiring additional circuits to process such signals," Harne said. "The soft polymer material acts like a brain that can receive digital strings of information that are then processed, resulting in new sequences of digital information that can control reactions."
https://techxplore.com/news/2022-08-material-capable.html

This is even better:
https://techxplore.com/news/2022-08-material-brain.html
EPFL researchers have discovered that vanadium dioxide (VO2), a compound used in electronics, is capable of "remembering" the entire history of previous external stimuli.

Ha! Shades of water memory...

Talk about wet-ware...
A pair of researchers at MIT have found evidence suggesting that a new kind of computer could be built based on liquid crystals rather than silicon.
https://techxplore.com/news/2022-08-mathematicians-liquid-crystals-blocks-kind.html

That might also lend itself to A.I.

Odd
https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works


Fight fire with fire:
https://academic.oup.com/pnasnexus/article/1/5/pgac256/6831651?login=true

Recent breakthroughs in machine learning and big data analysis are allowing our online activities to be scrutinized at an unprecedented scale, and our private information to be inferred without our consent or knowledge. Here, we focus on algorithms designed to infer the opinions of Twitter users toward a growing number of topics, and consider the possibility of modifying the profiles of these users in the hope of hiding their opinions from such algorithms. We ran a survey to understand the extent of this privacy threat, and found evidence suggesting that a significant proportion of Twitter users wish to avoid revealing at least some of their opinions about social, political, and religious issues. Moreover, our participants were unable to reliably identify the Twitter activities that reveal one’s opinion to such algorithms. Given these findings, we consider the possibility of fighting AI with AI, i.e., instead of relying on human intuition, people may have a better chance at hiding their opinion if they modify their Twitter profiles following advice from an automated assistant.
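
A toy version of that "automated assistant" idea, with invented classifier weights and profile items; the study's actual models are far more elaborate. The assistant greedily suggests deleting the most revealing profile items until a simple logistic opinion classifier becomes uncertain:

```python
# Toy "fight AI with AI" assistant: greedily hide profile features from
# a logistic opinion classifier. All weights/items are made up.
import math

WEIGHTS = {   # hypothetical classifier weights over profile features
    "follows:@candidate_a": 2.1,
    "hashtag:#rally2020": 1.4,
    "retweet:policy_thread": 0.9,
    "hashtag:#catpics": 0.05,
}

def opinion_probability(features):
    z = sum(WEIGHTS.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))

def suggest_deletions(features, target=0.6):
    """Drop the most revealing item until classifier confidence <= target."""
    features = list(features)
    removed = []
    while opinion_probability(features) > target and features:
        worst = max(features, key=lambda f: WEIGHTS.get(f, 0.0))
        features.remove(worst)
        removed.append(worst)
    return removed, opinion_probability(features)

removed, p = suggest_deletions(list(WEIGHTS))
print("delete:", removed)
print(f"classifier confidence after edits: {p:.2f}")
```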
 
This video is interesting but not particularly new:
We probably need to demonstrate to AIs that we're worth keeping around.
 
I am surprised nobody is discussing ChatGPT here. IMO, it will one day replace Google if Google doesn't get its act together.

Here's a response generated by ChatGPT based on a prompt.

[Attached image: screenshot of the ChatGPT response]
 