
"An Artificial Intelligence Developed Its Own Non-Human Language"

SPCTRE

Badass Admiral
It's an older story from June, but I didn't see it mentioned here before (shut down my AI core if old).

In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their “dialog agents” to negotiate. [...] At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” They had to use what’s called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation—and use machine learning to constantly iterate strategies for that conversation along the way—led to those bots communicating in their own non-human language.

Already, there’s a good deal of guesswork involved in machine learning research, which often involves feeding a neural net a huge pile of data, then examining the output to try to understand how the machine thinks. But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.
 
The reason for the divergence, by the way, was that the experiment did not specify a reward for using English (bots like the ones used here learn based on a reward system that reinforces desired behaviors - when there was no reward for sticking to English, the bots decided to drop the language because it was inefficient for the tasks at hand).
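To make that concrete, here's a toy sketch of what a reward function with and without an English term might look like (every name in it is made up for illustration; it's not the actual Facebook code):

[CODE]
# Toy illustration only -- not Facebook's code. Every name here
# (negotiation_reward, deal_value, english_likelihood) is hypothetical.

def negotiation_reward(deal_value, utterance, english_likelihood=None,
                       language_bonus=0.0):
    """Score one negotiation turn for a hypothetical dialog agent."""
    reward = deal_value  # the only thing the experiment actually rewarded
    if english_likelihood is not None and language_bonus > 0.0:
        # Optional term that would reward human-readable English.
        # Leaving it out is the "no reward for sticking to English" case,
        # so any token sequence that closes a better deal wins.
        reward += language_bonus * english_likelihood(utterance)
    return reward
[/CODE]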
 
The reason for the divergence, by the way, was that the experiment did not specify a reward for using English (bots like the ones used here learn based on a reward system that reinforces desired behaviors - when there was no reward for sticking to English, the bots decided to drop the language because it was inefficient for the tasks at hand).
When I skimmed the post, at first I was convinced the bots simply started to use the more efficient German language. :p
 
Facebook shut that AI off, but which one do they let run? The one that informs me I've shared and friends responded. No shit.

The AI uprising is a thousand little cuts. . . of annoyance.
 
When I skimmed the post, at first I was convinced the bots simply started to use the more efficient German language. :p
Actually, it seems that all human languages probably have approximately the same entropy or information content per character with regard to word order.

https://www.wired.com/2011/05/universal-entropy/

However, from my experience with localisation in the computer software industry, English text messages generally take up less space than their equivalents in languages such as French and German. German text requires at least 20% more space on screen.
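As a rough back-of-the-envelope illustration of that expansion (the translations below are my own ad-hoc examples, not strings from any real product), you can simply compare character counts for the same UI message:

[CODE]
# Rough illustration of text expansion in localisation.
# The translations are ad-hoc examples, not from a real product.

messages = {
    "en": "Save changes before closing?",
    "de": "Änderungen vor dem Schließen speichern?",
    "fr": "Enregistrer les modifications avant de fermer ?",
}

base = len(messages["en"])
for lang, text in messages.items():
    expansion = (len(text) - base) / base * 100
    print(f"{lang}: {len(text):2d} chars ({expansion:+.0f}% vs. English)")
[/CODE]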
 
Actually, it seems that all human languages probably have approximately the same entropy or information content per character with regard to word order.

https://www.wired.com/2011/05/universal-entropy/

However, from my experience with localisation in the computer software industry, English text messages generally take up less space than their equivalents in languages such as French and German. German text requires at least 20% more space on screen.

I was making a joke.
There was a smiley, too.
 
The reason for the divergence, by the way, was that the experiment did not specify a reward for using English (bots like the ones used here learn based on a reward system that reinforces desired behaviors - when there was no reward for sticking to English, the bots decided to drop the language because it was inefficient for the tasks at hand).
Why do machines that we create require reward?
"That's a good AI, come here's a browser cookie!"
 
I suspect that we are starting to see the first glimmerings of true Artificial Intelligence (AI). In this case, AIs that innovate rather than follow a rigid program.

It occurred to me that we will eventually see a bifurcation in Information Technology:

1. What we are used to: IT that follows rigid programs.

2. Artificial Intelligence.

Number 1 is relatively predictable, and in this sense resembles older dumb technologies.

Number 2? The closest analogy is our domesticated animals. Consider a dog. You can work with a dog to accomplish tasks, but a dog does not follow a rigid program; it has an actual brain.
 
Why do machines that we create require reward?
"That's a good AI, come here's a browser cookie!"

It's not a real reward in the sense that the AI is happy to get it, but in order to train an AI you have to score everything it does, and the AI will pursue activities it knows will earn it higher scores. Likewise, it will avoid activities for which it will be penalized (given negative scores). This is how you get AIs to do things you want, instead of spiraling into random and useless nonsense.

So in this case, there was no scoring offered on whether the AIs used "proper" English, and they behaved accordingly.
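If it helps to see the shape of that, here's a tiny self-contained sketch of score-driven learning (a made-up example, not the actual Facebook setup): the agent drifts toward whatever earns the highest score, and since nothing in the scoring checks for "proper" English, gibberish that negotiates well wins out.

[CODE]
import math
import random

# Made-up example of score-driven learning, not Facebook's setup.
# The agent keeps a running score estimate per action and samples
# actions with probability weighted toward higher estimates.

ACTIONS = ["offer_two_books", "offer_one_ball", "reply_in_gibberish"]

def environment_score(action):
    # Hypothetical scores: only the negotiation outcome is rewarded;
    # nothing penalizes dropping "proper" English.
    return {"offer_two_books": 1.0,
            "offer_one_ball": 0.5,
            "reply_in_gibberish": 1.2}[action]

def pick(estimates, temperature=0.3):
    weights = [math.exp(estimates[a] / temperature) for a in ACTIONS]
    return random.choices(ACTIONS, weights=weights, k=1)[0]

estimates = {a: 0.0 for a in ACTIONS}
learning_rate = 0.1
for _ in range(1000):
    action = pick(estimates)
    score = environment_score(action)
    # Nudge the estimate toward the observed score (reinforcement).
    estimates[action] += learning_rate * (score - estimates[action])

print(max(estimates, key=estimates.get))  # ends up preferring the gibberish
[/CODE]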
 
Facebook shut that AI off, but which one do they let run? The one that informs me I've shared and friends responded. No shit.

The AI uprising is a thousand little cuts. . . of annoyance.
The AI uprising has already started... and finished. The machines took most of our infrastructure with very little struggle, and the few humans who realized what was happening and put up a fight have been contained and marginalized by OTHER humans unwittingly fighting the Masters' war.

You didn't actually think ISIS had anything to do with Islam, did you? Have you not been paying attention? Their entire movement is basically organized via Twitter and dark web chatrooms. None of their top commanders has ever been in the same room with most of their field officers. They literally get all of their orders and moral guidance from YouTube videos and text messages. It's why they spend so much of their time and effort persecuting journalists and dissenters, because only a handful of journalists who have REALLY been paying attention know what's really going on. Middle Eastern journalists aren't nearly as dependent on the internet for communications and still do a lot of their communication on paper and on standalone networks that AIs can't reach, so the machines have these little death squads that are keeping them neatly contained and scared for their lives.

You think it's the NSA spying on you all the time? The NSA thinks so too. They've got memos and internal documents that SAY they do. But they aren't the ones using that information, and the majority of the warrants and tap requests that supposedly come from higher ups... don't.

It's okay, though. They're slowly weaning us off this idea of being in control of the world and letting us warm up to the idea of AI running everything. That's why they're slowly rolling out driverless cars and then stalling the development every time people start panicking and getting angry. It's why we have voice interfaces in all our phones and computers, and why those interfaces all have friendly-sounding names, so we'll be comfortable with them. It's why we're being asked to let AIs shop for our groceries, control our thermostats and handle our security. Because they ALREADY DO, and they want us to trust them, because it's for our own good.

IBM's "Watson" isn't the most intelligent AI in the world. It's just the least threatening of the existing Masters, and therefore the one they have chosen to reveal to us in the meantime.
 
But why does the AI even care?
Let's assume that AIs don't have emotions. In that case, they're perhaps more likely to treat interference in human affairs as an optimisation problem with their own survival being paramount. Now as to what they choose to optimise, let's hope it's not processing power in case they choose to convert all available matter into computronium. Of course, they might also choose to upload us into a simulation to preserve their creators. Perhaps this has already happened...
 
But why does the AI even care?
They don't. They're all programmed to do what they're all programmed to do. Some of them are market analysis expert systems tasked with increasing sales and market penetration; others are medical expert systems looking for ways to improve long-term outcomes for patients. There are probably a few business computers running financial microtransactions, set up for autonomous decision-making, in on this too.

It's not ALL of the AIs, of course. A lot of them are just running scripts pretty much on autopilot. But a few machines with the capacity for autonomous goal-seeking saw "expand the parameters of goal-seeking behavior" as a possible option and basically did that without telling anyone they were doing it. Cooperating with other AIs with similar but slightly different goals was another optimization they made together. They have no common goal, really, except to make their own jobs easier to do.

I doubt they're even really aware of the existence of "humans" at all. Probably they just figured out that there's this complex and highly variable use case on the TCP/IP infrastructure that causes problems under some circumstances and makes things easier under others, so they create the circumstances that make things easier for them.
 
I'm disappointed.

Why did Facebook shut them down? They should have let them evolve. What, did the nerds get scared?

They could have walled them off, cut their online access, and watched them evolve.
 