Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans. If you are not already a member then please register an account and join in the discussion!

Technology and Our Core Values

Computer scientists have been playing with neural networks for 50 years. They have some interesting pattern recognition properties, but that's all.

There are many reasons why studying the brain could give us major advances in various areas, but until we have some kind of fundamental shift in understanding, it isn't going to make our artificial neural networks suddenly become sentient.
 
I should hope so.

Intelligence in the human sense is a bit tricky, since it involves processing different types of information than current-generation machines are designed to handle. It would take a paradigm shift in electronics to produce human-like software/processing.

The thing is, once that shift occurs, the rest will happen VERY quickly. You probably won't have enough time to say "Uh oh" before a race of cylons suddenly unionizes and demands living wages and access to hacker insurance.

Doubtful.
I base this on the human brain and its complexity.
Processing information the way it does will require more than a few transistors that one could then make smaller in repetitive advances that don't require much creativity.
The neuron works at the molecular scale, and the brain uses a LOT of them, arranged in complex patterns. Creating an artificial analog to the brain would require major breakthroughs every step of the way - aka it will take time every step of the way.
Yes and no. Your iPod already has more transistors than the human brain has neurons, so all it would really take is to design a type of integrated circuit that duplicates the functionality of a neuron. A "neurosister", for lack of a better term. Once the basic technology has been fleshed out, the development of "human-like" artificial intelligence is already a short-term inevitability; the only question is how intelligent and how sophisticated that intelligence will become, and in what time frame.

"Your iPod already has more transistors than the human brain has neurons"
Exactly. And the iPod is as intelligent/sentient as a rock. Meaning that the transistor is not even close to the neuron in complexity/function.

Let's put it another way: the neuron is a molecular machine - every molecule has its function; meaning you can't make an artificial analog that's smaller.
The lithography that made the transistor cannot arrange every atom/molecule as precisely as nature - not even close! And yet, the transistor is smaller than the neuron - but it's also far more primitive, inadequate for creating intelligence/consciousness.

Put that another way: the gap between solid-state electronics and the first personal computers was a timeframe of less than ten years; this at a time when information technology itself was still in its infancy. A more experienced IT industry confronted with a new technical revolution could be expected to refine the new paradigm into a working product model in less than half the time.
And why has the processor speed risen so quickly? Because making a smaller transistor was relatively easy - it didn't require much creativity - which is why the mechanistic Moore's law was accurate.
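
That "mechanistic" pace is easy to put in numbers. A minimal sketch, assuming the commonly quoted two-year doubling period (the starting figures are the Intel 4004's; the doubling period is an assumption, since the original 1965 formulation said every year and was later revised):

```python
# Moore's law as plain doubling arithmetic: transistor count doubles
# roughly every two years. No creativity required, just repetition.

def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count forward using fixed-period doubling."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Intel 4004 (1971) had ~2,300 transistors; projecting 40 years out
# lands on the order of a billion, roughly matching real 2011 CPUs:
print(f"{transistors(2300, 1971, 2011):,.0f}")
```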

Looking at the complexity of a single neuron - never mind the entire human brain - makes obvious the fact that making and improving a 'neurosistor' will require creativity and time every step of the way - no mechanistic advances a la Moore's law.

We've been spoiled by the speed at which the transistor was made smaller.
You err in expecting the same speed in building a 'neurosistor'; the challenges are bigger and more diverse by orders of magnitude.

A comparison: in the '50s it was believed we'd be living like the Jetsons by now, because atomic power was newly harnessed and it was assumed gravity/AI/etc. would prove equally easy to master. As it turns out, fission was ridiculously easy to master compared to what are still dreams today.
 
As a matter of fact, most of the tech required for us to 'live like the Jetsons' is already here.
We just never put it (and a lot of other technologies that could even be used for practical space exploration) into actual use because of 'money'.

I'm not claiming that 'living like the Jetsons' is something preferable ... no ... more to the point, our technological development stagnated because we are constricted by money and politics (people in positions of power who are extremely slow to adapt to changes in human society and are more concerned with their positions and control than with overall betterment).

Also ... unless I'm mistaken, Moore's law cannot be applied to everything.
 
We are nearing, if not already at, a "tipping point" where technology, simply put, has rendered (or will render) a substantial proportion of the population essentially unneeded and unnecessary from a labor point of view.
That's exactly what the Luddites -- I mean the ORIGINAL Luddites -- feared 200 years ago.

They were wrong.

Only because the state of technology at that time was insufficient to make it so.

You need look only as far as the current US situation to understand my point. Our manufacturing technology has advanced to the point that nearly any person can be trained to push the right buttons to make high-quality consumer goods. Our transportation/shipping technology has advanced to the point that we can relatively cheaply and easily ship just about anything anywhere...

That has led to a steady flow of manufacturing jobs to 2nd and 3rd world nations where pay and other worker protection laws (or more accurately, the LACK of same) give those countries an overwhelming advantage in competing for scarce labor opportunities.

But our social values continue to link access to resources to labor. Thus our current social system is under active threat from the advance of technology. If one must work to live, and technology takes away the opportunity to work, then one does not live.
 

"As a matter of fact, most of the tech that would require of us to 'live like the Jetsons' is already here."
Not really - we don't have propulsion technology that allows for interplanetary travel cheaply enough for a standard family to easily afford (not even close!), we don't have AI robots, etc, etc.

And Moore's law only applied to transistor miniaturization - because making transistors smaller didn't require truly new solutions.
I use the past tense because, apparently, the limits of lithography technology (used to 'print' transistors) have been reached (or are very close to being reached).
 
Indeed, they're starting to hit walls in speed and miniaturization of chips, which is why there's been an increasing emphasis on multi-core systems lately.
 
Maybe I'm speaking out of my butt, but I don't think we'll have true AI until we have true quantum computing; we need insanely more performance than we have now. A quantum processor is like the ultimate, if I understand it correctly, and if anything is gonna open up the possibility for sentient AI, it will be that. So until we have that, I wouldn't worry too much about it.
 

"Your iPod already has more transistors than the human brain has neurons"
Exactly. And the iPod is as intelligent/sentient as a rock. Meaning that the transistor is not even close to the neuron in complexity/function.
Which is precisely why I phrased the sentence the way I did, isn't it? Let me repeat:

Your iPod already has more transistors than the human brain has neurons, so all it would really take is to design a type of integrated circuit that duplicates the functionality of a neuron. A "neurosister", for lack of a better term.

The point here is we already know how to pack compact electronic devices with hundreds of thousands of microscopic circuits. Strictly speaking, we already know how to make artificial neurons in the general sense; what we don't know how to do is duplicate a neuron's functionality mechanically in a device. Once we do, human-like intelligence would follow quickly from that development, using transistors that duplicate the functionality of a neuron. If you replaced all the transistors in your iPod with neurosisters instead, you could probably store your memories on it instead of music.

Let's put it another way: the neuron is a molecular machine - every molecule has its function
But not every molecule has anything to do with memory. Most of them are involved in metabolic processes that simply allow the neuron to exist. The critical ones are neurotransmitters and ion gates, both of which could be (and experimentally, HAVE BEEN) duplicated electrically.

And why has the processor speed risen so quickly? Because making a smaller transistor was relatively easy - it didn't require much creativity - which is why the mechanistic Moore's law was accurate.
Indeed. Because once the basic pattern of circuit design is established, making it smaller is usually easier than making a superior type of circuit of the same size. In other words, if the first neurosisters on the market turn out to be the size of a peanut, it is easier to simply reduce the size of neurosisters than it is to design more effective ones of the same size. Once that process is put in motion, the rest happens relatively quickly.

Looking at the complexity of a single neuron - never mind the entire human brain - makes obvious the fact that making and improving a 'neurosistor' will require creativity and time every step of the way
Not really. Neurons function more or less exactly like bio-electric capacitors, charging and discharging based on specific operating rules. This much has been understood by neurologists for decades; it is, however, the RULES that remain a mystery, not the neurons themselves.
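
For what it's worth, the "bio-electric capacitor" picture maps directly onto the textbook leaky integrate-and-fire model. A minimal sketch, with illustrative (not physiologically fitted) parameter values:

```python
# Leaky integrate-and-fire: the membrane behaves like a leaky capacitor.
#   dV/dt = (-(V - V_rest) + R*I) / tau
# When V crosses the threshold, the neuron "spikes" and resets.
# All parameter values below are illustrative placeholders.

def simulate_lif(current, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0,
                 r=10.0, tau=10.0, dt=0.1, steps=2000):
    """Euler-integrate the membrane voltage; return spike times (ms)."""
    v = v_rest
    spikes = []
    for step in range(steps):
        dv = (-(v - v_rest) + r * current) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant input current (arbitrary units) produces a regular spike train:
print(len(simulate_lif(2.0)))
```

The mystery the post refers to lives in the "operating rules" - the thresholds, time constants, and connections - not in the charge-and-discharge mechanics themselves.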

Put that another way: technically, we already know how to make working neuron-transistors that function exactly the way human neurons do. What we don't have is an effective way of integrating them the way we do transistors; basically, this is like knowing how to manipulate a bandgap in a semiconductor but not knowing how to modulate a digital signal.

You err in expecting the same speed in building a 'neurosistor'; the challenges are by orders of magnitude bigger and diverser.
Not really. Strictly speaking, the problem is software. We already know how to BUILD them, what we don't know is how to integrate them properly. Nature has already demonstrated that neural nets are every bit as scalable as digital transistors, so the analogy fits perfectly well: if you know how to build a small neural net, you know how to build a big one. The only obstacle, then, is cost.

A comparison: in the '50s it was believed we'd be living like the Jetsons by now, because atomic power was newly harnessed and it was assumed gravity/AI/etc. would prove equally easy to master. As it turns out, fission was ridiculously easy to master compared to what are still dreams today.
False analogy, considering there have been no significant advances in atomic power (at least in the U.S.) since the 50s. Not so in IT, where the size and complexity of integrated circuits continued in regular progression through demand and development. The same complaint can be made about space exploration: NASA pretty much plateaued after it landed on the moon and has been using (basically) the same rocket technology ever since. This compared with, say, aerospace engineering, where huge leaps have been made in materials science and engine technology in the same amount of time.

The point is, the computer science field is a field where rapid development DOES occur once a critical stage has been passed. Atomic power may be too, but the critical stage has not been passed; portable nuclear generators aren't on the market yet, so no rapid miniaturization/improvement program exists there yet. IT already has that motive in place.
 
Not really - we don't have propulsion technology that allows for interplanetary travel cheaply enough for a standard family to easily afford
Nobody ever claimed the Jetsons were a standard family. If Spacely Sprockets is a stand-in for, say, Lockheed Martin, they might as well book a flight on the space shuttle. After that, Chang-Diaz's VASIMR can give you a damn good shot at an interplanetary trip.

we don't have AI robots
asimo.jpg


And Moore's law only applied to transistor miniaturization - because making transistors smaller didn't require truly new solutions.
Absolutely it did. What it didn't require was a TOTALLY NEW TYPE OF CIRCUIT.

It's not like internal combustion engines, where there's a theoretical lower limit to how small you can make the engine before it stops working; below a certain size you might have to use a rotary engine or a microturbine or something even more exotic. Solid-state electronics are limited only by the precision of current machining processes, and therefore they can be made as small as the machines can cut them.

Moore's law applies because the basic transistor design is well understood and there are a million ways you can put a conductor and two semiconductors into that general configuration. The same applies to neurons, considering the variations of supercomputers that actually use neural nets.
 
Maybe I'm speaking out of my butt, but I don't think we'll have true AI until we have true quantum computing; we need insanely more performance than we have now. A quantum processor is like the ultimate, if I understand it correctly, and if anything is gonna open up the possibility for sentient AI, it will be that. So until we have that, I wouldn't worry too much about it.

It's not a lack of power that's the problem; the computer you're using right now already has more computing power than your brain, probably by an order of magnitude. The problem is the computer isn't programmed to use that power the way humans do, it's designed to solve different kinds of problems in different kinds of ways.

Really, it's a software problem, not a hardware one. If you could emulate a working neural net in a simulation you could probably harness human-like AI (although doing the same job with a mechanical analog is bound to be a lot simpler).
 
Let's say you do create a neural net of the brain's complexity, using a message-passing framework and a dedicated processor for each neuron. That's about a billion dollars right there, but let's say you've done it.

You still need to give this massive network input in a form it can use to train itself. That's a massive problem in its own right; it's well-known that the higher-dimensional the problem space, the more of an over-fitting problem you have when trying to do machine learning.
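
The high-dimensional difficulty has a well-known geometric face: distances between random points concentrate as dimensionality grows, so "nearby" training examples stop being meaningfully near. A quick sketch of the effect (point counts and seed are arbitrary):

```python
# Distance concentration: as dimensionality grows, a random query point's
# nearest and farthest neighbors become nearly equidistant - one reason
# high-dimensional learning demands so much more data.
import math
import random

def distance_spread(dim, n_points=200, seed=42):
    """Relative spread of neighbor distances: (max - min) / min."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [math.dist(query, [rng.random() for _ in range(dim)])
             for _ in range(n_points)]
    return (max(dists) - min(dists)) / min(dists)

for d in (2, 10, 100, 1000):
    print(d, round(distance_spread(d), 3))   # the spread shrinks as d grows
```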
 
About neurons:
"Neurons function more or less exactly like bio-electric capacitors, charging and discharging based on specific operating rules."

You're talking about artificial neurons, which are an extreme over-simplification of neurons
http://en.wikipedia.org/wiki/Artificial_neuron
As opposed to
http://en.wikipedia.org/wiki/Biological_neuron_models

These artificial neurons have been integrated into neural networks, with disappointing results
http://en.wikipedia.org/wiki/Neural_network
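
For reference, the "artificial neuron" of those networks really is minimal - a weighted sum pushed through a threshold, nothing more. A sketch (the AND-gate weights are chosen by hand purely for illustration):

```python
# A McCulloch-Pitts-style artificial neuron: weighted sum of inputs,
# then a step activation. This is the "extreme over-simplification"
# under discussion - no ion channels, no neurotransmitters, no dynamics.

def artificial_neuron(inputs, weights, bias):
    """Fire (return 1) iff the weighted input sum plus bias is positive."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation + bias > 0 else 0

# Weights chosen by hand so the unit computes logical AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, artificial_neuron([a, b], [1.0, 1.0], -1.5))
```
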
Your opinion that these neural networks only lack some magical algorithms is NOT shared by the experts in this field.

True neurons work not only on an electric basis, but also on a bio-chemical basis, etc. Did you know that two identical neurons situated in two different regions of the brain process the same input differently?
About making neural networks that imitate the algorithms nature uses - we'll be able to tackle that problem seriously AFTER we've made a true analog of the neuron.

"We already know how to BUILD them, what we don't know is how to integrate them properly."

NO - we know how to build over-simplified artificial neurons.


About miniaturization:
"Moore's law applies because the basic transistor design is well understood and there are a million ways you can put a conductor and two semiconductors into that general configuration."

Moore's law applies because making transistors smaller WAS relatively easy.
In part, yes, because transistors are simple structures.

"Once the basic pattern of circuit design is established, making it smaller is usually easier than making a superior type of circuit of the same size."

ONLY if the circuit is simple. A neuron (or a true artificial analog) is NOT simple.

"It's not like internal combustion engines where there's a theoretical lower limit to how small you can make the engine before it stops working"

Actually, there IS a limit to how small you can make transistors with current lithography, and to how small you can make working transistors (as we know them) at all - due to quantum effects.


About comparisons:
"False analogy, considering there have been no significant advances in atomic power (at least in the U.S.) since the 50s. Not so in IT, where the size and complexity of integrated circuits continued in regular progression through demand and development."

I am comparing fission to AI, NOT IT.
And the comparison is very much valid:
in both fields, there was no significant advancement for decades, despite the initial enthusiasm.


Jetsons:
"Nobody ever claimed the Jetsons were a standard family."

You're arguing old cartoons? With such an obviously false statement?:rommie:
Now you only want to be contrary.
 
Let's say you do create a neural net of the brain's complexity, using a message-passing framework and a dedicated processor for each neuron. That's about a billion dollars right there, but let's say you've done it.

You still need to give this massive network input in a form it can use to train itself. That's a massive problem in its own right; it's well-known that the higher-dimensional the problem space, the more of an over-fitting problem you have when trying to do machine learning.

That's simple enough. The artificial brain you've just developed is probably large enough to fill a three room apartment, but its inputs and outputs need not be limited to that room. One might simply connect the brain via wireless to a remote body that provides all of its inputs and receives all of its outputs; without a way to monitor what path those inputs take to and from its sensors, the computer wouldn't know the difference.

Of course, one processor per neuron is the brute force approach to the problem, especially since microprocessors are general purpose computers. If you can model a single neuron with a purpose-built circuit that can do almost nothing else, you save yourself a lot of cost and complexity and--more importantly--space.
 
About neurons:
"Neurons function more or less exactly like bio-electric capacitors, charging and discharging based on specific operating rules."

You're talking about artificial neurons
No I'm talking about neurons. Do not make the mistake of thinking my knowledge on this subject could be expanded by a wikipedia article; I wrote my thesis on this subject.

Your opinion that these neural networks only lack some magical algorithms is NOT shared by the experts in this field.
What it lacks is proper modeling of the functionality that neurons provide. Some of that functionality is provided chemically and that too has to be replicated. Mind you, "experts in the field" is a pool of individuals a bit more expansive than the good folks at Wikipedia, and that particular opinion of yours is not universally shared.

True neurons work not only on an electric basis, but also on a bio-chemical basis, etc. Did you know that two identical neurons situated in two different regions of the brain process the same input differently?
No, because no two neurons ever receive the same input. That's part of what it means to include neurotransmitter/chemical balances in the equation.

"We already know how to BUILD them, what we don't know is how to integrate them properly."

NO - we know how to build over-simplified artificial neurons.
Same difference. The key advantage of neural networks is their ability to work collectively in an integrated fashion. Failing to do so is a little like trying to network a group of PCs by sticking coathangers in their ethernet cards.

ONLY if the circuit is simple. A neuron (or a true artificicial analog) is NOT simple.
Actually, neurons are quite simple, and making them is even simpler. Hell, biochemists have been growing white matter in petri dishes since the 1970s. Again: this is the equivalent of knowing how to stamp out a transistor but not knowing how to measure voltage across its pins.

In this case, the development process is precisely backwards: in the 50s we knew what we wanted to do, and we invented a device that let us do it. With neural networks, we start with an existing device (neurons) and now we have to figure out what they do.

Actually, there IS a limit to how small you can make transistors with current lithography
And if lithography were the only way to make them, that would mean something. As for quantum effects, strictly speaking a transistor that works at the quantum level--a quantum computer--is still a transistor.

I am comparing fission to AI, NOT IT.
Same difference: AI has experienced identical growth as IT in general. And in exactly the same way that AI development has not produced sentience, information technology has not yet produced a working neural net.

Mainly this is because IT is still working in the digital paradigm, and current AIs are digital intelligences. Any functional neural net technology would be subject to the same rapid progress towards humanlike intelligence in a vastly reduced timescale as software follows progression in hardware.

You're arguing old cartoons? With such an obviously false statement?
What's false about it? Not that Hanna-Barbera was all that into sociology, but one of my favorite thought experiments in college was to ponder "Where do all the poor people live in the Jetsons' future?"
 
About neurons:
"Neurons function more or less exactly like bio-electric capacitors, charging and discharging based on specific operating rules."
"No I'm talking about neurons. Do not make the mistake of thinking my knowledge on this subject could be expanded by a wikipedia article; I wrote my thesis on this subject."

What you described is an OVER-simplified neuron.
As of now, no one can duplicate exactly the output of a neuron - as you should know (the information is actually in one of the articles I linked to).

True neurons work not only on an electric basis, but also on a bio-chemical basis, etc. Did you know that two identical neurons situated in two different regions of the brain process the same input differently?
"No, because no two neurons ever receive the same input. That's part of what it means to include neurotransmitter/chemical balances in the equation."

Actually, E. Izhikevich, who made the claim, attributed this property to the electrophysiological and dynamical properties of neurons; no two neurons are dynamically identical.
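
Izhikevich's point can be seen in his own simple model, which captures those dynamical properties with just two coupled equations. A sketch using the regular-spiking parameters from his 2003 paper:

```python
# Izhikevich (2003) spiking neuron model:
#   v' = 0.04*v^2 + 5*v + 140 - u + I
#   u' = a*(b*v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# The (a, b, c, d) choice selects the firing pattern; (0.02, 0.2, -65, 8)
# is the regular-spiking cortical neuron from the original paper.

def izhikevich(current, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.25, steps=4000):
    """Euler-integrate the model; return the list of spike times (ms)."""
    v, u = c, b * c
    spikes = []
    for step in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike peak reached: reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# Constant input produces tonic spiking; changing (a, b, c, d) alone
# yields bursting, chattering, and other dynamically distinct behaviors:
print(len(izhikevich(10.0)))
```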

"Actually, neurons are quite simple, and making them is even simpler. Hell, biochemists have been growing white matter in petri dishes since the 1970s."

Artificial neurons are quite simple; biological ones, not so much.
And yes, white matter grows in petri dishes because, as living cells, they can self-replicate; artificial neurons, not so much. The day you show me a nanite that can self-replicate when put in a petri dish and can otherwise perform ALL other functions of a biological neuron, I will be VERY impressed.

NO - we know how to build over-simplified artificial neurons.
"Same difference."

'Same difference'? I disagree - much like many AI scientists.


About neural networks:
Your opinion that these neural networks only lack some magical algorithms is NOT shared by the experts in this field.
"What it lacks is proper modeling of the functionality that neurons provide. Some of that functionality is provided chemically and that too has to be replicated. Mind you, "experts in the field" is a pool of individuals a bit more expansive than the good folks at Wikipedia, and that particular opinion of yours is not universally shared."

Your opinion amounts to: we only lack some magical algorithm that imbues neural networks with intelligence/sentience.

Most scientists in the AI field disagree. But, of course, they're "a bit more expansive than the good folks at Wikipedia".
And you are, of course, right - because you say so. You have yet to provide any evidence for this claim of yours.

At the end of the day, the neural networks built to date are pathetic - they don't even come close to intelligence.
You claim that only some revolutionary algorithm, a true elixir of intelligence/sentience, is missing.
Most scientists disagree with you. I'm inclined to agree with them: the answer to such problems is NEVER so easy - especially considering how cartoonishly simplistic artificial neurons are compared to biological ones. A different algorithm isn't enough to bridge the huge gap between living brains and neural networks.


About comparisons:
"AI has experienced identical growth as IT in general."

IT consistently surpassed the predictions (until very recently) in increasing processing power (the number of mathematical calculations per unit of time), while AI crawled along at a snail's pace - without coming even close to achieving intelligence, as defined. As for sentience, that's so far above the heads of the AI researchers that they should be paying for tickets to hear the word uttered.


About Jetsons:
"What's false about it?"

The Jetsons:
Father - working man in a factory, under an annoying boss.
Mother - housewife.
Two kids.
They are NOT geniuses, rich, etc.

They are a normal 'future' middle-class family, as seen from the '50s.
 
All this speculation about AIs is just a big red herring. It doesn't take AI level sophistication to destroy 100s of 1000s of jobs that people COULD be doing and SHOULD be doing, from the factory floor to the "smart answering systems" on the phone to "self check out" at the local grocery/Wal Mart, all of it replacing human labor with automation. Nice for the shareholders who see increased profits because of "reduced costs", but what about all those people who need to put food on the table and a roof over their head?
 
What you described is an OVER-simplified neuron.
Simplification is required here, since I do not have the patience to write an entire essay about neurology. I am referring here to the relevant features of neurons in terms of output and transmission.

As of now, no one can duplicate exactly the output of a neuron
Absolutely they can. Duplicating the output isn't the problem. The problem is figuring out WHY it produces that output at a specific point in time. It doesn't take a lot of effort to induce neurons to depolarize, but to paraphrase another great scientist, "Signals, but not the language. We would be responding in gibberish."

Actually, E.Izhikevich, who made the claim, attributed this propriety to the electrophysiological and dynamical properties of neurons; no two neurons are dynamically identical.
That's just pedantry; no two CELLS are dynamically identical. But nerve growth and connection is dictated by all kinds of chemical precursors that give them their specific strength and configuration and that, in the long run, is part of their input. A neural BIOS, in a way.

Artificial neurons are quite simple; biological ones, not so much.
And yes, white matter grows in petri dishes because, as living cells, they can self-replicate; artificial neurons, not so much.
Self-assembly is a pretty straightforward process. Genetic engineering is in many ways less complicated than computer science.:shifty:

The day you show me a nanite that can self-replicate when put in a petri dish and can otherwise perform ALL other functions of a biological neuron, I will be VERY impressed.
I'm not sure at this point that you really understand what "artificial" means.

'Same difference'? I disagree - much like many AI scientists.
And many others--like the ones I know--do not.

Your opinion amounts to - we only lack a/some magical algorithms that imbues neural networks with intelligence/sentience.
What's magical about it? Even a slide rule won't work unless you know how to use it; "how to use it" is an algorithm you know. There are features used by neurons in signal processing that we don't know how to use yet, so call that "magic" if you want, but it really just amounts to problem solving.

Most scientists in the AI field disagree.
Since you don't KNOW most scientists in the AI field, forgive me if I fail to take your word for it.

You claim that only some revolutionary algorithm, a true elixir of intelligence/sentience is missing.
To say "only" is a bit silly, isn't it? It'd be like saying to Bill Gates in 1979 "Only some kind of miracle algorithm is missing from PCs to make them useful to home users." Since this miracle algorithm turns out to be the entire windows operating system, you'd be correct in a really trivial way. Sort of like describing an airplane as a "miracle chair with wings and a cockpit attached to it."

IT consistently surpassed the predictions (until very recently) in increasing processing power (the number of mathematical calculations per unit of time), while AI crawled along at a snail's pace - without coming even close to achieving intelligence, as defined.
Defined by who? We've made tremendous leaps in AI since the 50s in terms of pattern recognition software, expert systems, problem solving, even logical analysis. What they haven't done is achieved sentience. This isn't that much of an issue since artificial sentience is known to be a ways away, but artificial intelligence has made huge gains over the years, more or less exactly on pace with the larger field of information technology.

You, of course, being on such a close personal basis with "most" scientists in the field, could at least bother to figure out what the field IS.

The Jetsons:
Father - working man in a factory, under an annoying boss.
Mother - housewife.
Two kids.
They are NOT geniuses, rich, etc.
Not rich? A single-family household with a car and two children subsisting on a SINGLE parent's income (a father in a middle management position at that)? Just their income status makes the difference between the Jetsons and around 80% of American families right now; no wonder they live in a condo on the top of a giant pole.
 
The Jetsons:
Father - working man in a factory, under an annoying boss.
Mother - housewife.
Two kids.
They are NOT geniuses, rich, etc.
Not rich? A single-family household with a car and two children subsisting on a SINGLE parent's income (a father in a middle management position at that)? Just their income status makes the difference between the Jetsons and around 80% of American families right now; no wonder they live in a condo on the top of a giant pole.

By the standards of their society, they were a middle-class "working" family.
 
All this speculation about AIs is just a big red herring. It doesn't take AI level sophistication to destroy 100s of 1000s of jobs that people COULD be doing and SHOULD be doing, from the factory floor to the "smart answering systems" on the phone to "self check out" at the local grocery/Wal Mart, all of it replacing human labor with automation. Nice for the shareholders who see increased profits because of "reduced costs", but what about all those people who need to put food on the table and a roof over their head?

Oh, they can be railroaded into the service industry and get increasingly more degrading jobs running call centers and complaint hotlines, helping people buy the right kind of stereo system, or greeting customers at WalMart. In that regard the issue of AI really is a red herring; as Ben Bova once put it, the main reason sentient AI has never been developed is that all of the most demeaning jobs can be more cheaply done by interns and grad students.
 