
Envisioning the world of 2100

That is an almost childlike simplification of the entire process. And you may be unaware that, in the many ways that "intelligence" actually manifests, machines have been more intelligent than humans for almost twenty years now. You are, in fact, conflating "intelligence" with the capacity for creative output, which are not at all the same thing.

When you peel the onion of human decision in the development of technology, the next layer down from creative output is goal-setting and prioritizing, another thing that machines do exceptionally poorly by their very nature. This means that even if we develop AIs that are creative enough to design space ships without human input, they still have to be TOLD to design space ships by humans who set the priorities for what kind of space ship is going to be designed. Humans spend less and less of their energy DOING things and resort to simply DECIDING things and making the machines do all the work.

Consequently, that is an eventual death sentence for industrial society, and ultimately for the machines themselves. Because a machine, by its very definition, is a tool, something wielded by others to do a job. If there is no one to wield it, the machine has no reason to exist and shuts down until someone tells it what to do.

Unless you ask a machine to simulate a human.
Machines can already do that, with varying degrees of fidelity. In the process of which they nevertheless remain machines.

The thing about computers is that they can model reality
... but only to the extent that they UNDERSTAND reality. The fundamental concept in any simulation is "GIGO", meaning "Garbage in, garbage out." In other words, the simulation is only as accurate as the variables being fed into it.

Well we just make sure it's not garbage in, that is all. The human brain is only a finite object; if we simulate enough of it, we reproduce a human personality.

There is a practical upper limit to how accurately any theoretical machine could simulate a human being. But even leaving that aside, even if you could somehow scan a human being down to the subatomic level and feed every bit of that data into a computer, you then run into quantum indeterminacy whereby the computer cannot be entirely certain of the quantum states of that human's nervous system without ALTERING those states in the process. In other words, 100% fidelity is not physically achievable. Practically speaking, even 70% is probably a bit much.

You don't need subatomic precision to simulate a human being, you need only cellular precision; if we move an atom out of place in your body, you are still you. Much of the random molecular movement is not really information; it can be simulated statistically, through the gas laws for instance. A cellular model of the human brain is a lot less complicated than a molecular or subatomic one where every electron has to be tracked in its orbital shell, which is definite overkill. The errors in the approximations you make cancel each other out if the simulation is sufficiently precise. One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere; there are statistical means of tracking air masses and weather fronts. The human brain can be simulated in similar detail.

Now imagine a computer in the form of a humanoid robot simulating a human being: whenever the human in the simulation moves a leg, the robot moves a leg; when the human moves an arm, the robot moves an arm; when the human looks around, the robot looks with its cameras, and what the robot senses is relayed back to the human in the simulation. Thus the sim human sees, feels, tastes, smells, and hears what the robot does, and is effectively the robot. What we have here is an android.
No, what you have there is TELEPRESENCE, which is just an extremely immersive form of teleoperation, which has been around since at least the 1960s. A teleoperated machine is still a machine, even if you choose to call it an android; when you remove the human from the simulation, it ceases to function altogether.

Telepresence is when an actual real human being operates a robot by remote control; if that robot is simulating a human being internally, then it's not real telepresence, as it's not remotely operated but internally operated.
 
^Even the cellular level of detail would be unnecessary once we figure out how brains encode and process information with cells (moving from the physical hardware to an abstraction of a machine, even sets of equations that describe an odd analog circuit, and finally a description of what that circuit does).


We certainly wouldn't try to simulate a computer application by modelling an Intel processor down to the charge state of its transistors (like they were RF amplifiers). That's what we'd do if we had no idea how a transistor worked, or the design rules for digital circuits, and didn't understand microcode, machine language, and algorithms.

Eventually we'll get there, and that will be a wonderful and frightening time, as people play around with machines that begin to show true intelligence, or run simulations of copies of dead people's brains that remember and feel, and try to figure out what really goes on in a cat's brain. Then, of course, we take advantage of the speed of electrons compared to neurons, unlimited storage capacity, networked connectivity, and build our doomsday philosopher who says "42."
 
One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere ... The human brain can be simulated in similar detail.

Ah, so it can be simulated to a rough degree of accuracy, but no better? And you can guess what it will do some of the time, and other times it will behave totally freakishly and unpredictably?

Not a human being I'd want to hang around with. Keep it away from sharp implements.
 
One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere ... The human brain can be simulated in similar detail.

Ah, so it can be simulated to a rough degree of accuracy, but no better? And you can guess what it will do some of the time, and other times it will behave totally freakishly and unpredictably?

Not a human being I'd want to hang around with. Keep it away from sharp implements.
Humans are unpredictable anyway; approximations will just create different random numbers. A jumble of molecules approximated is another jumble; what makes us intelligent is our brain's ability to turn the random numbers it generates into intelligent thought. For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.
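For illustration, here is a minimal Python sketch of that guess-and-narrow idea (the function name and tolerance are made up for this example; it also raises the lower bound when a guess is too small, the variant described later in the thread):

```python
import random

def sqrt_by_guessing(x, tolerance=1e-6):
    """Approximate sqrt(x) by guessing inside a range that shrinks with every guess."""
    low, high = 0.0, max(x, 1.0)        # max() keeps sqrt(x) inside the range when x < 1
    guess = random.uniform(low, high)
    while abs(guess * guess - x) > tolerance:
        if guess * guess > x:
            high = guess                # too big: the guess becomes the new upper bound
        else:
            low = guess                 # too small: the guess becomes the new lower bound
        guess = random.uniform(low, high)
    return guess

print(sqrt_by_guessing(2.0))            # ~1.41421, after a random number of guesses
```

Each run takes a different number of guesses, but it always wanders toward the same answer.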
 
^Even the cellular level of detail would be unnecessary once we figure out how brains encode and process information with cells (moving from the physical hardware to an abstraction of a machine, even sets of equations that describe an odd analog circuit, and finally a description of what that circuit does).


We certainly wouldn't try to simulate a computer application by modelling an Intel processor down to the charge state of its transistors (like they were RF amplifiers). That's what we'd do if we had no idea how a transistor worked, or the design rules for digital circuits, and didn't understand microcode, machine language, and algorithms.

Eventually we'll get there, and that will be a wonderful and frightening time, as people play around with machines that begin to show true intelligence, or run simulations of copies of dead people's brains that remember and feel, and try to figure out what really goes on in a cat's brain. Then, of course, we take advantage of the speed of electrons compared to neurons, unlimited storage capacity, networked connectivity, and build our doomsday philosopher who says "42."
You could have a virtual "holodeck" inside a robot: the "holodeck" simulates the environment around the robot so that there is sensory feedback, and the robot does whatever the virtual simulation does, but in the real world, with the sim based on the sensory inputs from the robot. What you have in essence is a robot run by an AI program. We don't know the details of what's going on in the person's head; all his virtual brain cells interact with each other in a way modeled after a real brain and produce the same result in simulation, if the sim is good enough.
 
One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere ... The human brain can be simulated in similar detail.

Ah, so it can be simulated to a rough degree of accuracy, but no better? And you can guess what it will do some of the time, and other times it will behave totally freakishly and unpredictably?

Not a human being I'd want to hang around with. Keep it away from sharp implements.
Humans are unpredictable anyway; approximations will just create different random numbers. A jumble of molecules approximated is another jumble; what makes us intelligent is our brain's ability to turn the random numbers it generates into intelligent thought. For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.

This stuff about simulating human existence (brains, etc.) is always espoused by people who seem to not know much about actual technology or computer science.
 
Well we just make sure it's not garbage in, that is all. The human brain is only a finite object; if we simulate enough of it, we reproduce a human personality.
1) How much is "enough of it" and how do you collect that data without altering it in the process?
2) Even in that case it isn't a reproduction as much as a simulation. Remove the stored variables, the simulation immediately ends.

You don't need subatomic precision to simulate a human being, you need only cellular precision
Except that human beings are subject to all kinds of microscopic influences, particularly genetic, hormonal, biochemical and neurological, all of which involve chemical interactions at the molecular, sub-molecular and -- yes -- subatomic scale.

Although, I agree that the whole point of a simulation is to APPROXIMATE something with a reasonable amount of fidelity, usually with the intention of reproducing the behavior of a thing without actually having the thing itself. In which case, the simulated behavior of, say, a non-player character in Call of Duty 3 is no different from a robot running a simulation of a human brain. The only difference is realism.

One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere
And the computer models used to predict weather patterns have huge margins for error as a result. For similar reasons, there is not now nor will there ever be a computer model capable of making accurate predictions for turbulent-flow fluid dynamics.

The human brain is such a complex organ that an error margin of around 30% can cause its basic processes to collapse altogether. It's not even that it would cause the brain to behave in ways uncharacteristic of a human, it would cease to function effectively in any way shape or form and exhibit something not totally unlike debilitating epilepsy.

Telepresence is when an actual real human being operates a robot by remote control; if that robot is simulating a human being internally, then it's not real telepresence, as it's not remotely operated but internally operated.
If it's not expressing the actions of a real person, then it's not expressing live human consciousness, not even vicariously. It is -- in a word -- SIMULATING human consciousness. The purpose of a simulation is to reproduce the behavior of a thing without actually reproducing the thing itself. In other words, a thing that acts like a person without actually BEING a person.

Put the Blade Runner scenario aside and think of the actual MOVIE. Harrison Ford is an actor; as such, a few scenes at a time, he is SIMULATING an individual named Rick Deckard. Like most actors, he does this by internalizing -- or otherwise inventing -- characteristics about Deckard's background, his motivations, his emotional states, his fears, his desires, his pain threshold, etc. He is simulating a completely different person who otherwise does not really exist, and because this is Harrison Ford we're talking about, he's doing a damn good job of it.

But no matter how well an actor plays the role, the role remains fictional. Harrison Ford does not forget who he is and BECOME Rick Deckard, nor would he even if he were forced to play this role all day every day for years at a time. Only the BEHAVIOR has any realism to it; the character and personality of Rick Deckard do not.

To complete this analogy: take an android with some stupendous personality simulation software, sublimely designed so that you can feed it some information about a specific person and it will behave very close to the way that person would behave in real life. You program this android with Rick Deckard's fictional life story, details about his career, his love life, his emotional states, his future aspirations, even some supplemental details about the world he lives in for context. The android extrapolates this data and produces behavior accordingly. Meanwhile, your EEG is picking up a staggering amount of electrical activity in the android's CPU as it works through its scenes and produces the needed behavior.

Is that electrical activity the mind of Rick Deckard? No it is not. Because "everything we know about Rick Deckard" has been reduced to an equation, the android's brain is the calculator, and its behavior is the output of the equation.
 
^Even the cellular level of detail would be unnecessary once we figure out how brains encode and process information with cells (moving from the physical hardware to an abstraction of a machine, even sets of equations that describe an odd analog circuit, and finally a description of what that circuit does).
But then we wouldn't need simulations; we could get much better results by building synthetic brains that behave on a hardware level fairly similar to the real McCoy.

One can for instance predict the weather without simulating the path of every molecule and atom in our atmosphere ... The human brain can be simulated in similar detail.

Ah, so it can be simulated to a rough degree of accuracy, but no better? And you can guess what it will do some of the time, and other times it will behave totally freakishly and unpredictably?

Not a human being I'd want to hang around with. Keep it away from sharp implements.
Humans are unpredictable anyway; approximations will just create different random numbers.
GIGO, remember? Garbage in, garbage out. If you feed incomplete or inaccurate data into the computer, you get an incomplete or inaccurate simulation. In a computer program, this can be as little as a decimal point out of place, a value being negative that should be positive, a recursive process that shouldn't be recursive, or two logical statements that contradict in a hard-to-see way.

Keep in mind that "incomplete and inaccurate" for a SIMULATION is different than it would be for a human; you could end up with a simulation that does nothing but laugh uncontrollably 24 hours a day, or a simulation that says the same three word phrase every time you blink at it, or a simulation that twitches from time to time in random directions but otherwise doesn't move.

Bad data means a bad simulation: garbage in, garbage out.
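To make the "decimal point out of place" point concrete, here is a toy sketch (the growth scenario and numbers are invented purely for illustration): the same loop run with a correct rate and with the decimal slipped one place.

```python
def grow(balance, rate, years):
    """Compound growth: one bad input value poisons every subsequent step."""
    for _ in range(years):
        balance *= (1.0 + rate)
    return balance

correct = grow(1000.0, 0.05, 30)   # the intended 5% annual rate
garbage = grow(1000.0, 0.5, 30)    # decimal point out of place: 50% instead of 5%
print(round(correct, 2))           # ~4321.94
print(round(garbage, 2))           # roughly 1.9e8 -- garbage in, garbage out
```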

For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.
And using this method in an actual calculator means the calculator will produce the square root of a number only 2% of the time. This gets even worse if you use a formula that includes a square root; it will literally NEVER get the right answer no matter how many times you punch in the formula, because all of its calculating processes are based on trial and error randomness.
 
A child born today will only be 88 years old in the year 2100. It’s time to start thinking and caring about the twenty-second century now. The next 88 years may see changes that come exponentially faster than the previous 88 years...

The child won't be born... I'm extremely pessimistic about humanity making it through the next 50 years, let alone the next hundred...

M
 
^Even the cellular level of detail would be unnecessary once we figure out how brains encode and process information with cells (moving from the physical hardware to an abstraction of a machine, even sets of equations that describe an odd analog circuit, and finally a description of what that circuit does).
But then we wouldn't need simulations; we could get much better results by building synthetic brains that behave on a hardware level fairly similar to the real McCoy.

Well, I guess that would be the point, along with perhaps mapping a human brain onto the new hardware so people think they're upgrading to a new, improved physicality.

Even a fairly rough copy would probably do fine, since people are constantly forgetting, misremembering, getting hammered, and getting in wrecks and we still have little trouble accepting the continuity of their existence. For most, it would probably be less of a behavioral change than sobering up, finding Jesus, or surviving a date with Lindsay Lohan.

One step in this process might be getting a good enough copy to run, after which the copy could be monitored with vastly more precision than something like an MRI scan of the brain. Then we could start experimenting with how thinking actually works, moving up a layer of abstraction so we can design a better brain, or understand how to just add knowledge to an existing one at the core level, instead of trying to "teach" it via external sensory inputs, leading to "I need a pilot program for a B-212 helicopter."
 
For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.
And using this method in an actual calculator means the calculator will produce the square root of a number only 2% of the time. This gets even worse if you use a formula that includes a square root; it will literally NEVER get the right answer no matter how many times you punch in the formula, because all of its calculating processes are based on trial and error randomness.

What?

I know it varies by calculator, but the gist of what Mars says is how calculators perform square roots. They find something approximate and then refine to a specific level. It's not correct only 2% of the time, and I'm not really sure where you even get that percentage from.
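For what it's worth, the usual "find something approximate and then refine it" routine is Newton's (Heron's) method rather than random guessing; a minimal sketch, with the starting guess and iteration count chosen arbitrarily:

```python
def refine_sqrt(x, steps=6):
    """Heron's / Newton's method: average the guess with x / guess."""
    guess = x / 2.0 if x > 1.0 else 1.0   # crude first approximation
    for _ in range(steps):
        guess = 0.5 * (guess + x / guess)
    return guess

print(refine_sqrt(2.0))   # 1.414213562..., accurate after only a handful of steps
```

Each pass roughly doubles the number of correct digits, which is why hardware and library routines prefer this kind of refinement over trial and error.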
 
For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.
And using this method in an actual calculator means the calculator will produce the square root of a number only 2% of the time. This gets even worse if you use a formula that includes a square root; it will literally NEVER get the right answer no matter how many times you punch in the formula, because all of its calculating processes are based on trial and error randomness.

What?

I know it varies by calculator, but the gist of what Mars says is how calculators perform square roots. They find something approximate and then refine to a specific level. It's not correct only 2% of the time, and I'm not really sure where you even get that percentage from.

Each time the computer generates a random number, it's from a narrower range of numbers than when the previous number was selected. You find out whether the number is too high or too low by squaring it, i.e. multiplying it by itself; if it's too high then the previously selected number becomes the upper end of the range of possible numbers, and if it is too low then it becomes the lower end. A human brain is not a very good numerical calculator.
 
Well, most calculators find the square root of X as e^(0.5*ln(X)), where e^x and ln(X) are computed using a Taylor series expansion or some other efficient algorithm. Intel processors speed up trig functions by using a look-up table for a close initial guess so the series converges much more rapidly.

I don't know of any processor or compiler that would use a random number generator for common math functions, because random numbers are somewhat expensive to generate (unless done in hardware) and using them would make execution times harder to predict.
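The identity mentioned above is easy to check; a quick sketch, using Python's math library as a stand-in for the series approximations the hardware or compiler would use:

```python
import math

def sqrt_via_logs(x):
    """sqrt(X) = e^(0.5 * ln(X)); exp and log are what the library/hardware approximates."""
    return math.exp(0.5 * math.log(x))

print(sqrt_via_logs(2.0))   # ~1.4142135623730951
print(math.sqrt(2.0))       # the dedicated routine, for comparison
```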
 
Well, most calculators find the square root of X as e^(0.5*ln(X)), where e^x and ln(X) are computed using a Taylor series expansion or some other efficient algorithm. Intel processors speed up trig functions by using a look-up table for a close initial guess so the series converges much more rapidly.

I don't know of any processor or compiler that would use a random number generator for common math functions, because random numbers are somewhat expensive to generate (unless done in hardware) and using them would make execution times harder to predict.

The main difference is that humans have to learn to do math, and with computers that is an innate part of their function. If a computer is to think like a human, it needs to be bad in math, as all this mathematical precision is expensive in energy terms. Humans don't usually do e^(0.5*ln(X)) when trying to figure out the square root of some number. One way is to pick a number between 0 and X and multiply that number by itself; if the result is greater than X, then that number becomes the upper bound for the next number you pick and 0 is the lower bound; otherwise the number picked becomes the lower bound for the next number picked and X is the upper. If one keeps following that algorithm, one gets fairly close to the actual square root fairly quickly. The brain does a lot of its thinking by guesses or random numbers, and often times a quick decision is better than a mathematically precise answer which is what computers are used for.
 
^Even the cellular level of detail would be unnecessary once we figure out how brains encode and process information with cells (moving from the physical hardware to an abstraction of a machine, even sets of equations that describe an odd analog circuit, and finally a description of what that circuit does).
But then we wouldn't need simulations; we could get much better results by building synthetic brains that behave on a hardware level fairly similar to the real McCoy.

Well, I guess that would be the point, along with perhaps mapping a human brain onto the new hardware so people think they're upgrading to a new, improved physicality.

Even a fairly rough copy would probably do fine, since people are constantly forgetting, misremembering, getting hammered, and getting in wrecks and we still have little trouble accepting the continuity of their existence. For most, it would probably be less of a behavioral change than sobering up, finding Jesus, or surviving a date with Lindsay Lohan.

One step in this process might be getting a good enough copy to run, after which the copy could be monitored with vastly more precision than something like an MRI scan of the brain. Then we could start experimenting with how thinking actually works, moving up a layer of abstraction so we can design a better brain, or understand how to just add knowledge to an existing one at the core level, instead of trying to "teach" it via external sensory inputs, leading to "I need a pilot program for a B-212 helicopter."
I imagine it would be similar to the project to map the human genome. If you find a way to quantify patterns of human thought -- some formal system of memetics, let's say -- you could probably develop a baseline for digital capture or transfer of thought patterns from one person to another.

One thing to consider, though, is that human beings have different kinds of memory that are stored different ways. Your pilot program for the B-212 would probably be downloaded as a set of memories copied from an actual helicopter pilot; you suddenly remember taking three years of pilot training with five years flying gunships in 'Nam. But since you've never BEEN to Vietnam and you don't know what the instructor looks like, your memory will vary slightly from the actual pilot they were copied from; you're mapping new data on top of old and the old data gives (wrong) context to the new.


For example, one way to calculate the square root of a number is to generate a random number between zero and the number to be square-rooted; if the square of the random number selected turns out to be larger than the original number, you select a new random number between zero and the previous number, and you keep on doing this until you come to a close approximation of the square root of the number.
And using this method in an actual calculator means the calculator will produce the square root of a number only 2% of the time. This gets even worse if you use a formula that includes a square root; it will literally NEVER get the right answer no matter how many times you punch in the formula, because all of its calculating processes are based on trial and error randomness.

What?

I know it varies by calculator, but the gist of what Mars says is how calculators perform square roots. They find something approximate and then refine to a specific level.
Incorrect. Calculators produce their results by logical relationships hard-wired directly into their circuitry. Basically, it's a series of voltage gates that physically play out the AND/OR/NAND/NOR/etc. logical operations. There's nothing "random" about it; it's essentially a conversion from one data type (binary/boolean) to a more easily readable one (base ten decimal).

Software-based calculators (JavaScript, for example) are even simpler, since they can perform logical operations on whole numbers without resorting to boolean relationships (although, deep down, that's what computers are doing when they run JavaScript anyway).
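As a toy illustration of "results by logical relationships": a one-bit full adder written purely with AND/OR/XOR operations and chained into a ripple-carry adder, roughly the way adder circuitry is built (a Python mock-up, not how any particular chip is actually wired):

```python
def full_adder(a, b, carry_in):
    """One-bit full adder expressed only as XOR/AND/OR gate logic."""
    total = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return total, carry_out

def add(x, y, width=8):
    """Ripple-carry addition: chain full adders from the least significant bit upward."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add(23, 42))   # 65, produced entirely by gate-level logic
```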
 
The main difference is that humans have to learn to do math, and with computers that is an innate part of their function. If a computer is to think like a human, it needs to be bad in math...
This is fundamentally impossible for digital computers; their most basic data processes are purely mathematical in nature, and the only way they are able to interact with humans at all is by the grace of god and some clever programmers who designed algorithms to convert those processes into data that is meaningful to the decidedly non-mathematical beings who use them.

One of the reasons for this is that the human thought engine is analog in nature, not digital. We process thoughts by chemical and electrochemical reactions of various strength, duration and timing. IOW, our brains operate in a way fundamentally different from a digital computer, and therefore even if the computer were to simulate the output of a human brain, it can only do so by employing some sort of incredibly complex mathematical algorithm.

The brain does a lot of its thinking by guesses or random numbers, and often times a quick decision is better than a mathematically precise answer which is what computers are used for.
Except the computer invariably comes up with a precise answer hundreds of times faster than a human can come up with an approximation. Like, for example, the Janken robot, which is able to beat humans in Rock Paper Scissors 100% of the time because it can read your hand motions and figure out which one you're going to pick within a microsecond and act accordingly. Humans couldn't use that kind of algorithm; they're just not that fast or that intelligent, but machines have no difficulty with these kinds of tasks.

Which leads me to wonder if you've really thought through the UTILITY of creating a machine that thinks like a human. Machines can ALREADY do everything better than us and are presently limited only by the software available to drive their activities. Any task that requires human thought might as well be performed by an actual human (we've already got plenty of those), and any task that humans don't need to be bothered with could be done more efficiently by a non-thinking machine running a program (the programming is almost certainly easier in this case as well). At the end of the day, the only real utility of developing a MACHINE that thinks like a human is to produce a group of "sort of people" who can do a lot of work for you without the hassle of having to pay them anything. This niche in society is currently filled by immigrants, convicts, and graduate students, whom -- again -- we have in abundance.
 
I imagine it would be similar to the project to map the human genome. If you find a way to quantify patterns of human thought -- some formal system of memetics, let's say -- you could probably develop a baseline for digital capture or transfer of thought patterns from one person to another.

And we can surmise that there must be some sort of underlying system at work, given that the X,Y,Z coordinates of your axons and my axons, and their activation potentials, are not going to be the same except at a random level, yet we can both watch an episode of a show, recount almost identical storylines for it, and agree on everything from arcane trivia to the vast literary context it's embedded in. Somehow we have systems that can use a million different physical implementations to implement the same functions (kind of like having a million ways to route a signal from A to B, where the actual routing is irrelevant).

One thing to consider, though, is that human beings have different kinds of memory that are stored different ways. Your pilot program for the B-212 would probably be downloaded as a set of memories copied from an actual helicopter pilot; you suddenly remember taking three years of pilot training with five years flying gunships in 'Nam. But since you've never BEEN to Vietnam and you don't know what the instructor looks like, your memory will vary slightly from the actual pilot they were copied from; you're mapping new data on top of old and the old data gives (wrong) context to the new.

Very true. In early attempts, the required knowledge and muscle memory would be scattered all over the brain, such as the importance of glancing at the rate-of-climb indicator and oil-pressure-gage when doing X, and then reflexively checking your rotor clearance, because of a near-fatal experience in Vietnam when you were trying to impress the kids who sold you a Coke when you were talking to their teacher about .... Visual memories, auditory memories, muscle memories, mental models of physics, flight, arm motions, eye motions, gage locations, etc.

And then, when you figure out what the behavior needs to be -- filtering out the particular memories or circumstances that led to the knowledge, skill, or habit to find the underlying pattern that should be common to anyone skilled at the maneuver -- you should be able to download skills without most of the memories, other than perhaps the abstract (balloon-animal body, not Charlize Theron) memories required for pattern and sequence recognition, the way you listen to a mix tape so long that you spend years expecting a song to always lead in to the one that followed it on your tape.

BTW, Trinity asked for a pilot program for a B-212, but Tank downloaded the pilot program for a B-206 (says a blooper site). That probably explains why she ended up dangling from a fire hose. :D
 
In early attempts, the required knowledge and muscle memory would be scattered all over the brain, such as the importance of glancing at the rate-of-climb indicator and oil-pressure-gage when doing X, and then reflexively checking your rotor clearance, because of a near-fatal experience in Vietnam when you were trying to impress the kids who sold you a Coke when you were talking to their teacher about .... Visual memories, auditory memories, muscle memories, mental models of physics, flight, arm motions, eye motions, gage locations, etc.
Well, procedural memory is actually pretty robust; it doesn't have the same complex relational structure that episodic memory does. This is one of the reasons people are still able to do some things halfway competently while sleepwalking, for example. They won't remember doing it -- that part of the brain is largely shut down at the time -- but they'll manage to accomplish some fairly complex tasks without being conscious of having done so. I've even had the experience myself: when I was in college, my dumbass instructor had us sub-netting IP addresses by hand for three hours straight. I fell asleep halfway through a worksheet... and woke up with the worksheet completed. Of course the answers were all wrong (I wrote the same address for all ten of them), but it was written legibly enough that I might as well have been awake.

Disentangling learned procedural memory from its relational background can be tricky, though. There are a lot of things you know how to do that you don't really remember learning (typing on a keyboard, for instance). But there are other things you know from experience, and the experiences themselves are a factor in your skills (ever try to spell a word and catch yourself thinking "I before E except after C"? I do, for some reason always in the voice of Linus Van Pelt). An even better example is in driving a car: everyone has that one weird habit as a driver that originates from something they experienced or something they were taught, and they remember more or less how they were taught and why they do it that way, if only in the vaguest sense. Procedural memory is, in that way, modified by experiences stored in episodic memory.

The good news is you can probably extract discrete episodes completely independent of the larger context. You could probably transfer a memory of arguing politics with a really smart and admirable man in a dark room somewhere, though you wouldn't remember exactly what the argument was about, where the room was, how you got there or what happened when you left. But if the substance of the conversation was "... and that's why it would be a good idea to bring $5,000 to the corner of North and Madison on August 5th, 2013," you may rightly begin to suspect that that particular memory is a plant.

ETA: I just remembered, that was basically the premise for the movie "Dark City." I always remember that part where John's flashing back through his childhood and he suddenly remembers his teacher saying "You're probably wondering why I keep appearing in your memories. That's because I have implanted myself in them!" STILL one of the most awesome movies ever made.
 
Well, most calculators find the square root of X as e^(0.5*ln(X)), where e^x and ln(X) are computed using a Taylor series expansion or some other efficient algorithm. Intel processors speed up trig functions by using a look-up table for a close initial guess so the series converges much more rapidly.

I don't know of any processor or compiler that would use a random number generator for common math functions, because random numbers are somewhat expensive to generate (unless done in hardware) and using them would make execution times harder to predict.

The main difference is that humans have to learn to do math, and with computers that is an innate part of their function. If a computer is to think like a human, it needs to be bad in math, as all this mathematical precision is expensive in energy terms. Humans don't usually do e^(0.5*ln(X)) when trying to figure out the square root of some number. One way is to pick a number between 0 and X and multiply that number by itself; if the result is greater than X, then that number becomes the upper bound for the next number you pick and 0 is the lower bound; otherwise the number picked becomes the lower bound for the next number picked and X is the upper. If one keeps following that algorithm, one gets fairly close to the actual square root fairly quickly. The brain does a lot of its thinking by guesses or random numbers, and often times a quick decision is better than a mathematically precise answer which is what computers are used for.

At this point I have to conclude you know absolutely nothing about actual computer science, so maybe you should refrain from commenting on how to simulate human brains (or anything else, really.)
 
The main difference is that humans have to learn to do math, and with computers that is an innate part of their function. If a computer is to think like a human, it needs to be bad in math...
This is fundamentally impossible for digital computers; their most basic data processes are purely mathematical in nature, and the only way they are able to interact with humans at all is by the grace of god and some clever programmers who designed algorithms to convert those processes into data that is meaningful to the decidedly non-mathematical beings who use them.

One of the reasons for this is that the human thought engine is analog in nature, not digital. We process thoughts by chemical and electrochemical reactions of various strength, duration and timing. IOW, our brains operate in a way fundamentally different from a digital computer, and therefore even if the computer were to simulate the output of a human brain, it can only do so by employing some sort of incredibly complex mathematical algorithm.

The brain does a lot of its thinking by guesses or random numbers, and often times a quick decision is better than a mathematically precise answer which is what computers are used for.
Except the computer invariably comes up with a precise answer hundreds of times faster than a human can come up with an approximation. Like, for example, the Janken robot, which is able to beat humans in Rock Paper Scissors 100% of the time because it can read your hand motions and figure out which one you're going to pick within a microsecond and act accordingly. Humans couldn't use that kind of algorithm; they're just not that fast or that intelligent, but machines have no difficulty with these kinds of tasks.

Which leads me to wonder if you've really thought through the UTILITY of creating a machine that thinks like a human. Machines can ALREADY do everything better than us and are presently limited only by the software available to drive their activities. Any task that requires human thought might as well be performed by an actual human (we've already got plenty of those), and any task that humans don't need to be bothered with could be done more efficiently by a non-thinking machine running a program (the programming is almost certainly easier in this case as well). At the end of the day, the only real utility of developing a MACHINE that thinks like a human is to produce a group of "sort of people" who can do a lot of work for you without the hassle of having to pay them anything. This niche in society is currently filled by immigrants, convicts, and graduate students, whom -- again -- we have in abundance.
Even if you don't "pay" the computer anything, a computer still costs something to run and maintain; a computer wanting to be paid is just taking over the responsibility of self-maintenance itself. This is sort of like the difference between a slave and an employee: the slave doesn't get paid, but he still needs to be fed and sheltered in order to do work. A computer is a tool, but if you ask it to emulate a human, then one of the things it will do while emulating a human is ask to be paid in exchange for the work it does, and it will demand to be treated just as any other human would. To properly emulate a human, the computer will probably generate a lot of random numbers and use feedback and learning to develop ways of dealing with the real world. The problem with computers is that, in the way they are usually used as calculating machines, they execute certain algorithms, and we don't have an algorithm for common sense besides trial and error. A computer needs to have the ability to learn and develop common sense and intuition.
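As a crude sketch of "trial and error plus feedback", here is an epsilon-greedy bandit in Python: it guesses at random some of the time and otherwise repeats whatever has paid off best so far. The scenario and numbers are invented, and this illustrates the general idea only, not how brains or any particular AI actually work:

```python
import random

def epsilon_greedy(payoff_rates, trials=10000, epsilon=0.1):
    """Learn which option pays off best purely from trial, error, and feedback."""
    counts = [0] * len(payoff_rates)
    totals = [0.0] * len(payoff_rates)
    for _ in range(trials):
        if random.random() < epsilon or 0 in counts:
            choice = random.randrange(len(payoff_rates))      # explore: random guess
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            choice = averages.index(max(averages))            # exploit: best so far
        reward = 1.0 if random.random() < payoff_rates[choice] else 0.0  # noisy feedback
        counts[choice] += 1
        totals[choice] += reward
    return counts

# Three options that secretly pay off 20%, 50%, and 80% of the time.
print(epsilon_greedy([0.2, 0.5, 0.8]))   # most trials end up on the 80% option
```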
 