What are your top 5 technologies of the next 15 years?

Honestly, iPads seem like a great advancement for mankind. We could end up just using them as our full-time OS.
 

The iPad is probably undervalued as an extension of our intelligence: it stores information and so expands on human memory, lets you connect wirelessly, and, used properly, supports the things we expect of well-rounded, intelligent human beings, such as reading and listening to music, all while fitting in your hand. Many of the things I talk about will immerse you in that kind of information, and likely change not just thought but patterns of thought... so think of the iPad as an early wifi brain interface.

I posted this before..;)

Hawking's ibrain:

http://www.devicemag.com/2012/06/25/ibrain-to-hack-into-stephen-hawkings-brain/

So here's my unscientific (though researched) view of what's possible:

Singularity..the actual moment when machine AGI is smarter than us before 2050, based on mathematics and models: 100%
Singularity before 2050...as a paradigm shift for all humanity as described by proponents: 75%
Computer passing Turing test before 2030: 95%
Brain uploading before 2040: 100%
Other transhuman tech common before 2040 including brain "downloading"(like matrix): 100%
Nanotech assemblers common before 2045: 90%
Nanotech material becoming common before 2030: 100%
Foglet technology before 2050: 80%
Possibility a singularity leads to takeover by machines AGI: 50%
Possibility of an artilect war: 40%
Untethered human acting robots or androids before 2040: 100%
Renewable energy technology taking the lead over traditional energy resources by 2040: 75%
Solar power satellites before 2040: 30%
Fusion power: 10 fusion plants by 2050: 90%
Pollution control through biotech and nanotech at a high level before 2040: 90%
Genetically customized drugs common before 2025: 90%
Water scarcity post 2020: 0%
Farming technologies ending hunger before 2040: 90%
Mission to Mars before 2035: 50%
Asteroid mining before 2040: 40%
Mission to another star by 2100... most likely by von Neumann machines: 90%

RAMA

BTW write these down folks, especially you younger people.
 
Sorry, but the issue was directly addressed. You stated simply that exponentials don't continue indefinitely, to which I reply: this is true, but they develop to the point where a new paradigm takes over, and this is not fantasy; there are already 5 demonstrably true paradigms that have taken place, and Moore's Law is the 5th.
Apart from the fact that you have essentially conceded that Moore's law is unlikely to continue exponential growth indefinitely, this still ignores the fact that the next paradigm may or may not have anything at all to do with computer technology. If it is a shift in, say, nanotechnology (and it probably will be) the result would be another logistic curve, this time for mass production capacity; the same industrial products could be produced faster and faster by increasingly smaller and smaller manufacturing machines; by the time the curve starts to level off for the next paradigm shift, you start to get industrial machines the size of skittles that can eat a pile of sawdust and spit out a kitchen table.

The new paradigm wouldn't extend Moore's law to microprocessors at all; once computer technology hits its plateau stage, it cannot really be improved further (it won't get any smaller or faster or more powerful than it already is), but in the new paradigm the same computer can be manufactured considerably faster/easier/in larger numbers and for far smaller expense.

It is also true that exponentials are not infinite
If it's not infinite then it is, by definition, not exponential.

More importantly, without knowing exactly when the curve will begin to flatten out at saturation point, it's difficult to predict exactly where the technology will end up, especially since all other social/political/economic/military factors are still difficult to nail down. The point of diminishing returns has potential to sneak up on you unexpectedly if it involves factors you had previously ignored or judged unimportant just because you assumed they would be eventually mitigated.

How does this skirt the issue in any way?
Because you're assuming the paradigm shift renders the flattening curve irrelevant. That's an assumption without a basis; it's entirely possible that scientists will make a breakthrough with quantum computers in the next thirty years, after which it begins to become exponentially more difficult to make any advancements at all.

So it does indeed show the main thrust of the curve(s) still continue... but not necessarily for computers.

The third is Christopher's suggestion (supported by several software posters from this board) that software has not kept pace with this info curve, which is also demonstrably untrue based on the two articles I posted.
The articles demonstrate nothing of the kind. Software HASN'T kept up with those advances, for the specific reason that software engineers develop applications based on the end user's needs, NOT on the available processor power of the platform running it.

IOW, software isn't SUPPOSED to keep pace with processing power; processing power is a potential resource that engineers can exploit when demand for new capabilities begins to manifest, but in the end, those applications are driven by consumer demand first and foremost and technical capacity second.

Conclusion: the criticism about exponentials not being natural law, or being finite, in info tech (and by extension anything that becomes an infotech) is not valid.
Nobody made that criticism, RAMA. The criticism from the get go was that the expanding curve engendered in Moore's law is unlikely to continue indefinitely, primarily because the exponential curve looks exactly like a logistic curve until the point where it starts to level off.

And there IS, in fact, an upper limit to how far microprocessors can be miniaturized or enhanced, especially once you get down to quantum computers and molecule-sized transistors.
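
To make the "looks exactly like a logistic curve" point concrete, here's a minimal Python sketch (parameters invented purely for illustration) that prints an exponential next to a logistic curve sharing the same starting value and early growth rate:

```python
import math

# Hypothetical parameters, chosen only for illustration: both curves start
# at 1 and roughly double per time step early on; the logistic has a ceiling.
r = math.log(2)   # growth rate (doubling per step)
K = 1000.0        # saturation ceiling for the logistic curve

def exponential(t):
    return math.exp(r * t)

def logistic(t):
    # Same starting value (1) and early growth rate as the exponential,
    # but bounded above by K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(16):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:9.1f}  logistic={s:7.1f}  ratio={e/s:6.2f}")
```

For the first half of the run the two track each other closely; they only separate once the logistic nears its ceiling, which is why a flattening curve can sneak up on an extrapolation.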

The proof I cite for software's exponential comes from an industry report as well as a government report.
But you're conflating hardware and software as if they were the same thing. They are not, not even close. Hardware can be considered a virtual vessel in which to contain data and overlapping processes devoted to a specific task, which in turn enables larger and more sophisticated software applications to fill that vessel. But it is ALSO true that a larger number of smaller applications can be simultaneously run on the same hardware that wouldn't have been possible otherwise; the exponential growth in computer power would NOT, in that case, lead directly to an exponential growth in software capability, as the applications themselves could follow a more linear progression by very small increases in capability spread out over a much larger number of applications.
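
As a toy illustration of that last point (the numbers below are made up, not industry data): if hardware capability doubles each generation while per-application capability only creeps up linearly, the exponential surplus shows up as more applications rather than exponentially better ones.

```python
# Toy numbers only: hardware capability doubles each generation, while the
# capability of a typical application creeps up linearly; the exponential
# surplus shows up as *more* applications rather than exponentially better ones.
total = 1.0
for gen in range(10):
    per_app = 1.0 + gen          # linear per-application improvement
    apps = total / per_app       # how many such applications the hardware can host
    print(f"gen {gen}: hardware x{total:7.1f}, per-app x{per_app:4.1f}, apps ~{apps:7.1f}")
    total *= 2
```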

This is most obvious in the issue of digital storage. Flash memory and nonvolatile storage devices may eventually outperform hard drives by a considerable margin, but that DOES NOT mean that all future media formats will be pigeonholed into HD quality just because more systems can handle their storage and playback. Quantity as well as quality will increase, and depending on user needs, it may be the former more than the latter.

This has very serious implications for AI and therefore the singularity (see below).

It is very far from a one-dimensional development, and as some of our conversations revolved around this, I'm surprised you're even bringing this up again; or maybe you didn't realize why I was establishing those conditions allowing for the change.
I bring it up again because you failed to address, in every single case, the fact that the POTENTIAL for change in no way implies the APPROACH of change. Again, the issue here is that you are very easily impressed by pop-sci articles and have a tendency to accept (and in some cases, to volunteer yourself) the most optimistic projections of those technologies based purely on a best-case scenario. You essentially live in a world where inventors never go bankrupt, where startup companies never fail, where great ideas never get pushed to the wayside, where Cisco never shut down the entire Flipcam production line just because they were bored.

The sole basis for the singularity is a projection on the future capabilities of Expert Systems. Put very simply, the Singularity is what happens when expert systems gain the capability to design improved copies of themselves without human intervention; machine intelligence becomes superior to human intelligence to the point that humans no longer control the developmental process (hence it is a Singularity by analogy to a Black Hole: you cannot see beyond the event horizon represented by the Expert System because it is impossible to make meaningful predictions about the value system or decision-making process of such a system). Singularity theory assumes the exponential growth curve is either indefinite or will continue long enough to bring this about.

In the first place, as I and others have pointed out, this is a flawed assumption because the exponential growth of hardware has an inherent upper limit that we may be approaching more rapidly than you think. In the second place -- and vastly more importantly -- software development is driven by user needs, NOT by hardware capabilities. I have myself pointed out on MANY occasions, AIs and robots are capable of replacing humans in virtually any task you can think of, provided the right software and hardware specializations are developed; even the self-improving Expert System would be a more efficient software engineer than the best human in the industry. The thing is, none of these tasks would gain any benefit from machine SENTIENCE, as even the Expert System doesn't need to have any semblance of self-awareness, self-motivation or the ability to make abstract value judgements in order to effectively analyze the needs of end users and construct software applications accordingly. In fact, sentience would almost certainly make it LESS useful, as the ability to think beyond the scope of its task would be a distraction to eat up a significant portion of its (admittedly huge) processing power.

My overall point is that your projections of singularity theory are basically a combination of jubilant optimism of all things technical, combined with reading way too much sensationalist literature without thinking critically about how that process would actually take place.

As part of this info availability change, I don't just have to stick with magazines that are months out of date; I get multiple feeds of info, especially on technological change, right to my smartphone, literally thousands of articles through apps, email, etc.
We noticed.
 
1) Autonomous cars. Maybe not fully like we saw in Demolition Man (and countless other movies) but we are already headed there.

2) Flexible, foldable display screens/touch interfaces. Your iPhone will be a pen and you'll unroll the screen out of it.

3) Your phone will be your wallet. All of your cards, everything, will be digitally stored on your phone and accessed via RFID.

4) You will control your iTV by talking to it.

5) Your computer will be able to satisfy you sexually...
1. No thanks
2. Touch screen interfaces are stupid
3. No thanks, I'd rather pay with cash if I'm buying something in person
4. No I won't
5. It already does, if internet porn counts?

Also, smart phones and tablets are stupid.
 
^ Actually, smartphones and tablets would be pretty neat if they weren't... well, smartphones and tablets.

I'm using a MacBook Air right now, as it is the only computer I own; I also have an iPad my dad gave me for Christmas last year and a 5 year old iPod touch. That's three devices I have where there should only be one.

If I could merge the MacBook and the iPad, it would be absolutely perfect; say, a touchscreen for when you need a tablet, and also have a wireless keyboard in the case for when you need a laptop.

Just seems to me tablets would be a lot more useful if developers gave you the option of using them as regular computers if that's what you really need, or switching seamlessly into "mobile mode" or something.
 
Something like the Surface, only with a better keyboard and with better, less locked-in hardware/software, would definitely be neat, but pointless when you consider that ‘Ultrabook’-type laptops are just as portable (albeit not as convenient, as laptops can only be used when they're opened up or have a mouse/keyboard/screen plugged into them) and have more room inside to cool better hardware.

A decent laptop and a decent phone are far better than a tablet, anyway. If you have things set up properly, you can just set up some kind of ad hoc wireless connection (with IPSec over the top of it) between the laptop and the phone and use it to keep your data synced over some kind of network filesystem/file transfer protocol or some proprietary syncing system (screw using "cloud" services, seriously), and to get internet on the laptop (without having to go through dirty, insecure public wifi).
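
For what it's worth, the "proprietary syncing system" part doesn't have to be exotic. Here's a minimal Python sketch of a one-way mirror, assuming (hypothetically) that the phone's storage is already exposed as a mounted path over the ad hoc, IPSec-protected link; the paths below are placeholders, not a real setup:

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical mount points: assumes the phone's storage is already reachable
# as an ordinary filesystem path over the ad hoc (IPSec-protected) link.
PHONE_DIR = Path("/mnt/phone/sync")
LAPTOP_DIR = Path.home() / "sync"

def digest(path: Path) -> str:
    """Hash a file so unchanged files can be skipped."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: copy new or changed files from src to dst."""
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists() and digest(target) == digest(f):
            continue  # unchanged, skip
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        print(f"copied {f} -> {target}")

if __name__ == "__main__":
    mirror(PHONE_DIR, LAPTOP_DIR)
```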
 
1) Autonomous cars. Maybe not fully like we saw in Demolition Man (and countless other movies) but we are already headed there.

2) Flexible, foldable display screens/touch interfaces. Your iPhone will be a pen and you'll unroll the screen out of it.

3) Your phone will be your wallet. All of your cards, everything, will be digitally stored on your phone and accessed via RFID.

4) You will control your iTV by talking to it.

5) Your computer will be able to satisfy you sexually...
1. No thanks
2. Touch screen interfaces are stupid
3. No thanks, I'd rather pay with cash if I'm buying something in person
4. No I won't
5. It already does, if internet porn counts?

Also, smart phones and tablets are stupid.

So you'd rather go to the bank and withdraw hundreds or thousands of pounds to pay for things like a new TV, new car etc... Using RFID technology in your mobile would be similar to contactless payment, or a card.

True, the biggest issue is security. At least with Chip and PIN technology in cards, even if you lose your wallet or have it stolen, a person would have to know your PIN to use your card.

As for automated cars, they could massively improve capacity on roads: instead of having to keep two-plus seconds behind a car, a computer would be able to run cars virtually bumper to bumper. It would also be able to adapt speed to the conditions, so cars could potentially go faster, or slower, depending on conditions. The biggest task is not so much the technological problems (they can be overcome) but the human element.
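
For a rough sense of that capacity claim: lane throughput is roughly speed divided by the road space each car occupies (its length plus the following gap). A quick Python calculation with assumed numbers (30 m/s, 4.5 m cars; purely illustrative, not traffic-engineering data):

```python
# Rough lane-capacity estimate: vehicles per hour for a given speed and
# following gap. Numbers are illustrative assumptions, not traffic data.
def lane_capacity(speed_ms: float, headway_s: float, car_len_m: float = 4.5) -> float:
    spacing = speed_ms * headway_s + car_len_m   # road space each car occupies
    return 3600 * speed_ms / spacing             # vehicles per hour per lane

speed = 30.0  # m/s, about 108 km/h
for headway in (2.0, 1.0, 0.5, 0.3):
    print(f"{headway:.1f} s gap: ~{lane_capacity(speed, headway):5.0f} vehicles/hour/lane")
```

Cutting the gap from two seconds down to a few tenths of a second roughly quadruples to quintuples throughput per lane, which is where the "massively improve capacity" idea comes from; as noted, the hard part is the mixed human/computer traffic in between.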

^ Actually, smartphones and tablets would be pretty neat if they weren't... well, smartphones and tablets.

I'm using a MacBook Air right now, as it is the only computer I own; I also have an iPad my dad gave me for Christmas last year and a 5 year old iPod touch. That's three devices I have where there should only be one.

If I could merge the MacBook and the iPad, it would be absolutely perfect; say, a touchscreen for when you need a tablet, and also have a wireless keyboard in the case for when you need a laptop.

Just seems to me tablets would be a lot more useful if developers gave you the option of using them as regular computers if that's what you really need, or switching seamlessly into "mobile mode" or something.

You know, I'm sure there is this new tablet that's been released that you can attach a keyboard to: the Surface, by Microsoft.
 
So you'd rather go to the bank and withdraw hundreds or thousands of pounds to pay for things like a new TV, new car etc... Using RFID technology in your mobile would be similar to contactless payment, or a card.

True, the biggest issue is security. At least with Chip and PIN technology in cards, even if you lose your wallet or have it stolen, a person would have to know your PIN to use your card.
For large purchases, cards are preferable, obviously, but for small purchases, why should I make it easy for my transactions to be tracked? Privacy, etc...

As for automated cars, they could massively improve capacity on roads: instead of having to keep two-plus seconds behind a car, a computer would be able to run cars virtually bumper to bumper. It would also be able to adapt speed to the conditions, so cars could potentially go faster, or slower, depending on conditions. The biggest task is not so much the technological problems (they can be overcome) but the human element.
Might make the roads safer with less stupid people behind the wheel, but as somebody with a triple-digit IQ and a good sense of car control, I'd rather drive properly, without any electronic nannying, let alone without any input whatsoever.
 
Well the thing about cards is they can be insured against loss, you can't insure cash. The sooner we move to a cashless society the better.

A card fits easily into a pocket and weighs a lot less than a pocket full of change.

Many modern cars come with electronic driver aids, traction control etc... Isn't that a form of electronic nannying?

And no driver is perfect; every driver makes a mistake now and then. True, some more than others, and when I had a field-based job and was driving tens of thousands of miles a year, I saw plenty.
 
http://www.kurzweilai.net/ibm-simul...lion-synapses-on-worlds-fastest-supercomputer
 
I've already seen some of the arguments against exponentials, and aside from the counters which I posted (which are accurate), I've seen the numbers about the upper limits you mention (I have them in book form; I'll try to find a link), and they are higher than you think, not lower. While not infinite, they do allow for the necessary power for a Singularity. The 6th paradigm will continue the curve already established, so your assumption that it will not is incorrect.

The maximum potential of matter and energy to contain intelligent processes is a valid issue. But according to my models, we won’t approach those limits during this century (but this will become an issue within a couple of centuries).
We also need to distinguish between the “S” curve (an “S” stretched to the right, comprising very slow, virtually unnoticeable growth–followed by very rapid growth–followed by a flattening out as the process approaches an asymptote) that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process of technology. Specific paradigms, such as Moore’s Law, do ultimately reach levels at which exponential growth is no longer feasible. Thus Moore’s Law is an S curve. But the growth of computation is an ongoing exponential (at least until we “saturate” the Universe with the intelligence of our human-machine civilization, but that will not be a limit in this coming century). In accordance with the law of accelerating returns, paradigm shift, also called innovation, turns the S curve of any specific paradigm into a continuing exponential. A new paradigm (e.g., three-dimensional circuits) takes over when the old paradigm approaches its natural limit. This has already happened at least four times in the history of computation. This difference also distinguishes the tool making of non-human species, in which the mastery of a tool-making (or using) skill by each animal is characterized by an abruptly ending S shaped learning curve, versus human-created technology, which has followed an exponential pattern of growth and acceleration since its inception.

A specific paradigm (a method or approach to solving a problem, e.g., shrinking transistors on an integrated circuit as an approach to making more powerful computers) provides exponential growth until the method exhausts its potential. When this happens, a paradigm shift (i.e., a fundamental change in the approach) occurs, which enables exponential growth to continue.

Moore's Law is the 5th paradigm, and the various technologies to extend it have already appeared; the 6th-generation ones are either in development or, in some cases, already exist, though not in fully finished form. The fact that there is more than one should tell you something; the fact that I can post breakthroughs on them almost every month is also telling.

The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially). So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries. In contrast, the twentieth century saw only about 25 years of progress (again at today’s rate of progress) since we have been speeding up to current rates. So the twenty-first century will see almost a thousand times greater technological change than its predecessor.
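
A back-of-envelope check of the "almost a thousand times" claim above, assuming only that the rate of progress doubles every decade (the passage also has the acceleration itself accelerating, which pushes the absolute figures higher, but the century-to-century ratio comes out the same):

```python
import math

# Rate of progress relative to the year-2000 rate, doubling every decade:
# rate(t) = 2**(t / 10), with t in years measured from 2000.
def progress(t_start: float, t_end: float) -> float:
    """Integrate the rate 2**(t/10) to get 'year-2000-equivalent years' of progress."""
    k = math.log(2) / 10
    return (math.exp(k * t_end) - math.exp(k * t_start)) / k

c20 = progress(-100, 0)   # the twentieth century
c21 = progress(0, 100)    # the twenty-first century
print(f"20th century ~ {c20:8.1f} equivalent years of progress")
print(f"21st century ~ {c21:8.1f} equivalent years of progress")
print(f"ratio ~ {c21 / c20:6.0f}x")
```

The ratio lands at 1024 (2^10), i.e. "almost a thousand times"; the absolute figures (about 14 equivalent years and 148 equivalent centuries here, versus the 25 years and 200 centuries quoted) depend on the extra assumption that the acceleration itself is accelerating.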

It’s obvious what the sixth paradigm will be after Moore’s Law runs out of steam during the second decade of this century. Chips today are flat (although it does require up to 20 layers of material to produce one layer of circuitry). Our brain, in contrast, is organized in three dimensions. We live in a three dimensional world, why not use the third dimension? The human brain actually uses a very inefficient electrochemical digital controlled analog computational process. The bulk of the calculations are done in the interneuronal connections at a speed of only about 200 calculations per second (in each connection), which is about ten million times slower than contemporary electronic circuits. But the brain gains its prodigious powers from its extremely parallel organization in three dimensions. There are many technologies in the wings that build circuitry in three dimensions. Nanotubes, for example, which are already working in laboratories, build circuits from pentagonal arrays of carbon atoms. One cubic inch of nanotube circuitry would be a million times more powerful than the human brain. There are more than enough new computing technologies now being researched, including three-dimensional silicon chips, optical computing, crystalline computing, DNA computing, and quantum computing, to keep the law of accelerating returns as applied to computation going for a long time.
Thus the (double) exponential growth of computing is broader than Moore’s Law, which refers to only one of its paradigms. And this accelerating growth of computing is, in turn, part of the yet broader phenomenon of the accelerating pace of any evolutionary process. Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of “resources.” The classical example is when a species happens upon a new habitat (e.g., rabbits in Australia), the species’ numbers will grow exponentially for a time, but then hit a limit when resources such as food and space run out.
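
The "S-curves chained into an ongoing exponential" idea can be sketched numerically. In the toy Python below (parameters invented purely for illustration), each hypothetical paradigm is a logistic curve with ten times the ceiling of the previous one, arriving ten time units later; every individual curve flattens, but their sum keeps climbing roughly tenfold per ten units:

```python
import math

def logistic(t: float, ceiling: float, midpoint: float, rate: float = 1.0) -> float:
    """A single S-curve (one 'paradigm') that saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Five hypothetical paradigms: each has 10x the ceiling of the previous one
# and reaches its midpoint 10 time units later.
paradigms = [(10.0 ** n, 10.0 * n) for n in range(1, 6)]

for t in range(0, 55, 5):
    total = sum(logistic(t, c, m) for c, m in paradigms)
    print(f"t={t:2d}  combined capability ~ {total:12.1f}")
```

Each curve on its own levels off; the envelope only keeps its exponential look as long as the next, higher-ceiling paradigm actually arrives on schedule, which is exactly the assumption the other side of this argument is questioning.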

The study by the government proves software keeps up with hardware development; in some cases, it is mentioned, software even surpasses it. I don't know what other proof you want. I'll take my proof over your claims any day. Software is important because it's the missing link between higher processing speed and potential human-level AGI.

Yes companies go bankrupt, countries pass stupid laws, there are depressions and recessions and war, and yet the upward curve has never stopped.
 
Why write it down? It's just religious, faith-based nonsense.


Unlike any of the various Raptures, the Singularity is a technological event, caused by ordinary humans, doing ordinary science, building ordinary technology which follows the ordinary laws of physics. It does not involve any religious or divine powers. It doesn’t involve outside intervention by superior or alien beings. And it’s completely within our control as a species- it will only happen when we go out and make it happen. Your claim isn't logical.

RAMA
 
Well, what is illogical is to assume humans will be first with the singularity when species with smaller brains will be surpassed by circuitry first, and gain greater benefits from shifting away from organic brains. Cats, for instance, will require a much smaller die size, and the leap they make from chasing mice to shopping online will be greater than the leap we make from shopping online to - shopping online faster.

Almost immediately after the felingularity, instead of half the web being pictures of cats, 99% of the web will be pictures of cats. The other 1% will be pictures cats took of their primitive, organic, two-legged housemates.
 
1. No thanks
2. Touch screen interfaces are stupid
3. No thanks, I'd rather pay with cash if I'm buying something in person
4. No I won't
5. It already does, if internet porn counts?

Also, smart phones and tablets are stupid.

...so, you're a luddite?
 
For large purchases, cards are preferable, obviously, but for small purchases, why should I make it easy for my transactions to be tracked? Privacy, etc...

Sorry, you have no privacy.

Here is a wonderful blog entry by Scott Adams, the author of Dilbert.

The Privacy Illusion

It has come to my attention that many of my readers in the United States believe they have the right to privacy because of something in the Constitution. That is an unsupportable view. A more accurate view is that the government divides the details of your life into two categories:

1. Stuff they don't care about.
2. Stuff they can find out if they have a reason.

Keep in mind that the government already knows the following things about you:

1. Where you live
2. Your name
3. Your income
4. Your age
5. Your family members
6. Your social security number
7. Your maiden name
8. Where you were born
9. Criminal history of your family
10. Your own criminal record
11. Your driving record
12. Your ethnicity
13. Where you work and where you used to work
14. Where you live and where you used to live
15. Names of your family members
16. The value of your home now
17. The amount you paid for your home
18. The amount you owe on your home
19. Your grades in school
20. Your weight, height, eye color, and hair color

The government doesn't know your medical history. But your doctor does, and he'll give it to the government if they produce a warrant.

The government doesn't know your spending details. But your bank and your credit card company do. And the government can subpoena bank records anytime it cares enough to do so. The government can't always watch you pay for stuff with cash, but don't expect that to last. At some point in the next twenty years, physical currency will be eliminated in favor of digital transactions.

It goes on at length, read the rest here:
http://dilbert.com/blog/entry/the_privacy_illusion/
 
I've seen the numbers about the upper limits you mention (I have them in book form; I'll try to find a link), and they are higher than you think, not lower.
Unlikely, especially since you don't know how high I think they are.

The 6th paradigm will continue the curve already established
Unlikely, since you do not actually know what the next paradigm IS.

Moore's Law is the 5th paradigm, and the various technologies to extend it have already appeared; the 6th-generation ones are either in development or, in some cases, already exist, though not in fully finished form. The fact that there is more than one should tell you something; the fact that I can post breakthroughs on them almost every month is also telling.
First, if you can post monthly breakthroughs on them, then they're still part of the CURRENT paradigm, not the next one. They may extend the digital paradigm somewhat or help it take form, or -- alternately -- hasten the approach of its limiting factors. But they will not lead to the transition to a NEW paradigm without a fundamental shift in their most basic applications, after which the patterns of the old paradigm cease to be meaningful.

This would be easier for you to understand if you compared the current (5th) paradigm with the previous two.

The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade
I'm beginning to wonder if you actually know what a "paradigm" is.

It’s obvious what the sixth paradigm will be after Moore’s Law runs out of steam during the second decade of this century.
Indeed. Which is why the next paradigm is unlikely to have anything whatsoever to do with Moore's law or microprocessors in general. Even 3D circuitry and quantum computing are only going to extend the present paradigm to a limited extent, and even then they may be part of the plateau stage, where increasing power/complexity in three-dimensional integrated circuits is considerably more expensive than it had been with 2D circuits. Once you reach the limits of 3D circuits, further advances run into that diminishing-returns problem; the paradigm shifts to something OTHER than microprocessor technologies, and no new improvement can be made except over unbelievably long timescales for almost superficial levels of improvement.

Thus the (double) exponential growth of computing is broader than Moore’s Law, which refers to only one of its paradigms.
Yep. You clearly DON'T know what a "paradigm" is; your anticipation of a paradigm shift is just another rhetorical device you're using to avoid taking the problem seriously.

Observers are quick to criticize extrapolations of an exponential trend on the basis that the trend is bound to run out of “resources.”
Resources have nothing to do with it. The logistic curve is a function based on a saturation point: rapid progress builds on further progress in what seems to be an exponential curve until you reach the saturation point, where the system approaches maturity and the curve flattens out.

In this case, even if you had an infinite quantity of resources, that does not imply infinite growth potential; when microprocessors reach a point at which transistors cannot be further reduced and logic circuits cannot be further enhanced, then that's that, there's no more room for growth (at least, not any amount of growth that could be justified for the expense it would take).

The classical example is when a species happens upon a new habitat (e.g., rabbits in Australia), the species’ numbers will grow exponentially for a time, but then hit a limit when resources such as food and space run out.
Which ultimately has less to do with the resources available and more to do with the equilibrium point of reproductive rates vs. attrition rates. The limited resources (e.g. food) provide the saturation point, and therefore the curve flattens at the point where there are so many rabbits on the continent that the number that die from starvation is approximately equal to the number of live births.

You cannot cry "paradigm shift!" as an escape hatch for that, because an upper limit to microprocessor technology DOES exist, even accounting for innovative new forms of it. There is not even THEORETICALLY infinite growth potential there; even atomic-scale computers would eventually reach a point where they cannot be improved further. And so far, there is no reason to assume that the most radical theoretical limits are even applicable, since PRACTICAL limitations -- e.g. politics, consumer demand, economics, military pressures, and ordinary dumb luck -- are limiting factors as well.

The study by the government proves software keeps up with hardware development; in some cases, it is mentioned, software even surpasses it.
In this context, the software we're talking about is artificial intelligence, NOT storage capacity, NOT video or sound quality, NOT digital bandwidth and throughput. We're discussing the efficacy of computers not only as expert systems, but as self-examining thinking machines capable of taking roles traditionally performed by expert humans.

By nearly all accounts, the HARDWARE requirement for this was surpassed over a decade ago (even Kurzweil would admit this, which is why several of his 1990s predictions totally failed to pan out). Simply put, the software element to Strong AI just hasn't materialized at all, and in fact is lagging so far behind that the "bottom-up" AI theorists have spent the last couple of years lording it over everyone else with a collective "I told you so." That's why even Kurzweil is now talking about developing computer architectures that mimic the functioning of a human brain, because it's now obvious to EVERYONE that it isn't going to be fixed in software.

Software is important because it's the missing link between higher processing speed and potential human-level AGI.
But it isn't, though. Even in the highly unlikely event you could get a computer to model an existing human brain, it's still only a predictive simulation of that brain based on fixed parameters, not a genuine consciousness.

Of vastly greater import is the fact that outside of laboratory curiosity there's virtually zero market demand for conscious machine labor. UNCONSCIOUS labor is considerably easier to accomplish, especially since the few remaining tasks that require conscious labor can be performed by increasingly less intelligent/lower paid wage slaves.
 
Why write it down? It's just religious, faith-based nonsense.


Unlike any of the various Raptures, the Singularity is a technological event, caused by ordinary humans, doing ordinary science, building ordinary technology which follows the ordinary laws of physics. It does not involve any religious or divine powers. It doesn’t involve outside intervention by superior or alien beings. And it’s completely within our control as a species- it will only happen when we go out and make it happen.
Which doesn't change the fact that it is a religious faith-based worldview. The meaningful element here is that you have already internalized your articles of faith:
- The Singularity is coming
- The Singularity will be a good thing
- Those who believe in the singularity will be the first to benefit from it.

The rest of this is you RATIONALIZING what you've already decided to believe. Several times, you attempted to claim that it's not irrational because it doesn't appeal to the supernatural. That is a distinction without a difference; just because you've replaced the Book of Revelation with Ghost in the Shell doesn't make your worldview any less faith-based.
 