All 'models' predicting the technological singularity are based upon, and require, continual exponential growth - of intelligence, of technology, etc.
Well, if history has shown anything, it is that exponential growth in anything other than abstract mathematics is not sustainable - regardless of your attempts to 'cheat' this rule.
Technology matures and can't be improved further; etc.
Yes. Real-life processes aren't simple mathematical curves; there are many factors that interact and affect one another, and eventually any short-term trend is going to slow or stop or even be reversed. Generally, the norm is equilibrium; rapid change occurs when the circumstances are right and there's a need or incentive for it, but eventually a new equilibrium will be reached and things will stabilize again.
Sure, computers are transforming our lives in ways our forebears couldn't predict, and that might continue. Eventually we may have computers so advanced that they can precisely model and predict things like weather, natural disasters, economic patterns, social and psychological dysfunctions, etc. and give us reliable mechanisms for avoiding problems and disasters before they happen, bringing a new age of peace and security and prosperity to all. And they may bring new breakthroughs in physics and technology that will let us expand into space and improve our standard of living and restore the Earth's ecosystem. But the people who enjoy it will probably not be any more fundamentally intelligent than we are. Will they have more immediate access to any information they need? Sure, and they'll be able to draw on the problem-solving ability of the rest of humanity through crowdsourcing as well as that of the superfast computers. But they'll still probably think on much the same level that we do. And there's no guarantee that the computers will be any more intelligent, just faster and more powerful.
Who invented the transporter and replicator? And the computer?
Charles Babbage gets a lot of mileage these days for inventing what is basically a mechanical computer called the difference engine; a working model was built from his design in 1991! Usually when we think of steampunk, this is where it originates.
Computers using more familiar techniques appeared in 1939. In 1940 a computer was operated by remote access (like the internet...no, Al Gore wasn't around). In 1944, a machine called Colossus did its number crunching in breaking Nazi codes; it was kept a secret till the 1970s! The famous, and gigantic, ENIAC appeared in 1946. The first microcomputer appeared in 1971; things moved slowly but surely, finally snowballing 10 years later into PCs and Macs. In 1960 the first modem was used, and in 1969 ARPANET was started. During the 70s, SF writers often had their terms "used" by real-life researchers, such as "worm", et al...
BUT SF writers seemed to be slow in understanding the implications of computers, preferring slide rules to stored-program or even more sophisticated mechanical computers. The earliest mention I can find of an info-giving machine was in 1726, in Gulliver's Travels. "The Machine Stops" (1909) was a revelation: it provided life support, entertainment, communication and lots of things we associate with modern computers. In 1939, the ever-reliable Robert Heinlein used a ship with a navigation computer.
I'm not including other forms of AI in this post.
Computer History
Replicators: First mention...Tom Swift (1910), where byproducts of a cyclotron are used to make any material desired. In 1933, The Man Who Awoke included a dizzying array of technologies, including molecular replicators.
Today when we think of replicators, we think of nanotech assemblers, creating whatever we might want from molecules upward. Current 3D printers are primitive examples of making items out of raw materials for just about any need. NASA is experimenting with electron beams in orbit to create objects.
3D Printing
Sometimes science fiction begets or spurs forward whole philosophies and new fields of study, working almost hand-in-hand with scientists, technologists and futurists. Such is the case with the Singularity--possibly one of the future defining moments of mankind--defined as a point in time where computers or AI outstrip the natural evolution of human intelligence to such a degree that predicting the thought processes and technological leaps afterward is impossible for those preceding it, unaided.
The first conceptualization: in 1847, the "Primitive Expounder" suggested that machines may eventually become perfect and surpass the ideas of humanity. In 1951, Alan Turing expected machines to eventually outstrip humans and take control. In 1958, Stanislaw Ulam wrote: "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." I.J. Good wrote of an intelligence explosion in 1965. The idea didn't seem to go anywhere until 1983, when scientist and science fiction writer Vernor Vinge was central in popularizing it in his essay "The Coming Technological Singularity" (expanded in 1993), which specifically tied the term in with AI. He wrote novels using the speculation in 1986 and 1992, "A Fire Upon the Deep" being one of the most acclaimed and popular of the sub-genre.

Advances in computers tied into Moore's Law (exponential growth in transistors placed on integrated circuits, and later in processing speed and memory capacity) made the idea seem more plausible. Cybernetic researchers such as Hans Moravec claimed that advancing AI would follow a timeline, and in 1988 predicted the future based on these mathematical models. The pace of scholarly and speculative books continued; in 2005 Ray Kurzweil combined theories of nanotech, AI and immortality into a book, which was later made into a film. He espouses the positive side of the explosion of intelligence. Also in 2005, the story Accelerando makes an attempt at the "impossible", trying to discern what generations of a family might be like before, during and after the singularity.

Another type of singularity might be the evolution from physical beings to discrete energy beings, or beings that evolve and "leave" the universe. Speculation on such events has often followed directly from first evolving into AI or mechanical beings, as in Gregory Benford's far-future stories of the Galactic Center, or the nanotech-manifested, virtual beings of Stephen Baxter's "The Time Ships". Star Trek has multiple examples of such beings.
So far 3 non-fiction movies have been made on the subject of a technological singularity.
In SF, visual fiction has barely touched the topic...Colossus: The Forbin Project (1970), Demon Seed, WarGames and Terminator have all scratched the surface of the subject, portraying relatively one-sided views of computer takeover. A much more expansive work, The Matrix and its sequels, goes into it with more depth; there, AI and humanity finally reach an uneasy equilibrium in the end. A culture that builds a Dyson sphere/swarm or other monumental works involving whole solar systems, including ringworlds, might well have gone through a Singularity, or even several. Examples of these have appeared in ST:TNG, Andromeda, Stargate, Halo and Ringworld.
RAMA
"Le voyage dans la lune".
Using a cannon to launch the ship into space. Epic.
All 'models' predicting the technological singularity are based upon, and require, continual exponential growth - of intelligence, of technology, etc.
Well, if history has shown anything, it is that exponential growth in anything other than abstract mathematics is not sustainable - regardless of your attempts to 'cheat' this rule.
Technology matures and can't be improved further; etc.
IF you can keep up continual exponential growth in the AI field (and the signs are that you can't), you may - or may not (perhaps 'intelligence' in humans is a mature 'technology') - be able to have a functioning being more intelligent than humans. But, in any case, you won't be able to keep improving that intelligence; sooner or later, you'll hit a wall.
Singularity proponents gamble that this 'wall' is beyond the singularity - and they have no convincing arguments for it.
It's almost certain there isn't a logic fundamentally 'better' than the one known to us - meaning, we have already hit the wall in this area; you may have a being thinking faster than us (quantitatively), but not qualitatively 'better'.
This qualm is easily explained away...exponential growth has limits only until it reaches the next paradigm shift, and there is already a next generation of processor technologies ready to supplant the current one. In fact, the aforementioned 3D chip technology is one of them, and it appeared just two days ago. The fact that there have been 5 paradigms already that fit the pattern makes it less like wishful thinking and more like a probability.
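To make that concrete, here is a minimal sketch of the "stacked paradigms" idea, with entirely made-up numbers: each paradigm follows its own S-curve and eventually saturates, but the next one takes over, so overall capacity keeps climbing.

```python
import math

def logistic(t, floor, ceiling, midpoint, rate=1.0):
    """One paradigm: S-curve growth from `floor` toward `ceiling`."""
    return floor + (ceiling - floor) / (1 + math.exp(-rate * (t - midpoint)))

# Three hypothetical paradigms; each new one has a 10x higher ceiling.
paradigms = [
    {"floor": 1,   "ceiling": 10,   "midpoint": 5},
    {"floor": 10,  "ceiling": 100,  "midpoint": 15},
    {"floor": 100, "ceiling": 1000, "midpoint": 25},
]

for t in range(0, 31, 5):
    # Overall capacity = whichever paradigm currently delivers the most.
    capacity = max(logistic(t, **p) for p in paradigms)
    print(f"t={t:2d}  capacity ~ {capacity:7.1f}")
```

Every individual curve flattens out, yet the printed capacity keeps rising across the handoffs; that is the whole argument in miniature.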
You know Rama, I really think the singularity is an altered reality. So V'Ger gaining sentience altered reality, and that might need to be re-explored. An alternate reality would seem no different from the one we are in now, except that the future may be different, especially for the machine - or a possible machine-man interface like in 'Demon Seed'. There's another thread going on in the movie section about Kirk and company never making it out of V'Ger - or at least not in the same reality - but rather ending up in a virtual simulated reality where he is just a memory or something. Go read it.
Using a cannon to launch the ship into space.
....But the people who enjoy it will probably not be any more fundamentally intelligent than we are. Will they have more immediate access to any information they need? Sure, and they'll be able to draw on the problem-solving ability of the rest of humanity through crowdsourcing as well as that of the superfast computers. But they'll still probably think on much the same level that we do. And there's no guarantee that the computers will be any more intelligent, just faster and more powerful.
One recent study ("Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology" by the President's Council of Advisors on Science and Technology) states the following: "Even more remarkable—and even less widely understood—is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade ... Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science."
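For what it's worth, the two factors quoted there do multiply out to the ~43 million figure; a quick arithmetic check, using nothing but the report's own numbers:

```python
# Quick arithmetic check on the figures quoted in the PCAST report above.
hardware_factor  = 1_000     # speedup attributed to faster processors, 1988-2003
algorithm_factor = 43_000    # speedup attributed to better LP algorithms

total = hardware_factor * algorithm_factor
print(f"combined speedup: {total:,}")            # 43,000,000 -- the ~43 million quoted

minutes_in_82_years = 82 * 365.25 * 24 * 60
print(f"82 years / speedup = {minutes_in_82_years / total:.2f} minutes")  # ~1 minute
```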
^Again, I don't deny that it's possible to improve the performance of the brain in certain ways. But the study mentioned in that article I linked to suggested that such improvements would come with a cost, that there would be tradeoffs for any gain, and that eventually you'd reach a point of diminishing returns. It's just wishful thinking to assume the brain can be augmented without limit, or that any system can be scaled upward without limit. That's Kurzweil's fundamental mistake, that failure to recognize that not everything can be extrapolated forward indefinitely.
Moore's Law is not an inviolable law of nature, just a description of a process Moore observed in his time. Moore himself never expected it to apply indefinitely into the future; in fact, the cutoff point at which he assumed it would cease applying is already in our past. So you can't automatically assume that computer capacity will continue to scale up indefinitely just because it did so in the past, and you sure as hell can't assume that there are no obstacles to bringing that same unlimited amplification to the human brain, because there are countless other variables you'd need to factor into that equation.
I think Singularity advocates sometimes forget that the Singularity is supposed to be a point beyond which our ability to extrapolate the future fails because we don't have enough information to make any intelligent conjectures. So to claim certainty about what the Singularity will mean is oxymoronic.
No one claimed there was no limit to computer/AI processing capacity, but as I already said, this limit is immense, and we can quantifiably predict that there will be a time when we reach it.

When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as: Moore’s law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.
I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen’s descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.
Allen writes that “the Law of Accelerating Returns (LOAR). . . is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.
Allen writes that “these ‘laws’ work until they don’t.” Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it’s true that this specific trend continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price-performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Already, three-dimensional transistors and three-dimensional memory chips have been introduced.
This sixth paradigm will keep the LOAR going with regard to computer price-performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth.
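The thermodynamics analogy a few paragraphs up can be made concrete with a toy simulation (my own illustration, not anything from the book): each random walker is individually unpredictable, yet the statistics of the whole ensemble land very close to the theoretical prediction.

```python
import random
import statistics

random.seed(42)

STEPS = 2_500      # steps per walker
WALKERS = 2_000    # size of the ensemble

# Each walker takes STEPS random +/-1 steps; any single path is unpredictable.
finals = [
    sum(random.choice((-1, 1)) for _ in range(STEPS))
    for _ in range(WALKERS)
]

# The ensemble, however, is predictable: mean ~ 0, std dev ~ sqrt(STEPS) = 50.
print(f"mean final position: {statistics.mean(finals):6.2f}")
print(f"std  of positions:   {statistics.stdev(finals):6.2f}  (theory: {STEPS ** 0.5:.0f})")
```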
Just out of curiosity, can the singularity be equated with religion's rapture (a white hole) or a black hole (an alternate ultimate reality)? Once again, as is usual when talking science with Christopher, I don't know what I'm talking about.
Cloning:
Cloning in biotechnology is a complex discipline where several different processes are used to create copies of DNA, cells, or organisms. It was possibly first accomplished in 1952 on tadpoles, and the first published work came from a procedure performed on carp in 1963. Mammals were cloned in 1986 and 1997, with the first monkey cloned in 2000. Today cloning stem cells is seen as a major area of research. In 2006 the FDA tentatively concluded that meat from cloned animals was safe for consumption in the USA. In 2009 the first extinct animal (the Pyrenean ibex) was cloned, but it only lived for 7 minutes. On the 7th of December 2011 it was announced that a team from the Siberian mammoth museum and Japan's Kinki University plan to clone a woolly mammoth from a well-preserved sample of bone marrow found in August 2011.
So, let's recap. First, these bone marrow cells need to be absolutely pristine for cloning to work...and there's no guarantee of that. Next, we need to transplant those cells into African elephant eggs...and many of those will fail. Then, the embryos need to survive the pregnancy...and if 1 in 100 do that, it'd be a massive success. After that, the mammoth needs to be born and survive infancy...again, the odds are stacked against it. Finally, the mammoth clone needs to thrive in a world in which it is completely, absolutely alone...which is hardly a guarantee. And that's not even worrying about the question of this clone giving birth to more mammoths down the line.
Taken all together, the odds that any of us will ever see an adult woolly mammoth with our current levels of cloning technology are probably somewhere between 1 in 10,000 and 1 in a million. And whatever the case, I'd say the five-year estimate is hugely optimistic - I'd be pleasantly surprised if a live birth of a mammoth happens in the next twenty years, even if it dies almost immediately.
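Here is the back-of-the-envelope way those long odds compound; the per-stage probabilities below are purely illustrative guesses on my part, but multiplying them out lands squarely in that 1-in-10,000 to 1-in-a-million range:

```python
# Hypothetical per-stage success rates for the steps described above.
# These numbers are guesses for illustration, not measured figures.
stages = [
    ("usable, pristine bone-marrow cells",  0.10),
    ("nuclear transfer into elephant eggs", 0.05),
    ("embryo survives the pregnancy",       0.01),
    ("calf survives infancy",               0.50),
    ("clone thrives to adulthood",          0.50),
]

overall = 1.0
for name, p in stages:
    overall *= p
    print(f"{name:38s} p={p:.2f}  cumulative={overall:.7f}")

print(f"\noverall odds: roughly 1 in {round(1 / overall):,}")
```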
Yes, with this new discovery, we're closer to cloning a mammoth than ever before. The problem is, we're still a long, long, long way away, and in the absence of some major breakthrough in cloning technology, that's likely to remain the case for the foreseeable future.
On Star Trek, clones were treated as abominations.
I believe I answered the exponential limit claim already...exponentials reach limits only until they are surpassed by a new paradigm. My example was processor technology. Critics claimed for many years that there would eventually be a materials limit in Moore's Law, but that limit has again been surpassed: http://www.trekbbs.com/showthread.php?t=153184
Doesn't Riker kill some clones without a thought?