
Some science fiction "firsts"

"Le voyage dans la lune".

Using a canon to launch the ship into space. E.pic.
 
Tanks! Not uncommon today, they were something of a shock when first used in WWI. They were conceived in real life AND in fiction in 1903, by H.G. Wells in "The Land Ironclads" and by French captain Levavasseur, whose project was abandoned five years later. In 1911 the Austrians developed two independent designs, but both were rejected. In America, Benjamin Holt developed the tracked tractor in 1907; his designs were used as the basis for artillery haulers and supply carriers, and thousands were produced. The French investigated combining a tread or "pedrail" with a cannon-carrying vehicle, but the British beat them to the punch, using tanks in battle for the first time in 1916. Between the world wars, tank development stayed steady but tactics didn't, until the infantry- and air-supported "Blitzkrieg" offensive was finally developed. Tanks today balance size, firepower, speed, and protection in a ratio unreachable in the 1940s. Future tanks, however, may evolve from large, tall vehicles that can't traverse some public highways and bridges into low-profile, speedy fire support. The US has developed an easily air-transportable, big-gunned tank that weighs 30 tons less than the current M1, but even more futuristic is the US Army's semi-autonomous unmanned technology demonstrator 'Black Knight'...which may be the next wave of tank technology. Eventually tanks could be armed with kinetic-impact weapons built around an electromagnetic rail gun.



FCS

In 1936, H.G. Wells's "Things to Come" showed the development from pre-WWII-style tanks to advanced versions of his "land ironclads": Wells Tank. In literature, the Bolo is a huge, AI-driven heavy tank. Hover tanks are common SF tech...with perhaps SW: The Phantom Menace the most prominent example to reach the movie screen. In much of anime, tanks have been replaced by "mecha", highly mobile armored suits or robots with tank weaponry and unparalleled maneuverability. In some cases, swarms of robots have replaced the single large tank in land combat. Something the US military is already taking seriously: Swarm
 
All 'models' predicting the technological singularity require continual exponential growth - of intelligence, of technology, etc.
Well, if history shows anything, it shows that exponential growth in anything other than abstract mathematics is not sustainable - regardless of your attempts to 'cheat' this rule.
Technology matures and can't be improved further; etc.

Yes. Real-life processes aren't simple mathematical curves; there are many factors that interact and affect one another, and eventually any short-term trend is going to slow or stop or even be reversed. Generally, the norm is equilibrium; rapid change occurs when the circumstances are right and there's a need or incentive for it, but eventually a new equilibrium will be reached and things will stabilize again.
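To put the equilibrium point in concrete terms, here's a minimal sketch (my own toy illustration, not anything from the post above): an exponential curve and a logistic curve share the same early growth rate, but the logistic one saturates at a carrying capacity, which is roughly what happens when a real-world trend runs into its limiting factors.

```python
# Minimal sketch: exponential vs. logistic growth with the same early rate.
import math

def exponential(t, x0=1.0, r=0.5):
    """Unbounded exponential growth."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Same early growth rate, but saturating at carrying capacity K."""
    return K / (1 + ((K - x0) / x0) * math.exp(-r * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):8.1f}")
# Early on the two curves are nearly identical; later the logistic curve
# levels off at K while the exponential keeps climbing without limit.
```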

Sure, computers are transforming our lives in ways our forebears couldn't predict, and that might continue. Eventually we may have computers so advanced that they can precisely model and predict things like weather, natural disasters, economic patterns, social and psychological dysfunctions, etc. and give us reliable mechanisms for avoiding problems and disasters before they happen, bringing a new age of peace and security and prosperity to all. And they may bring new breakthroughs in physics and technology that will let us expand into space and improve our standard of living and restore the Earth's ecosystem. But the people who enjoy it will probably not be any more fundamentally intelligent than we are. Will they have more immediate access to any information they need? Sure, and they'll be able to draw on the problem-solving ability of the rest of humanity through crowdsourcing as well as that of the superfast computers. But they'll still probably think on much the same level that we do. And there's no guarantee that the computers will be any more intelligent, just faster and more powerful.

A very conventional view (again, a linear view, with simply more power and speed, one that is not supported by past history), but I don't think it's all that likely. It's been said many times that man has reached the ultimate level of intelligence, only to be proven wrong time and again. Frankly, with so much to learn, and with us barely out of the technological cradle, the increases in speed and power inevitably have to help us learn more, but the level of real intelligence will be more than that. Star Trek (another linear view), I am almost positive, will not be even remotely accurate. It should pale in comparison to real events.

Who invented the transporter and replicator? And computer?

Charles Babbage gets a lot of mileage these days for inventing what is basically a mechanical computer, the Difference Engine, a working model of which was built in 1991 from his design! Usually when we think of steampunk, this is where it originates.

Computers using more familiar techniques appeared in 1939. In 1940 a computer was accessed remotely (like the internet...no, Al Gore wasn't around). In 1944, a machine called Colossus did its number crunching breaking Nazi codes; it was kept a secret until the 1970s! The famous, and gigantic, ENIAC appeared in 1946. The first microcomputer appeared in 1971, and things moved slowly but surely, finally snowballing 10 years later into PCs and Macs. In 1960 the first modem was used, and in 1970 Arpanet was started. During the 70s SF writers often had their terms "used" by real-life researchers, such as "worm", et al...

BUT SF writers seemed slow to understand the implications of computers, preferring slide rules to stored-program computers or even more sophisticated mechanical ones. The earliest mention I can find of an info-giving machine was in 1726, in Gulliver's Travels. "The Machine Stops" (1909) was a revelation: it provided life support, entertainment, communication and lots of things we associate with modern computers. In 1939, the ever-reliable Robert Heinlein used a ship with a navigation computer.

I'm not including other forms of AI in this post.

Computer History

Replicators: First mention...Tom Swift (1910), where byproducts of a cyclotron are used to make any material desired. In 1933, The Man Who Awoke includes a dizzying array of technologies, including molecular replicators:

Today when we think of replicators, we think of nanotech assemblers, creating whatever we might want from the molecules upward. Current 3D printers are primitive examples of making items out of raw materials for just about any need. NASA has experimented with electron-beam fabrication for building objects in orbit.

3D Printing

More 3D printing...

http://www.innovationnewsdaily.com/incredible-3d-printed-products-2267/

Sometimes science fiction begets or spurs forward whole philosophies and new fields of study, working almost hand-in-hand with scientists/technologists/futurists. Take the Singularity--possibly one of the future defining moments of mankind--defined as a point in time where computers or AI outstrip the natural evolution of human intelligence to such a degree that predicting the thought processes and technological leaps afterward is impossible for those preceding it, unaided.

The first conceptualization: in 1847, the "Primitive Expounder" suggested that machines may eventually become perfect and surpass the ideas of humanity. In 1951, Alan Turing expected machines to eventually outstrip humans and take control. In 1958, Stanislaw Ulam wrote:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
I.J. Good wrote of an intelligence explosion in 1965. The idea didn't seem to go anywhere until 1983, when scientist and science fiction writer Vernor Vinge was central in popularizing it, in an essay later expanded into "The Coming Technological Singularity" (1993), which specifically tied the term to AI. He wrote novels using the speculation in 1986 and 1992, "A Fire Upon the Deep" being one of the most acclaimed and popular of the sub-genre. Advances in computers tied to Moore's Law, the exponential growth in transistors placed on integrated circuits and later in processing speed and memory capacity, made the idea seem more plausible. Cybernetics researchers such as Hans Moravec claimed that advancing AI would follow a timeline, and predicted the future on these mathematical models in 1988. The pace of scholarly and speculative books continued; in 2005 Ray Kurzweil combined theories of nanotech, AI and immortality into a book which was later made into a film. He espouses the positive side of the explosion of intelligence. Also in 2005, the story Accelerando made an attempt at the "impossible", trying to discern what generations of a family might be like before, during and after the singularity.

Another type of singularity might be the evolution from physical beings to discrete energy beings, or beings that evolve and "leave" the universe. Speculation on such events has often followed directly from first evolving into AI or mechanical beings, as in Gregory Benford's far-future stories of the Galactic Center, or the nanotech-manifested virtual beings of Stephen Baxter's "The Time Ships". Star Trek has multiple examples of such beings.

So far 3 non-fiction movies have been made on the subject of a technological singularity.

In SF, visual fiction has barely touched the topic...Colossus: The Forbin Project (1970), Demon Seed, WarGames and Terminator have all scratched the surface of the subject, portraying relatively one-sided views of computer takeover. A much more expansive film, The Matrix, and its sequels go into it with more depth, with AI and humanity finally reaching an uneasy equilibrium in the end. A culture that builds a Dyson sphere/swarm or other monumental works involving whole solar systems, including ringworlds, might well have gone through a Singularity, or even several. Examples of these have appeared in ST:TNG, Andromeda, Stargate, Halo and Ringworld.


RAMA

Edit: It occurs to me STTMP may be one of the largest-scale examples of a singularity ever seen in fiction...firstly, there's the evolved AI, evidently spawned by other machines into a huge living entity...it quantifies almost everything in the universe in its massive databanks, much as predicted in the intelligent-universe theory within the singularity. Not only do we see the end result, but this omniscient being actually transforms into a human/AI interface!!

"Le voyage dans la lune".

Using a canon to launch the ship into space. E.pic.

Something which has never really been abandoned and is alive and kicking today:

http://www.popsci.com/technology/article/2010-01/cannon-shooting-supplies-space

lifeboat.com/em/chapter.1.pdf

All 'models' predicting the technological singularity require continual exponential growth - of intelligence, of technology, etc.
Well, if history shows anything, it shows that exponential growth in anything other than abstract mathematics is not sustainable - regardless of your attempts to 'cheat' this rule.
Technology matures and can't be improved further; etc.

IF you can keep up continual exponential growth in the AI field (and the signs are that you can't), you may - or may not (perhaps 'intelligence' in humans is a mature 'technology') - be able to have a functioning being more intelligent than humans. But, in any case, you won't be able to keep improving that intelligence; sooner or later, you'll hit a wall.
Singularity proponents gamble that this 'wall' is beyond the singularity - and they have no convincing arguments for it.

It's almost certain there isn't a logic fundamentally 'better' than the one known to us - meaning, we have already hit the wall in this area; you may have a being thinking faster than us (quantitatively), but not qualitatively 'better'.

This qualm is easily explained away...exponential growth has limits only until it reaches the next paradigm shift, and there is already a next generation of processor technologies ready to supplant the current one...in fact, the aforementioned 3D chip technology is one of them, and it appeared just two days ago. The fact that there have been 5 paradigms already that fit the pattern makes it less like wishful thinking and more like a probability.:techman:

http://www.micron.com/innovations/hmc
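Here's a toy sketch of the stacked S-curve idea (my own illustration, not anything from Kurzweil or the posts above): each paradigm follows an S-curve that eventually flattens, but a new paradigm with a higher ceiling ramps up roughly where the old one saturates, so the combined capability keeps climbing even though every individual technology hits a wall.

```python
# Toy model: five hypothetical paradigms, each arriving later with ~10x the
# ceiling of the previous one. The labels and numbers are made up for
# illustration only.
import math

def s_curve(t, start, ceiling, rate=1.0):
    """Logistic S-curve that ramps up after `start` and saturates at `ceiling`."""
    return ceiling / (1 + math.exp(-rate * (t - start - 4)))

# (start_time, ceiling) for five successive paradigms.
paradigms = [(i * 8, 10 ** (i + 1)) for i in range(5)]

for t in range(0, 41, 4):
    total = sum(s_curve(t, start, ceiling) for start, ceiling in paradigms)
    print(f"t={t:2d}  combined capability ~ {total:10.1f}")
# Each individual paradigm stalls, but the envelope of the sum keeps rising.
```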

 
You know Rama, I really think the singularity is an altered reality. So V'Ger gaining sentience altered reality and might need to be reexplored again. An alternate reality would seem no different than the one we are in now, except that the future may be different, especially for the machine - or a possible machine-man interface like 'Demon Seed'. There's another thread going on in the movie section about Kirk and company never making it out of V'Ger, or at least not in the same reality, but rather in a virtual simulated reality where he is just a memory or something. Go read it.
 

There are a couple of interesting directions you can go in with this...it was originally Hans Moravec who suggested that in a machine/AI takeover, human beings may not know they are in an altered reality, because the machines will not be belligerent towards us; they will want to keep replicas of us around for historical posterity, even though they have supplanted our biological evolution.

Both Kurzweil and Vinge suggest that when we hit the singularity only those who are not evolved/adapted enough will know it has happened, because the human/AIs will have followed the curve! The others will be left behind.

The Matrix is of course a chief example of AIs recreating man for purposes of their own, in this case, most of the machine AI (but not all) are indifferent to the humans. The human beings don't know about their reality unless they are released from their virtual life.

It's interesting...if events in STTMP are a Matrix-like virtual reality, where the evolved human/V'Ger hybrid has re-created everything after an instant of exploring, well...everything, then it has fulfilled the dream of a programmable universe. However, while you can speculate this is the case, there really is no evidence in the movie. Anyway, this is another reason to like STTMP.:techman:

http://www.amazon.com/exec/obidos/ASIN/0674576187/the-new-atlantis-20
 
My thinking is that the singularity has already happened elsewhere and we are living in its alternate reality. The green plasma bolt that hit Epsilon didn't destroy it - according to Memory Alpha it 'remembered it to death'. This is what might have happened to the Enterprise, except that the transition was seemingly seamless. An alternate future reality would be the goal of every machine.
 
Someone here has a thread titled 'Robots have tripled since 2007'

Watch this vid:
http://www.dailymotion.com/video/xgjkrg_a-demo-of-silvia-artificial-intelligence_news
Intelligent interaction with an AI, for a given value of intelligence

Scientists are now growing animal organs, and hopefully will soon grow human ones from our own cells (I'm desperately waiting for this one to happen).

I prefer Damien Broderick's description of the Singularity as the Spike. Plot progress versus time on a graph for any human endeavour (transport or computers come to mind). And yes, PCs are coming to an end with silicon, but once upon a time we did everything we could with sail... and then steam came along, and transport continued to ramp up in terms of speed and carrying capacity. We find ways.
 
Addendum, replicators: Of course Forbidden Planet's Robby the Robot is an early visual example of a replicator, producing food, alcohol, and lead from molecules.

Flying Saucers: The first description of "flying saucer"-shaped objects may have been in the 10th century, with an illustration depicting one in a Japanese manuscript. The first sighting may have occurred in 1290, when a silver disc was reported in Yorkshire. The first modern usage of the word "saucer" appeared in 1947, when newspapers applied the term to a description by Kenneth Arnold. The term took off but was soon supplanted by "UFO" to describe a wide variety of unidentified objects.

In SF, different types of saucer-like objects appeared in the pulps, possibly since 1911. They grew in popularity after the rash of sightings in the 1940s and 50s, coming into widespread use as a signature of something "alien". This was turned on its ear for the monumental SF film "Forbidden Planet" in 1956, where advanced humanity took to the stars in hyperdrive-driven starships. In recent years the general shape has made a comeback, appearing in SeaQuest DSV and a rash of alien invasion movies/TV shows starting with Independence Day (1996) and continuing with V, District 9, and Skyline.

In reality, the saucer has been a tough nut to crack technologically; examples like the Avrocar and Moller Skycar have met with limited success, either being underpowered and hard to control, or remaining technology demonstrators. The WEAV is a project that will attempt to fly using a magnetohydrodynamic drive (as in The Hunt for Red October) within a year. http://alien-ufo-sightings.com/2011/09/05/the-worlds-first-flying-saucer-made-right-here-on-earth/ Only small UAVs of the saucer shape have met with any success so far.

RAMA
 
....But the people who enjoy it will probably not be any more fundamentally intelligent than we are. Will they have more immediate access to any information they need? Sure, and they'll be able to draw on the problem-solving ability of the rest of humanity through crowdsourcing as well as that of the superfast computers. But they'll still probably think on much the same level that we do. And there's no guarantee that the computers will be any more intelligent, just faster and more powerful.

One of the biggest criticisms of this issue is that it's not just hardware and speed, but software and what we are actually able to do or learn with it...well, it's hard to tell exactly how far we've come; the advances seem subtle to us with our linear human perception, but progress is in fact moving fast...there is a quantifiable way to see if the claims are true, hence:

One recent study ("Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology" by the President's Council of Advisors on Science and Technology) states the following: "Even more remarkable—and even less widely understood—is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade ... Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science."
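The arithmetic in that passage checks out, as a quick sketch shows (the figures are the rounded ones quoted above, so the match is only approximate):

```python
# Rough check of the figures quoted above.
speedup_hardware = 1_000       # factor attributed to faster processors, 1988-2003
speedup_algorithms = 43_000    # factor attributed to better LP algorithms
print(speedup_hardware * speedup_algorithms)   # 43,000,000 -- the ~43 million total

# 82 years shrunk by a factor of ~43 million is indeed about one minute:
minutes_in_82_years = 82 * 365.25 * 24 * 60
print(minutes_in_82_years / 43_000_000)        # ~1.0
```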

Can you imagine the impact of future software that is tied into the AI buffer for the human brain?
 
^Again, I don't deny that it's possible to improve the performance of the brain in certain ways. But the study mentioned in that article I linked to suggested that such improvements would come with a cost, that there would be tradeoffs for any gain, and that eventually you'd reach a point of diminishing returns. It's just wishful thinking to assume the brain can be augmented without limit, or that any system can be scaled upward without limit. That's Kurzweil's fundamental mistake, that failure to recognize that not everything can be extrapolated forward indefinitely.

Moore's Law is not an inviolable law of nature, just a description of a process Moore observed in his time. Moore himself never expected it to apply indefinitely into the future; in fact, the cutoff point at which he assumed it would cease applying is already in our past. So you can't automatically assume that computer capacity will continue to scale up indefinitely just because it did so in the past, and you sure as hell can't assume that there are no obstacles to bringing that same unlimited amplification to the human brain, because there are countless other variables you'd need to factor into that equation.

I think Singularity advocates sometimes forget that the Singularity is supposed to be a point beyond which our ability to extrapolate the future fails because we don't have enough information to make any intelligent conjectures. So to claim certainty about what the Singularity will mean is oxymoronic.
 
Cloning:

Cloning in biotechnology is a complex discipline where several different processes are used to create copies of DNA, cells, or organisms. It was possibly first accomplished in 1952 on tadpoles, and the first published work came from a procedure performed on carp in 1963. Mammals were cloned in 1986 and 1997, with the first primate cloned in 2000. Today cloning stem cells is seen as a major area of research. In 2006 the FDA approved mass consumption of cloned meat in the USA. In 2009 the first extinct animal (an ibex) was cloned, but it only lived for seven minutes. On the 7th of December 2011 it was announced that a team from the Siberian mammoth museum and Japan's Kinki University plan to clone a woolly mammoth from a well-preserved sample of bone marrow found in August 2011.

In SF, human cloning is a popular topic and as controversial as in real life. The first large-scale use of clones in a novel appeared in A. E. Van Vogt's 1945 novel The World of Null-A. Aldous Huxley's Brave New World made significant use of clones. C. J. Cherryh won the Hugo for her 1988 novel Cyteen, which is considered a milestone novel on the subject. In visual fiction, cloning is extremely common. The Human Duplicators was an early schlock effort. Woody Allen's Sleeper gained more critical notice, as did The Stepford Wives. Most popular of all was Michael Crichton's "Jurassic Park", dealing with the resurrection of extinct dinosaurs. Other efforts include Star Wars: Attack of the Clones, The 6th Day (a surprisingly serious action-movie exploration of the subject), and "The Island", in a similar vein. On Star Trek, clones were treated as abominations.
RAMA
 
^Again, I don't deny that it's possible to improve the performance of the brain in certain ways. But the study mentioned in that article I linked to suggested that such improvements would come with a cost, that there would be tradeoffs for any gain, and that eventually you'd reach a point of diminishing returns. It's just wishful thinking to assume the brain can be augmented without limit, or that any system can be scaled upward without limit. That's Kurzweil's fundamental mistake, that failure to recognize that not everything can be extrapolated forward indefinitely.

Moore's Law is not an inviolable law of nature, just a description of a process Moore observed in his time. Moore himself never expected it to apply indefinitely into the future; in fact, the cutoff point at which he assumed it would cease applying is already in our past. So you can't automatically assume that computer capacity will continue to scale up indefinitely just because it did so in the past, and you sure as hell can't assume that there are no obstacles to bringing that same unlimited amplification to the human brain, because there are countless other variables you'd need to factor into that equation.

I think Singularity advocates sometimes forget that the Singularity is supposed to be a point beyond which our ability to extrapolate the future fails because we don't have enough information to make any intelligent conjectures. So to claim certainty about what the Singularity will mean is oxymoronic.


I think just about all these qualms have been countered at one time or another in the last 10 years...the last one first: it's absolutely true, and Kurzweil himself makes this statement in his last book (far from being oblivious)...however, it still doesn't mean that we as curious, intelligent beings won't try anyway, as with Charles Stross' Accelerando. There are a few logical extrapolations which seem to make sense but are by no means definitive, as part of the 6 epochs idea:

I believe I answered the exponential limit claim already...exponentials reach limits only until they're surpassed by a new paradigm. My example was processor technology. Critics claimed for many years that there would eventually be a materials limit in Moore's Law, but that too has now been surpassed: http://www.trekbbs.com/showthread.php?t=153184

Kurzweil's response to Allen on exponentials not being a law of nature:

When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as Moore’s law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.
I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen’s descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.
Allen writes that “the Law of Accelerating Returns (LOAR). . . is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are highly predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory as quantified by basic measures of price-performance and capacity nonetheless follow remarkably predictable paths.
If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.
Allen writes that “these ‘laws’ work until they don’t.” Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it’s true that this specific trend continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Already three-dimensional transistors and three-dimensional memory chips have been introduced.
This sixth paradigm will keep the LOAR going with regard to computer price performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain.
Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth.
No one claimed there was no limit to computer/AI processing capacity, but as I already said, this limit is immense, and we can quantifiably predict there will be a time when we can reach it.
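As a small illustration of the thermodynamics analogy in the quoted passage (my own toy example, not Kurzweil's): each individual random walk below is unpredictable, yet the aggregate statistics of thousands of walkers are highly predictable.

```python
# Toy example: unpredictable individuals, predictable aggregate.
import math
import random

random.seed(42)
steps, walkers = 1_000, 2_000

# Final position of each walker after `steps` coin-flip moves of +/-1.
finals = [sum(random.choice((-1, 1)) for _ in range(steps)) for _ in range(walkers)]

mean = sum(finals) / walkers
rms = math.sqrt(sum(x * x for x in finals) / walkers)
print(f"mean final position: {mean:6.2f}   (theory: 0)")
print(f"rms  final position: {rms:6.1f}   (theory: sqrt({steps}) ~ {math.sqrt(steps):.1f})")
```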
 
Just out of curiosity, can the singularity be equated with religion's rapture (a white hole) or a black hole (alternate ultimate reality?) Once again as is usual when talking science with Christopher, I don't know what I'm talking about.
 

There are a lot of metaphors within the singularity that one could claim are self-servingly spiritual; if people want to claim that, then so be it, but that's not what my interest in it is about. This is similar to what has happened with spiritualists and New Age believers latching on to quantum theory. Generally speaking, most of the claims originate from a lack of current language to quantify or explain the changes that may happen.

RAMA
 
My thinking, which is quite limited on this subject, is that we won't know what happened until it is too late. In fact, a black hole could have already occurred and the Earth might have been destroyed many times over. We can't know what it will be like, but I don't see it having positive connotations, and I see a lot of people suffering so that a few can change in the twinkling of an eye into transhumans or whatever.
 
Cloning:

Cloning in biotechnology is a complex discipline where several different processes are used to create copies of DNA, cells, or organisms. It was possibly first accomplished in 1952 on tadpoles, and the first published work came from a procedure performed on carp in 1963. Mammals were cloned in 1986 and 1997, with the first primate cloned in 2000. Today cloning stem cells is seen as a major area of research. In 2006 the FDA approved mass consumption of cloned meat in the USA. In 2009 the first extinct animal (an ibex) was cloned, but it only lived for seven minutes. On the 7th of December 2011 it was announced that a team from the Siberian mammoth museum and Japan's Kinki University plan to clone a woolly mammoth from a well-preserved sample of bone marrow found in August 2011.

And since we've never gotten an extinct animal to live for more than a few minutes -- and that was the best result out of multiple attempts -- there's no guarantee we'll have any better luck with the mammoth. Not to mention all the practical difficulties even if we could successfully pull it off -- what would its habitat be? How could it be raised when it has no parents of its own species and nobody has any idea what its behavior is supposed to be? If mammoths were anything like elephants, they were probably highly social, and we've seen how much damage it does to elephants when they're cut off from healthy social interaction with their own kind.

http://io9.com/5865590/no-we-wont-be-able-to-clone-a-woolly-mammoth-in-the-next-five-years
So, let's recap. First, these bone marrow cells need to be absolutely pristine for cloning to work...and there's no guarantee of that. Next, we need to transplant those cells into African elephant eggs...and many of those will fail. Then, the embryos need to survive the pregnancy...and if 1 in 100 do that, it'd be a massive success. After that, the mammoth needs to be born and survive infancy...again, the odds are stacked against it. Finally, the mammoth clone needs to thrive in a world in which it is completely, absolutely alone...which is hardly a guarantee. And that's not even worrying about the question of this clone giving birth to more mammoths down the line.

Taken all together, the odds that any of us will ever see an adult woolly mammoth with our current levels of cloning technology is probably somewhere between 1 in 10,000 and 1 in a million. And whatever, I'd say the five-year estimate is hugely optimistic - I'd be pleasantly surprised if a live birth of a mammoth happens in the next twenty years, even if it dies almost immediately.

Yes, with this new discovery, we're closer to cloning a mammoth than ever before. The problem is, we're still a long, long, long way away, and in the absence of some major breakthrough in cloning technology, that's likely to remain the case for the foreseeable future.
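Just to show how those stage-by-stage odds compound, here's a rough sketch. Every probability below except the article's ~1-in-100 pregnancy figure is my own illustrative guess, not a number from the io9 piece.

```python
# Illustrative only: all probabilities except the ~1-in-100 pregnancy figure
# are made-up guesses, chosen to show how quickly independent stages compound.
stages = {
    "pristine, clonable bone-marrow cells":           0.10,
    "successful nuclear transfer to an elephant egg": 0.05,
    "embryo survives pregnancy (~1 in 100, per io9)": 0.01,
    "calf survives infancy":                          0.25,
    "survives to adulthood, completely alone":        0.50,
}

overall = 1.0
for stage, p in stages.items():
    overall *= p
    print(f"{stage:48s} p={p:.2f}  cumulative={overall:.1e}")

print(f"overall odds: about 1 in {round(1 / overall):,}")   # ~1 in 160,000
# Guesses like these land inside the article's 1-in-10,000 to 1-in-a-million range.
```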


On Star Trek, clones were treated as abominations.

That's not accurate. If you're referring to "Up the Long Ladder," the Mariposans' cloning was portrayed as flawed because it was their only form of reproduction, but the only thing that was portrayed as immoral was stealing someone's genetic material and replicating them without their consent. It was the lack of consent, the violation, that was condemned, not the cloning per se. There's also "A Man Alone," where Odo stated rather bluntly that killing your own clone was still murder, suggesting it's taken for granted that clones have equal rights. Then of course there are the first clones in Trek history, the giant Keniclius and Spock clones of TAS: "The Infinite Vulcan." Neither of them is treated as an abomination; Keniclius is wrong to abduct and clone Spock without his consent, but the clone itself is accepted as a sapient being with a right to live. The Vorta were all clones, but they weren't discriminated against or vilified on that basis; it was their policies and practices that the protagonists objected to, not their nature. Then of course there's Shinzon, another clone created without the donor's consent, but again, Picard was willing to accept him as a being with a right to exist and tried to bring out the best in him.

So I can't find a single instance where a clone in ST was treated as an abomination simply on the basis of being a clone.
 
I believe I answered the exponential limit claim already...exponentials reach limits only until they're surpassed by a new paradigm. My example was processor technology. Critics claimed for many years that there would eventually be a materials limit in Moore's Law, but that too has now been surpassed: http://www.trekbbs.com/showthread.php?t=153184

RAMA, you make the mistake of assuming that new paradigms will keep appearing, based on the recent past, on the scientific/technological revolution.
In other words, you make the mistake of assuming you can extrapolate forward indefinitely - a mistake Christopher already pointed out to you.


Indeed, one can prove logically that there are not an infinite number of paradigm shifts in our future:

There are a finite number of laws of nature AKA there are a finite number of combinations one can make using them.
Almost all these combinations are useless - they have no useful result and are not technology.
The few combinations that are useful are finite AKA they will not appear ad infinitum.


One erroneous assumption of singularity proponents is forward extrapolation ad infinitum - that there is an infinite number of paradigm shifts/advances possible.

Another one, made by some of them, is the assumption that the frequency of appearance of these infinite paradigm shifts/advances will increase exponentially (which is how Kurzweil came up with 2050 as the date for the singularity).
In many fields, this assumption was already proven wrong.
 
Last edited:
Doesn't Riker kill some clones without a thought?

Gestating clones that were far from being complete and conscious -- basically still embryonic. And like I already said, it wasn't because they were clones per se, but because they were taken from his and Pulaski's genetic material without permission, because he and Pulaski had been violated by essentially being forced to reproduce without their consent. It was an allegory for reproductive choice and abortion rights.
 