Old September 15 2012, 02:35 AM   #128
Mars
Captain
 
Re: Envisioning the world of 2100

newtype_alpha wrote:
Mars wrote:
The only thing that requires much development are the AIs and artificial womb technology.
And nuclear fission reactors in space. And the technology to freeze an embryo that will remain viable for thousands of years. And the techniques to build and maintain an O'Neill colony, let alone the infrastructure needed to begin construction in the first place. And -- most importantly -- a DESTINATION.

As for the purpose of an interstellar colony of humans, I would think that would be obvious, as an insurance policy for the survival of the human race.
Which, much like a laptop computer half a mile wide, is a highly impractical way of achieving that or any other goal. Simply removing the step of sending your space ark to another solar system would increase its feasibility by an order of magnitude; taking it out of orbit, unrolling it and parking it on the moon removes the need for artificial wombs, cryonics or sentient AIs at all.
The problem is that it may not be left alone by the billions of other humans and sentient AIs also inhabiting the system, and if something were to destroy humanity, the ship might get caught up in it. Staying in the Solar System offers insufficient isolation; lost in the depths of interstellar space, the ship might simply be forgotten. I figure that if you give the human race another 5000 years and it does not do itself in, then it should be fairly safe, and there might even be a welcoming committee waiting at Alpha Centauri. But insurance policies are for when the worst happens; if the worst doesn't happen, there should be humans awaiting the ship's arrival at Alpha Centauri.

That's what people mean when they say "practical." We have, for example, directed energy weapons like the THEL or the ABL that can destroy targets with laser beams; they are not, however, PRACTICAL battlefield weapons, because the amount of infrastructure and hardware needed to make them work far outweighs any possible benefit of the technology.

All this change wrought by AI technology may threaten the survival of the human race...
So your solution to the worrying trends of AI development is to place humanity's insurance policy under the direct control of... an AI?
Because the AI doing the tending won't itself evolve. It will experience time at a 100:1 ratio for most of the voyage, so the trip will seem to take only 50 years from its perspective, and there won't be any competing, more sophisticated AIs for it to worry about; it will be just as isolated as the humans will be when they arrive. The AI will miss about 1000 generations of subsequent AI programs if civilization lasts that long, and if not, it will represent the hope for a continued human future.
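As a rough sanity check on that subjective-duration claim, the arithmetic is just cruise time divided by the clock ratio. The 5000-year voyage and the 100:1 slowdown are the numbers used in this thread; the snippet is only an illustration of that division:

```python
# Subjective time experienced by the ship's AI, assuming it runs its
# clock at a fixed 100:1 slowdown for the entire cruise (the thread's figures).

CRUISE_YEARS = 5000   # total voyage time assumed in the discussion
CLOCK_RATIO = 100     # real years elapsed per subjective AI year

subjective_years = CRUISE_YEARS / CLOCK_RATIO
print(subjective_years)  # -> 50.0, matching the "50 years" above
```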

Do you think insurance policies are a bad investment?
Only if you pay more than you should, to insure against things that will never happen, for a payout in a form you can never spend. It's like buying volcano insurance with a $700/month premium that automatically names your great-grandson as the beneficiary of an '89 Ford Pinto.
How can you know that the human race will survive the next 5000 years? I think the greatest danger is self-destruction, not an asteroid strike or a supernova. To survive whatever calamity wipes out the human race, a part of it will have to be very far away, so that the rest of humanity, busy trying to destroy itself, will not concern itself with that part. You know what nuclear war is, right? Or runaway nanotech, or AIs taking over, or humans uploading into machines and then something happening to those machines so they no longer function, or some kind of fantastic war being fought. Use your imagination; it's best to be far away from whatever may occur in order to be safe.

Let's be specific here. Almost anything you can think of to "insure" against the extinction of the human race would be mitigated far more effectively by targeting the thing itself. If rampant AIs are the potential problem, the simplest solution is to STOP BUILDING THEM.
Well, an isolated starship is not going to be building much of anything; it will probably do self-maintenance and that is it. It's much harder to monitor billions of human beings and control them all so that none of them build AIs, because for each individual it may prove advantageous to build one, even though taken together those AIs may threaten humanity by their existence. Humans may become too technology-dependent and atrophy to nothingness. The humans arriving at the new system will effectively be late-21st-century humans with late-21st-century technology, so they start at square one.

Get everyone to ban AI research and sign treaties that isolate countries that don't.
It's not going to work, because whoever violates the treaty will always be at an immediate advantage.

If the problem is pandemics, asteroid impacts, nuclear war, alien invasion, Lady Gaga, the second coming of Jesus... all of those have very specific solutions, and the combination of all of them would be less expensive and more effective than developing a generation ship.
A lot more can go wrong with a community of billions of humans than with an isolated starship carrying frozen embryos through the void between the stars; very little is likely to happen during that nearly 5000-year dormant cruise. Back on Earth and in the Solar System, billions of humans and AIs will be inventing dangerous new technologies we can't even anticipate. The isolated starship won't be a part of that, and the Solar System won't concern itself with it.
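For scale, here is a sketch of the average cruise speed such a voyage implies. The ~4.37 light-year distance to Alpha Centauri is a standard figure; the 5000-year transit time is the one assumed in this thread:

```python
# Implied average cruise speed for a ~5000-year trip to Alpha Centauri.

LIGHT_YEARS = 4.37     # approximate distance to Alpha Centauri
CRUISE_YEARS = 5000    # transit time assumed in the discussion
C_KM_S = 299_792.458   # speed of light in km/s

fraction_of_c = LIGHT_YEARS / CRUISE_YEARS   # fraction of light speed
speed_km_s = fraction_of_c * C_KM_S          # same speed in km/s
print(f"{fraction_of_c:.5f} c  ~  {speed_km_s:.0f} km/s")
```

That works out to a bit under 0.1% of light speed, on the order of 260 km/s, which gives a sense of why the dormant cruise dominates the mission timeline.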