September 15 2012, 12:57 PM   #138
Mars
Captain
 
Re: Envisioning the world of 2100

newtype_alpha wrote:
Mars wrote:
So tell me: why can't you run a nuclear reactor for 5000 years?
Because running at full power, a typical fuel rod will only last for 20 years (25 if you're lucky) before it decays to the point of no longer producing usable heat and subsequently becoming a serious radiation hazard.
Ah, but you forget: in space, fuel rods that are no longer usable can be tossed overboard and never seen again. You don't need to store them anywhere or worry about their radiation.

And that's before you take into account neutron breakdown of shielding materials and the piping of the heat exchangers, which over time become brittle and have to be replaced considerably more often.
Well, the nuclear power plant will only be at full power for 20 years out of the 5000-year voyage; for the rest of the voyage it will only be used to provide light and heat for the habitat. There is no other power source available in interstellar space until the discovery of fusion.
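
A minimal back-of-the-envelope sketch of that duty-cycle argument, for illustration only: it assumes fuel burnup scales linearly with power output, picks an arbitrary 1% hotel load for the habitat's light and heat (not a figure from the thread), and ignores decay heat, fission-product poisoning, and the embrittlement issue raised above.

[CODE]
# Illustrative sketch: assumes fuel burnup scales linearly with power output
# and ignores decay heat, poisoning, and material degradation.
FUEL_LIFE_AT_FULL_POWER = 20.0   # years a fuel load lasts at 100% power (figure from the thread)
VOYAGE_YEARS = 5000.0
FULL_POWER_YEARS = 20.0          # years actually spent at full power (figure from the thread)
CRUISE_POWER_FRACTION = 0.01     # assumed hotel load for light and heat (not from the thread)

# Express total energy demand in "full-power years".
full_power_equivalent = FULL_POWER_YEARS + (VOYAGE_YEARS - FULL_POWER_YEARS) * CRUISE_POWER_FRACTION
fuel_loads_needed = full_power_equivalent / FUEL_LIFE_AT_FULL_POWER

print(f"Full-power-equivalent years: {full_power_equivalent:.0f}")   # ~70
print(f"Fuel loads needed (naive):   {fuel_loads_needed:.1f}")       # ~3.5
[/CODE]

Under those assumptions the fuel problem shrinks to a handful of core loads over the whole voyage, though the shielding and heat-exchanger embrittlement mentioned above would still force periodic overhauls regardless of how gently the reactor is run.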

You would essentially have to overhaul the entire reactor every ten years of continued operation, replacing the fuel rods every other overhaul. That is no easy task, even for a machine.
If humans can do it, a machine can do it; machines will probably already be doing most of this work on nuclear reactors anyway.

In other words, your robots will have to completely rebuild the entire reactor two hundred and fifty times before the end of the voyage. God help you if you've got multiple reactors on board.
So the task seems to me to be more than a calculator brain could handle; you'd need an AI to oversee things, as you're not going to get a human crew to do this for 5000 years, and pushing the ship to faster speeds has its own problems. I'd say that, compared to building a starship that can reach speeds of up to 10% of the speed of light, building a slow starship like this would be easy, even when these things are taken into consideration. For the price of one speedy starship, you could probably build 100 of the slow ones.

Which goes to the overall point of this being a fundamentally impractical endeavor: that's a LOT of new capabilities being developed for a space craft that doesn't actually accomplish any concrete goal for anyone. Your stated goal is to ensure the survival of the human race 5000 years in the future, yet the sheer massive amount of resources that would be needed for a project this ambitious could be more efficiently used to eradicate world hunger, terraform Mars and tap the methane lakes of Titan to provide the world with an inexhaustible energy supply. It's a highly expensive and complicated solution to a problem that may or may not even exist.
But slow starships are relatively cheap to build. This one would be the size of an O'Neill habitat, and not much more expensive to build; it can house 10,000 people when fully occupied, and an ion drive would accelerate it up to 300 km/sec and then slow it down again at the end of the journey. That is not much in the realm of starships; certainly terraforming Mars would be more expensive than this.
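
For what it's worth, the numbers quoted here are at least self-consistent. A quick sanity check, assuming a constant 300 km/sec cruise speed and ignoring acceleration time and relativistic effects (both negligible at 0.1% of light speed):

[CODE]
# Sanity check on the figures quoted above. Assumes constant cruise speed;
# acceleration time and relativity are ignored (negligible at 0.1% of c).
C_KM_S = 299_792.458           # speed of light, km/s
CRUISE_KM_S = 300.0            # slow ship's cruise speed (from the post)
FAST_FRACTION_C = 0.10         # the "speedy" 10%-of-c starship (from the post)
VOYAGE_YEARS = 5000
SECONDS_PER_YEAR = 3.156e7

distance_km = CRUISE_KM_S * SECONDS_PER_YEAR * VOYAGE_YEARS
light_years = distance_km / (C_KM_S * SECONDS_PER_YEAR)
print(f"Distance covered in {VOYAGE_YEARS} years: {light_years:.1f} light-years")  # ~5 ly

# Kinetic energy per kilogram scales with v squared, so the squared speed
# ratio is a crude lower bound on the propulsion energy cost per unit mass.
speed_ratio = (FAST_FRACTION_C * C_KM_S) / CRUISE_KM_S
print(f"Speed ratio:          {speed_ratio:.0f}x")      # ~100x
print(f"Energy ratio per kg:  {speed_ratio**2:.0f}x")   # ~10,000x
[/CODE]

So 5000 years at 300 km/sec does cover about five light-years, roughly the distance to the nearest stars, and the fast ship needs on the order of ten thousand times the kinetic energy per kilogram. That covers only the kinematic side of the cost comparison; it says nothing about life support, shielding, or the millennia of maintenance discussed above.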

As for world hunger, AIs alone could solve this problem: AIs could grow the food, AIs could do all the work, and the rest is just a matter of redistribution. If we can get enough AIs to outnumber the human population, the starving millions need not even work for their food. I don't think there is any chance of the human race dying of starvation; of gluttony, perhaps.

I do think that humans may need to work to stay healthy; having machines do everything for them could pose a long-term risk to their survival, which is why establishing a distant colony of humans is important, and isolation from the rest of humanity is also important.

Mars wrote:
The AI can slow down his consciousness so he won't get bored.
And you know this how?


Neither will your generation ship if the AIs decide to chase after it. Or, for that matter, if the pilot AI gets an email from Earth containing the Cyberdyne Manifesto and decides to turn around and head back.
Machines would be in the same peril from obsolescence as humans. I think the event would be more like a forest fire; chances are the AI wouldn't spend much effort looking for remote spaceships in interstellar space, and in any case being far away from the conflagration would be better than being right in it.

Right, because PUTTING AN AI ON THE GENERATION SHIP is isolation enough.


Because in the collected sum of mankind's knowledge about itself, its world, the solar system that contains the world and the immediate vicinity of our stellar neighborhood, there is no reason whatsoever to believe that it WON'T.

More importantly -- and more relevantly to this thread -- I, like most human beings, don't give a damn one way or the other what might happen five thousand years from now. This is a thread about the world of 2100, less than a century into the future, at a time when my son will be watching his grandchildren go on to take meaningful careers.
Well, this ship can certainly be launched by 2100; that is, it can be built in that time frame with that time frame's technology, so it is an appropriate subject. Maybe you don't care if the human race perishes, but I do, and I'm examining what technologies in the late 21st century might be available to save it.

So if I'm to worry about the future at all, it'll be whether or not humanity is going to survive for the next FIFTY years. Is a generation ship on a 5000 year voyage a good way to insure that? No? Then why the hell would I want to spend money building one?
Both 2100 and the year 7100 are beyond my expected life span, so I worry about them equally. Because I don't expect to personally live to see the year 2100, I remove myself from immediate consideration; it is the rest of humanity, those who will come after me, that I am concerned about. Since I will see neither 2100 nor 7100, I am not more concerned about one than the other; they are of equal concern to me, because this is about the future of the human race.

Why does one buy life insurance? This is life insurance for the human race, and I think it is worth some effort and expense.

On the other hand, it's relatively easy to control the six or seven thousand people on the entire planet who are even remotely smart enough to attempt to build an AI. We ALREADY do this with nuclear non-proliferation.
I hate to tell you this, but it's not working: North Korea has acquired the nuclear bomb and Iran is acquiring it, and shunning them or refusing to trade with them is not going to stop them. The only way to control the entire planet is with a world government, and that is another fate I wish to avoid: the human race being controlled and turned into drones, a la "the Borg". Humans need to spread out over time and space so they don't get turned into an ant colony. A world government trying to control us all puts all our eggs in one basket; if all our decisions get made at the top of government, then the wrong decisions could wipe us out!

On the other hand, you don't have a shred of evidence that the emergence of humanlike AI is even possible, let alone inevitable, let alone that any negative consequence would follow for humanity if it was.
It's hard to predict the unpredictable. AIs will be quite clever; they'll be agents of change, and you can't predict what that change will be or what it will do. The more change there is, the greater the risk to the human race, so we have to spread the human race out over time and space to prevent us all from getting caught in one of our mistakes.

It's not going to work, because the one who violates the treaty will always be at an immediate advantage.
Until the next biggest country bombs him back into the stone age for violating the treaty.
And we're back to starting a nuclear war, aren't we?
A lot more can go wrong with a community of billions of humans than with an isolated starship with frozen embryos traveling the void between the stars.
Which is trivially true. The problem with this statement is that just about anything that can go seriously wrong on a space ship will usually result in the destruction of that space ship. With a population of 7 billion, a global-scale catastrophe could annihilate 99% of the human race and that would still leave more survivors than most countries have people.
It's not the numbers but how far they are spread out that matters. If you can pack one billion human beings within the radius of one nuclear bomb blast, then they all die if one goes off; it is the limited scope of humanity on one planet that is the danger. The thing about spaceships, especially this one, is that if it gets destroyed, no one dies; it is simply that no one gets born afterwards. With multiple spaceships, you limit your chance of one all-consuming disaster destroying us all.
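
The multiple-spaceships point is the standard independence argument. A minimal sketch, assuming each ship is lost independently with the same probability over the whole voyage (an assumption a shared design flaw would break):

[CODE]
# Illustrative only: assumes ship losses are independent events with a
# common per-ship probability; a shared design flaw would violate this.
def prob_all_lost(per_ship_loss_prob: float, num_ships: int) -> float:
    """Probability that every one of num_ships independent ships is lost."""
    return per_ship_loss_prob ** num_ships

for n in (1, 2, 5, 10):
    print(f"{n:2d} ships, 50% loss chance each -> all lost: {prob_all_lost(0.5, n):.4f}")
# Even at coin-flip odds per ship, ten ships leave only a ~0.1% chance that
# none survives, which is the eggs-in-one-basket point being made here.
[/CODE]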