• Welcome! The TrekBBS is the number one place to chat about Star Trek with like-minded fans.
    If you are not already a member then please register an account and join in the discussion!

Which Sci-fi future will we reach?

^ I think you're wrong.

Silicon technology is reaching its limits. Mind you, I remember them saying that ten years ago - that silicon chips had reached their limit. And yet we carried on. Optical technology is coming, and other technologies too.

I'm surprised so many here doubt that things will progress. Read The Spike by Damien Broderick. Then we'll talk.

Or we'll just have to wait and see. :)

Developing optical processing tech is a LOT harder than simply making silicon chips ever smaller.
It requires creativity, a few geniuses along the way, and billions in research money. And even then, it's not certain.
So far, optical technology is FAR from ready. And that will continue for some time to come.

I already mentioned similar past examples.
Here's one - in the '60s, everyone believed that by now we'd have a very effective method of space propulsion. Fifty years later, we still have chemical rockets. You see, developing such revolutionary propulsion is a lot harder than the overly optimistic predictions of times past would have one believe.

As for the theoretical limit of development a civilization can reach - the laws of physics impose quite a few limitations.
For example - entropy, conservation of energy.


The clearest argument against the so-called singularity is that scientific development has never been exponential at any point in history. Most likely, it never will be.
Not that it'll be our problem, one way or the other - we'll be long dead by the time the first true AIs are constructed.
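For what it's worth, the "never exponential" point is easy to illustrate with a toy model: an exponential curve and a logistic (S-shaped, saturating) curve look nearly identical early on, and only diverge later. All the numbers here are invented purely for illustration:

```python
import math

def exponential(t, r=0.5):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    # Logistic growth: looks exponential early, then saturates at the ceiling K.
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on the two curves are almost indistinguishable;
# later, one explodes while the other flattens out.
for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

The catch, of course, is that you can't tell which curve you're on while you're still in the early part.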
 
I would imagine something like either nuBSG or Babylon 5... but without aliens, FTL, or energy weapons.
 
Why is it that there seem to be people in the defense departments of every modern nation who have some perverse desire to duplicate technology from old science fiction that got mankind destroyed?
 
^ as pointed out, it's not the faster hardware that will make computers smarter, but better software, and it hasn't materialized.

This cannot be stated clearly enough.

Our computing power is many orders of magnitude greater than where we started, but at this point we've only managed to use it for massive number-crunching and video games. AI research is active but very slow-going. We still take a very brute-force approach to intensive computational tasks, because it's much easier than engineering intelligent decision-making into a computer.

One area where animals greatly surpass computers is pattern recognition. We can recognize a familiar item--a face, a word, an object--in a fraction of a second without really even thinking about it. For a computer, such recognition is done by comparing the item in question to essentially every other item it knows about, or at least a large number of similar items, in order to find a match. While this can be done quickly on modern hardware, it's only because the hardware is powerful, not because we've increased the intelligence or efficiency.
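A rough sketch of that compare-against-everything approach, with made-up names and feature vectors (real systems use far fancier features, but the brute-force shape is the same):

```python
# Hypothetical sketch: "recognition" as nearest-neighbour matching.
# Each known item is a feature vector; recognizing an unknown item means
# comparing it against EVERY stored item. It's fast on modern hardware
# only because the hardware is fast, not because the method is clever.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(unknown, library):
    # library: name -> feature vector (all names and vectors here are invented)
    best_name, best_dist = None, float("inf")
    for name, features in library.items():  # one comparison per known item
        d = distance(unknown, features)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

faces = {"alice": (0.9, 0.1, 0.4), "bob": (0.2, 0.8, 0.5)}
print(recognize((0.85, 0.15, 0.45), faces))  # closest match: "alice"
```

Note the cost grows with the size of the library - exactly the opposite of a brain, which recognizes a face in constant time no matter how many faces it knows.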

Things like that outline my skepticism of the Singularity hypothesis. While I think it would be incredibly awesome to reach that level of technology and experience a transhumanist future, it just doesn't look like we are anywhere near that level. Our computer science would need to make a few giant leaps, and we would need to find a permanent solution to our energy problems. Just making our computers more and more powerful in terms of their MIPS rating is not going to cut it.
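To put a number on the "more MIPS won't cut it" point: if a brute-force task costs 2^n operations, even a thousandfold faster machine only buys you a handful of extra units of problem size. A toy calculation (the op rates are assumptions, not real benchmarks):

```python
import math

def max_n(ops_per_second, seconds=1.0):
    # Largest n such that 2**n operations finish inside the time budget.
    return int(math.log2(ops_per_second * seconds))

# Doubling hardware speed buys exactly ONE extra unit of problem size;
# a 1024x speedup buys only ten.
for speedup in (1, 2, 1024):
    print(speedup, max_n(1e9 * speedup))  # 1 -> 29, 2 -> 30, 1024 -> 39
```

Against exponential-cost problems, hardware gains are nearly irrelevant; only better algorithms move the needle.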
 
It would certainly be interesting, but I do have some reservations about building something that could end up with human-like intelligence. Would it be ethical to destroy it or essentially "enslave" it to do our bidding?
 
While I agree with you for the most part, you're ignoring any/all negative outside influences. Obviously, technological evolution can't happen in a vacuum, so it's impossible to say.

That millennium-plus-long technological void you spoke of wasn't limited to just weaponry. It was across the board. The reason is that those in power had a vested interest in keeping the populace ignorant and in its place.

The only anomaly in the entire time period was the printing press and that was only "allowed" to succeed because it meant God's Word could then be spread to more people.

Rocket propulsion, despite eloquent Presidential speeches, was only promoted for its perceived military advantage. Once it was concluded that advantage wasn't practical or worth it, the only thing that remained was economics - in which there is nothing but negative results. For the people that matter, "research" for its own sake is pretty low on the list. But rest assured that if a resource that can be exploited on Mars is discovered, the technology to get us there will advance ten-fold within a decade.

The internal-combustion engine is pretty obvious, so I won't even bother. But for the record, research into making it more efficient, as far back as the '80s, was immediately quashed. Who's to say how far those engines would have come had they been allowed to evolve all this time?

Ironically, however, there is an opposite effect with computer-related technology. It's more "economically sound" to advance it forward as fast as possible--the initial cost is seen as insignificant. Faster, more advanced technology has allowed (forced?) the lemmings to do more work faster and more accurately. That is, of course, the work that isn't automated. Advanced AI can process limitless amounts of information in a very short time--and for next to no cost. So it's economically advantageous to obtain it as soon as possible. That's why it has advanced as fast as it has and will continue to do so.

Just look at its effect on society. The ability now exists for people to do work that at one time had to be done on-site halfway across the globe.

And even for the work that can't be done remotely, people are no longer incommunicado with their employers when they go home for the day, go on vacation, or what have you. They can be reached (and do their work) at any time and any place.

It's been suggested that people who get paid for a typical 40 hours of work a week actually do, on average, closer to 60.

You think the people who matter aren't going to milk this cow for all it's worth? If milking that cow means pushing the technology envelope as fast as possible, that's what will happen.
 
Robert Maxwell,

It would certainly be interesting, but I do have some reservations about building something that could end up with human-like intelligence.

Agreed, and for a number of reasons. First, it could endanger us; second, there would be ethical ramifications in creating a sentient being just to perform work for us.

Would it be ethical to destroy it or essentially "enslave" it to do our bidding?

If it endangered us, it would be acceptable to kill it, just as we would kill a person who attempted to murder us. However, it would be completely unethical to create an artificial being solely for the purpose of making our lives easier. Once sentience enters the equation, so does the issue of involuntary servitude: forcing such a being to work for us would essentially be slavery, unless it had rights and freedoms (which would defeat its purpose of working for our benefit without our having to pay it or worry about its needs and feelings, as we would a person). But ignoring its sentience would be exactly as unethical as enslaving any sentient being - possibly worse, since it would have been created from the very beginning solely to be a slave. Ignoring its sentience would also mean turning our backs on reality, which is never a good thing.
 
If history repeating itself is any indicator of the future then it's more likely the current superpowers will fall under their own bloated weight.

Today's greed and nonchalance will fail to address the energy needs of future generations.

We will probably go full circle and return to a much simpler way of life .. with a much smaller global population more akin to post apocalyptic scenarios rather than high tech ones.
 
It would certainly be interesting, but I do have some reservations about building something that could end up with human-like intelligence. Would it be ethical to destroy it or essentially "enslave" it to do our bidding?

Yes! It's all part of the pattern! It just repeats over and over. The next step would be inventing derogatory words for the "machines". After that .. they'll gain their independence and freedom. Next .. the derogatory word will be placed onto the Politically Incorrect list.

See "History repeating itself" above .. ;)
 
Organic computing. That's the way forward.

Exactly. It was the way of the past ... and will be again.

Nb: By "way of the past" .. I mean humans using their "organic" brains to add up their shopping lists .. instead of the calculator in their phones.
 
I do agree that organic computing will be essential to implement the things we take for granted--creativity, intuition, pattern-recognition, etc. But we also need a leap in that area. Neural networks haven't exactly lived up to the hype.
 
Maybe they are using the wrong scale in trying to imitate an organic neural network/brain. Maybe they should think "big" at first.

Early computers were huge.

It may be more doable to produce a huge organic computer first .. then scale it down later.

History again.

No jokes about "Cyberspace's Biggest Loser" thanks :)
 
I do agree that organic computing will be essential to implement the things we take for granted--creativity, intuition, pattern-recognition, etc. But we also need a leap in that area. Neural networks haven't exactly lived up to the hype.

About neural networks - what is the cause of their inefficiency?
The hardware, or the algorithms used?
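My understanding is that, historically at least, it was mostly the algorithms. The classic example: a single-layer perceptron can learn AND but can never learn XOR, no matter how fast the machine running it - and getting the training of deeper networks to work well is what took decades. A minimal sketch of that algorithmic (not hardware) limit:

```python
# A single-layer perceptron trained by the classic error-correction rule.
# AND is linearly separable, so it converges; XOR is not, so no amount of
# training (or hardware speed) can ever get all four cases right.

def train_perceptron(samples, epochs=50, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    # Return the learned classifier as a function.
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

learned_and = train_perceptron(AND)
learned_xor = train_perceptron(XOR)
print([learned_and(x, y) for (x, y), _ in AND])  # [0, 0, 0, 1] - learned
print([learned_xor(x, y) for (x, y), _ in XOR])  # never matches [0, 1, 1, 0]
```

Faster hardware just runs those 50 epochs quicker; it took a different algorithm (hidden layers plus a way to train them) to get past XOR at all.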
 