This brings up something I've wondered about with regard to encryption. First, let me be clear that I don't know much about encryption and only have an interest in the topic, but no actual education in it. You say here that using multiple short keys is the same as using one long key whose length is the sum of the short ones. Okay, I get that. But that is based on the idea that each of those short keys is used with the same algorithm, right? What if each of those layers used a different algorithm?
To put it in terms more easily understandable to a layman like myself, consider a coded message from back in the days before digital computers, when letter-substitution ciphers and the like were common. What if, instead of just a substitution cipher, a multi-step approach was used--something like this: (1) a character-change cipher was applied, (2) the characters were then scrambled based on some algorithm, and (3) the (now changed and scrambled) characters were hidden in a larger block of characters according to some other algorithm. To decrypt the information, the reader would first have to identify the characters that were part of the message, then unscramble them, then change each character back to the correct one (I suppose that last step could be performed at any time). It seems to me that such a scheme would be much more difficult to crack, because any one step on its own only yields gibberish. Meaningful information is only revealed after correctly applying all three decryption steps.
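Here's a rough sketch in Python of the kind of thing I mean (the specific choices--a plain alphabet shift, a keyword-driven scramble, and random filler characters--are just made up for illustration):

```python
import random
import string

def substitute(text, shift=3):
    """Step 1: character-change cipher (a plain alphabet shift)."""
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in text)

def scramble(text, keyword):
    """Step 2: reorder the characters with a keyword-derived permutation."""
    order = sorted(range(len(text)), key=lambda i: (keyword[i % len(keyword)], i))
    return "".join(text[i] for i in order)

def hide(text, seed, total=60):
    """Step 3: bury the characters at seed-determined spots in random filler.
    The receiver re-creates the same spots from the shared seed."""
    rng = random.Random(seed)
    spots = sorted(rng.sample(range(total), len(text)))
    block = [rng.choice(string.ascii_uppercase) for _ in range(total)]
    for spot, ch in zip(spots, text):
        block[spot] = ch
    return "".join(block)

msg = "MEETATDAWN"
print(hide(scramble(substitute(msg), "SECRET"), seed=42))  # 60 chars of gibberish
```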
Could something similar be done with digital communication, using multiple layers of encryption with different algorithms? Would that still be the same as just using a key equal in length to the sum of the shorter keys?
The problem is that you want something the computer can do quickly. This needs to work in real-time, after all.
Stepping back for a moment, remember that digital cryptography works because encrypting and decrypting something with a known key is computationally easy, while decrypting without the key (that is, trying to break or guess it) is computationally very hard.
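To put rough numbers on that asymmetry: decrypting with the key is a single cheap operation, while guessing a 128-bit key means searching a space of 2^128 possibilities. A quick back-of-the-envelope in Python (the billion-guesses-per-second rate is just an assumed figure):

```python
keyspace = 2 ** 128            # possible 128-bit keys
guesses_per_second = 10 ** 9   # assumed: a billion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years_worst_case = keyspace / (guesses_per_second * seconds_per_year)
print(f"{years_worst_case:.1e} years")  # ~1.1e+22 years (half that on average)
```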
Mixing algorithms together doesn't buy you as much as you'd think, because while the layers add computational work, they don't really add much security. The classic example is double encryption: encrypting with two independent n-bit keys looks like it should give 2n bits of security, but a meet-in-the-middle attack (encrypt the plaintext under every possible first key, decrypt the ciphertext under every possible second key, and look for a value that matches) breaks it in roughly 2^(n+1) operations instead of 2^(2n). It's a diminishing return.
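Here's that meet-in-the-middle attack run against a deliberately weak toy cipher I made up for illustration (an 8-bit key, XOR plus a bit rotation--nothing like a real cipher). Naively, the doubled toy cipher has a 16-bit keyspace, but the attack only ever does two sweeps of 256 operations:

```python
def rotl(b, n):
    """Rotate an 8-bit value left by n bits."""
    return ((b << n) | (b >> (8 - n))) & 0xFF

def enc(byte, key):
    """Toy 8-bit 'block cipher': XOR with the key, then rotate. Not secure."""
    return rotl(byte ^ key, 3)

def dec(byte, key):
    """Inverse of enc: rotate right, then XOR."""
    return (((byte >> 3) | (byte << 5)) & 0xFF) ^ key

p1, k1, k2 = 0x41, 0x5A, 0xC3
c1 = enc(enc(p1, k1), k2)  # double encryption with two independent keys

# Meet in the middle: 2 * 2^8 cipher operations instead of 2^16 guesses.
forward = {enc(p1, k): k for k in range(256)}         # E_k(p1) for every k
candidates = [(forward[dec(c1, k)], k) for k in range(256)
              if dec(c1, k) in forward]

# With a one-byte block, one known pair leaves false positives; a second
# known plaintext/ciphertext pair filters them out.
p2 = 0x42
c2 = enc(enc(p2, k1), k2)
survivors = [kp for kp in candidates if enc(enc(p2, kp[0]), kp[1]) == c2]
print((k1, k2) in survivors)  # True: both keys recovered
```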
Nevertheless, what you are talking about actually is used in some instances. TrueCrypt and Rubberhose both have the ability to use what are called hidden and nested volumes. It works like this: you create a decoy container, which just looks like a file. It's encrypted, and it has some stuff in it, but it's stuff you don't actually care about. Inside it is another container, which holds the real data you do care about. Because the entire file (containing both the stuff you care about and the stuff you don't) is encrypted, the contents are indistinguishable from random noise if you don't have the key, and there is no way to tell that there even is a hidden, nested volume unless you have the key for it. In this case, you may or may not mix algorithms--there's really nothing to be gained by doing so, since a good key is much, much more important.
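If you want to see the shape of the idea, here's a toy sketch in Python. To be clear, this is not how TrueCrypt actually lays out its containers, and the SHA-256 counter-mode keystream is just a stand-in for a real cipher:

```python
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. Illustration only."""
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

decoy_key, hidden_key = os.urandom(16), os.urandom(16)

# First half of the container: decoy data under the key you'd hand over.
decoy = keystream_xor(decoy_key, b"boring decoy files".ljust(512, b"\0"))
# Second half: the real data under a second key. Without hidden_key it is
# indistinguishable from the random-looking free space any encrypted
# container has, so there's no way to prove it is even there.
hidden = keystream_xor(hidden_key, b"the real secret".ljust(512, b"\0"))

container = decoy + hidden
print(keystream_xor(decoy_key, container[:512]).rstrip(b"\0"))
print(keystream_xor(hidden_key, container[512:]).rstrip(b"\0"))
```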
When you get right down to it, the most trusted encryption available is AES (the cipher originally named Rijndael, which was standardized as AES). It is not known to have any practical internal weaknesses. Novel encryption algorithms are a bad idea because they may be broken (and you just don't know it yet), so you don't use those. Other algorithms are inferior or broken for one reason or another, so layering them in at best gains you a tiny bit more security, but more likely opens you up to any number of attacks. "A chain is only as strong as its weakest link" applies here.
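So in practice, you pick one well-vetted AES implementation and stop there. A minimal sketch using the third-party cryptography package (pip install cryptography), here with AES-256 in GCM mode, which also authenticates the ciphertext:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aes = AESGCM(key)
nonce = os.urandom(12)  # 96-bit nonce; must never repeat for a given key

ct = aes.encrypt(nonce, b"attack at dawn", None)
print(aes.decrypt(nonce, ct, None))  # b'attack at dawn'
```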
The other big concerns in encryption are hashing and key-expansion algorithms. Hashing takes an input of any size and transforms it into a fixed-size (and usually much smaller) output. Strictly speaking collisions must exist, but with a good hash they are computationally infeasible to find, so each output is unique in practice. The idea is that you can say "plaintext A is represented in short form by hash B," which makes hashes great for validation, but with only hash B you can never get back plaintext A, so a hash is also informationally secure: it tells you nothing about the plaintext unless you happen to have the plaintext already. (Bad hashing algorithms fail at this, or are flawed to the point that you can produce a given hash from arbitrary plaintext, which makes them useless for validation.)
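For instance, with SHA-256 (a widely trusted hash), any input collapses to a fixed 256-bit output, a one-character change produces a completely unrelated output, and there is no way to run it backwards:

```python
import hashlib

a = hashlib.sha256(b"plaintext A").hexdigest()
b = hashlib.sha256(b"plaintext B").hexdigest()  # one character changed
print(a)  # 64 hex chars, no matter how big the input was
print(b)  # shares nothing recognizable with the hash above

# Validation: anyone who already has the plaintext can recompute the
# hash and compare; anyone with only the hash learns nothing.
assert hashlib.sha256(b"plaintext A").hexdigest() == a
```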
Then there's key expansion. Odds are, what you want to encrypt is bigger than the key you're using, so you need a way to "expand" the key to cover the entirety of the plaintext. This is a complex area, but suffice it to say that the slightest weakness in your key-expansion algorithm can ruin the whole encryption method.
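A toy demonstration of why that matters (the 8-byte key and the SHA-256 counter trick are made-up stand-ins, not a real design): repeating a short key Vigenere-style is the classic broken expansion, and repetition in the plaintext shines straight through it.

```python
import hashlib
from itertools import cycle

def xor(data: bytes, stream) -> bytes:
    return bytes(b ^ s for b, s in zip(data, stream))

plaintext = b"HEADER--" * 4  # repetitive plaintext, very common in practice

# Broken expansion: just repeat the short key over and over.
weak = xor(plaintext, cycle(b"8bytekey"))
print(weak[:8] == weak[8:16])  # True: the repetition leaks right through

# Better: stretch the key with a hash in counter mode so the keystream
# never repeats. (A toy stand-in for a real stream cipher.)
def expand(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

strong = xor(plaintext, expand(b"8bytekey", len(plaintext)))
print(strong[:8] == strong[8:16])  # False: identical blocks no longer match
```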
All this is to say that mixing and matching encryption methods just doesn't really get you much, and has a strong chance of making you worse off.