I'm not a physicist or computer scientist, so if anyone reading this is one of the above, I welcome any correction. In essence, current computers are limited to either on or off, the famous 0's and 1's, in each bit. Quantum computers can do on, off and a combination of on and off, in each qubit.
Basically, let's look at a 4-bit system, 0000 to 1111, with everything in between (e.g. 0001, 0010, 0101). Let's assume you think the solution is 0110 (in decimal that's the number 6); a traditional computer has to test each combination of numbers to prove that the answer is indeed 0110. A 4-qubit quantum computer can, in principle, explore all of the solutions at once, since each qubit can be both 0 and 1 at the same time, so it takes one run of the system to find your answer. The longer the number, the bigger the advantage of quantum computing. There's also the possibility of using a system that's more than binary, with 4 or 8 possible states per unit (qudits rather than qubits), meaning a 4-qudit computer could check 4,096 solutions at once (8^4) instead of just 16 (2^4).
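To make the counting concrete, here's a toy sketch of the classical brute-force search described above. The `check` function is a hypothetical stand-in for whatever test tells you a candidate is the right answer; the point is just that a classical machine has to call it once per candidate, while the state counts at the end show how fast the search space grows:

```python
def check(candidate, secret=0b0110):
    """Hypothetical oracle: True only for the one correct 4-bit answer."""
    return candidate == secret

# A classical computer tries the 2**4 = 16 candidates one at a time.
tries = 0
for candidate in range(2**4):
    tries += 1
    if check(candidate):
        break

print(tries)        # number of candidates tested before hitting 0b0110
print(2**4, 8**4)   # 16 vs 4,096 states for four 2-level vs four 8-level units
```

A quantum computer doesn't get to skip the oracle entirely, but superposition is what lets it avoid this one-candidate-at-a-time loop.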
The only downside is that quantum computers, like all quantum effects, are extremely fragile. Just looking at the system destroys the solution: measuring a qubit collapses it back to a plain 0 or 1 (a close cousin of Heisenberg's uncertainty principle). As such, q-computing is currently theoretical, with only parts of the required system tested. It won't be around for 10 years at the earliest, and that's at the level of laboratory systems with just a few qubits. If I recall correctly, you need at least 20 qubits to have a computer that's feasible for practical uses. That's a while off, though no doubt Intel is working hard on it with the leading universities.
Having said all that, there are a lot of things where Q-comps won't be any faster than binary computers. For some applications, however, it's a quantum leap (pun intended).