
Hacking

farmkid

Commodore
Over the years I've heard many stories about the Chinese hacking into foreign governments and companies. Some of the reports are rather disturbing. Here is an article from today, for example. So my question about it is, why is it so hard to secure computer systems from hacking? Now admittedly, I know very little about hacking, so this may be a stupid question, but I honestly can't understand why it's so hard.

If it is so hard, at what point does the US decide that allowing internet traffic from China is simply too dangerous and shut off the connection? Could the government secure its mission-critical systems from hacking by establishing a second network for government use only, physically separated from the internet? That would be expensive, of course, but I can't see how it would be possible for anyone to hack into such a system.
 
Hacking is lots of fun and a great public service as well.

As for securing a system against remote exploits, it's actually pretty easy – it requires a little effort on the programmer's part and a little effort on the user's part. The problem is that both the users and the programmers are completely lazy, incompetent and ignorant.

Many programmers rarely research best practices. If they ever hear about them, they tend to dismiss them as irrelevant, or apply them only when they think they “matter”. After all, it's paranoid to sanitize your program's inputs if it's a simple program doing a simple thing, or if the inputs don't seem important. They just try to build a program that works – oops, I mean, appears to work. They are more commonly known as PHP “programmers”. But it's all a symptom of a greater problem, namely the “Why do I need this? How the hell will it be relevant to me getting a job?” reaction people have whenever they're confronted with knowledge they don't think matters.
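
To make “sanitize your inputs” concrete, here is a minimal sketch in C++. The scenario and the helper name parse_port are made up for illustration – the point is that validation happens once, up front, for every input, not just the ones that seem to “matter”:

[CODE]
#include <charconv>
#include <iostream>
#include <string>
#include <system_error>

// Hypothetical example: validate a user-supplied port number instead of
// trusting it blindly.
bool parse_port(const std::string& input, int& port) {
    auto [end, ec] = std::from_chars(input.data(),
                                     input.data() + input.size(), port);
    if (ec != std::errc{} || end != input.data() + input.size())
        return false;                   // not a number, or trailing junk
    return port >= 1 && port <= 65535;  // enforce the valid range
}

int main() {
    std::string line;
    std::getline(std::cin, line);
    int port = 0;
    if (!parse_port(line, port)) {
        std::cerr << "invalid port\n";
        return 1;
    }
    std::cout << "using port " << port << '\n';
}
[/CODE]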

There's also the fact that businesses care more about their profit than about a little negative publicity, which is why some popular systems remained terribly insecure until it started hurting them, because people were heading for the competition.

Computer administration tends to suffer from similar problems – admins get the systems up and running, but forget to keep them updated, and don't bother to add additional obstacles for anyone trying to compromise the system. Or, even worse, they assume those additional obstacles are enough protection on their own, so they skip everything else. They also try to write programs sometimes.

Security practices are also an inconvenience for users. They tend to ignore them, or complain about them when they can.

Oh, and people who dare to report a security problem get called “hackers” and get arrested. As a result, certain security issues are known only to the bad guys.

---

An example of how easy it is to create and encourage security holes:

The C programming language has a function called gets. It only exists so that programs using it don't break. From its documentation:
BUGS
Never use gets(). [...] It has been used to break computer security. Use fgets() instead.

Nevertheless, our admin used gets in all his programs. Great idea.

Also note that if someone removed this function, it would cause those programs to stop working, and people would stop using the operating system that removed the function instead of abandoning the broken programs.
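
The fix is about one line of effort, which is exactly why its absence is so telling. A minimal sketch – the 64-byte buffer is an arbitrary choice for illustration:

[CODE]
#include <stdio.h>
#include <string.h>

int main(void) {
    char name[64];

    /* Unsafe: gets() has no idea the buffer is only 64 bytes, so any
     * longer input line overwrites whatever sits after it in memory:
     *
     *     gets(name);
     */

    /* Safe: fgets() is told the buffer size and stops there. */
    if (fgets(name, sizeof name, stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';  /* drop the trailing newline */
        printf("Hello, %s\n", name);
    }
    return 0;
}
[/CODE]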

P.S. I had a doctor prescribe me homoeopathic preparations once. That's the main reason why computer security is so easily broken. I then bought the homoeopathic preparations and used them without questioning the doctor. That's the second reason.
 
Remember that a chain is only as strong as its weakest link. Computer security is a multifaceted problem. You have to worry about a lot of things:

* Does my network have any possible points of remote intrusion? Odds are, the answer is "yes."
* Are all the systems on my network running a well-supported operating system with current patches and no known zero-day exploits? The answer to this is almost always "no."
* Are all my staff well-trained in proper secure computer and Internet usage, as well as how to resist social engineering attacks? Usually, "no."

It's difficult to stop all hacking because there are so many moving parts. Consider an organization with thousands of computers on its network. Even physically locating every system can be an arduous task. Somebody might have an old Linux box sitting under his desk that he hasn't updated in six years, and it has a ton of known vulnerabilities. But it's safe until someone finds a route into your network.

Maybe you have a public-facing Web or email server, and an attacker found it through a port scan. He sees you're running a mail daemon with a buffer overflow exploit that hasn't been patched. Boom, he's in--but he only has control of that one system. Time to scan for other systems inside the network to compromise. Look at that! Some old Linux box that was grandfathered into the current security scheme because it does things no one knows how to port to another system. Break into that (easy), start stealing emails and sniffing traffic all over the network. Maybe compromise a few more systems, if you can find them. Once you've sniffed enough to find some genuine user credentials, VPN into the network legitimately and start ripping everything you can from shared document servers.

Given a determined attacker, some common tools, and a bit of time, your organization's best-kept secrets can be in enemy hands in a frighteningly short amount of time.

The obvious reason why we can't nullroute Chinese traffic is that US companies do a lot of business in China. How do you know which traffic is good and which is bad? You don't. And it's pretty obvious why the US Chamber of Commerce wouldn't block Chinese traffic--they would've been communicating with Chinese officials constantly.

You can think of computer security as a set of different levels, each with its own degree of effectiveness:

1. Physical security. Does anyone have access to the system? If you can completely deny physical access, you've stemmed one major vector of attack.
2. Network security. Is the network hardened against breaches? Is there any outside access? Assume that any outside access, be it via VPN or some other method, is breakable. The only way to have true network security is to not have your system connected to any external networks, ever. This is simply too onerous and inconvenient for any but the most sensitive networks (think DoD/NSA systems.)
3. Software security. Think virus scanners, security patches, restricting user account permissions, etc. If you are relying on this kind of security alone, you're a fool.
4. Human security. This may be the hardest one to implement. It means training people to be conscious of good security practices, and forcing them to follow those practices as much as possible. It involves using a combination of the security methods mentioned above to create an overall security strategy: keep people from getting into your building; once they are in the building, limit what they have access to; anything they have access to, monitor exhaustively so you can trace a breach.

So, to your point about keeping sensitive government systems away from the Internet, we already do that for truly sensitive data, such as military systems. But a department like the Chamber of Commerce simply can't do its job if it's segregated from the public Internet. How are they going to send email and communicate with people?

The basic point I am trying to impress is that if you have a public-facing system, it can be compromised, period. All you can do from there is limit the amount of damage a potential attacker can do, such as having separate internal subnetworks and physical firewalls so any intrusion is stopped before it can reach sensitive areas. And if you so much as let one trust network piggyback on your public-facing network, you've just ruined the effectiveness of many of your security measures. This is essentially what happened with Sony, as they offered privileged access to "developer" consoles on the PlayStation Network, which was tied to the public PSN. Once someone compromised the developer network, they had access to basically everything inside the PSN. That's what happens when you make bad assumptions about who will be accessing your "trusted" network.

You'd be surprised how often security violations occur simply because of someone handing out their password over the phone to "Bob from IT", or having it on a sticky note attached to their monitor, or simply having a really crummy password that's easy to guess. Humans are quite often the weakest link in computer security.
 
So how often are systems compromised by people having bad passwords or giving their passwords to others, versus getting in through other means?

I don't know what you could do about people giving their passwords to others, but it seems to me it shouldn't be hard to make passwords impossible to guess. A simple way to do that would be to allow only one password entry attempt per interval of time. If, for example, the system allowed a user to enter their password only once every 3 seconds, the user would barely notice at all, yet a brute-force attack would take so long that it wouldn't even be worth trying. You could even have the interval increase with each failed attempt to increase the effectiveness without significantly inconveniencing the user. I've never seen such a system in use, but it seems to me that doing so would basically make it impossible to guess passwords without knowing a lot of personal details about the user (assuming, of course, that people don't use "password" or "123456", etc.).
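
A minimal sketch of that scheme, assuming C++, with check_password standing in for a real credential check (the 3-second base delay comes from the description above):

[CODE]
#include <chrono>
#include <iostream>
#include <string>
#include <thread>

// Stand-in for a real credential check.
static bool check_password(const std::string& pw) {
    return pw == "correct horse battery staple";
}

int main() {
    using namespace std::chrono_literals;
    std::chrono::seconds delay = 3s;  // barely noticeable to a human

    for (;;) {
        std::cout << "password: ";
        std::string pw;
        if (!std::getline(std::cin, pw)) return 1;

        std::this_thread::sleep_for(delay);  // throttle every attempt
        if (check_password(pw)) break;

        delay *= 2;  // escalate after each failure
        std::cout << "wrong; next attempt delayed " << delay.count() << " s\n";
    }
    std::cout << "welcome\n";
}
[/CODE]

Even at a fixed 3 seconds per guess, exhausting an 8-character mixed-case-and-digits space (62^8, about 2×10^14 passwords) would take on the order of twenty million years.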
 
If, for example, the system allowed a user to enter their password only once every 3 seconds, the user would barely notice at all, yet a brute-force attack would take so long that it wouldn't even be worth trying.

That has the side-effect of making it extremely easy to lock someone out of the system by repeatedly trying to guess their password. Having a password with high entropy has the same effect, without locking users out.

Also, throttling doesn't help when a list of hashed passwords leaks: the attacker can then test guesses offline, as fast as his hardware allows.
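
To put rough numbers on both points, here is a back-of-the-envelope comparison. All figures are assumptions for illustration: a random 10-character password over 62 symbols, the 3-second online throttle proposed above, and an offline rig testing 10^10 guesses per second against a leaked list of fast, unsalted hashes:

[CODE]
#include <cmath>
#include <cstdio>

int main() {
    const double seconds_per_year = 3.15e7;

    double bits    = 10 * std::log2(62.0);  // ~59.5 bits of entropy
    double guesses = std::exp2(bits) / 2;   // expected guesses: half the space

    double online  = guesses * 3.0 / seconds_per_year;   // 3 s per guess
    double offline = guesses / 1e10 / seconds_per_year;  // leaked fast hash

    std::printf("entropy:           %.1f bits\n", bits);
    std::printf("online, throttled: %.1e years\n", online);
    std::printf("offline, cracked:  %.1f years\n", offline);
}
[/CODE]

Throttling makes the online attack hopeless (around 4×10^10 years), but once the hash list leaks the same password falls in about a year – which is why slow, salted password hashing matters at least as much as password strength.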
 
Oh, trust me, you are in totally familiar and well-explored territory there. At my last job, you had to change your password every 90 days, it had to be at least 8 characters long, and contain both uppercase and lowercase letters, and at least two numbers. You were not allowed to ever reuse a password, and new passwords had to be substantially different from all past ones (they couldn't pattern match by more than, say, 50%.) You better believe there was plenty of password security there.

Of course, this resulted in people forgetting their passwords all the time, and calling IT to reset them. There are always downsides to such stringent policies.

YellowSubmarine's post is interesting because it looks at computer security fundamentally as a programming problem, whereas I looked at it more from an IT perspective. I think approaching it as a programming problem is wrongheaded due to the fluid and imperfect nature of software and its development. While there are absolutely massively dumb mistakes that all programmers should be trained to avoid (buffer overflows, skipping sanity/taint checks on inputs, etc.), the bottom line is that software is a business and most companies do not and will not have the resources to make every bit of software they put out a CMM/I Level 5 product. You know who can afford to code at that level? Defense contractors, aerospace engineering companies, and some financial services organizations. Your typical off-the-shelf software package--your operating system, your office suite, your productivity application--was almost certainly developed by a company that's lucky to have been appraised for CMM/I or any other process standardization methodology at all.

What YellowSubmarine said about most companies not using best practices is true. Problem is, IT can't solve that problem. IT doesn't write software. IT has to work with the software and hardware they have. That's why I think security will, for the foreseeable future, have to be dealt with at the IT level rather than the programming level.

If we really want to see a sea change in software development, it's going to take government regulation, and let's be blunt: the US government is not going to do that, not if it will stifle/kill hundreds of small software development companies. The software sector is one area of the economy that continues to grow and prosper, and no matter how well-intentioned such regulation would be, it will hurt industry sales and profits.

So, as I see it, IT has to look at security from the assumption that all software is flawed. You may not know where the flaws are, or what they are, but you still want to shield against whatever they might be. That means physical and logical segregation of data networks, and very tightly-monitored and controlled trust networks for sensitive data.
 
That has the side-effect of making it extremely easy to lock someone out of the system by repeatedly trying to guess their password. Having a password with high entropy has the same effect, without locking users out.
I have seen systems that lock a user out after a certain number of attempts. Requiring a wait of a few seconds instead would avoid that problem by making brute-force guessing impractical while not locking anyone out.

Perhaps the system could be designed to analyze failed password entries to look for signs someone is trying to hack the account and lock it out when it detects such an attempt. For example, many rapid entries faster than a human is capable of, especially if those entries are not even close to the real password.
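
A minimal sketch of that detection idea, assuming C++; the ten-failures-per-minute threshold is invented for illustration, and the "not even close to the real password" heuristic would need a string-distance check on top:

[CODE]
#include <chrono>
#include <deque>

// Flag an account when failed logins arrive faster than a human could
// plausibly type them.
class FailureMonitor {
    std::deque<std::chrono::steady_clock::time_point> failures_;
public:
    // Record one failed attempt; returns true if the pattern looks automated.
    bool record_failure() {
        auto now = std::chrono::steady_clock::now();
        failures_.push_back(now);
        // Keep only the last minute of history.
        while (now - failures_.front() > std::chrono::seconds(60))
            failures_.pop_front();
        // No human retypes a password ten times in a minute.
        return failures_.size() >= 10;
    }
};

int main() {
    FailureMonitor monitor;
    for (int i = 0; i < 12; ++i)
        if (monitor.record_failure())
            return 1;  // a real system would lock the account here
}
[/CODE]
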
Oh, trust me, you are in totally familiar and well-explored territory there. At my last job, you had to change your password every 90 days, it had to be at least 8 characters long, and contain both uppercase and lowercase letters, and at least two numbers. You were not allowed to ever reuse a password, and new passwords had to be substantially different from all past ones (they couldn't pattern match by more than, say, 50%.) You better believe there was plenty of password security there.

Of course, this resulted in people forgetting their passwords all the time, and calling IT to reset them. There are always downsides to such stringent policies.
I would think such policies would be counterproductive and actually decrease security in practice. As you say, it resulted in people forgetting their passwords all the time. In such a situation, I would be forced either to call IT frequently to get it reset or to write it down somewhere--most likely the latter, after having to call IT a couple of times. It seems to me that in practice, simple-to-remember passwords would be more secure if coupled with other measures that penalize guessing. It might also reduce workplace violence. ;)
YellowSubmarine's post is interesting because it looks at computer security fundamentally as a programming problem, whereas I looked at it more from an IT perspective. I think approaching it as a programming problem is wrongheaded due to the fluid and imperfect nature of software and its development. While there are absolutely massively dumb mistakes that all programmers should be trained to avoid (buffer overflows, skipping sanity/taint checks on inputs, etc.), the bottom line is that software is a business and most companies do not and will not have the resources to make every bit of software they put out a CMM/I Level 5 product. You know who can afford to code at that level? Defense contractors, aerospace engineering companies, and some financial services organizations. Your typical off-the-shelf software package--your operating system, your office suite, your productivity application--was almost certainly developed by a company that's lucky to have been appraised for CMM/I or any other process standardization methodology at all.

What YellowSubmarine said about most companies not using best practices is true. Problem is, IT can't solve that problem. IT doesn't write software. IT has to work with the software and hardware they have. That's why I think security will, for the foreseeable future, have to be dealt with at the IT level rather than the programming level.

If we really want to see a sea change in software development, it's going to take government regulation, and let's be blunt: the US government is not going to do that, not if it will stifle/kill hundreds of small software development companies. The software sector is one area of the economy that continues to grow and prosper, and no matter how well-intentioned such regulation would be, it will hurt industry sales and profits.

So, as I see it, IT has to look at security from the assumption that all software is flawed. You may not know where the flaws are, or what they are, but you still want to shield against whatever they might be. That means physical and logical segregation of data networks, and very tightly-monitored and controlled trust networks for sensitive data.
That makes sense. I suppose in the end it's driven by what the market demands. At the moment, customers aren't willing to pay the extra cost that higher security would require, so it won't happen until the cost of not having the security is higher.
 
https://en.wikipedia.org/wiki/Hacker_(programmer_subculture)
The C programming language has a function called gets. It only exists so that programs using it don't break. From its documentation:
BUGS
Never use gets(). [...] It has been used to break computer security. Use fgets() instead.
Nevertheless, our admin used gets in all his programs. Great idea.

The entire C standard library is full of functions which are easy to misuse. Microsoft attempted to fix this by creating _s ("secure") versions of these functions requiring additional parameters, but that approach doesn't do anything except make your code Microsoft-specific when it doesn't have to be.

One huge step forward would be to change how programming is taught in high school and college, so that C is relegated to low-level, high-optimization code, and beginners are introduced to C++ instead. It's far harder to accidentally allow a buffer overflow with std::string than with char arrays...
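
A side-by-side sketch of that point; the 16-byte buffer and the copy-a-name task are contrived for illustration:

[CODE]
#include <cstring>
#include <iostream>
#include <string>

void c_style(const char* input) {
    char buf[16];
    // strcpy(buf, input) would copy until the first '\0' -- with input
    // longer than 15 characters it writes past the end of buf. Even the
    // "fixed" version needs two easy-to-forget steps:
    strncpy(buf, input, sizeof buf - 1);  // bounded copy...
    buf[sizeof buf - 1] = '\0';           // ...and manual termination
    std::cout << buf << '\n';
}

void cpp_style(const std::string& input) {
    std::string buf = input;  // no size to get wrong, nothing to overflow
    std::cout << buf << '\n';
}

int main() {
    const char* name = "a string comfortably longer than sixteen bytes";
    c_style(name);    // silently truncates
    cpp_style(name);  // just works
}
[/CODE]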
 
Also, while the human is the problem at all levels, I disagree it's necessarily the end user that's at fault.

The software, the management and the computer administrators need to encourage the users to follow certain practices. They can remove some of the things that make it easier for a system to be exploited.

For example, they can discourage the use of passwords where possible. If you don't have a password, you can't tell it to someone. A private key, possibly stored on a smart card, could help. It's not something I would do, but if you don't trust the people using your computers, you can enforce stricter requirements that make it less likely for them to fuck things up. It's certainly within the capabilities of a government to implement such a system.

The person sitting at the computer can't be fixed – even if he's competent, he will make mistakes. Make it difficult to exploit those mistakes.
 
The only way to have true network security is to not have your system connected to any external networks, ever. This is simply too onerous and inconvenient for any but the most sensitive networks (think DoD/NSA systems.)

Not only their systems, but also any networks authorized to contain their data....which basically means any computer which contains any information considered classified by the US Government cannot have a physical internet connection, any disc or drive containing such information cannot be connected to a computer with an internet connection, etc.
 
The only way to have true network security is to not have your system connected to any external networks, ever. This is simply too onerous and inconvenient for any but the most sensitive networks (think DoD/NSA systems.)

Not only their systems, but also any networks authorized to contain their data....which basically means any computer which contains any information considered classified by the US Government cannot have a physical internet connection, any disc or drive containing such information cannot be connected to a computer with an internet connection, etc.

Yup.

Do they still fill the USB ports with rubber cement? I've heard some agencies do that to keep people from plugging in flash drives. :lol:

Very low-tech but probably effective.
 
Not in my building. But we do have regulations about which direction monitors have to be facing relative to windows, which computers can have speakers connected to them, etc.
 
Yup.

Do they still fill the USB ports with rubber cement? I've heard some agencies do that to keep people from plugging in flash drives. :lol:

Very low-tech but probably effective.

Another option would be to hire my kids to come plug stuff into the USB ports. They would all be damaged beyond use in no time. :vulcan:
 
I saw this today, which gives a really good overview of just what methods are the most popular for breaking into computer systems, but worded such that most people should be able to understand it (as long as you have a decent working knowledge of computers and networks.)
 
It's nicely presented and it lists ways to prevent those security issues from happening. And after a quick glance, the top two are very easy to fix on the programming side.
 