Information Security Concepts

Information Security is such a broad discipline that it’s easy to get lost in a single area and lose perspective. The discipline covers everything from how high to build the fence outside your business, all the way to how to harden a Windows 2003 server.

It’s important, however, to remember not to get caught up in the specifics. Each best practice is tied directly to a higher, more philosophical security concept, and those concepts are what I intend to discuss here.

Eric Cole’s Four Basic Security Principles

To start with, I’d like to cover Eric Cole’s four basic security principles. These four concepts should constantly be on the minds of all security professionals.

  1. Know Thy System

Perhaps the most important thing when trying to defend a system is knowing that system. It doesn’t matter if it’s a castle or a Linux server — if you don’t know the ins and outs of what you’re actually defending, you have little chance of being successful.

A good example of this in the information security world is knowledge of exactly what software is running on your systems. What daemons are you running? What sort of exposure do they create? A good self-test for someone in a small to medium-sized environment would be to randomly select an IP from a list of your systems and see if you know the exact list of ports that are open on that machine.

A good admin should be able to say, for example, “It’s a web server, so it’s only running 80, 443, and 22 for remote administration; that’s it” — and so on for every type of server in the environment. There shouldn’t be any surprises when seeing port scan results.

What you don’t want to hear in this sort of test is, “Wow, what’s that port?” Having to ask that question is a sign that the administrator is not fully aware of everything running on the box in question, and that’s precisely the situation we need to avoid.
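The self-test above can be sketched in a few lines of code. This is a minimal illustration, not a scanner: the hostnames and expected port lists are hypothetical, and in practice the "scanned" set would come from a real tool like nmap.

```python
# Compare the ports an admin *expects* to be open against the ports a scan
# actually found. All hostnames and port numbers here are made-up examples.

EXPECTED_PORTS = {
    "web01": {22, 80, 443},   # web server: SSH, HTTP, HTTPS only
    "mail01": {22, 25, 143},  # mail server: SSH, SMTP, IMAP only
}

def audit_host(host, scanned_ports):
    """Return the set of surprise ports: open but not expected."""
    expected = EXPECTED_PORTS.get(host, set())
    return set(scanned_ports) - expected

# A scan of web01 turns up an unexpected port 3306:
surprises = audit_host("web01", {22, 80, 443, 3306})
print(surprises)  # {3306} -- "Wow, what's that port?"
```

An empty result for every host in the environment is the goal: no surprises in the scan output.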

  2. Least Privilege

The next über-important concept is that of least privilege. Least privilege simply says that people and things should only be able to do what they need to do their jobs, and nothing else. The reason I include “things” is that admins often configure automated tasks that need to be able to do certain things — backups, for example. What often happens is the admin will just put the user doing the backup into the domain admins group, even if they could get it to work another way. Why? Because it’s easier.

Ultimately this is a principle that conflicts directly with human nature, i.e. laziness. It’s always more difficult to give granular access that allows only specific tasks than it is to give a higher echelon of access that includes what needs to be accomplished.

The rule of least privilege simply reminds us not to give in to that temptation. Don’t give in. Take the time to make all access granular, and at the lowest level possible.
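The backup example can be made concrete with a small sketch. The account names and right names below are invented for illustration; the point is that the narrow grant does the job, while limiting the damage if the account is ever compromised.

```python
# A sketch of least privilege: grant the backup account only the rights the
# backup job needs, rather than dropping it into an admin group. The right
# names ("read:files", etc.) are hypothetical labels, not a real ACL system.

GRANTS = {
    "backup_svc": {"read:files", "write:backup_share"},
    "domain_admin": {"read:files", "write:backup_share", "create:users",
                     "modify:acls", "install:software"},
}

def can(account, right):
    """Return True if the account holds the given right."""
    return right in GRANTS.get(account, set())

# The backup job works fine with the narrow grant:
print(can("backup_svc", "read:files"))    # True
# But a compromised backup_svc account can't create rogue users:
print(can("backup_svc", "create:users"))  # False
```

Putting the backup user in the domain admins group would have made both calls return True, which is exactly the extra exposure least privilege tells us to avoid.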

  3. Defense In Depth

Defense In Depth is perhaps the least understood concept of the four. Many think it’s simply stacking three firewalls instead of one, or using two antivirus programs rather than one. Technically this could apply, but it’s not the true nature of Defense In Depth.

The true idea is stacking multiple types of protection between an attacker and an asset. These layers don’t need to be products — they can be applications of other concepts themselves, such as least privilege.

Let’s take the example of an attacker on the Internet trying to compromise a web server in the DMZ. This could be relatively easy given a major vulnerability, but with an infrastructure built using Defense In Depth, it can be significantly more difficult. The hardening of routers and firewalls, the inclusion of IPS/IDS, the hardening of the target host, the presence of host-based IPS on the host, antivirus on the host — any of these steps can potentially stop an attack from being fully successful.

The idea is to think in reverse — rather than thinking about what needs to be put in place to stop an attack, think instead about everything that has to happen for it to succeed. Maybe the attack had to make it through the external router, the firewall, and the switch, get to the host, execute, make an outbound connection to an external host, download content, and run that content.

What if any of those steps were unsuccessful? That’s the key to Defense In Depth — put barriers at as many points as possible. Lock down network ACLs. Lock down file permissions. Use network intrusion prevention and intrusion detection, make it more difficult for hostile code to run on your systems, and make sure your daemons are running as the least privileged user. The benefit is quite simple — you get more chances to stop an attack from becoming successful.

It’s possible for someone to get all the way in, all the way to the box in question, and be stopped by the fact that the malicious code in question won’t run on the host. But maybe when that code is fixed so that it will run, it’ll then be caught by an updated IPS or a more restrictive firewall ACL. The idea is to lock down everything you can at every level — file permissions, stack protection, ACLs, host IPS, limiting admin access, running services as limited users — the list goes on and on.

The underlying concept is simple — don’t rely on single solutions to defend your assets. Treat each element of your defense as if it were the only layer. When you take this approach you’re more likely to stop attacks before they achieve their goal.
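The "think in reverse" logic can be modeled as a chain: an attack succeeds only if every layer fails to stop it. The layer names below mirror the examples in the text, and the pass/fail values are hypothetical.

```python
# Defense in depth as a chain of independent layers. A single layer that
# stops the attack is enough; the attacker has to get past all of them.

def attack_succeeds(layers):
    """layers: list of (name, stopped_attack) pairs, outermost first.

    Returns (succeeded, blocked_by): succeeded is True only if no layer
    stopped the attack; blocked_by names the layer that did, if any.
    """
    for name, stopped in layers:
        if stopped:
            return False, name
    return True, None

layers = [
    ("border router ACL", False),
    ("firewall", False),
    ("network IPS", False),
    ("host hardening", True),   # the payload won't run on the hardened host
    ("host antivirus", False),
]

succeeded, blocked_by = attack_succeeds(layers)
print(succeeded, blocked_by)  # False host hardening
```

Each added layer is another chance for `stopped` to be True somewhere in the chain, which is the whole benefit described above.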

  4. Prevention Is Ideal, But Detection Is A Must

The final concept is rather simple but extremely important. The idea is that while it’s best to stop an attack before it’s successful, it’s absolutely crucial that you at least know it happened. As an example, you may have protections in place that try to keep code from being executed on your system, but if code is executed and something is done, it’s critical that you are alerted to that fact and can take action quickly.

The difference between knowing about a successful attack within 5 or 10 minutes and finding out about it weeks later is astronomical. Oftentimes having the knowledge early enough can result in the attack not being successful at all — maybe the attacker gets on your box and adds a user account, but you get to the machine and take it offline before they’re able to do anything with it.

Regardless of the situation, detection is an absolute must, because there’s no guarantee that your prevention measures are going to be successful.

The CIA Triad

The CIA triad is a very important trio in information security. The “CIA” stands for Confidentiality, Integrity, and Availability. These are the three elements that everyone in the industry is trying to protect. Let’s touch on each one of these briefly.

  • Confidentiality: Protecting confidentiality deals with keeping things secret. This could be anything from a company’s intellectual property to a home user’s photo collection. Anything that attacks one’s ability to keep private that which they want kept private is an attack against confidentiality.

  • Integrity: Integrity deals with making sure things are not changed from their true form. Attacks against integrity are those that try to modify something that’s likely going to be depended on later. Examples include changing prices in an ecommerce database, or changing someone’s pay rate on a spreadsheet.

  • Availability: Availability is a highly critical piece of the CIA puzzle. As one may expect, attacks against availability are those that make it so that the victim cannot use the resource in question. The most famous example is the denial of service (DoS) attack. The idea here is that nothing is being stolen, and nothing is being modified. What the attacker is doing is keeping you from using whatever it is that’s being attacked. That could be a particular server or even a whole network in the case of bandwidth-based DoS attacks.

It’s a good practice to think of information security attacks and defenses in terms of the CIA triad. Consider some common techniques used by attackers — sniffing traffic, reformatting hard drives, and modifying system files.

Sniffing traffic is an attack on confidentiality because it’s based on seeing that which is not supposed to be seen. An attacker who reformats a victim’s hard drive has attacked the availability of their system. Finally, someone modifying system files has compromised the integrity of that system. Thinking in these terms can go a long way toward helping you understand various offensive and defensive techniques.

Terms

Next I’d like to go over some extremely crucial industry terms. These can get a bit academic but I’m going to do my best to boil them down to their basics.

Vulnerability

A vulnerability is a weakness in a system. This one is pretty straightforward because vulnerabilities are commonly labeled as such in advisories and even in the media. Examples include the LSASS issue that let attackers take over systems, etc. When you apply a security patch to a system, you’re doing so to address a vulnerability.

Threat

A threat is an event, natural or man-made, that can cause damage to your systems. Threats include people trying to break into your network to steal information, fires, tornadoes, floods, social engineering, malicious employees, etc. Anything that can cause damage to your systems is basically a threat to them. Also remember that a threat is usually rated as a probability, or chance, of it coming to bear. An example would be the threat of exploit code being used against a particular vulnerability. If there is no known exploit code in the wild, the threat is fairly low. But the second working exploit code hits the major mailing lists, your threat (chance) rises significantly.

Risk

Risk is perhaps the most important of all these definitions since the main mission of information security officers is to manage it. The simplest explanation I’ve heard is that risk is the chance of something bad happening. That’s a bit too simple, though, and I think the best way to look at these terms is with a couple of formulas:

Risk = Threat x Vulnerability

Multiplication is used here for a very specific reason — any time one of the two sides reaches zero, the result becomes zero. In other words, there will be no risk anytime there is no threat or no vulnerability.

As an example, if you are completely vulnerable to xyz issue on your Linux server, but there is no way to exploit it in existence, then your risk from that is nil. Likewise, if there are tons of ways of exploiting the problem, but you already patched (and are therefore not vulnerable), you again have no risk whatsoever.

A more involved formula adds the impact, or cost, to the equation (literally):

Risk = Threat x Vulnerability x Cost
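Both formulas can be sketched directly as code. The numbers below are invented for illustration: threat and vulnerability are treated as probabilities between 0 and 1, and cost is in dollars.

```python
# Risk = Threat x Vulnerability x Cost. Leaving cost at its default of 1.0
# gives the simpler two-term formula from above.

def risk(threat, vulnerability, cost=1.0):
    """All-multiplicative: if any factor is zero, the risk is zero."""
    return threat * vulnerability * cost

# Fully vulnerable, but no exploit exists -> no risk:
print(risk(threat=0.0, vulnerability=1.0))            # 0.0
# Exploit in the wild, but the system is patched -> still no risk:
print(risk(threat=0.9, vulnerability=0.0))            # 0.0
# Likely threat, somewhat vulnerable, $4B asset -> on the order of $400M:
print(risk(threat=0.5, vulnerability=0.2, cost=4e9))
```

The multiplication captures the zero-out property described above: no threat or no vulnerability means no risk, no matter how large the other factors are.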


What this does is allow a decision maker to attach quantitative meaning to the problem. It’s not always an exact science, but if you know that someone stealing your business’s most precious intellectual property would cost you $4 billion, then that’s good information to have when considering whether or not to address the issue.

That last part is important. The entire purpose of assigning a value to risk is so that managers can make the decisions on what to fix and what not to. If there is a risk associated with hosting certain data on a public FTP server, but that risk isn’t serious enough to offset the benefit, then it’s good business to go ahead and keep it out there.

That’s the whole trick — information security managers have to know enough about the threats and vulnerabilities to be able to make sound business decisions about how to evolve the IT infrastructure. This is Risk Management, and it’s the entire business justification for information security.

Policy — A policy is a high level statement from management saying what is and is not allowed in the organization. A policy will say, for example, that you can’t read personal email at work, or that you can’t do online banking, etc. A policy should be broad enough to encompass the entire organization and should have the endorsement of those in charge.

Standard — A standard dictates what will be used to carry out the policy. As an example, if the policy says all internal users will use a single, corporate email client, the standard may say that the client will be Outlook 2000, etc.

Procedure — A procedure is a description of how exactly to go about doing a certain thing. It’s usually laid out in a series of steps, i.e. 1) Download the following package, 2) Install the package using Add/Remove Programs, 3) Restart the machine, etc. A good way to think of standards and procedures is to imagine standards as being what to do or use, and procedures as how to actually do it.

Personal Views

In this section I’d like to collect a series of important ideas I have about information security. Many of these aren’t rules, per se, and are clearly opinion. As such, you’re not likely to learn them in a class. Hopefully, though, a decent number of those in the field will agree with most of them.

The goal of Information Security is to make the organization’s primary mission successful

Much hardship arises when security professionals lose sight of this key concept. Security isn’t there because it’s cool. It’s there to help the organization do what it does. If that mission is making money, then the main mission of the security group — at its highest level — is to make that company money. To put it another way, the reason the security group is even there in the first place is to keep the organization from losing money.

This isn’t a “leet” way to look at things for those who are into the novelty of being in infosec, but it’s a mentality that one needs to have to make it in the industry long-term. This is becoming increasingly the case as companies are starting to put a premium on the professionals who see security as a business function rather than a purely technical exercise.

Current IT infrastructure makes cracking trivial

While many of the most skilled attackers can (and have) come up with some ingenious ways to leverage vulnerabilities in systems, the ability to do what we see every day in the security world is fundamentally based on horribly flawed architecture. Memory management, programming languages, overall security design — none of the things we use today were designed with security in mind. They were designed by academics for academics.

To use an analogy, I think we are building skyscrapers with balsa wood and guano. Crackers repeatedly tear into us at will and we can do nothing but patch and pray. Why? Because we’re trying to build hundreds of feet into the air using shoddy materials. Balsa wood and guano make excellent huts — huts that stand up to a casual rain storm and a bump or two. But they don’t do well against tornados, earthquakes, or especially hooligans with torches.

For that we need steel.

Today we don’t have any. Today we continue to build using the same old materials — the same memory management issues that allow buffer overflows to run rampant, the same programming language issues that make it easier to write dangerous code than safe code. Until we have new materials to build with we’ll always remain behind the curve. It’s just too easy to light wood on fire or smash a hole in it.

So, all analogies aside, I think within the next decade or so we’ll see the introduction of new system architecture models — models that are highly restrictive and run using a “default closed” paradigm. New programming languages, new IDEs, new compilers, new memory management techniques — all designed from the ground up to be secure and robust. The upshot of all of this is that I think that within that time period we’ll see systems that can be exposed to the world and stand on their own for years with little chance of compromise. Successful attacks will still happen, of course, but they’ll be extremely rare compared to today. Security problems will never go away, we all know that, but they’ll return to being human/design/configuration issues rather than issues with gaping technological flaws.

Security by obscurity is bad, but security with obscurity isn’t

I’ve been in many debates online over the years about the concept of Security by Obscurity. Basically, there’s a popular belief out there that if any facet of your defense relies on secrecy, then it’s fundamentally flawed. That’s simply not the case.

The confusion is based on the fact that people have heard security by obscurity is bad, and most don’t understand what the term actually means. As a result, they make the horrible assumption that it means relying on obscurity — even as an additional layer to already good security — is bad. This is unfortunate.

What security by obscurity actually describes is a system where secrecy is the only security. It comes from the cryptography world where poor encryption systems are often implemented in such a way that the security of the system depends on the secrecy of the algorithm rather than that of the key. That’s bad — hence the reason for security by obscurity being known as a no-no.

What many people don’t realize is that adding obscurity to security that’s already solid is not a bad thing. A decent example of this is the Portknocking project. This interesting tool allows one to “hide” daemons that are available on the Internet, for example. The software watches firewall logs for specific connection sequences that come from trusted clients. When the tool sees the specific knock on the firewall, it opens the port. The key here is that it doesn’t just give you a shell — that would be security by obscurity. All it does at that point is give you a regular SSH prompt as if the previous step wasn’t even involved. It’s an added layer, in other words, not the only layer.
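The port-knocking idea can be sketched in a few lines. This is a rough model of the concept, not the actual Portknocking implementation: the secret sequence and port numbers are invented, and a real daemon would be watching firewall logs rather than a Python list.

```python
# A model of port knocking: the firewall only opens SSH after a client hits
# a secret sequence of ports in order. Sequence and ports are hypothetical.

KNOCK_SEQUENCE = (7000, 8000, 9000)  # the secret "knock"

def port_opens(observed_hits):
    """Return True if the most recent hits match the knock sequence."""
    tail = tuple(observed_hits[-len(KNOCK_SEQUENCE):])
    return tail == KNOCK_SEQUENCE

# Correct knock -> the firewall opens port 22:
print(port_opens([7000, 8000, 9000]))  # True
# Wrong order -> nothing opens:
print(port_opens([8000, 7000, 9000]))  # False
```

Note what a successful knock buys the client: only a chance to talk to sshd, which still demands authentication. The obscurity is an added layer in front of solid security, not a replacement for it.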

Security is a process rather than a destination

This is a pretty common one but it bears repeating. You never get there. There’s no such thing. It’s something you strive for and work towards. The sooner one learns that, the better.

Complexity is the enemy of security

You may call me a weirdo, but I think the entire concept of simplicity is a beautiful thing. This applies to web design, programming, life organization, and yes — security. It’s quite logical that complexity would hinder security, because one’s ability to defend a system rests heavily on their understanding of it. Complexity makes things more difficult to understand. Enough said.

Conclusion

My hope is that this short collection of ideas about information security will be of use to someone. If you have any questions or comments feel free to email me at [email protected]. I’m sure I’ve left out a ton of stuff that should have gone into this, and I’d appreciate any scolding along those lines.
