The Importance of the “Evaluated Configuration” in Common Criteria Evaluations
How many of you have heard of the Common Criteria? If you’ve ever done security work with government, you probably have. If not, then possibly not. Either way, read on and I’ll give you my own view, including some of the barnacles clinging to the hull of the general program.
Common Criteria Background
Way back in the depths of computing history, government departments used to issue requests for proposal (RFPs) for computers having certain specific security requirements. Commercial-off-the-shelf (COTS) systems could not meet these requirements. This resulted in very expensive proposals for building (largely unsupported) custom systems. Worse, from a security perspective, the requirements weren’t always self-consistent or even supportive of actually maintaining good security. Finally, worst of all, do you think the support for these custom beasts was all that great, compared to the general systems serving millions of customers? Not so much…
In an effort to alleviate the problems related to this process, various governments came up with schemes to try and get vendors to build security into their “normal” offerings. In the US, this resulted in the Orange Book, aka the Trusted Computer Systems Evaluation Criteria, as well as an NSA-managed process for getting systems evaluated. Other countries had similar, but not identical, schemes and criteria, such as the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC) and the European Information Technology Security Evaluation Criteria (ITSEC). Fast forward through a lot of international cooperation between security groups in various governments (as described in this 1998 NIST newsletter) and you get the Common Criteria, a Mutual Recognition Agreement, and at least 22 officially participating countries. This is unprecedented security goodness and has resulted in, if nothing else, many ex-government folks that received some good basic computer security training.
Assurance and Features
One of the keys to solving the historical custom-computer problem was bringing security experts together to define a minimal, internally consistent set of requirements for better security. For example, what good was an audit system if an admin could alter it to hide shenanigans? This resulted in criteria defining two key concepts: security assurance and security features.
In the Orange Book, the evaluation levels tied these together. For example, a high-assurance system had to have mandatory access controls evaluated, or it would fail. The ITSEC did not follow this paradigm and separated features from assurance. In theory, this would allow a very simple, high-assurance system to be designed, developed and evaluated.
The Common Criteria took a further step forward by allowing for separate Protection Profiles to be developed for different types of products and Security Targets for individual evaluations. This meant you could evaluate the assurance and features for a wide variety of products – for example, smart cards – as long as an accepted Protection Profile was developed by the participating authorities. By separating assurance from features, the newer program gained a lot of flexibility.
Flexibility and “Playing the System”
Advances frequently come with trade-offs, though. Let’s run through a theoretical scenario. Let’s say I develop my own OS distribution, JeffOS, and I would like to sell to governments that require a Common Criteria certification. This is a cost of doing business to me. I want to minimize that cost in order to maximize my profit. How might I minimize my costs?
1. Pick the easiest process/country. There are now several countries, each with slightly different oversight of the process. Are they all equally rigorous? Might I find one that is a little “easier” to work with than the others? Perhaps not, but I owe it to my investors to check, yes?
2. Pick the cheapest evaluation team. These teams are in business, yes? Can I get one of them to commit to a fixed price contract? Won’t they be less likely to add new requirements if they’re on a fixed contract? Seems like a good approach.
3. Finally, maybe I should evaluate fewer components. The cost of generating evaluation evidence probably scales with the number of components. Also, imagine if they asked me to change a design to meet a security requirement; that could be costly.
I think I’ll do all three of these; it only makes sense. The last one, though, shows real promise. At the end of the day, as long as some of JeffOS is evaluated, I’ll get a certificate and can market the certification, right? I think I’ll just evaluate the kernel and the bare minimum set of drivers and utilities. No graphics. No complicated network protocols, just the basics.
This is Really How it Works
If your mind boggles at the above imaginary scenario, I feel your pain, but it makes total sense from a vendor perspective, doesn’t it? I can tell you that in my early days as an evaluator, every vendor we worked with came to us at some point and said “just tell me the minimum changes I have to make in order to pass the evaluation.”
The real weakness is that if all vendors play the system this way, then none of them has to do better. I mean, I don’t have to have my implementation of DNS evaluated as part of JeffOS if the Red Hat and SUSE evaluated systems don’t include it either. They don’t have it, I don’t have it, so the customer has an equal choice either way, and will have an approximately equal starting point for their site Certification and Accreditation process.
Advantage through Evaluated Configuration
Recognizing how this works, a really smart vendor might decide to change the game to their advantage. How? By doing the extra work and investing the extra expense to evaluate more useful systems – in other words, by specifying a more useful evaluated configuration. Look at the following two (theoretically) evaluated systems:
- JeffOS Evaluated Configuration: kernel, shell, basic networking, X-Windows, DHCP, DNS, Apache 2.0, MySQL
- Red Hat Evaluated Configuration: kernel, shell, basic networking
If a customer intended to deploy a Web Server in their environment, assuming assurance level and protection profiles were equal, wouldn’t JeffOS have a real advantage? It seems like it would to me.
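To make that advantage concrete, here’s a minimal (and purely hypothetical) Python sketch that checks a planned deployment’s components against each vendor’s evaluated configuration. The component lists are just the made-up ones from the example above, not the contents of any real certificate.

```python
# Hypothetical illustration: which deployed components fall outside each
# vendor's evaluated configuration? Component lists are the made-up examples
# from this post, not real certificate contents.

JEFFOS_EVALUATED = {
    "kernel", "shell", "basic networking", "X-Windows",
    "DHCP", "DNS", "Apache 2.0", "MySQL",
}
REDHAT_EVALUATED = {"kernel", "shell", "basic networking"}

# What our hypothetical web server deployment actually runs.
WEB_SERVER_DEPLOYMENT = {"kernel", "shell", "basic networking", "Apache 2.0", "MySQL"}

def outside_evaluated_config(deployment, evaluated):
    """Return the deployed components not covered by the evaluated configuration."""
    return sorted(deployment - evaluated)

for name, evaluated in [("JeffOS", JEFFOS_EVALUATED), ("Red Hat", REDHAT_EVALUATED)]:
    gaps = outside_evaluated_config(WEB_SERVER_DEPLOYMENT, evaluated)
    if gaps:
        print(f"{name}: components outside the evaluated configuration: {gaps}")
    else:
        print(f"{name}: the entire deployment is inside the evaluated configuration")
```

In this toy comparison, the JeffOS certificate covers everything the web server deployment runs, while the Red Hat certificate leaves Apache and MySQL outside the evaluated configuration for the customer to worry about.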
If I were a customer, I’d be telling my vendors about this and getting them to compete for my business not just by evaluating systems, but by evaluating systems with useful configurations.
Think Security ~ Jeff