On the Purpose of Security Standards
An interesting article was published in Information Security Resources, titled: “Payment Card Industry Swallows Its Own Tail”.
The author seems to claim that PCI DSS may not survive for long, because the various stakeholders are too busy blaming each other for security breaches instead of trying to make the ecosystem more secure. Moreover, organizations that are PCI DSS compliant still suffer security breaches, which seems to indicate that the standard is ineffective.
There are two questions that need to be asked:
- Is the purpose of PCI DSS indeed to make the ecosystem more secure, to everyone’s benefit?
- Does a breach in a compliant system imply that the standard is ineffective?
I think the answer to the first question is “not only”, and the answer to the second is simply “no”.
It is important to understand the difficulty and the inherent ineffectiveness of security standards, to discuss the purpose that these standards can serve, and then to figure out whether a security standard such as PCI DSS meets its purpose.
It is very difficult, and in most cases impossible, to write a checklist that assures the security of a complex system. For a checklist to be “checkable” it must be drafted positively, as opposed to a security policy or goal, which is usually drafted negatively. A checklist enumerates states that shall happen, whereas a security goal is a set of states that shall not happen. Mapping requirements of the second type to requirements of the first type is difficult (see: “The toughest part of designing secure products”).
A standard is essentially a checklist that was put in a normative form that allows for enforcement and compliance assessment. Where you cannot draft a positive checklist, you cannot draft a standard either. For a positive checklist, you must convert requirements of what shall not happen (e.g., “the attacker shall not be able to access customer lists”) into requirements (checklist items) on what shall happen (e.g., “the system shall encrypt this data here, and block that port there”). Such a conversion is not easily carried out for complex systems; in many cases it is impossible in a general security standard, or at all.
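The distinction can be made concrete. A minimal sketch, in Python, with every configuration key and checklist item invented for illustration: each positive requirement reduces to a predicate over observable system state, while a negative goal does not.

```python
# Hypothetical illustration: positive checklist items ("the system shall...")
# map to concrete, mechanically checkable predicates over system state.
# All names and checks below are made up for this sketch.

def check_positive_requirements(config):
    """Evaluate each positive checklist item against observable state."""
    checklist = {
        "cardholder data is encrypted at rest":
            config.get("storage_encryption") == "AES-256",
        "telnet port is blocked":
            23 in config.get("blocked_ports", []),
        "vendor default passwords are changed":
            not config.get("uses_vendor_defaults", True),
    }
    return checklist

# A negative goal like "the attacker shall not be able to access
# customer lists" has no such predicate: it quantifies over every
# possible attack, so no finite inspection of `config` can verify it.

config = {
    "storage_encryption": "AES-256",
    "blocked_ports": [23, 2323],
    "uses_vendor_defaults": False,
}
results = check_positive_requirements(config)
print(all(results.values()))  # True: every item is mechanically verifiable
```

The point of the sketch is only that the left column of such a checklist can be assessed by an auditor, whereas the negative goal it was derived from cannot.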
There are two ways for standards authors to address this limitation:
Do as much as they can. Realize that compliance with the standard will never assure security, but may only help block particular attack vectors, at least partially. Such a standard will still make the compliant system more secure, because some security is much better than none, and mostly because it will funnel the developer into addressing security matters and into understanding security concepts that may lead him to better security decisions in other aspects of the system. This is probably the best use for security standards: address as many aspects as possible and prompt the developer to address the rest. Adopting this approach, however, requires leaving behind the paradigm of “Compliant = Never To Be Broken Into”. Your standard is written to be enforceable, but you do a partial job, and you know it.
Write a standard that is not based on a positive checklist. Instead of struggling to derive a checklist of what a system shall be from all that shall not happen, some standards authors use the security goals as checklist items as-is. In such cases, you find security standards with requirements such as: “data shall not appear in clear-text form”, which is almost like saying: “the system shall be such that it cannot be broken into”. This approach makes the standard easier to write, and certainly much more complete, but it also makes compliance impossible to assess. To resolve this, such standards bodies adopt what is called “self-compliance” or “self-regulation”. This essentially means that compliance is determined by the owner of the system attesting that his system complies. He gets the compliance diploma based on this testimony, right until his system is proven otherwise.
Standards of the first type, such as FIPS 140-2, actually help make systems more secure, but they are much less attractive as liability shifters because they are naturally and knowingly incomplete. They allow situations where a system is broken (= damage that someone needs to be blamed for) while remaining compliant with the standard (= the standard cannot be used as a tool for blaming anyone). On the other hand, standards of the second type, that is, standards whose requirements amount to “your system is compliant as long as you took enough measures to make sure that your system cannot be broken into”, are great for liability shifting. You were broken into, and thus you are by definition out of compliance, and thus the blame is on you for falsely attesting to the compliance of your system.
When looking at PCI DSS and determining whether it succeeded, it is important to first note what its purpose is. No standard ensures security. There are standards that improve security while allowing systems to be breached without falling out of compliance, and on the other side there are standards that are, in essence, sophisticated legal tools for blaming the vendor for any security lapse, while making it appear as if the vendor got all the guidance needed to build a secure, compliant system.
If the purpose of PCI DSS is to shift liability, then it does so well, by allowing compliance to be revoked in light of a breach. If the purpose of PCI DSS is to ensure complete security, then it failed before a single word of it was written. If its purpose is just to improve security, then in spite of occasional breaches of compliant systems, it may still be serving its purpose well.