I just finished reading Cory Doctorow's Little Brother. You can buy a copy here or read it for free here. Don't let its classification as young adult deter you. I really enjoyed it. If you are interested in privacy, government power, and how "it's for your own good" can escalate out of control, I highly recommend giving it a read.
In the book, a terrorist attack on San Francisco results in draconian security measures being put in place. Our protagonist is Marcus, a 17-year-old who gets picked up by those enforcing the new security measures and is sorely mistreated. Throughout the book, we follow Marcus as he uses every means at his disposal, most of them technical in nature, to fight for his rights and the rights of his friends as citizens. He is able to circumvent many of the controls put in place because he is a savvy, technically astute individual who has the security mindset we talk about frequently and is, in many cases, smarter than those who designed the systems he fights against.
So what does all this have to do with a secure system design that is impossible to break? Well, first of all, it is impossible to design a secure system that is impossible to break. Further, as Bruce Schneier says in the afterword:
"Anyone can design a security system so strong he himself can't break it."
We see this same type of phenomenon in other areas. For me, it's proofreading. I have the hardest time proofreading my own writing because I know what it is supposed to say. My own brain gets in my way, and I read the text as I intended it to be rather than as I actually wrote it.
If we can't design perfect systems and we are not able to sufficiently test our systems ourselves, how can we improve those designs to make them more robust and harder to break?
There are a lot of things we can do, like build on the successes of others, follow "best practices", etc. However, I can think of a couple of things that can significantly improve our efforts:
- Peer review - We should have our peers look at our designs. They will see things that we are blind to.
- Testing by a third party - Yes, I am promoting third-party testing of our systems, preferably by more than one person. Again, the more eyes involved in reviewing a system, the better the chance that weaknesses will be found. I am not proposing that every system get a third-party review; that would be prohibitively expensive. However, important ones probably should.
This also started me thinking about our risk assessment processes and procedures. If we develop our risk assessment processes internally, aren't we, in the context of the assertions above, creating a system that is destined to have built-in shortcomings? Should we have our risk assessment processes "tested"?
I'm interested in your thoughts on both topics, so drop me a note in the comments.