
Good afternoon everybody! I hope your day is going well.

Here are today's Interesting Information Security Bits from around the web.

  1. Here is a nice post talking about fuzzing with Burp.
    ClearNet Security : need to do a GET before POST, fuzzing with BURP and WebScarab
    Tags: ( webappsec fuzzing burp )
  2. I know it seems like I point out every FudSec.org post that happens and, actually, I do. It's because they are all great posts that have good thought generating material. Jayson attacks Cyberwar in this week's edition.
    Beware of Falling Turtles (Plus other things that shouldn't really frighten us) - fudsec.com
    Tags: ( fudsec cyberwar )
  3. This is a must read in my opinion. I have only read the executive summary and skimmed the assurance framework part so far, but they alone are worth the price of admission. I look forward to digging into the assessment portion soon.
    Cloud Computing Risk Assessment -- ENISA
    Tags: ( cloud risk-assessment )
  4. Craig has an interview up with Giles Hogben that offers some insight into the new Cloud Security Risk Assessment mentioned above.
    ENISA Cloud Security Risk Assessment: An Interview with Giles Hogben | Cloud Security
    Tags: ( cloud risk-assessment )
  5. Anton takes an interesting approach to why PCI is good.
    Anton Chuvakin Blog - "Security Warrior": Smart vs Stupid: But Not Why You Think So!
    Tags: ( pci )

That's it for today. Have fun!

Subscribe to my RSS Feed if you enjoy these daily Interesting Bits posts.

Kevin


Well, there I go again: I keep saying I am going to get back to it and then leaving you hanging. No real excuse this time other than being mondo busy.

As usual, all the posts in this series can be found on this page if you want a refresher or are just now jumping on the bandwagon.

Anyway, last time we started talking about the taxonomy and the definition of risk from FAIR's perspective. As mentioned, we are going to leave those alone for a bit. We are going to build the taxonomy from the ground up. So, without further ado, here is where we are starting.

Threat Event Frequency

We start with the first component of Loss Frequency, which is threat event frequency (TEF). From the introduction, threat event frequency is:

The probable frequency, within a given timeframe, that a threat agent will act against an asset.

In other words, how many times within a given amount of time will the bad guy try to do something evil to our treasured asset? This is important to know when determining how often we might actually suffer a loss.

So, to figure out the 'how many in how much time' part of the equation, we need to look at a couple of things: contact and action. However, we are not talking about binary definitions here, such as 'was there contact or not'.

First, let's talk about contact. From the introduction, contact is:

The probable frequency, within a given timeframe, that a threat agent will come into contact with an asset.

There are three things we want to consider: whether contact is regular, random, or intentional. Is contact the result of just random chance, or is there some regularity to it? We are also really interested in whether the contact is intentional or not: is the bad guy looking specifically for the types of treasure we have, or are we a target of opportunity?

Now action. From the introduction, action is:

The probability that a threat agent will act against an asset once contact occurs.

Again, we want to look at three things: asset value, vulnerability, and risk. Is it worth it to the bad guy to try something, i.e., is the value of the asset high enough? How vulnerable does the bad guy perceive the treasure to be? Our treasure is much less vulnerable sitting in a bank vault than it is sitting unwatched on a table in a crowded room. Finally, what is the risk to the bad guy? How likely is he to get caught if he tries something?

All these factors must be taken into consideration when we are thinking about threat event frequency.
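
To make the relationship concrete, here is a rough back-of-the-envelope sketch in Python. It is not the official FAIR calculation (FAIR works with calibrated ranges and distributions rather than single point estimates), and the function name and numbers below are purely illustrative assumptions:

    # Illustrative sketch only: FAIR estimates these values as ranges, not
    # single numbers. The names and figures here are made up for the example.
    def threat_event_frequency(contact_frequency, probability_of_action):
        """Estimate how often a threat agent acts against an asset per year.

        contact_frequency     -- estimated contacts per year (regular, random,
                                 or intentional contact all count)
        probability_of_action -- chance (0 to 1) the agent acts once contact
                                 occurs, driven by perceived asset value,
                                 perceived vulnerability, and the agent's own
                                 risk of getting caught
        """
        return contact_frequency * probability_of_action

    # Example: a scanner touches our internet-facing host about 50 times a
    # year, and we judge a 10% chance any given contact becomes an attempt.
    tef = threat_event_frequency(contact_frequency=50, probability_of_action=0.10)
    print(f"Estimated threat events per year: {tef}")  # -> 5.0

The point is simply that how often contact happens and how likely the bad guy is to act once it does combine to drive threat event frequency.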

Next we will explore the other half of loss frequency, vulnerability. I'll tell you right now that it is not what you think it is, unless, of course, you are already familiar with the FAIR Taxonomy. :)

As usual, drop me a note or leave me a comment with your thoughts.

-Kevin


Good afternoon everybody! I hope your day is going well.

Here are today's Interesting Information Security Bits from around the web.

  1. You may have already heard, but Heartland and RBS are having some PCI issues.
    Visa yanks creds for payment card processing pair * The Register
    Tags: ( pci )
  2. Good tips and suggestions here.
    Gaining and Maintaining Professional Momentum During Difficult Times : The Security Catalyst
    Tags: ( career )
  3. Nifty information on digging into what information Firefox keeps as you peruse the internet.
    Firefox 3.X Forensics: Using F3e << SANS Computer Forensics, Investigation, and Response
    Tags: ( forensics firefox )
  4. A nice source for lots of HIPAA information. (Via @privacyprof)
    FAQ: What is the impact of HIPAA on IT operations?
    Tags: ( hipaa )
  5. Yup. Part 3 of Synjunkie's "Abusing Citrix" series is up. Again, good stuff.
    Syn: Abusing Citrix - Part 3
    Tags: ( hacking citrix )
  6. Jeff has a great post about first solutions and thoughts. Good stuff.
    How to Catch a Balloon : The Security Catalyst
    Tags: ( general )
  7. Chris has a really good primer/reminder on performing an effective and complete application security risk assessment. Good stuff. I hope he gets permission to share more details.
    Application Security Risk Assessments << Risktical Ramblings
    Tags: ( risk assessment application )
  8. Bill has a slideshow up from his trip to Boston for SOURCE Boston.
    CSO Online - Security and Risk - Slideshow - SOURCE Boston Security Conference - Slide 1
    Tags: ( source conferences )
  9. Wow. Just wow. (via @brianhonan)
    Drunken BOFH wreaks $1.2m in Oz damage * The Register
    Tags: ( general )

That's it for today. Have fun!

Subscribe to my RSS Feed if you enjoy these daily Interesting Bits posts.

Kevin


I just finished reading Cory Doctorow's Little Brother. You can buy a copy here or read it for free here. Don't let its classification as young adult fiction deter you. I really enjoyed it. If you are interested in privacy and government, and in how "it's for your own good" can escalate out of control, I highly recommend giving it a gander.

In the book, there is a terrorist attack on San Francisco that results in draconian security measures being put in place. Our protagonist is Marcus, a 17-year-old who gets picked up by those enforcing the new security measures and is sorely mistreated. Throughout the book, we follow Marcus as he fights for his rights and the rights of his friends as citizens, using every means at his disposal, most of them technical in nature. He is able to circumvent many of the controls put in place because he is a savvy, technically astute individual who has the security mindset we talk about frequently and is, in many cases, smarter than those who designed the systems he fights against.

So what does all this have to do with a secure system design that is impossible to break? Well, first of all, it is impossible to design a secure system that is impossible to break :) Further, as Bruce Schneier says in the afterword:

"Anyone can design a security system so strong he himself can't break it."

We see this same type of phenomenon in other areas. For me, it's proofreading. I have the hardest time proofreading my own writing because I know what it is supposed to say. My own brain gets in my way, and I read the text as I intended it to be rather than as I actually wrote it.

If we can't design perfect systems and we are not able to sufficiently test our systems ourselves, how can we improve those designs to make them more robust and harder to break?

There are a lot of things we can do, like build on the successes of others, use "best practices", etc. However, I can think of a couple of things that can significantly improve our efforts:

  1. Peer review - We should have our peers look at our designs.  They will see things that we are blind to.
  2. Testing by a third party - Yes, I am promoting third party testing of our systems, preferably by more than one person. Again, the more eyes involved in reviewing a system, the better chance that weaknesses will be found. I am not proposing that every system get a third party review. It would be prohibitively expensive.  However, important ones probably should.

This also started me thinking about our risk assessment processes and procedures. If we develop our risk assessment processes internally, aren't we, in the context of the assertions above, creating a system that is destined to have built-in shortcomings? Should we have our risk assessment processes "tested"?

I'm interested in your thoughts on both topics, so drop me a note in the comments.

Kevin



Too focused

by kriggins on March 22, 2008

in Educational, General, Security testing

I am a big fan of Seth Godin's blog, which can be found here:

http://sethgodin.typepad.com/

If you are not familiar with Mr. Godin, I highly recommend perusing his blog. While not an infosec blog, his insights into marketing and perception are useful in many ways.

He had a post that pointed to this YouTube video. Watch the video and then read on:

Did you watch it? It's important that you did for what follows.

I was reading a discussion about Risk Assessment methodologies on the CISSP forum the other day. In it, many, many different methodologies were referenced. Obviously, having a number of methodologies to choose from is great, since just about every assessment seems to be different from the last. But watching the video helped me remember that when we are using a methodology, using questionnaires, or otherwise performing an assessment, we need to be careful that we are not blinded by watching for the passes.
