As usual, all the posts in this series can be found on this page if you want a refresher or are just now jumping on the bandwagon.

In the last post in this series, a very, very long time ago, we took a look at Threat Event Frequency (TEF). In its simplest form, TEF is how often a threat event happens.

We are now going to take a look at the other component of Loss Frequency (LF), Vulnerability. However, this is not how we normally think of vulnerability.

From the Introduction, Vulnerability is:

The probability that an asset will be unable to resist the actions of a threat agent.

This is quite different from how we normally define vulnerability as information security professionals. We usually view vulnerability as a specific weakness in a system or application. In FAIR, vulnerability is an inverse measure of the ability of an asset to protect itself against the efforts of a threat agent.

A high probability means that the asset will likely be compromised and a low probability means that the asset will be able to effectively resist. You have to let that one percolate for a bit.

Vulnerability is made up of two factors, and here we diverge a bit from the Introduction. Both the Introduction and the Open Group Risk Taxonomy use Control Strength and Threat Capability as the factors of Vulnerability. Jack has since modified this slightly: Threat Capability (TCap) is still used, but Control Strength has been changed to Resistance Strength (RS). Let's talk about both of these for a second.

Resistance Strength is the probability that an asset can resist a baseline measure of force. Let's say I have a gate that keeps people from coming onto my property. Someone on a bicycle would be kept out, but someone in a Mini Cooper wouldn't. We would probably say that the Resistance Strength at that point is pretty low. Replace that flimsy gate with a door to rival those protecting the installation in Cheyenne Mountain and our Resistance Strength goes through the roof.

Threat Capability is just what it sounds like: how capable are the evildoers attempting to compromise my asset? Are they riding bicycles or driving Abrams tanks?

Putting the two together, Resistance Strength and Threat Capability, gives us Vulnerability. For instance, say we have that super strong door we were talking about. There is a very high probability that the door will be able to resist a baseline or average level of force. How about the evil dude on the bicycle? His Threat Capability is very low. Combining the two gives us a very low probability that the asset will be unable to resist the threat agent, i.e. we're going to be just fine.
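To make that relationship concrete, here is a minimal sketch of the idea. This is my own illustration, not part of the FAIR specification: it treats Vulnerability as the probability that a sampled Threat Capability exceeds a sampled Resistance Strength, and the mapping from qualitative ratings to percentile ranges is an assumption made up for the example.

```python
import random

# Hypothetical percentile ranges for qualitative ratings.
# This mapping is an illustrative assumption, not defined by FAIR.
RATING_RANGES = {
    "very_low": (0, 20),
    "low": (20, 40),
    "moderate": (40, 60),
    "high": (60, 80),
    "very_high": (80, 100),
}

def estimate_vulnerability(tcap_rating, rs_rating, samples=100_000, seed=42):
    """Estimate Vulnerability as the probability that a threat agent's
    capability (TCap) exceeds the asset's Resistance Strength (RS)."""
    rng = random.Random(seed)
    tcap_lo, tcap_hi = RATING_RANGES[tcap_rating]
    rs_lo, rs_hi = RATING_RANGES[rs_rating]
    wins = sum(
        rng.uniform(tcap_lo, tcap_hi) > rng.uniform(rs_lo, rs_hi)
        for _ in range(samples)
    )
    return wins / samples

# The evil dude on the bicycle versus the Cheyenne Mountain door:
print(estimate_vulnerability("very_low", "very_high"))  # 0.0
```

Evenly matched ratings land near 0.5, which matches the intuition that Vulnerability is an inverse measure: the stronger the resistance relative to the attacker, the lower the probability the asset fails to resist.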

Next time we are going to take a quick look at how Threat Event Frequency and Vulnerability define Loss Frequency, and then we will start on the Probable Loss side of the Risk equation.

As always, please leave a comment or send me a note at kriggins@infosecramblings.com with your thoughts.




Well, there I go again: I keep saying I am going to get back to it and then leave you hanging. No real excuse this time other than being mondo busy.

As usual, all the posts in this series can be found on this page if you want a refresher or are just now jumping on the bandwagon.

Anyway, last time we started talking about the taxonomy and the definition of risk from FAIR's perspective. As mentioned, we are going to leave those alone for a bit. We are going to build the taxonomy from the ground up. So, without further ado, here is where we are starting.

Threat Event Frequency

We start with the first component of Loss Frequency, which is threat event frequency (TEF). From the introduction, threat event frequency is:

The probable frequency, within a given timeframe, that a threat agent will act against an asset.

In other words, how many times within some amount of time will the bad guy try to do something evil to our treasured asset. This is important to know in determining how often we might actually suffer a loss.

So, to figure out the "how many in how much" part of the equation, we need to look at a couple of things: contact and action. However, we are not talking about binary definitions here, such as "was there contact or not."

First let's talk contact. From the introduction, contact is:

The probable frequency, within a given timeframe, that a threat agent will come into contact with an asset.

There are three things we want to consider about contact: whether it is random, regular, or intentional. Is contact the result of pure chance? Is there some regularity to it? And, most importantly, is the bad guy looking specifically for the types of treasure we have, or are we a target of opportunity?

Now action. From the introduction, action is:

The probability that a threat agent will act against an asset once contact occurs.

Again, we want to look at three things: asset value, vulnerability, and risk. Is it worth it to the bad guy to try something, i.e. is the value of the asset high enough? How vulnerable does the bad guy perceive the treasure to be? Our treasure is much less vulnerable sitting in a bank vault than sitting unwatched on a table in a crowded room. Finally, what is the risk to the bad guy? How likely is he to get caught if he tries something?

All these factors must be taken into consideration when we are thinking about threat event frequency.
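As a rough illustration of how the pieces combine, TEF can be sketched as contact frequency multiplied by the probability that contact turns into action. The function and the numbers below are mine, invented for the example, not taken from the FAIR documentation.

```python
def threat_event_frequency(contact_frequency, p_action):
    """Sketch of TEF: how often a threat agent acts against an asset,
    derived from how often contact occurs (events per timeframe) and
    how likely it is that contact turns into action."""
    if not 0.0 <= p_action <= 1.0:
        raise ValueError("p_action must be a probability between 0 and 1")
    return contact_frequency * p_action

# Hypothetical numbers: a scanner touches our server about 200 times a
# year, but only 5% of those contacts become an actual attempt.
print(threat_event_frequency(200, 0.05))  # 10.0 threat events per year
```

The point of the sketch is simply that frequent contact with no motivation to act, or strong motivation with almost no contact, both produce a low TEF.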

Next we will explore the other half of loss frequency, vulnerability. I'll tell you right now that it is not what you think it is, unless, of course, you are already familiar with the FAIR Taxonomy. 🙂

As usual, drop me a note or leave me a comment with your thoughts.



Good afternoon everybody! I hope your day is going well.

Here are today's Interesting Information Security Bits from around the web.

  1. Dre is reading a lot of the same people as I am when it comes to security programs. This post has some good stuff in it along with some great additional reading for us.
    What makes a solid security program? | tssci security
    Tags: ( security-program )
  2. Another day, another case of people handing over credentials to anybody who asks.
    Another Twitter Scam: Twitviewer -- spylogic.net
    Tags: ( twitter )
  3. Looks like there is a nasty BIND vulnerability being actively exploited. Time to update.
    BIND 9 Issue
    Tags: ( bind dns )
  4. Very nice. I like the way he approached this.
    Tactical Web Application Security: Lessons Learned From Casino Surveillance
    Tags: ( general )
  5. Wim is getting into FAIR. Very cool stuff.
    all is FAIR in love and war. « The Security Kitchen
    Tags: ( fair )
  6. An interesting case of what you read on the internet isn't always true 🙂
    Fake Retweets Lead To Spam - SpywareGuide Greynets Blog
    Tags: ( twitter )
  7. Sometimes high availability doesn't make your life easier. Check out Shrdlu's post and think about your situation a little.
    When 'high availability' isn't good enough.
    Tags: ( general )
  8. If you are an information security professional or want to be, I strongly recommend you carve out the time to attend Mike and Lee's talk at Defcon. They know what they are talking about and you should too!
    Effective Information Security Career Planning at DefCon | Information Security Leaders
    Tags: ( career )
  9. No big surprise here for me.
    Study says SSL-certficate warnings are as good as useless - News - The H Security: News and features
    Tags: ( ssl )

That's it for today. Have fun!

Subscribe to my RSS Feed if you enjoy these daily Interesting Bits posts.



This is the presentation I gave at Secure360 2009 titled "Measuring and Communicating Risk using Factor Analysis of Information Risk (FAIR)."

As always, I am interested in your feedback.



In the last post in our series on FAIR we took a look at the data flow diagram for the system that Oblivia wants us to assess. We also reviewed the definition of threat and quickly figured out we need a way to narrow down which threats we should be most concerned about.

FAIR uses the concepts of threat communities and threat characteristics to help us group like threat agents together and determine the probability of a threat affecting us. A threat agent is an individual member, a person or instance, of a threat population.

Let's take a look at these two concepts and see how they can help us.

First, the definition of threat community. From the Introduction to FAIR: Risk Landscape Components:

Subsets of the overall threat agent population that share key characteristics

Basically, we are talking about the characteristics that define a group of threat agents. The Introduction uses a set of characteristics that could be used to place a threat agent in a community called 'terrorist.' How about the following characteristics?

  • Motive: Money
  • Primary intent: Financial gain
  • Sponsorship: Unofficial
  • Preferred general target characteristics: Systems where small changes are difficult to find
  • Preferred specific target characteristics: High traffic/significant impact systems
  • Preferred targets: Systems and applications
  • Capability: Significant technology skills
  • Personal risk tolerance: Medium
  • Concern for collateral damage: High (need for changes to remain unnoticed)

What could we call the threat community whose agents have these characteristics? I'm going to hate myself for using the term, but cyber criminals seems to work. Individuals who make money by subverting computer systems. This gives us some information about what makes up the community. Now we need some information that can help us determine which communities are worthy of more inspection. That is where threat characteristics come in.

From the Introduction, paraphrased a bit:

There are four primary characteristics we are concerned with in our risk taxonomy:

  • The frequency with which threat agents come into contact with our organizations or assets
  • The probability that threat agents will act against our organizations or assets
  • The probability of threat agent actions being successful in overcoming protective controls
  • The probable nature (type and severity) of impact to our assets

What we are really concerned about, from an agent characteristic perspective, is the frequency of contact, the likelihood that the agent will act against us, the likelihood that the agent will succeed, and the likely type and severity of the result of that action to our assets.

A situation where the agent is rarely in contact, is unlikely to actually attack us, is even more unlikely to succeed if they do, and, finally, would cause insignificant impact if successful is much different than one where the agent is in constant contact, is very likely to act against us, is skillful enough to succeed, and will probably cause severe impact to our assets.

Understanding the different communities and the significant characteristics mentioned above can help us a great deal in managing risk. They help us have a much more concrete estimate of the probability of something untoward happening to us as the result of a threat agent acting against us.
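As a toy illustration of profiling communities on those four characteristics, we might sketch something like the following. The 1-to-5 scoring scale, the ranking heuristic, and the example scores are my assumptions for illustration, not part of the FAIR taxonomy.

```python
from dataclasses import dataclass

@dataclass
class ThreatCommunity:
    """The four primary characteristics, scored 1 (negligible) to 5
    (extreme). Scale and scores are illustrative assumptions."""
    name: str
    contact_frequency: int  # how often agents come into contact with us
    p_action: int           # likelihood they act against us
    p_success: int          # likelihood they overcome our controls
    impact: int             # probable severity if they succeed

    def priority(self):
        # Multiplying reflects that a community only warrants deep
        # analysis when all four factors are non-trivial.
        return (self.contact_frequency * self.p_action
                * self.p_success * self.impact)

communities = [
    ThreatCommunity("cyber criminals", 4, 4, 3, 4),
    ThreatCommunity("vandals", 3, 2, 2, 2),
]
for c in sorted(communities, key=ThreatCommunity.priority, reverse=True):
    print(c.name, c.priority())  # cyber criminals 192, then vandals 24
```

Even a crude ranking like this makes the point of the paragraph above: a community that scores low on any one factor drops quickly down the list.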

In our next installment we will take one more quick look at a few characteristics related to assets. We will then dive into risk factoring in the next few posts.

As always, I am really interested in your thoughts. I read and take to heart every comment that is left and email received, so please join the conversation!



Speaking at Secure360

by kriggins on March 16, 2009

in Announcement, Conferences, fair, Risk Management

I am really excited. I will be speaking at Secure360. The conference takes place on May 12th and 13th in St. Paul, Minnesota. I will be speaking in the afternoon on the 13th.

From the Secure360 website:

The Upper Midwest Security Alliance (UMSA) serves business, government, and education professionals in the Twin Cities and surrounding areas. The Secure360 conference is the primary mission of UMSA. The annual event is a unique opportunity to explore the latest threats and opportunities in enterprise risk management.

The title of my talk is "Measuring and Communicating Risk with Factor Analysis of Information Risk (FAIR)."



In the last post in our series, we spent some time looking at the definition of asset. In the post previous to that, we described the system we are assessing and presented a diagram of the system and its architecture.

In this post, we are going to start the discussion about threats, but first, a little more information about our scenario.

Phil, in a comment on the last post in this series, said the following.

I suggest that you create a data flow diagram (DFD) and then map out how the data flows.

After saying a) I don't know how and b) we don't need one (not in those exact words :)), I got to thinking about it a bit more and decided he was right. A data flow diagram will be helpful. So a quick study of DFDs later, here is my feeble attempt at providing one for us to use.

Oblivia Tax Rate System Data Flow Diagram (DFD)


You will probably quickly see where we will be focusing our time during our assessment.

Anyway, let's talk about threats. First, from the Introduction to FAIR: Risk Landscape Components:

As I [Jack Jones] mentioned in the Bald Tire section, threats are anything (e.g., object, substance, human, etc.) that are capable of acting against an asset in a manner that can result in harm. A tornado is a threat, as is a flood, as is a hacker. The key consideration is that threats apply the force (water, wind, exploit code, etc.) against an asset that can cause a loss event to occur.

Fairly straightforward. Basically, we are looking for those things that, when they apply force against our asset, can cause damage or loss. Well, even in the simplistic scenario we are looking at, that list is as long as my arm. If that's the case, how do we know which threats to focus on?

Funny you should ask. Jack goes on to talk about threat communities, "Subsets of the overall threat agent population that share key characteristics [or traits]", and threat characteristics which are used to profile threat communities. We will take a deeper look at both in the next post of this series.

As always, I am really interested in your thoughts. I read and take to heart every one that is left, so please join the conversation!




Exploring F.A.I.R – Assets Redux

by kriggins on February 26, 2009

in fair, Risk Management

So, to revisit the post which sparked the last few, let's talk about assets. Before we get started though, just a reminder that all the posts in this series can be found on this page.

And now, on with the show. We have described the organization for which we are performing the assessment. We have also described, to a certain extent, the architecture of the system involved.

Again, we are going to take things in a little different order than presented in the Introduction to FAIR. The first thing we are going to look at is asset. From the introduction:

Any data, device, or other component of the environment that supports information-related activities, which can be illicitly accessed, used, disclosed, altered, destroyed, and/or stolen, resulting in loss.

With this definition in mind, why don't we make a list of the assets we might be concerned about.

  • Bandwidth
  • Hardware (Servers, routers, switches, firewalls, etc.)
  • Services (Web services and database services)
  • Information (Tax code and tax rates)

The bandwidth is an asset because evildoers on the internet need a way to spread their evil. They would much prefer to use our bandwidth than pay for their own.

The hardware is an asset because someone might want to steal it or run their own software on it.

The services provided are an asset for similar reasons. The evildoers need places to put the stuff they want to spread or a place to stash the stuff they have already taken elsewhere.

The information is an asset because...well...it's why the rest of the stuff is there in the first place 🙂 Seriously, information is always an asset. As discussed in the first post on assets, it likely doesn't matter if the information is classified as public or not. The integrity and availability of that public information can be very important.

For instance, in our case, the information defines how much money a company will have to pay in taxes. If it is modified or deleted, it can have a serious effect on the revenue of the state.

Ideally, we would perform a risk analysis for each asset "class" above and incorporate all the results into our risk assessment. For our purposes though, we are going to concentrate on just one, the information.

In the next post in this series we will take a look at threats and threat agents.

As always, please let me know your thoughts in the comments.


Image courtesy of tao_zyn.


In the last post of the series we took a look at the organization we are helping out with our assessment. We were also given their Loss Magnitude Table, which gives us a good idea of their risk tolerance.

Today we are going to look at the architecture of the system that hosts Oblivia's tax code and tax rate tables.

As indicated before, Oblivia does not have a very mature technology infrastructure. However, they have been given some good advice about the need for firewalls and about only allowing needed ports and such. Below is a diagram of their public facing web infrastructure.

Oblivia Internet Facing Network Architecture

The system configurations are as follows:

Web Server:

  • Operating System: A Very Fine OS (fully patched)
  • HTTPD Software: A Very Fine Web Server (fully patched)
  • CMS: An internally developed application. A penetration test was recently performed and several XSS issues were uncovered, along with one SQL injection problem (important bits of information for later).

Database Server:

  • Operating System: A Very Fine OS (fully patched)
  • Database Server: A Very Fine DB Server (fully patched)

As you can see, keeping systems appropriately patched has been another good bit of advice given and taken to heart. We will definitely be visiting some of the traffic allowed as we progress. 🙂

One final note: there is no remote access solution in place, but those responsible for the systems sometimes need to be able to work on them from remote locations, i.e. home. You can probably tell how they are doing that from the ports allowed through the firewalls.

In our next post, we will look at assets again. As always, feel free to chime in in the comments if you have something to say or I goofed again 🙂


PS - For those interested, the diagram above was created with Gliffy. It is a really nifty free on-line diagramming tool.


This is the next post in our Exploring F.A.I.R. series. Links to previous posts can be found here.

I didn't plan very well when I jumped right into things with my last post about assets. I made the statement that the information hosted on the web server was not an asset and I was rightfully corrected by several folks.

Where I erred was in having some preconceived ideas of where things were going to go and not sharing those ideas with you ahead of time. That being said, those ideas have changed and I am going to start sharing them in this post.

I am going to follow in the footsteps of others (i.e. steal their ideas) and flesh out our scenario first.  I am essentially copying what Chris did, although not quite as detailed.

Below you will find a description of the organization that we are performing our assessment for along with a Loss Magnitude Table which we will talk about later. The next post will present the characteristics of the system we will be assessing.

Welcome to Oblivia!

Oblivia is a small country that is just now entering the technological age. Needless to say, maturity in their information technology infrastructure is a bit lacking.

The sole source of income for the government is the taxes they assess on companies doing business in the country. Citizens do not pay taxes and there are no tariffs on imports or exports. (I know, work with me here.) Their tax code is quite complicated and there are many different rates depending on business type, revenue, etc. Annual tax revenue for the country is $10,000,000 and their budget, which they adhere to very well, is $9,000,000. (I told you, it's a small country!)

They have decided to publish the tax code on the internet and, in the interests of having a transparent tax code, have declared that public representation to be the authoritative source.

We have been hired to assess the web server and infrastructure that has been put in place to publish the tax code.

Below is the Loss Magnitude Table for the Oblivian government.

Severe (Sv) >$1,000,000
High (H) $500,000-$1,000,000
Significant (Sg) $250,000-$499,999
Moderate (M) $100,000-$249,999
Low (L) $50,000-$99,999
Very Low (VL) <$50,000
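For illustration, the table above can be sketched as a small lookup function. The function itself is just a convenience I'm adding here, not part of FAIR; the thresholds come straight from the table, with Severe kicking in only strictly above $1,000,000.

```python
def loss_magnitude(loss):
    """Map a dollar loss to Oblivia's Loss Magnitude rating,
    following the table above."""
    if loss > 1_000_000:
        return "Severe (Sv)"
    for threshold, rating in [
        (500_000, "High (H)"),
        (250_000, "Significant (Sg)"),
        (100_000, "Moderate (M)"),
        (50_000, "Low (L)"),
    ]:
        if loss >= threshold:
            return rating
    return "Very Low (VL)"

print(loss_magnitude(750_000))  # High (H)
print(loss_magnitude(30_000))   # Very Low (VL)
```

Notice how tight the bands are relative to Oblivia's finances: with only $1,000,000 of headroom between revenue and budget, a single Severe loss event would wipe out the government's entire surplus.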

Stay tuned as we describe the infrastructure in the next installment of "Exploring F.A.I.R." As always, comments are not only welcome, you are encouraged to let me know what you think.

