I remember the first risk assessment I had to complete. It was a messy essay justifying the use of a specific port to allow an application through our firewall. Truthfully, it was downright ugly getting to the point that the port wasn’t vulnerable, and neither was the application. It was LOW risk.
When I did my first risk assessment, I didn’t realize there were methodologies (although nowhere near as mature as today) established by NIST, the RCMP, CSE, and other organizations. For some reason, my earlier years were sparse on resources when it came to risk assessments and how to develop them.
After my first risk assessment and getting approval to allow specific traffic through the firewall, I positioned myself for training. This time, research worked for me.
In 2007 I attended the RCMP Threat and Risk Assessment two-day course in Ottawa, Ontario. The course was eye-opening: an entire methodology laid out with worksheets and examples. It was here that I found out how far off I was with my first assessment.
This is where I learned about Single Loss Expectancy, Annual Rate of Occurrence, and Annual Loss Expectancy: simple formulas that help put costs for risk in front of decision makers.
SLE (Single Loss Expectancy) = AV (Asset Value) × EF (Exposure Factor)
ALE (Annualized Loss Expectancy) = ARO (Annual Rate of Occurrence) × SLE (Single Loss Expectancy)
The instructor for this course was completely honest about these equations as well. He pointed out that the Exposure Factor is completely subjective, which makes the entire process subjective. That said, this is just a framework, and like any other framework you have to decide what works best for you. As long as you are assessing risk and doing something about it, you are better off than closing your eyes and hoping nothing happens.
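The SLE/ALE arithmetic above can be sketched in a few lines. The asset value, exposure factor, and occurrence rate below are made-up illustrative figures, not real estimates:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF, where EF is the fraction of the asset lost per incident."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = ARO x SLE, where ARO is the expected number of incidents per year."""
    return aro * sle

# Example: a $100,000 asset, 40% exposure, one incident expected every two years.
sle = single_loss_expectancy(100_000, 0.40)   # 40000.0
ale = annualized_loss_expectancy(0.5, sle)    # 20000.0
print(f"SLE: ${sle:,.0f}  ALE: ${ale:,.0f}")
```

Even with a subjective Exposure Factor, writing the calculation down forces you to state your assumptions explicitly.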
After a few examples, modeling threats and mitigation strategies started to become clearer. My early practice still left much to be desired, but having a basic template established a baseline for creating better templates going forward. For example, my basic template, following the early RCMP templates, was not much more than a risk register, but it was a start. It let me relay risk information better than essay-style documents that make someone read two and a half pages of jargon without immediate, clear context.
| Asset Description | Threat | Value | Likelihood | Risk | Control Recommendation | Residual Risk |
| --- | --- | --- | --- | --- | --- | --- |
| Database hosting client information | Stolen by attacker | $100,000 | High | High | Ensure firewall blocks external access | Medium |
| Web site | Defaced by attacker | $600 | Medium | Medium | Have a system to track changes and alert | Low |
My biggest problem with this was that the register was created in Excel and stayed there. At this point in my career, it didn’t mature much: I had multiple Excel files stored for people to view. A very static approach.
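One way out of the static-spreadsheet trap is to keep the register in a plain format like CSV (ideally under version control) and query it with short scripts. This is only a sketch; the columns and rows mirror the example table above:

```python
import csv
import io

# Inline CSV stands in for a risk_register.csv file kept under version control.
register_csv = """Asset,Threat,Value,Likelihood,Risk,Control Recommendation,Residual Risk
Database hosting client information,Stolen by attacker,100000,High,High,Ensure firewall blocks external access,Medium
Web site,Defaced by attacker,600,Medium,Medium,Have a system to track changes and alert,Low
"""

rows = list(csv.DictReader(io.StringIO(register_csv)))

# Pull out the assets currently rated High so they can be reported on.
high_risks = [r["Asset"] for r in rows if r["Risk"] == "High"]
print(high_risks)  # ['Database hosting client information']
```

The point is less the tooling than the habit: a register you can filter and diff stays alive in a way a shared spreadsheet rarely does.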
A mentor steps in
It was during my transition to working in Toronto that things became clearer on how you can adapt risk management frameworks to your organization. I worked with some amazing people, and one mentor in particular showed me how to present information to different audiences. My main takeaway was that people like easy explanations, no jargon, and especially COLOUR!
From a risk management perspective, I learned from that point on that any time a document rates a risk High or Critical (I’ll come back to this), the text or highlight must be RED. I think everyone knows why this is a great indicator.
Along with the colouring of the risk levels comes the establishment of the levels themselves (this is where I said I would come back to it). Happily, I learned that you can add and remove risk levels as they apply to your business. For example:
You can range from Low to High, or Very Low to Very High; basically anything that fits.
And this is where heat maps started to make sense as well. I am sure most people have seen a heat map by now. Here is a rough example as well:
You can tailor your heat maps to your business and what is important to it. An SMB might only be doing $1 million in revenue a year, so a heat map that references a $1 billion loss does not address risk appropriately. You may also put numbers to likelihood or occurrence so you have a clearer definition, making the assessment more quantitative than qualitative.
As you mature as an organization and can afford to spend time developing your heat maps, they may also include other factors, such as time of impact or time to restore. This is why it is important to understand your risk levels and how much of each square in that grid is relevant to your risk tolerance.
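A qualitative heat map is ultimately just a lookup table from likelihood and impact to a risk level. Here is a minimal sketch; the levels and the cell assignments are placeholders you would tailor to your own tolerance:

```python
# heat_map[likelihood][impact] -> risk level. Every cell is a deliberate
# choice about tolerance; these particular assignments are illustrative only.
HEAT_MAP = {
    "Low":    {"Low": "Low",    "Medium": "Low",    "High": "Medium"},
    "Medium": {"Low": "Low",    "Medium": "Medium", "High": "High"},
    "High":   {"Low": "Medium", "Medium": "High",   "High": "High"},
}

def risk_level(likelihood: str, impact: str) -> str:
    """Look up the risk rating for a likelihood/impact pair."""
    return HEAT_MAP[likelihood][impact]

print(risk_level("High", "Medium"))  # High
```

Adding or removing levels (Very Low, Critical, and so on) is just a matter of extending the table, which is exactly the flexibility described above.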
I have worked with many organizations where that grid is static and doesn’t reflect a good tolerance of risk. One example that comes to mind is the Low risk category. Organizations often see Low risk and assume that no further action is required. Whether that is true depends on your current controls and your levels; even a Low risk is still risk, and further attention may be required. As mentioned in the comments below, be aware of low-risk chaining: multiple Low risk vulnerabilities can combine into a High. An example might be a race condition combined with a privilege escalation that can cross a trust boundary.
It’s all about mitigating risks
Once you have established your heat maps, defined your templates, and started getting your processes in place to assess risks, it’s time to mature even further.
Maturing around frameworks
RCMP/CSE harmonized – https://www.cse-cst.gc.ca/en/publication/tra-1
NIST – http://csrc.nist.gov/groups/SMA/fisma/framework.html
FAIR – http://www.fairinstitute.org/fair-risk-management
OCTAVE – http://www.cert.org/resilience/products-services/octave/
COBIT 5 – https://www.isaca.org/Knowledge-Center/Research/Documents/COBIT-5-Risk_res_Eng_1213.ppt
ISO – https://www.iso.org/standard/43170.html
As you can see, maturing your risk practice around these various frameworks can be intimidating. Frameworks can be free to access and use, like OCTAVE and the RCMP/CSE harmonized TRA, or behind a paywall, like ISO and COBIT.
It’s up to you as an organization to determine how you want to mature. Cookie-cutter risk assessment templates are truly just a start; from there you should customize to ensure you are finding the appropriate risks, because next comes determining how money is spent.
Once you figure out your assets, likelihood, occurrence, value, and other risk-defining information, you have to figure out what to do with it.
Are there existing controls?
Do you need to spend money on new controls?
Is it worth it to accept, defer or transfer the risk?
As you can see, this is where you start expanding the ‘columns’ in your risk assessment model.
| Asset Description | Threat | Value | Likelihood | Existing Controls | Risk | Recommended Controls | Cost | Residual Risk | Risk Suggestion |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Database hosting client information | Stolen by attacker | $100,000 | High | Firewall | High | IPS, HIDS | $5,000 | Medium | Implement controls |
| Web site | Defaced by attacker | $600 | Medium | Limited access | Medium | Tool for monitoring and alerting on changes | $500 | Low | Accept existing risk |
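The accept-versus-implement decision in that last column can be grounded in the ALE arithmetic from earlier: a control is worth buying when it saves more per year than it costs. The before/after ALE figures below are hypothetical, chosen only to illustrate the comparison:

```python
def control_worthwhile(ale_before: float, ale_after: float, annual_cost: float) -> bool:
    """Recommend a control only if the ALE reduction exceeds its annual cost."""
    return (ale_before - ale_after) > annual_cost

# Database example: suppose IPS/HIDS at $5,000/yr cuts ALE from $20,000 to $4,000.
print(control_worthwhile(20_000, 4_000, 5_000))    # True  -> implement controls

# Web site example: a $500 control that only trims ALE from $600 to $300.
print(control_worthwhile(600, 300, 500))           # False -> accept existing risk
```

This is deliberately crude: it ignores multi-year amortization and the subjectivity of the inputs, but it gives decision makers a defensible starting point.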
As you can tell, by this point the model has developed your template enough that it’s time to make it more logical and tactical.
From here it comes down to your specific preference and how you do your job as a risk assessor, the organization’s tolerance for information, how it’s presented, and what outcomes are expected.
My personal preference is to target one system, application, or service at a time. This gives me the chance to fully understand the system before getting to the bigger picture. There are a lot of questions to be asked at this stage. Some people hand out a questionnaire template and ask for information back. I like to get Visio diagrams and talk to people in person, making notes on how specific systems work, to get a visual understanding and the logical flow of a system and its assets.
Questions can be so varied, so again, I dislike the cookie cutter approach. It is much easier to tailor questions once you get used to your methodology of choice.
This is a great example of one of those intimidating questionnaires, but a lot of research has gone into it, and it gives a great indication of risk profile when doing an assessment.
The Cloud Security Alliance is an amazing resource for guidance on assessing cloud-based initiatives.
Once you have received the information needed, fill out your template and work with your teams to understand where to spend your time and effort.
To clarify, this approach is more tailored to tactical risk than organizational risk. How you address this is all up to your maturity model, and some thought processes work better for certain assessors than others. For me it was understanding the systems and how they fit into an organization, which allowed me to figure out the true ‘Keys to the Kingdom’. We all know HR, financial, intellectual property, and consumer information are important, but sometimes the value of reputation, brand, or other data can be more important in context.
Risk Assessment Software:
FixNix GRC Suite – https://www.fixnix.co/
Archer – https://www.rsa.com/en-us/products/governance-risk-and-compliance.html
SAP GRC – https://www.sap.com/canada/solution/platform-technology/analytics/grc.html
Open IT GRC – http://www.eramba.org/
SimpleRisk – https://www.simplerisk.com/