Case Study – Securing a Non-Profit with $0

What can you do on a limited budget?

When you are a non-profit with a very limited budget that depends on fundraising and on providing services to clients, IT and IT security are the last things to get looked at. I have approval to publish this as a case study, and it is one of the best examples I can put forward of the limitations you run into when you have a $0 budget but a volunteer with some free hours to see what is possible.

Please keep in mind, all software, hardware (donated or repurposed), and time are at a $0 cost for this initiative.

Overall, my biggest concern at any given moment for a small entity is ransomware. Knowing how limited their budget was and how much impact ransomware has on an organization, especially a small one, my first task is always getting everything up to a reasonably secure baseline.

First steps to ensure this baseline: anti-virus and patches

Most of the machines in the organization were running the default Microsoft-provided Windows Defender. Although I could have downloaded one of the free solutions provided by other anti-virus vendors, I wanted to make sure there was minimal impact and that resources were not chewed up.

After a little further research, a product called Immunet by Talos kept showing up in my searches for a low-impact secondary AV utility. Using cloud resources and a community, it provides a great second real-time view of virus and malware detection. The biggest note: it isn't comprehensive either. Like all signature-based AV platforms, it only catches what it knows about, and in this specific instance it missed one that was found by another anti-malware product. The other caveat with the free solutions is no central management. Although central management would be nice, I would think AV companies reserve that feature to generate revenue.

Another big gap without central management is updates. I know Microsoft updates have a set-it-and-forget-it feel, and managing them centrally would be a significant cost. This is where the mix of automation and manual process is up to the organization. Working within requirements, updates are installed manually for now, since without an on-site IT person a botched update would have a significant impact.

Running updates to bring all installed software to current levels took some time. There were the easy ones, like Adobe, Java, and Microsoft; it was the business-specific applications that were more of an issue. It took some time, research, and making sure products had updated licenses, but ultimately it got done and the environment is more secure for it.

A smaller issue, but one that can still be detrimental and a pathway for attackers, was the wireless router used as a gateway. Currently there is no budget to replace it with a more business-centric appliance, so it will have to do. It did require a firmware update and a change to the default password. Thankfully no one had owned it before we got the chance to update it.

There is still a lot of work to do, but so far the cost has been absolutely $0. I have broken down the costs:

Immunet – free cloud AV

  • Not centrally managed
  • Used to supplement any existing AV

Sophos Home

  • Centrally managed option
  • Installed as another option for a manual scan for PUPs and other unwanted programs

Update everything

Security Assessment – Free

  • Fortinet has a free security assessment solution for partners (hardware is provisioned temporarily)

Wireless Router

  • Firmware upgrades – Free
  • Change default password – Free
  • Explicitly deny access from the internet – Free
  • Create allow rules for needed services – Free
  • Create a deny-all rule below them – Free
  • Turn off any unneeded internal services (IPv6, Telnet, SNMP) – Free

Enhanced security

  • Turn off unneeded services (Telnet, SNMP, AppleTalk, IPX/SPX, Internet Print) – Free


Documentation of environment

  • Documenting the environment and storing that documentation in case it is needed for disaster recovery (DR) – Free

I am looking to provision some extra hardware for some of the other utilities mentioned, but obviously cost is a limitation. I will update with some of those stats as well.

Additional note: this document is based on a non-profit with <25 users and a community/town-supported budget. I wrote this article primarily to raise awareness of how many non-profits are at risk due to lack of funding. The hardest part is that all it takes is one breach and the community they serve will be at greater risk. Since information security, along with community, is a passion for me, I took on this challenge.

I have added some updates to the various categories based on feedback as well. I removed Nexpose, as there is no longer a link to a community edition.

For cheaper licensing for non-profits, TechSoup offers discounted licensing.

Assessing Risk – Helping the SMB market understand

I remember the first risk assessment I had to complete. It was a messy essay on justifying the use of a specific port to allow an application through our firewall. Truthfully, it was downright ugly getting to the point that neither the port nor the application was vulnerable. It was LOW risk.

Early Stages

When I did my first risk assessment, I didn't realize there were methodologies (although nowhere near as mature as today's) established by NIST, the RCMP, CSE, and other organizations. For some reason, my earlier years were sparse for resources when it came to risk assessments and how to develop them.


After my first risk assessment and getting approval to allow the specific traffic through the firewall, I positioned myself for training. This time, research worked for me.

In 2007 I attended the RCMP Threat and Risk Assessment two-day course in Ottawa, Ontario. The course was eye-opening: an entire methodology laid out with worksheets and examples. It was here that I found out how far off I had been with my first assessment.

This is where I learned about Single Loss Expectancy, Annual Rate of Occurrence, and Annual Loss Expectancy: mathematical functions that help put costs for risk in front of decision makers.

SLE (Single-loss expectancy) = AV (Asset Value) x EF (Exposure factor)

ALE (Annualized Loss Expectancy) = ARO (Annual Rate of Occurrence) x SLE (Single-loss Expectancy)
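As a quick illustration, the two formulas can be worked through in a few lines of Python. The asset value, exposure factor, and occurrence rate below are invented for the example:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF: the expected dollar loss from a single incident."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(aro: float, sle: float) -> float:
    """ALE = ARO x SLE: the expected dollar loss per year."""
    return aro * sle

# Hypothetical example: a $100,000 database where one incident destroys
# 40% of its value, expected to occur once every two years (ARO = 0.5).
sle = single_loss_expectancy(asset_value=100_000, exposure_factor=0.4)
ale = annualized_loss_expectancy(aro=0.5, sle=sle)
print(sle)  # 40000.0
print(ale)  # 20000.0
```

An ALE of $20,000 means that, on average, you expect this exposure to cost $20,000 per year, which is the number you put in front of decision makers.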

The instructor for this course was completely honest about these equations as well. He mentioned that Exposure Factor is completely subjective, which makes the entire process subjective. That said, this is just a framework, and like any other framework you have to decide what works best for you. As long as you are assessing risk and doing something about it, you are better off than closing your eyes and hoping nothing happens.

After a few examples, modeling threats and mitigation strategies was getting clearer. My early practice still left much to be desired, but having a basic template established the baseline for creating better templates going forward. For example, my basic template following the early RCMP templates was not much more than a risk register, but it was a start. It let me relay risk information better than essay-style documents that make someone read two and a half pages of jargon without immediate, clear context.

Asset Description | Threat | Value | Likelihood | Risk | Control Recommendation | Residual Risk
Database hosting client information | Stolen by attacker | $100,000 | High | High | Ensure firewall blocks external access | Medium
Web site | Defaced by attacker | $600 | Medium | Medium | Have a system to track changes and alert | Low

My biggest problem with this was that it was created in Excel and stayed there. At this point in my career it didn't mature much: I had multiple Excel files and stored them for people to view. A very static approach.
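One lightweight way past the static-spreadsheet problem is to keep the register as structured data that can be filtered on demand and still exported for spreadsheet users. A minimal sketch (the field names and entries mirror the sample table above; nothing here comes from a formal standard):

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class RiskEntry:
    asset: str
    threat: str
    value: int
    likelihood: str
    risk: str
    recommendation: str
    residual_risk: str

register = [
    RiskEntry("Database hosting client information", "Stolen by attacker",
              100_000, "High", "High",
              "Ensure firewall blocks external access", "Medium"),
    RiskEntry("Web site", "Defaced by attacker",
              600, "Medium", "Medium",
              "Have a system to track changes and alert", "Low"),
]

# Pull out just the High risks for an executive summary...
high_risks = [r for r in register if r.risk == "High"]

# ...and still export to CSV for anyone who prefers Excel.
with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fld.name for fld in fields(RiskEntry)])
    writer.writeheader()
    writer.writerows(asdict(r) for r in register)
```

The point is not the tooling; it is that one authoritative register feeds every view, instead of multiple diverging Excel files.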

A mentor steps in

It was during my transition to working in Toronto that things became clearer on how you can adapt risk management frameworks to your organization. I worked with some amazing people, and one specific mentor showed me how to present information to different audiences. My main learning outcome was that people like easy explanations, no jargon, and especially COLOUR!

From a risk management perspective, I learned from this point on that any time a risk in a document is High or Critical (I'll come back to this), the text or highlight must be RED. I think everyone knows why this is a great indicator.

Along with the colouring of the risk levels (this is where I said I would come back to it) comes the establishment of the risk levels themselves. I was happy to learn that you can add and remove risk levels as they apply to your business. For example:

You can range from Low to High, Very Low to Very High, basically anything.

And this is where heat maps started to make sense as well. I am sure most people have seen a heat map at some point. Here is a rough example as well.

You can tailor your heat maps to your business and what is important. An SMB might only be doing $1 million in revenue a year, so a heat map that references a $1 billion loss does not address risk appropriately. You can also put numbers to likelihood or occurrence so you have a clearer definition, making it more quantitative than qualitative.
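Under the hood, a heat map is just a lookup from a (likelihood, impact) pair to a risk level, which is why tailoring it is straightforward. A rough sketch; the scales, thresholds, and labels are made up for illustration, not taken from any framework:

```python
# Likelihood and impact scales, ordered lowest to highest.
LIKELIHOOD = ["Very Low", "Low", "Medium", "High", "Very High"]
IMPACT = ["Very Low", "Low", "Medium", "High", "Very High"]

def risk_level(likelihood: str, impact: str) -> str:
    """Map a (likelihood, impact) pair to a risk level by summed rank."""
    score = LIKELIHOOD.index(likelihood) + IMPACT.index(impact)  # 0..8
    if score <= 2:
        return "Low"
    elif score <= 4:
        return "Medium"
    elif score <= 6:
        return "High"
    return "Critical"

# Print the full grid, i.e. the heat map itself, highest likelihood first.
for lk in reversed(LIKELIHOOD):
    row = [risk_level(lk, im) for im in IMPACT]
    print(f"{lk:>10}: {row}")
```

Tuning the heat map to your risk tolerance means moving those thresholds: an organization with a low tolerance would pull the "Critical" boundary down so more squares demand action.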

As you mature as an organization and can afford to spend time developing your heat maps, they may also include other factors, such as time of impact or time to restore. This is why it is important to understand your risk levels and how much of each square in that grid is relevant to your risk tolerance.

I have worked with many organizations where that grid is static and doesn't reflect a good tolerance of risk. One example that comes to mind is the Low risk category. A lot of the time, organizations see Low risk and assume no further action is required. Whether that holds depends on your current controls and your levels; even though a risk is Low, there is still some risk, and further attention may be required. As mentioned in the comments below, be aware of low-risk chaining: multiple Low risk vulnerabilities may combine into a High. An example might be a race condition combined with a privilege escalation that can cross a trust boundary.

It’s all about mitigating risks

Once you have established your heat maps, defined your templates, and started getting your processes in place to assess risks, it's time to mature even further.

Maturing around frameworks

  • RCMP/CSE Harmonized Threat and Risk Assessment (TRA) – free
  • OCTAVE – free
  • ISO – paid
  • COBIT – paid
As you can see, the maturity of risk around the various frameworks can be intimidating. Frameworks can be free to access and use, like OCTAVE and the RCMP/CSE Harmonized TRA, or behind a paywall, like ISO and COBIT.

It’s up to you as an organization to determine how you want to mature. The cookie cutter risk assessment templates are truly just a start, and from there you should customize to ensure your are finding appropriate risk because next is how you determine how money is spent.

Once you figure out your assets, the likelihood, the occurrence, the value, and other risk defining information, you have to figure out what you are going to do with that.

Are there existing controls?

Do you need to spend money on new controls?

Is it worth it to accept, defer or transfer the risk?

As you can see, this is where you start expanding the 'columns' you need in your risk assessment model.

Asset Description | Threat | Value | Likelihood | Existing Controls | Risk | Recommended Controls | Cost | Residual Risk | Risk Suggestion
Database hosting client information | Stolen by attacker | $100,000 | High | Firewall | High | IPS, HIDS | $5,000 | Medium | Implement controls
Web site | Defaced by attacker | $600 | Medium | Limited access | Medium | Tool for monitoring and alerting on changes | $500 | Low | Accept existing risk
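The "implement versus accept" call in the last column can be sanity-checked with the ALE math from earlier: if the annual loss a control avoids exceeds its annualized cost, implementing it is defensible. A sketch with invented numbers (the before/after ALE figures below are assumptions, not derived from the table):

```python
def control_is_worth_it(ale_before: float, ale_after: float,
                        annual_control_cost: float) -> bool:
    """A control pays for itself if the ALE reduction exceeds its annual cost."""
    return (ale_before - ale_after) > annual_control_cost

# Hypothetical database example: IPS/HIDS cut the ALE from
# $20,000/yr to $8,000/yr and cost $5,000/yr to run -> implement.
print(control_is_worth_it(20_000, 8_000, 5_000))  # True

# Hypothetical web-site example: monitoring saves only $300/yr in
# expected loss but costs $500/yr -> accepting the risk is reasonable.
print(control_is_worth_it(600, 300, 500))  # False
```

This is the quantitative backbone of the "Risk Suggestion" column; in practice the subjectivity of the Exposure Factor carries through, so treat the output as an input to judgment, not a verdict.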

Due diligence

So as you can tell, at this point, as the model develops your template, it's time to make it more logical and tactical.

Now it’s your specific preference and how you do your job as a risk assessor, the organizations tolerance for information, how it’s presented and what outcomes are expecting.

My personal preference is to target one system, application, or service at a time. This gives me the chance to fully understand the system before getting to the bigger picture. There are a lot of questions to be asked at this stage. Some people hand out a questionnaire template and ask for the information back. I like to get Visio diagrams and ask people in person, making notes on how specific systems work, to get a visual understanding and the logical flow of a system and its assets.

Questions can be so varied, so again, I dislike the cookie cutter approach. It is much easier to tailor questions once you get used to your methodology of choice.

This is a great example of one of those intimidating questionnaires, but a lot of research has gone into it, and it gives a great indication of risk profile when doing an assessment.

The Cloud Security Alliance is an absolutely amazing resource for providing guidance on assessing cloud based initiatives.

Once you have received the information needed, fill out your template and work with your teams to understand where to spend your time and effort.

To clarify, this approach is more tailored to tactical risk than organizational risk. How you address this is all up to your maturity model. Some thought processes work better for certain assessors than others. For me it was understanding the systems and how they fit into an organization. This allowed me to figure out the true 'keys to the kingdom'. We all know HR, financial, intellectual property, and consumer information are important, but sometimes the value of reputation, brand, or other data can be more important in context.

Other resources:

Risk Assessment Software:

  • FixNix GRC Suite
  • Archer
  • Open IT GRC
  • SimpleRisk