Best Practices for Simulated Phishing Email Campaigns

The cost of successful phishing attacks against individuals and organizations is both significant and increasing. Research by the Ponemon Institute estimates the cost at $3.7M per year for larger organizations, and a 2016 survey cited on the Cloudmark Security Blog found that a sample of 88 respondents reported an average cost of $1.6M to address spear phishing attacks. A Google search on the cost of phishing attacks will supply any number of corresponding accounts. Any way you look at it, phishing is expensive for victims and relatively easy for attackers: a toxic blend. In response, IT security organizations increasingly employ simulated phishing campaigns to bolster employee resilience to these attacks.

Using Simulations – No Longer a Debate

As organizations tally the high cost of lost hours and compromised resources due to phishing, simulated phishing campaigns have become a primary countermeasure alongside general awareness training. In fact, some organizations have turned simulated phishing campaigns into a veritable science, with varying detection difficulty levels, themes and frequencies that adapt to responses, dispersion and variation algorithms, and so on. An unofficial nomenclature is developing around the variables involved in these campaigns, one that deserves careful consideration and perhaps even standardization to facilitate statistical analysis and comparison. Phishing appears to only be increasing in sophistication and cost; IT security must adapt.

One challenge with comparing simulated phishing campaigns against one another to chart progress and determine an optimum frequency and design is that the campaigns need to be aligned, or even stair-stepped, to some degree for a comparison to be useful. For example, suppose an organization ran only two campaigns a year: campaign 'A' designed to look like a poorly-worded external email attack with misspellings and random content, and campaign 'B' with personalized content and the appearance of a well-crafted email professionally aligned to a business process. It likely will not be useful to compare click-rates (i.e. the failure rate, where an employee clicks on a phishing simulation) of two such campaigns: campaign B will have a higher click-rate whether it runs first or second. In this case there are too few simulations, and they are too arbitrarily aligned.

Alternatively, an organization might start with a simple campaign (e.g. a difficulty level of '3') early in the year and progress through a series of campaigns with slightly increased detection difficulty as the year goes on. An effective strategy might include adjusting the themes of the simulations to fit the response pattern, as well as re-targeting susceptible employees with extra campaigns. Click-rates may not decrease dramatically, precisely because the detection difficulty is increasing. A steady click-rate through increasingly realistic simulations would be a positive result overall. Statistical weights could be assigned to the detection difficulty to quantify that progress.
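That weighting idea can be sketched in a few lines of Python. The linear weight (dividing the raw click-rate by the difficulty level) and the sample figures below are illustrative assumptions, not a standard metric:

```python
def raw_click_rate(clicks: int, sent: int) -> float:
    """Failure rate: fraction of delivered simulations that were clicked."""
    return clicks / sent

def weighted_click_rate(clicks: int, sent: int, difficulty: int) -> float:
    """Scale the raw rate down by detection difficulty (1 = obvious,
    10 = nearly indistinguishable), so a steady raw click-rate against
    harder simulations registers as reduced susceptibility."""
    return raw_click_rate(clicks, sent) / difficulty

# Three campaigns across a year: raw rates hold steady near 8.5%
# while difficulty rises, so the weighted rate falls. Figures invented.
campaigns = [(370, 4324, 3), (365, 4324, 5), (368, 4324, 7)]
for clicks, sent, difficulty in campaigns:
    print(f"difficulty {difficulty}: "
          f"raw {raw_click_rate(clicks, sent):.1%}, "
          f"weighted {weighted_click_rate(clicks, sent, difficulty):.2%}")
```

Under this scheme, an essentially unchanged raw rate across difficulty levels 3, 5, and 7 yields a weighted rate that falls by more than half, which matches the intuition that holding steady against harder phish is progress.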

Best Practices

There probably isn't much value in designing what the table below deems a level '10' phishing email, one that looks virtually indistinguishable from an internal process. One could argue that an employee should notice the external-email flag (assuming your organization uses one), but such a simulation is apt to cause frustration. Likewise, there probably isn't much value in a level '1' email that is laden with grammar and spelling errors and would only be clicked by the most careless or untrained users. However, the PhishMe data indicates that there is value in not keeping it simple: the regimen should include realistic simulations of the kind a sophisticated attacker might use.

Combine phishing simulations with phishing training, and emphasize reporting suspected phishing emails. Consider an easy reporting button if the email client allows for it. Also, emphasize that anyone who realizes they have been phished should report it right away. Give an award or two to people who habitually report phishing attempts.

There is ongoing debate about whether users should be warned that a phishing campaign is taking place. Most organizations discussing this online seem to favor notification. One benefit is that notifications serve as an ongoing reminder about phishing and hence encourage ongoing attentiveness.

As for consequences for users who habitually click on simulated phishing emails, or even real ones, it's worth noting that any real phishing email that reaches an inbox has already penetrated the filtering system; in some sense it fooled IT security as well. Bear in mind, too, that a user who clicks on a real phish may well realize it just after clicking; the security team should hope (and educate) to have that user comfortable and diligent in reporting such instances. That reporting is less likely to happen in a punitive environment.

There is no definitive value for the frequency with which anti-phishing campaigns should take place. However, based on various anecdotal sources (i.e. I Googled it and read a variety of forums), the optimum seems likely to be six or more per year. Warning users six times a year that a phishing campaign is afoot will at least keep everyone consistently attentive and vigilant without being annoying. It seems doubtful that more than monthly would be useful, except with respect to the most susceptible recipients. If a campaign spans a few weeks and is undertaken every month, the organization is essentially always under simulated attack.

It may be that some simulated phishing platforms compile detailed campaign comparison statistics; I have only personally used a couple, and they had scant data available that spanned their customer base. PhishMe, a simulation platform I have not personally used, publishes a very informative report on the effects of simulation: the Enterprise Phishing Susceptibility Report contains data based on 8 million simulated phishing emails sent to 3.5 million employees.

Some simulation platforms that I have personally observed provide sector comparisons, so that organizations can see where they stand with respect to their competitors. Without some of the details presented in the data model below, it's hard to know how much intrinsic value such comparisons offer (it may be apples to oranges). However, the trend is clear: more simulations and thoughtful strategies significantly decrease susceptibility. PhishMe claims that "Behavioral conditioning decreased susceptible employees' likelihood to respond to malicious email by 97.14% after just 4 simulations." What they call 'behavioral conditioning' amounts to a thoughtful strategy that includes flagging repeat offenders for extra attention and varying the themes and sophistication of simulations for maximum impact.

For those with a bent toward studying raw statistics and data, there is a very interesting JAMA article detailing a multi-year statistical experiment on the effects of simulated phishing campaigns on user behavior. Predictably, it indicates that the odds of an employee clicking on a simulated phish decrease with repeated simulations. There are gaps in relating the article's model to a definite frequency of simulations, for example determining the optimum annual frequency. The study is, however, very good statistical evidence to have on hand if management is skeptical about the concept of simulations and the ROI of purchasing a simulation platform; it's something beyond a vendor touting the value of its own product.

The Data Model

In the table, the dispersion variables refer to the ability to spread a campaign over an interval rather than deliver it all at once. The variation variables refer to the ability to include several variations of a simulated phishing email, so that everyone doesn't receive the same one (and warn one another). The type attributes include personal, professional, and IT; it will be most useful to compare emails of similar types.

The data model in the table below is an attempt to develop a standardized catalog of variables for comparing simulated phishing campaigns. The model will likely exceed the typical simulation platform's set of configurable controls. Still, with current popular software it is typical to be able to label a campaign as 'easy', 'moderate', or 'hard' and to set the batch delivery schedule along with a few variations of the simulated message. That is a good start. If your software doesn't have at least that much functionality, it may be a poor fit for the scope and cost of phishing attacks.
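The scheduling controls relate by simple arithmetic: total campaign volume divided by the dispersion rate gives the number of delivery cycles needed. A minimal Python sketch with illustrative figures (the variable names simply echo the data model's terms):

```python
import math

total_email_sent = 5000  # campaign volume (illustrative figure)
dispersion_rate = 250    # emails delivered per cycle, e.g. per day

# Ceiling division: a final partial batch still counts as a cycle.
delivery_cycles = math.ceil(total_email_sent / dispersion_rate)
print(f"{delivery_cycles} delivery cycles "
      f"at up to {dispersion_rate} emails per cycle")
```

Dispersing 5,000 emails at 250 per day spreads the campaign over a few weeks, which is exactly the kind of interval the dispersion variables are meant to capture.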

Phishing Campaign Nomenclature

| Description | Type | Category | Value Range | Example |
| --- | --- | --- | --- | --- |
| Campaign_Name | Text | Identification | Descriptive text | 'Campaign #10 2019' |
| Campaign_UID | Integer | Identification | Integer key | 137 |
| Begin_Date | Date-Time | Schedule | Date-Time | 20191003-000000 |
| End_Date | Date-Time | Schedule | Date-Time | 20191015-235959 |
| Total_Email_Sent | Integer | Count | Integer | 4,324 |
| Dispersion_Rate (rate of emails per delivery cycle, e.g. per day) | Integer | Count | Integer | 250 |
| Variation_Count (unique email variations per delivery cycle) | Integer | Count | Integer | 10 |
| Variation_Ratio (unique emails per 100 employees per delivery cycle) | Integer | Ratio | Integer | 10 |
| Delivery_Cycles (number of instances in which a batch of emails is sent) | Integer | Count | Integer | 40 [batches of email delivered] |
| Difficulty_Rating_Code (how hard the phish is to detect) | Integer | Characteristic | Integer 1-10 | 10 [an email indistinguishable from a current business process] |
| Corporate_Flag (intended to appear sent by the internal business organization) | Boolean | Characteristic | True, False | True |
| IT_Flag (intended to appear sent by the local IT organization) | Boolean | Characteristic | True, False | True |
| Personal_Flag (intended to appear sent by an external person or non-business-related entity) | Boolean | Characteristic | True, False | True |
| Professional_Aligned_Flag (intended to appear sent by an external entity related to the business) | Boolean | Characteristic | True, False | True |
| Target_Population_Code_General (who receives the campaign) | Integer | Target Code | Codes for All, Dept, Group, Manager, Non-Manager, Facility | 000 ['All' = 000: the entire population of employees with email addresses] |
| Target_Population_Code_Repeaters | Integer | Repeater Code | 0, 1, 2 | 0 [0 = not targeted at repeaters; 1 = repeaters only; 2 = multiple repeaters] |
| Month_Code (month(s) included in the delivery cycles) | Integer | Schedule | 1-12 | 01, 02 |
| Season_Code (season(s) included in the delivery cycles) | Integer | Schedule | 1 = Fall, 2 = Winter, 3 = Spring, 4 = Summer | 01 [Fall covers Sep 21 - Dec 21, including the run-up to the Christmas season] |
| Email_Opened_All | Integer | Result | Integer | 2,900 |
| Email_Clicked_All | Integer | Result | Integer | 366 |
| Email_Clicked_Pct | Double | Result | Real number | 8.5% |

Variables and factors used to describe the characteristics and attributes of an anti-phishing campaign, for comparison across campaigns and measurement of progress.
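To make the catalog concrete, here is a minimal Python sketch of a campaign record with the result percentage computed rather than stored. The class is a hypothetical illustration whose field names simply mirror a subset of the table; no simulation platform exposes this exact structure:

```python
from dataclasses import dataclass

@dataclass
class PhishingCampaign:
    """One row of the campaign nomenclature table (subset of fields)."""
    campaign_name: str
    campaign_uid: int
    total_email_sent: int
    dispersion_rate: int         # emails per delivery cycle
    variation_count: int         # unique variations per delivery cycle
    difficulty_rating_code: int  # 1 (obvious) through 10 (indistinguishable)
    email_opened_all: int
    email_clicked_all: int

    @property
    def email_clicked_pct(self) -> float:
        # Derived rather than stored, so it can never disagree with the counts.
        return self.email_clicked_all / self.total_email_sent

campaign = PhishingCampaign(
    campaign_name="Campaign #10 2019",
    campaign_uid=137,
    total_email_sent=4324,
    dispersion_rate=250,
    variation_count=10,
    difficulty_rating_code=10,
    email_opened_all=2900,
    email_clicked_all=366,
)
print(f"{campaign.email_clicked_pct:.1%}")  # matches the 8.5% example row
```

Keeping Email_Clicked_Pct as a derived property rather than a stored column is one way to guarantee the ratio stays consistent with the underlying counts when comparing campaigns.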