Phishing your own people in cybersecurity training? How to protect the brands you use as bait

Mark Raeburn, Accenture’s Cyber Resilience Lead in the UK, discusses how CISOs can educate their employees to identify phishing emails without getting their own organisation into legal hot water.


With the widespread use of generative AI, phishing has become an even more formidable threat for organizations. Hyper-realistic emails, texts and deepfake voice notes can be constructed using AI tools, and with better grammar and spelling making threats appear more genuine, AI-powered phishing is causing major concerns. This year we have seen an escalation in the complexity and variety of phishing methods, with people being targeted on new platforms they trust, beyond the standard email, phone call, or SMS.

The concerns have shot to the very top of business. Accenture’s Pulse of Change research found almost half (47%) of C-suite executives were concerned about the increased risks from cyber attacks and data breaches. Cybersecurity threats posed by deceptive content, such as realistic phishing emails and messages, were seen as the biggest risk.

Attacks may not be simple, but motivations often are: financial gain. Attackers send messages that direct victims to fraudulent websites requesting personal information, tricking them into transferring money or handing over access to their networks. They also know that by impersonating senior leaders they can influence people to share data, money, or credentials.

Unfortunately, as phishing attempts become more realistic, employees are more likely to fall victim, which can create serious disruption, financial loss and long-term reputational damage for their organization.

Education is key

It is therefore critical that employers provide the necessary education - including training and simulations - to prevent attacks from duping employees into clicking something they shouldn’t.

Simulating an authentic phishing attack isn’t a simple ask. To make exercises convincing, some firms have replicated public brands and their typical consumer communications - such as impersonating delivery companies - to create educational content. These companies make ideal templates for social engineering: familiar branding, regular requests for personal information, and routine sharing of tracking links.

Delivery companies regularly send email and SMS updates, meaning the cadence of communication - and the characteristics that come with it - often goes unnoticed, and individuals are easily deceived. However, when organizations copycat brands in simulations, it can pose legal issues around IP infringement if they have not sought permission to use the branding and company information. It can also cause the brands themselves reputational damage from being associated with cyber attacks (even fake ones).

If a business wants to run such an exercise without impersonating a third party, it could instead simulate internal emails from trusted departments such as finance, legal or HR. These still appear credible to employees, since they resemble messages normally issued by internal teams, but they avoid the risk of falling into legal hot water with external companies.

How can you protect your company?

In addition to training employees, businesses can also take preventative measures to stay protected - and turn the tables on attackers by using gen AI itself.

While AI increases the risk of being defrauded by realistic content, it is also a vital part of an organization's technological armor. For example, many platform companies and hyperscalers are releasing AI security features in their own environments. Additionally, AI-powered ‘red teaming’ - a cybersecurity technique that mimics an attack - can test how individuals would respond.

Other measures, including penetration testing, will become mandatory for organizations as regulations evolve. The key to gaining the upper hand in the era of gen AI will be embedding security-by-design along the journey.

The personal touch

Although security tools are critical, humans are ultimately a key line of defense.

Training programs play a central role in helping employees recognize and report suspicious communications, but employees should also be encouraged to trust their instincts. They should always ask themselves: “Is this typical behaviour from the sender? Is this a platform they’d normally contact me on? Would I normally verify my details in this way?” There are also cultural factors that support an organization's defense - and it starts with companies prioritizing the ways of working and wellbeing of their people. Always-on, tired employees are more likely to click on suspicious links in a hurry, so reducing alert fatigue and burnout has cybersecurity benefits, too.

Just as there is a human behind the creation of a phishing attack, there is always a human recipient of a scam. The best defense relies on the knowledge of an empowered employee who understands the risks and acts mindfully. A healthy dose of human suspicion, combined with a strong line of technology-enabled defences, will set organisations on the right pathway to defending against phishing attackers, without inadvertently damaging other brands’ reputations.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.

If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro.