In February 2024, a major corporation fell victim to an AI-driven scam, resulting in a $40 million loss. The breach occurred when an employee unknowingly attended a virtual meeting populated entirely by AI-generated deep fakes. These fakes impersonated the company’s Chief Financial Officer, colleagues, and external partners with uncanny accuracy, leading to unauthorized financial transactions that only came to light after the damage was done.

The incident highlights the growing sophistication of cyber scams powered by artificial intelligence. The ability of AI to create hyper-realistic audio and video forgeries makes it increasingly difficult for even well-trained employees to differentiate between legitimate and fraudulent communications.

Businesses have long relied on user training as a frontline defense against phishing scams and other cybersecurity threats. However, as AI-enabled scams advance, this approach is showing its limitations.

The assumption that users can reliably identify and neutralize threats is increasingly challenged by the sophistication of these scams.

The need for a strategic shift in cybersecurity is evident. Rather than placing the burden on employees to recognize and avoid scams, organizations should invest in comprehensive cybersecurity measures that prevent these threats from reaching users in the first place.

Challenges with user training

Scammers today are highly specialized professionals who dedicate their time to crafting intricate and convincing scams. Unlike employees who juggle multiple tasks, cybercriminals focus solely on their objective: deceiving targets.

This singular focus gives scammers a significant advantage. They often spend weeks or even months designing a single attack, refining every detail to increase its effectiveness.

AI tools have made these criminals even more formidable, helping them automate and scale their operations and create more sophisticated, tailored phishing attacks that are increasingly difficult for the average user to identify.

This higher level of sophistication demands a defense strategy that goes beyond relying on users to outsmart skilled adversaries.

Your team is paid to work, not fight cybercrime

Employees come to work with the primary goal of fulfilling their job responsibilities, not spotting potential security threats. Their focus is usually on meeting deadlines, achieving targets, and contributing to the business’s core functions, not on cybersecurity.

This reality highlights a key flaw in relying on user training as a primary defense mechanism. Since cybersecurity is not their main responsibility, employees may not have the time or expertise required to consistently identify sophisticated phishing attempts.

Expecting employees to act as a reliable line of defense is unrealistic and places undue pressure on them to perform tasks outside their primary job functions.

How scammers exploit your brain’s shortcuts

Fast thinking vs. slow thinking: the battle in your inbox

Daniel Kahneman’s work in cognitive psychology, particularly in his book Thinking, Fast and Slow, provides valuable insights into why phishing scams are so effective. Human thinking is divided into two systems:

  • System One thinking is fast, automatic, and intuitive. It’s the type of thinking we use for routine tasks, such as reading and responding to emails, operating on autopilot so we can process information quickly without deep analysis.
  • System Two thinking is slow, deliberate, and analytical. It’s activated when we need to engage in complex problem-solving or make decisions that require careful thought and consideration.

Phishing attacks are designed to exploit System One thinking. Scammers craft emails that trigger an immediate, emotional response—such as urgency or fear—which prompts the recipient to act quickly without engaging their more analytical System Two thinking.

Psychological manipulation like this makes it easy for employees to fall prey to scams, as they react instinctively to the perceived threat or opportunity.

How scammers exploit fast thinking to win

Given the volume of emails and messages that employees handle daily, it’s impractical to expect them to engage in System Two thinking for every interaction. The rapid pace of modern work environments encourages quick responses, reinforcing System One thinking.

Scammers are well aware of this and design their attacks to align with how people naturally process information. They use tactics such as creating a sense of urgency, presenting authoritative requests, or mimicking familiar communication styles, all of which push employees to react quickly rather than think critically.

Expecting employees to consistently slow down and analyze each email is unrealistic; the cognitive burden is impractical and the results are unreliable. Organizations need to shift their focus from relying on human vigilance to implementing technological defenses that can intercept these scams before they reach the user.

The need for better cybersecurity measures

In cybersecurity, the distinction between detective and protective controls is key. Detective controls, such as monitoring and logging, are designed to identify and alert on potential threats after they have occurred. Protective controls, on the other hand, are intended to prevent threats from occurring in the first place.

Humans excel in detective roles, identifying anomalies, recognizing patterns, and making judgment calls when something seems off. However, they are not as effective in protective roles where the expectation is to prevent threats from materializing.

Relying on users to act as protective controls assumes that they can consistently detect and stop threats before they cause harm, which is a flawed assumption given the sophisticated nature of modern cyber threats.

Cybersecurity strategies should therefore focus on deploying comprehensive protective controls that prevent malicious content from ever reaching users. This approach reduces the reliance on human intervention and lowers the risk of successful attacks.

Boost your cyber defenses by trusting technology, not training

With the increasing complexity of cyber threats, particularly those enhanced by AI, it is essential to prioritize technological controls over traditional user training. While user awareness remains important, it should not be the primary defense mechanism.

Advanced email security gateways equipped with AI can analyze and filter out potentially harmful content before it reaches an employee’s inbox.

New AI-driven tools can learn from vast amounts of data, identifying patterns and detecting subtle indicators of malicious intent that a human might miss. By using the same technologies that scammers use to craft their attacks, organizations can reduce the likelihood of successful phishing attempts.
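As an illustration of the kind of pre-delivery analysis such gateways perform, here is a minimal, rule-based sketch in Python. The keyword list, the Reply-To mismatch heuristic, and the threshold are hypothetical simplifications; real gateways combine trained models with many more signals.

```python
# Minimal, illustrative sketch of rule-based pre-delivery email scoring.
# The patterns and threshold below are hypothetical examples, not a
# production rule set.
import re

URGENCY_PATTERNS = [r"\burgent\b", r"\bimmediately\b", r"\bwire transfer\b",
                    r"\bgift cards?\b", r"\bverify your account\b"]

def score_email(sender: str, reply_to: str, subject: str, body: str) -> int:
    """Return a crude risk score; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = sum(1 for p in URGENCY_PATTERNS if re.search(p, text))
    # A Reply-To pointing at a different domain than From is a classic phishing tell.
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        score += 2
    return score

def quarantine(sender: str, reply_to: str, subject: str, body: str,
               threshold: int = 3) -> bool:
    """Hold the message back from the inbox if the score crosses the threshold."""
    return score_email(sender, reply_to, subject, body) >= threshold
```

A message combining urgency language with a mismatched Reply-To would be held back before any employee ever sees it; a benign note scores zero and is delivered.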

Investing in these technologies will improve security and alleviate the burden on employees, letting them focus on their core responsibilities without the constant pressure of serving as the last line of defense.

How to beat phishing attacks before they start

Phishing-resistant multi-factor authentication (MFA) is a powerful tool in the fight against credential theft. Unlike traditional MFA, which can still be vulnerable to sophisticated phishing attacks, phishing-resistant MFA employs methods that are far more difficult for attackers to circumvent.

For example, FIDO2-based authentication methods require a physical key or biometric verification, making it nearly impossible for scammers to gain access even if they successfully phish a user’s credentials.

Implementing phishing-resistant MFA across all systems and accounts should be a top priority for organizations. While no security measure is foolproof, this approach significantly reduces the risk of credential-based attacks, which are among the most common and damaging forms of cyber threats.
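The property that makes FIDO2 phishing-resistant is origin binding: the authenticator signs the server’s challenge together with the origin the browser reports, so a signature produced on a lookalike domain never verifies. The sketch below illustrates only this idea; the HMAC is a stand-in for the authenticator’s per-site asymmetric key, whereas real WebAuthn uses public-key signatures and a browser-mediated protocol.

```python
# Illustrative sketch of the origin-binding idea behind FIDO2/WebAuthn.
# Real authenticators use per-site asymmetric key pairs; an HMAC stands
# in here purely to keep the example self-contained.
import hashlib
import hmac
import secrets

device_key = secrets.token_bytes(32)  # lives in the security key, never leaves it

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # The authenticator signs the server's challenge TOGETHER with the
    # origin the browser reports, so a lookalike domain produces a
    # different signature.
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, assertion: bytes) -> bool:
    expected = hmac.new(device_key, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, assertion)

challenge = secrets.token_bytes(16)
good = sign_assertion(challenge, "https://bank.example")
phished = sign_assertion(challenge, "https://bank-example.evil")  # phishing page
assert verify(challenge, "https://bank.example", good)
assert not verify(challenge, "https://bank.example", phished)
```

Even if a user interacts with a convincing phishing page, the assertion produced there is bound to the wrong origin and is rejected by the real server.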

Keep malware out with smart proxies and AI

To defend against malware downloads, organizations should deploy robust proxy systems that filter web traffic and monitor for malicious content. These proxies can be enhanced with AI capabilities to detect and block malware before it reaches the endpoint.

By analyzing traffic patterns and comparing them against known threats, AI-powered proxies can identify suspicious behavior that might indicate a malware infection.

Incorporating AI into these systems allows for more dynamic and adaptive threat detection, as the technology can learn from new data and evolve in response to emerging threats. This proactive approach reduces the risk of infection and ensures that only safe, verified content reaches the user’s device.
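A filtering proxy’s core decision can be sketched as a simple policy function. The blocklist, the suspicious-TLD heuristic, and the verdict names below are hypothetical placeholders; production proxies consume live threat feeds and model-based scoring.

```python
# Illustrative sketch of a filtering proxy's verdict for an outbound
# request: block URLs on a threat list, send suspicious ones for deeper
# inspection, and allow the rest. Lists here are hypothetical.
from urllib.parse import urlparse

BLOCKLIST = {"malware.example", "payload.test"}   # placeholder for a threat feed
SUSPICIOUS_TLDS = (".zip", ".mov")                # TLDs often abused in lures

def proxy_decision(url: str) -> str:
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host in BLOCKLIST:
        return "block"
    # Odd hostnames or credentials embedded in the URL warrant a closer look.
    if host.endswith(SUSPICIOUS_TLDS) or "@" in parsed.netloc:
        return "inspect"
    return "allow"
```

In a real deployment the "inspect" path would hand the request to sandboxing or an AI classifier rather than a static rule.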

Don’t let malware attachments slip through

Email attachments remain a significant vector for malware distribution, making it key for organizations to screen these attachments at the network’s edge before they reach users’ inboxes. Advanced email filtering solutions can scan attachments for known malware signatures and behaviors, flagging or quarantining suspicious files before they can cause harm.
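Edge screening of attachments often starts with the simplest possible checks: hashing each file against a known-bad list and flagging risky file types for deeper inspection. The hash list (here the SHA-256 of empty input, used only as a placeholder) and the extension set below are illustrative, not a real signature database.

```python
# Minimal sketch of edge screening for email attachments: hash each
# payload against a known-bad list, and flag risky file types for
# deeper inspection. Both lists are hypothetical placeholders.
import hashlib

# Placeholder entry: this is simply the SHA-256 of empty input.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
RISKY_EXTENSIONS = (".exe", ".js", ".vbs", ".scr")

def screen_attachment(filename: str, payload: bytes) -> str:
    """Return 'quarantine', 'flag', or 'deliver' for one attachment."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return "quarantine"
    if filename.lower().endswith(RISKY_EXTENSIONS):
        return "flag"
    return "deliver"
```

Real gateways layer behavioral analysis and sandbox detonation on top of signature checks, but the decision structure is the same: the verdict is rendered at the edge, before the inbox.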

Investing in these technical controls makes users the last line of defense rather than the first. By preventing malicious attachments from ever reaching users, organizations reduce the likelihood of a successful attack.

The role of user training in a holistic defense strategy

While advanced technical controls should do the heavy lifting, user training still plays a role as a final layer of defense. Effective training programs should focus on helping employees recognize the most obvious red flags in phishing attempts, such as requests for gift cards, demands for large, unusual transactions, or urgent requests that deviate from standard procedures.

Training should be practical and focused on real-world scenarios that employees are likely to encounter. However, it is important to recognize that training alone cannot be the sole strategy for defense.

Users are human and prone to error, especially when under pressure.

Training should complement, not replace, the more comprehensive technical defenses that form the foundation of an organization’s cybersecurity strategy.

Build business processes that scammers can’t crack

In addition to training, organizations should implement scam-resistant business processes that add another layer of security. For example, verifying any changes to a supplier’s bank account via a phone call, rather than relying solely on email, can prevent business email compromise (BEC) scams that result in fraudulent payments.

These processes should be designed to minimize risk by requiring additional verification steps for transactions that fall outside the norm.

By embedding security into everyday business operations, organizations can create a more resilient environment that is less susceptible to scam attempts, even when employees are targeted.
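The callback requirement can be enforced in software rather than left to habit. This sketch, with hypothetical record fields, models the rule that a bank-detail change only takes effect after a call to the number already on file, never to a number supplied in the requesting email.

```python
# Hedged sketch of a scam-resistant change-control step: a supplier's
# bank details cannot be updated until a callback to the phone number
# already on file confirms the request. Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SupplierRecord:
    name: str
    bank_account: str
    phone_on_file: str               # pre-existing, NOT taken from the email
    pending_account: Optional[str] = None
    callback_confirmed: bool = False

def request_change(supplier: SupplierRecord, new_account: str) -> None:
    supplier.pending_account = new_account
    supplier.callback_confirmed = False  # every new request resets verification

def confirm_by_callback(supplier: SupplierRecord, number_dialed: str) -> None:
    # Only a call to the pre-existing number on file counts as confirmation.
    if number_dialed == supplier.phone_on_file:
        supplier.callback_confirmed = True

def apply_change(supplier: SupplierRecord) -> bool:
    """Apply the pending change only if the callback step succeeded."""
    if supplier.pending_account and supplier.callback_confirmed:
        supplier.bank_account = supplier.pending_account
        supplier.pending_account = None
        supplier.callback_confirmed = False
        return True
    return False
```

A BEC email can request the change, but it cannot supply the phone number that confirms it, so the fraudulent update never applies.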

Outsmart deep fakes with codewords

As AI capabilities continue to advance, the threat of deep fake technology becomes more pronounced, especially in high-stakes scenarios involving C-level executives. One effective countermeasure is the use of shared phrases or passcodes during sensitive communications.

Codes act as a form of two-factor authentication, verifying that both parties in the conversation are who they claim to be.

Secret phrases should be known only to the individuals involved and should not be easily guessed or discoverable through social media or other public channels. For instance, a CFO and CEO might agree on a unique, non-obvious phrase to use during a phone call about a significant transaction. Codewords add an additional layer of security, making it harder for deep fakes to successfully deceive key decision-makers.
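If the agreed phrase is ever checked by software, for example in a hypothetical call-approval tool, it should be compared in constant time like any other shared secret. A minimal sketch, with normalization so trivial spacing or casing differences don’t cause false rejections:

```python
# Minimal sketch of codeword verification for sensitive approvals.
# Compare in constant time so response timing leaks nothing about
# how much of the phrase matched.
import hmac

def _norm(s: str) -> str:
    # Collapse whitespace and casing so "Blue  Heron" matches "blue heron".
    return " ".join(s.lower().split())

def verify_codeword(spoken: str, agreed: str) -> bool:
    return hmac.compare_digest(_norm(spoken).encode(), _norm(agreed).encode())
```

In the purely verbal case described above no code is involved at all; the point is simply that a codeword is a shared secret and deserves the same handling discipline as one.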

Key takeaway for strengthening cybersecurity with AI and layered defenses

The concept of defense in depth remains as relevant as ever. Organizations must deploy multiple layers of security controls, each designed to address specific vulnerabilities. By integrating various defensive measures, from AI-powered email filtering to phishing-resistant MFA, businesses can create a comprehensive security posture that adapts to evolving threats.

As cybercriminals increasingly use AI to improve their attacks, it’s imperative that cybersecurity teams adopt similar technologies to defend against these threats.

AI offers significant advantages in detecting and mitigating risks that are too subtle or complex for human detection alone. By incorporating AI into cybersecurity strategies, organizations can stay one step ahead of attackers.

No single solution can address every cyber threat. A balanced approach that combines technical controls, user training, and process improvements is key. Organizations should be cautious not to rely too heavily on any one aspect of their cybersecurity strategy. By diversifying defenses, businesses can reduce the risk of successful attacks and build a more resilient cybersecurity framework.

Alexander Procter

August 26, 2024

10 Min