It is February 12th. Your office manager receives a call from a managing partner who is supposedly at a client site.
The partner sounds stressed, the background noise mimics a busy office, and the request is specific: an urgent wire transfer is needed to close a settlement before the bank’s cutoff. The voice is identical to the partner’s: the cadence, the tone, even the specific jargon.
The office manager processes the wire. It is only hours later that the firm realizes the partner never made the call.
For decades, hearing a familiar voice was proof of identity. Deepfakes remove the old safety net of “I recognize the voice.”
Today, relying on sensory cues to verify identity is a security liability. As firms modernize to handle remote work and cloud-based operations, they must also modernize their verification protocols to future-proof their firms against compliance gaps and security risks.
Deepfake scams don’t exploit technology failures. They exploit trust, urgency, and missing verification processes.
What Is a Deepfake Scam?
In plain English, a deepfake scam is the use of artificial intelligence to create convincing audio, video, or image impersonations of trusted individuals to deceive employees into transferring money or handing over sensitive data.
These are not “glitchy” robotic voices anymore. Commercial AI tools can now clone a person’s voice with just a few seconds of audio (often scraped from a webinar, podcast, or social media video).
Common formats targeting professional services include:
- Voice Cloning: AI-generated audio used in phone calls or voicemails to mimic a partner or client.
- Fake Video Calls: Real-time video manipulation where a scammer appears to be a coworker on Zoom or Teams.
- AI-Enhanced Business Email Compromise (BEC): AI-written emails that perfectly mimic the writing style, syntax, and tone of a specific executive.
It is important to understand that deepfakes usually amplify existing fraud tactics rather than creating new ones. The goal is typically wire fraud, diverting payroll, or accessing client tax data; AI simply makes the social engineering much harder for staff to detect.
Deepfakes don’t exploit technology. They exploit psychology.
Why Accounting and Tax Firms Are Prime Targets
Criminals follow the money, but more importantly, they follow the authority to move money. Accounting and tax firms represent a high-value target because they sit at the intersection of urgency, financial access, and sensitive data.
- Tax Season Urgency: During peak season, “urgent” is the default state. Scammers exploit this high-pressure environment knowing that a request from a partner to “get this done before the deadline” is less likely to be questioned.
- Money Movement Authority: Unlike most small businesses, CPA firms often hold power of attorney or have direct access to client bank accounts for payroll and tax payments. A single successful deepfake can trigger irreversible wire transfers.
- High-Value Taxpayer Data: Beyond immediate theft, firms hold Social Security numbers, EINs, and financial histories. This data is the “gold standard” for identity theft and future tax return fraud.
- Small Firm IT Limitations: Many firms with 1–50 employees lack a full-time CISO (Chief Information Security Officer). Scammers know these firms may rely on basic antivirus rather than enterprise-grade identity management, making them softer targets than large banks.

The Deepfake Scam Prevention Checklist
Quick Checklist Summary:
- People: Give staff explicit permission to pause and verify.
- Process: Mandate “out-of-band” callbacks for all financial requests.
- Systems: Enforce MFA and restrict admin access.
Layer 1: People (The “Stop & Verify” Permission)
Your first line of defense is not software; it is your staff’s ability to say “no” to a partner’s voice.
- Rule 1: Never Treat Voice or Video as Proof of Identity: Establish a firm-wide policy that hearing a voice or seeing a face on a screen is no longer sufficient authentication for sensitive tasks.
- Rule 2: Permission to Slow Down: Explicitly tell every employee, from interns to managers, that they will never be reprimanded for pausing a transaction to verify it, even if the requester claims to be the managing partner in a hurry.
- Rule 3: Verification Is Not Disrespect: Cultivate a culture where challenging a request is seen as a sign of competence, not insubordination.

The Verification Script: Provide your staff with this exact script to use when they feel pressured:
“I have a standing order to verify all requests of this nature. I am going to hang up and call you back on your internal line immediately.”
Layer 2: Process (The “2-Person Rule”)
Processes must be designed to assume that an impersonator will eventually get through to a staff member. The goal is to ensure they cannot complete their objective.
- Mandatory Callback Verification (“Out-of-Band”): If a request comes in via email, text, or an unexpected call, the employee must verify it using a different communication channel. If the request came by email, call them on a known internal number. Do not use the number provided in the urgent message.
- The “Four-Eyes” Principle (Dual Approval): Require approval from two separate individuals for any “High-Risk Request.” No single person should have the authority to unilaterally execute these actions (see the sketch after this list).
- Define High-Risk Requests: Clearly list the actions that trigger these extra steps:
- Initiating wire transfers
- Changing vendor ACH/banking details
- Modifying payroll deposit accounts
- Granting new administrative access privileges
- Written Confirmation: Always require a written trail for financial instructions. If a request comes verbally, pause and demand a confirmation via the firm’s official project management or email system.
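If your firm tracks approvals in software, the dual-approval rule can also be expressed as a simple, testable check. The sketch below is a minimal, hypothetical example in Python; the action names, the approvals data, and the can_execute helper are illustrative assumptions, not any specific accounting platform’s API.

```python
# Minimal sketch of a dual-approval ("four-eyes") check for high-risk requests.
# Action names and fields are illustrative assumptions, not a real product's API.

HIGH_RISK_ACTIONS = {
    "wire_transfer",
    "change_vendor_banking_details",
    "modify_payroll_deposit",
    "grant_admin_access",
}

def can_execute(action: str, requested_by: str, approvals: set[str]) -> bool:
    """Allow a high-risk action only when two distinct approvers have signed off,
    neither of whom is the person who made the request."""
    if action not in HIGH_RISK_ACTIONS:
        return True  # routine actions follow normal procedures
    independent_approvers = approvals - {requested_by}
    return len(independent_approvers) >= 2

# Example: a wire requested by "partner_a" with only one other approval is blocked.
assert can_execute("wire_transfer", "partner_a", {"partner_a", "manager_b"}) is False
assert can_execute("wire_transfer", "partner_a", {"manager_b", "controller_c"}) is True
```

The point of encoding the rule is that the software, not an individual under pressure, refuses to move money until the second approver appears in the record.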
Layer 3: Systems (Technical Controls)
While deepfakes exploit human psychology, robust systems limit the damage an attacker can do if they deceive an employee.
- MFA Everywhere: Multi-Factor Authentication is non-negotiable. Even if a deepfake tricks an employee into revealing a password, MFA provides a critical barrier.
- Least Privilege Access: Staff should only have access to the specific data and systems necessary for their current work. A junior associate does not need global admin rights.
- Managed Security: For firms without internal security teams, utilizing managed IT security for accounting firms ensures that endpoint protection and email filtering are monitored 24/7.
- Centralized Logging: Ensure all system access and financial transactions are logged. If an incident occurs, you need an immutable record of who accessed what and when.
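To make “centralized logging” concrete, here is a minimal sketch in Python of an append-only audit trail with a hash chain, so tampering with earlier entries is detectable. The file path, field names, and log_event helper are illustrative assumptions; most firms will get this capability from their practice management, cloud, or SIEM tooling rather than building it themselves.

```python
# Minimal sketch of an append-only audit log with a hash chain; tampering with
# earlier entries breaks the chain. Path and field names are illustrative only.
import datetime
import hashlib
import json

LOG_PATH = "audit_log.jsonl"  # assumed location; real deployments ship logs off-host

def _last_hash() -> str:
    """Return the hash of the most recent entry, or a fixed value for an empty log."""
    try:
        with open(LOG_PATH) as f:
            lines = f.read().splitlines()
        return json.loads(lines[-1])["entry_hash"] if lines else "GENESIS"
    except FileNotFoundError:
        return "GENESIS"

def log_event(user: str, action: str, detail: str) -> None:
    """Append a who/what/when record that is chained to the previous entry."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "detail": detail,
        "prev_hash": _last_hash(),
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record a wire-transfer approval so there is an auditable trail.
log_event("manager_b", "approve_wire_transfer", "client 1042 settlement wire")
```

If an incident occurs, this kind of chained record answers “who accessed what and when” without relying on anyone’s memory.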
Deepfake Red Flags (Practical)
Although the technology is advancing rapidly, deepfakes still often leave subtle artifacts. Do not rely on these imperfections alone, as high-end tools are eliminating them quickly.
Audio Indicators:
- Unnatural Pauses: The speaker pauses at odd times or takes too long to respond to simple questions.
- Robotic Cadence: The voice lacks natural emotional inflection or sounds slightly “metallic” or flat.
- Audio Choppiness: Background noise cuts out completely when the person stops speaking.
Video Indicators:
- Sync Issues: The lips do not perfectly match the audio (lip-sync errors).
- Visual Glitches: Blurring or flickering around the edges of the face, hair, or glasses, especially when the person turns their head.
- Unnatural Blinking: The subject blinks too often, too rarely, or in a way that looks artificial.
Contextual Red Flags (The Most Reliable Indicator):
- Unexpected Urgency: The request demands immediate action to avoid a negative consequence (e.g., “The deal will fall through if we don’t wire this now”).
- Broken Patterns: The request deviates from standard procedure (e.g., a partner asking for a wire via text message instead of the usual portal).
- Technical “Excuses”: The caller claims their camera is broken or the connection is bad to hide visual flaws.
7-Day Tax Season Rollout Plan
You cannot overhaul your entire security posture overnight, but you can secure your most critical vulnerabilities in one week. Use this schedule to implement the checklist without disrupting client work.
- Day 1: Define the “No” Policy. Draft and distribute the policy stating that staff have explicit permission to pause and verify any urgent request. Make it clear: Speed does not trump security.
- Day 2: The Access Clean-Up. Audit your user list. Revoke admin access for anyone who does not strictly need it. Remove accounts for former employees or contractors immediately.
- Day 3: Staff Training (The Script). Hold a 15-minute all-hands meeting. Distribute the “Verification Script” (from the checklist above) and have staff read it aloud. Roleplay one scenario where they have to challenge a partner.
- Day 4: MFA & Backup Check. Verify that Multi-Factor Authentication is active on every email and tax software account. Test your data backups to ensure they are actually recoverable.
- Day 5: The “Tabletop” Drill. Run a simulation. Send a “fake” urgent text from a partner’s number (or a spoofed number) to a manager asking for a sensitive file. See if they follow the verification process or if they just reply.
- Day 6: Feedback Loop. Review the results of the drill. Identify where the process broke down. Did the manager feel comfortable saying no? Adjust the policy if needed.
- Day 7: Client Communication. Send a brief email to your clients informing them that your firm has new security protocols. Let them know you will never ask for sensitive data or payment changes via a sudden phone call or text without prior verification.
If You Suspect a Deepfake Incident
If an employee suspects they are interacting with a deepfake, or if a transaction has just been processed based on a suspicious request, take these immediate steps:
- Pause All Action: Stop the wire, freeze the account, or end the call immediately. Do not worry about politeness.
- Verify Independently: Contact the supposed requester using a trusted internal number or face-to-face method (Out-of-Band Verification).
- Revoke Access: Immediately reset passwords and revoke session tokens for any accounts that may have been exposed.
- Document Everything: Write down exactly what happened—time of call, number used, what was requested, and what information was shared.
- Review Logs: Check system logs to see if any files were accessed or exported during the interaction.
- Communicate Internally: Alert the partners and IT management immediately so they can warn other staff members who might be targeted next.
Frequently Asked Questions
1. What is a deepfake scam?
A deepfake scam uses artificial intelligence to create realistic audio or video impersonations of trusted individuals. Scammers use these “clones” to deceive employees into authorizing wire transfers, sharing sensitive tax data, or granting system access, often by mimicking a partner or executive.
2. How do deepfake scams target accounting firms?
Criminals target accounting firms because they manage high-value financial transactions and possess sensitive taxpayer data. Scammers exploit the high-pressure environment of tax season, knowing that staff are less likely to question an “urgent” request from a partner to move money or release files before a deadline.
3. How can I verify a partner’s urgent request safely?
Never use the contact method provided in the urgent message. Instead, hang up and call the partner back on a known internal number or trusted mobile line. This ensures you are speaking to the real person, not an imposter using a spoofed number or AI voice tool.
4. What is out-of-band verification?
Out-of-band verification means confirming a request through a separate, independent communication channel. If you receive an email instruction, verify it via phone. If you receive a text, verify it via a video call or internal chat. This breaks the scammer’s control over the communication channel.
5. Can deepfake scams bypass MFA?
Deepfake technology itself cannot technically “hack” Multi-Factor Authentication (MFA). However, scammers use deepfakes to trick employees into voluntarily approving an MFA push notification or reading out a one-time code over the phone. Strong MFA remains a critical defense layer.
6. What should staff do during a suspected impersonation?
Staff should immediately pause the interaction and not comply with any requests. They should then verify the requester’s identity using out-of-band methods. Employees must feel empowered to say “no” without fear of reprimand, as deepfakes rely on creating a false sense of urgency to force mistakes.
7. Do deepfake scams increase compliance risk?
Yes. A successful deepfake attack can lead to data breaches, violating FTC Safeguards and IRS Publication 4557 requirements. If client data is exposed due to a lack of verification controls, the firm may face regulatory penalties and reputational damage in addition to financial loss.
8. What controls reduce impersonation fraud the most?
The most effective controls are a combination of “people” and “technical” layers: mandatory call-back procedures for financial requests, dual-approval processes for wire transfers, strict Multi-Factor Authentication (MFA) enforcement, and limiting administrative access privileges to only those who absolutely need them.
Conclusion
The threat of AI-driven fraud is real, but it is manageable. By stripping away the reliance on “recognizing a voice” and replacing it with concrete verification steps, you neutralize the scammer’s greatest advantage.
Remember: “Verification is a process, not a gut feeling.”
The goal isn’t to spot every deepfake. The goal is to make fraud hard to complete. You don’t need perfect detection. You need consistent verification. Future-proof your firm against modern security risks by implementing these standards today.
