
Phishing still leads in U.S. education. AI just makes it hit harder

April 11, 2026 InSights



Generative tools amplify phishing and impersonation; use AI defensively with guardrails.

In U.S. schools and on campuses, the headline hasn’t changed: phishing and impersonation drive the volume. But the attack mix diverges between K-12 and higher ed, and AI is supercharging the social engineering that starts it all.

K-12: people are the primary target.

A nationwide CIS/MS-ISAC study of 5,000+ K-12 organizations (Jul 2023–Dec 2024) found that 82% experienced cyber-threat impacts. Adversaries targeted human behavior at least 45% more often than purely technical flaws, with attacks spiking around high-stakes periods like exams.

Higher ed: broader intrusion beyond the inbox.

In Verizon’s 2025 DBIR, Educational Services (NAICS 61) logged 1,075 incidents, 851 of them confirmed breaches. The top breach patterns (System Intrusion, Miscellaneous Errors, and Social Engineering) accounted for 80% of cases, reflecting credential abuse, exploited edge devices, and web-app attacks alongside classic phishing.

National backdrop: The FBI’s 2024 Internet Crime Report shows phishing/spoofing as the most-reported cybercrime and a record $16.6B in losses—evidence that social engineering remains the country’s volume leader.

Where AI fits (without the hype)

Attackers: AI is an amplifier, not a new category.

The FBI warns of an ongoing campaign using AI-generated voice (vishing) and smishing to impersonate senior officials—tactics that translate directly to “superintendent,” “dean,” or “CFO” spoofs in education. Expect faster, more believable lures and authority imposters.

Defenders: Use AI—with guardrails.

CISA’s 2025 guidance centers on securing the data that trains and runs AI systems (provenance, integrity, encryption, drift monitoring). A multi-agency advisory adds practical controls for the AI data supply chain. If you deploy AI (chatbots, LMS add-ons, help-desk copilots), treat their data like a crown jewel.

Governance: Put policy around AI now.

Anchor institutional AI use to NIST’s AI Risk Management Framework and its Generative AI Profile; EDUCAUSE’s research shows many campuses still lack complete AI policies, creating exposure.

What to do next (K-12 vs. higher ed, with AI in mind)

K-12: harden the human layer + comms channel

Enforce DMARC/SPF/DKIM on all domains; move DMARC from monitor mode (p=none) to enforcement (p=quarantine or p=reject) once legitimate senders are aligned.

Train for reporting (not just “don’t click”); rehearse surge playbooks before exam/start-of-school windows. 

Assume AI-cloned voice and hyper-real texts: require out-of-band verification for any request involving money, grades, or student records. 
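The email-authentication step above can be sketched as DNS TXT records. This is a minimal illustration using the hypothetical domain `district.example` and placeholder values (the DKIM public key is truncated; the SPF include assumes Google Workspace as the mail provider):

```
; SPF: authorize only known sending services, hard-fail everything else
district.example.               IN TXT "v=spf1 include:_spf.google.com -all"

; DKIM: public signing key published under a selector (key truncated here)
s1._domainkey.district.example. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC, phase 1: monitor only, collect aggregate reports
_dmarc.district.example.        IN TXT "v=DMARC1; p=none; rua=mailto:dmarc@district.example"

; DMARC, phase 2: replace the record above once reports show all
; legitimate senders pass SPF/DKIM alignment
_dmarc.district.example.        IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@district.example"
```

Run in p=none long enough to review the aggregate reports for every legitimate sender (SIS, LMS, newsletter tools) before tightening to quarantine or reject; otherwise enforcement can silently drop real school mail.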

Higher ed: identity, edge, and web apps first

MFA everywhere feasible; detect credential stuffing and consent-phishing/OAuth abuse.

Patch and monitor edge devices and web apps, in line with DBIR’s System Intrusion pattern.

Treat LMS/research AI features as new assets: inventory them, restrict data access, and log prompts/outputs per CISA AI-data guidance. 
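The prompt/output logging step above can be sketched in Python. This is an illustrative sketch, not a specific CISA tool: the function names (`log_ai_interaction`, `append_jsonl`), the field layout, and the JSON-lines audit file are all assumptions made for the example.

```python
import hashlib
import json
from datetime import datetime, timezone


def log_ai_interaction(user_id: str, feature: str, prompt: str, output: str) -> dict:
    """Build one audit record for an AI prompt/response pair.

    The user ID is hashed so logs stay reviewable without exposing
    identity; the full prompt and output are kept for incident review
    and drift monitoring.
    """
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "feature": feature,  # e.g. "lms-chatbot", "helpdesk-copilot"
        "prompt": prompt,
        "output": output,
    }


def append_jsonl(path: str, record: dict) -> None:
    """Append one record as a JSON line (append-only audit trail)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


record = log_ai_interaction(
    "student-4821", "lms-chatbot",
    "Summarize chapter 3", "Chapter 3 covers ...",
)
append_jsonl("ai_audit.jsonl", record)
```

An append-only JSON-lines file is the simplest shape; in practice you would ship these records to the same SIEM that ingests your other campus logs so AI features get the same retention and review as any other asset.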

Cross-segment (policy + assurance)

Adopt NIST AI RMF + GenAI Profile to standardize risk controls; close campus AI policy gaps flagged by EDUCAUSE.

Track national baselines (FBI IC3) to justify continued investment in phishing/impersonation defenses. 

 

The leaderboard hasn’t changed—phishing and impersonation still open the door. What’s changed is scale and realism, thanks to AI. If you authenticate your mail, fortify identity and web edges, govern how AI is used, and secure the data that powers it, you’ll blunt today’s most common attacks and shrink the impact of the rest. 

 
