April 15, 2026 Cyber Trends
DOJ charges allege chip security and cryptography trade secrets were exfiltrated from Google and others. Learn the insider tactics and controls that stop them.
Google Trade Secret Theft Charges Highlight Insider Threat and Data Exfiltration Risk
On February 19, 2026, the U.S. Department of Justice unsealed an indictment charging three Silicon Valley engineers with conspiracy to commit trade secret theft, theft and attempted theft of trade secrets, and obstruction of justice. Prosecutors allege the defendants used their employment at Google and two other technology companies to access sensitive mobile processor security and cryptography information, then moved that data to unauthorized locations, including personal devices, third-party platforms, and other employers’ devices, and accessed it from Iran.
This is not a “hacker broke in” story. It’s an insider risk story—where legitimate access, weak controls around data movement, and slow detection create an exposure window large enough to drain a company’s most valuable assets.
What the DOJ alleges happened
The DOJ press release describes a pattern that security teams should treat as a reference model for insider-driven IP theft:
Abuse of legitimate access: Prosecutors allege the defendants gained roles at “leading technology companies” working on mobile computer processors and used that access to obtain confidential information.
Exfiltration to third-party and personal locations: The indictment alleges confidential documents—including processor security and cryptography trade secrets—were moved to unauthorized third-party and personal locations, including to devices tied to each other’s employers and to Iran.
Use of a communications platform as a staging channel: While employed at Google, Samaneh Ghandali allegedly transferred hundreds of files (including trade secrets) to a third-party communications platform, to channels named for each defendant’s first name; Soroor Ghandali allegedly transferred numerous files to the same channels.
Detection, then concealment: The DOJ states Google’s internal security systems detected Samaneh Ghandali’s activity and revoked her access in August 2023, after which she allegedly signed a false affidavit denying sharing confidential information externally.
“Low-tech” capture to evade controls: The indictment alleges the defendants destroyed records, submitted false statements, and concealed exfiltration methods—explicitly including photographing screens instead of moving complete documents through monitored channels.
Travel-linked risk and overseas access: Prosecutors allege that on the night before traveling to Iran in December 2023, Samaneh Ghandali captured photos of Company 2 trade secret information on a work computer screen, and that while in Iran a device associated with her accessed those photos and Khosravi accessed other Company 2 trade secret information.
The Register’s coverage reinforces the core themes: alleged theft of chip/security technology secrets, alleged routing of some data overseas, and alleged attempts to cover tracks once the scheme began to unravel.
Important note: An indictment contains allegations. The defendants are presumed innocent unless proven guilty.
Why this case matters to every security leader
Most organizations still concentrate “serious security” around external threats: phishing, ransomware, exploited vulnerabilities, perimeter events. Those are real risks. But insider-driven data loss—especially involving IP, product designs, source code, cryptography, and chip security—operates differently:
The attacker starts authenticated. Traditional controls tuned for “unknown intruder” behavior often fail because the actions resemble normal work—until the volume, destinations, and timing reveal the pattern.
The target is high-value, low-noise. Trade secret theft is not a smash-and-grab. It is selective. The objective is durable competitive advantage or strategic leverage, not immediate disruption.
Evasion is not always sophisticated. When organizations harden digital exfiltration paths, insiders often pivot to analog techniques (photos, screenshots, re-keying, print-to-PDF). The DOJ allegations explicitly describe this pivot.
Cross-company contamination is a real risk. The DOJ alleges data moved not only to personal locations but to other employers’ work devices. That creates legal, operational, and incident-response chaos across multiple companies.
The modern insider threat kill chain
Security teams do better when they treat insider risk as a lifecycle with observable signals. The alleged behavior in the DOJ narrative maps cleanly to a repeatable sequence:
1) Access positioning
Insider obtains a role with proximity to sensitive workstreams (processor security, cryptography, product designs).
Control objective: Minimize standing access and narrow data visibility.
2) Collection
Insider pulls files or content fragments over time to avoid spikes and alarms.
Control objective: Detect unusual file access patterns and “collection behavior” (breadth, novelty, time-of-day).
3) Staging
Data is assembled in a location optimized for later transfer—shared channels, personal sync folders, removable media, personal cloud, or messaging platforms. The DOJ alleges a third-party communications platform was used as a staging area.
Control objective: Prevent sensitive data movement to unapproved destinations; monitor sanctioned tools for anomalous patterns.
4) Exfiltration
The transfer occurs digitally—or if blocked, via screenshots/photography. The DOJ allegations explicitly include photographing screens.
Control objective: Reduce exfil options; harden endpoints; monitor for “exfil substitutes.”
5) Cover and persistence
Deletion, false statements, device wiping, and attempts to understand evidentiary retention. The DOJ alleges web searches about deleting communications and retention of messages “to print out for court.”
Control objective: Log immutably; detect anti-forensics behaviors; enforce retention and chain-of-custody readiness.
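The collection stage above is the most detectable one: breadth, novelty, and time-of-day anomalies are all computable from ordinary file-access telemetry. The sketch below illustrates that idea; the event schema, thresholds, and signal names are assumptions for illustration, not a product configuration.

```python
from collections import defaultdict
from datetime import datetime

# Illustrative thresholds -- real programs tune these per role against a baseline.
MAX_DISTINCT_REPOS_PER_DAY = 5
OFF_HOURS = range(0, 6)  # 00:00-05:59 local time

def collection_signals(events, history):
    """Flag 'collection behavior' from file-access events.

    events:  list of dicts like {"user": ..., "repo": ..., "ts": datetime}
    history: dict mapping user -> set of repos they have accessed before
    Returns a list of (user, signal, detail) tuples.
    """
    repos_today = defaultdict(set)
    alerts = []
    for e in events:
        user, repo, ts = e["user"], e["repo"], e["ts"]
        repos_today[user].add(repo)
        if repo not in history.get(user, set()):
            alerts.append((user, "novel_repo", repo))       # novelty
        if ts.hour in OFF_HOURS:
            alerts.append((user, "off_hours_access", repo)) # time-of-day
    for user, repos in repos_today.items():
        if len(repos) > MAX_DISTINCT_REPOS_PER_DAY:
            alerts.append((user, "breadth_exceeded", len(repos)))  # breadth
    return alerts
```

None of these signals is conclusive alone; the point is that each kill-chain stage leaves a measurable trace that can be scored and correlated.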
Practical controls that stop this class of incident
These baseline controls keep legitimate access from turning into illegitimate transfer:
Data governance that actually matches risk
Classify “crown jewels” at the project/repo level (chip security, cryptography modules, signing infrastructure, key material, design docs).
Enforce policy based on classification, not on where the file happens to live.
Least privilege with time-bound access
Replace broad group memberships with ticketed, expiring access for sensitive repositories.
Require step-up auth for high-sensitivity data pulls.
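Ticketed, expiring access can be modeled very simply: a grant is only valid while it carries a justifying ticket and has not passed its expiry. The data shape and field names below are assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessGrant:
    user: str
    resource: str
    ticket: str            # change/access ticket justifying the grant
    expires_at: datetime   # every grant expires by default

def is_authorized(grant: AccessGrant, user: str, resource: str, now=None) -> bool:
    """Allow access only while a ticketed, unexpired grant exists."""
    now = now or datetime.now(timezone.utc)
    return (grant.user == user
            and grant.resource == resource
            and now < grant.expires_at)
```

The design choice that matters is the default: access decays on its own unless renewed, instead of persisting until someone remembers to revoke it.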
DLP that focuses on destination and behavior
Block or heavily govern transfers to personal devices, personal cloud storage, and unapproved collaboration tools.
Alert on file movement to “new” destinations, mass downloads, and unusual compression/encryption.
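A destination- and behavior-focused DLP rule can be reduced to a small decision function. The allow-list, threshold, and event fields below are illustrative assumptions, not vendor configuration.

```python
APPROVED_DESTINATIONS = {"corp-drive", "corp-wiki"}  # assumed allow-list
MASS_DOWNLOAD_THRESHOLD = 100  # files per hour; an assumed tuning value

def evaluate_transfer(event, recent_count, seen_destinations):
    """Return DLP verdicts for one file-movement event.

    event:             dict with "destination" and optional "encrypted_archive"
    recent_count:      files this user moved in the last hour
    seen_destinations: destinations this user has used before
    """
    verdicts = []
    dest = event["destination"]
    if dest not in APPROVED_DESTINATIONS:
        verdicts.append("block:unapproved_destination")   # personal cloud, etc.
    elif dest not in seen_destinations:
        verdicts.append("alert:new_destination")          # first-time destination
    if recent_count > MASS_DOWNLOAD_THRESHOLD:
        verdicts.append("alert:mass_download")
    if event.get("encrypted_archive"):
        verdicts.append("alert:unusual_packaging")        # compression/encryption
    return verdicts
```

Note the split between blocking (unapproved destinations) and alerting (anomalous but sanctioned activity): blocking everything anomalous drives users to workarounds, while alerting alone misses the obvious exfil paths.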
Endpoint hardening for screenshot and photo resistance
No control eliminates analog capture completely, but friction matters:
Disable copy/paste and screen capture in controlled environments where feasible.
Use secure viewing containers for high-sensitivity documentation.
Implement watermarking and per-user visual tagging on sensitive screens to deter casual capture and aid investigations.
Insider risk analytics that watch the whole story
The biggest misses come from siloed telemetry. Combine:
Identity signals (role changes, access revocations, anomalous logins)
Endpoint signals (USB use, file staging, compression, screen capture events where available)
Cloud/app signals (sync clients, collaboration exports, API pulls)
Case management tied to HR/legal workflows
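Combining those silos can be as simple as a weighted score over correlated signals, with a threshold that opens a case into the HR/legal workflow. The weights and signal names here are illustrative assumptions; real programs derive them from baselines and incident history.

```python
# Assumed weights spanning identity, endpoint, and cloud/app telemetry.
SIGNAL_WEIGHTS = {
    "access_revoked": 40,       # identity
    "anomalous_login": 20,      # identity
    "usb_write": 15,            # endpoint
    "bulk_compression": 15,     # endpoint
    "sync_client_upload": 20,   # cloud/app
    "collab_export": 10,        # cloud/app
}
CASE_THRESHOLD = 50  # assumed cutoff for opening a case

def risk_score(signals):
    """Sum weights for observed signals; unknown signals score zero."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def should_open_case(signals):
    """Open a case only when correlated, cross-silo signals cross the threshold."""
    return risk_score(signals) >= CASE_THRESHOLD
```

The value comes from correlation: any single signal here is routine, but a mid-weight endpoint signal plus a mid-weight cloud signal on the same identity is exactly the story siloed tooling misses.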
Offboarding and “trust decay” triggers
The DOJ narrative includes access revocation after internal detection.
That moment is critical. Post-detection, organizations need:
Immediate containment playbooks
Device preservation workflows
Litigation hold readiness
Rapid review of “already moved” data on personal and secondary devices
Travel-aware controls for sensitive roles
The DOJ alleges access to captured content while in Iran.
High-sensitivity teams need:
Pre-travel device policies (loaner devices, limited data residency)
Conditional access rules (geo-risk, impossible travel, risky networks)
Mandatory secure access methods (no direct access from unmanaged endpoints)
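These travel-aware rules compose into a conditional-access decision. The sketch below assumes a request schema (geo, home geo, device management state, sensitivity, registered travel) and a high-risk geography list purely for illustration; it is not policy guidance.

```python
HIGH_RISK_GEOS = {"IR", "KP"}  # illustrative list, not a compliance determination

def access_decision(req):
    """Conditional-access sketch for high-sensitivity roles.

    req is an assumed schema: geo, home_geo, managed_device (bool),
    sensitivity ("high" or "normal"), registered_travel (set of geos).
    """
    if req["geo"] in HIGH_RISK_GEOS:
        return "deny"  # no direct access from high-risk geographies
    if not req["managed_device"]:
        # No direct access to high-sensitivity data from unmanaged endpoints.
        return "deny" if req["sensitivity"] == "high" else "step_up_auth"
    if req["geo"] != req["home_geo"] and req["geo"] not in req["registered_travel"]:
        return "step_up_auth"  # unregistered travel triggers step-up auth
    return "allow"
```

Ordering matters: geography and device posture are evaluated before any convenience path, so a managed device never overrides a geo-risk block.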
InfoSight perspective: measure exposure like any other attack surface
Insider risk becomes manageable when it is treated as measurable exposure, not a vague HR concern.
Operationally, that means:
Knowing which identities can reach sensitive systems and repositories (not just who should).
Tracking data movement pathways the same way you track external ingress pathways.
Measuring time-to-detect and time-to-contain insider anomalies the same way you measure response performance for external threats.
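Time-to-detect and time-to-contain are straightforward to compute once insider cases carry consistent timestamps. The case schema below is an assumption for illustration.

```python
from datetime import datetime
from statistics import mean

def _mean_hours(deltas):
    return mean(d.total_seconds() / 3600 for d in deltas)

def program_metrics(cases):
    """Compute mean time-to-detect and time-to-contain for insider cases.

    cases: list of dicts with datetime fields "first_activity",
           "detected", and "contained" (an assumed schema).
    """
    ttd = [c["detected"] - c["first_activity"] for c in cases]   # exposure window
    ttc = [c["contained"] - c["detected"] for c in cases]        # response speed
    return {"mttd_hours": _mean_hours(ttd), "mttc_hours": _mean_hours(ttc)}
```

Tracking these two numbers over time turns "insider risk" from an anecdote into a trend a program can be held accountable for.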
The DOJ allegations in this case describe a long enough exposure window for repeated collection, alternative capture methods, and continued access to already-exfiltrated material.
That is a program maturity problem: visibility, controls, and response speed—not a one-off policy failure.
Key takeaway
Trade secret theft is an insider problem first: legitimate access plus unmanaged data movement plus slow detection. The alleged tactics in the DOJ filing—third-party platform staging, device-to-device propagation across employers, screen photography, deletion behavior, and travel-linked access—are exactly what security programs must be built to anticipate.