ML Security Research That Goes Deeper
SPR{k³ discovers critical vulnerabilities in production AI infrastructure — and proves them. Validated by Meta, Microsoft, NVIDIA, and Amazon.
Request a Findings Briefing
The ML Security Layer Below Your Perimeter
Perimeter tools protect the boundary. SPR{k³ goes beneath it — into ML frameworks, training pipelines, model artifacts, and supply chains that need specialized analysis.
ML Framework Vulnerabilities
Pickle RCE in PyTorch, NeMo, and DeepSpeed. Not a cloud misconfiguration — a code-level exploit in the framework itself, outside every perimeter tool on the market.
Distributed Training Exploitation
NCCL, ZMQ, and unauthenticated gRPC attack paths inside your training cluster. One compromised node. Full cluster access in seconds.
Supply Chain Poisoning
Coordinated malicious patterns inserted across multiple ML repositories at once. Invisible to single-repo scanners. Only detectable through cross-repository temporal analysis.
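The deserialization class above can be shown in a few lines. This is a benign sketch of the general mechanism, not an exploit for any specific framework: the payload is a stand-in shell echo, and the same execute-on-load behavior is what makes `pickle.loads()`, `joblib.load()`, and `torch.load()` without `weights_only=True` dangerous on untrusted artifacts.

```python
import os
import pickle

class MaliciousArtifact:
    """Stand-in for a poisoned model checkpoint."""
    def __reduce__(self):
        # pickle calls this callable with these args at load time.
        # A real exploit would run an arbitrary shell command here;
        # this sketch just echoes a marker string.
        return (os.system, ("echo code executed during unpickling",))

payload = pickle.dumps(MaliciousArtifact())

# The vulnerable pattern: deserializing untrusted bytes. No method on the
# object is ever called; loading alone executes the payload.
status = pickle.loads(payload)
```

Mitigation follows the same shape everywhere: treat model artifacts as code, load tensors with `torch.load(..., weights_only=True)` or a safetensors-style format, and never unpickle data from outside a trust boundary.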
The Vendor Scope Gap
NVIDIA, Microsoft, and Amazon made reasonable scoping decisions. SPR{k³ operates in the space those decisions left uncovered — where your production environment differs from the vendor threat model.
Could your production AI environment have attack surfaces that fall outside every existing tool's scope? SPR{k³ was built to cover that gap.
THE COVERAGE BLIND SPOT
The Scope Gap in ML Security
The gap is not about bad tools. It is where vendor threat models end and real production environments begin — a structural boundary that leaves important ML attack surfaces unaddressed.
Where Vendor Scope Ends
NVIDIA scopes to its deployment assumptions. Microsoft scopes to its SDK. Amazon scopes to SageMaker's managed surface. None of them scope to your specific production topology.
The 250-Sample Threshold
Carlini et al. showed that as few as 250 poisoned samples can reliably backdoor an LLM. That is where the attack surface begins. Ora detects at 1–50 files — before the threshold is reached.
Cross-Repository Coordination
The LiteLLM attack hit five package ecosystems simultaneously. No single-repo scanner sees the pattern. SPR{k³'s temporal cross-repo analysis does.
250 samples
Backdoor threshold (Carlini et al.)
to backdoor an LLM
1–50 files
Ora detection floor
SPR{k³ flags attacks before escalation
Proven in Production.
12
CVEs Across 4 NVIDIA AI Products
79+
Confirmed Vulnerabilities
95.7%
Detection Accuracy
<3%
False Positive Rate
3
Consecutive NVIDIA Bulletins
Validated by Meta · Microsoft · NVIDIA · Amazon
CVE Evidence
Three consecutive months of published, credited NVIDIA security advisories. Publicly verifiable.
NVIDIA Security Bulletin — February 2026
CVE-2025-33241 · CVE-2025-33243 · CVE-2025-33251 · CVE-2025-33252 · CVE-2025-33253 — Dan Aridor, SPR{k³ Security Research. View Bulletin →
NVIDIA Security Bulletin — March 2026
CVE-2025-33244 · CVE-2026-24157 · CVE-2026-24159 · CVE-2026-24152 · CVE-2026-24151 · CVE-2026-24150 — Dan Aridor, SPR{k³ Security Research. View Bulletin →
NVIDIA Security Bulletin — April 2026
Dan Aridor, SPR{k³ Security Research. View Bulletin →
Microsoft & Amazon — Security Acknowledgements
CVE-2026-26030 (CVSS 10.0) — RCE in Microsoft Semantic Kernel, acknowledged by Microsoft MSRC. RCE in Amazon SageMaker Python SDK, acknowledged by AWS Security.
What We Find
Vulnerabilities Vendors Don't Cover
Production deployments routinely operate outside vendor threat model assumptions. We identify the vulnerabilities that fall between vendor scope boundaries and operational reality.
Coordinated Attacks Across Repositories
Malicious patterns that spread across multiple ML frameworks simultaneously. Only visible when the full ecosystem is analyzed together.
Risks That Persist Through the Model Lifecycle
Backdoors that survive fine-tuning. Poisoning that persists through quantization and model merges. Threats that point-in-time scans miss entirely.
Attack Classes We Detect
Unsafe Deserialization / Pickle RCE
torch.load(), pickle.loads(), joblib.load() on untrusted data. Confirmed across PyTorch, NeMo, DeepSpeed, HuggingFace, and AutoGluon.
Distributed Training Exploitation
NCCL, ZMQ, unauthenticated gRPC in training clusters. One compromised node — full cluster access in seconds.
Supply Chain Poisoning
Coordinated malicious patterns across multiple repositories. Detected via cross-repo temporal correlation.
LLM Cognitive Degradation (BrainGuard™)
Thought-skipping, reasoning-chain breakdown, capability drift. Monitors LLM cognitive health before degradation becomes permanent.
Agent Security / MCP Trust Poisoning
Tool description injection, OAuth delegation exploits, agent identity mutation. OWASP AI agent threat landscape.
Model Artifact Poisoning
Quantization backdoor survival, model merge poisoning, LoRA adapter injection, checkpoint manipulation.
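The distributed-training class above comes down to implicit trust: NCCL rings, ZMQ pipes, and plain gRPC channels accept any peer that can reach the port. A minimal stand-in using a raw stdlib socket (the endpoint, port handling, and command are illustrative, not any framework's actual protocol):

```python
import socket
import threading

# Stand-in for an unauthenticated control endpoint of the kind found in
# distributed training stacks: any peer that can reach the port is
# trusted implicitly.
def control_node(server_sock, handled):
    conn, _ = server_sock.accept()   # no authentication, no TLS
    cmd = conn.recv(1024).decode()   # command taken at face value
    handled.append(cmd)              # a real node would execute it
    conn.sendall(b"ack")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))        # ephemeral localhost port for the demo
server.listen(1)
port = server.getsockname()[1]
handled = []
t = threading.Thread(target=control_node, args=(server, handled))
t.start()

# The "compromised node": anyone on the network can issue cluster commands.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"broadcast: load attacker checkpoint")
reply = client.recv(16)
client.close()
t.join()
server.close()
```

The endpoint accepts the command from an unknown peer, which is why one compromised node translates into cluster-wide reach.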
Beyond Detection — Active Intelligence
SPR{k³ doesn't wait for threats to be reported. It predicts, tracks, and intercepts them — powered by an original bio-inspired algorithm that treats the ML threat landscape as a living, evolving system.
Predictive Threat Research
SPR{k³ identifies vulnerability classes before they are exploited in the wild. We surface threats that have no CVE yet — and no existing detection signature.
Active Threat Forecasting
We publish predictions about where the next ML attack vectors will emerge. Enterprises that engage with SPR{k³ see what's coming before it arrives.
Early Warning Intelligence
Coordinated attack patterns leave traces across repositories before they reach production. SPR{k³ detects these signals early — before a campaign becomes a confirmed breach.
Original Bio-Inspired Algorithm
Patent-pending. Models codebases as living systems. Identifies deviations from what should be preserved — not just known bad patterns. This inversion is what makes prediction possible.
Every finding feeds the model. Every CVE sharpens the forecast. The system compounds.
The Ora Scanner
Ora (אורה) means "light" in Hebrew. Named after Dan Aridor's late aunt, it carries a double significance: a personal tribute to family, and a metaphor for what the scanner does — illuminate hidden vulnerabilities, coordinated attacks, and architectural risks that other tools leave in the dark.
<3% false positive rate
with automated FPFE (False Positive Filter Engine) plus mandatory manual verification before any submission
Cross-language analysis
Python, JavaScript, Java, C++, Go, Rust, Ruby
SARIF output
for CI/CD integration, plus JSON/CSV reporting
Patent-pending methodology
for detection built to protect what matters most
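SARIF (Static Analysis Results Interchange Format) is the OASIS-standard JSON schema that CI/CD dashboards such as GitHub code scanning ingest. A minimal SARIF 2.1.0 document of the shape a scanner emits; the rule ID, message, and file location here are invented for illustration, not Ora's actual output:

```python
import json

# Minimal SARIF 2.1.0 result: one tool, one rule, one finding.
sarif = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [{
        "tool": {"driver": {
            "name": "Ora",
            "rules": [{
                "id": "ORA-PICKLE-001",  # illustrative rule ID
                "shortDescription": {"text": "Unsafe deserialization of untrusted data"},
            }],
        }},
        "results": [{
            "ruleId": "ORA-PICKLE-001",
            "level": "error",
            "message": {"text": "torch.load() called on an untrusted artifact"},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "train/loader.py"},  # illustrative path
                "region": {"startLine": 42},
            }}],
        }],
    }],
}

report = json.dumps(sarif, indent=2)
```

Because the format is standardized, the same report can feed a pull-request annotation, a SIEM, or an audit archive without translation.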
How We Work With Enterprises
Direct engagement. No SaaS trial. No access to your systems required.
Findings Report
Documented vulnerability findings specific to your ML infrastructure — scope gap analysis, propagation path mapping, blast radius assessment, and remediation guidance. Based on your framework versions and deployment architecture.
ML Perimeter Intelligence (MPI) Assessment
Comprehensive trust boundary mapping of your distributed ML infrastructure. Identifies where vendor-assumed boundaries don't hold in your specific topology.
Ongoing Monitoring Retainer
Continuous scanning of your ML stack with prioritized findings and early warning on coordinated supply chain attacks. Direct access to the research team.
Contact: support@sprk3.com — All engagements begin with an NDA.
Research & Publications
Original research, published findings, and references to academic work that informs our detection approach.
Original Research — Dan Aridor, SPR{k³
Amazon SageMaker — Remote Code Execution (January 2026)
RCE in the SageMaker Python SDK JumpStart search flow. Fixed in v3.4.0, acknowledged by AWS Security. Read Post →
Microsoft Semantic Kernel — RCE, CVSS 10.0 (December 2025)
CVE-2026-26030. Remote code execution in InMemoryVectorStore filter parsing. Read Post →
The Scanner Was the Weapon — LiteLLM Supply Chain Analysis (March 2026)
Coordinated supply chain attack across five package ecosystems. LiteLLM — 97M+ monthly downloads — poisoned via a ghost PyPI release. Read Post →

Referenced Academic Work — External Sources
Poisoning Web-Scale Training Datasets is Practical (Carlini et al.)
Carlini et al. demonstrated that an attacker needs as few as 250 poisoned samples to reliably backdoor a large language model trained on web-scale data — and that those samples can be injected by purchasing expired domains that once hosted legitimate training data. This finding is foundational to SPR{k³'s detection threshold: if the attack surface begins at 250 samples, a scanner that only flags large-scale anomalies will miss the most dangerous insertions entirely. Ora's cross-repository temporal analysis was designed specifically to operate below that threshold — detecting coordinated poisoning patterns at 1–50 files, before they reach the scale required for reliable backdoor activation. View on arXiv →
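The below-threshold detection idea can be sketched in a few lines. Everything here is illustrative (the commit fingerprints, the 72-hour window, the three-repo minimum): it shows only the shape of cross-repository temporal correlation, not Ora's actual algorithm.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical commit records: (repo, fingerprint of a suspicious pattern, timestamp).
commits = [
    ("repo-a", "sig:unsafe-pickle-loader", datetime(2026, 3, 1, 10)),
    ("repo-b", "sig:unsafe-pickle-loader", datetime(2026, 3, 2, 14)),
    ("repo-c", "sig:unsafe-pickle-loader", datetime(2026, 3, 3, 9)),
    ("repo-a", "sig:benign-refactor",      datetime(2026, 2, 1, 8)),
]

def coordinated_patterns(commits, min_repos=3, window=timedelta(hours=72)):
    """Flag fingerprints appearing in >= min_repos distinct repositories
    within one sliding time window; each repo alone looks unremarkable,
    so no single-repository scanner sees the pattern."""
    by_sig = defaultdict(list)
    for repo, sig, ts in commits:
        by_sig[sig].append((ts, repo))
    flagged = []
    for sig, events in by_sig.items():
        events.sort()  # chronological order
        for i, (start, _) in enumerate(events):
            repos = {r for t, r in events[i:] if t - start <= window}
            if len(repos) >= min_repos:
                flagged.append(sig)
                break
    return flagged

print(coordinated_patterns(commits))  # → ['sig:unsafe-pickle-loader']
```

Three commits, three repositories, one 47-hour span: far below the 250-sample threshold, yet flagged because the correlation is computed across the ecosystem rather than per repository.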
About SPR{k³
SPR{k³ was built by Dan Aridor. 12 CVEs assigned across 4 NVIDIA AI products: NeMo Framework, Megatron-LM, NeMo-Guardrails, and Apex. 79+ vulnerabilities confirmed across major ML frameworks.
We work closely with vendor security teams through coordinated responsible disclosure — 90-day timelines, professional documentation, and constructive collaboration. We help enterprises address the risks that fall between vendor scope boundaries and production reality.
The scanner is named Ora, after Dan's late aunt. SPR{k³ operates from Israel.
Contact: support@sprk3.com
Research: Dan Aridor — NVIDIA, Microsoft MSRC & Amazon Security Acknowledgements (2025, 2026)
Dan Aridor
Founder, SPR{k³ Security Research

Columbia Business School — MBA
Corporate finance, strategic partnerships, and fundraising.
Lt. Colonel, Israeli Intelligence Corps
Reserve service, retired in 2012. Co-headed a counter-intelligence research unit.
Chairman, AEBI-Bio
Leads the SoAP biotechnology platform — reducing drug discovery attrition for challenging therapeutic targets.
Founder, inga314.ai · inga314.com & Dan Aridor Holdings
inga314 builds AI-driven logical analysis frameworks for research and data science; Dan Aridor Holdings is a strategic consulting firm specializing in operational profitability.
MuTaTo — Multi-Target Toxin Cancer Research
Connected to AEBi's experimental personalized cancer treatment concept — targeting multiple receptors on cancer cells simultaneously to prevent resistance, using a peptide-based Trojan Horse strategy. Early-stage research with promising in-vitro and mouse study results.

support@sprk3.com · NVIDIA, Microsoft MSRC & Amazon Security Acknowledgements (2025, 2026)
Get in Touch
All engagements begin with a conversation. No technical details shared until scope is agreed and an NDA is in place.
Request a Findings Briefing
We'll walk you through relevant findings for your ML stack — no commitment required. Email: support@sprk3.com
Enterprise Engagement
Findings Report, MPI Assessment, or Ongoing Monitoring Retainer. Contact us to discuss scope and fit. Email: support@sprk3.com
Security Research Inquiries
Coordinated disclosure, research collaboration, or press inquiries. Email: support@sprk3.com
SPR{k³ operates from Israel. Response within 1 business day.
SPR{k³ Security Research
support@sprk3.com
Patent Pending — US Provisional Application Filed October 8, 2025
© 2025–2026 SPR{k³ Security Research Team. All rights reserved.