Security Program
Continuous adversarial testing against state-level synthesis attacks. Methodology public, defenses layered, results measured.
Who we build against
We assume a well-resourced adversary with:
- access to modern voice cloning (XTTS-v2, F5-TTS, ElevenLabs)
- generative models for biometric time-series
- full source-code access to our public components (SDK, circuits, on-chain programs)
- unlimited wallets and devnet SOL
- days to weeks of time per attack campaign
We do not assume the adversary can compromise user devices, mount physical hardware attacks on phones, or access our private defense-layer infrastructure. Those are separate threat categories covered by standard client-side hardening, hardware root-of-trust guidance, and infrastructure security practice respectively.
How we defend
Zero-knowledge proofs of behavioral consistency. Groth16 proving system. Public verifier on Solana. Every verification produces a proof that the user’s behavioral fingerprint is within a hidden Hamming distance threshold of their baseline, without revealing either fingerprint. Open source, auditable, verifiable on-chain.
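The statement the proof attests can be sketched in plain code. This is illustrative only: the production Groth16 circuit evaluates this relation over committed fingerprints without revealing them, and the names below (`Fingerprint`, `hammingDistance`, `withinThreshold`) are hypothetical, not the SDK's API.

```typescript
// Sketch of the relation proved in zero knowledge: the Hamming distance
// between the current fingerprint and the baseline is at or below a hidden
// threshold. Both inputs stay private; only the proof is published.

type Fingerprint = Uint8Array; // packed bits of a behavioral fingerprint

function hammingDistance(a: Fingerprint, b: Fingerprint): number {
  if (a.length !== b.length) throw new Error("fingerprint length mismatch");
  let d = 0;
  for (let i = 0; i < a.length; i++) {
    let x = a[i] ^ b[i];        // differing bits in this byte
    while (x) { d += x & 1; x >>= 1; } // popcount of the XOR
  }
  return d;
}

// The predicate the circuit enforces, shown here in the clear:
function withinThreshold(
  current: Fingerprint,
  baseline: Fingerprint,
  threshold: number,
): boolean {
  return hammingDistance(current, baseline) <= threshold;
}
```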
Server-side validation of the 134-dimensional feature vector extracted from each verification. Multiple independent checks verify that the statistical properties of extracted features are consistent with human physiology, not synthetic generation. Specific checks and threshold values are not published.
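A minimal sketch of the *shape* of such a gate, assuming hypothetical checks and bounds — the production checks and threshold values are deliberately unpublished, and nothing below reflects them:

```typescript
// Illustrative server-side plausibility gate: several independent statistical
// tests, each of which must pass. Checks and bounds here are hypothetical.

const FEATURE_DIM = 134;

function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function variance(xs: number[]): number {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / xs.length;
}

function passesTier1(features: number[]): boolean {
  if (features.length !== FEATURE_DIM) return false;
  // Check 1: the vector is not flat (naive procedural generators often
  // emit constant or near-constant features).
  if (variance(features) === 0) return false;
  // Check 2: every value is finite and inside a plausible envelope
  // (the real physiological bounds are unpublished; 1e3 is a placeholder).
  if (!features.every((x) => Number.isFinite(x) && Math.abs(x) < 1e3)) return false;
  // Checks 3..N: further unpublished distributional tests would go here.
  return true;
}
```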
Time-series analysis of phonation and kinematic signals sampled during capture. Real human speech and hand motion share motor-cortex origins and produce measurable temporal coupling at short lags; independent synthesis does not. Enforcement has been live in production since April 2026, calibrated against a two-wave red-team study that isolated this layer's specific contribution to voice-replay rejection.
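One standard way to quantify short-lag coupling between two signals (say, a phonation envelope and a kinematic channel) is the maximum normalized cross-correlation over a small lag window. The sketch below shows that idea only; the production metric, window, and threshold are not published:

```typescript
// Pearson correlation of two equal-rate signals, truncated to common length.
function pearson(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  const ma = a.slice(0, n).reduce((s, x) => s + x, 0) / n;
  const mb = b.slice(0, n).reduce((s, x) => s + x, 0) / n;
  let num = 0, da = 0, db = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    da += (a[i] - ma) ** 2;
    db += (b[i] - mb) ** 2;
  }
  return da && db ? num / Math.sqrt(da * db) : 0;
}

// Best correlation of `voice` against `motion` over lags -maxLag..maxLag.
// Coupled human signals score high at some small lag; independently
// synthesized channels generally do not.
function shortLagCoupling(voice: number[], motion: number[], maxLag: number): number {
  let best = -1;
  for (let lag = -maxLag; lag <= maxLag; lag++) {
    const shifted = lag >= 0 ? motion.slice(lag) : motion.slice(0, motion.length + lag);
    const base = lag >= 0 ? voice : voice.slice(-lag);
    best = Math.max(best, pearson(base, shifted));
  }
  return best;
}
```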
How we test our defenses
We maintain an internal adversarial testing harness that runs continuously against our production verification pipeline. The harness implements eight distinct attack tiers, ordered by sophistication. Each campaign generates hundreds to thousands of bot attempts, measures pass rates per defense layer, and feeds the results into threshold calibration and defense roadmap decisions.
| Tier | Attack class | Tests defense against |
|---|---|---|
| T1 | Procedural synthesis (script-kiddie baseline) | Absolute attacker floor |
| T2 | Parameter-varied procedural | Tier 1 statistical consistency checks |
| T3 | Feature-space optimization with source access | Tier 1 distributional realism |
| T4a | Pre-recorded human voice + procedural motion/touch | Cross-modal temporal coupling (Tier 2) |
| T4b | Modern voice cloning (XTTS-v2, F5-TTS, API-based) | Tier 1 TTS artifact detection |
| T5 | Coupled cross-modal synthesis | Tier 2 temporal coupling |
| T6 | Targeted human-mimicry / identity theft | Hamming distance gate + Sybil registry |
| T7 | Replay with adversarial perturbation | Min-distance floor + commitment registry |
| T8 | Black-box adaptive probing | Rate limits + response opacity |
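The per-tier bookkeeping a harness like this performs can be sketched as follows. Names and types are hypothetical; the real entros-redteam harness is private, and nothing here reflects its code:

```typescript
// One record per bot attempt: which tier's attack generated it, and whether
// it cleared server-side Tier 1 validation (the gate measured below).
interface AttemptResult {
  tier: string;         // e.g. "T3b"
  passedTier1: boolean;
}

// Aggregate per-tier pass rates: passes / attempts for each tier.
function passRates(results: AttemptResult[]): Map<string, number> {
  const attempts = new Map<string, number>();
  const passes = new Map<string, number>();
  for (const r of results) {
    attempts.set(r.tier, (attempts.get(r.tier) ?? 0) + 1);
    if (r.passedTier1) passes.set(r.tier, (passes.get(r.tier) ?? 0) + 1);
  }
  const rates = new Map<string, number>();
  for (const [tier, n] of attempts) rates.set(tier, (passes.get(tier) ?? 0) / n);
  return rates;
}
```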
Attack implementation code, per-attempt telemetry, and specific parameter values that produce elevated pass rates are kept in a private repository. This follows the same disclosure convention used by infrastructure security programs: methodology public, weapons private.
Current measurements
| Attack tier | Description | Attempts | Pass rate | Status |
|---|---|---|---|---|
| T1 | Procedural synthesis | 2,000 | 0% | hardened · 2026-03 |
| T2 | Multi-strategy parameter variation | 4,000 | 0% | hardened · 2026-03 |
| T3a | Unconstrained feature optimization | 1,000 | 0% | hardened · 2026-04 |
| T3b | Constrained feature optimization | 9,000 | 0% | hardened · 2026-04 |
| | Campaign surfaced a gap in server-side feature validation. Hardened; see AUDIT.md. | | | |
| T4a — Wave 1 | Pre-recorded human voice + procedural motion/touch (temporal enforcement OFF — log-only) | 50 | 100% | counterfactual baseline |
| T4a — Wave 2 | Pre-recorded human voice + procedural motion/touch (temporal enforcement ON) | 10 | 10% | production enforcement truth |
| | Cross-program binding gap in update_anchor discovered during cross-analysis, patched same day; see AUDIT.md, protocol-core, Critical. | | | |
| T4b | Modern voice cloning (XTTS-v2, F5-TTS) | — | — | queued · next-phase |
| T5 | Coupled cross-modal synthesis | — | — | queued · next-phase |
| T6 | Targeted human-mimicry / identity theft | — | — | queued · next-phase |
| T7 | Replay with adversarial perturbation | — | — | queued · next-phase |
| T8 | Black-box adaptive probing | — | — | queued · post-mainnet |
Last updated: April 20, 2026
Pass rate = fraction of bot attempts that pass server-side Tier 1 validation, the gate preceding on-chain submission. An attempt that fails Tier 1 cannot proceed to challenge fetch, proof generation, or transaction submission. Results rounded to prevent adversarial threshold inference.
T4a: the two-wave study
T4a was designed as a two-wave study to measure the cross-modal temporal coupling layer's specific contribution to the multi-layer defense. Wave 1 ran with temporal enforcement in log-only mode to establish the counterfactual baseline. Wave 2 ran with enforcement enabled. The 90 percentage-point reduction isolates that layer's contribution. The 10% Wave 2 residual motivated server-side phrase content binding (shipped 2026-04-25) as the next defense layer; T4a Wave 3 is queued to measure the residual after that layer.
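The arithmetic of the two-wave design, in miniature (function name hypothetical):

```typescript
// A layer's isolated contribution, per the two-wave design: the pass rate
// with the layer in log-only mode (counterfactual) minus the pass rate with
// enforcement on. Rates are fractions in [0, 1].
function layerContribution(passRateLogOnly: number, passRateEnforced: number): number {
  return passRateLogOnly - passRateEnforced;
}

// T4a: Wave 1 at 100%, Wave 2 at 10%, a 90-percentage-point contribution.
const t4aContribution = layerContribution(1.0, 0.1);
```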
On-chain anchor state
The Entros Anchors currently visible on devnet include internal red team artifacts from T4a Waves 1–2 (documented above) alongside legitimate team and pilot-user verifications. All state is preserved on-chain for audit traceability; the public /stats page reads the full on-chain aggregate directly.
What we open-source, and why
Entros Protocol is open-source where openness matters for user trust, and deliberately private where privacy protects users. This follows the same disclosure convention used across crypto infrastructure projects: not a departure from crypto's open-source values, but a mature implementation of them.
Open source (MIT)
- On-chain programs (entros-anchor, entros-verifier, entros-registry)
- ZK circuits and trusted setup artifacts
- Client SDK (pulse-sdk on npm)
- Executor node
- Website and documentation
- Security program page, blueprint documents, and aggregate results
- Baseline adversarial testing (script-kiddie tier in pulse-sdk)
Private (defense-layer only)
- Server-side validation service (entros-validation): check thresholds and parameter values
- Red-team harness (entros-redteam): attack code, per-attempt telemetry, captured baseline fixtures
- Pre-disclosure vulnerability reports (per standard responsible-disclosure practice)
Nothing that affects verifiable protocol behavior is private. Every on-chain transition, every cryptographic operation, every client-side computation is open and auditable. The private components are the detection surface an attacker would otherwise exploit to calibrate their attacks.
Reporting vulnerabilities