Lagrange's Year in Review: 2025
December 30, 2025

For years, verifiable AI existed largely as theory: elegant cryptographic ideas with limited paths to real systems. In 2025, Lagrange focused on closing that gap by demonstrating how zero-knowledge verification can scale to modern model architectures and realistic inference pipelines, and by bringing that capability to the environments where it matters most. Over the course of the year, Lagrange released its zkML proof system DeepProve, proved full end-to-end inference for large language models and frontier architectures, and applied the system to real-world use cases across defense, aerospace, and government settings. Together, these efforts delivered the cryptographic infrastructure needed to secure mission-critical systems, laying the foundation for accountable autonomy in defense and beyond.
Taking Verifiable AI from Theory to Production with DeepProve
In 2025, Lagrange introduced DeepProve-1, the first zkML system capable of producing a zero-knowledge proof for a full large-language-model inference, starting with GPT-2. This demonstrated that cryptographic verification can cover end-to-end AI inference pipelines, not just isolated layers or simplified models. That coverage is an essential prerequisite for systems used in regulated, safety-critical, and national-security environments.
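For readers who want a feel for the workflow this enables, the sketch below shows the general prove-then-verify pattern: the party running the model emits a proof alongside each output, and anyone downstream can check it without re-running the inference. This is a conceptual illustration only; the function names are hypothetical and a hash commitment stands in for a real zero-knowledge proof, which would additionally hide private inputs and attest to the computation itself.

```python
# Conceptual sketch only: the names below are illustrative placeholders, not
# DeepProve's actual API. A hash commitment stands in for a real zk proof so
# the example stays self-contained and runnable.
import hashlib
import json

def commit(obj) -> str:
    """Bind a JSON-serializable object to a short digest."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def prove_inference(model_id: str, weights_digest: str, prompt: str, output: str) -> dict:
    """Produced alongside the inference; travels with the output."""
    return {
        "model_id": model_id,
        "weights_digest": weights_digest,
        "io_digest": commit({"prompt": prompt, "output": output}),
    }

def verify_inference(proof: dict, expected_weights_digest: str, prompt: str, output: str) -> bool:
    """Checked by a party that never re-runs the model."""
    return (
        proof["weights_digest"] == expected_weights_digest
        and proof["io_digest"] == commit({"prompt": prompt, "output": output})
    )

# Example flow: the prover emits (output, proof); the verifier accepts or rejects.
weights_digest = commit({"model": "gpt2", "version": "2025-01"})
proof = prove_inference("gpt2", weights_digest, "hello", "world")
assert verify_inference(proof, weights_digest, "hello", "world")
```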
Soon after, Lagrange extended DeepProve to support Gemma 3, demonstrating that the system can adapt to newer model designs and attention mechanisms. This work validated that zkML is not tied to a single model family but can evolve alongside frontier architectures.
Building on that foundation, Lagrange advanced the underlying cryptography required for practical verification with zero-knowledge proofs. The research team published a new paper on Dynamic zk-SNARKs that introduced techniques for updating proofs efficiently as inputs change, reducing redundant computation and opening the door to incremental verification for long-running or adaptive AI workflows. These ideas laid the groundwork for proving more complex architectures and richer execution paths, and were received positively at the Science of Blockchain Conference (SBC) 2025. The paper marked another milestone in Lagrange’s mission to expand the frontier of cryptography at large, following the two U.S. patents the company received for its earlier research on cryptographic systems and verification.
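The intuition behind incremental updates can be illustrated with a small self-contained example. The sketch below is not the construction from the Dynamic zk-SNARKs paper; it simply uses a hash-tree commitment over the inputs to show why a small change should trigger only a small, localized amount of re-work rather than a full recomputation.

```python
# Illustrative sketch, not the paper's construction: a Merkle-style commitment
# shows why a local change only requires local re-work.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return all levels of a binary hash tree over the leaves (power-of-two count)."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def update_leaf(levels, index, new_leaf):
    """Recompute only the path from the changed leaf to the root: O(log n) hashes."""
    levels[0][index] = h(new_leaf)
    for depth in range(1, len(levels)):
        index //= 2
        left, right = levels[depth - 1][2 * index], levels[depth - 1][2 * index + 1]
        levels[depth][index] = h(left + right)
    return levels[-1][0]  # new root

leaves = [b"input-0", b"input-1", b"input-2", b"input-3"]
levels = build_tree(leaves)
root_before = levels[-1][0]
root_after = update_leaf(levels, 2, b"input-2-changed")
assert root_before != root_after  # only log(n) hashes were recomputed
```

In a dynamic zk-SNARK, the analogous property applies to the proof itself: when an input changes, the prover updates the existing proof rather than reproving the whole statement from scratch.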
Together, these efforts advanced verifiable AI from theory to production. With DeepProve, proving modern AI inference is now practical and efficient, and Lagrange is scaling its applications to real systems across defense, aerospace, government, and beyond, focusing on securing the systems that protect us.
Embedding Verifiability into Defense and National-Security Systems
As AI systems increasingly move into operational roles, where decisions are made continuously, at machine speed, and often under adversarial conditions, assumed correctness and retrospective review are no longer sufficient. What these environments demand is cryptographic evidence generated as part of execution itself: proof that models operated on authorized inputs, followed approved logic, and produced outputs that can be independently verified without exposing sensitive data. In national defense and aerospace, verifiability is not a simple compliance or audit feature, but a core system property. In 2025, Lagrange demonstrated how DeepProve can serve as the verification layer across defense, aerospace, and government ecosystems, bringing accountable autonomy within reach of America’s most critical industries.
Lagrange integrated DeepProve into Anduril’s Lattice SDK™, demonstrating verifiable AI decision-making inside an autonomous reconnaissance pipeline. By attaching zero-knowledge proofs to proximity detection, tactical classification, and movement calculations, the prototype showed that autonomy and assurance can coexist at operational speed.
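A simplified version of that pattern looks like the following. This is a hypothetical sketch, not the Lattice SDK or DeepProve integration itself: each stage of the pipeline emits its result together with an attestation that chains to the previous stage, so a reviewer can later verify the whole decision path without re-running the models.

```python
# Hypothetical sketch of the pattern described above. Hash digests stand in
# for zero-knowledge proofs; stage functions are toy stand-ins.
import hashlib, json

def digest(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def run_stage(name, fn, payload, prev_digest):
    """Run one stage and attach an attestation binding its input, output, and predecessor."""
    result = fn(payload)
    attestation = {"stage": name, "prev": prev_digest, "io": digest({"in": payload, "out": result})}
    return result, attestation

def verify_chain(attestations, payloads, results):
    """Check that every attestation binds its stage's I/O and links to the prior stage."""
    prev = "genesis"
    for att, payload, result in zip(attestations, payloads, results):
        if att["prev"] != prev or att["io"] != digest({"in": payload, "out": result}):
            return False
        prev = digest(att)
    return True

# Toy stand-ins for the three stages mentioned above.
proximity = lambda p: {"near": p["distance_m"] < 500}
classify = lambda p: {"class": "vehicle" if p["near"] else "unknown"}
movement = lambda p: {"heading_deg": 90}

payloads, results, attestations, prev = [], [], [], "genesis"
for name, fn, payload in [
    ("proximity", proximity, {"distance_m": 320}),
    ("classification", classify, {"near": True}),
    ("movement", movement, {"class": "vehicle"}),
]:
    result, att = run_stage(name, fn, payload, prev)
    payloads.append(payload); results.append(result); attestations.append(att)
    prev = digest(att)

assert verify_chain(attestations, payloads, results)
```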
We also joined multiple defense supplier ecosystems, positioning DeepProve as a verification capability for secure mission workflows:
- Lagrange joined the Oracle Partner Network, building verifiable AI use cases on Oracle Cloud Infrastructure (OCI) sovereign and mission-cloud environments designed for defense and government workloads.
- We became an approved supplier within General Dynamics’ supplier ecosystem, supporting Zero Trust architectures by proving not just access, but correct computation across communications, C4ISR, and mission-planning systems.
- Lagrange was added to Raytheon’s supplier network, making DeepProve available as a production-ready capability for cryptographic verification of sensor fusion, telemetry pipelines, and AI-enabled defense platforms.
- We joined Lockheed Martin’s vendor ecosystem, positioning Lagrange to support AI assurance and Zero Trust data-integrity initiatives across aerospace and defense programs where certification, safety review, and coalition interoperability are essential.
- Lagrange was listed in the Vulcan-SOF technology portal, expanding visibility across the Special Operations ecosystem. While this listing does not imply deployment or partnership, it ensures that SOF operators, program managers, and acquisition teams evaluating high-assurance technologies can discover and assess DeepProve as part of future modernization efforts.
Advancing Compliance, Auditability, and Trust in Regulated Systems
In parallel, Lagrange advanced discussions in regulated environments where oversight and privacy cannot be mutually exclusive. Engagement with the U.S. Securities and Exchange Commission explored how DeepProve could modernize compliance and financial surveillance by enabling verifiable enforcement without expanding data exposure. This work reinforced a core principle: zero-knowledge proofs allow regulators to strengthen accountability while preserving privacy by design.
Scaling the Cryptographic Stack with the $LA Token
2025 was also a year of foundational growth for Lagrange’s cryptographic stack and ecosystem. Lagrange introduced the Lagrange Foundation, launched the $LA token and $LA staking program, and supported large-scale proof generation experiments such as Turing Roulette, which produced millions of verifiable AI inferences. On the infrastructure side, Lagrange helped decentralize proof generation for zkSync and supported distributed proving across elastic chains.
At the protocol and infrastructure level, Lagrange partnered with over forty Web3 and AI organizations, including Arbitrum, Base, EigenLayer, LayerZero, OpenLedger, and 0G Labs, to deploy DeepProve in real-world environments. These integrations validated that verifiable computation is becoming a shared requirement across decentralized systems, not a niche feature.
Global Leadership in Korea
This year marked breakout moments not only for Lagrange’s technology, but for the community as well. Lagrange grew into one of the strongest and largest communities in Korea and maintained high brand affinity and awareness throughout the year. Through consistent engagement and a series of high-quality, thoughtfully curated events, we built one of the most active and loyal communities in the market. Starting with Lagrange’s inaugural “AI Night in Seoul,” we quickly set a new standard, so much so that the ripple effect became known locally as the “Lagrange Effect,” a phrase now used to describe the momentum and energy that follows well-executed community-led initiatives.
Lagrange’s listings on Upbit significantly expanded brand visibility and accessibility across Asia, helping establish Lagrange as a leading name in the Korean market. As we close the year, Korea stands as a clear example of how strong listings, authentic presence, and community-first execution can drive outsized impact. We’re excited to continue building on this foundation in the year ahead, expanding Lagrange’s community across the globe.
Community and Events
A large part of Lagrange’s community expansion happens IRL. In 2025, Lagrange hosted events with over 5,000 total attendees, and the team spoke at 50+ conferences across both main stages and curated side events, including major industry conferences such as SBC, as well as leading AI, Web3, and defense-focused forums globally.
- Global presence across ETHDenver, Consensus Hong Kong, Token2049 (Dubai & Singapore), Korea Blockchain Week (Seoul), ETHCC Cannes, SBC Berkeley, Digital Assets Summit (NYC), and Hong Kong Web3 Festival
- 50+ talks across conferences and technical sessions on AI systems, zero-knowledge proofs, and verifiable computation
- Hosted flagship Lagrange events across Denver, Dubai, Seoul, and Singapore
- Designed and produced custom merchandise that went viral and became a staple within the Lagrange community
- Established a new reference point for technical and community-led events in South Korea
We’re grateful to the partners we collaborated with this year on events and programs, including 0G Labs, Aethir, Coineasy, House of ZK, OpenLedger, LazAI, Gradient, Rialo, EigenLayer, LayerZero, Polygon, Metis, GAIB, AltLayer, and others.
Looking Ahead to 2026: Real-World Adoption and Expansion
If 2025 was about proving that verifiable AI is possible, 2026 will be about making it the operational standard. In the year ahead, Lagrange will continue advancing DeepProve as the leading cryptographic infrastructure for verifiable AI, strengthening integration paths with mission-critical systems and supporting the proving architectures required for real-time, adversarial settings. As Lagrange’s technology matures, the ability to produce verifiable evidence at scale will increasingly define how autonomous systems are certified, integrated, and trusted.
In the year ahead, the community can expect:
- Deeper integration of DeepProve into operational defense, government, and enterprise systems internationally.
- Broader adoption of cryptographic assurance as a default requirement for AI and blockchain systems.
- Novel cryptographic mechanisms for generating different types of proofs, including:
  - Proofs of Training: Attesting to the correct execution of the entire model training process
  - Proofs of Fine-Tuning: Confirming a pre-trained model was updated on an authorized dataset under specified training constraints
  - Proofs of Fairness: Verifying a model’s outputs satisfy a predefined fairness constraint
  - Proofs of Reasoning: Tracing and verifying why a model made its decision without revealing sensitive internal weights or user inputs
- Continued evolution of zkML to support larger models, longer contexts, and real-time verification, including:
  - Parallel proving for a 10x improvement in proof generation times
  - Expansion of distributed proving networks and hardware-accelerated verification
Lagrange’s direction is clear. As software and machine intelligence reshape modern defense and national-security systems, the defining question will no longer be whether systems perform, but whether they can prove that they performed correctly. This is the standard Lagrange is pioneering.


