September 1, 2025

Meet the Lagrange Interns (2025 Cohort)

Each year, Lagrange’s internship program brings together emerging researchers and engineers working at the frontier of cryptography, verifiable computation, and secure AI systems. Guided by Charalampos (Babis) Papamanthou, Chief Scientist at Lagrange and Professor of Computer Science at Yale, interns are embedded in real R&D environments, working directly on problems that connect cutting-edge theory with production-scale implementation. The 2025 cohort advanced that vision across three major research tracks: scalable zero-knowledge systems, provable non-linear computation, and post-quantum proof construction.

Christodoulos Pappas

Hong Kong University of Science and Technology

Christodoulos joined Lagrange from the Hong Kong University of Science and Technology, where his research focuses on applied cryptography and zero-knowledge systems. During his internship, Christodoulos played a pivotal role in the design and development of DeepProve, Lagrange’s proof system for verifiable AI. Working closely with the engineering team, he helped architect core components that translate machine learning inference into cryptographically verifiable proofs without revealing private data or model parameters.

In parallel, his independent research tackled one of the field’s most difficult open challenges: constructing the first scalable, transparent, and post-quantum collaborative zkSNARK. The work combines cryptographic transparency (no trusted setup) with post-quantum hardness assumptions and multi-prover scalability, a step toward practical, future-proof zero-knowledge systems. Christodoulos’s work exemplified the dual nature of the Lagrange internship: deep theoretical cryptography applied directly to real, high-performance systems.

Sriram Sridhar

University of California, Berkeley

At UC Berkeley, Sriram’s background in applied mathematics and systems engineering provided the foundation for a project that directly extends the expressive power of zero-knowledge proofs. His research focused on efficiently provable approximations for non-linear functions—a central bottleneck in zkML and verifiable numerical computation. Sriram developed a general technique for constructing provable approximations of a wide class of functions, including exponential, trigonometric, hyperbolic, and incomplete integral functions.

The method leverages numerical integration techniques tuned for high accuracy while maintaining low proving overhead. By bridging numerical analysis and proof theory, his work establishes a practical framework for efficiently verifying continuous functions in a discrete cryptographic setting. Sriram’s approach will enable zk-proofs over richer, smoother function spaces, improving precision in verifiable AI pipelines and differential privacy applications where approximation error directly affects trust.
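To illustrate the general idea (this is a simplified sketch, not Sriram's actual construction): a transcendental function defined by an integral, such as erf, can be replaced by a fixed-node quadrature rule. The evaluation then reduces to a short sequence of additions and multiplications over fixed nodes, which is the kind of computation an arithmetic circuit, and hence a zk-proof system, handles cheaply. The function names and parameters below are illustrative choices, not part of the research itself.

```python
import math

def simpson(f, a, b, n=16):
    # Composite Simpson's rule with n subintervals (n must be even).
    # Only additions and multiplications over a fixed set of nodes,
    # so the evaluation maps naturally onto an arithmetic circuit.
    # (In a real proof system the node values would be fixed-point
    # constants, not floating-point calls to a math library.)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def erf_approx(x, n=16):
    # erf(x) = (2 / sqrt(pi)) * integral of exp(-t^2) from 0 to x
    return 2 / math.sqrt(math.pi) * simpson(lambda t: math.exp(-t * t), 0.0, x, n)

# A handful of fixed quadrature nodes already gives small error:
err = abs(erf_approx(1.0) - math.erf(1.0))
```

The trade-off the research targets is visible even here: more quadrature nodes mean higher accuracy but a larger circuit, so the rule must be tuned so that approximation error stays below the application's tolerance while the proving overhead stays low.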

Alireza Shirzad

University of Pennsylvania

Alireza’s project addressed one of the most active frontiers in cryptographic research: building SNARKs for integer arithmetic. While most zero-knowledge systems operate over finite fields, many real-world computations—especially in verifiable machine learning and zkVMs—are naturally expressed over integers. Alireza’s work focused on designing a lattice-based modular polynomial commitment (mod-PC) scheme for SNARKs that natively support integer operations.

This construction offers both post-quantum security and efficiency in proving non-native computations, paving the way for scalable, integer-compatible ZK systems. The research blends lattice-based cryptography with algebraic commitment schemes, helping to bridge the gap between theoretical constructions and implementable proof systems for real-world data types.
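The underlying mismatch is easy to see concretely. A field-based proof system reduces every product modulo a prime, so ordinary integer arithmetic silently wraps around; the illustrative sketch below (the modulus is an arbitrary example, not one used by Alireza's scheme) shows the divergence that forces field-based systems into costly range checks and limb decompositions:

```python
p = 2**31 - 1  # an example prime field modulus (illustrative)

def field_mul(a, b):
    # Multiplication as a field-based SNARK performs it: reduced mod p.
    return (a * b) % p

x, y = 2**20, 2**20
true_product = x * y          # 2**40, the integer result a program expects
field_product = field_mul(x, y)  # wraps around: 2**40 mod (2**31 - 1) = 512

# Emulating true integer semantics inside the field requires extra
# range checks and limb decompositions -- exactly the non-native
# overhead that a commitment scheme with native integer support avoids.
```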

The 2025 cohort reflects the program’s founding purpose: to merge cryptographic research with the engineering discipline required to build verifiable, privacy-preserving AI infrastructure. Under Professor Papamanthou’s mentorship, the interns contributed foundational advances—post-quantum zkSNARK design, verifiable non-linear approximation, and integer-based commitments—each directly tied to Lagrange’s broader mission of scaling trust, verification, and accountability across modern computation. Their work will inform both the continued evolution of DeepProve and Lagrange’s ongoing collaborations with academia, defense, and industry partners building the next generation of verifiable intelligence systems.