In this episode, Dylan sat down with Jens Groth (pronounced “Yens”)—Chief Scientist at Nexus, creator of the Groth16 SNARK—to map how verifiable computation reaches internet scale. We unpack Nexus’s zkVM (RISC-V), its distributed prover network (orchestrator + workers) for low-latency proving, and why proofs-of-proofs are the practical path to tiny, fast-verifying artifacts. Jens explains the coming shift to instruction sorting—bucketing additions/XORs/etc. to cut redundant constraints—and where privacy fits when you parallelize proving. We also compare ZK vs TEE vs FHE in real systems, talk formal verification on the verifier path, and preview Nexus’s L1 that makes proofs first-class citizens for DeFi and beyond.
How a zkVM compiles, executes, and emits an extended witness + proof—and what the verifier actually checks.
Why a centralized orchestrator plus many workers wins on latency today, and where that clashes with privacy.
The real trade-offs between TEEs (TDX/SEV) for speed, ZK for public verifiability, and FHE/MPC for data-in-use privacy.
Instruction sorting: reordering the execution trace into per-op buckets to shrink constraints—an orders-of-magnitude lever when combined with proof composition.
What “maturity” looks like in ZK stacks: open source, audits, formal verification for verifiers, supply-chain discipline, and reproducible builds.
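The instruction-sorting idea above can be sketched in a few lines: reorder a CPU execution trace into per-opcode buckets so each bucket can be handled by one specialized constraint set, while remembering original positions so the prover can still show the reordering is a permutation of the real execution. The trace format and names here are illustrative, not Nexus's actual implementation.

```python
from collections import defaultdict

# A hypothetical execution-trace step: (program_counter, opcode, operands).
trace = [
    (0, "add", (1, 2)),
    (4, "xor", (3, 4)),
    (8, "add", (5, 6)),
    (12, "xor", (7, 8)),
    (16, "add", (9, 10)),
]

def sort_by_instruction(trace):
    """Bucket trace steps by opcode, keeping each step's original index
    so a permutation/lookup argument can tie the sorted trace back to
    the real execution order."""
    buckets = defaultdict(list)
    for step_index, (pc, opcode, operands) in enumerate(trace):
        buckets[opcode].append((step_index, pc, operands))
    return dict(buckets)

buckets = sort_by_instruction(trace)
for opcode, steps in buckets.items():
    # Each bucket now needs constraints for only one instruction type.
    print(opcode, len(steps))
```

Instead of paying for every instruction type's constraints at every step, each bucket pays only for its own, which is where the savings compound once proofs of the buckets are composed.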
Nexus — zkVM v3 overview, distributed prover network, and blog updates:
https://specification.nexus.xyz/ • https://docs.nexus.xyz/zkvm • https://blog.nexus.xyz/nexus-zkvm-3/ • https://blog.nexus.xyz/nexus-launches-worlds-first-open-prover-network/
Phala Network docs — confidential computing stack & TDX direction:
https://docs.phala.com/overview/phala-network
dstack-TEE — builder notes for attested, policy-driven confidential compute:
https://phalanetwork.mintlify.app/docs/overview/what-is-dstack
RISC-V ISA resources (for how zkVMs model CPU steps):
https://riscv.org/specifications/ratified/
STARK proving system background (why many zk stacks compose proofs):
https://starkware.co/stark-101/ • https://starkware.co/stark/
If you’re building rollups, privacy-first apps, or just need verifiable backends, this one’s a blueprint: start efficient at the base proof, compose downward, verify in milliseconds, and use privacy where it truly survives distribution.
In this episode of Black Box, the discussion centers on Blormmy, an AI-powered wallet system that merges artificial intelligence, crypto infrastructure, and confidential computing. The conversation explores how Blormmy enables users to perform on-chain actions simply by expressing intent—transforming the wallet from a manual interface into an autonomous agent that acts securely on behalf of the user.
The founders describe Blormmy as both infrastructure and application: a system where AI interprets user commands, determines the right blockchain actions, and executes them through smart wallets. The project aims to eliminate the friction and complexity of crypto interactions, replacing seed phrases, pop-ups, and manual signing with a conversational interface that feels natural and intuitive.
Blormmy is built on Solana and uses Phala Network’s confidential computing stack for deterministic key generation. Through CrossMint’s smart wallet integration, users can onboard with just an email and delegate actions to the AI agent, which operates within strict, user-defined parameters like spending caps or session limits. This combination of autonomy and control forms the foundation of Blormmy’s security model. The result is a wallet that behaves more like an AI assistant—capable of executing swaps, bridging assets, or even buying real-world items such as Amazon products using stablecoins.
Ultimately, Blormmy represents a broader vision for the future of Web3—where AI agents can autonomously perform blockchain state changes, making crypto accessible to anyone without the usual technical barriers.
Blormmy Docs: https://docs.blormmy.com
Phala Docs: https://docs.phala.com/?utm_source=phala.network&utm_medium=site-nav
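The "autonomy within user-defined parameters" model described above can be sketched as a simple session policy that the agent must consult before every action. The field names, units, and limits are hypothetical illustrations, not Blormmy's actual API.

```python
from dataclasses import dataclass

@dataclass
class SessionPolicy:
    """Hypothetical user-defined limits an AI agent must respect per session."""
    spend_cap_usd: float
    max_actions: int
    spent_usd: float = 0.0
    actions: int = 0

    def authorize(self, cost_usd: float) -> bool:
        """Approve an action only if it stays within both caps."""
        if self.actions + 1 > self.max_actions:
            return False
        if self.spent_usd + cost_usd > self.spend_cap_usd:
            return False
        self.spent_usd += cost_usd
        self.actions += 1
        return True

policy = SessionPolicy(spend_cap_usd=100.0, max_actions=5)
print(policy.authorize(60.0))  # True: within the $100 cap
print(policy.authorize(60.0))  # False: would exceed the $100 cap
```

The point of the design is that the agent can act without per-transaction signing prompts, while the user's caps bound the blast radius of any single session.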
This is the Blackbox Podcast — where we peel back the hidden layers of technology, society, and the human psyche to unravel the forces shaping confidential AI.
In this episode, host Dylan Kawalec sits down with researcher and entrepreneur Andrew Miller for a deep dive into confidential computing, trusted execution environments (TEEs), and the future of private AI.
Andrew has been at the forefront of cryptography and decentralized systems, from early Bitcoin scripting experiments to Ethereum, MPC, and zero-knowledge proofs. As a key architect behind Flashbots and dstack, he’s redefining how we think about secure, verifiable computation in a world where AI is becoming ever more powerful—and ever more in need of unbreakable safeguards.
Through the lens of auctions, Andrew explains how trust models evolve, why combining TEEs and MPC strengthens guarantees, and how confidential containers may become the new smart contracts. He then connects these ideas to AI, reputation, and agent-to-agent protocols—making the case for “prove what, not who” as a guiding principle for building trustworthy systems.
Along the way, he demos sandboxed TikTok sessions inside a TEE, critiques reputation models for autonomous agents, and explores how confidential compute enables rivals to collaborate securely through projects like BuilderNet—reshaping the future of decentralized infrastructure.
Now here is E04 of the Blackbox Podcast, hosted by Dylan Kawalec and brought to you by Phala Network.
Phala Docs: https://docs.phala.com/
Flashbots Docs: https://docs.flashbots.net
In this deep-dive episode, our host Dylan Kawalec sits down with Arman Aurobindo from zkVerify to explore the cutting-edge intersection of zero-knowledge proofs and trusted execution environments (TEEs).
zkVerify handles verification for all major proof types (Groth16, RISC Zero, SP1) while eliminating the expensive 20-30% proof wrapping overhead and reducing verification costs by over 90% compared to native EVM verification. We dive into their integration with Phala's trusted execution environments and explore practical applications from ZK gaming to AI-powered DeFi trading agents.
Arman explains why ZK proofs alone aren't enough for complete privacy, how TEEs fill the gaps, and their vision for bridging Web2 services with Web3 verification. We also discuss the future of ZK-AI applications, identity systems for underserved populations, and why the combination of ZK + TEE is the optimal approach for privacy-preserving applications.
This conversation covers technical architecture, developer experience improvements, and the practical path to making privacy-preserving applications economically viable.
Phala Docs: https://docs.phala.network
zkVerify Docs: https://docs.zkverify.io
In this episode of the Black Box Podcast, we dive into the architecture of private rails for real-world assets. Paul Mierlo, co-founder of Spout Finance, explains how corporate bond ETFs like LQD are mirrored on Ethereum as permissioned ERC-3643 tokens backed by on-chain identities.
We explore the full pipeline: identity verification, eligibility minting, permissioned tokens, encrypted order flow, and confidential settlement using Intel TDX. Paul breaks down why corporate bonds were chosen first, how proof-of-reserve and audits map to chain, and how CeFi liquidity can meet DeFi programmability—without leaking balances or order flow.
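The permissioned-token step of the pipeline above can be sketched as a transfer gate: an ERC-3643 token consults an on-chain identity registry and allows a transfer only when both parties hold verified identities. The registry contents and addresses here are illustrative stand-ins, not Spout Finance's actual contracts.

```python
# Stand-in for an on-chain identity registry of verified investors.
ELIGIBLE = {"0xAlice", "0xBob"}

def can_transfer(sender: str, receiver: str) -> bool:
    """A permissioned transfer succeeds only if both sender and
    receiver pass the identity/eligibility check."""
    return sender in ELIGIBLE and receiver in ELIGIBLE

print(can_transfer("0xAlice", "0xBob"))      # True: both verified
print(can_transfer("0xAlice", "0xMallory"))  # False: receiver not verified
```

On-chain, this check runs inside the token's transfer function itself, which is what lets a regulated asset circulate on a public chain without open-ended transferability.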
If you’re building private infrastructure for RWAs, or are curious about how zero-trust finance actually works, this episode offers a detailed look at the design choices shaping the next generation of financial rails.
Phala Docs: https://docs.phala.network
Spout Finance Docs: https://drive.google.com/file/d/1fklbqmZhgxzIzXN0aEjsf2NFat2QdpFp/view
In this episode, we dive into the architecture of decentralized AI systems and what it takes to build privacy-preserving infrastructure at scale. Harish Kotra, Head of Developer Relations at Gaia Network, walks us through the technical pieces behind training safe AI—from vector databases and inference nodes to tool-calling and domain design. We explore why centralization creates risks for privacy, resilience, and developer autonomy, and how concepts like sovereignty, transparency, and accessibility shape the future of confidential AI.
Phala Docs: https://docs.phala.network
Gaia Docs: https://docs.gaianet.ai/?ref=blackboxpodcast
Run Gaia Nodes on Phala’s TEE: https://dev.to/gaiaai/running-gaia-ai-nodes-on-phalas-trusted-execution-environment-3cf2