In a twist on what has probably become our “normal” programming, this episode features just the two of us in conversation. We explore the implications of technological progress - from the shift we’re contemplating from AI-infused linear workflows to fully agentic ones, to the risks and vulnerabilities baked into today’s LLM architectures. Essentially, it’s the kind of discussion we often have offline, brought into the open.
The following pieces ground our discussion:
From linear AI-infused workflows to fully agentic - new skills and orchestration challenges
Prompt Injection Attacks & AI Governance:
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Practical analysis and visualisation of how AI is augmenting our potential
This week we sit down with Memme Onwudiwe for a conversation that starts in a Harvard Law classroom - transitions to his building an AI company before ChatGPT was a thing - and ends up in outer space 🚀
Memme co-founded Evisort while at Harvard Law School in 2016, building AI-powered contract intelligence from the Harvard Innovation Lab years before it became mainstream. Workday acquired the company in October 2024, where Memme now serves as an AI Evangelist.
Memme returns to Harvard each spring to teach legal entrepreneurship alongside co-founder Jerry Ting, and he’s a published space law scholar whose paper “Africa and the Artemis Accords” examines how emerging nations can secure their stake in the space economy.
Key References
Academic Research
Africa and the Artemis Accords — Memme Onwudiwe & Kwame Newton, New Space (2021)
Legal Frameworks
Artemis Accords — Non-binding bilateral space exploration principles (2020, 55+ signatories)
Outer Space Treaty — Foundational UN space law treaty (1967)
Moon Agreement — “Common heritage” framework (1979, 18 signatories)
Organizations
Harvard Innovation Labs — Where Evisort was founded
CLOC — Corporate Legal Operations Consortium (6,300+ members)
Space Beach Law Lab — Annual space law conference, Feb 24-26, 2026, Long Beach
Corporate
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com.
Nicole Braddick needs no introduction - but if you had to rush one for the purposes of publishing a podcast 👀 you might say she’s the Global Head of Innovation at Factor Law, following the February 2025 acquisition of her company, Theory & Principle, where she served as CEO and Founder.
A former trial lawyer who transitioned into legal tech 15 years ago, Nicole has been one of the industry's most persistent advocates for bringing modern design and development practices to legal technology.
Her team has worked with leading law firms, legal tech companies, corporate legal departments, non-profits and public sector organisations to build custom solutions focused on user experience - transforming an industry that, when she started, was "purely functional" and "engineering-led" into one where good design is finally recognised as essential.
We get into all of that and more during our discussion, and lean in hard for Nicole’s system-wide view and perspective on what’s happening at present.
Key Takeaways
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Practical analysis and visualisation of how AI is augmenting our potential
We sit down with Stephan Breidenbach, co-founder of the Rulemapping Group and a German scholar who's been quietly revolutionising how we think about law, technology, and democratic governance since the early 2000s.
What started as a teaching tool to help law students visualise complex legal reasoning has evolved into something far more ambitious: a comprehensive system for transforming laws into executable code that maintains human oversight while dramatically improving access to justice.
Stephan's present work spans three critical areas: decision automation (turning legal rules into fast, transparent systems), rule-based AI (supporting human lawyers with explainable reasoning), and law as code (drafting legislation that's both human and machine-readable from day one).
Some of our highlights from the conversation:
The Transparency Imperative: "I would never trust an LLM with a legal process because it's confabulating," Stephan declares, highlighting why the Rulemapping approach prioritises explainable AI over black-box solutions. Their system lets human decision-makers see exactly how the AI reached its conclusions – a "zoom in, zoom out" process that mirrors how lawyers naturally think.
Democracy-First Technology: Unlike Silicon Valley's "move fast and break things" mentality, Stephan advocates for keeping humans in the loop even when AI becomes more accurate: "I think it's very important for trust in the legal system and therefore in a democratic system that there are human beings, even if they make worse decisions."
Access to Justice at Scale: Through real-world deployments like processing 500,000 diesel emission scandal cases and serving as Europe's first certified Digital Services Act dispute resolution body, Rulemapping demonstrates how thoughtful automation can make legal systems accessible to everyone, not just those who can afford lawyers.
We also explore the behavioural risks of over-relying on automated systems, the potential for "law as code" to improve democratic participation, and Stephan's vision of embedded law that serves citizens rather than bureaucracy.
If you found this episode interesting, please like, subscribe, comment, and share!
For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for more of the same.
We catch up with Ben Martin, the former Director of Privacy at Trustpilot and author of "GDPR for Startups," who's currently living his best life somewhere in the Estonian wilderness with a camper van, fishing rod, and blessed freedom from subject access requests.
Having built privacy programs at high-growth companies like Trustpilot, Ovo Energy, and King Digital Entertainment, Ben brings a refreshingly practical perspective to privacy law that goes way beyond compliance theatre.
From his sabbatical perch in the Nordics, he reflects on everything from why GDPR hasn't quite delivered its promised outcomes to how privacy lawyers are uniquely positioned to lead AI governance.
What We Cover:
The Sabbatical Chronicles: Ben's epic Nordic adventure and why stepping away from work sometimes gives you the clearest perspective on it
Privacy Program Building: Moving from compliance theatre to business enablement, and why good privacy programs start with genuine curiosity about products
GDPR Reality Check: Why the regulation might not yet have delivered its intended outcomes, and the types of privacy lawyers and approaches Ben sees in practice
AI Governance Evolution: How privacy professionals are naturally stepping into AI oversight roles and what new skills they need to develop
Technical Literacy: The importance of understanding what your business actually builds and Ben's practical approach to learning complex technical concepts
Key References:
GDPR for Startups - Ben's practical guide to building privacy programs in high-growth companies
Field Fisher Privacy Newsletter - Legal developments summary that Ben recommends for staying current
Hard Fork Podcast - Ben's go-to for broad tech and AI developments
Lovable - The AI coding platform Ben's been experimenting with to build his habit tracker (and recruit his girlfriend as user number one)
If you found this episode interesting, please like, subscribe, comment, and share! For more thought-provoking conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.
This week we sat down with Dan Hunter, Executive Dean of the Dickson Poon School of Law at King's College London and serial legal tech entrepreneur.
Dan's journey spans academia across three continents, four successful startups (including his current venture GraceView), and decades of research on the cognitive science of legal reasoning. As both an educator training the next generation of lawyers and an entrepreneur building AI-powered legal solutions, he offers a unique dual perspective on the transformation underway across knowledge work.
Key Takeaways
1. The Learning Paradox: AI Makes Us Feel Smarter While Making Us Dumber
Students using large language models consistently perform better on assignments and believe they're learning more - but when the AI is removed, they've retained virtually nothing. This creates a dangerous illusion of competence (sycophantic models propagate this!) that law schools and firms must address through new assessment methods and training approaches.
2. We're Heading Toward a "Barbell" Legal Profession
Traditional pyramid law firm structures will collapse as AI automates much of the work. Dan believes the future involves senior lawyers managing client relationships at the top, AI agents handling routine tasks in the middle, and "legal engineers" swarming around validating AI outputs and steering the models.
3. Entry-Level Legal Jobs Are Already Disappearing
We discuss the recent Stanford research "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence" by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen (Stanford Digital Economy Lab, 2025) - a landmark study using ADP payroll data that shows a 13% employment decline for young workers in AI-exposed occupations.
Interested in more?
If you found this episode interesting, please like, subscribe to the show, comment, and share! For more thought-provoking content at the intersection of law and technology, head to our Law://WhatsNext home for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Practical analysis and visualisation of how AI is augmenting our potential
We have fun sitting down with Dana Rao (the former General Counsel and Chief Trust Officer at Adobe) - where we cover the implications of AI progress on:
regulatory frameworks and geopolitics;
copyright law;
deepfakes - including content proliferation and authenticity;
fair use and Dana’s take on the current class action lawsuits in the US; and
Dana’s proposals for a new impressionistic right for creators to stave off the economic harms of their work being imitated.
The conversation gave us a fascinating insight into life at Adobe at the moment the performance of these generative models really began to take off, and it was clear to us that Dana and his team played a pivotal role in shaping not only what kind of products Adobe went on to develop, but how they would be distributed and consumed by their users!
This episode draws on Dana's extensive experience at the intersection of technology, law and policy. Here are the key references and cases we discussed:
Legal Cases:
Andy Warhol Foundation for Visual Arts, Inc. v. Goldsmith, 598 U.S. 508 (2023) - The Supreme Court case that Dana argues will influence the outcome of AI fair use battles (which are focused on economic competition between uses)
Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., No. 1:20-CV-613-SB (D. Del. Feb. 11, 2025) - The "Westlaw case" Dana mentioned where the judge initially ruled for the AI company but changed his mind after better understanding the technology
Dana's Policy Work:
Senate Judiciary Committee Testimony (July 12, 2023) - Dana's appearance before the Senate Subcommittee on Intellectual Property hearing titled "Artificial Intelligence and Intellectual Property – Part II: Copyright"
Adobe's Proposed Anti-Impersonation Law - Dana's legislative proposal for federal protection against AI-powered style imitation
Content Authenticity Standards:
Content Authenticity Initiative (CAI) - Adobe-founded initiative with over 5,000 members working to establish content provenance standards
Coalition for Content Provenance and Authenticity (C2PA) - The formal standards organization co-founded by Adobe, Microsoft, Intel, Arm, BBC, and Truepic under the Linux Foundation
C2PA Implementation in Google Pixel Phones - Recent adoption of content authenticity standards in consumer devices
If you found this episode of Law://WhatsNext interesting, please rate, subscribe, comment, and share!
Rapid dispatch: we pulled in Sigge Labor (CTO) and Jacob Johnsson (Legal Engineer) from Legora - one of the fastest-growing AI companies in the world, and one of the few with early access to GPT-5 - for a chat about OpenAI's recent model release.
They share what it’s already unlocking for legal reasoning, why their “battle evals” put GPT-5 ahead 80%+ of the time across a host of legal tasks, and how its new steerability could reshape the way lawyers (and the tools they use) interact with the model (including through Legora).
This is part two and concludes our GPT-5 launch mini-series - snappy, unpolished, and recorded while the paint’s still wet.
If you like these hot-off-the-press deep dives, tell us (and, more importantly, tell the algorithm) by rating, reviewing, and telling your friends.
Emergency drop: we grabbed Jake Jones (CPO & Co-Founder, Flank) for a quick-fire reaction to OpenAI’s GPT-5 launch.
We cover his day-one impressions, what it means for legal products (including Flank), and the downstream implications for how legal work gets done.
A short detour from our usual programming—did you enjoy this rapid-response format? If yes, please like, rate, and share to help Law://WhatsNext reach more people.
In this compelling episode of Law://WhatsNext, hosts Tom & Alex dive into the transformative shifts underway in legal education and junior lawyer development. Joined by three visionary voices - Lucie Allen (Managing Director, Barbri), Rob Elvin (Partner, Squire Patton Boggs), and Sophie Correia (Trainee Solicitor, TravelPerk) - the discussion explores provocative ideas reshaping what it means to be a lawyer.
Do Lawyers Even Need to Know the Law?
Sophie Correia challenges the traditional emphasis on memorisation and technical rules in legal education. Reflecting on her real-world experiences at a tech scale-up, Sophie argues that success hinges more on human skills such as communication, empathy, and trust-building, rather than recalling obscure statutes.
The Flawed Incentives of Legal Training
Rob Elvin sheds light on systemic issues stemming from the billable hour model, which prioritises short-term profitability over effective mentoring. He advocates for a groundbreaking solution: linking career progression directly to the quality of trainee supervision, potentially transforming mentorship from a luxury into an essential career catalyst.
The AI Disconnect
Lucie Allen identifies a critical gap in legal education - the absence of meaningful engagement with AI and technology. Despite these tools reshaping the profession, current frameworks like the SQE neglect to equip trainees adequately for technological realities, posing a substantial risk to their future readiness.
Three Ideas to Transform Legal Education:
Continuous Learning as the New Norm: Education doesn't stop at qualification. Lucie emphasises the necessity of lifelong learning, driven by relentless curiosity and adaptation to change.
Human Skills Set Lawyers Apart: Sophie highlights the enduring value of human-centric capabilities—understanding people, navigating complexity, and ethical reasoning—as indispensable traits lawyers must cultivate.
Systemic Change through Collective Responsibility: Rob, Lucie, and Sophie underline the importance of personal agency and collaborative effort in driving substantial reform across education, training, and regulatory frameworks.
A Hopeful Path Forward
Ultimately, the podcast champions a future in which tomorrow’s lawyers blend ethical judgment, technological proficiency, and interpersonal insight, prompting listeners to reconsider not whether lawyers need to know the law, but rather what precisely they need to know—and how to prepare them best for the evolving landscape.
Join us for an inspiring conversation that challenges conventional wisdom and points toward an empowered, adaptable, and human-centred future for the legal profession.
In this quarterly deep-dive, we reconnect with Peter Duffy, the brilliant mind behind the Legal Tech Trends newsletter, for our regular temperature check on what's actually happening in the legal technology landscape.
Peter's ability to cut through the hype and identify real trends, combined with his practical experience helping organisations actually adopt AI, makes this a masterclass in separating signal from noise.
Whether you're trying to understand why adoption is lagging despite the excitement, wondering about the strategic implications of billion-dollar acquisitions, or simply want to understand what Silicon Valley's sudden fascination with legal really means, this conversation hopefully delivers some insight.
The episode kicks off with a sobering look at AI adoption across the legal profession, and the numbers might surprise you. Peter walks us through recent surveys from Bain and BCG that paint a picture quite different from the hype we're all hearing about at conferences or on LinkedIn.
The acquisition and partnership landscape is absolutely wild right now, and we break down the strategic implications of:
Clio's $1 billion acquisition of vLex
Eudia's acquisition of the Irish ALSP Johnson Hanna
Harvey's strategic alliance with LexisNexis
We end discussing some personal topics of interest, including:
Y Combinator's explicit call-out for startups to build "full stack AI companies" – using law firms as their prime example;
the implications of the recent order in the New York Times v. OpenAI case.
If you found this episode interesting, please do like, subscribe, comment, and share! It helps the show rank and reach more people.
Best, Alex & Tom
We sit down with Sam Lewis, Senior Product & Privacy Counsel at Canva, who kicks things off by asking us the most important question of 2025: "If you had to be a piece of cutlery, what would you be?" Spoiler alert: Sam's a spoon (warm, empathetic, part of the emotional support crew), while Tom and Alex predictably went fork (practical go-getters with zero patience for fluff).
Beyond the viral TikTok personality tests, Sam delivers cutting-edge insights into product counsel work at a company that's been building AI since 2017 - well before it was trendy. With Canva's community using AI tools over 18 billion times, Sam has become a thought leader on navigating the complex intersection of law, privacy, and AI-native product development.
What makes this conversation essential listening:
Sam reveals how legal teams at AI-first companies don't just manage risk - they can drive growth.
Three Key Takeaways:
1. Trust is the Ultimate Competitive Advantage: In an era where AI capabilities are rapidly commoditizing, trust becomes the differentiator. Sam reveals how Canva aims to be one of the world's most trusted platforms - this isn't just about compliance, it's about building products people love and companies trust.
2. Privacy Instincts Are Non-Negotiable in AI-First Companies: Sam makes it clear: "I don't think it's possible to advise on AI without understanding privacy." AI is built on data, and privacy laws determine what's fair, legal, and ethical - making strong privacy instincts essential for knowing when to green light and when to pause in AI-native environments.
3. Product Counsel Is Risk-Aware, Not Risk-Averse: Sam champions a philosophy that's become essential for AI-first companies: taking a risk-aware rather than risk-averse approach. To Sam this means asking the right questions, reducing unnecessary friction, and helping teams figure out how to move forward safely - often saying "how can we" instead of "we can't."
The conversation touches on Canva's pioneering "AI everywhere" culture (where AI impact is now part of performance reviews), Sam's love for loud parenting, and her admiration for @sophworkbaby on Instagram (a fellow Aussie and ex-Googler) delivering plain unfiltered career insights.
If you found this episode interesting, please do like, subscribe, comment, and share! It helps the show rank and reach more people. Best, Alex & Tom
Ross McNairn, CEO and Founder of Wordsmith, brings a rare perspective to legal tech - lawyer turned software engineer turned CTO at companies like Skyscanner and TravelPerk.
Our conversation spans Ross's transition from being a trainee solicitor navigating Scottish estate law to leading one of the world's fastest-growing legal AI companies, his philosophy on building lasting products over quick wins, and why he believes we're entering the era of "legal engineering."
What We Cover:
Building Philosophy: Why Ross spent a year quietly iterating rather than rushing to market with an MVP wrapper.
The UK Opportunity: How Britain's legal heritage and technical talent create untapped advantages in the AI race.
Legal Engineering Revolution: Ross's five-level competency framework transforming lawyers into product-minded operators.
Market Evolution: Why the generalist legal AI era is ending and specialization is the future.
Three Key Takeaways:
1. Quality Over Speed Wins Long-Term: Ross's mantra: "I don't want to be the first tool that everybody buys. I want to be the last tool that they buy." While competitors rush wrappers to market, disciplined product development - by a team that is 95% engineers and lawyers - creates lasting competitive advantages through superior reliability and user experience.
2. Legal Engineering is the New Frontier: The future belongs to lawyers who think like product managers. Ross describes customers building sophisticated systems with 50+ interconnected agents—a glimpse into legal practice where workflow orchestration, not individual task automation, drives value for in-house teams managing constant business demands.
3. Invisible Quality Creates Unbeatable Moats: While everyone debates model capabilities, Wordsmith are developing rigorous "evals": testing frameworks that define world-class legal outputs.
If you found this episode interesting, please like, subscribe, comment, and share!
In our first "on record" informal coffee-style conversation with a fellow in-house team, we catch up with Kshitij (KD) Dua (Director of Legal Ops) and Priyam Bhargava (Senior Corporate Counsel) from HashiCorp (an IBM company). Having witnessed HashiCorp's extraordinary journey from startup through series rounds, IPO, and ultimately IBM acquisition, this dynamic duo bring unique insights and perspectives on the importance of the lawyer x legal ops relationship, legal AI adoption, and what happens when traditional SaaS metrics meet an intelligence explosion.
Our conversation emerges following their super engaging "Influencing Without Authority" presentation at CLOC CGI in Las Vegas last month.
What We Cover:
If you found this episode interesting, please like, subscribe, comment, and share!
For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for:
Focused conversations with leading practitioners, technologists, and educators
Deep dives into the intersection of law, technology, and organisational behaviour
Jessica Block thinks about the technological transformation we are all experiencing the way physicists think about quantum mechanics – as a system where observation changes the outcome and where the most interesting things happen at the edge of possibility.
In this episode of Law://WhatsNext, we catch up with Jess, Executive Vice President at Factor – a former theatre major turned legal tech executive who's spent her career electrifying complex systems. From her early days in eDiscovery at FTI Consulting to her multifaceted roles at Factor (including a stint as CFO), she embodies the kind of cross-functional thinking and leadership that thrives in periods of rapid transformation.
What makes this episode essential listening is Jessica's ability to articulate the delicate balance between structure and emergence when leading through technological upheaval. "We're constantly stitching together partial insights to get to coherence," she explains, describing how Factor is approaching the GenAI moment not as a traditional tech rollout but as a complex system that requires cultivation rather than control.
The conversation takes unexpected turns as Jessica reflects on Factor’s strategic pivot into AI-first legal solutions. The Sense Collective, formed during what she calls "that constant loop of discovery and disappointment" in early 2023, has positioned Factor at the center of a community exploring how generative AI will reshape legal work.
We venture into philosophical territory when Jessica references complex systems theory and Neil Theise's "Notes on Complexity", drawing parallels between consciousness as a transducer and the kind of emergent organisational change Factor is nurturing. These moments of introspection reveal why building and leading in this space requires not just technical expertise but a willingness to confront existential questions about how we approach work itself.
If you found this episode interesting, please like, subscribe, comment, and share!
In this episode of Law://WhatsNext, we welcome legal ops guru, pioneer, visionary and friend, Jenn McCarron - President of CLOC, former Director of Legal Operations at Netflix and Spotify - for a conversation about her renaissance year and visions for the future of legal operations.
We go light, deep, introspective and playful around:
“writing your own funeral” on jobs and being honest with yourself around your strengths, ambitions and (as a by-product) your limitations (we all spend a lot of time at work)
The discipline, rigour, humility and vulnerability that comes with honing the craft of “writing” (Jenn and a mystery co-author might be building some provocative frameworks for reimagining how we might perceive our current roles and their potential)
Using the odd job interview to sharpen your intuition and feeling for who you are and what you do
Becoming more intentional with the time, energy and attention Jenn is dedicating to her creative pursuits and passions
Staying sharp with a couple of high-profile consulting commitments at a prominent private equity firm and social media business, the CLOC Presidency and observing the market and how it’s evolving from a different vantage point
Jenn’s influential Legal Ops 3.0 framework, which envisions legal operations evolving beyond implementing foundational systems to becoming data-powered, enterprise-wide strategic partners
Some recent reading, including Jevons Paradox: A Personal Perspective by Tina He, where we reflect on the idea that technology meant to create efficiencies often paradoxically increases workload (rather than creating more space) and we consider that legal ops professionals often "sell efficiency" without controlling what happens with the time saved.
The episode concludes with a preview of the upcoming CLOC Global Institute, where Jenn hints at exciting new programming developments. We will report back our thoughts and reflections from the event in Las Vegas next week!
We covered a lot, but the time came and went in a blink. Jenn always brings bright, bold and effusive energy to every touchpoint, conversation and interaction. We hope (but secretly suspect) you’ll find something in this chat to inspire a creative thought (or two) or reframe a perspective.
If you found this episode interesting, please like, subscribe, comment, and share!
Peter drops the quote of the quarter: "Legal is sexy now when it comes to people working in technology and innovation".
In our latest episode, we're mixing things up by taking Peter Duffy's wildly popular Legal Tech Trends newsletter and diving deeper into the hottest headlines from Q1 2025. Peter joins us to unpack what's actually happening behind the email blasts and LinkedIn posts we've all been scrolling through.
We explore four key trends:
1. Voice as the new frontier - Why typing is "super slow" compared to how fast we think and speak, and how voice interfaces are becoming the preferred way for experts to unlock their domain knowledge when working with AI.
2. Law firms rolling out Harvey & Legora - The rush to deploy these AI-native tools firm-wide and the cultural challenges of making them stick in an environment Peter describes as "a collection of islands."
3. The Axiom & DraftPilot case study - A rare deep dive into actual, measurable results from AI implementation across 27 legal departments (with Peter confirming these stats are "totally legit").
4. The economics of "product-led" law firms - How firms like Macfarlanes and A&O Shearman are experimenting with subscription models and profit-sharing arrangements that challenge the billable hour.
Alex brings the heat with his skepticism about law firms' tech initiatives, comparing them to "drug dealers giving free samples" to hook clients – while we debate whether AI might finally change that dynamic.
Want to stay on top of these trends yourself? Check out Peter's newsletter where he delivers regular shots of insights and personal recommendations after "scouring the web for mentions of LegalTech and listening to legal tech podcasts at 2x speed."
If you found this episode interesting, please like, subscribe, and share!
We dive into the machine room of AI-native legal tech with Jake Jones, co-founder of Flank (formerly Legal OS) – a designer-turned-entrepreneur who's building digital colleagues to serve as the front line between corporate legal teams and the businesses they support.
What makes this episode essential listening is Jake's ability to demystify complex concepts that many of us encounter but few truly understand – from RAG (Retrieval Augmented Generation) to context windows and vector databases.
"We're in the era of workarounds," Jake explains, describing how companies are building complex technical solutions that will likely become obsolete within 12-24 months as AI capabilities advance. His breakdown of how these systems actually function – chunking documents, creating embeddings, and using similarity search – provides a rare glimpse into the machinery behind today's AI applications and why this technical debt is both necessary and temporary.
The conversation takes fascinating turns as Jake drops provocative insights throughout: "I think SaaS is going to die," he declares, predicting that today's point-and-click applications will seem as primitive as telegrams once model-based applications become the norm. "We will look back in 20 years and think, how did the older generations manage to get anything done?"
We venture into unexpectedly philosophical territory when Jake reflects on why interactions with AI can sometimes feel spookily conscious: "What we're seeing is our own consciousness projected into it and being reflected back." These moments of introspection reveal why building in this space requires not just technical expertise but a willingness to confront existential questions about intelligence and humanity itself.
This episode embodies exactly why we started Law://WhatsNext – to capture raw, unfiltered conversations with the people building our future while it's still being shaped. Jake's candid admission that he initially "metamorphosed into fear" about AI's implications before finding a more optimistic path forward mirrors the journey many of us are on.
Whether you're a legal professional trying to navigate the AI revolution or simply curious about the machinery behind the magic, Jake's insights will help you separate signal from noise in a rapidly evolving landscape. And fair warning: his hot takes on everything from hiring practices ("a single marketer who understands how to use AI can do the work of five") to the future of autonomous agents might just fundamentally change how you think about technology's role in legal practice.
If you found this episode interesting, please like, subscribe, comment, and share!
We explore responsible AI governance with Hadassah Drukarch, former Director of Policy and Delivery at the Responsible AI Institute and current PhD researcher at Leiden University.
Hadassah shares her journey into AI ethics and governance, driven by her interest in problem-solving and her recognition that traditional top-down legal approaches are insufficient for emerging technologies – we need bottom-up perspectives as well.
The conversation centers on a critical disconnect: regulatory frameworks for AI are often too generalised and detached from practical implementation contexts. While frameworks like the EU AI Act provide baseline standards, Hadassah expresses serious doubts about their effectiveness, noting that implementing responsible AI looks dramatically different across industries like healthcare and finance.
Perhaps most compelling is Hadassah's work testing healthcare robotics, which revealed significant gaps between regulatory standards and real-world performance. Her team's experiments showed that exoskeleton technology designed primarily for elderly users had been tested predominantly on male subjects and was often uncomfortable or unusable for women – demonstrating how policy development frequently lacks critical real-world testing with diverse user populations.
Hadassah advocates for creating infrastructure that incorporates practical, on-the-ground testing directly into the regulatory process, replacing theoretical frameworks with evidence-based governance.
The conversation concludes with reflections on the value of peer-to-peer knowledge sharing and practical conversations – like this podcast – as essential resources in this rapidly evolving field.
If you found this episode interesting, please like, subscribe, comment, and share!
For more thought-provoking content at the intersection of law and technology, head to https://lawwhatsnext.substack.com/ for: (i) Focused conversations with leading practitioners, technologists, and educators; (ii) Deep dives into the intersection of law, technology, and organisational behaviour; and (iii) Practical analysis and visualisation of how AI is augmenting our potential
In this episode of Law://WhatsNext, we sit down with Sean West, co-founder of Hence Technologies and author of the soon-to-be-released book Unruly, available for preorder ahead of its official release on 25 March 2025. Drawing on Sean’s experience as a globetrotting CEO, political advisor, and legal technology founder, our conversation dissects the implications of a rapidly evolving geopolitical landscape - what Sean refers to as a new “unruly” world order.
Key Themes & Highlights
Geopolitical Deconvergence: Why old-world certainties are fracturing, and how that puts legal professionals on the front lines of assessing and mitigating global risk.
Law as a Techno-Political Force: Sean spotlights how law and law firms are no longer mere service providers but active shapers of strategic conversations in boardrooms and beyond.
AI and the ‘Legal Singularity’: Referencing seminal works like Personalized Law: Different Rules for Different People by Omri Ben-Shahar and Ariel Porat, and The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better by Abdi Aidid and Benjamin Alarie, Sean provokes us to imagine a future where legal frameworks adapt at the speed of technology—and the profound implications it holds for practitioners and society at large.
A Special Offer for Our Listeners
Sean also shares how his team at Hence Technologies is forging new ways to track and interpret global events—so your legal function or organization can stay ahead of the curve. Want to give it a try? Head to global.hence.ai, and enter the code “lawwhatsnext” at checkout to receive a one-off promotional offer on the new product.
If this conversation resonates with you, like, subscribe, comment, and share this episode! For more deep dives at the intersection of law, technology, and organizational behavior, visit https://lawwhatsnext.substack.com/ where we feature:
Focused conversations with leading practitioners, technologists, and educators
Insights on navigating geopolitical risk, AI regulation, and the future of legal practice
Practical analysis of how emerging technologies can augment (not just automate) the work we do