We’re excited to announce Air Street Capital’s lead investment in Studio Atelico’s $5M Seed round, alongside friends including Chris Ré (Stanford), Thomas Wolf (Hugging Face), and Alex Ratner (Snorkel), to back a team on a mission to redefine the role of generative AI in games.
Welcome to the latest issue of your guide to AI, an editorialized newsletter covering the key developments in AI policy, research, industry, and start-ups over the last month.
Gene editing has the potential to solve fundamental challenges in agriculture, biotechnology and human health. CRISPR-based gene editors derived from microorganisms, although powerful, often show notable functional tradeoffs when ported into non-native environments, such as human cells. Artificial-intelligence-enabled design provides a powerful alternative with the potential to bypass evolutionary constraints and generate editors with optimal properties. Here, using large language models trained on biological diversity at scale, we demonstrate successful precision editing of the human genome with a programmable gene editor designed with artificial intelligence. To achieve this goal, we curated a dataset of more than 1 million CRISPR operons through systematic mining of 26 terabases of assembled genomes and metagenomes. We demonstrate the capacity of our models by generating 4.8× the number of protein clusters across CRISPR–Cas families found in nature and tailoring single-guide RNA sequences for Cas9-like effector proteins. Several of the generated gene editors show comparable or improved activity and specificity relative to SpCas9, the prototypical gene editing effector, while being 400 mutations away in sequence. Finally, we demonstrate that an artificial-intelligence-generated gene editor, denoted as OpenCRISPR-1, exhibits compatibility with base editing. We release OpenCRISPR-1 to facilitate broad, ethical use across research and commercial applications.
No one is coming to save Europe. We must build new primes here and now.
At this year’s RAAIS, Max Jaderberg of Isomorphic Labs delivered a talk that felt like the spiritual sequel to AlphaGo, only this time the board isn’t 19×19, it’s the human body. The stakes? The future of drug discovery and human health.
Isomorphic Labs, the biotech spinout from DeepMind, has declared a radical mission: solving disease. It's not a metaphor. It’s a systems-level wager that the same kinds of models that cracked Go can crack biology. Not only by digitizing wet labs, but by encoding the dynamics of biomolecules into machine-learnable substrates. If AlphaGo marked the start of AI systems inventing strategies never before seen in human play, Isomorphic wants AI to invent medicines we’d never stumble across in the lab.
This is a moonshot, a bet that biology is information science, and machine learning is our best bet at learning its language.
At this year’s RAAIS, I joined Adam Satariano of the New York Times and Dimitrios Kottas, founder of Delian Alliance Industries and formerly Apple's Special Projects Group, for a candid conversation about one of the most taboo, yet increasingly urgent topics in tech: AI and defense.
The timing couldn’t be more urgent. The war in Ukraine has shown how low-cost drones and software can upend conventional military doctrine. More recently, the escalation of hostilities with Iran has further underscored the volatility of global security and the growing relevance of digital warfare and drone-enabled asymmetric tactics. Meanwhile, a second Trump administration in the U.S. has accelerated shifts in European defense policy, with Germany, Poland, France, and others pushing military spending to levels unseen since the Cold War. The European Commission plans to deploy €800 billion toward defense by 2029.
So: where does that money go? For once, not just to the legacy primes. There’s an opening for new entrants, namely startups building fast, iterating with operational focus, and pushing beyond the traditional defense hardware playbook.
At RAAIS 2025, Andreas Blattmann, co-founder of Black Forest Labs, shared a deep dive into FLUX.1 Kontext, the startup’s newly released generative model for controllable image and video generation. The talk gave a rare look under the hood of one of the most technically ambitious and commercially relevant AI-first companies to emerge in Europe.
Andreas is no stranger to the field. He was among the original researchers behind Stable Diffusion and later worked with Stability AI before striking out on his own to build the model he always wanted: a unified, fast, and open infrastructure for pixel generation, whether from text, images, or combinations of the two.
What you need to know in AI across geopolitics, big tech, hardware, research, models, datasets, financings and exits over the last 4 weeks.
Two years ago, I met Mati Staniszewski in a time before AI voices were good enough to fool anyone. ElevenLabs hadn’t launched yet, but their vision was clear: voice was broken, and they were going to fix it.
Today, ElevenLabs is one of the fastest-moving companies in the agentic voice space. Their platform powers narration for authors, dubbing for studios, real-time agents for enterprises, and everything in between. At the 9th Research and Applied AI Summit in London, Mati and I recounted the story of how they got here, where they're going next, and the lessons learned for any founder building in AI.
If 2016 was the year AI shocked the world by mastering Go, 2025 is shaping up to be the year AI systems learn to innovate. This is the thesis of Ed Hughes, long-time researcher at Google DeepMind and one of the few voices charting a credible path toward AI systems capable of doing science.
In his closing talk at RAAIS this year, Hughes argued that we’re entering a new phase in the evolution of artificial intelligence: one where open-endedness becomes the central organizing principle. Not just solving problems, but defining them. Not just predicting the next token, but surfacing previously unknown unknowns. If he’s right, the next generation of AI systems won’t just be tools, they’ll be participants in the scientific process itself.
At this year's RAAIS, we convened a panel with Lionel Laurent (Bloomberg), Chris Yiu (Meta), and Benedict Macon-Cooney (Tony Blair Institute) to dissect the uneasy relationship between AI and the state. This conversation, grounded in experience across Whitehall, Big Tech, and Brussels, pulled no punches on where things stand and what still needs to change.
When poolside co-founder and CTO Eiso Kant stepped on stage at RAAIS 2025, he didn’t deliver the kind of slick, pre-baked keynote you might expect from the co-founder of one of the most ambitious AI companies in the world. Instead, Kant opted for a more experimental approach: a candid walkthrough of poolside’s beliefs, software systems, and bets on how to build AGI. The result was one of the most revealing sessions of the summit.
Kant’s message was clear: if you want to compete at the frontier of model capabilities, the secret isn’t just scale, it’s iteration speed. And to iterate fast, you need infrastructure that matches the ambition of your research. This is the thesis behind poolside’s “model factory,” a production-grade system for turning ideas into results at industrial speed.
Today, we release v4 of the State of AI Report Compute Index in collaboration with Zeta Alpha.
You'll now find updated counts as of June 2025 for AI research papers using chips from NVIDIA, TPUs, Apple, Huawei, AMD, ASICs, FPGAs, and AI semi startups, as well as updated A100 and H100/200 cluster sizes. We also include new data on the most and least commonly used chips for specific research topic areas.
Across the technology investing world, investors are scaling their bets on a seductive thesis: Generative AI will transform low-margin service businesses into high-margin software companies. Several well-known platform venture firms have committed billions to this strategy and have begun to make their bets.
This essay is co-authored by Nathan Benaich and Nikola Mrksic, CEO of PolyAI.
At this year’s RAAIS, Paige Bailey of Google DeepMind delivered a talk on taking AI research to production, with interactive demos and a clear thesis: “I’m going to show you how to automate significant parts of your work.” Generative models are now a co-author, a debugger, a lab assistant, a video editor. And increasingly, models are doing the work behind the curtain while you coach and edit. Here’s her talk and a narrative of the key insights:
A decade is a familiar yardstick for measuring progress. In that time, we tend to expect steady, incremental change, because historically, meaningful change has unfolded slowly. In AI, however, the last ten years have been a story of "gradually, then suddenly". The weekly model launches we are now accustomed to feel incremental in the heat of the moment, but a ten-year look back reveals a landscape transformed by something that feels closer to magic.
At our 9th Research and Applied AI Summit (RAAIS), we brought together the people building the next decade. Researchers, founders, and policymakers who have gone from writing foundational papers to deploying infrastructure and policies at global scale. And if there’s one thing they agree on, it’s this: ten years in, we are still at the beginning.
On AI factories, sovereign AI and what this means for national AI strategies.
This article originally appeared on Fortune.com and can be read here for free.
What you need to know in AI across geopolitics, big tech, hardware, research, models, datasets, financings and exits over the last 4 weeks.
Our read of the UK's Ministry of Defence Strategic Defence Review 2025.
Bringing you the best of AI today and what’s coming tomorrow.