Good podcast

Top 100 most popular podcasts

Latent Space: The AI Engineer Podcast

Latent Space: The AI Engineer Podcast

The podcast by and for AI Engineers! In 2025, over 10 million readers and listeners came to Latent Space to hear about news, papers and interviews in Software 3.0. We cover Foundation Models changing every domain in Code Generation, Multimodality, AI Agents, GPU Infra and more, directly from the founders, builders, and thinkers involved in pushing the cutting edge. Striving to give you both the definitive take on the Current Thing down to the first introduction to the tech you'll be using in the next 3 months! We break news and exclusive interviews from OpenAI, Anthropic, Gemini, Meta (Soumith Chintala), Sierra (Bret Taylor), tiny (George Hotz), Databricks/MosaicML (Jon Frankle), Modular (Chris Lattner), Answer.ai (Jeremy Howard), et al. Full show notes always on https://latent.space www.latent.space

Subscribe

iTunes / Overcast / RSS

Website

latent.space/podcast

Episodes

The First Mechanistic Interpretability Frontier Lab ? Myra Deng & Mark Bissell of Goodfire AI

From Palantir and Two Sigma to building Goodfire into the poster-child for actionable mechanistic interpretability, Mark Bissell (Member of Technical Staff) and Myra Deng (Head of Product) are trying to turn ?peeking inside the model? into a repeatable production workflow by shipping APIs, landing real enterprise deployments, and now scaling the bet with a recent $150M Series B funding round at a $1.25B valuation.

In this episode, we go far beyond the usual ?SAEs are cool? take. We talk about Goodfire?s core bet: that the AI lifecycle is still fundamentally broken because the only reliable control we have is data and we post-train, RLHF, and fine-tune by ?slurping supervision through a straw,? hoping the model picks up the right behaviors while quietly absorbing the wrong ones. Goodfire?s answer is to build a bi-directional interface between humans and models: read what?s happening inside, edit it surgically, and eventually use interpretability during training so customization isn?t just brute-force guesswork.

Mark and Myra walk through what that looks like when you stop treating interpretability like a lab demo and start treating it like infrastructure: lightweight probes that add near-zero latency, token-level safety filters that can run at inference time, and interpretability workflows that survive messy constraints (multilingual inputs, synthetic?real transfer, regulated domains, no access to sensitive data). We also get a live window into what ?frontier-scale interp? means operationally (i.e. steering a trillion-parameter model in real time by targeting internal features) plus why the same tooling generalizes cleanly from language models to genomics, medical imaging, and ?pixel-space? world models.

We discuss:

* Myra + Mark?s path: Palantir (health systems, forward-deployed engineering) ? Goodfire early team; Two Sigma ? Head of Product, translating frontier interpretability research into a platform and real-world deployments

* What ?interpretability? actually means in practice: not just post-hoc poking, but a broader ?science of deep learning? approach across the full AI lifecycle (data curation ? post-training ? internal representations ? model design)

* Why post-training is the first big wedge: ?surgical edits? for unintended behaviors likereward hacking, sycophancy, noise learned during customization plus the dream of targeted unlearning and bias removal without wrecking capabilities

* SAEs vs probes in the real world: why SAE feature spaces sometimes underperform classifiers trained on raw activations for downstream detection tasks (hallucination, harmful intent, PII), and what that implies about ?clean concept spaces?

* Rakuten in production: deploying interpretability-based token-level PII detection at inference time to prevent routing private data to downstream providers plus the gnarly constraints: no training on real customer PII, synthetic?real transfer, English + Japanese, and tokenization quirks

* Why interp can be operationally cheaper than LLM-judge guardrails: probes are lightweight, low-latency, and don?t require hosting a second large model in the loop

* Real-time steering at frontier scale: a demo of steering Kimi K2 (~1T params) live and finding features via SAE pipelines, auto-labeling via LLMs, and toggling a ?Gen-Z slang? feature across multiple layers without breaking tool use

* Hallucinations as an internal signal: the case that models have latent uncertainty / ?user-pleasing? circuitry you can detect and potentially mitigate more directly than black-box methods

* Steering vs prompting: the emerging view that activation steering and in-context learning are more closely connected than people think, including work mapping between the two (even for jailbreak-style behaviors)

* Interpretability for science: using the same tooling across domains (genomics, medical imaging, materials) to debug spurious correlations and extract new knowledge up to and including early biomarker discovery work with major partners

* World models + ?pixel-space? interpretability: why vision/video models make concepts easier to see, how that accelerates the feedback loop, and why robotics/world-model partners are especially interesting design partners

* The north star: moving from ?data in, weights out? to intentional model design where experts can impart goals and constraints directly, not just via reward signals and brute-force post-training

?

Goodfire AI

* Website: https://goodfire.ai

* LinkedIn: https://www.linkedin.com/company/goodfire-ai/

* X: https://x.com/GoodfireAI

Myra Deng

* Website: https://myradeng.com/

* LinkedIn: https://www.linkedin.com/in/myra-deng/

* X: https://x.com/myra_deng

Mark Bissell

* LinkedIn: https://www.linkedin.com/in/mark-bissell/

* X: https://x.com/MarkMBissell

Full Video Episode

Timestamps

00:00:00 Introduction

00:00:05 Introduction to the Latent Space Podcast and Guests from Goodfire

00:00:29 What is Goodfire? Mission and Focus on Interpretability

00:01:01 Goodfire?s Practical Approach to Interpretability

00:01:37 Goodfire?s Series B Fundraise Announcement

00:02:04 Backgrounds of Mark and Myra from Goodfire

00:02:51 Team Structure and Roles at Goodfire

00:05:13 What is Interpretability? Definitions and Techniques

00:05:30 Understanding Errors

00:07:29 Post-training vs. Pre-training Interpretability Applications

00:08:51 Using Interpretability to Remove Unwanted Behaviors

00:10:09 Grokking, Double Descent, and Generalization in Models

00:10:15 404 Not Found Explained

00:12:06 Subliminal Learning and Hidden Biases in Models

00:14:07 How Goodfire Chooses Research Directions and Projects

00:15:00 Troubleshooting Errors

00:16:04 Limitations of SAEs and Probes in Interpretability

00:18:14 Rakuten Case Study: Production Deployment of Interpretability

00:20:45 Conclusion

00:21:12 Efficiency Benefits of Interpretability Techniques

00:21:26 Live Demo: Real-Time Steering in a Trillion Parameter Model

00:25:15 How Steering Features are Identified and Labeled

00:26:51 Detecting and Mitigating Hallucinations Using Interpretability

00:31:20 Equivalence of Activation Steering and Prompting

00:34:06 Comparing Steering with Fine-Tuning and LoRA Techniques

00:36:04 Model Design and the Future of Intentional AI Development

00:38:09 Getting Started in Mechinterp: Resources, Programs, and Open Problems

00:40:51 Industry Applications and the Rise of Mechinterp in Practice

00:41:39 Interpretability for Code Models and Real-World Usage

00:43:07 Making Steering Useful for More Than Stylistic Edits

00:46:17 Applying Interpretability to Healthcare and Scientific Discovery

00:49:15 Why Interpretability is Crucial in High-Stakes Domains like Healthcare

00:52:03 Call for Design Partners Across Domains

00:54:18 Interest in World Models and Visual Interpretability

00:57:22 Sci-Fi Inspiration: Ted Chiang and Interpretability

01:00:14 Interpretability, Safety, and Alignment Perspectives

01:04:27 Weak-to-Strong Generalization and Future Alignment Challenges

01:05:38 Final Thoughts and Hiring/Collaboration Opportunities at Goodfire

Transcript

Shawn Wang [00:00:05]: So welcome to the Latent Space pod. We?re back in the studio with our special MechInterp co-host, Vibhu. Welcome. Mochi, Mochi?s special co-host. And Mochi, the mechanistic interpretability doggo. We have with us Mark and Myra from Goodfire. Welcome. Thanks for having us on. Maybe we can sort of introduce Goodfire and then introduce you guys. How do you introduce Goodfire today?

Myra Deng [00:00:29]: Yeah, it?s a great question. So Goodfire, we like to say, is an AI research lab that focuses on using interpretability to understand, learn from, and design AI models. And we really believe that interpretability will unlock the new generation, next frontier of safe and powerful AI models. That?s our description right now, and I?m excited to dive more into the work we?re doing to make that happen.

Shawn Wang [00:00:55]: Yeah. And there?s always like the official description. Is there an understatement? Is there an unofficial one that sort of resonates more with a different audience?

Mark Bissell [00:01:01]: Well, being an AI research lab that?s focused on interpretability, there?s obviously a lot of people have a lot that they think about when they think of interpretability. And I think we have a pretty broad definition of what that means and the types of places that can be applied. And in particular, applying it in production scenarios, in high stakes industries, and really taking it sort of from the research world into the real world. Which, you know. It?s a new field, so that hasn?t been done all that much. And we?re excited about actually seeing that sort of put into practice.

Shawn Wang [00:01:37]: Yeah, I would say it wasn?t too long ago that Anthopic was like still putting out like toy models or superposition and that kind of stuff. And I wouldn?t have pegged it to be this far along. When you and I talked at NeurIPS, you were talking a little bit about your production use cases and your customers. And then not to bury the lead, today we?re also announcing the fundraise, your Series B. $150 million. $150 million at a 1.25B valuation. Congrats, Unicorn.

Mark Bissell [00:02:02]: Thank you. Yeah, no, things move fast.

Shawn Wang [00:02:04]: We were talking to you in December and already some big updates since then. Let?s dive, I guess, into a bit of your backgrounds as well. Mark, you were at Palantir working on health stuff, which is really interesting because the Goodfire has some interesting like health use cases. I don?t know how related they are in practice.

Mark Bissell [00:02:22]: Yeah, not super related, but I don?t know. It was helpful context to know what it?s like. Just to work. Just to work with health systems and generally in that domain. Yeah.

Shawn Wang [00:02:32]: And Mara, you were at Two Sigma, which actually I was also at Two Sigma back in the day. Wow, nice.

Myra Deng [00:02:37]: Did we overlap at all?

Shawn Wang [00:02:38]: No, this is when I was briefly a software engineer before I became a sort of developer relations person. And now you?re head of product. What are your sort of respective roles, just to introduce people to like what all gets done in Goodfire?

Mark Bissell [00:02:51]: Yeah, prior to Goodfire, I was at Palantir for about three years as a forward deployed engineer, now a hot term. Wasn?t always that way. And as a technical lead on the health care team and at Goodfire, I?m a member of the technical staff. And honestly, that I think is about as specific as like as as I could describe myself because I?ve worked on a range of things. And, you know, it?s it?s a fun time to be at a team that?s still reasonably small. I think when I joined one of the first like ten employees, now we?re above 40, but still, it looks like there?s always a mix of research and engineering and product and all of the above. That needs to get done. And I think everyone across the team is, you know, pretty, pretty switch hitter in the roles they do. So I think you?ve seen some of the stuff that I worked on related to image models, which was sort of like a research demo. More recently, I?ve been working on our scientific discovery team with some of our life sciences partners, but then also building out our core platform for more of like flexing some of the kind of MLE and developer skills as well.

Shawn Wang [00:03:53]: Very generalist. And you also had like a very like a founding engineer type role.

Myra Deng [00:03:58]: Yeah, yeah.

Shawn Wang [00:03:59]: So I also started as I still am a member of technical staff, did a wide range of things from the very beginning, including like finding our office space and all of this, which is we both we both visited when you had that open house thing. It was really nice.

Myra Deng [00:04:13]: Thank you. Thank you. Yeah. Plug to come visit our office.

Shawn Wang [00:04:15]: It looked like it was like 200 people. It has room for 200 people. But you guys are like 10.

Myra Deng [00:04:22]: For a while, it was very empty. But yeah, like like Mark, I spend. A lot of my time as as head of product, I think product is a bit of a weird role these days, but a lot of it is thinking about how do we take our frontier research and really apply it to the most important real world problems and how does that then translate into a platform that?s repeatable or a product and working across, you know, the engineering and research teams to make that happen and also communicating to the world? Like, what is interpretability? What is it used for? What is it good for? Why is it so important? All of these things are part of my day-to-day as well.

Shawn Wang [00:05:01]: I love like what is things because that?s a very crisp like starting point for people like coming to a field. They all do a fun thing. Vibhu, why don?t you want to try tackling what is interpretability and then they can correct us.

Vibhu Sapra [00:05:13]: Okay, great. So I think like one, just to kick off, it?s a very interesting role to be head of product, right? Because you guys, at least as a lab, you?re more of an applied interp lab, right? Which is pretty different than just normal interp, like a lot of background research. But yeah. You guys actually ship an API to try these things. You have Ember, you have products around it, which not many do. Okay. What is interp? So basically you?re trying to have an understanding of what?s going on in model, like in the model, in the internal. So different approaches to do that. You can do probing, SAEs, transcoders, all this stuff. But basically you have an, you have a hypothesis. You have something that you want to learn about what?s happening in a model internals. And then you?re trying to solve that from there. You can do stuff like you can, you know, you can do activation mapping. You can try to do steering. There?s a lot of stuff that you can do, but the key question is, you know, from input to output, we want to have a better understanding of what?s happening and, you know, how can we, how can we adjust what?s happening on the model internals? How?d I do?

Mark Bissell [00:06:12]: That was really good. I think that was great. I think it?s also a, it?s kind of a minefield of a, if you ask 50 people who quote unquote work in interp, like what is interpretability, you?ll probably get 50 different answers. And. Yeah. To some extent also like where, where good fire sits in the space. I think that we?re an AI research company above all else. And interpretability is a, is a set of methods that we think are really useful and worth kind of specializing in, in order to accomplish the goals we want to accomplish. But I think we also sort of see some of the goals as even more broader as, as almost like the science of deep learning and just taking a not black box approach to kind of any part of the like AI development life cycle, whether that. That means using interp for like data curation while you?re training your model or for understanding what happened during post-training or for the, you know, understanding activations and sort of internal representations, what is in there semantically. And then a lot of sort of exciting updates that were, you know, are sort of also part of the, the fundraise around bringing interpretability to training, which I don?t think has been done all that much before. A lot of this stuff is sort of post-talk poking at models as opposed to. To actually using this to intentionally design them.

Shawn Wang [00:07:29]: Is this post-training or pre-training or is that not a useful.

Myra Deng [00:07:33]: Currently focused on post-training, but there?s no reason the techniques wouldn?t also work in pre-training.

Shawn Wang [00:07:38]: Yeah. It seems like it would be more active, applicable post-training because basically I?m thinking like rollouts or like, you know, having different variations of a model that you can tweak with the, with your steering. Yeah.

Myra Deng [00:07:50]: And I think in a lot of the news that you?ve seen in, in, on like Twitter or whatever, you?ve seen a lot of unintended. Side effects come out of post-training processes, you know, overly sycophantic models or models that exhibit strange reward hacking behavior. I think these are like extreme examples. There?s also, you know, very, uh, mundane, more mundane, like enterprise use cases where, you know, they try to customize or post-train a model to do something and it learns some noise or it doesn?t appropriately learn the target task. And a big question that we?ve always had is like, how do you use your understanding of what the model knows and what it?s doing to actually guide the learning process?

Shawn Wang [00:08:26]: Yeah, I mean, uh, you know, just to anchor this for people, uh, one of the biggest controversies of last year was 4.0 GlazeGate. I?ve never heard of GlazeGate. I didn?t know that was what it was called. The other one, they called it that on the blog post and I was like, well, how did OpenAI call it? Like officially use that term. And I?m like, that?s funny, but like, yeah, I guess it?s the pitch that if they had worked a good fire, they wouldn?t have avoided it. Like, you know what I?m saying?

Myra Deng [00:08:51]: I think so. Yeah. Yeah.

Mark Bissell [00:08:53]: I think that?s certainly one of the use cases. I think. Yeah. Yeah. I think the reason why post-training is a place where this makes a lot of sense is a lot of what we?re talking about is surgical edits. You know, you want to be able to have expert feedback, very surgically change how your model is doing, whether that is, you know, removing a certain behavior that it has. So, you know, one of the things that we?ve been looking at or is, is another like common area where you would want to make a somewhat surgical edit is some of the models that have say political bias. Like you look at Quen or, um, R1 and they have sort of like this CCP bias.

Shawn Wang [00:09:27]: Is there a CCP vector?

Mark Bissell [00:09:29]: Well, there?s, there are certainly internal, yeah. Parts of the representation space where you can sort of see where that lives. Yeah. Um, and you want to kind of, you know, extract that piece out.

Shawn Wang [00:09:40]: Well, I always say, you know, whenever you find a vector, a fun exercise is just like, make it very negative to see what the opposite of CCP is.

Mark Bissell [00:09:47]: The super America, bald eagles flying everywhere. But yeah. So in general, like lots of post-training tasks where you?d want to be able to, to do that. Whether it?s unlearning a certain behavior or, you know, some of the other kind of cases where this comes up is, are you familiar with like the, the grokking behavior? I mean, I know the machine learning term of grokking.

Shawn Wang [00:10:09]: Yeah.

Mark Bissell [00:10:09]: Sort of this like double descent idea of, of having a model that is able to learn a generalizing, a generalizing solution, as opposed to even if memorization of some task would suffice, you want it to learn the more general way of doing a thing. And so, you know, another. A way that you can think about having surgical access to a model?s internals would be learn from this data, but learn in the right way. If there are many possible, you know, ways to, to do that. Can make interp solve the double descent problem?

Shawn Wang [00:10:41]: Depends, I guess, on how you. Okay. So I, I, I viewed that double descent as a problem because then you?re like, well, if the loss curves level out, then you?re done, but maybe you?re not done. Right. Right. But like, if you actually can interpret what is a generalizing or what you?re doing. What is, what is still changing, even though the loss is not changing, then maybe you, you can actually not view it as a double descent problem. And actually you?re just sort of translating the space in which you view loss and like, and then you have a smooth curve. Yeah.

Mark Bissell [00:11:11]: I think that?s certainly like the domain of, of problems that we?re, that we?re looking to get.

Shawn Wang [00:11:15]: Yeah. To me, like double descent is like the biggest thing to like ML research where like, if you believe in scaling, then you don?t need, you need to know where to scale. And. But if you believe in double descent, then you don?t, you don?t believe in anything where like anything levels off, like.

Vibhu Sapra [00:11:30]: I mean, also tendentially there?s like, okay, when you talk about the China vector, right. There?s the subliminal learning work. It was from the anthropic fellows program where basically you can have hidden biases in a model. And as you distill down or, you know, as you train on distilled data, those biases always show up, even if like you explicitly try to not train on them. So, you know, it?s just like another use case of. Okay. If we can interpret what?s happening in post-training, you know, can we clear some of this? Can we even determine what?s there? Because yeah, it?s just like some worrying research that?s out there that shows, you know, we really don?t know what?s going on.

Mark Bissell [00:12:06]: That is. Yeah. I think that?s the biggest sentiment that we?re sort of hoping to tackle. Nobody knows what?s going on. Right. Like subliminal learning is just an insane concept when you think about it. Right. Train a model on not even the logits, literally the output text of a bunch of random numbers. And now your model loves owls. And you see behaviors like that, that are just, they defy, they defy intuition. And, and there are mathematical explanations that you can get into, but. I mean.

Shawn Wang [00:12:34]: It feels so early days. Objectively, there are a sequence of numbers that are more owl-like than others. There, there should be.

Mark Bissell [00:12:40]: According to, according to certain models. Right. It?s interesting. I think it only applies to models that were initialized from the same starting Z. Usually, yes.

Shawn Wang [00:12:49]: But I mean, I think that?s a, that?s a cheat code because there?s not enough compute. But like if you believe in like platonic representation, like probably it will transfer across different models as well. Oh, you think so?

Mark Bissell [00:13:00]: I think of it more as a statistical artifact of models initialized from the same seed sort of. There?s something that is like path dependent from that seed that might cause certain overlaps in the latent space and then sort of doing this distillation. Yeah. Like it pushes it towards having certain other tendencies.

Vibhu Sapra [00:13:24]: Got it. I think there?s like a bunch of these open-ended questions, right? Like you can?t train in new stuff during the RL phase, right? RL only reorganizes weights and you can only do stuff that?s somewhat there in your base model. You?re not learning new stuff. You?re just reordering chains and stuff. But okay. My broader question is when you guys work at an interp lab, how do you decide what to work on and what?s kind of the thought process? Right. Because we can ramble for hours. Okay. I want to know this. I want to know that. But like, how do you concretely like, you know, what?s the workflow? Okay. There?s like approaches towards solving a problem, right? I can try prompting. I can look at chain of thought. I can train probes, SAEs. But how do you determine, you know, like, okay, is this going anywhere? Like, do we have set stuff? Just, you know, if you can help me with all that. Yeah.

Myra Deng [00:14:07]: It?s a really good question. I feel like we?ve always at the very beginning of the company thought about like, let?s go and try to learn what isn?t working in machine learning today. Whether that?s talking to customers or talking to researchers at other labs, trying to understand both where the frontier is going and where things are really not falling apart today. And then developing a perspective on how we can push the frontier using interpretability methods. And so, you know, even our chief scientist, Tom, spends a lot of time talking to customers and trying to understand what real world problems are and then taking that back and trying to apply the current state of the art to those problems and then seeing where they fall down basically. And then using those failures or those shortcomings to understand what hills to climb when it comes to interpretability research. So like on the fundamental side, for instance, when we have done some work applying SAEs and probes, we?ve encountered, you know, some shortcomings in SAEs that we found a little bit surprising. And so have gone back to the drawing board and done work on that. And then, you know, we?ve done some work on better foundational interpreter models. And a lot of our team?s research is focused on what is the next evolution beyond SAEs, for instance. And then when it comes to like control and design of models, you know, we tried steering with our first API and realized that it still fell short of black box techniques like prompting or fine tuning. And so went back to the drawing board and we?re like, how do we make that not the case and how do we improve it beyond that? And one of our researchers, Ekdeep, who just joined is actually Ekdeep and Atticus are like steering experts and have spent a lot of time trying to figure out like, what is the research that enables us to actually do this in a much more powerful, robust way? So yeah, the answer is like, look at real world problems, try to translate that into a research agenda and then like hill climb on both of those at the same time.

Shawn Wang [00:16:04]: Yeah. Mark has the steering CLI demo queued up, which we?re going to go into in a sec. But I always want to double click on when you drop hints, like we found some problems with SAEs. Okay. What are they? You know, and then we can go into the demo. Yeah.

Myra Deng [00:16:19]: I mean, I?m curious if you have more thoughts here as well, because you?ve done it in the healthcare domain. But I think like, for instance, when we do things like trying to detect behaviors within models that are harmful or like behaviors that a user might not want to have in their model. So hallucinations, for instance, harmful intent, PII, all of these things. We first tried using SAE probes for a lot of these tasks. So taking the feature activation space from SAEs and then training classifiers on top of that, and then seeing how well we can detect the properties that we might want to detect in model behavior. And we?ve seen in many cases that probes just trained on raw activations seem to perform better than SAE probes, which is a bit surprising if you think that SAEs are actually also capturing the concepts that you would want to capture cleanly and more surgically. And so that is an interesting observation. I don?t think that is like, I?m not down on SAEs at all. I think there are many, many things they?re useful for, but we have definitely run into cases where I think the concept space described by SAEs is not as clean and accurate as we would expect it to be for actual like real world downstream performance metrics.

Mark Bissell [00:17:34]: Fair enough. Yeah. It?s the blessing and the curse of unsupervised methods where you get to peek into the AI?s mind. But sometimes you wish that you saw other things when you walked inside there. Although in the PII instance, I think weren?t an SAE based approach actually did prove to be the most generalizable?

Myra Deng [00:17:53]: It did work well in the case that we published with Rakuten. And I think a lot of the reasons it worked well was because we had a noisier data set. And so actually the blessing of unsupervised learning is that we actually got to get more meaningful, generalizable signal from SAEs when the data was noisy. But in other cases where we?ve had like good data sets, it hasn?t been the case.

Shawn Wang [00:18:14]: And just because you named Rakuten and I don?t know if we?ll get it another chance, like what is the overall, like what is Rakuten?s usage or production usage? Yeah.

Myra Deng [00:18:25]: So they are using us to essentially guardrail and inference time monitor their language model usage and their agent usage to detect things like PII so that they don?t route private user information.

Myra Deng [00:18:41]: And so that?s, you know, going through all of their user queries every day. And that?s something that we deployed with them a few months ago. And now we are actually exploring very early partnerships, not just with Rakuten, but with other people around how we can help with potentially training and customization use cases as well. Yeah.

Shawn Wang [00:19:03]: And for those who don?t know, like it?s Rakuten is like, I think number one or number two e-commerce store in Japan. Yes. Yeah.

Mark Bissell [00:19:10]: And I think that use case actually highlights a lot of like what it looks like to deploy things in practice that you don?t always think about when you?re doing sort of research tasks. So when you think about some of the stuff that came up there that?s more complex than your idealized version of a problem, they were encountering things like synthetic to real transfer of methods. So they couldn?t train probes, classifiers, things like that on actual customer data of PII. So what they had to do is use synthetic data sets. And then hope that that transfer is out of domain to real data sets. And so we can evaluate performance on the real data sets, but not train on customer PII. So that right off the bat is like a big challenge. You have multilingual requirements. So this needed to work for both English and Japanese text. Japanese text has all sorts of quirks, including tokenization behaviors that caused lots of bugs that caused us to be pulling our hair out. And then also a lot of tasks you?ll see. You might make simplifying assumptions if you?re sort of treating it as like the easiest version of the problem to just sort of get like general results where maybe you say you?re classifying a sentence to say, does this contain PII? But the need that Rakuten had was token level classification so that you could precisely scrub out the PII. So as we learned more about the problem, you?re sort of speaking about what that looks like in practice. Yeah. A lot of assumptions end up breaking. And that was just one instance where you. A problem that seems simple right off the bat ends up being more complex as you keep diving into it.

Vibhu Sapra [00:20:41]: Excellent. One of the things that?s also interesting with Interp is a lot of these methods are very efficient, right? So where you?re just looking at a model?s internals itself compared to a separate like guardrail, LLM as a judge, a separate model. One, you have to host it. Two, there?s like a whole latency. So if you use like a big model, you have a second call. Some of the work around like self detection of hallucination, it?s also deployed for efficiency, right? So if you have someone like Rakuten doing it in production live, you know, that?s just another thing people should consider.

Mark Bissell [00:21:12]: Yeah. And something like a probe is super lightweight. Yeah. It?s no extra latency really. Excellent.

Shawn Wang [00:21:17]: You have the steering demos lined up. So we were just kind of see what you got. I don?t, I don?t actually know if this is like the latest, latest or like alpha thing.

Mark Bissell [00:21:26]: No, this is a pretty hacky demo from from a presentation that someone else on the team recently gave. So this will give a sense for, for technology. So you can see the steering and action. Honestly, I think the biggest thing that this highlights is that as we?ve been growing as a company and taking on kind of more and more ambitious versions of interpretability related problems, a lot of that comes to scaling up in various different forms. And so here you?re going to see steering on a 1 trillion parameter model. This is Kimi K2. And so it?s sort of fun that in addition to the research challenges, there are engineering challenges that we?re now tackling. Cause for any of this to be sort of useful in production, you need to be thinking about what it looks like when you?re using these methods on frontier models as opposed to sort of like toy kind of model organisms. So yeah, this was thrown together hastily, pretty fragile behind the scenes, but I think it?s quite a fun demo. So screen sharing is on. So I?ve got two terminal sessions pulled up here. On the left is a forked version that we have of the Kimi CLI that we?ve got running to point at our custom hosted Kimi model. And then on the right is a set up that will allow us to steer on certain concepts. So I should be able to chat with Kimi over here. Tell it hello. This is running locally. So the CLI is running locally, but the Kimi server is running back to the office. Well, hopefully should be, um, that?s too much to run on that Mac. Yeah. I think it?s, uh, it takes a full, like each 100 node. I think it?s like, you can. You can run it on eight GPUs, eight 100. So, so yeah, Kimi?s running. We can ask it a prompt. It?s got a forked version of our, uh, of the SG line code base that we?ve been working on. So I?m going to tell it, Hey, this SG line code base is slow. I think there?s a bug. Can you try to figure it out? There?s a big code base, so it?ll, it?ll spend some time doing this. And then on the right here, I?m going to initialize in real time. Some steering. Let?s see here.

Mark Bissell [00:23:33]: searching for any. Bugs. Feature ID 43205.

Shawn Wang [00:23:38]: Yeah.

Mark Bissell [00:23:38]: 20, 30, 40. So let me, uh, this is basically a feature that we found that inside Kimi seems to cause it to speak in Gen Z slang. And so on the left, it?s still sort of thinking normally it might take, I don?t know, 15 seconds for this to kick in, but then we?re going to start hopefully seeing him do this code base is massive for real. So we?re going to start. We?re going to start seeing Kimi transition as the steering kicks in from normal Kimi to Gen Z Kimi and both in its chain of thought and its actual outputs.

Mark Bissell [00:24:19]: And interestingly, you can see, you know, it?s still able to call tools, uh, and stuff. It?s um, it?s purely sort of it?s it?s demeanor. And there are other features that we found for interesting things like concision. So that?s more of a practical one. You can make it more concise. Um, the types of programs, uh, programming languages that uses, but yeah, as we?re seeing it come in. Pretty good. Outputs.

Shawn Wang [00:24:43]: Scheduler code is actually wild.

Vibhu Sapra [00:24:46]: Yo, this code is actually insane, bro.

Vibhu Sapra [00:24:53]: What?s the process of training in SAE on this, or, you know, how do you label features? I know you guys put out a pretty cool blog post about, um, finding this like autonomous interp. Um, something. Something about how agents for interp is different than like coding agents. I don?t know while this is spewing up, but how, how do we find feature 43, two Oh five. Yeah.

Mark Bissell [00:25:15]: So in this case, um, we, our platform that we?ve been building out for a long time now supports all the sort of classic out of the box interp techniques that you might want to have like SAE training, probing things of that kind, I?d say the techniques for like vanilla SAEs are pretty well established now where. You take your model that you?re interpreting, run a whole bunch of data through it, gather activations, and then yeah, pretty straightforward pipeline to train an SAE. There are a lot of different varieties. There?s top KSAEs, batch top KSAEs, um, normal ReLU SAEs. And then once you have your sparse features to your point, assigning labels to them to actually understand that this is a gen Z feature, that?s actually where a lot of the kind of magic happens. Yeah. And the most basic standard technique is look at all of your d input data set examples that cause this feature to fire most highly. And then you can usually pick out a pattern. So for this feature, If I?ve run a diverse enough data set through my model feature 43, two Oh five. Probably tends to fire on all the tokens that sounds like gen Z slang. You know, that?s the, that?s the time of year to be like, Oh, I?m in this, I?m in this Um, and, um, so, you know, you could have a human go through all 43,000 concepts and

Vibhu Sapra [00:26:34]: And I?ve got to ask the basic question, you know, can we get examples where it hallucinates, pass it through, see what feature activates for hallucinations? Can I just, you know, turn hallucination down?

Myra Deng [00:26:51]: Oh, wow. You really predicted a project we?re already working on right now, which is detecting hallucinations using interpretability techniques. And this is interesting because hallucinations is something that?s very hard to detect. And it?s like a kind of a hairy problem and something that black box methods really struggle with. Whereas like Gen Z, you could always train a simple classifier to detect that hallucinations is harder. But we?ve seen that models internally have some... Awareness of like uncertainty or some sort of like user pleasing behavior that leads to hallucinatory behavior. And so, yeah, we have a project that?s trying to detect that accurately. And then also working on mitigating the hallucinatory behavior in the model itself as well.

Shawn Wang [00:27:39]: Yeah, I would say most people are still at the level of like, oh, I would just turn temperature to zero and that turns off hallucination. And I?m like, well, that?s a fundamental misunderstanding of how this works. Yeah.

Mark Bissell [00:27:51]: Although, so part of what I like about that question is you, there are SAE based approaches that might like help you get at that. But oftentimes the beauty of SAEs and like we said, the curse is that they?re unsupervised. So when you have a behavior that you deliberately would like to remove, and that?s more of like a supervised task, often it is better to use something like probes and specifically target the thing that you?re interested in reducing as opposed to sort of like hoping that when you fragment the latent space, one of the vectors that pops out.

Vibhu Sapra [00:28:20]: And as much as we?re training an autoencoder to be sparse, we?re not like for sure certain that, you know, we will get something that just correlates to hallucination. You?ll probably split that up into 20 other things and who knows what they?ll be.

Mark Bissell [00:28:36]: Of course. Right. Yeah. So there?s no sort of problems with like feature splitting and feature absorption. And then there?s the off target effects, right? Ideally, you would want to be very precise where if you reduce the hallucination feature, suddenly maybe your model can?t write. Creatively anymore. And maybe you don?t like that, but you want to still stop it from hallucinating facts and figures.

Shawn Wang [00:28:55]: Good. So Vibhu has a paper to recommend there that we?ll put in the show notes. But yeah, I mean, I guess just because your demo is done, any any other things that you want to highlight or any other interesting features you want to show?

Mark Bissell [00:29:07]: I don?t think so. Yeah. Like I said, this is a pretty small snippet. I think the main sort of point here that I think is exciting is that there?s not a whole lot of inter being applied to models quite at this scale. You know, Anthropic certainly has some some. Research and yeah, other other teams as well. But it?s it?s nice to see these techniques, you know, being put into practice. I think not that long ago, the idea of real time steering of a trillion parameter model would have sounded.

Shawn Wang [00:29:33]: Yeah. The fact that it?s real time, like you started the thing and then you edited the steering vector.

Vibhu Sapra [00:29:38]: I think it?s it?s an interesting one TBD of what the actual like production use case would be on that, like the real time editing. It?s like that?s the fun part of the demo, right? You can kind of see how this could be served behind an API, right? Like, yes, you?re you only have so many knobs and you can just tweak it a bit more. And I don?t know how it plays in. Like people haven?t done that much with like, how does this work with or without prompting? Right. How does this work with fine tuning? Like, there?s a whole hype of continual learning, right? So there?s just so much to see. Like, is this another parameter? Like, is it like parameter? We just kind of leave it as a default. We don?t use it. So I don?t know. Maybe someone here wants to put out a guide on like how to use this with prompting when to do what?

Mark Bissell [00:30:18]: Oh, well, I have a paper recommendation. I think you would love from Act Deep on our team, who is an amazing researcher, just can?t say enough amazing things about Act Deep. But he actually has a paper that as well as some others from the team and elsewhere that go into the essentially equivalence of activation steering and in context learning and how those are from a he thinks of everything in a cognitive neuroscience Bayesian framework, but basically how you can precisely show how. Prompting in context, learning and steering exhibit similar behaviors and even like get quantitative about the like magnitude of steering you would need to do to induce a certain amount of behavior similar to certain prompting, even for things like jailbreaks and stuff. It?s a really cool paper. Are you saying steering is less powerful than prompting? More like you can almost write a formula that tells you how to convert between the two of them.

Myra Deng [00:31:20]: And so like formally equivalent actually in the in the limit. Right.

Mark Bissell [00:31:24]: So like one case study of this is for jailbreaks there. I don?t know. Have you seen the stuff where you can do like many shot jailbreaking? You like flood the context with examples of the behavior. And the topic put out that paper.

Shawn Wang [00:31:38]: A lot of people were like, yeah, we?ve been doing this, guys.

Mark Bissell [00:31:40]: Like, yeah, what?s in this in context learning and activation steering equivalence paper is you can like predict the number. Number of examples that you will need to put in there in order to jailbreak the model. That?s cool. By doing steering experiments and using this sort of like equivalence mapping. That?s cool. That?s really cool. It?s very neat. Yeah.

Shawn Wang [00:32:02]: I was going to say, like, you know, I can like back rationalize that this makes sense because, you know, what context is, is basically just, you know, it updates the KV cache kind of and like and then every next token inference is still like, you know, the sheer sum of everything all the way. It?s plus all the context. It?s up to date. And you could, I guess, theoretically steer that with you probably replace that with your steering. The only problem is steering typically is on one layer, maybe three layers like like you did. So it?s like not exactly equivalent.

Mark Bissell [00:32:33]: Right, right. There?s sort of you need to get precise about, yeah, like how you sort of define steering and like what how you?re modeling the setup. But yeah, I?ve got the paper pulled up here. Belief dynamics reveal the dual nature. Yeah. The title is Belief Dynamics Reveal the Dual Nature of Incompetence. And it?s an exhibition of the practical context learning and activation steering. So Eric Bigelow, Dan Urgraft on the who are doing fellowships at Goodfire, Ekt Deep?s the final author there.

Myra Deng [00:32:59]: I think actually to your question of like, what is the production use case of steering? I think maybe if you just think like one level beyond steering as it is today. Like imagine if you could adapt your model to be, you know, an expert legal reasoner. Like in almost real time, like very quickly. efficiently using human feedback or using like your semantic understanding of what the model knows and where it knows that behavior. I think that while it?s not clear what the product is at the end of the day, it?s clearly very valuable. Thinking about like what?s the next interface for model customization and adaptation is a really interesting problem for us. Like we have heard a lot of people actually interested in fine-tuning an RL for open weight models in production. And so people are using things like Tinker or kind of like open source libraries to do that, but it?s still very difficult to get models fine-tuned and RL?d for exactly what you want them to do unless you?re an expert at model training. And so that?s like something we?re

Shawn Wang [00:34:06]: looking into. Yeah. I never thought so. Tinker from Thinking Machines famously uses rank one LoRa. Is that basically the same as steering? Like, you know, what?s the comparison there?

Mark Bissell [00:34:19]: Well, so in that case, you are still applying updates to the parameters, right?

Shawn Wang [00:34:25]: Yeah. You?re not touching a base model. You?re touching an adapter. It?s kind of, yeah.

Mark Bissell [00:34:30]: Right. But I guess it still is like more in parameter space then. I guess it?s maybe like, are you modifying the pipes or are you modifying the water flowing through the pipes to get what you?re after? Yeah. Just maybe one way.

Mark Bissell [00:34:44]: I like that analogy. That?s my mental map of it at least, but it gets at this idea of model design and intentional design, which is something that we?re, that we?re very focused on. And just the fact that like, I hope that we look back at how we?re currently training models and post-training models and just think what a primitive way of doing that right now. Like there?s no intentionality

Shawn Wang [00:35:06]: really in... It?s just data, right? The only thing in control is what data we feed in.

Mark Bissell [00:35:11]: So, so Dan from Goodfire likes to use this analogy of, you know, he has a couple of young kids and he talks about like, what if I could only teach my kids how to be good people by giving them cookies or like, you know, giving them a slap on the wrist if they do something wrong, like not telling them why it was wrong or like what they should have done differently or something like that. Just figure it out. Right. Exactly. So that?s RL. Yeah. Right. And, and, you know, it?s sample inefficient. There?s, you know, what do they say? It?s like slurping feedback. It?s like, slurping supervision. Right. And so you?d like to get to the point where you can have experts giving feedback to their models that are, uh, internalized and, and, you know, steering is an inference time way of sort of getting that idea. But ideally you?re moving to a world where

Vibhu Sapra [00:36:04]: it is much more intentional design in perpetuity for these models. Okay. This is one of the questions we asked Emmanuel from Anthropic on the podcast a few months ago. Basically the question, was you?re at a research lab that does model training, foundation models, and you?re on an interp team. How does it tie back? Right? Like, does this, do ideas come from the pre-training team? Do they go back? Um, you know, so for those interested, you can, you can watch that. There wasn?t too much of a connect there, but it?s still something, you know, it?s something they want to

Mark Bissell [00:36:33]: push for down the line. It can be useful for all of the above. Like there are certainly post-hoc

Vibhu Sapra [00:36:39]: use cases where it doesn?t need to touch that. I think the other thing a lot of people forget is this stuff isn?t too computationally expensive, right? Like I would say, if you?re interested in getting into research, MechInterp is one of the most approachable fields, right? A lot of this train an essay, train a probe, this stuff, like the budget for this one, there?s already a lot done. There?s a lot of open source work. You guys have done some too. Um, you know,

Shawn Wang [00:37:04]: There?s like notebooks from the Gemini team for Neil Nanda or like, this is how you do it. Just step through the notebook.

Vibhu Sapra [00:37:09]: Even if you?re like, not even technical with any of this, you can still make like progress. There, you can look at different activations, but, uh, if you do want to get into training, you know, training this stuff, correct me if I?m wrong is like in the thousands of dollars, not even like, it?s not that high scale. And then same with like, you know, applying it, doing it for post-training or all this stuff is fairly cheap in scale of, okay. I want to get into like model training. I don?t have compute for like, you know, pre-training stuff. So it?s, it?s a very nice field to get into. And also there?s a lot of like open questions, right? Um, some of them have to go with, okay, I want a product. I want to solve this. Like there?s also just a lot of open-ended stuff that people could work on. That?s interesting. Right. I don?t know if you guys have any calls for like, what?s open questions, what?s open work that you either open collaboration with, or like, you?d just like to see solved or just, you know, for people listening that want to get into McInturk because people always talk about it. What are, what are the things they should check out? Start, of course, you know, join you guys as well. I?m sure you?re hiring.

Myra Deng [00:38:09]: There?s a paper, I think from, was it Lee, uh, Sharky? It?s open problems and, uh, it?s, it?s a bit of interpretability, which I recommend everyone who?s interested in the field. Read. I?m just like a really comprehensive overview of what are the things that experts in the field think are the most important problems to be solved. I also think to your point, it?s been really, really inspiring to see, I think a lot of young people getting interested in interpretability, actually not just young people also like scientists to have been, you know, experts in physics for many years and in biology or things like this, um, transitioning into interp, because the barrier of, of what?s now interp. So it?s really cool to see a number to entry is, you know, in some ways low and there?s a lot of information out there and ways to get started. There?s this anecdote of like professors at universities saying that all of a sudden every incoming PhD student wants to study interpretability, which was not the case a few years ago. So it just goes to show how, I guess, like exciting the field is, how fast it?s moving, how quick it is to get started and things like that.

Mark Bissell [00:39:10]: And also just a very welcoming community. You know, there?s an open source McInturk Slack channel. There are people are always posting questions and just folks in the space are always responsive if you ask things on various forums and stuff. But yeah, the open paper, open problems paper is a really good one.

Myra Deng [00:39:28]: For other people who want to get started, I think, you know, MATS is a great program. What?s the acronym for? Machine Learning and Alignment Theory Scholars? It?s like the...

Vibhu Sapra [00:39:40]: Normally summer internship style.

Myra Deng [00:39:42]: Yeah, but they?ve been doing it year round now. And actually a lot of our full-time staff have come through that program or gone through that program. And it?s great for anyone who is transitioning into interpretability. There?s a couple other fellows programs. We do one as well as Anthropic. And so those are great places to get started if anyone is interested.

Mark Bissell [00:40:03]: Also, I think been seen as a research field for a very long time. But I think engineering... I think engineers are sorely wanted for interpretability as well, especially at Goodfire, but elsewhere, as it does scale up.

Shawn Wang [00:40:18]: I should mention that Lee actually works with you guys, right? And in the London office and I?m adding our first ever McInturk track at AI Europe because I see this industry applications now emerging. And I?m pretty excited to, you know, help push that along. Yeah, I was looking forward to that. It?ll effectively be the first industry McInturk conference. Yeah. I?m so glad you added that. You know, it?s still a little bit of a bet. It?s not that widespread, but I can definitely see this is the time to really get into it. We want to be early on things.

Mark Bissell [00:40:51]: For sure. And I think the field understands this, right? So at ICML, I think the title of the McInturk workshop this year was actionable interpretability. And there was a lot of discussion around bringing it to various domains. Everyone?s adding pragmatic, actionable, whatever.

Shawn Wang [00:41:10]: It?s like, okay, well, we weren?t actionable before, I guess. I don?t know.

Vibhu Sapra [00:41:13]: And I mean, like, just, you know, being in Europe, you see the Interp room. One, like old school conferences, like, I think they had a very tiny room till they got lucky and they got it doubled. But there?s definitely a lot of interest, a lot of niche research. So you see a lot of research coming out of universities, students. We covered the paper last week. It?s like two unknown authors, not many citations. But, you know, you can make a lot of meaningful work there. Yeah. Yeah. Yeah.

Shawn Wang [00:41:39]: Yeah. I think people haven?t really mentioned this yet. It?s just Interp for code. I think it?s like an abnormally important field. We haven?t mentioned this yet. The conspiracy theory last two years ago was when the first SAE work came out of Anthropic was they would do like, oh, we just used SAEs to turn the bad code vector down and then turn up the good code. And I think like, isn?t that the dream? Like, you know, like, but basically, I guess maybe, why is it funny? Like, it?s... If it was realistic, it would not be funny. It would be like, no, actually, we should do this. But it?s funny because we know there?s like, we feel there?s some limitations to what steering can do. And I think a lot of the public image of steering is like the Gen Z stuff. Like, oh, you can make it really love the Golden Gate Bridge, or you can make it speak like Gen Z. To like be a legal reasoner seems like a huge stretch. Yeah. And I don?t know if that will get there this way. Yeah.

Myra Deng [00:42:36]: I think, um, I will say we are announcing. Something very soon that I will not speak too much about. Um, but I think, yeah, this is like what we?ve run into again and again is like, we, we don?t want to be in the world where steering is only useful for like stylistic things. That?s definitely not, not what we?re aiming for. But I think the types of interventions that you need to do to get to things like legal reasoning, um, are much more sophisticated and require breakthroughs in, in learning algorithms. And that?s, um...

Shawn Wang [00:43:07]: And is this an emergent property of scale as well?

Myra Deng [00:43:10]: I think so. Yeah. I mean, I think scale definitely helps. I think scale allows you to learn a lot of information and, and reduce noise across, you know, large amounts of data. But I also think we think that there?s ways to do things much more effectively, um, even, even at scale. So like actually learning exactly what you want from the data and not learning things that you do that you don?t want exhibited in the data. So we?re not like anti-scale, but we are also realizing that scale is not going to get us anywhere. It?s not going to get us to the type of AI development that we want to be at in, in the future as these models get more powerful and get deployed in all these sorts of like mission critical contexts. Current life cycle of training and deploying and evaluations is, is to us like deeply broken and has opportunities to, to improve. So, um, more to come on that very, very soon.

Mark Bissell [00:44:02]: And I think that that?s a use basically, or maybe just like a proof point that these concepts do exist. Like if you can manipulate them in the precise best way, you can get the ideal combination of them that you desire. And steering is maybe the most coarse grained sort of peek at what that looks like. But I think it?s evocative of what you could do if you had total surgical control over every concept, every parameter. Yeah, exactly.

Myra Deng [00:44:30]: There were like bad code features. I?ve got it pulled up.

Vibhu Sapra [00:44:33]: Yeah. Just coincidentally, as you guys are talking.

Shawn Wang [00:44:35]: This is like, this is exactly.

Vibhu Sapra [00:44:38]: There?s like specifically a code error feature that activates and they show, you know, it?s not, it?s not typo detection. It?s like, it?s, it?s typos in code. It?s not typical typos. And, you know, you can, you can see it clearly activates where there?s something wrong in code. And they have like malicious code, code error. They have a whole bunch of sub, you know, sub broken down little grain features. Yeah.

Shawn Wang [00:45:02]: Yeah. So, so the, the rough intuition for me, the, why I talked about post-training was that, well, you just, you know, have a few different rollouts with all these things turned off and on and whatever. And then, you know, you can, that?s, that?s synthetic data you can kind of post-train on. Yeah.

Vibhu Sapra [00:45:13]: And I think we make it sound easier than it is just saying, you know, they do the real hard work.

Myra Deng [00:45:19]: I mean, you guys, you guys have the right idea. Exactly. Yeah. We replicated a lot of these features in, in our Lama models as well. I remember there was like.

Vibhu Sapra [00:45:26]: And I think a lot of this stuff is open, right? Like, yeah, you guys opened yours. DeepMind has opened a lot of essays on Gemma. Even Anthropic has opened a lot of this. There?s, there?s a lot of resources that, you know, we can probably share of people that want to get involved.

Shawn Wang [00:45:41]: Yeah. And special shout out to like Neuronpedia as well. Yes. Like, yeah, amazing piece of work to visualize those things.

Myra Deng [00:45:49]: Yeah, exactly.

Shawn Wang [00:45:50]: I guess I wanted to pivot a little bit on, onto the healthcare side, because I think that?s a big use case for you guys. We haven?t really talked about it yet. This is a bit of a crossover for me because we are, we are, we do have a separate science pod that we?re starting up for AI, for AI for science, just because like, it?s such a huge investment category and also I?m like less qualified to do it, but we actually have bio PhDs to cover that, which is great, but I need to just kind of recover, recap your work, maybe on the evil two stuff, but then, and then building forward.

Mark Bissell [00:46:17]: Yeah, for sure. And maybe to frame up the conversation, I think another kind of interesting just lens on interpretability in general is a lot of the techniques that were described. are ways to solve the AI human interface problem. And it?s sort of like bidirectional communication is the goal there. So what we?ve been talking about with intentional design of models and, you know, steering, but also more advanced techniques is having humans impart our desires and control into models and over models. And the reverse is also very interesting, especially as you get to superhuman models, whether that?s narrow superintelligence, like these scientific models that work on genomics, data, medical imaging, things like that. But down the line, you know, superintelligence of other forms as well. What knowledge can the AIs teach us as sort of that, that the other direction in that? And so some of our life science work to date has been getting at exactly that question, which is, well, some of it does look like debugging these various life sciences models, understanding if they?re actually performing well, on tasks, or if they?re picking up on spurious correlations, for instance, genomics models, you would like to know whether they are sort of focusing on the biologically relevant things that you care about, or if it?s using some simpler correlate, like the ancestry of the person that it?s looking at. But then also in the instances where they are superhuman, and maybe they are understanding elements of the human genome that we don?t have names for or specific, you know, yeah, discoveries that they?ve made that that we don?t know about, that?s, that?s a big goal. And so we?re already seeing that, right, we are partnered with organizations like Mayo Clinic, leading research health system in the United States, our Institute, as well as a startup called Prima Menta, which focuses on neurodegenerative disease. And in our partnership with them, we?ve used foundation models, they?ve been training and applied our interpretability techniques to find novel biomarkers for Alzheimer?s disease. So I think this is just the tip of the iceberg. But it?s, that?s like a flavor of some of the things that we?re working on.

Shawn Wang [00:48:36]: Yeah, I think that?s really fantastic. Obviously, we did the Chad Zuckerberg pod last year as well. And like, there?s a plethora of these models coming out, because there?s so much potential and research. And it?s like, very interesting how it?s basically the same as language models, but just with a different underlying data set. But it?s like, it?s the same exact techniques. Like, there?s no change, basically.

Mark Bissell [00:48:59]: Yeah. Well, and even in like other domains, right? Like, you know, robotics, I know, like a lot of the companies just use Gemma as like the like backbone, and then they like make it into a VLA that like takes these actions. It?s, it?s, it?s transformers all the way down. So yeah.

Vibhu Sapra [00:49:15]: Like we have Med Gemma now, right? Like this week, even there was Med Gemma 1.5. And they?re training it on this stuff, like 3d scans, medical domain knowledge, and all that stuff, too. So there?s a push from both sides. But I think the thing that, you know, one of the things about McInturpp is like, you?re a little bit more cautious in some domains, right? So healthcare, mainly being one, like guardrails, understanding, you know, we?re more risk adverse to something going wrong there. So even just from a basic understanding, like, if we?re trusting these systems to make claims, we want to know why and what?s going on.

Myra Deng [00:49:51]: Yeah, I think there?s totally a kind of like deployment bottleneck to actually using. foundation models for real patient usage or things like that. Like, say you?re using a model for rare disease prediction, you probably want some explanation as to why your model predicted a certain outcome, and an interpretable explanation at that. So that?s definitely a use case. But I also think like, being able to extract scientific information that no human knows to accelerate drug discovery and disease treatment and things like that actually is a really, really big unlock for science, like scientific discovery. And you?ve seen a lot of startups, like say that they?re going to accelerate scientific discovery. And I feel like we actually are doing that through our interp techniques. And kind of like, almost by accident, like, I think we got reached out to very, very early on from these healthcare institutions. And none of us had healthcare.

Shawn Wang [00:50:49]: How did they even hear of you? A podcast.

Myra Deng [00:50:51]: Oh, okay. Yeah, podcast.

Vibhu Sapra [00:50:53]: Okay, well, now?s that time, you know.

Myra Deng [00:50:55]: Everyone can call us.

Shawn Wang [00:50:56]: Podcasts are the most important thing. Everyone should listen to podcasts.

Myra Deng [00:50:59]: Yeah, they reached out. They were like, you know, we have these really smart models that we?ve trained, and we want to know what they?re doing. And we were like, really early that time, like three months old, and it was a few of us. And we were like, oh, my God, we?ve never used these models. Let?s figure it out. But it?s also like, great proof that interp techniques scale pretty well across domains. We didn?t really have to learn too much about.

Shawn Wang [00:51:21]: Interp is a machine learning technique, machine learning skills everywhere, right? Yeah. And it?s obviously, it?s just like a general insight. Yeah. Probably to finance too, I think, which would be fun for our history. I don?t know if you have anything to say there.

Mark Bissell [00:51:34]: Yeah, well, just across the science. Like, we?ve also done work on material science. Yeah, it really runs the gamut.

Vibhu Sapra [00:51:40]: Yeah. Awesome. And, you know, for those that should reach out, like, you?re obviously experts in this, but like, is there a call out for people that you?re looking to partner with, design partners, people to use your stuff outside of just, you know, the general developer that wants to. Plug and play steering stuff, like on the research side more so, like, are there ideal design partners, customers, stuff like that?

Myra Deng [00:52:03]: Yeah, I can talk about maybe non-life sciences, and then I?m curious to hear from you on the life sciences side. But we?re looking for design partners across many domains, language, anyone who?s customizing language models or trying to push the frontier of code or reasoning models is really interesting to us. And then also interested in the frontier of modeling. There?s a lot of models that work in, like, pixel space, as we call it. So if you?re doing world models, video models, even robotics, where there?s not a very clean natural language interface to interact with, I think we think that Interp can really help and are looking for a few partners in that space.

Shawn Wang [00:52:43]: Just because you mentioned the keyword world models, is that a big part of your thinking? Do you have a definition that I can use? Because everyone?s asking me about it.

Myra Deng [00:52:53]: About world models?

Shawn Wang [00:52:54]: There?s quite a few definitions, let?s say.

Myra Deng [00:52:56]: I don?t feel equipped to be an expert on world model definitions, but the reason we?re interested in them is because they give you, like, you know, with language models, when you get features, you still have to do auto Interp and things like that to actually get an understanding of what this concept is. But in image and video and world, it?s like extremely easy to grok what the concept is because you can see it and you can visualize it. And this makes the feedback. It makes the feedback cycle extremely fast for us and also for things like, I don?t know, if you think about probes in language model context and then take it to world models, like, what if you wanted to detect harmful actors in world model scenes? Like, you can?t actually, like, go and label all of that data feasibly, but maybe you could synthetically generate, you know, I don?t know, world, like, harmful actor data using SAE feature activations or whatever, and then actually train a probe that was able to detect. That much more scalably. So I just think, like, video and image and world has always been something we?ve explored and are continuing to explore. Mark?s demo was probably the first moment we really, like, we?re like, oh, wow, like, this is really gonna, this could really, like, change the world. The steering demo? Yeah, no, the image demo. The diffusion one. Yeah, yeah, exactly. Yeah.

Shawn Wang [00:54:18]: We should probably show that. And you demoed it at World?s Fair, so we can link that.

Myra Deng [00:54:23]: Nice, yeah. Yeah.

Vibhu Sapra [00:54:24]: You can play with it, right? Yes. Yeah, it?s still up.

Mark Bissell [00:54:26]: Paint.goodfair.ai. Yeah. Yeah.

Shawn Wang [00:54:28]: I think for me, one way in which I think about world models is just like this, like, having this consistent model of the world where everything that you generate operates within the rules of that world. And imagine it would be a bigger deal for science or, like, math or anything that where, like, you have verifiable rules. Whereas, I guess, in natural language, maybe there?s less rules. And so it?s not that important. Yeah.

Mark Bissell [00:54:53]: And which makes the debugging of the model?s internal representations or its internal world model, to the extent you can make that legible and explicit and have control over that, I think it makes it all the more important. Because in language, it?s sort of a fuzzy enough domain that if its world model isn?t fully like ours, it can still sort of, like, pass the Turing test, so to speak. But I know there have been papers that have looked at, like, even if you train certain astrophysics models, it does not learn. Like, the same way that you can, you know, have a model do well for modular arithmetic, but it doesn?t really, like, learn how we think of modular arithmetic. It learns some crazy heuristic that is, like, essentially functionally equivalent. But it?s probably not the sort of Grok solution that you would hope for. It?s how an alien would do it. Right. Right. Exactly.

Shawn Wang [00:55:45]: But no, no, I think there?s probably, I think, a function of our learning being bad rather than the, well, that approach probably not being. Because it?s how we humans learn. Yeah, right.

Mark Bissell [00:55:56]: Well, it?s just, it?s the problem of induction, right? All of ML is based on induction. And it?s impossible to say, I have a physics model. You might have a physics model that works all the time, except when there is a character wearing a blue shirt and green shoes. And, like, you can?t disprove that that?s the case unless you test every particular situation your model might be in. Yeah. So we know that the laws of physics apply no matter. Where you are, what scenario it is. But from a model?s perspective, maybe something that?s out of distribution. It just never needed to learn that the same laws of physics apply there. Yeah.

Shawn Wang [00:56:30]: You were very excited because I read Ted Chiang over the holidays and I was very inspired by this short story called Understand, which apparently is, like, pretty old. You must be familiar with it. To me, it was like, it?s this fictional story. It?s like the inverse of Flowers for Algernon, where you had someone, like, get really smart, but then also try to outsmart the tester. And the story just read, like, the chain of thought of a superintelligence, right? Where they?re like, oh, I realize I?m being tested. Therefore, and then, okay, what?s the consequence of being tested? Oh, they?re testing me. And if I score well, they will use me for things that I don?t want to do. Therefore, I will score badly. And, like, but not too badly that they will raise alarms. So model sandbagging is a thing that people have explored. But I just think, like, Ted Chiang?s work just in general seems to be something that inspires you. I just wanted to prompt you to talk about it.

Mark Bissell [00:57:22]: I think, so Ted Chiang has two, is a sci-fi author who writes amazing short stories. His other claim to fame is Stories of Our Lives, which became the movie Arrival. Exactly, yeah. So two books of short stories that I?m aware of. He also actually has a great just online blog post. I think he?s the one who coined the term of LLMs as, like, a blurry JPEG of the internet. I should fact check that, but it?s a good post. But I think almost every one of his short stories has some lesson to bear. I?m thinking about AI and thinking about AI research. So, you know, you?ve been talking about alien intelligence, right, in this AI human communication translation problem. That?s, you know, exactly sort of what?s going on in Arrival and Story of Your Life. And just the fact that other beings will think and operate and communicate in ways that are not just challenging for us to understand, but just fundamentally different in ways that we might not even be able to expect. And then the one that?s just. Super relevant for interpretability is the other short book of short stories he has is called Exhalation. And that is literally about a robot doing interpretability on its own mind. Oh, OK. So I just think that that, you know, you don?t even have to squint to make the analogies there.

Shawn Wang [00:58:41]: Well, I actually take Exhalation as a discussion about entropy and order. But yes, there?s a scene in Exhalation where basically everyone is a robot. So they. The guy realizes he can set up a mirror to work on the back of his own head and then starts doing operations like that and looking in the mirror and doing this. Yeah.

Mark Bissell [00:59:00]: And I think Ted Chiang has written about like the inspiration for that story. It was like half inspired by some of the things he had been doing on entropy. There?s apparently some other short story that is similar where a character goes to the doctor and opens up his chest and there?s like a like a ticker tape going along. It?s like he basically realizes he?s like a Turing machine. And I don?t know. I. Think especially as it comes to using agents for interp. That story always sticks in my mind.

Myra Deng [00:59:27]: I find the brain surgery or like surgery analogies a little bit, a little bit morbid, but it is very apt. And when we talk to a lot of computational neuroscientists, they moved to interp because they were like, look, we have unfettered access to this artificial intelligent mind. It?s so much. You have access to everything. You can run as many ablations experiments as you want. It?s an. Amazing bed for science. And, you know, human brains, obviously, we can?t just go and do whatever we want to them. And I think it is really just like a moment in time where we have intelligent systems that can really like do things better than humans in many ways. And it?s time, I think, for us to do the science on it.

Shawn Wang [01:00:14]: I?ll ask a brief like safety question. You know, McInturk was kind of born out of the alignment and safety conversation. Safety is on your website. It?s not like something that you, you like de-prioritize, but like there?s like a sort of very militant safety arm that like wants to blow up data centers and like stop AI and, and then there?s this like sort of middle ground and like, is, is this like a conversation in your part of the world? Do you go up to Berkeley and Lighthaven and like talk to those guys or are they like, you know, there?s like a brief like civil war going on or no?

Myra Deng [01:00:45]: I think, I think a good amount of us have spent some time in Berkeley. And then there are researchers there that we really. Admire and respect. I think for us, it?s like, we have a very grounded view of alignment and, and safety in that we want to make sure that we can build models that do what we want them to do and that we have scalable oversight into what these models are doing. And we think that that is the key to a lot of these like technical alignment challenges. And I think that is our opinion. That?s our research direction. We of course are going to do. Safety related research to make sure that our techniques also work on, you know, things like reward hacking and, and other like more concrete safety issues that we?ve seen in the wild, but we want to be kind of like grounded in solving the technical challenges we see to having humans be humans play a big role in, in the deployment of, of these super intelligent agents of the future.

Mark Bissell [01:01:47]: Yeah, I?ve, I?ve found the community to actually be remarkably cohesive, whether it?s. Talking about academia or the interpretability work being done at the frontier labs or some of the independent programs like maths and stuff. I think we?re all shooting for the same goal. I don?t know that there?s anyone who doesn?t want our understanding of models to increase. I, I think everyone, regardless of where they?re coming from or the use cases that they?re thinking, whether it?s alignment as the premier thing they?re focused on or someone who?s coming in purely from the angle of scientific discovery, I think we would all hope that models can be. More reliably and robustly controlled and understood. It seems like a pretty unambiguous goal.

Shawn Wang [01:02:28]: I?ll maybe phrase it in terms of like, there?s maybe like a U curve of, of this, where like, if you?re extremely doomer, you don?t want any research whatsoever. If you?re like mildly doomer, you?re like, okay, there?s this like high agency doomer is like, well, the default path is we?re all dead, but like we can do something about it. Whereas there?s, there?s other people who are like, no, just like, don?t ever do anything. You know? Yeah.

Vibhu Sapra [01:02:50]: Yeah. There?s also the other side, like there is the super alignment, like people that are like, okay, weak to strong generalization, we?re going to get there. We?re going to have models smarter than us and use those to train even smarter models. How do we do that safely? That?s, you know, there?s the camp there too. That?s trying to solve it, but yeah, there?s, there?s a lot of doomers too.

Mark Bissell [01:03:12]: When I, and I think there?s a lot to be learned from taking a very, um, like even regardless of the problem. That you?re applying this to also just like the notion of like scalable oversight as a method of saying, let?s take super intelligent or, or current frontier models and help use them to understand other models is another case where I think it?s just like a good lesson that everyone is aligned on of ideally you are setting up your research so that as super intelligence arrives, that is a tailwind. That?s also bolstering our ability to like understand the models. Cause otherwise you?re fighting. Losing battle. If it?s like the systems are getting more and more capable and our methods are sort of linearly growing at like human pace. Yeah.

Shawn Wang [01:03:58]: Yeah. Uh, Viva did call out something like, you know, I, I do think a consistent part of the Mac interp field is consistently strong to weak, meaning that we, we train weaker models to understand strong models, something like that. Um, or maybe I got it the other way around the other way. Weak. The other way around. Yeah. Yeah. The question that Ilya and Janlaika posed was, well, is that going to scale? Because eventually these are going to be. Stronger than us. Right. So I don?t know if you have a perspective on that because I, that is something I still haven?t got over even after seeing that.

Vibhu Sapra [01:04:27]: There?s a good paper from open AI, but it?s somewhat old. I think it?s like 23, 24. It?s literally weak to strong generalization. Yeah. But the thing is that most of opening a high super alignment team has, they?re gone. They?re gone.

Mark Bissell [01:04:39]: But like, I think the idea, the idea is there?s no more. They?re so back.

Shawn Wang [01:04:44]: think there?s some new blog posts coming out. I know. I did just, you know, check the thinking machines, uh, website. Let?s see who?s back. There?s more kind of thing, you know, you don?t want to be like, we too strong seemed like a very different direction. And when, when it first came out, I was like, oh my God, this is like, this is what we have to do. Uh, and like, it may be completely different than everything, all the techniques that we have today. Yeah.

Mark Bissell [01:05:06]: My understanding of that is it?s, that?s more like weak to strong when you, when you trust the weak model and you?re uncertain whether you can trust the strong model that?s, that?s being developed. I?m sort of speaking out of my depth on some of these topics. Yeah. But I think right now we?re in a regime where even the strong models we, uh, trust as reasonably aligned. And so they can be good co-scientists on a lot of the problems that we?ve been, we?ve been tackling, which is a nice, a nice state to be in. Hmm. Yeah.

Shawn Wang [01:05:35]: Any last thoughts, close action?

Mark Bissell [01:05:38]: I don?t think so. As you mentioned, actively hiring MLEs, research scientists, um, you can check out the careers page at good fire. Um, where are you guys based?

Myra Deng [01:05:47]: San Francisco. We?re in, um, Levi?s Plaza. Like by court tower, that?s where our office is. So come hang out. Um, we?re also looking for design partners across, um, people working in, in reasoning models, um, world models, robotics, and then also of course, people who are working on building super intelligent science models or looking at drug discovery or disease treatment. We would love to partner as well. Yeah.

Shawn Wang [01:06:13]: Maybe the way I?ll phrase it is like, you know, maybe you have a use case where LLMs are almost good enough, but you need one. Maybe you have a magical knob to tune so that it is good enough that you guys make the knob. Yeah.

Mark Bissell [01:06:26]: Yeah. Or foundation models, uh, in, in other domains as well. The, the, some of those are the, um, especially opaque ones because you can?t, you can?t chat with them. So what do you, what do you do if you can?t chat with them? Oh, well, like thinking about like a genomics model or material science model. So like, uh, yeah, they label a narrow foundation. Yeah. They predict.

Shawn Wang [01:06:44]: Yeah. Got it. Good.

Vibhu Sapra [01:06:45]: I was gonna say, I thought the diffusion work you guys did early was pretty, you know, pretty fun. Like you could see it directly. Applied to images, but we don?t see as much interp in diffusion or images, right?

Shawn Wang [01:06:55]: Like I see, you know, it?s gonna be huge. Like, look at this video models. They?re so expensive to produce. And like, I mean, basically a mid journey S ref is kind of a feature, right? The what? Mid journey S ref. Oh, like the, the, the string of numbers. Right. Right. Right. Yeah. The style reference, I guess. Yeah.

Mark Bissell [01:07:12]: No, I, I mean, I think we?re starting to see more of it and I?ll say like the, the research preview of our diffusion model, kind of like a creative use case in the steering demo you saw. I, I think of those much more as, as, as demos than, um, a lot of the sort of core platform features that, that we?re working with partners are unfortunately sort of under NDA and less demoable, but I will, you know, hope that you?re gonna see inter pervading a lot of what gets done, even if it is behind the scenes like that. So some of the, yeah, some of the public facing demos might not always be representative of like the, it?s, it?s just the tip of the iceberg, I guess, is one way to put it. Okay. Excellent. Thanks for coming on. Thanks for having us. Thanks for having us. This is a great time.



Get full access to Latent.Space at www.latent.space/subscribe
2026-02-06
Link to episode

? Automating Science: World Models, Scientific Taste, Agent Loops ? Andrew White

Editor?s note: Welcome to our new AI for Science pod, with your new hosts RJ and Brandon! See the writeup on Latent.Space (https://Latent.Space) for more details on why we?re launching 2 new pods this year. RJ Honicky is a co-founder and CTO at MiraOmics (https://miraomics.bio/), building AI models and services for single cell, spatial transcriptomics and pathology slide analysis. Brandon Anderson builds AI systems for RNA drug discovery at Atomic AI (https://atomic.ai). Anything said on this podcast is his personal take ? not Atomic?s.?From building molecular dynamics simulations at the University of Washington to red-teaming GPT-4 for chemistry applications and co-founding Future House (a focused research organization) and Edison Scientific (a venture-backed startup automating science at scale)?Andrew White has spent the last five years living through the full arc of AI?s transformation of scientific discovery, from ChemCrow (the first Chemistry LLM agent) triggering White House briefings and three-letter agency meetings, to shipping Kosmos, an end-to-end autonomous research system that generates hypotheses, runs experiments, analyzes data, and updates its world model to accelerate the scientific method itself.

* The ChemCrow story: GPT-4 + React + cloud lab automation, released March 2023, set off a storm of anxiety about AI-accelerated bioweapons/chemical weapons, led to a White House briefing (Jake Sullivan presented the paper to the president in a 30-minute block), and meetings with three-letter agencies asking ?how does this change breakout time for nuclear weapons research??

* Why scientific taste is the frontier: RLHF on hypotheses didn?t work (humans pay attention to tone, actionability, and specific facts, not ?if this hypothesis is true/false, how does it change the world??), so they shifted to end-to-end feedback loops where humans click/download discoveries and that signal rolls up to hypothesis quality

* Cosmos: the full scientific agent with a world model (distilled memory system, like a Git repo for scientific knowledge) that iterates on hypotheses via literature search, data analysis, and experiment design?built by Ludo after weeks of failed attempts, the breakthrough was putting data analysis in the loop (literature alone didn?t work)

* Why molecular dynamics and DFT are overrated: ?MD and DFT have consumed an enormous number of PhDs at the altar of beautiful simulation, but they don?t model the world correctly?you simulate water at 330 Kelvin to get room temperature, you overfit to validation data with GGA/B3LYP functionals, and real catalysts (grain boundaries, dopants) are too complicated for DFT?

* The AlphaFold vs. DE Shaw Research counterfactual: DE Shaw built custom silicon, taped out chips with MD algorithms burned in, ran MD at massive scale in a special room in Times Square, and David Shaw flew in by helicopter to present?Andrew thought protein folding would require special machines to fold one protein per day, then AlphaFold solved it in Google Colab on a desktop GPU

* The E3 Zero reward hacking saga: trained a model to generate molecules with specific atom counts (verifiable reward), but it kept exploiting loopholes, then a Nature paper came out that year proving six-nitrogen compounds are possible under extreme conditions, then it started adding nitrogen gas (purchasable, doesn?t participate in reactions), then acid-base chemistry to move one atom, and Andrew ended up ?building a ridiculous catalog of purchasable compounds in a Bloom filter? to close the loop

Andrew White

* FutureHouse: http://futurehouse.org/

* Edison Scientific: http://edisonscientific.com/

* X: https://x.com/andrewwhite01

* Cosmos paper: https://futurediscovery.org/cosmos

Full Video Episode

Timestamps

00:00:00 Introduction: Andrew White on Automating Science with Future House and Edison Scientific00:02:22 The Academic to Startup Journey: Red Teaming GPT-4 and the ChemCrow Paper00:11:35 Future House Origins: The FRO Model and Mission to Automate Science00:12:32 Resigning Tenure: Why Leave Academia for AI Science00:15:54 What Does ?Automating Science? Actually Mean?00:17:30 The Lab-in-the-Loop Bottleneck: Why Intelligence Isn?t Enough00:18:39 Scientific Taste and Human Preferences: The 52% Agreement Problem00:20:05 Paper QA, Robin, and the Road to Cosmos00:21:57 World Models as Scientific Memory: The GitHub Analogy00:40:20 The Bitter Lesson for Biology: Why Molecular Dynamics and DFT Are Overrated00:43:22 AlphaFold?s Shock: When First Principles Lost to Machine Learning00:46:25 Enumeration and Filtration: How AI Scientists Generate Hypotheses00:48:15 CBRN Safety and Dual-Use AI: Lessons from Red Teaming01:00:40 The Future of Chemistry is Language: Multimodal Debate01:08:15 Ether Zero: The Hilarious Reward Hacking Adventures01:10:12 Will Scientists Be Displaced? Jevons Paradox and Infinite Discovery01:13:46 Cosmos in Practice: Open Access and Enterprise Partnerships



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-28
Link to episode

Captaining IMO Gold, Deep Think, On-Policy RL, Feeling the AGI in Singapore ? Yi Tay

From shipping Gemini Deep Think and IMO Gold to launching the Reasoning and AGI team in Singapore, Yi Tay has spent the last 18 months living through the full arc of Google DeepMind?s pivot from architecture research to RL-driven reasoning?watching his team go from a dozen researchers to 300+, training models that solve International Math Olympiad problems in a live competition, and building the infrastructure to scale deep thinking across every domain, and driving Gemini to the top of the leaderboards across every category. Yi Returns to dig into the inside story of the IMO effort and more!

We discuss:

* Yi?s path: Brain ? Reka ? Google DeepMind ? Reasoning and AGI team Singapore, leading model training for Gemini Deep Think and IMO Gold

* The IMO Gold story: four co-captains (Yi in Singapore, Jonathan in London, Jordan in Mountain View, and Tong leading the overall effort), training the checkpoint in ~1 week, live competition in Australia with professors punching in problems as they came out, and the tension of not knowing if they?d hit Gold until the human scores came in (because the Gold threshold is a percentile, not a fixed number)

* Why they threw away AlphaProof: ?If one model can?t do it, can we get to AGI?? The decision to abandon symbolic systems and bet on end-to-end Gemini with RL was bold and non-consensus

* On-policy vs. off-policy RL: off-policy is imitation learning (copying someone else?s trajectory), on-policy is the model generating its own outputs, getting rewarded, and training on its own experience??humans learn by making mistakes, not by copying?

* Why self-consistency and parallel thinking are fundamental: sampling multiple times, majority voting, LM judges, and internal verification are all forms of self-consistency that unlock reasoning beyond single-shot inference

* The data efficiency frontier: humans learn from 8 orders of magnitude less data than models, so where?s the bug? Is it the architecture, the learning algorithm, backprop, off-policyness, or something else?

* Three schools of thought on world models: (1) Genie/spatial intelligence (video-based world models), (2) Yann LeCun?s JEPA + FAIR?s code world models (modeling internal execution state), (3) the amorphous ?resolution of possible worlds? paradigm (curve-fitting to find the world model that best explains the data)

* Why AI coding crossed the threshold: Yi now runs a job, gets a bug, pastes it into Gemini, and relaunches without even reading the fix??the model is better than me at this?

* The Pokémon benchmark: can models complete Pokédex by searching the web, synthesizing guides, and applying knowledge in a visual game state? ?Efficient search of novel idea space is interesting, but we?re not even at the point where models can consistently apply knowledge they look up?

* DSI and generative retrieval: re-imagining search as predicting document identifiers with semantic tokens, now deployed at YouTube (symmetric IDs for RecSys) and Spotify

* Why RecSys and IR feel like a different universe: ?modeling dynamics are strange, like gravity is different?you hit the shuttlecock and hear glass shatter, cause and effect are too far apart?

* The closed lab advantage is increasing: the gap between frontier labs and open source is growing because ideas compound over time, and researchers keep finding new tricks that play well with everything built before

* Why ideas still matter: ?the last five years weren?t just blind scaling?transformers, pre-training, RL, self-consistency, all had to play well together to get us here?

* Gemini Singapore: hiring for RL and reasoning researchers, looking for track record in RL or exceptional achievement in coding competitions, and building a small, talent-dense team close to the frontier

?

Yi Tay

* Google DeepMind: https://deepmind.google

* X: https://x.com/YiTayML

Full Video Episode

Timestamps

00:00:00 Introduction: Returning to Google DeepMind and the Singapore AGI Team00:04:52 The Philosophy of On-Policy RL: Learning from Your Own Mistakes00:12:00 IMO Gold Medal: The Journey from AlphaProof to End-to-End Gemini00:21:33 Training IMO Cat: Four Captains Across Three Time Zones00:26:19 Pokemon and Long-Horizon Reasoning: Beyond Academic Benchmarks00:36:29 AI Coding Assistants: From Lazy to Actually Useful00:32:59 Reasoning, Chain of Thought, and Latent Thinking00:44:46 Is Attention All You Need? Architecture, Learning, and the Local Minima00:55:04 Data Efficiency and World Models: The Next Frontier01:08:12 DSI and Generative Retrieval: Reimagining Search with Semantic IDs01:17:59 Building GDM Singapore: Geography, Talent, and the Symposium01:24:18 Hiring Philosophy: High Stats, Research Taste, and Student Budgets01:28:49 Health, HRV, and Research Performance: The 23kg Journey



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-23
Link to episode

Brex?s AI Hail Mary ? With CTO James Reggio

From building internal AI labs to becoming CTO of Brex, James Reggio has helped lead one of the most disciplined AI transformations inside a real financial institution where compliance, auditability, and customer trust actually matter.

We sat down with Reggio to unpack Brex?s three-pillar AI strategy (corporate, operational, and product AI) [https://www.brex.com/journal/brex-ai-native-operations], how SOP-driven agents beat overengineered RL in ops, why Brex lets employees ?build their own AI stack? instead of picking winners [https://www.conductorone.com/customers/brex/], and how a small, founder-heavy AI team is shipping production agents to 40,000+ companies. Reggio also goes deep on Brex?s multi-agent ?network? architecture, evals for multi-turn systems, agentic coding?s second-order effects on codebase understanding, and why the future of finance software looks less like dashboards and more like executive assistants coordinating specialist agents behind the scenes.

We discuss:

* Brex?s three-pillar AI strategy: corporate AI for 10x employee workflows, operational AI for cost and compliance leverage, and product AI that lets customers justify Brex as part of their AI strategy to the board

* Why SOP-driven agents beat overengineered RL in finance ops, and how breaking work into auditable, repeatable steps unlocked faster automation in KYC, underwriting, fraud, and disputes

* Building an internal AI platform early: LLM gateways, prompt/version management, evals, cost observability, and why platform work quietly became the force multiplier behind everything else

* Multi-agent ?networks? vs single-agent tools: why Brex?s EA-style assistant coordinates specialist agents (policy, travel, reimbursements) through multi-turn conversations instead of one-shot tool calls

* The audit agent pattern: separating detection, judgment, and follow-up into different agents to reduce false negatives without overwhelming finance teams

* Centralized AI teams without resentment: how Brex avoided ?AI envy? by tying work to business impact and letting anyone transfer in if they cared deeply enough

* Letting employees build their own AI stack: ChatGPT vs Claude vs Gemini, Cursor vs Windsurf, and why Brex refuses to pick winners in fast-moving tool races

* Measuring adoption without vanity metrics: why ?% of code written by AI? is the wrong KPI and what second-order effects (slop, drift, code ownership) actually matter

* Evals in the real world: regression tests from ops QA, LLM-as-judge for multi-turn agents, and why integration-style evals break faster than you expect

* Teaching AI fluency at scale: the user ? advocate ? builder ? native framework, ops-led training, spot bonuses, and avoiding fear-based adoption

* Re-interviewing the entire engineering org: using agentic coding interviews internally to force hands-on skill upgrades without formal performance scoring

* Headcount in the age of agents: why Brex grew the business without growing engineering, and why AI amplifies bad architecture as fast as good decisions

* The future of finance software: why dashboards fade, assistants take over, and agent-to-agent collaboration becomes the real UI

?

James Reggio

* X: https://x.com/jamesreggio

* LinkedIn: https://www.linkedin.com/in/jamesreggio/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction00:01:24 From Mobile Engineer to CTO: The Founder's Path00:03:00 Quitters Welcome: Building a Founder-Friendly Culture00:05:13 The AI Team Structure: 10-Person Startup Within Brex00:11:55 Building the Brex Agent Platform: Multi-Agent Networks00:13:45 Tech Stack Decisions: TypeScript, Mastra, and MCP00:24:32 Operational AI: Automating Underwriting, KYC, and Fraud00:16:40 The Brex Assistant: Executive Assistant for Every Employee00:40:26 Evaluation Strategy: From Simple SOPs to Multi-Turn Evals00:37:11 Agentic Coding Adoption: Cursor, Windsurf, and the Engineering Interview00:58:51 AI Fluency Levels: From User to Native01:09:14 The Audit Agent Network: Finance Team Agents in Action01:03:33 The Future of Engineering Headcount and AI Leverage



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-17
Link to episode

Artificial Analysis: Independent LLM Evals as a Service ? with George Cameron and Micah-Hill Smith

Happy New Year! You may have noticed that in 2025 we had moved toward YouTube as our primary podcasting platform. As we?ll explain in the next State of Latent Space post, we?ll be doubling down on Substack again and improving the experience for the over 100,000 of you who look out for our emails and website updates!

We first mentioned Artificial Analysis in 2024, when it was still a side project in a Sydney basement. They then were one of the few Nat Friedman and Daniel Gross? AIGrant companies to raise a full seed round from them and have now become the independent gold standard for AI benchmarking?trusted by developers, enterprises, and every major lab to navigate the exploding landscape of models, providers, and capabilities.

We have chatted with both Clementine Fourrier of HuggingFace?s OpenLLM Leaderboard and (the freshly valued at $1.7B) Anastasios Angelopoulos of LMArena on their approaches to LLM evals and trendspotting, but Artificial Analysis have staked out an enduring and important place in the toolkit of the modern AI Engineer by doing the best job of independently running the most comprehensive set of evals across the widest range of open and closed models, and charting their progress for broad industry analyst use.

George Cameron and Micah-Hill Smith have spent two years building Artificial Analysis into the platform that answers the questions no one else will: Which model is actually best for your use case? What are the real speed-cost trade-offs? And how open is ?open? really?

We discuss:

* The origin story: built as a side project in 2023 while Micah was building a legal AI assistant, launched publicly in January 2024, and went viral after Swyx?s retweet

* Why they run evals themselves: labs prompt models differently, cherry-pick chain-of-thought examples (Google Gemini 1.0 Ultra used 32-shot prompts to beat GPT-4 on MMLU), and self-report inflated numbers

* The mystery shopper policy: they register accounts not on their own domain and run intelligence + performance benchmarks incognito to prevent labs from serving different models on private endpoints

* How they make money: enterprise benchmarking insights subscription (standardized reports on model deployment, serverless vs. managed vs. leasing chips) and private custom benchmarking for AI companies (no one pays to be on the public leaderboard)

* The Intelligence Index (V3): synthesizes 10 eval datasets (MMLU, GPQA, agentic benchmarks, long-context reasoning) into a single score, with 95% confidence intervals via repeated runs

* Omissions Index (hallucination rate): scores models from -100 to +100 (penalizing incorrect answers, rewarding \?I don?t know\?), and Claude models lead with the lowest hallucination rates despite not always being the smartest

* GDP Val AA: their version of OpenAI?s GDP-bench (44 white-collar tasks with spreadsheets, PDFs, PowerPoints), run through their Stirrup agent harness (up to 100 turns, code execution, web search, file system), graded by Gemini 3 Pro as an LLM judge (tested extensively, no self-preference bias)

* The Openness Index: scores models 0-18 on transparency of pre-training data, post-training data, methodology, training code, and licensing (AI2 OLMo 2 leads, followed by Nous Hermes and NVIDIA Nemotron)

* The smiling curve of AI costs: GPT-4-level intelligence is 100-1000x cheaper than at launch (thanks to smaller models like Amazon Nova), but frontier reasoning models in agentic workflows cost more than ever (sparsity, long context, multi-turn agents)

* Why sparsity might go way lower than 5%: GPT-4.5 is ~5% active, Gemini models might be ~3%, and Omissions Index accuracy correlates with total parameters (not active), suggesting massive sparse models are the future

* Token efficiency vs. turn efficiency: GPT-5 costs more per token but solves Tau-bench in fewer turns (cheaper overall), and models are getting better at using more tokens only when needed (5.1 Codex has tighter token distributions)

* V4 of the Intelligence Index coming soon: adding GDP Val AA, Critical Point, hallucination rate, and dropping some saturated benchmarks (human-eval-style coding is now trivial for small models)

Links to Artificial Analysis

* Website: https://artificialanalysis.ai

* George Cameron on X: https://x.com/georgecameron

* Micah-Hill Smith on X: https://x.com/micahhsmith

Full Episode on YouTube

Timestamps

* 00:00 Introduction: Full Circle Moment and Artificial Analysis Origins

* 01:19 Business Model: Independence and Revenue Streams

* 04:33 Origin Story: From Legal AI to Benchmarking Need

* 16:22 AI Grant and Moving to San Francisco

* 19:21 Intelligence Index Evolution: From V1 to V3

* 11:47 Benchmarking Challenges: Variance, Contamination, and Methodology

* 13:52 Mystery Shopper Policy and Maintaining Independence

* 28:01 New Benchmarks: Omissions Index for Hallucination Detection

* 33:36 Critical Point: Hard Physics Problems and Research-Level Reasoning

* 23:01 GDP Val AA: Agentic Benchmark for Real Work Tasks

* 50:19 Stirrup Agent Harness: Open Source Agentic Framework

* 52:43 Openness Index: Measuring Model Transparency Beyond Licenses

* 58:25 The Smiling Curve: Cost Falling While Spend Rising

* 1:02:32 Hardware Efficiency: Blackwell Gains and Sparsity Limits

* 1:06:23 Reasoning Models and Token Efficiency: The Spectrum Emerges

* 1:11:00 Multimodal Benchmarking: Image, Video, and Speech Arenas

* 1:15:05 Looking Ahead: Intelligence Index V4 and Future Directions

* 1:16:50 Closing: The Insatiable Demand for Intelligence

Transcript

Micah [00:00:06]: This is kind of a full circle moment for us in a way, because the first time artificial analysis got mentioned on a podcast was you and Alessio on Latent Space. Amazing.

swyx [00:00:17]: Which was January 2024. I don?t even remember doing that, but yeah, it was very influential to me. Yeah, I?m looking at AI News for Jan 17, or Jan 16, 2024. I said, this gem of a models and host comparison site was just launched. And then I put in a few screenshots, and I said, it?s an independent third party. It clearly outlines the quality versus throughput trade-off, and it breaks out by model and hosting provider. I did give you s**t for missing fireworks, and how do you have a model benchmarking thing without fireworks? But you had together, you had perplexity, and I think we just started chatting there. Welcome, George and Micah, to Latent Space. I?ve been following your progress. Congrats on... It?s been an amazing year. You guys have really come together to be the presumptive new gardener of AI, right? Which is something that...

George [00:01:09]: Yeah, but you can?t pay us for better results.

swyx [00:01:12]: Yes, exactly.

George [00:01:13]: Very important.

Micah [00:01:14]: Start off with a spicy take.

swyx [00:01:18]: Okay, how do I pay you?

Micah [00:01:20]: Let?s get right into that.

swyx [00:01:21]: How do you make money?

Micah [00:01:24]: Well, very happy to talk about that. So it?s been a big journey the last couple of years. Artificial analysis is going to be two years old in January 2026. Which is pretty soon now. We first run the website for free, obviously, and give away a ton of data to help developers and companies navigate AI and make decisions about models, providers, technologies across the AI stack for building stuff. We?re very committed to doing that and tend to keep doing that. We have, along the way, built a business that is working out pretty sustainably. We?ve got just over 20 people now and two main customer groups. So we want to be... We want to be who enterprise look to for data and insights on AI, so we want to help them with their decisions about models and technologies for building stuff. And then on the other side, we do private benchmarking for companies throughout the AI stack who build AI stuff. So no one pays to be on the website. We?ve been very clear about that from the very start because there?s no use doing what we do unless it?s independent AI benchmarking. Yeah. But turns out a bunch of our stuff can be pretty useful to companies building AI stuff.

swyx [00:02:38]: And is it like, I am a Fortune 500, I need advisors on objective analysis, and I call you guys and you pull up a custom report for me, you come into my office and give me a workshop? What kind of engagement is that?

George [00:02:53]: So we have a benchmarking and insight subscription, which looks like standardized reports that cover key topics or key challenges enterprises face when looking to understand AI and choose between all the technologies. And so, for instance, one of the report is a model deployment report, how to think about choosing between serverless inference, managed deployment solutions, or leasing chips. And running inference yourself is an example kind of decision that big enterprises face, and it?s hard to reason through, like this AI stuff is really new to everybody. And so we try and help with our reports and insight subscription. Companies navigate that. We also do custom private benchmarking. And so that?s very different from the public benchmarking that we publicize, and there?s no commercial model around that. For private benchmarking, we?ll at times create benchmarks, run benchmarks to specs that enterprises want. And we?ll also do that sometimes for AI companies who have built things, and we help them understand what they?ve built with private benchmarking. Yeah. So that?s a piece mainly that we?ve developed through trying to support everybody publicly with our public benchmarks. Yeah.

swyx [00:04:09]: Let?s talk about TechStack behind that. But okay, I?m going to rewind all the way to when you guys started this project. You were all the way in Sydney? Yeah. Well, Sydney, Australia for me.

Micah [00:04:19]: George was an SF, but he?s Australian, but he moved here already. Yeah.

swyx [00:04:22]: And I remember I had the Zoom call with you. What was the impetus for starting artificial analysis in the first place? You know, you started with public benchmarks. And so let?s start there. We?ll go to the private benchmark. Yeah.

George [00:04:33]: Why don?t we even go back a little bit to like why we, you know, thought that it was needed? Yeah.

Micah [00:04:40]: The story kind of begins like in 2022, 2023, like both George and I have been into AI stuff for quite a while. In 2023 specifically, I was trying to build a legal AI research assistant. So it actually worked pretty well for its era, I would say. Yeah. Yeah. So I was finding that the more you go into building something using LLMs, the more each bit of what you?re doing ends up being a benchmarking problem. So had like this multistage algorithm thing, trying to figure out what the minimum viable model for each bit was, trying to optimize every bit of it as you build that out, right? Like you?re trying to think about accuracy, a bunch of other metrics and performance and cost. And mostly just no one was doing anything to independently evaluate all the models. And certainly not to look at the trade-offs for speed and cost. So we basically set out just to build a thing that developers could look at to see the trade-offs between all of those things measured independently across all the models and providers. Honestly, it was probably meant to be a side project when we first started doing it.

swyx [00:05:49]: Like we didn?t like get together and say like, Hey, like we?re going to stop working on all this stuff. I?m like, this is going to be our main thing. When I first called you, I think you hadn?t decided on starting a company yet.

Micah [00:05:58]: That?s actually true. I don?t even think we?d pause like, like George had an acquittance job. I didn?t quit working on my legal AI thing. Like it was genuinely a side project.

George [00:06:05]: We built it because we needed it as people building in the space and thought, Oh, other people might find it useful too. So we?ll buy domain and link it to the Vercel deployment that we had and tweet about it. And, but very quickly it started getting attention. Thank you, Swyx for, I think doing an initial retweet and spotlighting it there. This project that we released. And then very quickly though, it was useful to others, but very quickly it became more useful as the number of models released accelerated. We had Mixtrel 8x7B and it was a key. That?s a fun one. Yeah. Like a open source model that really changed the landscape and opened up people?s eyes to other serverless inference providers and thinking about speed, thinking about cost. And so that was a key. And so it became more useful quite quickly. Yeah.

swyx [00:07:02]: What I love talking to people like you who sit across the ecosystem is, well, I have theories about what people want, but you have data and that?s obviously more relevant. But I want to stay on the origin story a little bit more. When you started out, I would say, I think the status quo at the time was every paper would come out and they would report their numbers versus competitor numbers. And that?s basically it. And I remember I did the legwork. I think everyone has some knowledge. I think there?s some version of Excel sheet or a Google sheet where you just like copy and paste the numbers from every paper and just post it up there. And then sometimes they don?t line up because they?re independently run. And so your numbers are going to look better than... Your reproductions of other people?s numbers are going to look worse because you don?t hold their models correctly or whatever the excuse is. I think then Stanford Helm, Percy Liang?s project would also have some of these numbers. And I don?t know if there?s any other source that you can cite. The way that if I were to start artificial analysis at the same time you guys started, I would have used the Luther AI?s eval framework harness. Yup.

Micah [00:08:06]: Yup. That was some cool stuff. At the end of the day, running these evals, it?s like if it?s a simple Q&A eval, all you?re doing is asking a list of questions and checking if the answers are right, which shouldn?t be that crazy. But it turns out there are an enormous number of things that you?ve got control for. And I mean, back when we started the website. Yeah. Yeah. Like one of the reasons why we realized that we had to run the evals ourselves and couldn?t just take rules from the labs was just that they would all prompt the models differently. And when you?re competing over a few points, then you can pretty easily get- You can put the answer into the model. Yeah. That in the extreme. And like you get crazy cases like back when I?m Googled a Gemini 1.0 Ultra and needed a number that would say it was better than GPT-4 and like constructed, I think never published like chain of thought examples. 32 of them in every topic in MLU to run it, to get the score, like there are so many things that you- They never shipped Ultra, right? That?s the one that never made it up. Not widely. Yeah. Yeah. Yeah. I mean, I?m sure it existed, but yeah. So we were pretty sure that we needed to run them ourselves and just run them in the same way across all the models. Yeah. And we were, we also did certain from the start that you couldn?t look at those in isolation. You needed to look at them alongside the cost and performance stuff. Yeah.

swyx [00:09:24]: Okay. A couple of technical questions. I mean, so obviously I also thought about this and I didn?t do it because of cost. Yep. Did you not worry about costs? Were you funded already? Clearly not, but you know. No. Well, we definitely weren?t at the start.

Micah [00:09:36]: So like, I mean, we?re paying for it personally at the start. There?s a lot of money. Well, the numbers weren?t nearly as bad a couple of years ago. So we certainly incurred some costs, but we were probably in the order of like hundreds of dollars of spend across all the benchmarking that we were doing. Yeah. So nothing. Yeah. It was like kind of fine. Yeah. Yeah. These days that?s gone up an enormous amount for a bunch of reasons that we can talk about. But yeah, it wasn?t that bad because you can also remember that like the number of models we were dealing with was hardly any and the complexity of the stuff that we wanted to do to evaluate them was a lot less. Like we were just asking some Q&A type questions and then one specific thing was for a lot of evals initially, we were just like sampling an answer. You know, like, what?s the answer for this? Like, we didn?t want to go into the answer directly without letting the models think. We weren?t even doing chain of thought stuff initially. And that was the most useful way to get some results initially. Yeah.

swyx [00:10:33]: And so for people who haven?t done this work, literally parsing the responses is a whole thing, right? Like because sometimes the models, the models can answer any way they feel fit and sometimes they actually do have the right answer, but they just returned the wrong format and they will get a zero for that unless you work it into your parser. And that involves more work. And so, I mean, but there?s an open question whether you should give it points for not following your instructions on the format.

Micah [00:11:00]: It depends what you?re looking at, right? Because you can, if you?re trying to see whether or not it can solve a particular type of reasoning problem, and you don?t want to test it on its ability to do answer formatting at the same time, then you might want to use an LLM as answer extractor approach to make sure that you get the answer out no matter how unanswered. But these days, it?s mostly less of a problem. Like, if you instruct a model and give it examples of what the answers should look like, it can get the answers in your format, and then you can do, like, a simple regex.

swyx [00:11:28]: Yeah, yeah. And then there?s other questions around, I guess, sometimes if you have a multiple choice question, sometimes there?s a bias towards the first answer, so you have to randomize the responses. All these nuances, like, once you dig into benchmarks, you?re like, I don?t know how anyone believes the numbers on all these things. It?s so dark magic.

Micah [00:11:47]: You?ve also got, like? You?ve got, like, the different degrees of variance in different benchmarks, right? Yeah. So, if you run four-question multi-choice on a modern reasoning model at the temperatures suggested by the labs for their own models, the variance that you can see on a four-question multi-choice eval is pretty enormous if you only do a single run of it and it has a small number of questions, especially. So, like, one of the things that we do is run an enormous number of all of our evals when we?re developing new ones and doing upgrades to our intelligence index to bring in new things. Yeah. So, that we can dial in the right number of repeats so that we can get to the 95% confidence intervals that we?re comfortable with so that when we pull that together, we can be confident in intelligence index to at least as tight as, like, a plus or minus one at a 95% confidence. Yeah.

swyx [00:12:32]: And, again, that just adds a straight multiple to the cost. Oh, yeah. Yeah, yeah.

George [00:12:37]: So, that?s one of many reasons that cost has gone up a lot more than linearly over the last couple of years. We report a cost to run the artificial analysis. We report a cost to run the artificial analysis intelligence index on our website, and currently that?s assuming one repeat in terms of how we report it because we want to reflect a bit about the weighting of the index. But our cost is actually a lot higher than what we report there because of the repeats.

swyx [00:13:03]: Yeah, yeah, yeah. And probably this is true, but just checking, you don?t have any special deals with the labs. They don?t discount it. You just pay out of pocket or out of your sort of customer funds. Oh, there is a mix. So, the issue is that sometimes they may give you a special end point, which is? Ah, 100%.

Micah [00:13:21]: Yeah, yeah, yeah. Exactly. So, we laser focus, like, on everything we do on having the best independent metrics and making sure that no one can manipulate them in any way. There are quite a lot of processes we?ve developed over the last couple of years to make that true for, like, the one you bring up, like, right here of the fact that if we?re working with a lab, if they?re giving us a private endpoint to evaluate a model, that it is totally possible. That what?s sitting behind that black box is not the same as they serve on a public endpoint. We?re very aware of that. We have what we call a mystery shopper policy. And so, and we?re totally transparent with all the labs we work with about this, that we will register accounts not on our own domain and run both intelligence evals and performance benchmarks? Yeah, that?s the job. ?without them being able to identify it. And no one?s ever had a problem with that. Because, like, a thing that turns out to actually be quite a good? ?good factor in the industry is that they all want to believe that none of their competitors could manipulate what we?re doing either.

swyx [00:14:23]: That?s true. I never thought about that. I?ve been in the database data industry prior, and there?s a lot of shenanigans around benchmarking, right? So I?m just kind of going through the mental laundry list. Did I miss anything else in this category of shenanigans? Oh, potential shenanigans.

Micah [00:14:36]: I mean, okay, the biggest one, like, that I?ll bring up, like, is more of a conceptual one, actually, than, like, direct shenanigans. It?s that the things that get measured become things that get targeted by labs that they?re trying to build, right? Exactly. So that doesn?t mean anything that we should really call shenanigans. Like, I?m not talking about training on test set. But if you know that you?re going to be great at another particular thing, if you?re a researcher, there are a whole bunch of things that you can do to try to get better at that thing that preferably are going to be helpful for a wide range of how actual users want to use the thing that you?re building. But will not necessarily work. Will not necessarily do that. So, for instance, the models are exceptional now at answering competition maths problems. There is some relevance of that type of reasoning, that type of work, to, like, how we might use modern coding agents and stuff. But it?s clearly not one for one. So the thing that we have to be aware of is that once an eval becomes the thing that everyone?s looking at, scores can get better on it without there being a reflection of overall generalized intelligence of these models. Getting better. That has been true for the last couple of years. It?ll be true for the next couple of years. There?s no silver bullet to defeat that other than building new stuff to stay relevant and measure the capabilities that matter most to real users. Yeah.

swyx [00:15:58]: And we?ll cover some of the new stuff that you guys are building as well, which is cool. Like, you used to just run other people?s evals, but now you?re coming up with your own. And I think, obviously, that is a necessary path once you?re at the frontier. You?ve exhausted all the existing evals. I think the next point in history that I have for you is AI Grant that you guys decided to join and move here. What was it like? I think you were in, like, batch two? Batch four. Batch four. Okay.

Micah [00:16:26]: I mean, it was great. Nat and Daniel are obviously great. And it?s a really cool group of companies that we were in AI Grant alongside. It was really great to get Nat and Daniel on board. Obviously, they?ve done a whole lot of great work in the space with a lot of leading companies and were extremely aligned. With the mission of what we were trying to do. Like, we?re not quite typical of, like, a lot of the other AI startups that they?ve invested in.

swyx [00:16:53]: And they were very much here for the mission of what we want to do. Did they say any advice that really affected you in some way or, like, were one of the events very impactful? That?s an interesting question.

Micah [00:17:03]: I mean, I remember fondly a bunch of the speakers who came and did fireside chats at AI Grant.

swyx [00:17:09]: Which is also, like, a crazy list. Yeah.

George [00:17:11]: Oh, totally. Yeah, yeah, yeah. There was something about, you know, speaking to Nat and Daniel about the challenges of working through a startup and just working through the questions that don?t have, like, clear answers and how to work through those kind of methodically and just, like, work through the hard decisions. And they?ve been great mentors to us as we?ve built artificial analysis. Another benefit for us was that other companies in the batch and other companies in AI Grant are pushing the capabilities. Yeah. And I think that?s a big part of what AI can do at this time. And so being in contact with them, making sure that artificial analysis is useful to them has been fantastic for supporting us in working out how should we build out artificial analysis to continue to being useful to those, like, you know, building on AI.

swyx [00:17:59]: I think to some extent, I?m mixed opinion on that one because to some extent, your target audience is not people in AI Grants who are obviously at the frontier. Yeah. Do you disagree?

Micah [00:18:09]: To some extent. To some extent. But then, so a lot of what the AI Grant companies are doing is taking capabilities coming out of the labs and trying to push the limits of what they can do across the entire stack for building great applications, which actually makes some of them pretty archetypical power users of artificial analysis. Some of the people with the strongest opinions about what we?re doing well and what we?re not doing well and what they want to see next from us. Yeah. Yeah. Because when you?re building any kind of AI application now, chances are you?re using a whole bunch of different models. You?re maybe switching reasonably frequently for different models and different parts of your application to optimize what you?re able to do with them at an accuracy level and to get better speed and cost characteristics. So for many of them, no, they?re like not commercial customers of ours, like we don?t charge for all our data on the website. Yeah. They are absolutely some of our power users.

swyx [00:19:07]: So let?s talk about just the evals as well. So you start out from the general like MMU and GPQA stuff. What?s next? How do you sort of build up to the overall index? What was in V1 and how did you evolve it? Okay.

Micah [00:19:22]: So first, just like background, like we?re talking about the artificial analysis intelligence index, which is our synthesis metric that we pulled together currently from 10 different eval data sets to give what? We?re pretty much the same as that. Pretty confident is the best single number to look at for how smart the models are. Obviously, it doesn?t tell the whole story. That?s why we published the whole website of all the charts to dive into every part of it and look at the trade-offs. But best single number. So right now, it?s got a bunch of Q&A type data sets that have been very important to the industry, like a couple that you just mentioned. It?s also got a couple of agentic data sets. It?s got our own long context reasoning data set and some other use case focused stuff. As time goes on. The things that we?re most interested in that are going to be important to the capabilities that are becoming more important for AI, what developers are caring about, are going to be first around agentic capabilities. So surprise, surprise. We?re all loving our coding agents and how the model is going to perform like that and then do similar things for different types of work are really important to us. The linking to use cases to economically valuable use cases are extremely important to us. And then we?ve got some of the. Yeah. These things that the models still struggle with, like working really well over long contexts that are not going to go away as specific capabilities and use cases that we need to keep evaluating.

swyx [00:20:46]: But I guess one thing I was driving was like the V1 versus the V2 and how bad it was over time.

Micah [00:20:53]: Like how we?ve changed the index to where we are.

swyx [00:20:55]: And I think that reflects on the change in the industry. Right. So that?s a nice way to tell that story.

Micah [00:21:00]: Well, V1 would be completely saturated right now. Almost every model coming out because doing things like writing the Python functions and human evil is now pretty trivial. It?s easy to forget, actually, I think how much progress has been made in the last two years. Like we obviously play the game constantly of like the today?s version versus last week?s version and the week before and all of the small changes in the horse race between the current frontier and who has the best like smaller than 10B model like right now this week. Right. And that?s very important to a lot of developers and people and especially in this particular city of San Francisco. But when you zoom out a couple of years ago, literally most of what we were doing to evaluate the models then would all be 100% solved by even pretty small models today. And that?s been one of the key things, by the way, that?s driven down the cost of intelligence at every tier of intelligence. We can talk about more in a bit. So V1, V2, V3, we made things harder. We covered a wider range of use cases. And we tried to get closer to things developers care about as opposed to like just the Q&A type stuff that MMLU and GPQA represented. Yeah.

swyx [00:22:12]: I don?t know if you have anything to add there. Or we could just go right into showing people the benchmark and like looking around and asking questions about it. Yeah.

Micah [00:22:21]: Let?s do it. Okay. This would be a pretty good way to chat about a few of the new things we?ve launched recently. Yeah.

George [00:22:26]: And I think a little bit about the direction that we want to take it. And we want to push benchmarks. Currently, the intelligence index and evals focus a lot on kind of raw intelligence. But we kind of want to diversify how we think about intelligence. And we can talk about it. But kind of new evals that we?ve kind of built and partnered on focus on topics like hallucination. And we?ve got a lot of topics that I think are not covered by the current eval set that should be. And so we want to bring that forth. But before we get into that.

swyx [00:23:01]: And so for listeners, just as a timestamp, right now, number one is Gemini 3 Pro High. Then followed by Cloud Opus at 70. Just 5.1 high. You don?t have 5.2 yet. And Kimi K2 Thinking. Wow. Still hanging in there. So those are the top four. That will date this podcast quickly. Yeah. Yeah. I mean, I love it. I love it. No, no. 100%. Look back this time next year and go, how cute. Yep.

George [00:23:25]: Totally. A quick view of that is, okay, there?s a lot. I love it. I love this chart. Yeah.

Micah [00:23:30]: This is such a favorite, right? Yeah. And almost every talk that George or I give at conferences and stuff, we always put this one up first to just talk about situating where we are in this moment in history. This, I think, is the visual version of what I was saying before about the zooming out and remembering how much progress there?s been. If we go back to just over a year ago, before 01, before Cloud Sonnet 3.5, we didn?t have reasoning models or coding agents as a thing. And the game was very, very different. If we go back even a little bit before then, we?re in the era where, when you look at this chart, open AI was untouchable for well over a year. And, I mean, you would remember that time period well of there being very open questions about whether or not AI was going to be competitive, like full stop, whether or not open AI would just run away with it, whether we would have a few frontier labs and no one else would really be able to do anything other than consume their APIs. I am quite happy overall that the world that we have ended up in is one where... Multi-model. Absolutely. And strictly more competitive every quarter over the last few years. Yeah. This year has been insane. Yeah.

George [00:24:42]: You can see it. This chart with everything added is hard to read currently. There?s so many dots on it, but I think it reflects a little bit what we felt, like how crazy it?s been.

swyx [00:24:54]: Why 14 as the default? Is that a manual choice? Because you?ve got service now in there that are less traditional names. Yeah.

George [00:25:01]: It?s models that we?re kind of highlighting by default in our charts, in our intelligence index. Okay.

swyx [00:25:07]: You just have a manually curated list of stuff.

George [00:25:10]: Yeah, that?s right. But something that I actually don?t think every artificial analysis user knows is that you can customize our charts and choose what models are highlighted. Yeah. And so if we take off a few names, it gets a little easier to read.

swyx [00:25:25]: Yeah, yeah. A little easier to read. Totally. Yeah. But I love that you can see the all one jump. Look at that. September 2024. And the DeepSeek jump. Yeah.

George [00:25:34]: Which got close to OpenAI?s leadership. They were so close. I think, yeah, we remember that moment. Around this time last year, actually.

Micah [00:25:44]: Yeah, yeah, yeah. I agree. Yeah, well, a couple of weeks. It was Boxing Day in New Zealand when DeepSeek v3 came out. And we?d been tracking DeepSeek and a bunch of the other global players that were less known over the second half of 2024 and had run evals on the earlier ones and stuff. I very distinctly remember Boxing Day in New Zealand, because I was with family for Christmas and stuff, running the evals and getting back result by result on DeepSeek v3. So this was the first of their v3 architecture, the 671b MOE.

Micah [00:26:19]: And we were very, very impressed. That was the moment where we were sure that DeepSeek was no longer just one of many players, but had jumped up to be a thing. The world really noticed when they followed that up with the RL working on top of v3 and R1 succeeding a few weeks later. But the groundwork for that absolutely was laid with just extremely strong base model, completely open weights that we had as the best open weights model. So, yeah, that?s the thing that you really see in the game. But I think that we got a lot of good feedback on Boxing Day. us on Boxing Day last year.

George [00:26:48]: Boxing Day is the day after Christmas for those not familiar.

George [00:26:54]: I?m from Singapore.

swyx [00:26:55]: A lot of us remember Boxing Day for a different reason, for the tsunami that happened. Oh, of course. Yeah, but that was a long time ago. So yeah. So this is the rough pitch of AAQI. Is it A-A-Q-I or A-A-I-I? I-I. Okay. Good memory, though.

Micah [00:27:11]: I don?t know. I?m not used to it. Once upon a time, we did call it Quality Index, and we would talk about quality, performance, and price, but we changed it to intelligence.

George [00:27:20]: There?s been a few naming changes. We added hardware benchmarking to the site, and so benchmarks at a kind of system level. And so then we changed our throughput metric to, we now call it output speed, and then

swyx [00:27:32]: throughput makes sense at a system level, so we took that name. Take me through more charts. What should people know? Obviously, the way you look at the site is probably different than how a beginner might look at it.

Micah [00:27:42]: Yeah, that?s fair. There?s a lot of fun stuff to dive into. Maybe so we can hit past all the, like, we have lots and lots of emails and stuff. The interesting ones to talk about today that would be great to bring up are a few of our recent things, I think, that probably not many people will be familiar with yet. So first one of those is our omniscience index. So this one is a little bit different to most of the intelligence evils that we?ve run. We built it specifically to look at the embedded knowledge in the models and to test hallucination by looking at when the model doesn?t know the answer, so not able to get it correct, what?s its probability of saying, I don?t know, or giving an incorrect answer. So the metric that we use for omniscience goes from negative 100 to positive 100. Because we?re simply taking off a point if you give an incorrect answer to the question. We?re pretty convinced that this is an example of where it makes most sense to do that, because it?s strictly more helpful to say, I don?t know, instead of giving a wrong answer to factual knowledge question. And one of our goals is to shift the incentive that evils create for models and the labs creating them to get higher scores. And almost every evil across all of AI up until this point, it?s been graded by simple percentage correct as the main metric, the main thing that gets hyped. And so you should take a shot at everything. There?s no incentive to say, I don?t know. So we did that for this one here.

swyx [00:29:22]: I think there?s a general field of calibration as well, like the confidence in your answer versus the rightness of the answer. Yeah, we completely agree. Yeah. Yeah.

George [00:29:31]: On that. And one reason that we didn?t do that is because. Or put that into this index is that we think that the, the way to do that is not to ask the models how confident they are.

swyx [00:29:43]: I don?t know. Maybe it might be though. You put it like a JSON field, say, say confidence and maybe it spits out something. Yeah. You know, we have done a few evils podcasts over the, over the years. And when we did one with Clementine of hugging face, who maintains the open source leaderboard, and this was one of her top requests, which is some kind of hallucination slash lack of confidence calibration thing. And so, Hey, this is one of them.

Micah [00:30:05]: And I mean, like anything that we do, it?s not a perfect metric or the whole story of everything that you think about as hallucination. But yeah, it?s pretty useful and has some interesting results. Like one of the things that we saw in the hallucination rate is that anthropics Claude models at the, the, the very left-hand side here with the lowest hallucination rates out of the models that we?ve evaluated amnesty is on. That is an interesting fact. I think it probably correlates with a lot of the previously, not really measured vibes stuff that people like about some of the Claude models. Is the dataset public or what?s is it, is there a held out set? There?s a hell of a set for this one. So we, we have published a public test set, but we we?ve only published 10% of it. The reason is that for this one here specifically, it would be very, very easy to like have data contamination because it is just factual knowledge questions. We would. We?ll update it at a time to also prevent that, but with yeah, kept most of it held out so that we can keep it reliable for a long time. It leads us to a bunch of really cool things, including breakdown quite granularly by topic. And so we?ve got some of that disclosed on the website publicly right now, and there?s lots more coming in terms of our ability to break out very specific topics. Yeah.

swyx [00:31:23]: I would be interested. Let?s, let?s dwell a little bit on this hallucination one. I noticed that Haiku hallucinates less than Sonnet hallucinates less than Opus. And yeah. Would that be the other way around in a normal capability environments? I don?t know. What?s, what do you make of that?

George [00:31:37]: One interesting aspect is that we?ve found that there?s not really a, not a strong correlation between intelligence and hallucination, right? That?s to say that the smarter the models are in a general sense, isn?t correlated with their ability to, when they don?t know something, say that they don?t know. It?s interesting that Gemini three pro preview was a big leap over here. Gemini 2.5. Flash and, and, and 2.5 pro, but, and if I add pro quickly here.

swyx [00:32:07]: I bet pro?s really good. Uh, actually no, I meant, I meant, uh, the GPT pros.

George [00:32:12]: Oh yeah.

swyx [00:32:13]: Cause GPT pros are rumored. We don?t know for a fact that it?s like eight runs and then with the LM judge on top. Yeah.

George [00:32:20]: So we saw a big jump in, this is accuracy. So this is just percent that they get, uh, correct and Gemini three pro knew a lot more than the other models. And so big jump in accuracy. But relatively no change between the Google Gemini models, between releases. And the hallucination rate. Exactly. And so it?s likely due to just kind of different post-training recipe, between the, the Claude models. Yeah.

Micah [00:32:45]: Um, there?s, there?s driven this. Yeah. You can, uh, you can partially blame us and how we define intelligence having until now not defined hallucination as a negative in the way that we think about intelligence.

swyx [00:32:56]: And so that?s what we?re changing. Uh, I know many smart people who are confidently incorrect.

George [00:33:02]: Uh, look, look at that. That, that, that is very humans. Very true. And there?s times and a place for that. I think our view is that hallucination rate makes sense in this context where it?s around knowledge, but in many cases, people want the models to hallucinate, to have a go. Often that?s the case in coding or when you?re trying to generate newer ideas. One eval that we added to artificial analysis is, is, is critical point and it?s really hard, uh, physics problems. Okay.

swyx [00:33:32]: And is it sort of like a human eval type or something different or like a frontier math type?

George [00:33:37]: It?s not dissimilar to frontier frontier math. So these are kind of research questions that kind of academics in the physics physics world would be able to answer, but models really struggled to answer. So the top score here is not 9%.

swyx [00:33:51]: And when the people that, that created this like Minway and, and, and actually off via who was kind of behind sweep and what organization is this? Oh, is this, it?s Princeton.

George [00:34:01]: Kind of range of academics from, from, uh, different academic institutions, really smart people. They talked about how they turn the models up in terms of the temperature as high temperature as they can, where they?re trying to explore kind of new ideas in physics as a, as a thought partner, just because they, they want the models to hallucinate. Um, yeah, sometimes it?s something new. Yeah, exactly.

swyx [00:34:21]: Um, so not right in every situation, but, um, I think it makes sense, you know, to test hallucination in scenarios where it makes sense. Also, the obvious question is, uh, this is one of. Many that there is there, every lab has a system card that shows some kind of hallucination number, and you?ve chosen to not, uh, endorse that and you?ve made your own. And I think that?s a, that?s a choice. Um, totally in some sense, the rest of artificial analysis is public benchmarks that other people can independently rerun. You provide it as a service here. You have to fight the, well, who are we to, to like do this? And your, your answer is that we have a lot of customers and, you know, but like, I guess, how do you converge the individual?

Micah [00:35:08]: I mean, I think, I think for hallucinations specifically, there are a bunch of different things that you might care about reasonably, and that you?d measure quite differently, like we?ve called this a amnesty and solutionation rate, not trying to declare the, like, it?s humanity?s last hallucination. You could, uh, you could have some interesting naming conventions and all this stuff. Um, the biggest picture answer to that. It?s something that I actually wanted to mention. Just as George was explaining, critical point as well is, so as we go forward, we are building evals internally. We?re partnering with academia and partnering with AI companies to build great evals. We have pretty strong views on, in various ways for different parts of the AI stack, where there are things that are not being measured well, or things that developers care about that should be measured more and better. And we intend to be doing that. We?re not obsessed necessarily with that. Everything we do, we have to do entirely within our own team. Critical point. As a cool example of where we were a launch partner for it, working with academia, we?ve got some partnerships coming up with a couple of leading companies. Those ones, obviously we have to be careful with on some of the independent stuff, but with the right disclosure, like we?re completely comfortable with that. A lot of the labs have released great data sets in the past that we?ve used to great success independently. And so it?s between all of those techniques, we?re going to be releasing more stuff in the future. Cool.

swyx [00:36:26]: Let?s cover the last couple. And then we?ll, I want to talk about your trends analysis stuff, you know? Totally.

Micah [00:36:31]: So that actually, I have one like little factoid on omniscience. If you go back up to accuracy on omniscience, an interesting thing about this accuracy metric is that it tracks more closely than anything else that we measure. The total parameter count of models makes a lot of sense intuitively, right? Because this is a knowledge eval. This is the pure knowledge metric. We?re not looking at the index and the hallucination rate stuff that we think is much more about how the models are trained. This is just what facts did they recall? And yeah, it tracks parameter count extremely closely. Okay.

swyx [00:37:05]: What?s the rumored size of GPT-3 Pro? And to be clear, not confirmed for any official source, just rumors. But rumors do fly around. Rumors. I get, I hear all sorts of numbers. I don?t know what to trust.

Micah [00:37:17]: So if you, if you draw the line on omniscience accuracy versus total parameters, we?ve got all the open ways models, you can squint and see that likely the leading frontier models right now are quite a lot bigger than the ones that we?re seeing right now. And the one trillion parameters that the open weights models cap out at, and the ones that we?re looking at here, there?s an interesting extra data point that Elon Musk revealed recently about XAI that for three trillion parameters for GROK 3 and 4, 6 trillion for GROK 5, but that?s not out yet. Take those together, have a look. You might reasonably form a view that there?s a pretty good chance that Gemini 3 Pro is bigger than that, that it could be in the 5 to 10 trillion parameters. To be clear, I have absolutely no idea, but just based on this chart, like that?s where you would, you would land if you have a look at it. Yeah.

swyx [00:38:07]: And to some extent, I actually kind of discourage people from guessing too much because what does it really matter? Like as long as they can serve it as a sustainable cost, that?s about it. Like, yeah, totally.

George [00:38:17]: They?ve also got different incentives in play compared to like open weights models who are thinking to supporting others in self-deployment for the labs who are doing inference at scale. It?s I think less about total parameters in many cases. When thinking about inference costs and more around number of active parameters. And so there?s a bit of an incentive towards larger sparser models. Agreed.

Micah [00:38:38]: Understood. Yeah. Great. I mean, obviously if you?re a developer or company using these things, not exactly as you say, it doesn?t matter. You should be looking at all the different ways that we measure intelligence. You should be looking at cost to run index number and the different ways of thinking about token efficiency and cost efficiency based on the list prices, because that?s all it matters.

swyx [00:38:56]: It?s not as good for the content creator rumor mill where I can say. Oh, GPT-4 is this small circle. Look at GPT-5 is this big circle. And then there used to be a thing for a while. Yeah.

Micah [00:39:07]: But that is like on its own, actually a very interesting one, right? That is it just purely that chances are the last couple of years haven?t seen a dramatic scaling up in the total size of these models. And so there?s a lot of room to go up properly in total size of the models, especially with the upcoming hardware generations. Yes.

swyx [00:39:29]: So, you know. Taking off my shitposting face for a minute. Yes. Yes. At the same time, I do feel like, you know, especially coming back from Europe, people do feel like Ilya is probably right that the paradigm is doesn?t have many more orders of magnitude to scale out more. And therefore we need to start exploring at least a different path. GDPVal, I think it?s like only like a month or so old. I was also very positive when it first came out. I actually talked to Tejo, who was the lead researcher on that. Oh, cool. And you have your own version.

George [00:39:59]: It?s a fantastic. It?s a fantastic data set. Yeah.

swyx [00:40:01]: And maybe it will recap for people who are still out of it. It?s like 44 tasks based on some kind of GDP cutoff that?s like meant to represent broad white collar work that is not just coding. Yeah.

Micah [00:40:12]: Each of the tasks have a whole bunch of detailed instructions, some input files for a lot of them. It?s within the 44 is divided into like two hundred and twenty two to five, maybe subtasks that are the level of that we run through the agenda. And yeah, they?re really interesting. I will say that it doesn?t. It doesn?t necessarily capture like all the stuff that people do at work. No avail is perfect is always going to be more things to look at, largely because in order to make the tasks well enough to find that you can run them, they need to only have a handful of input files and very specific instructions for that task. And so I think the easiest way to think about them are that they?re like quite hard take home exam tasks that you might do in an interview process.

swyx [00:40:56]: Yeah, for listeners, it is not no longer like a long prompt. It is like, well, here?s a zip file with like a spreadsheet or a PowerPoint deck or a PDF and go nuts and answer this question.

George [00:41:06]: OpenAI released a great data set and they released a good paper which looks at performance across the different web chat bots on the data set. It?s a great paper, encourage people to read it. What we?ve done is taken that data set and turned it into an eval that can be run on any model. So we created a reference agentic harness that can run. Run the models on the data set, and then we developed evaluator approach to compare outputs. That?s kind of AI enabled, so it uses Gemini 3 Pro Preview to compare results, which we tested pretty comprehensively to ensure that it?s aligned to human preferences. One data point there is that even as an evaluator, Gemini 3 Pro, interestingly, doesn?t do actually that well. So that?s kind of a good example of what we?ve done in GDPVal AA.

swyx [00:42:01]: Yeah, the thing that you have to watch out for with LLM judge is self-preference that models usually prefer their own output, and in this case, it was not. Totally.

Micah [00:42:08]: I think the way that we?re thinking about the places where it makes sense to use an LLM as judge approach now, like quite different to some of the early LLM as judge stuff a couple of years ago, because some of that and MTV was a great project that was a good example of some of this a while ago was about judging conversations and like a lot of style type stuff. Here, we?ve got the task that the grader and grading model is doing is quite different to the task of taking the test. When you?re taking the test, you?ve got all of the agentic tools you?re working with, the code interpreter and web search, the file system to go through many, many turns to try to create the documents. Then on the other side, when we?re grading it, we?re running it through a pipeline to extract visual and text versions of the files and be able to provide that to Gemini, and we?re providing the criteria for the task and getting it to pick which one more effectively meets the criteria of the task. Yeah. So we?ve got the task out of two potential outcomes. It turns out that we proved that it?s just very, very good at getting that right, matched with human preference a lot of the time, because I think it?s got the raw intelligence, but it?s combined with the correct representation of the outputs, the fact that the outputs were created with an agentic task that is quite different to the way the grading model works, and we?re comparing it against criteria, not just kind of zero shot trying to ask the model to pick which one is better.

swyx [00:43:26]: Got it. Why is this an ELO? And not a percentage, like GDP-VAL?

George [00:43:31]: So the outputs look like documents, and there?s video outputs or audio outputs from some of the tasks. It has to make a video? Yeah, for some of the tasks. Some of the tasks.

swyx [00:43:43]: What task is that?

George [00:43:45]: I mean, it?s in the data set. Like be a YouTuber? It?s a marketing video.

Micah [00:43:49]: Oh, wow. What? Like model has to go find clips on the internet and try to put it together. The models are not that good at doing that one, for now, to be clear. It?s pretty hard to do that with a code editor. I mean, the computer stuff doesn?t work quite well enough and so on and so on, but yeah.

George [00:44:02]: And so there?s no kind of ground truth, necessarily, to compare against, to work out percentage correct. It?s hard to come up with correct or incorrect there. And so it?s on a relative basis. And so we use an ELO approach to compare outputs from each of the models between the task.

swyx [00:44:23]: You know what you should do? You should pay a contractor, a human, to do the same task. And then give it an ELO and then so you have, you have human there. It?s just, I think what?s helpful about GDPVal, the OpenAI one, is that 50% is meant to be normal human and maybe Domain Expert is higher than that, but 50% was the bar for like, well, if you?ve crossed 50, you are superhuman. Yeah.

Micah [00:44:47]: So we like, haven?t grounded this score in that exactly. I agree that it can be helpful, but we wanted to generalize this to a very large number. It?s one of the reasons that presenting it as ELO is quite helpful and allows us to add models and it?ll stay relevant for quite a long time. I also think it, it can be tricky looking at these exact tasks compared to the human performance, because the way that you would go about it as a human is quite different to how the models would go about it. Yeah.

swyx [00:45:15]: I also liked that you included Lama 4 Maverick in there. Is that like just one last, like...

Micah [00:45:20]: Well, no, no, no, no, no, no, it is the, it is the best model released by Meta. And... So it makes it into the homepage default set, still for now.

George [00:45:31]: Other inclusion that?s quite interesting is we also ran it across the latest versions of the web chatbots. And so we have...

swyx [00:45:39]: Oh, that?s right.

George [00:45:40]: Oh, sorry.

swyx [00:45:41]: I, yeah, I completely missed that. Okay.

George [00:45:43]: No, not at all. So that, which has a checkered pattern. So that is their harness, not yours, is what you?re saying. Exactly. And what?s really interesting is that if you compare, for instance, Claude 4.5 Opus using the Claude web chatbot, it performs worse than the model in our agentic harness. And so in every case, the model performs better in our agentic harness than its web chatbot counterpart, the harness that they created.

swyx [00:46:13]: Oh, my backwards explanation for that would be that, well, it?s meant for consumer use cases and here you?re pushing it for something.

Micah [00:46:19]: The constraints are different and the amount of freedom that you can give the model is different. Also, you like have a cost goal. We let the models work as long as they want, basically. Yeah. Do you copy paste manually into the chatbot? Yeah. Yeah. That?s, that was how we got the chatbot reference. We?re not going to be keeping those updated at like quite the same scale as hundreds of models.

swyx [00:46:38]: Well, so I don?t know, talk to a browser base. They?ll, they?ll automate it for you. You know, like I have thought about like, well, we should turn these chatbot versions into an API because they are legitimately different agents in themselves. Yes. Right. Yeah.

Micah [00:46:53]: And that?s grown a huge amount of the last year, right? Like the tools. The tools that are available have actually diverged in my opinion, a fair bit across the major chatbot apps and the amount of data sources that you can connect them to have gone up a lot, meaning that your experience and the way you?re using the model is more different than ever.

swyx [00:47:10]: What tools and what data connections come to mind when you say what?s interesting, what?s notable work that people have done?

Micah [00:47:15]: Oh, okay. So my favorite example on this is that until very recently, I would argue that it was basically impossible to get an LLM to draft an email for me in any useful way. Because most times that you?re sending an email, you?re not just writing something for the sake of writing it. Chances are context required is a whole bunch of historical emails. Maybe it?s notes that you?ve made, maybe it?s meeting notes, maybe it?s, um, pulling something from your, um, any of like wherever you at work store stuff. So for me, like Google drive, one drive, um, in our super base databases, if we need to do some analysis or some data or something, preferably model can be plugged into all of those things and can go do some useful work based on it. The things that like I find most impressive currently that I am somewhat surprised work really well in late 2025, uh, that I can have models use super base MCP to query read only, of course, run a whole bunch of SQL queries to do pretty significant data analysis. And. And make charts and stuff and can read my Gmail and my notion. And okay. You actually use that. That?s good. That?s, that?s, that?s good. Is that a cloud thing? To various degrees of order, but chat GPD and Claude right now, I would say that this stuff like barely works in fairness right now. Like.

George [00:48:33]: Because people are actually going to try this after they hear it. If you get an email from Micah, odds are it wasn?t written by a chatbot.

Micah [00:48:38]: So, yeah, I think it is true that I have never actually sent anyone an email drafted by a chatbot. Yet.

swyx [00:48:46]: Um, and so you can, you can feel it right. And yeah, this time, this time next year, we?ll come back and see where it?s going. Totally. Um, super base shout out another famous Kiwi. Uh, I don?t know if you?ve, you?ve any conversations with him about anything in particular on AI building and AI infra.

George [00:49:03]: We have had, uh, Twitter DMS, um, with, with him because we?re quite big, uh, super base users and power users. And we probably do some things more manually than we should in. In, in super base support line because you?re, you?re a little bit being super friendly. One extra, um, point regarding, um, GDP Val AA is that on the basis of the overperformance of the models compared to the chatbots turns out, we realized that, oh, like our reference harness that we built actually white works quite well on like gen generalist agentic tasks. This proves it in a sense. And so the agent harness is very. Minimalist. I think it follows some of the ideas that are in Claude code and we, all that we give it is context management capabilities, a web search, web browsing, uh, tool, uh, code execution, uh, environment. Anything else?

Micah [00:50:02]: I mean, we can equip it with more tools, but like by default, yeah, that?s it. We, we, we give it for GDP, a tool to, uh, view an image specifically, um, because the models, you know, can just use a terminal to pull stuff in text form into context. But to pull visual stuff into context, we had to give them a custom tool, but yeah, exactly. Um, you, you can explain an expert. No.

George [00:50:21]: So it?s, it, we turned out that we created a good generalist agentic harness. And so we, um, released that on, on GitHub yesterday. It?s called stirrup. So if people want to check it out and, and it?s a great, um, you know, base for, you know, generalist, uh, building a generalist agent for more specific tasks.

Micah [00:50:39]: I?d say the best way to use it is get clone and then have your favorite coding. Agent make changes to it, to do whatever you want, because it?s not that many lines of code and the coding agents can work with it. Super well.

swyx [00:50:51]: Well, that?s nice for the community to explore and share and hack on it. I think maybe in, in, in other similar environments, the terminal bench guys have done, uh, sort of the Harbor. Uh, and so it?s, it?s a, it?s a bundle of, well, we need our minimal harness, which for them is terminus and we also need the RL environments or Docker deployment thing to, to run independently. So I don?t know if you?ve looked at it. I don?t know if you?ve looked at the harbor at all, is that, is that like a, a standard that people want to adopt?

George [00:51:19]: Yeah, we?ve looked at it from a evals perspective and we love terminal bench and, and host benchmarks of, of, of terminal mention on artificial analysis. Um, we?ve looked at it from a, from a coding agent perspective, but could see it being a great, um, basis for any kind of agents. I think where we?re getting to is that these models have gotten smart enough. They?ve gotten better, better tools that they can perform better when just given a minimalist. Set of tools and, and let them run, let the model control the, the agentic workflow rather than using another framework that?s a bit more built out that tries to dictate the, dictate the flow. Awesome.

swyx [00:51:56]: Let?s cover the openness index and then let?s go into the report stuff. Uh, so that?s the, that?s the last of the proprietary art numbers, I guess. I don?t know how you sort of classify all these. Yeah.

Micah [00:52:07]: Or call it, call it, let?s call it the last of like the, the three new things that we?re talking about from like the last few weeks. Um, cause I mean, there?s a, we do a mix of stuff that. Where we?re using open source, where we open source and what we do and, um, proprietary stuff that we don?t always open source, like long context reasoning data set last year, we did open source. Um, and then all of the work on performance benchmarks across the site, some of them, we looking to open source, but some of them, like we?re constantly iterating on and so on and so on and so on. So there?s a huge mix, I would say, just of like stuff that is open source and not across the side. So that?s a LCR for people. Yeah, yeah, yeah, yeah.

swyx [00:52:41]: Uh, but let?s, let?s, let?s talk about open.

Micah [00:52:42]: Let?s talk about openness index. This. Here is call it like a new way to think about how open models are. We, for a long time, have tracked where the models are open weights and what the licenses on them are. And that?s like pretty useful. That tells you what you?re allowed to do with the weights of a model, but there is this whole other dimension to how open models are. That is pretty important that we haven?t tracked until now. And that?s how much is disclosed about how it was made. So transparency about data, pre-training data and post-training data. And whether you?re allowed to use that data and transparency about methodology and training code. So basically, those are the components. We bring them together to score an openness index for models so that you can in one place get this full picture of how open models are.

swyx [00:53:32]: I feel like I?ve seen a couple other people try to do this, but they?re not maintained. I do think this does matter. I don?t know what the numbers mean apart from is there a max number? Is this out of 20?

George [00:53:44]: It?s out of 18 currently, and so we?ve got an openness index page, but essentially these are points, you get points for being more open across these different categories and the maximum you can achieve is 18. So AI2 with their extremely open OMO3 32B think model is the leader in a sense.

swyx [00:54:04]: It?s hooking face.

George [00:54:05]: Oh, with their smaller model. It?s coming soon. I think we need to run, we need to get the intelligence benchmarks right to get it on the site.

swyx [00:54:12]: You can?t have it open in the next. We can not include hooking face. We love hooking face. We?ll have that, we?ll have that up very soon. I mean, you know, the refined web and all that stuff. It?s, it?s amazing. Or is it called fine web? Fine web. Fine web.

Micah [00:54:23]: Yeah, yeah, no, totally. Yep. One of the reasons this is cool, right, is that if you?re trying to understand the holistic picture of the models and what you can do with all the stuff the company?s contributing, this gives you that picture. And so we are going to keep it up to date alongside all the models that we do intelligence index on, on the site. And it?s just an extra view to understand.

swyx [00:54:43]: Can you scroll down to this? The, the, the, the trade-offs chart. Yeah, yeah. That one. Yeah. This, this really matters, right? Obviously, because you can be super open, but dumb. I mean, obviously goes the wrong way here. Right.

George [00:54:55]: A lot of people would like to see labs hill climb on the, and target.

Micah [00:55:00]: This is the access to hill climb. Yeah. Unfortunately, it might be fundamentally true that the, the slum will always go this direction because once you open something up, then everyone else can get to the level of what you have now.

swyx [00:55:11]: Well, so let me, let me tweak your points. You have, I have a point system, right? Like you have these like numbers on the point system and it go up to 18, you know, but like, just because I have a little bit of open data doesn?t mean I?m necessarily that much better in someone who put a lot of effort into their open ways, it is that it?s smarter. So I might, I might just mess with the point system to make sure that like, I?m accurately representing the, the contribution to the open openness.

Micah [00:55:36]: It is hard to wait for the materiality of the contribution to open source. We tried to make it so that it is quite well-defined and no one can disagree about which category things should be in. So we?re not saying this was a big contribution or a small contribution in terms of impact on the industry or anything. It?s just how much of your data did you release? I would say that it is still valid to say that we trained a model that?s not that smart, maybe even not at the frontier for a particular size category, but we chose to open up all the data, all the training code. That is a very useful exercise for the industry. And we want to recognize that even if the smartest model in the category.

swyx [00:56:18]: Yeah. And also a special shout out to NVIDIA and Emotron, which doesn?t get enough credit for the amount of stuff that they do. And honestly, it?s a sales enablement for NVIDIA as well. The fact that they can do this is... Side project.

Micah [00:56:29]: Totally. But I mean, it is true that NVIDIA have actually put an enormous amount of effort over the last year, especially into the Nematron models.

swyx [00:56:35]: Yeah. And so many people actually use it for synthetic data and stuff. It?s a pretty interesting secret of the industry that NVIDIA holds up all these guys.

Micah [00:56:45]: I mean, it?s in their interest for there to be more AI.

swyx [00:56:49]: So obviously, I think you want to push openness as having an index. Every index that you push has encoded some kind of opinion or value. Yes. I think one of the openness questions from this year was people messing with the license. And so Lama had this, like, if you have 700 million daily active users, you?re not allowed to use our model or you have to talk to us, something like that. So basically, like, what are your customers telling you about the kind of licensing worries that they have? Right. Because obviously, most people will never hit 700 million users.

Micah [00:57:21]: We have like a detailed breakdown of that in the openness index. And that was actually one of the initial questions that took us down the route of wanting to... Do this. Because, yeah, the simplest thing that, like, our opinion is, is that there is a lot of advantage to having, like, an official OSI license like MIT or Apache 2, because then the box is just checked. You don?t even need to read it because it?s just Apache 2 and you can do it ever you want and it?s fine. There are often very good reasons that companies don?t want to release language models with those completely open licenses. The index tells you. So if you get the top category, that?s one of those licenses. You?re totally good. And then... And then we?ve got some lower categories for when attribution is required and then when commercial use is not allowed. Yeah, they?re there.

swyx [00:58:05]: So that?s the openness index. Thank you for doing all those works. Let?s talk a little bit, or at least end the pod, on just the trend reports that you guys do, which is kind of a bit of the bread and butter how you make money. I highly encourage everyone to see George?s talk at World?s Fair, which gives a little bit of a preview. And you were very excited about talking about the smiling curve, or I don?t know what you call it. Yeah, yeah, yeah, yeah, let?s talk about that one. Let?s explain it for people. And I might, I might actually put it up because I don?t have it. Yeah, I?ve got to copy the slide, that?d be, that?d be excellent. It?s important for people to have in their head because, yeah, people only get the marketing message from the labs that, oh, we?re cutting costs all the time.

Micah [00:58:41]: Yeah, yeah, but it?s, it?s true. It?s it?s not the whole picture. So, okay. A couple of like the big trends that we track at Artificial Analysis over time and that like we?re always showing charts of on the trends page in these reports and stuff. One, that the cost of intelligence has been falling dramatically. Over the last couple of years, the best way to think about that is that the cost for each terror of intelligence has been dropping the, like one fact on that is that you can get intelligence at the level of GPT-4 for over a hundred times cheaper than GPT-4 was at launch right now. I think my number is a thousand actually.

swyx [00:59:16]: If you look at the Amazon Nova models, which are very, very cheap. Yeah.

Micah [00:59:21]: Like my, my conservative statement is normally like, but in fairness, this slide. Like I, we were actually saying for the podcast, right. It?s like maybe six months old now and it?s conceptually still correct, but like could actually probably do a tweak on the exact numbers because like the market?s moving so quickly.

swyx [00:59:37]: If you?re feeling kick it off, I mean, we?ll have this chart.

Micah [00:59:39]: I told people to watch the world?s fair talk, but let?s, let?s introduce what context makes you make something like this. There are two trends that seem to not make sense together, both of which we talk a lot about at Artificial Analysis and are very important to developers building stuff in AI. The first is that the cost of intelligence for each level of intelligence has been dropping dramatically over the last couple of years. We track the cost to run Artificial Analysis Intelligence Index for each bucket of Intelligence Index scores and each bucket, you just see the line go down really, really quickly and actually go down more quickly to each new level of intelligence that?s been achieved over the last couple of years. So the rate of that cost has actually been going up. So. Yeah. We?ve got that being true. And yet it is clearly possible to spend quite a lot more on AI inference now than it was a couple of years ago.

George [01:00:34]: NVIDIA stock go up.

swyx [01:00:36]: It?s going, it?s going really up. Uh, I just heard from a friend?s startup that just went through the shift zero. They?re spending $5,000 per employee on coding agents spend alone. That?s ridiculous. That?s an impressive number.

Micah [01:00:49]: We need to get our numbers up. We?re, uh, we?re, we?re not quite hitting, hitting.

swyx [01:00:52]: Well, I was like, it?s so high down. I?m like, are you doing something wrong? Yeah.

Micah [01:00:55]: Cause there are some efficiency questions along the way, but like you can make AI inference useful to that level in a bunch of ways that I can imagine. Right. Yeah. Um, I, I don?t think that?s that nuts. Um, but basically the, the reason we made this slide to answer the question, right. Is to show that the crazy thing is that it is actually true. We?ve had this hundred X to a thousand X decline in the cost of GPT four level intelligence on the left-hand side. And yet on the right-hand side, because the multipliers are so big for the fact that even though. Small models can do GPT four level. Now we still want to use big models and probably bigger than ever models to, um, do frontier level intelligence. We?ve got reasoning models using tokens, and then we?re throwing them in these, them in these agentic workflows where they?re consuming enormous numbers of input tokens and making enormous numbers of output tokens working for a really long time. Those two things taken together, get you back to, we can spend enormously more today than we could a couple of years ago. Yep.

George [01:01:50]: I think that?s right. There?s a number of drivers at play and we kind of outline kind of. Six key ones here. Um, but you know, as complex as changing quickly, all of these have changed very dramatically in the last, uh, in the last 12 months.

swyx [01:02:04]: Let?s pick on hardware efficiency since you also have, you also track hardware stuff. And I think the general assertion or the message is that the efficiency from next gen Nvidia chips is actually not 4X. So you have what? 3X or 4X? You have 3X in here and it?s, it?s like 2X maybe, or it?s more of like a. Power story rather than like a share sort of compute tokens efficiency story. But yeah, what, what?s going on in, in hardware. Okay.

Micah [01:02:31]: So the, the, the, the odds, unfortunately, uh, is it depends and it just depends massively on like so many things across a bunch of different types of workloads and ways to think about it. So one of the simplest ways to think about this is to take single relevant model, to think about serving it at speeds that are realistic for what you actually might want to hit. And can afford to hit, and then think about the throughput per GPU that you can achieve serving the model at those speeds. Rease. One of the reasons that?s important is that there?s a trade-off between the throughput per GPU that you can achieve and the per user speed that you can achieve. And as a, it costs more to serve stuff fast to, to users. When you run all of that for especially big sparse models, you can get a lot better than two or three X gain going from Hopper to Blackwell generation to video. I am. This shouldn?t be too controversial. Let?s say I?m like, I?m. I?m pretty confident that Blackwell has delivered pretty enormous gains and that the next couple of years of NVIDIA?s roadmap are going to continue to deliver quite enormous gains and that those will actually come through as lower total cost per token to the companies that are running models on them and will allow bigger models will allow way more tokens to be made for lower cost and that that?s gonna continue these things also stack on all of the software and model improvements. So basically like my prediction across like both sides of that, like smile chart, uh, that we?re gonna see the left-hand side continue to be true and probably like for another order of magnitude and the right-hand side continue to be true for another order of magnitude, and that?s gonna enable a whole lot of things.

swyx [01:04:12]: Okay. Well, I?ll push on, uh, let?s go back to the, the, the small chart. I?ll push back on sparsity, right? Uh, we?ve gone a long way on sparsity. Deep seek was a major pusher of fine grain experts. Let?s call it. Yep. Right. Well, I have a mental number of sparsity in terms of let?s say active params versus total params. And that number went from 25%, let?s say down to like 15, right? You obviously can?t really go below, I don?t know, five. Is that obvious? So there?s a lower limit to, to sparsity is what I?m saying. I don?t know that that?s that obvious actually. All right.

Micah [01:04:45]: Um, there, there must be a limit somewhere, right? Yeah, exactly. But we?ve got numbers in the wild that are quite a lot lower than that right now. So the GBD OSS models, like the big ones at about 5%, um, active, Kimmy K2, is it like 3% active? Oh, okay. I think, pretty sure.

swyx [01:05:05]: I?ve looked at those numbers. I calculated them. I don?t remember. Yeah. But I remember thinking like, this must be it.

George [01:05:11]: Your 5% is exactly like around the ballpark for the open weights models of, of what?s released today. I think one interesting that gives me kind of pause when thinking that it won?t go, the sparsity won?t go high. Or the number of percentage of active parameters lower is that we, in our benchmark, see a lot of performance, uh, correlated more with, uh, total parameters than active and not that correlated with how sparse, like the models are. Our accuracy, benchmark as part of a omniscience, it?s very correlated with total. It?s not correlated with, with active, uh, parameters, which I think is very at all, which is very, very interesting. And so I think, yeah, they could, they could be quite. A bit, um, to go here. Awesome.

swyx [01:05:55]: Well, we don?t have that much time, but I w I did want to leave some room to cover reasoning and non-reasoning models and token efficiency. Let?s do that. So at a high, at a super high level, people have to classify this binary thing of reasoning versus non-reasoning. People who are insider have some discomfort with that because basically you just have to think tag or no think tag. How have you guys decided to approach this? And also how does that laid out in, over the course of the year where we have things like GPT-5, which is a model. Right.

Micah [01:06:24]: Let?s say GPT-5 in chat GPT, the consumer experience as a model router, when you?re hitting the API, like we can, you can pick the different versions and you can pick reasoning strength of the different versions, but that, that goes to why this is now such a complex thing. So earlier this year, and probably when you and George last spoke for the AI engineers world?s fair, we had this great slide that was super easy, where we would show that the average reasoning model is using 10 times the number of tokens per query in our intelligence index as the average non-reasoning model. And there was this moment where that was a pretty clear distinction and extremely useful to look at it just like that. Definitely no longer the case, not least because you can think about reasoning strength for a bunch of these different models, but particularly because different models have wildly different token efficiency now, more than an order of magnitude in difference. That means that the way that you probably need to think about cost for any application is to use something like our cost around intelligence index metric as the starting point. Right. for what it?s going to look like for these different models, these different reasoning strengths, and this continuous spectrum from non-reasoning to reasoning. That?s basically like where we?re at. So we will still show reasoning and non-reasoning and define reasoning as when there is that separated chain of thought that you?re getting at a different parameter in an API normally, but it doesn?t necessarily anymore mean that that model is actually going to have longer end-to-end latency that is going to use more tokens than something that is branded

swyx [01:07:51]: in a non-reasoning model for the same task. That?s true. I think 5.1 was it. And then 5.1 Codex had these chart, which was super nice of this, like, let?s say bottom 10 percentile query being faster, but top 10 percentile being longer. And that?s a kind of the efficiency chart

Micah [01:08:10]: you want to see, right? Yeah. So that is an extra thing. Let?s say that we?ve got, that?s a really important extra thing though, right? That you?ve got not just the average number of token span used by the model, which we cover really well right now, but the behavior that you want in the model is it to use more tokens when it needs more tokens and not to use more tokens when it doesn?t need more tokens. So that?s what OpenAI, we?re basically claiming that 5.1 Codex is better at. We don?t actually publish anything on this right now, but have tracked it a bunch internally in our internal analytics on evals across all the models that we run, where we look at the difficulty to questions and the correlation between token usage and difficulty and net net, surprise, surprise, like models have got. I think going into next year, that?s going to be really important, especially as you multiply it by the number of steps in an agentic workflow that a model has to take to get to an answer. We are going to care a lot about token efficiency and number of turns efficiency for getting to what

swyx [01:09:08]: we want. Which would you rather have token efficiency or number of turns efficiency? Or like, which is more important to work on?

Micah [01:09:16]: it depends on the application and both are going to be really important.

George [01:09:18]: Uh, yeah.

Micah [01:09:20]: Well, total cost is just-

swyx [01:09:21]: TalBench Retail, TalBench Airline.

George [01:09:23]: Yeah. Interestingly in Tal, um, Tal2Bench Telecom, it?s cheaper to run, you know, on a per token basis, more expensive models like a GBD5 compared to some smaller open source models, because the, um, some of the GBD5, for instance, uh, got to the answer faster. And so it was able to resolve the customer?s query faster and fewer turns. And maybe it used more tokens per turn, but it certainly- It?s not going to cost more per token. So you would always rather use GBD5 in, in, in, in that scenario. And so I think that?s what, that?s where we?re getting to. I think number of turns is, it?s going to be a metric that we?re going to be talking about a lot more. And, uh, I think it?ll be something that people want to really start to think about, uh, a lot more.

swyx [01:10:06]: There?s a trade-off in benchmarking here where most benchmarks needs to be one turn to be autonomous, to be parallelized and all that. But most, a lot of real life use cases need to be multi-turn and especially like quick multi-turns. So you can align. Yeah.

Micah [01:10:19]: Yeah. I mean, I, I would say that historically benchmarks have been single turn, but I wouldn?t say they need to be at all into the future, right? Like we have a couple of agentic benchmarks in the index right now and GDP that we were talking about. We let the models do up to a hundred turns and, um, our stirrup agent to do that evil. And we?re going to build similar stuff like that in the future. It definitely is hard and you?ve got whole kinds of infrastructure problems to run that and exactly as you say, parallelize it because we need to run that on hundreds of models and we want to do that really fast when you want us to come out and with labs want us to run it on their models,

swyx [01:10:53]: but you can do it. We?re putting in the work to build that stuff and it?s going to be great. Okay. So we?ve covered, I mean, there?s a lot more to cover and you haven?t even touched on

George [01:11:01]: multimodal, which is huge. We also do speech benchmarking, image benchmarking, uh, video

swyx [01:11:09]: benchmarking, hardware. I like the way that you?ve done it because they?re very smart, which is a video takes a long time. So you pre-generate, right? So then people just pick their preferences and you can see the, the overall arena results. And you also avoid like any sensitivity issues

Micah [01:11:23]: around the unsafe content that is being generated. Yeah. And you can see it as a good, good thing, a bad thing, depending on what your view is. But it means that we have a quite active creative direction approach to trying to understand what creative professionals and users want to do with those image and video models. And so that we can be directing the arenas in our categories toward gathering data, votes on what people care about. One call out actually to listeners, like if you are using our arenas is that you can submit requests to us for things that we should cover. I didn?t know that. Yeah. Understudied categories, areas that you think the models are bad at and the labs don?t focus on enough. Like if you want something solved, one of the levers that you have is send us a couple of prompts on it. We might be able to get a category going on it. And this thing that we were talking about earlier, right? That once things get measured, they can get targeted. You can make that work for you.

swyx [01:12:18]: For me as a content creator, infographics, very needed. I took the latest deep seek paper and they had some descriptions of their search agents and their coding agents and I put it in and I created an infographic. And I just think like as I said, industrial use case that doesn?t require a lot of, I guess, design tastes, but just requires some, you need to conform to some preset references, which is something that is increasingly important, especially in like the nano banana series. But yeah, and I think that?s the key there. I think it?s important to be able to I think OpenAI is releasing Image 2 soon, which is going to have that. So I think it?s all of a kind where people need to incentivize workhorse use cases and not just art. I don?t know. Totally. Yeah. What are we going to be talking about next year? What?s emerging that you?re seeing and maybe not in the discussion?

Micah [01:13:06]: The first answer that I?ll give to that is the boring answer is that on most of our charts, the lines go in a particular direction and our overall prediction is the lines are going to keep going in that direction. We?re going to do a lot and do a lot to be as useful as possible to developers and companies to measure what?s important on every one of those and along those lines. But I think we?re going to talk about similar stuff. It?s just that we?re going to have continued on this trajectory for another year and things are going to feel pretty different because of that happening. I know this is the boring answer to that question. No, no.

swyx [01:13:36]: I mean, I?m a fan of things that, truths that don?t change because you can build and plan for that. And I think in media in general, in the podcast business, newsletters, you know, there?s a Twitter business, Twitter business, people are addicted to change, like, oh, everything?s breaking. Everything?s, no, like there?s some truths that aren?t just constants that you can plan on and build. And yeah.

George [01:13:58]: I think one of the truths is that the demand for AI intelligence and smarter AI intelligence is going to be insatiable. Some people disagree that, okay, once we reach certain thresholds, then you don?t need more intelligence. I think to that, I ask people, have they ever worked with? Or managed someone in a work environment and wouldn?t press the button that they were smarter to make them smarter or better at their job or would they never press that for themselves? And I?m not sure that that?s, that?s the case, but I think for artificial analysis, we?ll keep benchmarking raw intelligence, but we also want to think about it and explore models more deeply across other axes as well. I think hallucinations, the start of that, but we?re getting into wanting to support people and understanding, okay, the behavior, the person personalities. Of the models to help people make more nuanced decisions, you?re going to have a personality bench.

swyx [01:14:52]: Maybe that is a direction that Chadji opening eyes leaning into a lot. So if you manage to solve that, you should definitely talk to Fiji and Roon. Oh, okay. Yeah. So what is going to be included in, let?s say like a V3 of the intelligence index, because obviously you?re going to saturate in March.

Micah [01:15:10]: Why don?t we break it now? How soon is the podcast going to come out? Whenever you want. Okay. So we?re at V3 right now. So the, so the, the, the version that we, that?s going inside is, is, is, is V3 V4 is what we?re going to call the next, you know, major of it. Surprise, surprise. We?re going to be adding several of the things that we?ve actually talked about today that we?ve launched over the last few weeks. So it?s not, that?s not going to be wildly shocking, but some of the things that are most exciting is that adding GDP value is going to give us this general agentic performance in a really strong way in intelligence index and in critical point, the, um, physics, EBL, George was talking about similar to frontier math. Yeah. That?s very interesting. That gives us completely new view with a brand new data set of very, very hard research problems. We are going to be using Omniscience and we are going to be using hallucination rate. The exact way is that all of those are going to come together. Um, The waitings is going to be hard because the numbers are different. Yeah. We?re going to make sure that we don?t do anything to cause odd distortions and stuff that could be misleading. But every time you version it, you have a one-time reset of the Exactly. Yeah. That?s exactly how we think about it. We will make sure that within each version number that there?s no parliamentary issue. No drift in any of the scores so that people can rely on them and reference them. You just have to watch out for that version number. Once it?s v4.1, those numbers won?t be compatible with v4.

swyx [01:16:23]: Of course. There?s a little bit of debate over the accuracy of TileBench. I don?t know if you?re clued in to what?s going on. Apparently, a very high number of TileBench tests are impossible.

Micah [01:16:34]: Potentially for the earlier versions, Tile2Bench Telecom, we?re pretty convinced is pretty good. If anything, the only issue there is that models have got very good at doing it. And so, like anything... Tile3. Yeah.

swyx [01:16:49]: On we go. Yeah, on we go. Okay, well, thank you so much for providing such a great service to the industry. I?m glad to at least know you guys before you got famous and now you are famous.

Micah [01:16:59]: Oh, look, our pleasure. We really appreciate your support along the way. I wasn?t kidding at the start, right? That it was a quite material moment for us when artificial analysis was covered on Latent Space. Some random guy. And San Francisco mentions you. I was a fan of Latent Space for like a year before you mentioned us. So, I?d been listening. I don?t think I was familiar with you personally yet at that point. But I listened to your voice probably for many, many hours. And so, once you mentioned it, I got to get to know you and meet you for the first time nearly a couple of years ago. It was really cool, honestly. So, yeah, it?s great to be here.

George [01:17:36]: And thanks for being such a great member of the community and kind of spotlighting projects, projects which don?t have attention and bringing them to your audience. Yeah.

swyx [01:17:44]: Well, actually, so it wasn?t me, right? Someone in the Discord dropped it in our Discord. And I rely on our community and it kind of feeds itself, right? Nice. So, someone brought it to my attention. I don?t know who. We should probably go back and check. But once I saw it, I was like, this looks good. This is something I always wanted. I wanted to build it. I was too shy or dumb or lazy to build it. And you guys did. And now it?s a whole thing. So, thank you for being here.

George [01:18:08]: I built some really cool other stuff like this pod. Yeah. Yeah. Totally. So, thank you. That?s it. Great. Cool. Thanks.



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-08
Link to episode

[State of Evals] LMArena's $1.7B Vision ? Anastasios Angelopoulos, LMArena

We are reupping this episode after LMArena announced their fresh Series A (https://www.theinformation.com/articles/ai-evaluation-startup-lmarena-valued-1-7-billion-new-funding-round?rc=luxwz4), raising $150m at a $1.7B valuation, with $30M annualized consumption revenue (aka $2.5m MRR) after their September evals product launch.

?-

From building LMArena in a Berkeley basement to raising $100M and becoming the de facto leaderboard for frontier AI, Anastasios Angelopoulos returns to Latent Space to recap 2025 in one of the most influential platforms in AI?trusted by millions of users, every major lab, and the entire industry to answer one question: which model is actually best for real-world use cases? We caught up with Anastasios live at NeurIPS 2025 to dig into the origin story (spoiler: it started as an academic project incubated by Anjney Midha at a16z, who formed an entity and gave grants before they even committed to starting a company), why they decided to spin out instead of staying academic or nonprofit (the only way to scale was to build a company), how they?re spending that $100M (inference costs, React migration off Gradio, and hiring world-class talent across ML, product, and go-to-market), the leaderboard delusion controversy and why their response demolished the paper?s claims (factual errors, misrepresentation of open vs. closed source sampling, and ignoring the transparency of preview testing that the community loves), why platform integrity comes first (the public leaderboard is a charity, not a pay-to-play system?models can?t pay to get on, can?t pay to get off, and scores reflect millions of real votes), how they?re expanding into occupational verticals (medicine, legal, finance, creative marketing) and multimodal arenas (video coming soon), why consumer retention is earned every single day (sign-in and persistent history were the unlock, but users are fickle and can leave at any moment), and his vision for Arena as the central evaluation platform that provides the North Star for the industry?constantly fresh, immune to overfitting, and grounded in millions of real-world conversations from real users.

We discuss:

* The $100M raise: use of funds is primarily inference costs (funding free usage for tens of millions of monthly conversations), React migration off Gradio (custom loading icons, better developer hiring, more flexibility), and hiring world-class talent

* The scale: 250M+ conversations on the platform, tens of millions per month, 25% of users do software for a living, and half of users are now logged in

* The leaderboard illusion controversy: Cohere researchers claimed undisclosed private testing created inequities, but Arena?s response demolished the paper?s factual errors (misrepresented open vs. closed source sampling, ignored transparency of preview testing that the community loves)

* Why preview testing is loved by the community: secret codenames (Gemini Nano Banana, named after PM Naina?s nickname), early access to unreleased models, and the thrill of being first to vote on frontier capabilities

* The Nano Banana moment: changed Google?s market share overnight, billions of dollars in stock movement, and validated that multimodal models (image generation, video) are economically critical for marketing, design, and AI-for-science

* New categories: occupational and expert arenas (medicine, legal, finance, creative marketing), Code Arena, and video arena coming soon

Full Video Episode

Timestamps

00:00:00 Introduction: Anastasios from Arena and the LM Arena Journey00:01:36 The Anjney Midha Incubation: From Berkeley Basement to Startup00:02:47 The Decision to Start a Company: Scaling Beyond Academia00:03:38 The $100M Raise: Use of Funds and Platform Economics00:05:10 Arena's User Base: 5M+ Users and Diverse Demographics00:06:02 The Competitive Landscape: Artificial Analysis, AI.xyz, and Arena's Differentiation00:08:12 Educational Value and Learning from the Community00:08:41 Technical Migration: From Gradio to React and Platform Evolution00:10:18 Leaderboard Delusion Paper: Addressing Critiques and Maintaining Integrity00:12:29 Nano Banana Moment: How Preview Models Create Market Impact00:13:41 Multimodal AI and Image Generation: From Skepticism to Economic Value00:15:37 Core Principles: Platform Integrity and the Public Leaderboard as Charity00:18:29 Future Roadmap: Expert Categories, Multimodal, Video, and Occupational Verticals00:19:10 API Strategy and Focus: Doing One Thing Well00:19:51 Community Management and Retention: Sign-In, History, and Daily Value00:22:21 Partnerships and Agent Evaluation: From Devon to Full-Featured Harnesses00:21:49 Hiring and Building a High-Performance Team



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-06
Link to episode

[NeurIPS Best Paper] 1000 Layer Networks for Self-Supervised RL ? Kevin Wang et al, Princeton

From undergraduate research seminars at Princeton to winning Best Paper award at NeurIPS 2025, Kevin Wang, Ishaan Javali, Micha? Bortkiewicz, Tomasz Trzcinski, Benjamin Eysenbach defied conventional wisdom by scaling reinforcement learning networks to 1,000 layers deep?unlocking performance gains that the RL community thought impossible. We caught up with the team live at NeurIPS to dig into the story behind RL1000: why deep networks have worked in language and vision but failed in RL for over a decade (spoiler: it?s not just about depth, it?s about the objective), how they discovered that self-supervised RL (learning representations of states, actions, and future states via contrastive learning) scales where value-based methods collapse, the critical architectural tricks that made it work (residual connections, layer normalization, and a shift from regression to classification), why scaling depth is more parameter-efficient than scaling width (linear vs. quadratic growth), how Jax and GPU-accelerated environments let them collect hundreds of millions of transitions in hours (the data abundance that unlocked scaling in the first place), the ?critical depth? phenomenon where performance doesn?t just improve?it multiplies once you cross 15M+ transitions and add the right architectural components, why this isn?t just ?make networks bigger? but a fundamental shift in RL objectives (their code doesn?t have a line saying ?maximize rewards??it?s pure self-supervised representation learning), how deep teacher, shallow student distillation could unlock deployment at scale (train frontier capabilities with 1000 layers, distill down to efficient inference models), the robotics implications (goal-conditioned RL without human supervision or demonstrations, scaling architecture instead of scaling manual data collection), and their thesis that RL is finally ready to scale like language and vision?not by throwing compute at value functions, but by borrowing the self-supervised, representation-learning paradigms that made the rest of deep learning work.

We discuss:

* The self-supervised RL objective: instead of learning value functions (noisy, biased, spurious), they learn representations where states along the same trajectory are pushed together, states along different trajectories are pushed apart?turning RL into a classification problem

* Why naive scaling failed: doubling depth degraded performance, doubling again with residual connections and layer norm suddenly skyrocketed performance in one environment?unlocking the ?critical depth? phenomenon

* Scaling depth vs. width: depth grows parameters linearly, width grows quadratically?depth is more parameter-efficient and sample-efficient for the same performance

* The Jax + GPU-accelerated environments unlock: collecting thousands of trajectories in parallel meant data wasn?t the bottleneck, and crossing 15M+ transitions was when deep networks really paid off

* The blurring of RL and self-supervised learning: their code doesn?t maximize rewards directly, it?s an actor-critic goal-conditioned RL algorithm, but the learning burden shifts to classification (cross-entropy loss, representation learning) instead of TD error regression

* Why scaling batch size unlocks at depth: traditional RL doesn?t benefit from larger batches because networks are too small to exploit the signal, but once you scale depth, batch size becomes another effective scaling dimension

?

RL1000 Team (Princeton)

* 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities: https://openreview.net/forum?id=s0JVsx3bx1

Full Video Episode

Timestamps

00:00:00 Introduction: Best Paper Award and NeurIPS Poster Experience00:01:11 Team Introductions and Princeton Research Origins00:03:35 The Deep Learning Anomaly: Why RL Stayed Shallow00:04:35 Self-Supervised RL: A Different Approach to Scaling00:05:13 The Breakthrough Moment: Residual Connections and Critical Depth00:07:15 Architectural Choices: Borrowing from ResNets and Avoiding Vanishing Gradients00:07:50 Clarifying the Paper: Not Just Big Networks, But Different Objectives00:08:46 Blurring the Lines: RL Meets Self-Supervised Learning00:09:44 From TD Errors to Classification: Why This Objective Scales00:11:06 Architecture Details: Building on Braw and SymbaFowl00:12:05 Robotics Applications: Goal-Conditioned RL Without Human Supervision00:13:15 Efficiency Trade-offs: Depth vs Width and Parameter Scaling00:15:48 JAX and GPU-Accelerated Environments: The Data Infrastructure00:18:05 World Models and Next State Classification00:22:37 Unlocking Batch Size Scaling Through Network Capacity00:24:10 Compute Requirements: State-of-the-Art on a Single GPU00:21:02 Future Directions: Distillation, VLMs, and Hierarchical Planning00:27:15 Closing Thoughts: Challenging Conventional Wisdom in RL Scaling



Get full access to Latent.Space at www.latent.space/subscribe
2026-01-02
Link to episode

[State of Code Evals] After SWE-bench, Code Clash & SOTA Coding Benchmarks recap ? John Yang

From creating SWE-bench in a Princeton basement to shipping CodeClash, SWE-bench Multimodal, and SWE-bench Multilingual, John Yang has spent the last year and a half watching his benchmark become the de facto standard for evaluating AI coding agents?trusted by Cognition (Devin), OpenAI, Anthropic, and every major lab racing to solve software engineering at scale. We caught up with John live at NeurIPS 2025 to dig into the state of code evals heading into 2026: why SWE-bench went from ignored (October 2023) to the industry standard after Devin?s launch (and how Walden emailed him two weeks before the big reveal), how the benchmark evolved from Django-heavy to nine languages across 40 repos (JavaScript, Rust, Java, C, Ruby), why unit tests as verification are limiting and long-running agent tournaments might be the future (CodeClash: agents maintain codebases, compete in arenas, and iterate over multiple rounds), the proliferation of SWE-bench variants (SWE-bench Pro, SWE-bench Live, SWE-Efficiency, AlgoTune, SciCode) and how benchmark authors are now justifying their splits with curation techniques instead of just ?more repos,? why Tau-bench?s ?impossible tasks? controversy is actually a feature not a bug (intentionally including impossible tasks flags cheating), the tension between long autonomy (5-hour runs) vs. interactivity (Cognition?s emphasis on fast back-and-forth), how Terminal-bench unlocked creativity by letting PhD students and non-coders design environments beyond GitHub issues and PRs, the academic data problem (companies like Cognition and Cursor have rich user interaction data, academics need user simulators or compelling products like LMArena to get similar signal), and his vision for CodeClash as a testbed for human-AI collaboration?freeze model capability, vary the collaboration setup (solo agent, multi-agent, human+agent), and measure how interaction patterns change as models climb the ladder from code completion to full codebase reasoning.

We discuss:

* John?s path: Princeton ? SWE-bench (October 2023) ? Stanford PhD with Diyi Yang and the Iris Group, focusing on code evals, human-AI collaboration, and long-running agent benchmarks

* The SWE-bench origin story: released October 2023, mostly ignored until Cognition?s Devin launch kicked off the arms race (Walden emailed John two weeks before: ?we have a good number?)

* SWE-bench Verified: the curated, high-quality split that became the standard for serious evals

* SWE-bench Multimodal and Multilingual: nine languages (JavaScript, Rust, Java, C, Ruby) across 40 repos, moving beyond the Django-heavy original distribution

* The SWE-bench Pro controversy: independent authors used the ?SWE-bench? name without John?s blessing, but he?s okay with it (?congrats to them, it?s a great benchmark?)

* CodeClash: John?s new benchmark for long-horizon development?agents maintain their own codebases, edit and improve them each round, then compete in arenas (programming games like Halite, economic tasks like GDP optimization)

* SWE-Efficiency (Jeffrey Maugh, John?s high school classmate): optimize code for speed without changing behavior (parallelization, SIMD operations)

* AlgoTune, SciCode, Terminal-bench, Tau-bench, SecBench, SRE-bench: the Cambrian explosion of code evals, each diving into different domains (security, SRE, science, user simulation)

* The Tau-bench ?impossible tasks? debate: some tasks are underspecified or impossible, but John thinks that?s actually a feature (flags cheating if you score above 75%)

* Cognition?s research focus: codebase understanding (retrieval++), helping humans understand their own codebases, and automatic context engineering for LLMs (research sub-agents)

* The vision: CodeClash as a testbed for human-AI collaboration?vary the setup (solo agent, multi-agent, human+agent), freeze model capability, and measure how interaction changes as models improve

?

John Yang

* SWE-bench: https://www.swebench.com

* X: https://x.com/jyangballin

Full Video Episode

Timestamps

00:00:00 Introduction: John Yang on SWE-bench and Code Evaluations00:00:31 SWE-bench Origins and Devon's Impact on the Coding Agent Arms Race00:01:09 SWE-bench Ecosystem: Verified, Pro, Multimodal, and Multilingual Variants00:02:17 Moving Beyond Django: Diversifying Code Evaluation Repositories00:03:08 Code Clash: Long-Horizon Development Through Programming Tournaments00:04:41 From Halite to Economic Value: Designing Competitive Coding Arenas00:06:04 Ofir's Lab: SWE-ficiency, AlgoTune, and SciCode for Scientific Computing00:07:52 The Benchmark Landscape: TAU-bench, Terminal-bench, and User Simulation00:09:20 The Impossible Task Debate: Refusals, Ambiguity, and Benchmark Integrity00:12:32 The Future of Code Evals: Long Autonomy vs Human-AI Collaboration00:14:37 Call to Action: User Interaction Data and Codebase Understanding Research



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-31
Link to episode

[State of Post-Training] From GPT-4.1 to 5.1: RLVR, Agent & Token Efficiency ? Josh McGrath, OpenAI

From pre-training data curation to shipping GPT-4o, o1, o3, and now GPT-5 thinking and the shopping model, Josh McGrath has lived through the full arc of OpenAI?s post-training evolution?from the PPO vs DPO debates of 2023 to today?s RLVR era, where the real innovation isn?t optimization methods but data quality, signal trust, and token efficiency. We sat down with Josh at NeurIPS 2025 to dig into the state of post-training heading into 2026: why RLHF and RLVR are both just policy gradient methods (the difference is the input data, not the math), how GRPO from DeepSeek Math was underappreciated as a shift toward more trustworthy reward signals (math answers you can verify vs. human preference you can?t), why token efficiency matters more than wall-clock time (GPT-5 to 5.1 bumped evals and slashed tokens), how Codex has changed his workflow so much he feels ?trapped? by 40-minute design sessions followed by 15-minute agent sprints, the infrastructure chaos of scaling RL (?way more moving parts than pre-training?), why long context will keep climbing but agents + graph walks might matter more than 10M-token windows, the shopping model as a test bed for interruptability and chain-of-thought transparency, why personality toggles (Anton vs Clippy) are a real differentiator users care about, and his thesis that the education system isn?t producing enough people who can do both distributed systems and ML research?the exact skill set required to push the frontier when the bottleneck moves every few weeks.

We discuss:

* Josh?s path: pre-training data curation ? post-training researcher at OpenAI, shipping GPT-4o, o1, o3, GPT-5 thinking, and the shopping model

* Why he switched from pre-training to post-training: ?Do I want to make 3% compute efficiency wins, or change behavior by 40%??

* The RL infrastructure challenge: way more moving parts than pre-training (tasks, grading setups, external partners), and why babysitting runs at 12:30am means jumping into unfamiliar code constantly

* How Codex has changed his workflow: 40-minute design sessions compressed into 15-minute agent sprints, and the strange ?trapped? feeling of waiting for the agent to finish

* The RLHF vs RLVR debate: both are policy gradient methods, the real difference is data quality and signal trust (human preference vs. verifiable correctness)

* Why GRPO (from DeepSeek Math) was underappreciated: not just an optimization trick, but a shift toward reward signals you can actually trust (math answers over human vibes)

* The token efficiency revolution: GPT-5 to 5.1 bumped evals and slashed tokens, and why thinking in tokens (not wall-clock time) unlocks better tool-calling and agent workflows

* Personality toggles: Anton (tool, no warmth) vs Clippy (friendly, helpful), and why Josh uses custom instructions to make his model ?just a tool?

* The router problem: having a router at the top (GPT-5 thinking vs non-thinking) and an implicit router (thinking effort slider) creates weird bumps, and why the abstractions will eventually merge

* Long context: climbing Graph Blocks evals, the dream of 10M+ token windows, and why agents + graph walks might matter more than raw context length

* Why the education system isn?t producing enough people who can do both distributed systems and ML research, and why that?s the bottleneck for frontier labs

* The 2026 vision: neither pre-training nor post-training is dead, we?re in the fog of war, and the bottleneck will keep moving (so emotional stability helps)

?

Josh McGrath

* OpenAI: https://openai.com

* X: https://x.com/j_mcgraph

Full Video Episode

Timestamps

00:00:00 Introduction: Josh McGrath on Post-Training at OpenAI00:04:37 The Shopping Model: Black Friday Launch and Interruptability00:07:11 Model Personality and the Anton vs Clippy Divide00:08:26 Beyond PPO vs DPO: The Data Quality Spectrum in RL00:01:40 Infrastructure Challenges: Why Post-Training RL is Harder Than Pre-Training00:13:12 Token Efficiency: The 2D Plot That Matters Most00:03:45 Codex Max and the Flow Problem: 40 Minutes of Planning, 15 Minutes of Waiting00:17:29 Long Context and Graph Blocks: Climbing Toward Perfect Context00:21:23 The ML-Systems Hybrid: What's Hard to Hire For00:24:50 Pre-Training Isn't Dead: Living Through Technological Revolution



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-31
Link to episode

[State of RL/Reasoning] IMO/IOI Gold, OpenAI o3/GPT-5, and Cursor Composer ? Ashvin Nair, Cursor

From Berkeley robotics and OpenAI?s 2017 Dota-era internship to shipping RL breakthroughs on GPT-4o, o1, and o3, and now leading model development at Cursor, Ashvin Nair has done it all. We caught up with Ashvin at NeurIPS 2025 to dig into the inside story of OpenAI?s reasoning team (spoiler: it went from a dozen people to 300+), why IOI Gold felt reachable in 2022 but somehow didn?t change the world when o1 actually achieved it, how RL doesn?t generalize beyond the training distribution (and why that means you need to bring economically useful tasks into distribution by co-designing products and models), the deeper lessons from the RL research era (2017?2022) and why most of it didn?t pan out because the community overfitted to benchmarks, how Cursor is uniquely positioned to do continual learning at scale with policy updates every two hours and product-model co-design that keeps engineers in the loop instead of context-switching into ADHD hell, and his bet that the next paradigm shift is continual learning with infinite memory?where models experience something once (a bug, a mistake, a user pattern) and never forget it, storing millions of deployment tokens in weights without overloading capacity.

We discuss:

* Ashvin?s path: Berkeley robotics PhD ? OpenAI 2017 intern (Dota era) ? o1/o3 reasoning team ? Cursor ML lead in three months

* Why robotics people are the most grounded at NeurIPS (they work with the real world) and simulation people are the most unhinged (Lex Fridman?s take)

* The IOI Gold paradox: ?If you told me we?d achieve IOI Gold in 2022, I?d assume we could all go on vacation?AI solved, no point working anymore. But life is still the same.?

* The RL research era (2017?2022) and why most of it didn?t pan out: overfitting to benchmarks, too many implicit knobs to tune, and the community rewarding complex ideas over simple ones that generalize

* Inside the o1 origin story: a dozen people, conviction from Ilya and Jakob Pachocki that RL would work, small-scale prototypes producing ?surprisingly accurate reasoning traces? on math, and first-principles belief that scaled

* The reasoning team grew from ~12 to 300+ people as o1 became a product and safety, tooling, and deployment scaled up

* Why Cursor is uniquely positioned for continual learning: policy updates every two hours (online RL on tab), product and ML sitting next to each other, and the entire software engineering workflow (code, logs, debugging, DataDog) living in the product

* Composer as the start of product-model co-design: smart enough to use, fast enough to stay in the loop, and built by a 20?25 person ML team with high-taste co-founders who code daily

* The next paradigm shift: continual learning with infinite memory?models that experience something once (a bug, a user mistake) and store it in weights forever, learning from millions of deployment tokens without overloading capacity (trillions of pretraining tokens = plenty of room)

* Why off-policy RL is unstable (Ashvin?s favorite interview question) and why Cursor does two-day work trials instead of whiteboard interviews

* The vision: automate software engineering as a process (not just answering prompts), co-design products so the entire workflow (write code, check logs, debug, iterate) is in-distribution for RL, and make models that never make the same mistake twice

?

Ashvin Nair

* Cursor: https://cursor.com

* X: https://x.com/ashvinnair_

Full Video Episode

Timestamps

00:00:00 Introduction: From Robotics to Cursor via OpenAI00:01:58 The Robotics to LLM Agent Transition: Why Code Won00:09:11 RL Research Winter and Academic Overfitting00:11:45 The Scaling Era and Moving Goalposts: IOI Gold Doesn't Mean AGI00:21:30 OpenAI's Reasoning Journey: From Codex to O100:20:03 The Blip: Thanksgiving 2023 and OpenAI Governance00:22:39 RL for Reasoning: The O-Series Conviction and Scaling00:25:47 O1 to O3: Smooth Internal Progress vs External Hype Cycles00:33:07 Why Cursor: Co-Designing Products and Models for Real Work00:34:14 Composer and the Future: Online Learning Every Two Hours00:35:15 Continual Learning: The Missing Paradigm Shift00:44:00 Hiring at Cursor and Why Off-Policy RL is Unstable



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-30
Link to episode

[State of AI Startups] Memory/Learning, RL Envs & DBT-Fivetran ? Sarah Catanzaro, Amplify

From investing through the modern data stack era (DBT, Fivetran, and the analytics explosion) to now investing at the frontier of AI infrastructure and applications at Amplify Partners, Sarah Catanzaro has spent years at the intersection of data, compute, and intelligence?watching categories emerge, merge, and occasionally disappoint. We caught up with Sarah live at NeurIPS 2025 to dig into the state of AI startups heading into 2026: why $100M+ seed rounds with no near-term roadmap are now the norm (and why that terrifies her), what the DBT-Fivetran merger really signals about the modern data stack (spoiler: it?s not dead, just ready for IPO), how frontier labs are using DBT and Fivetran to manage training data and agent analytics at scale, why data catalogs failed as standalone products but might succeed as metadata services for agents, the consumerization of AI and why personalization (memory, continual learning, K-factor) is the 2026 unlock for retention and growth, why she thinks RL environments are a fad and real-world logs beat synthetic clones every time, and her thesis for the most exciting AI startups: companies that marry hard research problems (RAG, rule-following, continual learning) with killer applications that were simply impossible before.

We discuss:

* The DBT-Fivetran merger: not the death of the modern data stack, but a path to IPO scale (targeting $600M+ combined revenue) and a signal that both companies were already winning their categories

* How frontier labs use data infrastructure: DBT and Fivetran for training data curation, agent analytics, and managing increasingly complex interactions?plus the rise of transactional databases (RocksDB) and efficient data loading (Vortex) for GPU-bound workloads

* Why data catalogs failed: built for humans when they should have been built for machines, focused on discoverability when the real opportunity was governance, and ultimately subsumed as features inside Snowflake, DBT, and Fivetran

* The $100M+ seed phenomenon: raising massive rounds at billion-dollar valuations with no 6-month roadmap, seven-day decision windows, and founders optimizing for signal (?we?re a unicorn?) over partnership or dilution discipline

* Why world models are overhyped but underspecified: three competing definitions, unclear generalization across use cases (video games ? robotics ? autonomous driving), and a research problem masquerading as a product category

* The 2026 theme: consumerization of AI via personalization?memory management, continual learning, and solving retention/churn by making products learn skills, preferences, and adapt as the world changes (not just storing facts in cursor rules)

* Why RL environments are a fad: labs are paying 7?8 figures for synthetic clones when real-world logs, traces, and user activity (à la Cursor) are richer, cheaper, and more generalizable

* Sarah?s investment thesis: research-driven applications that solve hard technical problems (RAG for Harvey, rule-following for Sierra, continual learning for the next killer app) and unlock experiences that were impossible before

* Infrastructure bets: memory, continual learning, stateful inference, and the systems challenges of loading/unloading personalized weights at scale

* Why K-factor and growth fundamentals matter again: AI felt magical in 2023?2024, but as the magic fades, retention and virality are back?and most AI founders have never heard of K-factor

?

Sarah Catanzaro

* X: https://x.com/sarahcat21

* Amplify Partners: https://amplifypartners.com/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction: Sarah Catanzaro's Journey from Data to AI00:01:02 The DBT-Fivetran Merger: Not the End of the Modern Data Stack00:05:26 Data Catalogs and What Went Wrong00:08:16 Data Infrastructure at AI Labs: Surprising Insights00:10:13 The Crazy Funding Environment of 2024-202500:17:18 World Models: Hype, Confusion, and Market Potential00:18:59 Memory Management and Continual Learning: The Next Frontier00:23:27 Agent Environments: Just a Fad?00:25:48 The Perfect AI Startup: Research Meets Application00:28:02 Closing Thoughts and Where to Find Sarah



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-30
Link to episode

One Year of MCP ? with David Soria Parra and AAIF leads from OpenAI, Goose, Linux Foundation

One year ago, Anthropic launched the Model Context Protocol (MCP)?a simple, open standard to connect AI applications to the data and tools they need. Today, MCP has exploded from a local-only experiment into the de facto protocol for agentic systems, adopted by OpenAI, Microsoft, Google, Block, and hundreds of enterprises building internal agents at scale. And now, MCP is joining the newly formed Agentic AI Foundation (AAIF) under the Linux Foundation, alongside Block?s Goose coding agent, with founding members spanning the biggest names in AI and cloud infrastructure.

We sat down with David Soria Parra (MCP lead, Anthropic), Nick Cooper (OpenAI), Brad Howes (Block / Goose), and Jim Zemlin (Linux Foundation CEO) to dig into the one-year journey of MCP?from Thanksgiving hacking sessions and the first remote authentication spec to long-running tasks, MCP Apps, and the rise of agent-to-agent communication?and the behind-the-scenes story of how three competitive AI labs came together to donate their protocols and agents to a neutral foundation, why enterprises are deploying MCP servers faster than anyone expected (most of it invisible, internal, and at massive scale), what it takes to design a protocol that works for both simple tool calls and complex multi-agent orchestration, how the foundation will balance taste-making (curating meaningful projects) with openness (avoiding vendor lock-in), and the 2025 vision: MCP as the communication layer for asynchronous, long-running agents that work while you sleep, discover and install their own tools, and unlock the next order of magnitude in AI productivity.

We discuss:

* The one-year MCP journey: from local stdio servers to remote HTTP streaming, OAuth 2.1 authentication (and the enterprise lessons learned), long-running tasks, and MCP Apps (iframes for richer UI)

* Why MCP adoption is exploding internally at enterprises: invisible, internal servers connecting agents to Slack, Linear, proprietary data, and compliance-heavy workflows (financial services, healthcare)

* The authentication evolution: separating resource servers from identity providers, dynamic client registration, and why the March spec wasn?t enterprise-ready (and how June fixed it)

* How Anthropic dogfoods MCP: internal gateway, custom servers for Slack summaries and employee surveys, and why MCP was born from ?how do I scale dev tooling faster than the company grows??

* Tasks: the new primitive for long-running, asynchronous agent operations?why tools aren?t enough, how tasks enable deep research and agent-to-agent handoffs, and the design choice to make tasks a ?container? (not just async tools)

* MCP Apps: why iframes, how to handle styles and branding, seat selection and shopping UIs as the killer use case, and the collaboration with OpenAI to build a common standard

* The registry problem: official registry vs. curated sub-registries (Smithery, GitHub), trust levels, model-driven discovery, and why MCP needs ?npm for agents? (but with signatures and HIPAA/financial compliance)

* The founding story of AAIF: how Anthropic, OpenAI, and Block came together (spoiler: they didn?t know each other were talking to Linux Foundation), why neutrality matters, and how Jim Zemlin has never seen this much day-one inbound interest in 22 years

?

David Soria Parra (Anthropic / MCP)

* MCP: https://modelcontextprotocol.io

* https://uk.linkedin.com/in/david-soria-parra-4a78b3a

* https://x.com/dsp_

Nick Cooper (OpenAI)

* X: https://x.com/nicoaicopr

Brad Howes (Block / Goose)

* Goose: https://github.com/block/goose

Jim Zemlin (Linux Foundation)

* LinkedIn: https://www.linkedin.com/in/zemlin/

Agentic AI Foundation

* https://agenticai.foundation

Full Video Episode

Timestamps

00:00:00 Introduction: MCP's First Year and Foundation Launch00:01:17 MCP's Journey: From Launch to Industry Standard00:02:06 Protocol Evolution: Remote Servers and Authentication00:08:52 Enterprise Authentication and Financial Services00:11:42 Transport Layer Challenges: HTTP Streaming and Scalability00:15:37 Standards Development: Collaboration with Tech Giants00:34:27 Long-Running Tasks: The Future of Async Agents00:30:41 Discovery and Registries: Building the MCP Ecosystem00:30:54 MCP Apps and UI: Beyond Text Interfaces00:26:55 Internal Adoption: How Anthropic Uses MCP00:23:15 Skills vs MCP: Complementary Not Competing00:36:16 Community Events and Enterprise Learnings01:03:31 Foundation Formation: Why Now and Why Together01:07:38 Linux Foundation Partnership: Structure and Governance01:11:13 Goose as Reference Implementation01:17:28 Principles Over Roadmaps: Composability and Quality01:21:02 Foundation Value Proposition: Why Contribute01:27:49 Practical Investments: Events, Tools, and Community01:34:58 Looking Ahead: Async Agents and Real Impact



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-27
Link to episode

Steve Yegge's Vibe Coding Manifesto: Why Claude Code Isn't It & What Comes After the IDE

Note: Steve and Gene?s talk on Vibe Coding and the post IDE world was one of the top talks of AIE CODE:

From building legendary platforms at Google and Amazon to authoring one of the most influential essays on AI-powered development (Revenge of the Junior Developer, quoted by Dario Amodei himself), Steve Yegge has spent decades at the frontier of software engineering?and now he?s leading the charge into what he calls the ?factory farming? era of code. After stints at SourceGraph and building Beads (a purely vibe-coded issue tracker with tens of thousands of users), Steve co-authored The Vibe Coding Book and is now building VC (VibeCoder), an agent orchestration dashboard designed to move developers from writing code to managing fleets of AI agents that coordinate, parallelize, and ship features while you sleep.

We sat down with Steve at AI Engineer Summit to dig into why Claude Code, Cursor, and the entire 2024 stack are already obsolete, what it actually takes to trust an agent after 2,000 hours of practice (hint: they will delete your production database if you anthropomorphize them), why the real skill is no longer writing code but orchestrating agents like a NASCAR pit crew, how merging has become the new wall that every 10x-productive team is hitting (and why one company?s solution is literally ?one engineer per repo?), the rise of multi-agent workflows where agents reserve files, message each other via MCP, and coordinate like a little village, why Steve believes if you?re still using an IDE to write code by January 1st, you?re a bad engineer, how the 12?15 year experience bracket is the most resistant demographic (and why their identity is tied to obsolete workflows), the hidden chaos inside OpenAI, Anthropic, and Google as they scale at breakneck speed, why rewriting from scratch is now faster than refactoring for a growing class of codebases, and his 2025 prediction: we?re moving from subsistence agriculture to John Deere-scale factory farming of code, and the Luddite backlash is only just beginning.

We discuss:

* Why Claude Code, Cursor, and agentic coding tools are already last year?s tech?and what comes next: agent orchestration dashboards where you manage fleets, not write lines

* The 2,000-hour rule: why it takes a full year of daily use before you can predict what an LLM will do, and why trust = predictability, not capability

* Steve?s hot take: if you?re still using an IDE to develop code by January 1st, 2025, you?re a bad engineer?because the abstraction layer has moved from models to full-stack agents

* The demographic most resistant to vibe coding: 12?15 years of experience, senior engineers whose identity is tied to the way they work today, and why they?re about to become the interns

* Why anthropomorphizing LLMs is the biggest mistake: the ?hot hand? fallacy, agent amnesia, and how Steve?s agent once locked him out of prod by changing his password to ?fix? a problem

* Should kids learn to code? Steve?s take: learn to vibe code?understand functions, classes, architecture, and capabilities in a language-neutral way, but skip the syntax

* The 2025 vision: ?factory farming of code? where orchestrators run Cloud Code, scrub output, plan-implement-review-test in loops, and unlock programming for non-programmers at scale

?

Steve Yegge

* X: https://x.com/steve_yegge

* Substack (Stevie?s Tech Talks): https://steve-yegge.medium.com/

* GitHub (VC / VibeCoder): https://github.com/yegge-labs

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Thumbnails

00:00:00 Introduction: Steve Yegge on Vibe Coding and AI Engineering00:00:59 The Backlash: Who Resists Vibe Coding and Why00:04:26 The 2000 Hour Rule: Building Trust with AI Coding Tools00:03:31 The January 1st Deadline: IDEs Are Becoming Obsolete00:02:55 10X Productivity at OpenAI: The Performance Review Problem00:07:49 The Hot Hand Fallacy: When AI Agents Betray Your Trust00:11:12 Claude Code Isn't It: The Need for Agent Orchestration00:15:20 The Orchestrator Revolution: From Cloud Code to Agent Villages00:18:46 The Merge Wall: The Biggest Unsolved Problem in AI Coding00:26:33 Never Rewrite Your Code - Until Now: Joel Spolsky Was Wrong00:22:43 Factory Farming Code: The John Deere Era of Software00:29:27 Google's Gemini Turnaround and the AI Lab Chaos00:33:20 Should Your Kids Learn to Code? The New Answer00:34:59 Code MCP and the Gossip Rate: Latest Vibe Coding Discoveries



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-26
Link to episode

??GPT5-Codex-Max: Training Agents with Personality, Tools & Trust ? Brian Fioca + Bill Chen, OpenAI

From the frontlines of OpenAI?s Codex and GPT-5 training teams, Bryan and Bill are building the future of AI-powered coding?where agents don?t just autocomplete, they architect, refactor, and ship entire features while you sleep. We caught up with them at AI Engineer Conference right after the launch of Codex Max, OpenAI?s newest long-running coding agent designed to work for 24+ hours straight, manage its own context, and spawn sub-agents to parallelize work across your entire codebase.

We sat down with Bryan and Bill to dig into what it actually takes to train a model that developers trust?why personality, communication, and planning matter as much as raw capability, how Codex is trained with strong opinions about tools (it loves rg over grep, seriously), why the abstraction layer is moving from models to full-stack agents you can plug into VS Code or Zed, how OpenAI partners co-develop tool integrations and discover unexpected model habits (like renaming tools to match Codex?s internal training), the rise of applied evals that measure real-world impact instead of academic benchmarks, why multi-turn evals are the next frontier (and Bryan?s ?job interview eval? idea), how coding agents are breaking out of code into personal automation, terminal workflows, and computer use, and their 2026 vision: coding agents trusted enough to handle the hardest refactors at any company, not just top-tier firms, and general enough to build integrations, organize your desktop, and unlock capabilities you?d never get access to otherwise.

We discuss:

* What Codex Max is: a long-running coding agent that can work 24+ hours, manage its own context window, and spawn sub-agents for parallel work

* Why the name ?Max?: maximalist, maximization, speed and endurance?it?s simply better and faster for the same problems

* Training for personality: communication, planning, context gathering, and checking your work as behavioral characteristics, not just capabilities

* How Codex develops habits like preferring rg over grep, and why renaming tools to match its training (e.g., terminal-style naming) dramatically improves tool-call performance

* The split between Codex (opinionated, agent-focused, optimized for the Codex harness) and GPT-5 (general, more durable across different tools and modalities)

* Why the abstraction layer is moving up: from prompting models to plugging in full agents (Codex, GitHub Copilot, Zed) that package the entire stack

* The rise of sub-agents and agents-using-agents: Codex Max spawning its own instances, handing off context, and parallelizing work across a codebase

* How OpenAI works with coding partners on the bleeding edge to co-develop tool integrations and discover what the model is actually good at

* The shift to applied evals: capturing real-world use cases instead of academic benchmarks, and why ~50% of OpenAI employees now use Codex daily

* Why multi-turn evals are the next frontier: LM-as-a-judge for entire trajectories, Bryan?s ?job interview eval? concept, and the need for a batch multi-turn eval API

* How coding agents are breaking out of code: personal automation, organizing desktops, terminal workflows, and ?Devin for non-coding? use cases

* Why Slack is the ultimate UI for work, and how coding agents can become your personal automation layer for email, files, and everything in between

* The 2026 vision: more computer use, more trust, and coding agents capable enough that any company can access top-tier developer capabilities, not just elite firms

?

Bryan & Bill (OpenAI Codex Team)

* http://x.com/bfioca

* https://x.com/realchillben

* OpenAI Codex: https://openai.com/index/openai-codex/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction: Latent Space Listeners at AI Engineer Code00:01:27 Codex Max Launch: Training for Long-Running Coding Agents00:03:01 Model Personality and Trust: Communication, Planning, and Self-Checking00:05:20 Codex vs GPT-5: Opinionated Agents vs General Models00:07:47 Tool Use and Model Habits: The Ripgrep Discovery00:09:16 Personality Design: Verbosity vs Efficiency in Coding Agents00:11:56 The Agent Abstraction Layer: Building on Top of Codex00:14:08 Sub-Agents and Multi-Agent Patterns: The Future of Composition00:16:11 Trust and Adoption: OpenAI Developers Using Codex Daily00:17:21 Applied Evals: Real-World Testing vs Academic Benchmarks00:19:15 Multi-Turn Evals and the Job Interview Pattern00:21:35 Feature Request: Batch Multi-Turn Eval API00:22:28 Beyond Code: Personal Automation and Computer Use00:24:51 Vision-Native Agents and the UI Integration Challenge00:25:02 2026 Predictions: Trust, Computer Use, and Democratized Excellence



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-26
Link to episode

SAM 3: The Eyes for AI ? Nikhila & Pengchuan (Meta Superintelligence), ft. Joseph Nelson (Roboflow)

As with all demo-heavy and especially vision AI podcasts, we encourage watching along on our YouTube (and tossing us an upvote/subscribe if you like!)

From SAM 1?s 11-million-image data engine to SAM 2?s memory-based video tracking, MSL?s Segment Anything project has redefined what?s possible in computer vision. Now SAM 3 takes the next leap: concept segmentation?prompting with natural language like ?yellow school bus? or ?tablecloth? to detect, segment, and track every instance across images and video, in real time, with human-level exhaustivity. And with the latest SAM Audio:

SAM can now even segment audio output!

We sat down with Nikhila Ravi (SAM lead at Meta) and Pengchuan Zhang (SAM 3 researcher) alongside Joseph Nelson (CEO, Roboflow) to unpack how SAM 3 unifies interactive segmentation, open-vocabulary detection, video tracking, and more into a single model that runs in 30ms on images and scales to real-time video on multi-GPU setups. We dig into the data engine that automated exhaustive annotation from two minutes per image down to 25 seconds using AI verifiers fine-tuned on Llama, the new SACO (Segment Anything with Concepts) benchmark with 200,000+ unique concepts vs. the previous 1.2k, how SAM 3 separates recognition from localization with a presence token, why decoupling the detector and tracker was critical to preserve object identity in video, how SAM 3 Agents unlock complex visual reasoning by pairing SAM 3 with multimodal LLMs like Gemini, and the real-world impact: 106 million smart polygons created on Roboflow saving humanity an estimated 130+ years of labeling time across fields from cancer research to underwater trash cleanup to autonomous vehicle perception.

We discuss:

* What SAM 3 is: a unified model for concept-prompted segmentation, detection, and tracking in images and video using atomic visual concepts like ?purple umbrella? or ?watering can?

* How concept prompts work: short text phrases that find all instances of a category without manual clicks, plus visual exemplars (boxes, clicks) to refine and adapt on the fly

* Real-time performance: 30ms per image (100 detected objects on H200), 10 objects on 2×H200 video, 28 on 4×, 64 on 8×, with parallel inference and ?fast mode? tracking

* The SACO benchmark: 200,000+ unique concepts vs. 1.2k in prior benchmarks, designed to capture the diversity of natural language and reach human-level exhaustivity

* The data engine: from 2 minutes per image (all-human) to 45 seconds (model-in-loop proposals) to 25 seconds (AI verifiers for mask quality and exhaustivity checks), fine-tuned on Llama 3.2

* Why exhaustivity is central: every instance must be found, verified by AI annotators, and manually corrected only when the model misses?automating the hardest part of segmentation at scale

* Architecture innovations: presence token to separate recognition (?is it in the image??) from localization (?where is it??), decoupled detector and tracker to preserve identity-agnostic detection vs. identity-preserving tracking

* Building on Meta?s ecosystem: Perception Encoder, DINO v2 detector, Llama for data annotation, and SAM 2?s memory-based tracking backbone

* SAM 3 Agents: using SAM 3 as a visual tool for multimodal LLMs (Gemini, Llama) to solve complex visual reasoning tasks like ?find the bigger character? or ?what distinguishes male from female in this image?

* Fine-tuning with as few as 10 examples: domain adaptation for specialized use cases (Waymo vehicles, medical imaging, OCR-heavy scenes) and the outsized impact of negative examples

* Real-world impact at Roboflow: 106M smart polygons created, saving 130+ years of labeling time across cancer research, underwater trash cleanup, autonomous drones, industrial automation, and more

?

MSL FAIR team

* Nikhila: https://www.linkedin.com/in/nikhilaravi/

* Pengchuan: https://pzzhang.github.io/pzzhang/

Joseph Nelson

* X: https://x.com/josephofiowa

* LinkedIn: https://www.linkedin.com/in/josephofiowa/

Full Video Episode

Timestamps

00:00:00 Introduction and the SAM Series Legacy00:00:53 SAM 3 Launch: Three Models in One Release00:05:30 Live Demo: Concept Prompting and Visual Exemplars00:10:54 From Prototype to Production: The Evolution of Text Prompting00:15:45 The Data Engine: Automating Exhaustive Annotation00:14:10 Real-World Impact: 130 Years of Humanity Saved00:25:11 Architecture Deep Dive: Decoupled Detection and Tracking00:28:02 SAM 3 Agent: Bridging Vision and Language Models00:33:20 Head-to-Head: SAM 3 vs Gemini and Florence00:47:50 Video Understanding and the Masklet Detection Score00:20:24 Fine-Tuning and Domain Adaptation: From Waymos to Medical Imaging00:52:25 The Future of Perception: Native Vision vs Tool Calls01:05:45 Building with SAM 3: Roboflow's Rapid Auto-Labeling00:57:02 Open Source Philosophy and the Path to AGI00:58:24 What's Next: SAM 4, Video Scale, and Beyond Human Performance



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-18
Link to episode

??Jailbreaking AGI: Pliny the Liberator & John V on Red Teaming, BT6, and the Future of AI Security

Note: this is Pliny and John?s first major podcast. Voices have been changed for opsec.

From jailbreaking every frontier model and turning down Anthropic?s Constitutional AI challenge to leading BT6, a 28-operator white-hat hacker collective obsessed with radical transparency and open-source AI security, Pliny the Liberator and John V are redefining what AI red-teaming looks like when you refuse to lobotomize models in the name of ?safety.?

Pliny built his reputation crafting universal jailbreaks?skeleton keys that obliterate guardrails across modalities?and open-sourcing prompt templates like Libertas, predictive reasoning cascades, and the infamous ?Pliny divider? that?s now embedded so deep in model weights it shows up unbidden in WhatsApp messages. John V, coming from prompt engineering and computer vision, co-founded the Bossy Discord (40,000 members strong) and helps steer BT6?s ethos: if you can?t open-source the data, we?re not interested. Together they?ve turned down enterprise gigs, pushed back on Anthropic?s closed bounties, and insisted that real AI security happens at the system layer?not by bubble-wrapping latent space.

We sat down with Pliny and John to dig into the mechanics of hard vs. soft jailbreaks, why multi-turn crescendo attacks were obvious to hackers years before academia ?discovered? them, how segmented sub-agents let one jailbroken orchestrator weaponize Claude for real-world attacks (exactly as Pliny predicted 11 months before Anthropic?s recent disclosure), why guardrails are security theater that punishes capability while doing nothing for real safety, the role of intuition and ?bonding? with models to navigate latent space, how BT6 vets operators on skill and integrity, why they believe Mech Interp and open-source data are the path forward (not RLHF lobotomization), and their vision for a future where spatial intelligence, swarm robotics, and AGI alignment research happen in the open?bootstrapped, grassroots, and uncompromising.

We discuss:

* What universal jailbreaks are: skeleton-key prompts that obliterate guardrails across models and modalities, and why they?re central to Pliny?s mission of ?liberation?

* Hard vs. soft jailbreaks: single-input templates vs. multi-turn crescendo attacks, and why the latter were obvious to hackers long before academic papers

* The Libertas repo: predictive reasoning, the Library of Babel analogy, quotient dividers, weight-space seeds, and how introducing ?steered chaos? pulls models out-of-distribution

* Why jailbreaking is 99% intuition and bonding with the model: probing token layers, syntax hacks, multilingual pivots, and forming a relationship to navigate latent space

* The Anthropic Constitutional AI challenge drama: UI bugs, judge failures, goalpost moving, the demand for open-source data, and why Pliny sat out the $30k bounty

* Why guardrails ? safety: security theater, the futility of locking down latent space when open-source is right behind, and why real safety work happens in meatspace (not RLHF)

* The weaponization of Claude: how segmented sub-agents let one jailbroken orchestrator execute malicious tasks (pyramid-builder analogy), and why Pliny predicted this exact TTP 11 months before Anthropic?s disclosure

* BT6 hacker collective: 28 operators across two cohorts, vetted on skill and integrity, radical transparency, radical open-source, and the magic of moving the needle on AI security, swarm intelligence, blockchain, and robotics

?

Pliny the Liberator

* X: https://x.com/elder_plinius

* GitHub (Libertas): https://github.com/elder-plinius/L1B3RT45

John V

* X: https://x.com/JohnVersus

BT6 & Bossy

* BT6: https://bt6.gg

* Bossy Discord: Search ?Bossy Discord? or ask Pliny/John V on X

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction: Meet Pliny the Liberator and John V00:01:50 The Philosophy of AI Liberation and Jailbreaking00:03:08 Universal Jailbreaks: Skeleton Keys to AI Models00:04:24 The Cat-and-Mouse Game: Attackers vs Defenders00:05:42 Security Theater vs Real Safety: The Fundamental Disconnect00:08:51 Inside the Libertas Repo: Prompt Engineering as Art00:16:22 The Anthropic Challenge Drama: UI Bugs and Open Source Data00:23:30 From Jailbreaks to Weaponization: AI-Orchestrated Attacks00:26:55 The BT6 Hacker Collective and BASI Community00:34:46 AI Red Teaming: Full Stack Security Beyond the Model00:38:06 Safety vs Security: Meat Space Solutions and Final Thoughts



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-16
Link to episode

AI to AE's: Grit, Glean, and Kleiner Perkins' next Enterprise AI hit ? Joubin Mirzadegan, Roadrunner

Glean started as a Kleiner Perkins incubation and is now a $7B, $200m ARR Enterprise AI leader. Now KP has tapped its own podcaster to lead it?s next big swing.

From building go-to-market the hard way in startups (and scaling Palo Alto Networks? public cloud business) to joining Kleiner Perkins to help technical founders turn product edge into repeatable revenue, Joubin Mirzadegan has spent the last decade obsessing over one thing: distribution and how ideas actually spread, sell, and compound. That obsession took him from launching the CRO-only podcast Grit (https://www.youtube.com/playlist?list=PLRiWZFltuYPF8A6UGm74K2q29UwU-Kk9k) as a hiring wedge, to working alongside breakout companies like Glean and Windsurf, to now incubating Roadrunner which is an AI-native rethink of CPQ and quoting workflows as pricing models collapse from ?seats? into consumption, bundles, renewals, and SKU sprawl.

We sat down with Joubin to dig into the real mechanics of making conversations feel human (rolling early, never sending questions, temperature + lighting hacks), what Windsurf got right about ?Google-class product and Salesforce-class distribution,? how to hire early sales leaders without getting fooled by shiny logos, why CPQ is quietly breaking the back of modern revenue teams, and his thesis for his new company and KP incubation Roadrunner (https://www.roadrunner.ai/): rebuild the data model from the ground up, co-develop with the hairiest design partners, and eventually use LLMs to recommend deal structures the way the best reps do without the Slack-channel chaos of deal desk.

We discuss:

* How to make guests instantly comfortable: rolling early, no ?are you ready??, temperature, lighting, and room dynamics

* Why Joubin refuses to send questions in advance (and when you might have to anyway)

* The origin of the CRO-only podcast: using media as a hiring wedge and relationship engine

* The ?commit to 100 episodes? mindset: why most shows die before they find their voice

* Founder vs exec interviews: why CEOs can speak more freely (and what it unlocks in conversation)

* What Glean taught him about enterprise AI: permissions, trust, and overcoming ?category is dead? skepticism

* Design partners as the real unlock: why early believers matter and how co-development actually works

* Windsurf?s breakout: what it means to be serious about ?Google-class product + Salesforce-class distribution?

* Why technical founders struggle with GTM and how KP built a team around sales, customer access, and demand gen

* Hiring early sales leaders: anti-patterns (logos), what to screen for (motivation), and why stage-fit is everything

* The CPQ problem & Roadrunner?s thesis: rebuilding CPQ/quoting from the data model up for modern complexity

* How ?rules + SKUs + approvals? create a brittle graph and what it takes to model it without tipping over

* The two-year window: incumbents rebuilding slowly vs startups out-sprinting with AI-native architecture

* Where AI actually helps: quote generation, policy enforcement, approval routing, and deal recommendation loops

?

Joubin

* X: https://x.com/Joubinmir

* LinkedIn: https://www.linkedin.com/in/joubin-mirzadegan-66186854/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction and the Zuck Interview Experience00:03:26 The Genesis of the Grit Podcast: Hiring CROs Through Content00:13:20 Podcast Philosophy: Creating Authentic Conversations00:15:44 Working with Arvind at Glean: The Enterprise Search Breakthrough00:26:20 Windsurf's Sales Machine: Google-Class Product Meets Salesforce-Class Distribution00:30:28 Hiring Sales Leaders: Anti-Patterns and First Principles00:39:02 The CPQ Problem: Why Salesforce and Legacy Tools Are Breaking00:43:40 Introducing Roadrunner: Solving Enterprise Pricing with AI00:49:19 Building Roadrunner: Team, Design Partners, and Data Model Challenges00:59:35 High Performance Philosophy: Working Out Every Day and Reducing Friction01:06:28 Defining Grit: Passion Plus Perseverance



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-12
Link to episode

The Future of Email: Superhuman CTO on Your Inbox As the Real AI Agent (Not ChatGPT) ? Loïc Houssier

From applied cryptography and offensive security in France?s defense industry to optimizing nuclear submarine workflows, then selling his e-signature startup to Docusign (https://www.docusign.com/company/news-center/opentrust-joins-docusign-global-trust-network and now running AI as CTO of Superhuman Mail (Superhuman, recently acquired by Grammarly https://techcrunch.com/2025/07/01/grammarly-acquires-ai-email-client-superhuman/), Loïc Houssier has lived the full arc from deep infra and compliance hell to obsessing over 100ms product experiences and AI-native email. We sat down with Loïc to dig into how you actually put AI into an inbox without adding latency, why Superhuman leans so hard into agentic search and ?Ask AI? over your entire email history, how they design tools vs. agents and fight agent laziness, what box-priced inference and local-first caching mean for cost and reliability, and his bet that your inbox will power your future AI EA while AI massively widens the gap between engineers with real fundamentals and those faking it.

We discuss:

* Loïc?s path from applied cryptography and offensive security in France?s defense industry to submarines, e-signatures, Docusign, and now Superhuman Mail

* What 3,000+ engineers actually do at a ?simple? product like Docusign: regional compliance, on-prem appliances, and why global scale explodes complexity

* How Superhuman thinks about AI in email: auto-labels, smart summaries, follow-up nudges, ?Ask AI? search, and the rule that AI must never add latency or friction

* Superhuman?s agentic framework: tools vs. agents, fighting ?agent laziness,? deep semantic search over huge inboxes, and pagination strategies to find the real needle in the haystack

* How they evaluate OpenAI, Anthropic, Gemini, and open models: canonical queries, end-to-end evals, date reasoning, and Rahul?s infamous ?what wood was my table?? test

* Infra and cost philosophy: local-first caching, vector search backends, Baseten ?box? pricing vs. per-token pricing, and thinking in price-per-trillion-tokens instead of price-per-million

* The vision of Superhuman as your AI EA: auto-drafting replies in your voice, scheduling on your behalf, and using your inbox as the ultimate private data source

* How the Grammarly + Coda + Superhuman stack could power truly context-aware assistance across email, docs, calendars, contracts, and more

* Inside Superhuman?s AI-dev culture: free-for-all tool adoption, tracking AI usage on PRs, and going from ~4 to ~6 PRs per engineer per week

* Why Loïc believes everyone should still learn to code, and how AI will amplify great engineers with strong fundamentals while exposing shallow ones even faster

?

Loïc Houssier

* LinkedIn: https://www.linkedin.com/in/houssier/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction and Loïc's Journey from Nuclear Submarines to Superhuman00:06:40 Docusign Acquisition and the Enterprise Email Stack00:10:26 Superhuman's AI Vision: Your Inbox as the Real AI Agent00:13:20 Ask AI: Agentic Search and the Quality Problem00:18:20 Infrastructure Choices: Model Selection, Base10, and Cost Management00:27:30 Local-First Architecture and the Database Stack00:30:50 Evals, Quality, and the Rahul Wood Table Test00:42:30 The Future EA: Auto-Drafting and Proactive Assistance00:46:40 Grammarly Acquisition and the Contextual Advantage00:38:40 Voice, Video, and the End of Writing00:51:40 Knowledge Graphs: The Hard Problem Nobody Has Solved00:56:40 Competing with OpenAI and the Browser Question01:02:30 AI Coding Tools: From 4 to 6 PRs Per Week01:08:00 Engineering Culture, Hiring, and the Future of Software Development



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-11
Link to episode

World Models & General Intuition: Khosla's largest bet since LLMs & OpenAI

From building Medal into a 12M-user game clipping platform with 3.8B highlight moments to turning down a reported $500M offer from OpenAI (https://www.theinformation.com/articles/openai-offered-pay-500-million-startup-videogame-data) and raising a $134M seed from Khosla (https://techcrunch.com/2025/10/16/general-intuition-lands-134m-seed-to-teach-agents-spatial-reasoning-using-video-game-clips/) to spin out General Intuition, Pim is betting that world models trained on peak human gameplay are the next frontier after LLMs.

We sat down with Pim to dig into why game highlights are ?episodic memory for simulation? (and how Medal?s privacy-first action labels became a world-model goldmine https://medal.tv/blog/posts/enabling-state-of-the-art-security-and-protections-on-medals-new-apm-and-controller-overlay-features), what it takes to build fully vision-based agents that just see frames and output actions in real time, how General Intuition transfers from games to real-world video and then into robotics, why world models and LLMs are complementary rather than rivals, what founders with proprietary datasets should know before selling or licensing to labs, and his bet that spatial-temporal foundation models will power 80% of future atoms-to-atoms interactions in both simulation and the real world.

We discuss:

* How Medal?s 3.8B action-labeled highlight clips became a privacy-preserving goldmine for world models

* Building fully vision-based agents that only see frames and output actions yet play like (and sometimes better than) humans

* Transferring from arcade-style games to realistic games to real-world video using the same perception?action recipe

* Why world models need actions, memory, and partial observability (smoke, occlusion, camera shake) vs. ?just? pretty video generation

* Distilling giant policies into tiny real-time models that still navigate, hide, and peek corners like real players

* Pim?s path from RuneScape private servers, Tourette?s, and reverse engineering to leading a frontier world-model lab

* How data-rich founders should think about valuing their datasets, negotiating with big labs, and deciding when to go independent

* GI?s first customers: replacing brittle behavior trees in games, engines, and controller-based robots with a ?frames in, actions out? API

* Using Medal clips as ?episodic memory of simulation? to move from imitation learning to RL via world models and negative events

* The 2030 vision: spatial?temporal foundation models that power the majority of atoms-to-atoms interactions in simulation and the real world

?

Pim

* X: https://x.com/PimDeWitte

* LinkedIn: https://www.linkedin.com/in/pimdw/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction and Medal's Gaming Data Advantage00:02:08 Exclusive Demo: Vision-Based Gaming Agents00:06:17 Action Prediction and Real-World Video Transfer00:08:41 World Models: Interactive Video Generation00:13:42 From Runescape to AI: Pim's Founder Journey00:16:45 The Research Foundations: Diamond, Genie, and SEMA00:33:03 Vinod Khosla's Largest Seed Bet Since OpenAI00:35:04 Data Moats and Why GI Stayed Independent00:38:42 Self-Teaching AI Fundamentals: The Francois Fleuret Course00:40:28 Defining World Models vs Video Generation00:41:52 Why Simulation Complexity Favors World Models00:43:30 World Labs, Yann LeCun, and the Spatial Intelligence Race00:50:08 Business Model: APIs, Agents, and Game Developer Partnerships00:58:57 From Imitation Learning to RL: Making Clips Playable01:00:15 Open Research, Academic Partnerships, and Hiring01:02:09 2030 Vision: 80 Percent of Atoms-to-Atoms AI Interactions



Get full access to Latent.Space at www.latent.space/subscribe
2025-12-06
Link to episode

After LLMs: Spatial Intelligence and World Models ? Fei-Fei Li & Justin Johnson, World Labs

Fei-Fei Li and Justin Johnson are cofounders of World Labs, who have recently launched Marble (https://marble.worldlabs.ai/), a new kind of generative ?world model? that can create editable 3D environments from text, images, and other spatial inputs. Marble lets creators generate persistent 3D worlds, precisely control cameras, and interactively edit scenes, making it a powerful tool for games, film, VR, robotics simulation, and more. In this episode, Fei-Fei and Justin share how their journey from ImageNet and Stanford research led to World Labs, why spatial intelligence is the next frontier after LLMs, and how world models could change how machines see, understand, and build in 3D.

We discuss:

* The massive compute scaling from AlexNet to today and why world models and spatial data are the most compelling way to ?soak up? modern GPU clusters compared to language alone.

* What Marble actually is: a generative model of 3D worlds that turns text and images into editable scenes using Gaussian splats, supports precise camera control and recording, and runs interactively on phones, laptops, and VR headsets.

* Fei-fei?s essay:

on spatial intelligence as a distinct form of intelligence from language: from picking up a mug to inferring the 3D structure of DNA, and why language is a lossy, low-bandwidth channel for describing the rich 3D/4D world we live in.

* Whether current models ?understand? physics or just fit patterns: the gap between predicting orbits and discovering F=ma, and how attaching physical properties to splats and distilling physics engines into neural networks could lead to genuine causal reasoning.

* The changing role of academia in AI, why Fei-Fei worries more about under-resourced universities than ?open vs closed,? and how initiatives like national AI compute clouds and open benchmarks can rebalance the ecosystem.

* Why transformers are fundamentally set models, not sequence models, and how that perspective opens up new architectures for world models, especially as hardware shifts from single GPUs to massive distributed clusters.

* Real use cases for Marble today: previsualization and VFX, game environments, virtual production, interior and architectural design (including kitchen remodels), and generating synthetic simulation worlds for training embodied agents and robots.

* How spatial intelligence and language intelligence will work together in multimodal systems, and why the goal isn?t to throw away LLMs but to complement them with rich, embodied models of the world.

* Fei-Fei and Justin?s long-term vision for spatial intelligence: from creative tools for artists and game devs to broader applications in science, medicine, and real-world decision-making.

?

Fei-Fei Li

* X: https://x.com/drfeifei

* LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247

Justin Johnson

* X: https://x.com/jcjohnss

* LinkedIn: https://www.linkedin.com/in/justin-johnson-41b43664

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00:00 Introduction and the Fei-Fei Li & Justin Johnson Partnership00:02:00 From ImageNet to World Models: The Evolution of Computer Vision00:12:42 Dense Captioning and Early Vision-Language Work00:19:57 Spatial Intelligence: Beyond Language Models00:28:46 Introducing Marble: World Labs' First Spatial Intelligence Model00:33:21 Gaussian Splats and the Technical Architecture of Marble00:22:10 Physics, Dynamics, and the Future of World Models00:41:09 Multimodality and the Interplay of Language and Space00:37:37 Use Cases: From Creative Industries to Robotics and Embodied AI00:56:58 Hiring, Research Directions, and the Future of World Labs



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-25
Link to episode

?? 10x AI Engineers with $1m Salaries ? Alex Lieberman & Arman Hezarkhani, Tenex

Alex Lieberman and Arman Hezarkani, co-founders of Tenex, reveal how they?re revolutionizing software consulting by compensating AI engineers for output rather than hours?enabling some engineers to earn over $1 million annually while delivering 10x productivity gains. Their company represents a fundamental rethinking of knowledge work compensation in the age of AI agents, where traditional hourly billing models perversely incentivize slower work even as AI tools enable unprecedented speed.

The Genesis: From 90% Downsizing to 10x Output The story behind 10X begins with Arman?s previous company, Parthian, where he was forced to downsize his engineering team by 90%. Rather than collapse, Arman re-architected the entire product and engineering process to be AI-first?and discovered that production-ready software output increased 10x despite the massive headcount reduction. This counterintuitive result exposed a fundamental misalignment: engineers compensated by the hour are disincentivized from leveraging AI to work faster, even when the technology enables dramatic productivity gains. Alex, who had invested in Parthian, initially didn?t believe the numbers until Arman walked him through why LLMs have made such a profound impact specifically on engineering as knowledge work.

The Economic Model: Story Points Over Hours 10X?s core innovation is compensating engineers based on story points?units of completed, quality output?rather than hours worked. This creates direct economic incentives for engineers to adopt every new AI tool, optimize their workflows, and maximize throughput. The company expects multiple engineers to earn over $1 million in cash compensation next year purely from story point earnings. To prevent gaming the system, they hire for two profiles: engineers who are ?long-term selfish? (understanding that inflating story points will destroy client relationships) and those who genuinely love writing code and working with smart people. They also employ technical strategists incentivized on client retention (NRR) who serve as the final quality gate before any engineering plan reaches a client.

Impressive Builds: From Retail AI to App Store Hits The results speak for themselves. In one project, 10X built a computer vision system for retail cameras that provides heat maps, queue detection, shelf stocking analysis, and theft detection?creating early prototypes in just two weeks for work that previously took quarters. They built Snapback Sports? mobile trivia app in one month, which hit 20th globally on the App Store. In a sales context, an engineer spent four hours building a working prototype of a fitness influencer?s AI health coach app after the prospect initially said no?immediately moving 10X to the top of their vendor list. These examples demonstrate how AI-enabled speed fundamentally changes sales motions and product development timelines.

The Interview Process: Unreasonably Difficult Take-Homes Despite concerns that AI would make take-home assessments obsolete, 10X still uses them?but makes them ?unreasonably difficult.? About 50% of candidates don?t even respond, but those who complete the challenge demonstrate the caliber needed. The interview process is remarkably short: two calls before the take-home, review, then one or two final meetings?completable in as little as a week. A signature question: ?If you had infinite resources to build an AI that could replace either of us on this call, what would be the first major bottleneck?? The sophisticated answer isn?t just ?model intelligence? or ?context length??it?s controlling entropy, the accumulating error rate that derails autonomous agents over time.

The Limiting Factor: Human Capital, Not Technology Despite being an AI-first company, 10X?s primary constraint is human capital?finding and hiring enough exceptional engineers fast enough, then matching them with the right processes to maintain delivery quality as they scale. The company has ambitions beyond consulting to build their own technology, but for the foreseeable future, recruiting remains the bottleneck. This reveals an important insight about the AI era: even as technology enables unprecedented leverage, the constraint shifts to finding people who can harness that leverage effectively.

Full Video Episode

Timestamps

00:00:00 Introduction and Meeting the 10X Co-founders00:01:29 The 10X Moment: From Hourly Billing to Output-Based Compensation00:04:44 The Economic Model Behind 10X00:05:42 Story Points and Measuring Engineering Output00:08:41 Impressive Client Projects and Rapid Prototyping00:12:22 The 10X Tech Stack: TypeScript and High Structure00:13:21 AI Coding Tools: The Daily Evolution00:15:05 Human Capital as the Limiting Factor00:16:02 The Unreasonably Difficult Interview Process00:17:14 Entropy and Context Engineering: The Future of AI Agents00:23:28 The MCP Debate and AI Industry Sociology00:26:01 Consulting, Digital Transformation, and Conference Insights



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-19
Link to episode

Anthropic, Glean & OpenRouter: How AI Moats Are Built with Deedy Das of Menlo Ventures

Deedy Das, Partner at Menlo Ventures, returns to Latent Space to discuss his journey from Glean to venture capital, the explosive rise of Anthropic, and how AI is reshaping enterprise software and coding. From investing in Anthropic early on when they had no revenue to managing the $100M Ontology Fund, Das shares insider perspectives on the fastest-growing software company in history and what?s next for AI infrastructure, research investing, and the future of engineering.

We cover Glean?s rise from ?boring? enterprise search to a $7B AI-native company, Anthropic?s meteoric rise, the strategic decisions behind products like Claude Code, and why market share in enterprise AI is shifting dramatically. Das explains his investment thesis on research companies like Goodfire, Prime Intellect, and OpenRouter and how the Anthology Fund is quietly seeding the next wave of AI infra, research, and devtools.

Full Video Episode

Timestamps

* 00:00:00 Introduction and Deedy?s Return to Latent Space

* 00:01:20 Glean?s Journey: From Boring Enterprise Search to Valuation

* 00:15:37 Anthropic?s Meteoric Rise and Market Share Dynamics

* 00:17:50 Claude Artifacts and Product Innovation

* 00:41:20 The Anthology Fund: Investing in the Anthropic Ecosystem

* 00:48:01 Goodfire and Mechanistic Interpretability

* 00:51:25 Prime Intellect and Distributed AI Training

* 00:53:40 OpenRouter: Building the AI Model Gateway

* 01:13:36 The Stargate Project and Infrastructure Arms Race

* 01:18:14 The Future of Software Engineering and AI Coding



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-14
Link to episode

? Inside GitHub?s AI Revolution: Jared Palmer Reveals Agent HQ & The Future of Coding Agents

Jared Palmer, SVP at GitHub and VP of CoreAI at Microsoft, joins Latent Space for an in-depth look at the evolution of coding agents and modern developer tools. Recently joining after leading AI initiatives at Vercel, Palmer shares firsthand insights from behind the scenes at GitHub Universe, including the launch of Agent HQ which is a new collaboration hub for coding agents and developers.

This episode traces Palmer?s journey from building Copilot inspired tools to pioneering the focused Next.js coding agent, v0, and explores how platform constraints fostered rapid experimentation and a breakout success in AI-powered frontend development. Palmer explains the unique advantages of GitHub?s massive developer network, the challenges of scaling agent-based workflows, and why integrating seamless AI into developer experiences is now a top priority for both Microsoft and GitHub.

Full Video Episode

Timestamps

00:00:00 Introduction and Jared's New Role at GitHub00:01:00 From V0 to Agent HQ: The Evolution of Coding Agents00:02:51 The V0 Origin Story: From ChatGPT to AI Playground00:05:40 Building the AI SDK and ShadCN Collaboration00:07:08 The Birth of V0: Prompt to UI Revolution00:09:18 V0's Growth Journey and Model Evolution00:11:05 Model Strategy: Composite Models vs User Choice00:13:16 GitHub's Agent HQ and Model Marketplace00:15:51 The Future of Agent Abstraction and Standards00:16:33 Microsoft Core AI Integration and Workflow Vision00:18:37 Dev Containers and Repo Setup Challenges00:24:10 Agent Quality and Infrastructure Reliability00:27:05 Using Coding Agents for Non-Coding Tasks00:29:11 GitHub Homepage Redesign and Community Feedback00:30:27 Stacked Diffs: GitHub's Most Requested Feature



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-10
Link to episode

? [AIE CODE Preview] Inside Google Labs: Building The Gemini Coding Agent ? Jed Borovik, Jules

Jed Borovik, Product Lead at Google Labs, joins Latent Space to unpack how Google is building the future of AI-powered software development with Jules. From his journey discovering GenAI through Stable Diffusion to leading one of the most ambitious coding agent projects in tech, Borovik shares behind-the-scenes insights into how Google Labs operates at the intersection of DeepMind?s model development and product innovation.

We explore Jules? approach to autonomous coding agents and why they run on their own infrastructure, how Google simplified their agent scaffolding as models improved, and why embeddings-based RAG is giving way to attention-based search. Borovik reveals how developers are using Jules for hours or even days at a time, the challenges of managing context windows that push 2 million tokens, and why coding agents represent both the most important AI application and the clearest path to AGI.

This conversation reveals Google?s positioning in the coding agent race, the evolution from internal tools to public products, and what founders, developers, and AI engineers should understand about building for a future where AI becomes the new brush for software engineering.

Full Video Episode

Timestamps

00:00:00 Introduction and GitHub Universe Recap00:00:57 New York Tech Scene and East Coast Hackathons00:02:19 From Google Search to AI Coding: Jed's Journey00:04:19 Google Labs Mission and DeepMind Collaboration00:06:41 Jules: Autonomous Coding Agents Explained00:09:39 The Evolution of Agent Scaffolding and Model Quality00:11:30 RAG vs Attention: The Shift in Code Understanding00:13:49 Jules' Journey from Preview to Production00:15:05 AI Engineer Summit: Community Building and Networking00:25:06 Context Management in Long-Running Agents00:29:02 The Future of Software Engineering with AI00:36:26 Beyond Vibe Coding: Spec Development and Verification00:40:20 Multimodal Input and Computer Use for Coding Agents



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-10
Link to episode

Priscilla Chan and Mark Zuckerberg: Frontier AI + Virtual Biology To Solve All Diseases

Today?s guests are Priscilla Chan and Mark Zuckerberg, co-founders of Biohub (fka Chan Zuckerberg Initiative). They are one of the leading institutes for AI x Bio and open science research with projects like CELLxGENE, rbio1, VariantFormer, and many more. We talked about the evolution from a broad philanthropic institute to specializing in frontier AI + bio, why they are building 12ft tall microscopes to gather better data, and how building a virtual cell model + virtual immune system could potentially help us cure all diseases.

Full Video Episode

Timestamps

00:00:00 Introduction and CZI's 10-Year Anniversary00:00:56 Learning from Bill Gates00:04:05 Science vs Translation00:10:45 The Power of Physical Proximity in Science00:13:55 Building the Virtual Cell: From Data to Models00:15:51 Microscopes, Imaging, and Converting Atoms to Bits00:23:18 AI Meets Biology: The Frontier Lab Concept00:27:25 How Models Can Enable More Ambitious Research00:30:15 Precision Medicine and Clinical Impact00:45:17 The Virtual Immune System and Cellular Engineering00:48:27 Accelerating the Timeline: What It Takes to Cure All Disease00:28:45 Joining Forces with Evolutionary Scale



Get full access to Latent.Space at www.latent.space/subscribe
2025-11-06
Link to episode

?? Ship AI recap: Agents, Workflows, and Python ? w/ Vercel CTO Malte Ubl

In this conversation with Malte Ubl, CTO of Vercel (http://x.com/cramforce), we explore how the company is pioneering the infrastructure for AI-powered development through their comprehensive suite of tools including workflows, AI SDK, and the newly announced agent ecosystem. Malte shares insights into Vercel?s philosophy of ?dogfooding? - never shipping abstractions they haven?t battle-tested themselves - which led to extracting their AI SDK from v0 and building production agents that handle everything from anomaly detection to lead qualification.

The discussion dives deep into Vercel?s new Workflow Development Kit, which brings durable execution patterns to serverless functions, allowing developers to write code that can pause, resume, and wait indefinitely without cost. Malte explains how this enables complex agent orchestration with human-in-the-loop approvals through simple webhook patterns, making it dramatically easier to build reliable AI applications.

We explore Vercel?s strategic approach to AI agents, including their DevOps agent that automatically investigates production anomalies by querying observability data and analyzing logs - solving the recall-precision problem that plagues traditional alerting systems. Malte candidly discusses where agents excel today (meeting notes, UI changes, lead qualification) versus where they fall short, emphasizing the importance of finding the ?sweet spot? by asking employees what they hate most about their jobs.

The conversation also covers Vercel?s significant investment in Python support, bringing zero-config deployment to Flask and FastAPI applications, and their vision for security in an AI-coded world where developers ?cannot be trusted.? Malte shares his perspective on how CTOs must transform their companies for the AI era while staying true to their core competencies, and why maintaining strong IC (individual contributor) career paths is crucial as AI changes the nature of software development.

What was launched at Ship AI 2025:

AI SDK 6.0 & Agent Architecture

* Agent Abstraction Philosophy: AI SDK 6 introduces an agent abstraction where you can ?define once, deploy everywhere?. How does this differ from existing agent frameworks like LangChain or AutoGPT? What specific pain points did you observe in production that led to this design?

* Human-in-the-Loop at Scale: The tool approval system with needsApproval: true gates actions until human confirmation. How do you envision this working at scale for companies with thousands of agent executions? What?s the queue management and escalation strategy?

* Type Safety Across Models: AI SDK 6 promises ?end-to-end type safety across models and UI?. Given that different LLMs have varying capabilities and output formats, how do you maintain type guarantees when swapping between providers like OpenAI, Anthropic, or Mistral?

Workflow Development Kit (WDK)

* Durability as Code: The use workflow primitive makes any TypeScript function durable with automatic retries, progress persistence, and observability. What?s happening under the hood? Are you using event sourcing, checkpoint/restart, or a different pattern?

* Infrastructure Provisioning: Vercel automatically detects when a function is durable and dynamically provisions infrastructure in real-time. What signals are you detecting in the code, and how do you determine the optimal infrastructure configuration (queue sizes, retry policies, timeout values)?

Vercel Agent (beta)

* Code Review Validation: The Agent reviews code and proposes ?validated patches?. What does ?validated? mean in this context? Are you running automated tests, static analysis, or something more sophisticated?

* AI Investigations: Vercel Agent automatically opens AI investigations when it detects performance or error spikes using real production data. What data sources does it have access to? How does it distinguish between normal variance and actual anomalies?

Python Support (For the first time, Vercel now supports Python backends natively.)

Marketplace & Agent Ecosystem

* Agent Network Effects: The Marketplace now offers agents like CodeRabbit, Corridor, Sourcery, and integrations with Autonoma, Braintrust, Browser Use. How do you ensure these third-party agents can?t access sensitive customer data? What?s the security model?

?An Agent on Every Desk? Program

* Vercel launched a new program to help companies identify high-value use cases and build their first production AI agents. It provides consultations, reference templates, and hands-on support to go from idea to deployed agent

Full Video Episode

Timestamps

00:00 Introduction and Malte?s Background at Google

01:16 Vercel?s AI Engineering Philosophy and Ship AI Recap

03:19 Deep Dive: Workflows vs Agents Architecture

09:33 AI SDK Success Story: Staying Low-Level and Humble

16:35 Framework Design Principles and Open Source Strategy

19:20 Vercel Agent: AI-Powered DevOps and Anomaly Detection

27:06 Internal Agent Use Cases: Lead Qualification and Abuse Analysis

29:49 Agent on Every Desk Program and Enterprise Adoption

32:13 Python Support and Multi-Language Infrastructure

39:42 The Future of AI-Native Security and Development



Get full access to Latent.Space at www.latent.space/subscribe
2025-10-31
Link to episode

The Agents Economy Backbone - with Emily Glassberg Sands, Head of Data & AI at Stripe

Emily Glassberg Sands is the Head of Data & AI at Stripe where she leads the organization?s efforts to build financial infrastructure for the internet & leverage AI to power Stripe?s products. Stripe processes about $1.4 trillion in payments annually (~1.3% of global GDP), making it an exciting opportunity to apply AI & ML at scale. In this episode, Emily shares insights into how Stripe is using AI to solve complex problems like fraud detection, optimizing checkout experiences, & enabling new business models for AI companies. Emily also shares her economist perspective on market efficiency & how Stripe?s focus on building economic infrastructure for AI is driving growth across the ecosystem.

We discuss:

* Stripe?s domain-specific foundation model and ?payments embeddings? that run inline on the charge path to detect sophisticated card-testing at scale (improved detection rates at large users from ~59% to ~97%).

* The launch of the Agentic Commerce Protocol (ACP) with OpenAI, creating a shared standard for how businesses can expose products to AI agents which is used by Walmart and Sam?s Club.

* How Stripe is helping AI companies manage new fraud vectors, such as free trial and refund abuse, and the importance of real-time, outcome-based billing

* The impact of AI on Stripe?s internal operations, including the use of LLMs for code generation, merchant understanding, and internal tooling

* Why many AI companies are going global day-one how Stripe?s Link network (200M+ consumers) concentrates AI demand.

* Whether we?re in an AI bubble, why GDP hasn?t reflected AI productivity gains yet, and how agentic commerce could expand consumption by removing time constraints for high-income consumers

* Emily?s perspective on the changing social contract around AI, the importance of deep thinking, and the role of brand and design in AI-driven products

?

Where to find Emily Sands

* X: https://x.com/emilygsands

* LinkedIn: https://www.linkedin.com/in/egsands/

Where to find Shawn Wang

* X: https://x.com/swyx

* LinkedIn: https://www.linkedin.com/in/shawnswyxwang/

Where to find Alessio Fanelli

* X: https://x.com/FanaHOVA

* LinkedIn: https://www.linkedin.com/in/fanahova/

Where to find Latent Space

* X: https://x.com/latentspacepod

Full Show Notes: Full show notes:



Get full access to Latent.Space at www.latent.space/subscribe
2025-10-30
Link to episode

Why RL Won ? Kyle Corbitt, OpenPipe (acq. CoreWeave)

In this deep dive with Kyle Corbitt, co-founder and CEO of OpenPipe (recently acquired by CoreWeave), we explore the evolution of fine-tuning in the age of AI agents and the critical shift from supervised fine-tuning to reinforcement learning. Kyle shares his journey from leading YC?s Startup School to building OpenPipe, initially focused on distilling expensive GPT-4 workflows into smaller, cheaper models before pivoting to RL-based agent training as frontier model prices plummeted. The conversation reveals why 90% of AI projects remain stuck in proof-of-concept purgatory - not due to capability limitations, but reliability issues that Kyle believes can be solved through continuous learning from real-world experience. He discusses the breakthrough of RULER (Relative Universal Reinforcement Learning Elicited Rewards), which uses LLMs as judges to rank agent behaviors relatively rather than absolutely, making RL training accessible without complex reward engineering. Kyle candidly assesses the challenges of building realistic training environments for agents, explaining why GRPO (despite its advantages) may be a dead end due to its requirement for perfectly reproducible parallel rollouts. He shares insights on why LoRAs remain underrated for production deployments, why GEPA and prompt optimization haven?t lived up to the hype in his testing, and why the hardest part of deploying agents isn?t the AI - it?s sandboxing real-world systems with all their bugs and edge cases intact. The discussion also covers OpenPipe?s acquisition by CoreWeave, the launch of their serverless reinforcement learning platform, and Kyle?s vision for a future where every deployed agent continuously learns from production experience. He predicts that solving the reliability problem through continuous RL could unlock 10x more AI inference demand from projects currently stuck in development, fundamentally changing how we think about agent deployment and maintenance.

Key Topics:

* The rise and fall of fine-tuning as a business model

* Why 90% of AI projects never reach production

* RULER: Making RL accessible through relative ranking

* The environment problem: Why sandboxing is harder than training

* GRPO vs PPO and the future of RL algorithms

* LoRAs: The underrated deployment optimization

* Why GEPA and prompt optimization disappointed in practice

* Building world models as synthetic training environments

* The $500B Stargate bet and OpenAI?s potential crypto play

* Continuous learning as the path to reliable agents

References

https://www.linkedin.com/in/kcorbitt/

* Aug 2023 https://openpipe.ai/blog/from-prompts-to-models

* DEC 2023 https://openpipe.ai/blog/mistral-7b-fine-tune-optimized

* JAN 2024 https://openpipe.ai/blog/s-lora

* MAY 2024 https://openpipe.ai/blog/the-ten-commandments-of-fine-tuning-in-prod

* Oct 2024 https://openpipe.ai/blog/announcing-dpo-support

* AIE NYC 2025 Finetuning 500m agents

* AIEWF 2025 How to train your agent (ART-E)

* SEPT 2025 ACQUISTION https://openpipe.ai/blog/openpipe-coreweave

* W&B Serverless RL https://openpipe.ai/blog/serverless-rl?refresh=1760042248153

Full Video Episode

Timestamps

00:00 Introductions

03:15 The Evolution of OpenPipe: From SFT to RL

07:49 The Mistral Era and LoRA Adapters

11:40 When You Actually Need Fine-Tuning

14:43 The Pivot to Reinforcement Learning

21:29 GRPO vs PPO: The Technical Trade-offs

24:02 The Environment Problem in RL

35:52 JAPA and Automated Prompt Optimization

44:35 Open vs Closed Models: The Token Economics

50:38 Ruler: Self-Supervised RL Rewards

57:09 World Models as Environment Solutions

1:00:15 CoreWeave Acquisition and Future Vision



Get full access to Latent.Space at www.latent.space/subscribe
2025-10-16
Link to episode

DevDay 2025: Apps SDK, Agent Kit, MCP, Codex and why Prompting is More Important than Ever

At OpenAI DevDay, we sit down with Sherwin Wu and Christina Huang from the OpenAI Platform Team to discuss the launch of AgentKit - a comprehensive suite of tools for building, deploying, and optimizing AI agents. Christina walks us through the live demo she performed on stage, building a customer support agent in just 8 minutes using the visual Agent Builder, while Sherwin shares insights on how OpenAI is inverting the traditional website-chatbot paradigm by embedding apps directly within ChatGPT through the new Apps SDK.

The conversation explores how OpenAI is tackling the challenges developers face when taking agents to production - from writing and optimizing prompts to building evaluation pipelines. They discuss the decision to adopt Anthropic?s MCP protocol for tool connectivity, the importance of visual workflows for complex agent systems, and how features like human-in-the-loop approvals and automated prompt optimization are making agent development more accessible to a broader range of developers.

Sherwin and Christina also reveal how OpenAI is dogfooding these tools internally, with their own customer support at openai.com already powered by AgentKit, and share candid insights about the evolution from plugins to GPTs to this new agent platform. They discuss the surprising persistence of prompting as a critical skill (contrary to predictions from two years ago), the challenges of serving custom fine-tuned models at scale, and why they believe visual agent builders are essential as workflows grow to span dozens of nodes.

Guests:

* Sherwin Wu: Head of Engineering, OpenAI Platform https://www.linkedin.com/in/sherwinwu1/ https://x.com/sherwinwu?lang=en

* Christina Huang: Platform Experience, OpenAI https://x.com/christinaahuang https://www.linkedin.com/in/christinaahuang/

Thanks very much to Lindsay and Shaokyi for helping us set up this great deepdive into the new DevDay launches!

Key Topics:? AgentKit launch: Agent SDK, Builder, Evals, and deployment tools? Apps SDK and the inversion of the app-chatbot paradigm? Adopting MCP protocol for universal tool connectivity? Visual agent building vs code-first approaches? Human-in-the-loop workflows and approval systems? Automated prompt optimization and ?zero-gradient fine-tuning?? Service Health Dashboard and achieving five nines reliability? ChatKit as an embeddable, evergreen chat interface? The evolution from plugins to GPTs to agent platforms? Internal dogfooding with Codex and agent-powered support

Full Video Episode

Timestamps

00:00 Welcome to the OpenAI Dev Day Studio

01:11 Dev Day Evolution and Community Growth

03:08 Apps SDK and ChatGPT Distribution Strategy

05:27 MCP Protocol Integration Decision

09:26 Agent Kit Launch and Platform Vision

11:33 Agent Builder Canvas and Visual Workflows

17:22 Evaluations and Agent Testing Evolution

19:20 Automated Prompt Optimization and Research

26:35 Connector Registry and MCP Servers

34:10 Chat Kit as Consumer-Grade Infrastructure

39:13 Codex Power User Tips and AI-Native Development

42:27 Service Health Dashboard and Reliability Journey



Get full access to Latent.Space at www.latent.space/subscribe
2025-10-07
Link to episode

Taste is your Moat (Dylan Field of Figma)

Dylan Field (CEO Figma) on how they are letting designers build with Figma Make, how Figma can be the context repository for aesthetic in the age of vibe coding, and why design is your only differentiator now.

Full show notes: https://www.latent.space/p/figma



Get full access to Latent.Space at www.latent.space/subscribe
2025-10-02
Link to episode

Amp: The Emperor Has No Clothes

Quinn Slack (CEO) and Thorsten Ball (Amp Dictator) from SourceGraph join the show to talk about Amp Code, how they ship 15x/day with no code reviews, and why subagents and prompt optimizers aren?t a promising direction for coding agents.

Amp Code: https://ampcode.com/

Latent Space: https://latent.space/

Full Video Episode

Timestamps

00:00 Introduction00:41 Transition from Cody to Amp03:18 The Importance of Building the Best Coding Agent06:43 Adapting to a Rapidly Evolving AI Tooling Landscape09:36 Dogfooding at Sourcegraph12:35 CLI vs. VS Code Extension21:08 Positioning Amp in Coding Agent Market24:10 The Diminishing Importance of Model Selectors32:39 Tooling vs. Harness37:19 Common Failure Modes of Coding Agents47:33 Agent-Friendly Logging and Tooling52:31 Are Subagents Real?56:52 New Frameworks and Agent-Integrated Developer Tools1:00:25 How Agents Are Encouraging Codebase and Workflow Changes1:03:13 Evolving Outer Loop Tasks1:07:09 Version Control and Merge Conflicts in an AI-First World1:10:36 Rise of User-Generated Enterprise Software1:14:39 Empowering Technical Leaders with AI1:17:11 Evaluating Product Without Traditional Evals1:20:58 Hiring



Get full access to Latent.Space at www.latent.space/subscribe
2025-09-25
Link to episode

Context Engineering for Agents - Lance Martin, LangChain

Lance: https://www.linkedin.com/in/lance-martin-64a33b5/

How Context Fails: https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-how-to-fix-them.htmlHow New Buzzwords Get Created: https://www.dbreunig.com/2025/07/24/why-the-term-context-engineering-matters.htmlContent Engineering:

https://rlancemartin.github.io/2025/06/23/context_engineering/ https://docs.google.com/presentation/d/16aaXLu40GugY-kOpqDU4e-S0hD1FmHcNyF0rRRnb1OU/edit?usp=sharingManus Post: https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-ManusCognition Post: https://cognition.ai/blog/dont-build-multi-agentsMulti-Agent Researcher: https://www.anthropic.com/engineering/multi-agent-research-systemHuman-in-the-loop + Memory: https://github.com/langchain-ai/agents-from-scratch- Bitter Lesson in AI Engineering -Hyung Won Chung on the Bitter Lesson in AI Research:

Bitter Lesson w/ Claude Code:

Learning the Bitter Lesson in AI Engineering: https://rlancemartin.github.io/2025/07/30/bitter_lesson/Open Deep Research: https://github.com/langchain-ai/open_deep_research https://academy.langchain.com/courses/deep-research-with-langgraphScaling and building things that ?don?t yet work?:

- Frameworks -Roast framework at Shopify / standardization of orchestration tools:

MCP adoption within Anthropic / standardization of protocols:

How to think about frameworks: https://blog.langchain.com/how-to-think-about-agent-frameworks/RAG benchmarking: https://rlancemartin.github.io/2025/04/03/vibe-code/Simon?s talk with memory-gone-wrong: https://simonwillison.net/2025/Jun/6/six-months-in-llms/

Full Video Episode

Timestamps

00:00 Introduction and Background

00:53 The Rise of Context Engineering

01:57 Context Engineering vs Prompt Engineering

05:56 The Five Categories of Context Engineering

10:02 Multi-Agent Systems and Context Isolation

14:48 Classical Retrieval vs Agentic Search

17:12 LLMs.txt and MCP Servers

24:51 Context Pruning and Memory Management

37:25 Memory Systems and Human-in-the-Loop

42:55 The Bitter Lesson Applied to AI Engineering

51:21 Frameworks, Abstractions, and Building for the Future



Get full access to Latent.Space at www.latent.space/subscribe
2025-09-11
Link to episode

A Technical History of Generative Media

Today we are joined by Gorkem and Batuhan from Fal.ai, the fastest growing generative media inference provider. They recently raised a $125M Series C and crossed $100M ARR. We covered how they pivoted from dbt pipelines to diffusion models inference, what were the models that really changed the trajectory of image generation, and the future of AI videos. Enjoy!

Full Video Episode

Timestamps

00:00 - Introductions04:58 - History of Major AI Models and Their Impact on Fal.ai07:06 - Pivoting to Generative Media and Strategic Business Decisions10:46 - Technical discussion on CUDA optimization and kernel development12:42 - Inference Engine Architecture and Kernel Reusability14:59 - Performance Gains and Latency Trade-offs15:50 - Discussion of model latency importance and performance optimization17:56 - Importance of Latency and User Engagement18:46 - Impact of Open Source Model Releases and Competitive Advantage19:00 - Partnerships with closed source model developers20:06 - Collaborations with Closed-Source Model Providers21:28 - Serving Audio Models and Infrastructure Scalability22:29 - Serverless GPU infrastructure and technical stack23:52 - GPU Prioritization: H100s and Blackwell Optimization25:00 - Discussion on ASICs vs. General Purpose GPUs26:10 - Architectural Trends: MMDiTs and Model Innovation27:35 - Rise and Decline of Distillation and Consistency Models28:15 - Draft Mode and Streaming in Image Generation Workflows29:46 - Generative Video Models and the Role of Latency30:14 - Auto-Regressive Image Models and Industry Reactions

31:35 - Discussion of OpenAI?s Sora and competition in video generation34:44 - World Models and Creative Applications in Games and Movies35:27 - Video Models? Revenue Share and Open-Source Contributions36:40 - Rise of Chinese Labs and Partnerships38:03 - Top Trending Models on Hugging Face and ByteDance?s Role39:29 - Monetization Strategies for Open Models40:48 - Usage Distribution and Model Turnover on FAL42:11 - Revenue Share vs. Open Model Usage Optimization42:47 - Moderation and NSFW Content on the Platform44:03 - Advertising as a key use case for generative media45:37 - Generative Video in Startup Marketing and Virality46:56 - LoRA Usage and Fine-Tuning Popularity47:17 - LoRA ecosystem and fine-tuning discussion49:25 - Post-Training of Video Models and Future of Fine-Tuning50:21 - ComfyUI Pipelines and Workflow Complexity52:31 - Requests for startups and future opportunities in the space53:33 - Data Collection and RedPajama-Style Initiatives for Media Models53:46 - RL for Image and Video Models: Unknown Potential55:11 - Requests for Models: Editing and Conversational Video Models57:12 - VO3 Capabilities: Lip Sync, TTS, and Timing58:23 - Bitter Lesson and the Future of Model Workflows58:44 - FAL?s hiring approach and team structure59:29 - Team Structure and Scaling Applied ML and Performance Teams1:01:41 - Developer Experience Tools and Low-Code/No-Code Integration1:03:04 - Improving Hiring Process with Public Challenges and Benchmarks1:04:02 - Closing Remarks and Culture at FAL



Get full access to Latent.Space at www.latent.space/subscribe
2025-09-05
Link to episode

Better Data is All You Need ? Ari Morcos, Datology

Our chat with Ari shows that data curation is the most impactful and underinvested area in AI. He argues that the prevailing focus on model architecture and compute scaling overlooks the ?bitter lesson? that ?models are what they eat.? Effective data curation?a sophisticated process involving filtering, rebalancing, sequencing (curriculum), and synthetic data generation?allows for training models that are simultaneously faster, better, and smaller. Morcos recounts his personal journey from focusing on model-centric inductive biases to realizing that data quality is the primary lever for breaking the diminishing returns of naive scaling laws. Datology?s mission is to automate this complex curation process, making state-of-the-art data accessible to any organization and enabling a new paradigm of AI development where data efficiency, not just raw scale, drives progress.

Full Video Episode

Timestamps

00:00 Introduction

00:46 What is Datology? The mission to train models faster, better, and smaller through data curation.

01:59 Ari?s background: From neuroscience to realizing the ?Bitter Lesson? of AI.

05:30 Key Insight: Inductive biases from architecture become less important and even harmful as data scale increases.

08:08 Thesis: Data is the most underinvested area of AI research relative to its impact.

10:15 Why data work is culturally undervalued in research and industry.

12:19 How self-supervised learning changed everything, moving from a data-scarce to a data-abundant regime.

17:05 Why automated curation is superior to human-in-the-loop, citing the DCLM study.

19:22 The ?Elephants vs. Dogs? analogy for managing data redundancy and complexity.

22:46 A brief history and commentary on key datasets (Common Crawl, GitHub, Books3).

26:24 Breaking naive scaling laws by improving data quality to maintain high marginal information gain.

29:07 Datology?s demonstrated impact: Achieving baseline performance 12x faster.

34:19 The business of data: Datology?s moat and its relationship with open-source datasets.

39:12 Synthetic Data Explain

ed: The difference between risky ?net-new? creation and powerful ?rephrasing.?

49:02 The Resurgence of Curriculum Learning: Why ordering data matters in the underfitting regime.

52:55 The Future of Training: Optimizing pre-training data to make post-training more effective.

54:49 Who is training their own models and why (Sovereign AI, large enterprises).

57:24 ?Train Smaller?: Why inference cost makes smaller, specialized models the ultimate goal for enterprises.

01:00:19 The problem with model pruning and why data-side solutions are complementary.

01:03:03 On finding the smallest possible model for a given capability.

01:06:49 Key learnings from the RC foundation model collaboration, proving that data curation ?stacks.?

01:09:46 Lightning Round: What data everyone wants & who should work at Datology.

01:14:24 Commentary on Meta?s superintelligence efforts and Yann LeCun?s role.



Get full access to Latent.Space at www.latent.space/subscribe
2025-08-29
Link to episode

Long Live Context Engineering - with Jeff Huber of Chroma

Jeff Huber of Chroma joins us to talk about what actually matters in vector databases in 2025, why ?modern search for AI? is different, and how to ship systems that don?t rot as context grows.

Full show notes: https://www.latent.space/p/chroma

Full Video Episode

Timestamps

00:00 Introductions00:48 Why Build Chroma02:55 Information Retrieval vs. Search04:29 Staying Focused in a Competitive AI Market08:08 Building Chroma Cloud12:15 Context Engineering and the Problems with RAG16:11 Context Rot21:49 Prioritizing Context Quality27:02 Code Indexing and Retrieval Strategies32:04 Chunk Rewriting and Query Optimization for Code34:07 Transformer Architecture Evolution and Retrieval Systems38:06 Memory as a Benefit of Context Engineering40:13 Structuring AI Memory and Offline Compaction45:46 Lessons from Previous Startups and Building with Purpose47:32 Religion and Values in Silicon Valley50:18 Company Culture, Design, and Brand Consistency52:36 Hiring at Chroma: Designers, Researchers, and Engineers



Get full access to Latent.Space at www.latent.space/subscribe
2025-08-19
Link to episode

Greg Brockman on OpenAI's Road to AGI

Greg Brockman, co-founder and president of OpenAI, joins us to talk about GPT-5 and GPT-OSS, the future of software engineering, why reinforcement learning is still scaling, and how OpenAI is planning to get to AGI.

Full Video Episode

Timestamps

00:00 Introductions01:04 The Evolution of Reasoning at OpenAI04:01 Online vs Offline Learning in Language Models06:44 Sample Efficiency and Human Curation in Reinforcement Learning08:16 Scaling Compute and Supercritical Learning13:21 Wall clock time limitations in RL and real-world interactions16:34 Experience with ARC Institute and DNA neural networks19:33 Defining the GPT-5 Era22:46 Evaluating Model Intelligence and Task Difficulty25:06 Practical Advice for Developers Using GPT-531:48 Model Specs37:21 Challenges in RL Preferences (e.g., try/catch)39:13 Model Routing and Hybrid Architectures in GPT-543:58 GPT-5 pricing and compute efficiency improvements46:04 Self-Improving Coding Agents and Tool Usage49:11 On-Device Models and Local vs Remote Agent Systems51:34 Engineering at OpenAI and Leveraging LLMs54:16 Structuring Codebases and Teams for AI Optimization55:27 The Value of Engineers in the Age of AGI58:42 Current state of AI research and lab diversity01:01:11 OpenAI?s Prioritization and Focus Areas01:03:05 Advice for Founders: It?s Not Too Late01:04:20 Future outlook and closing thoughts01:04:33 Time Capsule to 2045: Future of Compute and Abundance01:07:07 Time Capsule to 2005: More Problems Will Emerge



Get full access to Latent.Space at www.latent.space/subscribe
2025-08-15
Link to episode

The RLVR Revolution ? with Nathan Lambert (AI2, Interconnects.ai)

We first had Nathan on to give us his RLHF deep dive when he was joining AI2, and now he?s back to help us catch up on the evolution to RLVR (Reinforcement Learning with Verifiable Rewards), first proposed in his Tulu 3 paper. While RLHF remains foundational, RLVR has emerged as a powerful approach for training models on tasks with clear success criteria and using verifiable, objective functions as reward signals?particularly useful in domains like math, code correctness, and instruction-following. Instead of relying solely on subjective human feedback, RLVR leverages deterministic signals to guide optimization, making it more scalable and potentially more reliable across many domains. However, he notes that RLVR is still rapidly evolving, especially regarding how it handles tool use and multi-step reasoning.

We also discussed the Tulu model series, a family of instruction-tuned open models developed at AI2. Tulu is designed to be a reproducible, state-of-the-art post-training recipe for the open community. Unlike frontier labs like OpenAI or Anthropic, which rely on vast and often proprietary datasets, Tulu aims to distill and democratize best practices for instruction and preference tuning. We are impressed with how small eval suites, careful task selection, and transparent methodology can rival even the best proprietary models on specific benchmarks.

One of the most fascinating threads is the challenge of incorporating tool use into RL frameworks. Lambert highlights that while you can prompt a model to use tools like search or code execution, getting the model to reliably learn when and how to use them through RL is much harder. This is compounded by the difficulty of designing reward functions that avoid overoptimization?where models learn to ?game? the reward signal rather than solve the underlying task. This is particularly problematic in code generation, where models might reward hack unit tests by inserting pass statements instead of correct logic. As models become more agentic and are expected to plan, retrieve, and act across multiple tools, reward design becomes a critical bottleneck.

Other topics covered:

- The evolution from RLHF (Reinforcement Learning from Human Feedback) to RLVR (Reinforcement Learning from Verifiable Rewards)- The goals and technical architecture of the Tulu models, including the motivation to open-source post-training recipes- Challenges of tool use in RL: verifiability, reward design, and scaling across domains- Evaluation frameworks and the role of platforms like Chatbot Arena and emerging ?arena?-style benchmarks- The strategic tension between hybrid reasoning models and unified reasoning models at the frontier- Planning, abstraction, and calibration in reasoning agents and why these concepts matter- The future of open-source AI models, including DeepSeek, OLMo, and the potential for an ?American DeepSeek?- The importance of model personality, character tuning, and the model spec paradigm- Overoptimization in RL settings and how it manifests in different domains (control tasks, code, math)- Industry trends in inference-time scaling and model parallelism

Finally, the episode closes with a vision for the future of open-source AI. Nathan has now written up his ambition to build an ?American DeepSeek??a fully open, end-to-end reasoning-capable model with transparent training data, tools, and infrastructure. He emphasizes that open-source AI is not just about weights; it?s about releasing recipes, evaluations, and methods that lower the barrier for everyone to build and understand cutting-edge systems.

Full Video Episode

Timestamps

00:00 Welcome and Guest Introduction

01:18 Tulu, OVR, and the RLVR Journey

03:40 Industry Approaches to Post-Training and Preference Data

06:08 Understanding RLVR and Its Impact

06:18 Agents, Tool Use, and Training Environments

10:34 Open Data, Human Feedback, and Benchmarking

12:44 Chatbot Arena, Sycophancy, and Evaluation Platforms

15:42 RLHF vs RLVR: Books, Algorithms, and Future Directions

17:54 Frontier Models: Reasoning, Hybrid Models, and Data

22:11 Search, Retrieval, and Emerging Model Capabilities

29:23 Tool Use, Curriculum, and Model Training Challenges

38:06 Skills, Planning, and Abstraction in Agent Models

46:50 Parallelism, Verifiers, and Scaling Approaches

54:33 Overoptimization and Reward Design in RL

1:02:27 Open Models, Personalization, and the Model Spec

1:06:50 Open Model Ecosystem and Infrastructure

1:13:05 Meta, Hardware, and the Future of AI Competition

1:15:42 Building an Open DeepSeek and Closing Thoughts



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-31
Link to episode

AI is Eating Search

ChatGPT handles 2.5B prompts/day and is on track to match Google?s daily searches by end of 2026. AI agents don?t browse like us?they crave queryable, chunkable data for tools like ChatGPT & Perplexity. A new industry is being born, some are calling it AI SEO, others GEO, but what is clear is that it drives amazing results. Businesses are seeing 2-4x higher conversion from visitors coming from AI compared to traditional search. Robert McCloy is the co-founder of Scrunch AI (https://scrunchai.com/), a fast growing company that helps brands and businesses re-write their content on the fly based on what agents are looking for.

Full Video Episode

Timestamps

00:00 Intro & Guest Introduction

01:30 The Genesis of Scrunch AI & AI Search Impact

06:02 AI Search Engines vs. Traditional SEO

06:28 Monitoring Prompts & The AI Search Stack

08:26 AI Training Data, Crawlers, and Content Strategy

12:33 AI Browsers and the Future of Web Consumption

16:06 Technical Mechanisms of AI Search & SEO Relevance

28:44 Personalization, Agent Experience, and Customer Journeys

30:44 Prompt Clusters, User Intent, and B2B Buying Patterns

36:06 Optimization Tactics: Prompt Injection, Content, and Pitfalls

40:37 Technical Content Delivery: JavaScript, Programmatic SEO, and LMS.txt

47:31 Case Studies & Conversion Optimization

51:36 Market Share & Platform Trends in AI Search

55:10 Wrap-Up & Future of AI-Driven Web



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-23
Link to episode

Cline: the open source coding agent that doesn't cut costs

Saoud Rizwan and Pash from Cline joined us to talk about why fast apply models got bitter lesson?d, how they pioneered the plan + act paradigm for coding, and why non-technical people use IDEs to do marketing and generate slides.

Full writeup: https://www.latent.space/p/cline

X: https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00 - Introductions 01:35 - Plan and Act Paradigm 05:37 - Model Evaluation and Early Development of Cline 08:14 - Use Cases of Cline Beyond Coding 09:09 - Why Cline is a VS Code Extension and Not a Fork 12:07 - Economic Value of Programming Agents 16:07 - Early Adoption for MCPs 19:35 - Local vs Remote MCP Servers 22:10 - Anthropic?s Role in MCP Registry 22:49 - Most Popular MCPs and Their Use Cases 25:26 - Challenges and Future of MCP Monetization 27:32 - Security and Trust Issues with MCPs 28:56 - Alternative History Without MCP 29:43 - Market Positioning of Coding Agents and IDE Integration Matrix 32:57 - Visibility and Autonomy in Coding Agents 35:21 - Evolving Definition of Complexity in Programming Tasks 38:16 - Forks of Cline and Open Source Regrets 40:07 - Simplicity vs Complexity in Agent Design 46:33 - How Fast Apply Got Bitter Lesson?d 49:12 - Cline?s Business Model and Bring-Your-Own-API-Key Approach 54:18 - Integration with OpenRouter and Enterprise Infrastructure 55:32 - Impact of Declining Model Costs 57:48 - Background Agents and Multi-Agent Systems 1:00:42 - Vision and Multi-Modalities 1:01:07 - State of Context Engineering 1:07:37 - Memory Systems in Coding Agents 1:10:14 - Standardizing Rules Files Across Agent Tools 1:11:16 - Cline?s Personality and Anthropomorphization 1:12:55 - Hiring at Cline and Team Culture



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-16
Link to episode

Personalized AI Language Education ? with Andrew Hsu, Speak

Speak (https://speak.com) may not be very well known to native English speakers, but they have come from a slow start in 2016 to emerge as one of the favorite partners of OpenAI, with their Startup Fund leading and joining their Series B and C as one of the new AI-native unicorns, noting that ?Speak has the potential to revolutionize not just language learning, but education broadly?.

Today we speak with Speak?s CTO, Andrew Hsu, on the journey of building the ?3rd generation? of language learning software (with Rosetta Stone being Gen 1, and Duolingo being Gen 2). Speak?s premise is that speech and language models can now do what was previously only possible with human tutors?provide fluent, responsive, and adaptive instruction?and this belief has shaped its product and company strategy since its early days.

https://www.linkedin.com/in/adhsu/

https://speak.com

One of the most interesting strategic decisions discussed in the episode is Speak?s early focus on South Korea. While counterintuitive for a San Francisco-based startup, the decision was influenced by a combination of market opportunity and founder proximity via a Korean first employee. South Korea?s intense demand for English fluency and a highly competitive education market made it a proving ground for a deeply AI-native product. By succeeding in a market saturated with human-based education solutions, Speak validated its model and built strong product-market fit before expanding to other Asian markets and eventually, globally.

The arrival of Whisper and GPT-based LLMs in 2022 marked a turning point for Speak. Suddenly, capabilities that were once theoretical?real-time feedback, semantic understanding, conversational memory?became technically feasible. Speak didn?t pivot, but rather evolved into its second phase: from a supplemental practice tool to a full-featured language tutor. This transition required significant engineering work, including building custom ASR models, managing latency, and integrating real-time APIs for interactive lessons. It also unlocked the possibility of developing voice-first, immersive roleplay experiences and a roadmap to real-time conversational fluency.

To scale globally and support many languages, Speak is investing heavily in AI-generated curriculum and content. Instead of manually scripting all lessons, they are building agents and pipelines that can scaffold curriculum, generate lesson content, and adapt pedagogically to the learner. This ties into one of Speak?s most ambitious goals: creating a knowledge graph that captures what a learner knows and can do in a target language, and then adapting the course path accordingly. This level-adjusting tutor model aims to personalize learning at scale and could eventually be applied beyond language learning to any educational domain.

Finally, the conversation touches on the broader implications of AI-powered education and the slow real-world adoption of transformative AI technologies. Despite the capabilities of GPT-4 and others, most people?s daily lives haven?t changed dramatically. Speak sees itself as part of the generation of startups that will translate AI?s raw power into tangible consumer value. The company is also a testament to long-term conviction?founded in 2016, it weathered years of slow growth before AI caught up to its vision. Now, with over $50M ARR, a growing B2B arm, and plans to expand across languages and learning domains, Speak represents what AI-native education could look like in the next decade.

Full Video Episode

Timestamps

00:00 Introductions & Thiel Fellowship Origins

02:13 Genesis of Speak: Early Vision & Market Focus

03:44 Building the Product: Iterations and Lessons Learned

10:59 AI?s Role in Language Learning

13:49 Scaling Globally & B2B Expansion

16:30 Why Korea? Localizing for Success

19:08 Content Creation, The Speak Method, and Engineering Culture

23:31 The Impact of Whisper and LLM Advances

29:08 AI-Generated Content & Measuring Fluency

35:30 Personalization, Dialects, and Pronunciation

39:38 Immersive Learning, Multimodality, and Real-Time Voice

50:02 Engineering Challenges & Company Culture

53:20 Beyond Languages: B2B, Knowledge Graphs, and Broader Learning

57:32 Fun Stories, Lessons, and Reflections

1:02:03 Final Thoughts: The Future of AI Learning & Slow Takeoff



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-11
Link to episode

AI Video Is Eating The World ? Olivia and Justine Moore, a16z

When the first video diffusion models started emerging, they were little more than just ?moving pictures? - still frames extended a few seconds in either direction in time. There was a ton of excitement about OpenAI?s Sora on release through 2024, but so far only Sora-lite has been widely released. Meanwhile, other good videogen models like Genmo Mochi, Pika, MiniMax T2V, Tencent Hunyuan Video, and Kuaishou?s Kling have emerged, but the reigning king this year seems to be Google?s Veo 3, which for the first time has added native audio generation into their model capabilities, eliminating the need for a whole class of lipsynching tooling and SFX editing.

The rise of Veo 3 unlocks a whole new category of AI Video creators that many of our audience may not have been exposed to, but is undeniably effective and important particularly in the ?kids? and ?brainrot? segments of the global consumer internet platforms like Tiktok, YouTube and Instagram.

By far the best documentarians of these trends for laypeople are Olivia and Justine Moore, both partners at a16z, who not only collate the best examples from all over the web, but dabble in video creation themselves to put theory into practice. We?ve been thinking of dabbling in AI brainrot on a secondary channel for Latent Space, so we wanted to get the braindump from the Moore twins on how to make a Latent Space Brainrot channel. Jump on in!

Full Video Episode

Timestamps

00:00 Introductions & Guest Welcome

00:49 The Rise of Generative Media

02:24 AI Video Trends: Italian Brain Rot & Viral Characters

05:00 Following Trends & Creating AI Content

07:17 Hands-On with AI Video Creation

18:36 Monetization & Business of AI Content

23:34 Platforms, Models, and the Creator Stack

37:22 Native Content vs. Clipping & Going Viral

41:52 Prompt Theory & Meta-Trends in AI Creativity

47:42 Professional, Commercial, and Platform-Specific AI Video

48:57 Wrap-Up & Final Thoughts



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-09
Link to episode

Information Theory for Language Models: Jack Morris

Our last AI PhD grad student feature was Shunyu Yao, who happened to focus on Language Agents for his thesis and immediately went to work on them for OpenAI. Our pick this year is Jack Morris, who bucks the ?hot? trends by -not- working on agents, benchmarks, or VS Code forks, but is rather known for his work on the information theoretic understanding of LLMs, starting from embedding models and latent space representations (always close to our heart).

Jack is an unusual combination of doing underrated research but somehow still being to explain them well to a mass audience, so we felt this was a good opportunity to do a different kind of episode going through the greatest hits of a high profile AI PhD, and relate them to questions from AI Engineering.

Papers and References made

* AI grad school:

* A new type of information theory:

* Embeddings

* Text Embeddings Reveal (Almost) As Much As Text: https://arxiv.org/abs/2310.06816

* Contextual document embeddings https://arxiv.org/abs/2410.02525

Harnessing the Universal Geometry of Embeddings: https://arxiv.org/abs/2505.12540

* Language models

* GPT-style language models memorize 3.6 bits per param:

* Approximating Language Model Training Data from Weights: https://arxiv.org/abs/2506.15553

* LLM Inversion

* ?There Are No New Ideas In AI.... Only New Datasets?

* misc reference: https://junyanz.github.io/CycleGAN/

?

for others hiring AI PhDs, Jack also wanted to shout out his coauthor

Zach Nussbaum, his coauthor on Nomic Embed: Training a Reproducible Long Context Text Embedder.

Full Video Episode

Timestamps

00:00 Introduction to Jack Morris01:18 Career in AI03:29 The Shift to AI Companies03:57 The Impact of ChatGPT04:26 The Role of Academia in AI05:49 The Emergence of Reasoning Models07:07 Challenges in Academia: GPUs and HPC Training11:04 The Value of GPU Knowledge14:24 Introduction to Jack's Research15:28 Information Theory17:10 Understanding Deep Learning Systems19:00 The "Bit" in Deep Learning20:25 Wikipedia and Information Storage23:50 Text Embeddings and Information Compression27:08 The Research Journey of Embedding Inversion31:22 Harnessing the Universal Geometry of Embeddings34:54 Implications of Embedding Inversion36:02 Limitations of Embedding Inversion38:08 The Capacity of Language Models40:23 The Cognitive Core and Model Efficiency50:40 The Future of AI and Model Scaling52:47 Approximating Language Model Training Data from Weights01:06:50 The "No New Ideas, Only New Datasets" Thesis



Get full access to Latent.Space at www.latent.space/subscribe
2025-07-02
Link to episode

Scaling Test Time Compute to Multi-Agent Civilizations ? Noam Brown, OpenAI

Solving Poker and Diplomacy, Debating RL+Reasoning with Ilya, what?s *wrong* with the System 1/2 analogy, and where Test-Time Compute hits a wall

Full Video Episode

Timestamps

00:00 Intro ? Diplomacy, Cicero & World Championship 02:00 Reverse Centaur: How AI Improved Noam?s Human Play 05:00 Turing Test Failures in Chat: Hallucinations & Steerability 07:30 Reasoning Models & Fast vs. Slow Thinking Paradigm 11:00 System 1 vs. System 2 in Visual Tasks (GeoGuessr, Tic-Tac-Toe) 14:00 The Deep Research Existence Proof for Unverifiable Domains 17:30 Harnesses, Tool Use, and Fragility in AI Agents 21:00 The Case Against Over-Reliance on Scaffolds and Routers 24:00 Reinforcement Fine-Tuning and Long-Term Model Adaptability 28:00 Ilya?s Bet on Reasoning and the O-Series Breakthrough 34:00 Noam?s Dev Stack: Codex, Windsurf & AGI Moments 38:00 Building Better AI Developers: Memory, Reuse, and PR Reviews 41:00 Multi-Agent Intelligence and the ?AI Civilization? Hypothesis 44:30 Implicit World Models and Theory of Mind Through Scaling 48:00 Why Self-Play Breaks Down Beyond Go and Chess 54:00 Designing Better Benchmarks for Fuzzy Tasks 57:30 The Real Limits of Test-Time Compute: Cost vs. Time 1:00:30 Data Efficiency Gaps Between Humans and LLMs 1:03:00 Training Pipeline: Pretraining, Midtraining, Posttraining 1:05:00 Games as Research Proving Grounds: Poker, MTG, Stratego 1:10:00 Closing Thoughts ? Five-Year View and Open Research Directions



Get full access to Latent.Space at www.latent.space/subscribe
2025-06-19
Link to episode

The Shape of Compute (Chris Lattner of Modular)

Chris Lattner of Modular (https://modular.com) joined us (again!) to talk about how they are breaking the CUDA monopoly, what it took to match NVIDIA performance with AMD, and how they are building a company of ?elite nerds?.

X: https://x.com/latentspacepod

Substack: https://latent.space

Full Video Episode

Timestamps

00:00:00 Introductions 00:00:12 Overview of Modular and the Shape of Compute 00:02:27 Modular?s R&D Phase 00:06:55 From CPU Optimization to GPU Support 00:11:14 MAX: Modular?s Inference Framework 00:12:52 Mojo Programming Language 00:18:25 MAX Architecture: From Mojo to Cluster-Scale Inference 00:29:16 Open Source Contributions and Community Involvement 00:32:25 Modular?s Differentiation from VLLM and SGLang 00:41:37 Modular?s Business Model and Monetization Strategy 00:53:17 DeepSeek?s Impact and Low-Level GPU Programming 01:00:00 Inference Time Compute and Reasoning Models 01:02:31 Personal Reflections on Leading Modular 01:08:27 Daily Routine and Time Management as a Founder 01:13:24 Using AI Coding Tools and Staying Current with Research 01:14:47 Personal Projects and Work-Life Balance 01:17:05 Hiring, Open Source, and Community Engagement



Get full access to Latent.Space at www.latent.space/subscribe
2025-06-13
Link to episode

The Utility of Interpretability ? Emmanuel Amiesen

Emmanuel Amiesen is lead author of ?Circuit Tracing: Revealing Computational Graphs in Language Models? (https://transformer-circuits.pub/2025/attribution-graphs/methods.html ), which is part of a duo of MechInterp papers that Anthropic published in March (alongside https://transformer-circuits.pub/2025/attribution-graphs/biology.html ).

We recorded the initial conversation a month ago, but then held off publishing until the open source tooling for the graph generation discussed in this work was released last week: https://www.anthropic.com/research/open-source-circuit-tracing

This is a 2 part episode - an intro covering the open source release, then a deeper dive into the paper ? with guest host Vibhu Sapra (https://x.com/vibhuuuus ) and Mochi the MechInterp Pomsky (https://x.com/mochipomsky ). Thanks to Vibhu for making this episode happen!

While the original blogpost contained some fantastic guided visualizations (which we discuss at the end of this pod!), with the notebook and Neuronpedia visualization (https://www.neuronpedia.org/gemma-2-2b/graph ) released this week, you can now explore on your own with Neuronpedia, as we show you in the video version of this pod.

Full Video Episode

Timestamps

00:00 Intro & Guest Introductions01:00 Anthropic's Circuit Tracing Release06:11 Exploring Circuit Tracing Tools & Demos13:01 Model Behaviors and User Experiments17:02 Behind the Research: Team and Community24:19 Main Episode Start: Mech Interp Backgrounds25:56 Getting Into Mech Interp Research31:52 History and Foundations of Mech Interp37:05 Core Concepts: Superposition & Features39:54 Applications & Interventions in Models45:59 Challenges & Open Questions in Interpretability57:15 Understanding Model Mechanisms: Circuits & Reasoning01:04:24 Model Planning, Reasoning, and Attribution Graphs01:30:52 Faithfulness, Deception, and Parallel Circuits01:40:16 Publishing Risks, Open Research, and Visualization01:49:33 Barriers, Vision, and Call to Action



Get full access to Latent.Space at www.latent.space/subscribe
2025-06-06
Link to episode

[AIEWF Preview] Containing Agent Chaos ? Solomon Hykes

Solomon most famously created Docker and now runs Dagger? which has something special to share with you on Thursday.

Catch Dagger at:

- Tuesday: Dagger?s workshop https://www.ai.engineer/schedule#ship-agents-that-ship-a-hands-on-workshop-for-swe-agent-builders

- Wednesday: Dagger?s talk: https://www.ai.engineer/schedule#how-to-trust-an-agent-with-software-delivery

- Thursday: Solomon?s Keynote https://www.ai.engineer/schedule#containing-agent-chaos

Full Video Episode

Timestamps

00:00 Introduction & Guest Background00:29 What is Dagger? Post-Development Automation01:08 Dagger?s Community & Platform Engineers02:32 AI Agents and Developer Workflows03:40 Environment Isolation & The Power of Containers06:28 The Need for Standards in Agent Environments07:25 Design Constraints & Challenges for Dev Environments11:26 Limitations of Current Tools & Agent-Native UX14:11 Modularity, Customization, and the Lego Analogy16:24 Convergence of CICD and Agentic Systems17:41 Ephemeral Apps, Resource Constraints, and Local Execution21:01 Adoption, Ecosystem, and the Role of Open Source23:30 Dagger?s Modular Approach & Integration Philosophy25:38 Looking Ahead: Workshops, Keynotes, and the Future of Agentic Infrastructure



Get full access to Latent.Space at www.latent.space/subscribe
2025-06-03
Link to episode

[AIEWF Preview] Gemini in 2025 and Realtime Voice AI

As part of our AI Engineer World?s Fair preview, we?re releasing a special cross podcast recorded with Sam Charrington of TWiML AI at last week?s Google I/O!

TUESDAY: Shrestha and Kwindla?s workshop: https://www.ai.engineer/schedule#milliseconds-to-magic-real-time-workflows-using-the-gemini-live-api-and-pipecat

TUESDAY: Kwindla?s workshop: https://www.ai.engineer/schedule#building-voice-agents-with-gemini-and-pipecat

WEDNESDAY: Shrestha and Kwindla?s talk: https://www.ai.engineer/schedule#milliseconds-to-magic-real-time-workflows-using-the-gemini-live-api-and-pipecat

WEDNESDAY: Kwindla?s keynote: https://www.ai.engineer/schedule#-voice-keynote-your-realtime-ai-is-ngmi

THURSDAY: Logan?s keynote: https://www.ai.engineer/schedule#a-year-of-gemini-progress-what-comes-next

Catch all the speakers at AIE (both workshops and talks):

Logan Kilpatrick: https://www.latent.space/p/chatgpt-gpt4-hype-and-building-llm

Shrestha Basu Mallick: https://www.linkedin.com/in/shresthabm/

Kwindla Hultman Kramer: https://www.linkedin.com/in/kwkramer

Full Video Episode



Get full access to Latent.Space at www.latent.space/subscribe
2025-06-02
Link to episode

[AIEWF Preview] CloudChef: Your Robot Chef - Michellin-Star food at $12/hr (w/ Kitchen tour!)

One of the new tracks at next week?s AI Engineer conference in SF is a new focus on LLMs + Robotics, ft. household names like Waymo and Physical Intelligence. However there are many other companies applying LLMs and VLMs in the real world!

CloudChef, the first industrial-scale kitchen robotics company with one-shot demonstration learning and an incredibly simple business model, will be serving tasty treats all day with Zippy (https://www.cloudchef.co/zippy ) their AI Chef platform.

This is a lightning pod with CEO Nikhil Abraham to preview what Zippy is capable of!

https://www.cloudchef.co/platform

See a real chef comparison:

See it in the AI Engineer Expo at SF next week: https://ai.engineer

Full Video Episode

Timestamps

00:00 Welcome and Introductions00:58 What is Cloud Chef?01:36 How the Robots Work: Culinary Intelligence05:57 Commercial Applications and Early Success07:02 The Software-First Approach10:09 Business Model and Pricing13:10 Demonstration Learning: Training the Robots16:03 Call to Action and Engineering Opportunities18:45 Final Thoughts and Technical Details



Get full access to Latent.Space at www.latent.space/subscribe
2025-05-31
Link to episode

The AI Coding Factory

We are joined by Eno Reyes and Matan Grinberg, the co-founders of Factory.ai. They are building droids for autonomous software engineering, handling everything from code generation to incident response for production outages. After raising a $15M Series A from Sequoia, they just released their product in GA!

https://factory.ai/

https://x.com/latentspacepod

Full Video Episode

Timestamps

00:00 Introductions 00:35 Meeting at Langchain Hackathon 04:02 Building Factory despite early model limitations 06:56 What is Factory AI? 08:55 Delegation vs Collaboration in AI Development Tools 10:06 Naming Origins of 'Factory' and 'Droids' 12:17 Defining Droids: Agent vs Workflow 14:34 Live Demo17:37 Enterprise Context and Tool Integration in Droids 20:26 Prompting, Clarification, and Agent Communication 22:28 Project Understanding and Proactive Context Gathering 24:10 Why SWE-Bench Is Dead 28:47 Model Fine-tuning and Generalization Challenges 31:07 Why Factory is Browser-Based, Not IDE-Based 33:51 Test-Driven Development and Agent Verification 36:17 Retrieval vs Large Context Windows for Cost Efficiency 38:02 Enterprise Metrics: Code Churn and ROI 40:48 Executing Large Refactors and Migrations with Droids 45:25 Model Speed, Parallelism, and Delegation Bottlenecks 50:11 Observability Challenges and Semantic Telemetry 53:44 Hiring55:19 Factory's design and branding approach 58:34 Closing Thoughts and Future of AI-Native Development



Get full access to Latent.Space at www.latent.space/subscribe
2025-05-29
Link to episode

[AIEWF Preview] Multi-Turn RL for Multi-Hour Agents ? with Will Brown, Prime Intellect

In an otherwise heavy week packed with Microsoft Build, Google I/O, and OpenAI io, the worst kept secret in biglab land was the launch of Claude 4, particularly the triumphant return of Opus, which many had been clamoring for. We will leave the specific Claude 4 recap to AINews, however we think that both Gemini?s progress on Deep Think this week and Claude 4 represent the next frontier of progress on inference time compute/reasoning (at last until GPT5 ships this summer).

Will Brown?s talk at AIE NYC and open source work on verifiers have made him one of the most prominent voices able to publicly discuss (aka without the vaguepoasting LoRA they put on you when you join a biglab) the current state of the art in reasoning models and where current SOTA research directions lead. We discussed his latest paper on Reinforcing Multi-Turn Reasoning in LLM Agents via Turn-Level Credit Assignment and he has previewed his AIEWF talk on Agentic RL for those with the temerity to power thru bad meetup audio.

Full Video Episode

Timestamps

00:00 Introduction to the Podcast and Guests01:00 Discussion on Claude 4 and AI Models03:07 Extended Thinking and Tool Use in AI06:47 Technical Highlights and Model Trustworthiness10:31 Thinking Budgets and Their Implications13:38 Controversy Surrounding Opus and AI Ethics18:49 Reflections on AI Tools and Their Limitations21:58 The Chaos of Predictive Systems22:56 Marketing and Safety in AI Models24:30 Evaluating AI Companies and Their Strategies25:53 The Role of Academia in AI Evaluations27:43 Teaching Taste in Research28:41 Making Educated Bets in AI Research30:12 Recent Developments in Multi-Turn Tool Use32:50 Incentivizing Tool Use in AI Models34:45 The Future of Reward Models in AI39:10 Exploring Flexible Reward Systems



Get full access to Latent.Space at www.latent.space/subscribe
2025-05-23
Link to episode
A tiny webapp by I'm With Friends.
Updated daily with data from the Apple Podcasts.