The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.

Subscribe

iTunes / Overcast / RSS

Website

twimlai.com

Episodes

Why Agents Are Stupid & What We Can Do About It with Dan Jeffries - #713

Today, we're joined by Dan Jeffries, founder and CEO of Kentauros AI, to discuss the challenges currently faced by those developing advanced AI agents. We dig into how Dan defines agents and distinguishes them from other similar uses of LLMs, explore various use cases for them, and consider ways to create smarter agentic systems. Dan shares his "big brain, little brain, tool brain" approach to tackling real-world challenges in agents, the trade-offs in leveraging general-purpose vs. task-specific models, and his take on LLM reasoning. We also cover the way he thinks about model selection for agents, along with the need for new tools and platforms for deploying them. Finally, Dan emphasizes the importance of open source in advancing AI, shares the new products they're working on, and explores future directions in the agentic era. The complete show notes for this episode can be found at https://twimlai.com/go/713.
2024-12-16
Link to episode

Automated Reasoning to Prevent LLM Hallucination with Byron Cook - #712

Today, we're joined by Byron Cook, VP and distinguished scientist in the Automated Reasoning Group at AWS, to dig into the underlying technology behind the newly announced Automated Reasoning Checks feature of Amazon Bedrock Guardrails. Automated Reasoning Checks uses mathematical proofs to help LLM users safeguard against hallucinations. We explore recent advancements in the field of automated reasoning, as well as some of the ways it is applied both broadly and across AWS, where it is used to enhance security, cryptography, virtualization, and more. We discuss how the new feature helps users generate, refine, validate, and formalize policies, and how those policies can be deployed alongside LLM applications to ensure the accuracy of generated text. Finally, Byron shares the benchmarks they've applied, the use of techniques like "constrained coding" and "backtracking," and the future co-evolution of automated reasoning and generative AI. The complete show notes for this episode can be found at https://twimlai.com/go/712.
2024-12-09
Link to episode

AI at the Edge: Qualcomm AI Research at NeurIPS 2024 with Arash Behboodi - #711

Today, we're joined by Arash Behboodi, director of engineering at Qualcomm AI Research to discuss the papers and workshops Qualcomm will be presenting at this year's NeurIPS conference. We dig into the challenges and opportunities presented by differentiable simulation in wireless systems, the sciences, and beyond. We also explore recent work that ties conformal prediction to information theory, yielding a novel approach to incorporating uncertainty quantification directly into machine learning models. Finally, we review several papers enabling the efficient use of LoRA (Low-Rank Adaptation) on mobile devices (Hollowed Net, ShiRA, FouRA). Arash also previews the demos Qualcomm will be hosting at NeurIPS, including new video editing diffusion and 3D content generation models running on-device, Qualcomm's AI Hub, and more! The complete show notes for this episode can be found at https://twimlai.com/go/711.
2024-12-03
Link to episode

AI for Network Management with Shirley Wu - #710

Today, we're joined by Shirley Wu, senior director of software engineering at Juniper Networks to discuss how machine learning and artificial intelligence are transforming network management. We explore various use cases where AI and ML are applied to enhance the quality, performance, and efficiency of networks across Juniper's customers, including diagnosing cable degradation, proactive monitoring for coverage gaps, and real-time fault detection. We also dig into the complexities of integrating data science into networking, the trade-offs between traditional methods and ML-based solutions, the role of feature engineering and data in networking, the applicability of large language models, and Juniper's approach to using smaller, specialized ML models to optimize speed, latency, and cost. Finally, Shirley shares some future directions for Juniper Mist such as proactive network testing and end-user self-service. The complete show notes for this episode can be found at https://twimlai.com/go/710.
2024-11-19
Link to episode

Why Your RAG System Is Broken, and How to Fix It with Jason Liu - #709

Today, we're joined by Jason Liu, freelance AI consultant, advisor, and creator of the Instructor library to discuss all things retrieval-augmented generation (RAG). We dig into the tactical and strategic challenges companies face with their RAG systems, the different signs Jason looks for to identify looming problems, the issues he most commonly encounters, and the steps he takes to diagnose these issues. We also cover the significance of building out robust test datasets, data-driven experimentation, evaluation tools, and metrics for different use cases. We touch on fine-tuning strategies for RAG systems, the effectiveness of different chunking strategies, the use of collaboration tools like Braintrust, and how future models will change the game. Lastly, we cover Jason's interest in teaching others how to capitalize on their own AI experience via his AI consulting course. The complete show notes for this episode can be found at https://twimlai.com/go/709.
2024-11-11
Link to episode

An Agentic Mixture of Experts for DevOps with Sunil Mallya - #708

Today we're joined by Sunil Mallya, CTO and co-founder of Flip AI. We discuss Flip's incident debugging system for DevOps, which was built using a custom mixture of experts (MoE) large language model (LLM) trained on a novel "CoMELT" observability dataset, which combines traditional MELT data (metrics, events, logs, and traces) with code to efficiently identify root failure causes in complex software systems. We discuss the challenges of integrating time-series data with LLMs and their multi-decoder architecture designed for this purpose. Sunil describes their system's agent-based design, focusing on clear roles and boundaries to ensure reliability. We examine their "chaos gym," a reinforcement learning environment used for testing and improving the system's robustness. Finally, we discuss the practical considerations of deploying such a system at scale in diverse environments and much more. The complete show notes for this episode can be found at https://twimlai.com/go/708.
2024-11-04
Link to episode

Building AI Voice Agents with Scott Stephenson - #707

Today, we're joined by Scott Stephenson, co-founder and CEO of Deepgram to discuss voice AI agents. We explore the importance of perception, understanding, and interaction and how these key components work together in building intelligent AI voice agents. We discuss the role of multimodal LLMs as well as speech-to-text and text-to-speech models in building AI voice agents, and dig into the benefits and limitations of text-based approaches to voice interactions. We dig into what's required to deliver real-time voice interactions and the promise of closed-loop, continuously improving, federated learning agents. Finally, Scott shares practical applications of AI voice agents at Deepgram and provides an overview of their newly released agent toolkit. The complete show notes for this episode can be found at https://twimlai.com/go/707.
2024-10-28
Link to episode

Is Artificial Superintelligence Imminent? with Tim Rocktäschel - #706

Today, we're joined by Tim Rocktäschel, senior staff research scientist at Google DeepMind, professor of Artificial Intelligence at University College London, and author of the recently published popular science book, "Artificial Intelligence: 10 Things You Should Know." We dig into the attainability of artificial superintelligence and the path to achieving generalized superhuman capabilities across multiple domains. We discuss the importance of open-endedness in developing autonomous and self-improving systems, as well as the role of evolutionary approaches and algorithms. Additionally, we cover Tim's recent research projects such as "Promptbreeder," "Debating with More Persuasive LLMs Leads to More Truthful Answers," and more. The complete show notes for this episode can be found at https://twimlai.com/go/706.
2024-10-21
Link to episode

ML Models for Safety-Critical Systems with Lucas García - #705

Today, we're joined by Lucas García, principal product manager for deep learning at MathWorks to discuss incorporating ML models into safety-critical systems. We begin by exploring the critical role of verification and validation (V&V) in these applications. We review the popular V-model for engineering critical systems and then dig into the "W" adaptation that's been proposed for incorporating ML models. Next, we discuss the complexities of applying deep learning neural networks in safety-critical applications using the aviation industry as an example, and talk through the importance of factors such as data quality, model stability, robustness, interpretability, and accuracy. We also explore formal verification methods, abstract transformer layers, transformer-based architectures, and the application of various software testing techniques. Lucas also introduces the field of constrained deep learning and convex neural networks and their benefits and trade-offs. The complete show notes for this episode can be found at https://twimlai.com/go/705.
2024-10-14
Link to episode

AI Agents: Substance or Snake Oil with Arvind Narayanan - #704

Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University to discuss his recent works, AI Agents That Matter and AI Snake Oil. In "AI Agents That Matter," we explore the range of agentic behaviors, the challenges in benchmarking agents, and the "capability and reliability gap," which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applications of AI, outlines a taxonomy of AI risks, and shares his insights on AI's catastrophic risks. Additionally, we touch on different approaches to LLM-based reasoning, his views on tech policy and regulation, and his work on CORE-Bench, a benchmark designed to measure AI agents' accuracy in computational reproducibility tasks. The complete show notes for this episode can be found at https://twimlai.com/go/704.
2024-10-07
Link to episode

AI Agents for Data Analysis with Shreya Shankar - #703

Today, we're joined by Shreya Shankar, a PhD student at UC Berkeley to discuss DocETL, a declarative system for building and optimizing LLM-powered data processing pipelines for large-scale and complex document analysis tasks. We explore how DocETL's optimizer architecture works, the intricacies of building agentic systems for data processing, the current landscape of benchmarks for data processing tasks, how these differ from reasoning-based benchmarks, and the need for robust evaluation methods for human-in-the-loop LLM workflows. Additionally, Shreya shares real-world applications of DocETL, the importance of effective validation prompts, and building robust and fault-tolerant agentic systems. Lastly, we cover the need for benchmarks tailored to LLM-powered data processing tasks and the future directions for DocETL. The complete show notes for this episode can be found at https://twimlai.com/go/703.
2024-09-30
Link to episode

Stealing Part of a Production Language Model with Nicholas Carlini - #702

Today, we're joined by Nicholas Carlini, research scientist at Google DeepMind to discuss adversarial machine learning and model security, focusing on his 2024 ICML best paper winner, "Stealing Part of a Production Language Model." We dig into this work, which demonstrated the ability to successfully steal the last layer of production language models including ChatGPT and PaLM-2. Nicholas shares the current landscape of AI security research in the age of LLMs, the implications of model stealing, ethical concerns surrounding model privacy, how the attack works, and the significance of the embedding layer in language models. We also discuss the remediation strategies implemented by OpenAI and Google, and future directions in the field of AI security. We also cover his other ICML 2024 best paper, "Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining," which questions the use and promotion of differential privacy in conjunction with pre-trained models. The complete show notes for this episode can be found at https://twimlai.com/go/702.
2024-09-23
Link to episode

Supercharging Developer Productivity with ChatGPT and Claude with Simon Willison - #701

Today, we're joined by Simon Willison, independent researcher and creator of Datasette to discuss the many ways software developers and engineers can take advantage of large language models (LLMs) to boost their productivity. We dig into Simon's own workflows and how he uses popular models like ChatGPT and Anthropic's Claude to write and test hundreds of lines of code while out walking his dog. We review Simon's favorite prompting and debugging techniques, his strategies for sidestepping the limitations of contemporary models, how he uses Claude's Artifacts feature for rapid prototyping, his thoughts on the use and impact of vision models, the role he sees for open source models and local LLMs, and much more. The complete show notes for this episode can be found at https://twimlai.com/go/701.
2024-09-17
Link to episode

Automated Design of Agentic Systems with Shengran Hu - #700

Today, we're joined by Shengran Hu, a PhD student at the University of British Columbia, to discuss Automated Design of Agentic Systems (ADAS), an approach focused on automatically creating agentic system designs. We explore the spectrum of agentic behaviors, the motivation for learning all aspects of agentic system design, the key components of the ADAS approach, and how it uses LLMs to design novel agent architectures in code. We also cover the iterative process of ADAS, its potential to shed light on the behavior of foundation models, the higher-level meta-behaviors that emerge in agentic systems, and how ADAS uncovers novel design patterns through emergent behaviors, particularly in complex tasks like the ARC challenge. Finally, we touch on the practical applications of ADAS and its potential use in system optimization for real-world tasks. The complete show notes for this episode can be found at https://twimlai.com/go/700.
2024-09-02
Link to episode

The EU AI Act and Mitigating Bias in Automated Decisioning with Peter van der Putten - #699

Today, we're joined by Peter van der Putten, director of the AI Lab at Pega and assistant professor of AI at Leiden University. We discuss the newly adopted European AI Act and the challenges of applying academic fairness metrics in real-world AI applications. We dig into the key ethical principles behind the Act, its broad definition of AI, and how it categorizes various AI risks. We also discuss the practical challenges of implementing fairness and bias metrics in real-world scenarios, and the importance of a risk-based approach in regulating AI systems. Finally, we cover how the EU AI Act might influence global practices, similar to the GDPR's effect on data privacy, and explore strategies for closing bias gaps in real-world automated decision-making. The complete show notes for this episode can be found at https://twimlai.com/go/699.
2024-08-27
Link to episode

The Building Blocks of Agentic Systems with Harrison Chase - #698

Today, we're joined by Harrison Chase, co-founder and CEO of LangChain to discuss LLM frameworks, agentic systems, RAG, evaluation, and more. We dig into the elements of a modern LLM framework, including the most productive developer experiences and appropriate levels of abstraction. We dive into agents and agentic systems as well, covering the "spectrum of agenticness," cognitive architectures, and real-world applications. We explore key challenges in deploying agentic systems, and the importance of agentic architectures as a means of communication in system design and operation. Additionally, we review evolving use cases for RAG, and the role of observability, testing, and evaluation tools in moving LLM applications from prototype to production. Lastly, Harrison shares his hot takes on prompting, multi-modal models, and more! The complete show notes for this episode can be found at https://twimlai.com/go/698.
2024-08-19
Link to episode

Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697

Today, we're joined by Siddhika Nevrekar, head of AI Hub at Qualcomm Technologies, to discuss on-device AI and how to make it easier for developers to take advantage of device capabilities. We unpack the motivations for AI engineers to move model inference from the cloud to local devices, and explore the challenges associated with on-device AI. We dig into the role of hardware solutions, from powerful systems-on-chip (SoCs) to neural processors, the importance of collaboration between chip manufacturers and community runtimes like ONNX and TFLite, the unique challenges of IoT and autonomous vehicles, and the key metrics developers should focus on to ensure optimal on-device performance. Finally, Siddhika introduces Qualcomm's AI Hub, a platform developed to simplify the process of testing and optimizing AI models across different devices. The complete show notes for this episode can be found at https://twimlai.com/go/697.
2024-08-12
Link to episode

Genie: Generative Interactive Environments with Ashley Edwards - #696

Today, we're joined by Ashley Edwards, a member of technical staff at Runway, to discuss Genie: Generative Interactive Environments, a system for creating "playable" video environments for training deep reinforcement learning (RL) agents at scale in a completely unsupervised manner. We explore the motivations behind Genie, the challenges of data acquisition for RL, and Genie's capability to learn world models from videos without explicit action data, enabling seamless interaction and frame prediction. Ashley walks us through Genie's core components (the latent action model, video tokenizer, and dynamics model) and explains how these elements collaborate to predict future frames in video sequences. We discuss the model architecture, training strategies, and benchmarks used, as well as the application of spatiotemporal transformers and the MaskGIT techniques used for efficient token prediction and representation. Finally, we touch on Genie's practical implications, its comparison to other video generation models like "Sora," and potential future directions in video generation and diffusion models. The complete show notes for this episode can be found at https://twimlai.com/go/696.
2024-08-05
Link to episode

Bridging the Sim2real Gap in Robotics with Marius Memmel - #695

Today, we're joined by Marius Memmel, a PhD student at the University of Washington, to discuss his research on sim-to-real transfer approaches for developing autonomous robotic agents in unstructured environments. Our conversation focuses on his recent ASID and URDFormer papers. We explore the complexities presented by real-world settings like a cluttered kitchen, data acquisition challenges for training robust models, the importance of simulation, and the challenge of bridging the sim2real gap in robotics. Marius introduces ASID, a framework designed to enable robots to autonomously generate and refine simulation models to improve sim-to-real transfer. We discuss the role of Fisher information as a metric for trajectory sensitivity to physical parameters and the importance of exploration and exploitation phases in robot learning. Additionally, we cover URDFormer, a transformer-based model that generates URDF documents for scene and object reconstruction to create realistic simulation environments. The complete show notes for this episode can be found at https://twimlai.com/go/695.
2024-07-30
Link to episode

Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - #694

Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers: how to iterate from a snazzy demo or proof-of-concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs and dig into when to use this technique. We cover the fine-tuning process, common pitfalls in evaluation (such as relying too heavily on generic tools and missing the nuances of specific use cases), open-source LLM fine-tuning tools like Axolotl, the use of LoRA adapters, and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices. The complete show notes for this episode can be found at https://twimlai.com/go/694.
2024-07-23
Link to episode

Mamba, Mamba-2 and Post-Transformer Architectures for Generative AI with Albert Gu - #693

Today, we're joined by Albert Gu, assistant professor at Carnegie Mellon University, to discuss his research on post-transformer architectures for multi-modal foundation models, with a focus on state-space models in general and Albert's recent Mamba and Mamba-2 papers in particular. We dig into the efficiency of the attention mechanism and its limitations in handling high-resolution perceptual modalities, and the strengths and weaknesses of transformer architectures relative to alternatives for various tasks. We dig into the role of tokenization and patching in transformer pipelines, emphasizing how abstraction and semantic relationships between tokens underpin the model's effectiveness, and explore how this relates to the debate between handcrafted pipelines versus end-to-end architectures in machine learning. Additionally, we touch on the evolving landscape of hybrid models which incorporate elements of attention and state, the significance of state update mechanisms in model adaptability and learning efficiency, and the contribution and adoption of state-space models like Mamba and Mamba-2 in academia and industry. Lastly, Albert shares his vision for advancing foundation models across diverse modalities and applications. The complete show notes for this episode can be found at https://twimlai.com/go/693.
2024-07-17
Link to episode

Decoding Animal Behavior to Train Robots with EgoPet with Amir Bar - #692

Today, we're joined by Amir Bar, a PhD candidate at Tel Aviv University and UC Berkeley to discuss his research on visual-based learning, including his recent paper, "EgoPet: Egomotion and Interaction Data from an Animal's Perspective." Amir shares his research projects focused on self-supervised object detection and analogy reasoning for general computer vision tasks. We also discuss the current limitations of caption-based datasets in model training, the "learning problem" in robotics, and the gap between the capabilities of animals and AI systems. Amir introduces "EgoPet," a dataset and benchmark tasks which allow motion and interaction data from an animal's perspective to be incorporated into machine learning models for robotic planning and proprioception. We explore the dataset collection process, comparisons with existing datasets and benchmark tasks, findings on the performance of models trained on EgoPet, and the potential of directly training robot policies that mimic animal behavior. The complete show notes for this episode can be found at https://twimlai.com/go/692.
2024-07-09
Link to episode

How Microsoft Scales Testing and Safety for Generative AI with Sarah Bird - #691

Today, we're joined by Sarah Bird, chief product officer of responsible AI at Microsoft. We discuss the testing and evaluation techniques Microsoft applies to ensure safe deployment and use of generative AI, large language models, and image generation. In our conversation, we explore the unique risks and challenges presented by generative AI, the balance between fairness and security concerns, the application of adaptive and layered defense strategies for rapid response to unforeseen AI behaviors, the importance of automated AI safety testing and evaluation alongside human judgment, and the implementation of red teaming and governance. Sarah also shares learnings from Microsoft's "Tay" and "Bing Chat" incidents along with her thoughts on the rapidly evolving GenAI landscape. The complete show notes for this episode can be found at https://twimlai.com/go/691.
2024-07-01
Link to episode

Long Context Language Models and their Biological Applications with Eric Nguyen - #690

Today, we're joined by Eric Nguyen, PhD student at Stanford University. In our conversation, we explore his research on long context foundation models and their application to biology, particularly Hyena and its evolution into the Hyena DNA and Evo models. We discuss Hyena, a convolutional-based language model developed to tackle the challenges posed by long context lengths in language modeling. We dig into the limitations of transformers in dealing with longer sequences, the motivation for using convolutional models over transformers, its model training and architecture, the role of FFT in computational optimizations, and model explainability in long-sequence convolutions. We also talk about Hyena DNA, a genomic foundation model pre-trained with context lengths of up to 1 million tokens, designed to capture long-range dependencies in DNA sequences. Finally, Eric introduces Evo, a 7 billion parameter hybrid model integrating attention layers with Hyena DNA's convolutional framework. We cover generating and designing DNA with language models, hallucinations in DNA models, evaluation benchmarks, the trade-offs between state-of-the-art models, zero-shot versus few-shot performance, and the exciting potential in areas like CRISPR-Cas gene editing. The complete show notes for this episode can be found at https://twimlai.com/go/690.
2024-06-25
Link to episode

Accelerating Sustainability with AI with Andres Ravinet - #689

Today, we're joined by Andres Ravinet, sustainability global black belt at Microsoft, to discuss the role of AI in sustainability. We explore real-world use cases where AI-driven solutions are leveraged to help tackle environmental and societal challenges, from early warning systems for extreme weather events to reducing food waste along the supply chain to conserving the Amazon rainforest. We cover the major threats that sustainability aims to address, the complexities in standardized sustainability compliance reporting, and the factors driving businesses to take a step toward sustainable practices. Lastly, Andres addresses the ways LLMs and generative AI can be applied towards the challenges of sustainability. The complete show notes for this episode can be found at https://twimlai.com/go/689.
2024-06-18
Link to episode

Gen AI at the Edge: Qualcomm AI Research at CVPR 2024 with Fatih Porikli - #688

Today we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research. In our conversation, we cover several of the Qualcomm team's 16 accepted main track and workshop papers at this year's CVPR conference. The papers span a variety of generative AI and traditional computer vision topics, with an emphasis on increased training and inference efficiency for mobile and edge deployment. We explore efficient diffusion models for text-to-image generation, grounded reasoning in videos using language models, real-time on-device 360° image generation for video portrait relighting, a unique video-language model for situated interactions like fitness coaching, a visual reasoning model and benchmark for interpreting complex mathematical plots, and more! We also touch on several of the demos the team will be presenting at the conference, including multi-modal vision-language models (LLaVA) and parameter-efficient fine-tuning (LoRA) on mobile phones. The complete show notes for this episode can be found at https://twimlai.com/go/688.
2024-06-11
Link to episode

Energy Star Ratings for AI Models with Sasha Luccioni - #687

Today, we're joined by Sasha Luccioni, AI and Climate lead at Hugging Face, to discuss the environmental impact of AI models. We dig into her recent research into the relative energy consumption of general purpose pre-trained models vs. task-specific, non-generative models for common AI tasks. We discuss the implications of the significant difference in efficiency and power consumption between the two types of models. Finally, we explore the complexities of energy efficiency and performance benchmarking, and talk through Sasha's recent initiative, Energy Star Ratings for AI Models, a rating system designed to help AI users select and deploy models based on their energy efficiency. The complete show notes for this episode can be found at http://twimlai.com/go/687.
2024-06-04
Link to episode

Language Understanding and LLMs with Christopher Manning - #686

Today, we're joined by Christopher Manning, the Thomas M. Siebel Professor in Machine Learning at Stanford University and a recent recipient of the 2024 IEEE John von Neumann Medal. In our conversation with Chris, we discuss his contributions to foundational research areas in NLP, including word embeddings and attention. We explore his perspectives on the intersection of linguistics and large language models, their ability to learn human language structures, and their potential to teach us about human language acquisition. We also dig into the concept of "intelligence" in language models, as well as the reasoning capabilities of LLMs. Finally, Chris shares his current research interests, alternative architectures he anticipates emerging beyond the LLM, and opportunities ahead in AI research. The complete show notes for this episode can be found at https://twimlai.com/go/686.
2024-05-27
Link to episode

Chronos: Learning the Language of Time Series with Abdul Fatir Ansari - #685

Today we're joined by Abdul Fatir Ansari, a machine learning scientist at AWS AI Labs in Berlin, to discuss his paper, "Chronos: Learning the Language of Time Series." Fatir explains the challenges of leveraging pre-trained language models for time series forecasting. We explore the advantages of Chronos over statistical models, as well as its promising results in zero-shot forecasting benchmarks. Finally, we address critiques of Chronos, the ongoing research to improve synthetic data quality, and the potential for integrating Chronos into production systems. The complete show notes for this episode can be found at twimlai.com/go/685.
2024-05-20
Link to episode

Powering AI with the World's Largest Computer Chip with Joel Hestness - #684

Today we're joined by Joel Hestness, principal research scientist and lead of the core machine learning team at Cerebras. We discuss Cerebras' custom silicon for machine learning, the Wafer Scale Engine 3, and how the latest version of the company's single-chip platform for ML has evolved to support large language models. Joel shares how WSE3 differs from other AI hardware solutions, such as GPUs, TPUs, and AWS's Inferentia, and talks through the homogeneous design of the WSE chip and its memory architecture. We discuss software support for the platform, including support by open source ML frameworks like PyTorch, and support for different types of transformer-based models. Finally, Joel shares some of the research his team is pursuing to take advantage of the hardware's unique characteristics, including weight-sparse training, optimizers that leverage higher-order statistics, and more. The complete show notes for this episode can be found at twimlai.com/go/684.
2024-05-13
Link to episode

AI for Power & Energy with Laurent Boinot - #683

Today we're joined by Laurent Boinot, power and utilities lead for the Americas at Microsoft, to discuss the intersection of AI and energy infrastructure. We discuss the many challenges faced by current power systems in North America and the role AI is beginning to play in driving efficiencies in areas like demand forecasting and grid optimization. Laurent shares a variety of examples along the way, including some of the ways utility companies are using AI to ensure secure systems, interact with customers, navigate internal knowledge bases, and design electrical transmission systems. We also discuss the future of nuclear power, and why electric vehicles might play a critical role in American energy management. The complete show notes for this episode can be found at twimlai.com/go/683.
2024-05-07
Link to episode

Controlling Fusion Reactor Instability with Deep Reinforcement Learning with Aza Jalalvand - #682

Today we're joined by Azarakhsh (Aza) Jalalvand, a research scholar at Princeton University, to discuss his work using deep reinforcement learning to control plasma instabilities in nuclear fusion reactors. Aza explains how his team developed a model to detect and avoid a fatal plasma instability called "tearing mode." Aza walks us through the process of collecting and pre-processing the complex diagnostic data from fusion experiments, training the models, and deploying the controller algorithm on the DIII-D fusion research reactor. He shares insights from developing the controller and discusses the future challenges and opportunities for AI in enabling stable and efficient fusion energy production. The complete show notes for this episode can be found at twimlai.com/go/682.
2024-04-29
Link to episode

GraphRAG: Knowledge Graphs for AI Applications with Kirk Marple - #681

Today we're joined by Kirk Marple, CEO and founder of Graphlit, to explore the emerging paradigm of "GraphRAG," or Graph Retrieval Augmented Generation. In our conversation, Kirk digs into the GraphRAG architecture and how Graphlit uses it to offer a multi-stage workflow for ingesting, processing, retrieving, and generating content using LLMs (like GPT-4) and other Generative AI tech. He shares how the system performs entity extraction to build a knowledge graph and how graph, vector, and object storage are integrated in the system. We dive into how the system uses "prompt compilation" to improve the results it gets from Large Language Models during generation. We conclude by discussing several use cases the approach supports, as well as future agent-based applications it enables. The complete show notes for this episode can be found at twimlai.com/go/681.
2024-04-22
Link to episode

Teaching Large Language Models to Reason with Reinforcement Learning with Alex Havrilla - #680

Today we're joined by Alex Havrilla, a PhD student at Georgia Tech, to discuss "Teaching Large Language Models to Reason with Reinforcement Learning." Alex discusses the role of creativity and exploration in problem solving and explores the opportunities presented by applying reinforcement learning algorithms to the challenge of improving reasoning in large language models. Alex also shares his research on the effect of noise on language model training, highlighting the robustness of LLM architecture. Finally, we delve into the future of RL, and the potential of combining language models with traditional methods to achieve more robust AI reasoning. The complete show notes for this episode can be found at twimlai.com/go/680.
2024-04-17
Link to episode

Localizing and Editing Knowledge in LLMs with Peter Hase - #679

Today we're joined by Peter Hase, a fifth-year PhD student at the University of North Carolina NLP lab. We discuss "scalable oversight" and the importance of developing a deeper understanding of how large neural networks make decisions. We learn how matrices are probed by interpretability researchers, and explore the two schools of thought regarding how LLMs store knowledge. Finally, we discuss the importance of deleting sensitive information from model weights, and how "easy-to-hard generalization" could increase the risk of releasing open-source foundation models. The complete show notes for this episode can be found at twimlai.com/go/679.
2024-04-08
Link to episode

Coercing LLMs to Do and Reveal (Almost) Anything with Jonas Geiping - #678

Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks. The complete show notes for this episode can be found at twimlai.com/go/678.
2024-04-01
Link to episode

V-JEPA, AI Reasoning from a Non-Generative Architecture with Mido Assran - #677

Today we're joined by Mido Assran, a research scientist at Meta's Fundamental AI Research (FAIR). In this conversation, we discuss V-JEPA, a new model being billed as "the next step in Yann LeCun's vision" for true artificial reasoning. V-JEPA, the video version of Meta's Joint Embedding Predictive Architecture, aims to bridge the gap between human and machine intelligence by training models to learn abstract concepts in a more efficient predictive manner than generative models. V-JEPA uses a novel self-supervised training approach that allows it to learn from unlabeled video data without being distracted by pixel-level detail. Mido walks us through the process of developing the architecture and explains why it has the potential to revolutionize AI. The complete show notes for this episode can be found at twimlai.com/go/677.
2024-03-25
Link to episode

Video as a Universal Interface for AI Reasoning with Sherry Yang - #676

Today we're joined by Sherry Yang, senior research scientist at Google DeepMind and a PhD student at UC Berkeley. In this interview, we discuss her new paper, "Video as the New Language for Real-World Decision Making," which explores how generative video models can play a role similar to language models as a way to solve tasks in the real world. Sherry draws the analogy between natural language as a unified representation of information and text prediction as a common task interface and demonstrates how video as a medium and generative video as a task exhibit similar properties. This formulation enables video generation models to play a variety of real-world roles as planners, agents, compute engines, and environment simulators. Finally, we explore UniSim, an interactive demo of Sherry's work and a preview of her vision for interacting with AI-generated environments. The complete show notes for this episode can be found at twimlai.com/go/676.
2024-03-18
Link to episode

Assessing the Risks of Open AI Models with Sayash Kapoor - #675

Today we're joined by Sayash Kapoor, a Ph.D. student in the Department of Computer Science at Princeton University. Sayash walks us through his paper: "On the Societal Impact of Open Foundation Models." We dig into the controversy around AI safety, the risks and benefits of releasing open model weights, and how we can establish common ground for assessing the threats posed by AI. We discuss the application of the framework presented in the paper to specific risks, such as the biosecurity risk of open LLMs, as well as the growing problem of "non-consensual intimate imagery" using open diffusion models. The complete show notes for this episode can be found at twimlai.com/go/675.
2024-03-11
Link to episode

OLMo: Everything You Need to Train an Open Source LLM with Akshita Bhagia - #674

Today we're joined by Akshita Bhagia, a senior research engineer at the Allen Institute for AI. Akshita joins us to discuss OLMo, a new open source language model with 7 billion and 1 billion parameter variants, but with a key difference compared to similar models offered by Meta, Mistral, and others: AI2 has also published the dataset and key tools used to train the model. In our chat with Akshita, we dig into the OLMo models and the various projects falling under the OLMo umbrella, including Dolma, an open three-trillion-token corpus for language model pretraining, and Paloma, a benchmark and tooling for evaluating language model performance across a variety of domains. The complete show notes for this episode can be found at twimlai.com/go/674.
2024-03-04
Link to episode

Training Data Locality and Chain-of-Thought Reasoning in LLMs with Ben Prystawski - #673

Today we're joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben's recent paper, "Why think step by step? Reasoning emerges from the locality of experience," which he recently presented at NeurIPS 2023. In this conversation, we start out exploring basic questions about LLM reasoning, including whether it exists, how we can define it, and how techniques like chain-of-thought reasoning appear to strengthen it. We then dig into the details of Ben's paper, which aims to understand why thinking step-by-step is effective and demonstrates that local structure is the key property of LLM training data that enables it. The complete show notes for this episode can be found at twimlai.com/go/673.
2024-02-26
Link to episode

Reasoning Over Complex Documents with DocLLM with Armineh Nourbakhsh - #672

Today we're joined by Armineh Nourbakhsh of JP Morgan AI Research to discuss the development and capabilities of DocLLM, a layout-aware large language model for multimodal document understanding. Armineh provides a historical overview of the challenges of document AI and an introduction to the DocLLM model. Armineh explains how this model, distinct from both traditional LLMs and document AI models, incorporates both textual semantics and spatial layout in processing enterprise documents like reports and complex contracts. We dig into her team's approach to training DocLLM, their choice of a generative model as opposed to an encoder-based approach, the datasets they used to build the model, their approach to incorporating layout information, and the various ways they evaluated the model's performance. The complete show notes for this episode can be found at twimlai.com/go/672.
2024-02-19
Link to episode

Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671

Today we're joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into his paper, "Are Emergent Abilities of Large Language Models a Mirage?". We discuss the different ways LLMs are evaluated and the excitement surrounding their "emergent abilities," such as the ability to perform arithmetic. Sanmi describes how evaluating model performance using nonlinear metrics (such as exact-match accuracy) can lead to the illusion that the model is rapidly gaining new capabilities, whereas linear metrics show smooth improvement as expected, casting doubt on the significance of emergence. We continue on to his next paper, "DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models," discussing the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs. The complete show notes for this episode can be found at twimlai.com/go/671.
2024-02-12
Link to episode

AI Trends 2024: Reinforcement Learning in the Age of LLMs with Kamyar Azizzadenesheli - #670

Today we're joined by Kamyar Azizzadenesheli, a staff researcher at Nvidia, to continue our AI Trends 2024 series. In our conversation, Kamyar updates us on the latest developments in reinforcement learning (RL), and how the RL community is taking advantage of the abstract reasoning abilities of large language models (LLMs). Kamyar shares his insights on how LLMs are pushing RL performance forward in a variety of applications, such as ALOHA, a robot that can learn to fold clothes, and Voyager, an RL agent that uses GPT-4 to outperform prior systems at playing Minecraft. We also explore the progress being made in assessing and addressing the risks of RL-based decision-making in domains such as finance, healthcare, and agriculture. Finally, we discuss the future of deep reinforcement learning, Kamyar's top predictions for the field, and how greater compute capabilities will be critical in achieving general intelligence. The complete show notes for this episode can be found at twimlai.com/go/670.
2024-02-05
Link to episode

Building and Deploying Real-World RAG Applications with Ram Sriharsha - #669

Today we're joined by Ram Sriharsha, VP of engineering at Pinecone. In our conversation, we dive into the topic of vector databases and retrieval augmented generation (RAG). We explore the trade-offs between relying solely on LLMs for retrieval tasks versus combining retrieval in vector databases and LLMs, the advantages and complexities of RAG with vector databases, the key considerations for building and deploying real-world RAG-based applications, and an in-depth look at Pinecone's new serverless offering. Currently in public preview, Pinecone Serverless is a vector database that enables on-demand data loading, flexible scaling, and cost-effective query processing. Ram discusses how the serverless paradigm impacts the vector database's core architecture, key features, and other considerations. Lastly, Ram shares his perspective on the future of vector databases in helping enterprises deliver RAG systems. The complete show notes for this episode can be found at twimlai.com/go/669.
2024-01-29
Link to episode

Nightshade: Data Poisoning to Fight Generative AI with Ben Zhao - #668

Today we're joined by Ben Zhao, a Neubauer professor of computer science at the University of Chicago. In our conversation, we explore his research at the intersection of security and generative AI. We focus on Ben's recent Fawkes, Glaze, and Nightshade projects, which use "poisoning" approaches to provide users with security and protection against AI encroachments. The first tool we discuss, Fawkes, imperceptibly "cloaks" images in such a way that models perceive them as highly distorted, effectively shielding individuals from recognition by facial recognition models. We then dig into Glaze, a tool that employs machine learning algorithms to compute subtle alterations that are indiscernible to human eyes but adept at tricking the models into perceiving a significant shift in art style, giving artists a unique defense against style mimicry. Lastly, we cover Nightshade, a strategic defense tool for artists akin to a "poison pill" which allows artists to apply imperceptible changes to their images that effectively "break" generative AI models that are trained on them. The complete show notes for this episode can be found at twimlai.com/go/668.
2024-01-22
Link to episode

Learning Transformer Programs with Dan Friedman - #667

Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, "Learning Transformer Programs." The LTP paper proposes modifications to the transformer architecture which allow transformer models to be easily converted into human-readable programs, making them inherently interpretable. We compare the approach proposed by this research with prior approaches to understanding the models and their shortcomings. We also dig into the approach's limitations and constraints around functionality and scale. The complete show notes for this episode can be found at twimlai.com/go/667.
2024-01-15
Link to episode

AI Trends 2024: Machine Learning & Deep Learning with Thomas Dietterich - #666

Today we continue our AI Trends 2024 series with a conversation with Thomas Dietterich, distinguished professor emeritus at Oregon State University. As you might expect, Large Language Models figured prominently in our conversation, and we covered a vast array of papers and use cases exploring current research into topics such as monolithic vs. modular architectures, hallucinations, the application of uncertainty quantification (UQ), and using RAG as a sort of memory module for LLMs. Lastly, don't miss Tom's predictions on what he foresees happening this year as well as his words of encouragement for those new to the field. The complete show notes for this episode can be found at twimlai.com/go/666.
2024-01-08
Link to episode

AI Trends 2024: Computer Vision with Naila Murray - #665

Today we kick off our AI Trends 2024 series with a conversation with Naila Murray, director of AI research at Meta. In our conversation with Naila, we dig into the latest trends and developments in the realm of computer vision. We explore advancements in the areas of controllable generation, visual programming, 3D Gaussian splatting, and multimodal models, specifically vision plus LLMs. We discuss tools and open source projects, including Segment Anything, a tool for versatile zero-shot image segmentation using simple text prompts, clicks, and bounding boxes; ControlNet, which adds conditional control to Stable Diffusion models; and DINOv2, a visual encoding model enabling object recognition, segmentation, and depth estimation, even in data-scarce scenarios. Finally, Naila shares her view on the most exciting opportunities in the field, as well as her predictions for upcoming years. The complete show notes for this episode can be found at twimlai.com/go/665.
2024-01-02
Link to episode

Are Vector DBs the Future Data Platform for AI? with Ed Anuff - #664

Today we're joined by Ed Anuff, chief product officer at DataStax. In our conversation, we discuss Ed's insights on RAG, vector databases, embedding models, and more. We dig into the underpinnings of modern vector databases (like HNSW and DiskANN) that allow them to efficiently handle massive and unstructured data sets, and discuss how they help users serve up relevant results for RAG, AI assistants, and other use cases. We also discuss embedding models and their role in vector comparisons and database retrieval as well as the potential for GPU usage to enhance vector database performance. The complete show notes for this episode can be found at twimlai.com/go/664.
2023-12-28
Link to episode