Good podcast

Practical AI: Machine Learning, Data Science, LLM

Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!

Subscribe

iTunes / Overcast / RSS

Website

changelog.com/practicalai

Episodes

Mozart to Megadeth at CHRP

Daniel and Chris groove with Jeff Smith, Founder and CEO at CHRP.ai. Jeff describes how CHRP anonymously analyzes emotional wellness data, derived from employees' music preferences, giving HR leaders actionable insights to improve productivity, retention, and overall morale. By monitoring key trends and identifying shifts in emotional health across teams, CHRP.ai enables proactive decisions to ensure employees feel supported and engaged.
2024-12-19
Link to episode

Sidekick is an AI Shopify expert

Today, Chris explores Shopify Magic and other AI offerings with Mike Tamir, Distinguished ML Engineer and Head of Machine Learning, and Matt Colyer, Director of Product Management for Sidekick. They talk about how Shopify uses generative AI and LLMs to enhance their products, and they take a deeper dive into Sidekick, a first-of-its-kind, AI-enabled commerce assistant that understands a merchant's business (products, orders, customers) and has been trained to know all about Shopify.
2024-12-11
Link to episode

Full-duplex, real-time dialogue with Kyutai

Kyutai, an open science research lab, made headlines over the summer when they released their real-time speech-to-speech AI assistant (beating OpenAI to market with their teased GPT-driven speech-to-speech functionality). Alex from Kyutai joins us in this episode to discuss the research lab, their recent Moshi models, and what might be coming next from the lab. Along the way we discuss small models and the AI ecosystem in France.
2024-12-04
Link to episode

Clones, commerce & campaigns

Chris and Daniel dive into what Trump's impending second term could mean for AI companies, model developers, and regulators, unpacking the potential shifts in policy and innovation. Next, they discuss the latest models, like Qwen, that blur the performance gap between open and closed systems. Finally, they explore new AI tools for meeting clones and AI-driven commerce, sparking a conversation about the balance between digital convenience and fostering genuine human connections.
2024-11-29
Link to episode

scikit-learn & data science you own

We are at GenAI saturation, so let's talk about scikit-learn, a longtime favorite for data scientists building classifiers, time series analyzers, dimensionality reducers, and more! Scikit-learn is deployed across industry and drives a significant portion of the "AI" that is actually in production. :probabl is a new kind of company that is stewarding this project along with a variety of other open source projects. Yann Lechelle and Guillaume Lemaitre share some of the vision behind the company and talk about the future of scikit-learn!
2024-11-19
Link to episode

Creating tested, reliable AI applications

It can be frustrating to get an AI application working amazingly well 80% of the time and failing miserably the other 20%. How can you close the gap and create something you can rely on? Chris and Daniel talk through this process, behavior testing, and the flow from prototype to production in this episode. They also talk a bit about the apparent slowdown in the release of frontier models.
2024-11-13
Link to episode

AI is changing the cybersecurity threat landscape

This week, Chris is joined by Gregory Richardson, Vice President and Global Advisory CISO at BlackBerry, and Ismael Valenzuela, Vice President of Threat Research & Intelligence at BlackBerry. They address how AI is changing the threat landscape and why human defenders remain a key part of our cyber defenses, and they explain the AI standoff between cyber threat actors and cyber defenders.
2024-11-05
Link to episode

The path towards trustworthy AI

Elham Tabassi, the Chief AI Advisor at the U.S. National Institute of Standards & Technology (NIST), joins Chris for an enlightening discussion about the path towards trustworthy AI. Together they explore NIST's 'AI Risk Management Framework' (AI RMF) within the context of the White House's 'Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence'.
2024-10-29
Link to episode

Big data is dead, analytics is alive

We are on the other side of "big data" hype, but what is the future of analytics and how does AI fit in? Till and Adithya from MotherDuck join us to discuss why DuckDB is taking the analytics and AI world by storm. We dive into what makes DuckDB, a free, in-process SQL OLAP database management system, unique, including its ability to execute lightning-fast analytics queries against a variety of data sources, even on your laptop! Along the way we dig into the intersections with AI, such as text-to-SQL, vector search, and AI-driven SQL query correction.
2024-10-24
Link to episode

Practical workflow orchestration

Workflow orchestration has always been a pain for data scientists, but this is exacerbated in these AI hype days by agentic workflows executing arbitrary (not pre-defined) workflows with a variety of failure modes. Adam from Prefect joins us to talk through their open source Python library for orchestration and visibility into Python-based pipelines. Along the way, he introduces us to things like Marvin, their AI engineering framework, and ControlFlow, their agent workflow system.
2024-10-15
Link to episode

Towards high-quality (maybe synthetic) datasets

As Argilla puts it: "Data quality is what makes or breaks AI." However, what exactly does this mean, and how can AI teams properly collaborate with domain experts towards improved data quality? David Berenstein & Ben Burtenshaw, who are building Argilla & Distilabel at Hugging Face, join us to dig into these topics along with synthetic data generation & AI-generated labeling / feedback.
2024-10-09
Link to episode

Understanding what's possible, doable & scalable

We are constantly hearing about disillusionment as it relates to AI. Some of that is probably valid, but Mike Lewis, an AI architect from Cincinnati, has proven that he can consistently get LLM and GenAI apps to the point of real enterprise value (even with the Big Cos of the world). In this episode, Mike joins us to share some stories from the AI trenches & highlight what it takes (practically) to show what is possible, doable & scalable with AI.
2024-10-03
Link to episode

GraphRAG (beyond the hype)

Seems like we are hearing a lot about GraphRAG these days, but there are lots of questions: what is it, is it hype, what is practical? One of our all-time favorite podcast friends, Prashanth Rao, joins us to dig into this topic beyond the hype. Prashanth gives us a bit of background and practical use cases for GraphRAG and graph data.
2024-09-25
Link to episode

Pausing to think about scikit-learn & OpenAI o1

Recently the company stewarding the open source library scikit-learn announced their seed funding. Also, OpenAI released "o1" with new behavior in which it pauses to "think" about complex tasks. Chris and Daniel take some time to do their own thinking about o1 and the contrast to the scikit-learn ecosystem, which has the goal to promote "data science that you own."
2024-09-17
Link to episode

Cybersecurity in the GenAI age

Dinis Cruz drops by to chat about cybersecurity for generative AI and large language models. In addition to discussing The Cyber Boardroom, Dinis also delves into cybersecurity efforts at OWASP and that organization's Top 10 for LLMs and Generative AI Apps.
2024-09-11
Link to episode

AI is more than GenAI

GenAI is often what people think of when someone mentions AI. However, AI is much more. In this episode, Daniel breaks down a history of developments in data science, machine learning, AI, and GenAI to give listeners a better mental model. Don't miss this one if you want to understand the AI ecosystem holistically and how models, embeddings, data, prompts, etc. all fit together.
2024-09-05
Link to episode

Metrics Driven Development

How do you systematically measure, optimize, and improve the performance of LLM applications (like those powered by RAG or tool use)? Ragas is an open source effort that has been trying to answer this question comprehensively, and they are promoting a "Metrics Driven Development" approach. Shahul from Ragas joins us to discuss Ragas in this episode, and we dig into specific metrics, the difference between benchmarking models and evaluating LLM apps, generating synthetic test data and more.
2024-08-29
Link to episode

Threat modeling LLM apps

If you have questions at the intersection of Cybersecurity and AI, you need to know Donato at WithSecure! Donato has been threat modeling AI applications and seriously applying those models in his day-to-day work. He joins us in this episode to discuss his LLM application security canvas, prompt injections, alignment, and more.
2024-08-22
Link to episode

Only as good as the data

You might have heard that "AI is only as good as the data." What does that mean, and what data are we talking about? Chris and Daniel dig into that topic in this episode, exploring the categories of data you might encounter when working in AI (for training, testing, fine-tuning, benchmarks, etc.). They also discuss the latest developments in AI regulation with the EU's AI Act coming into force.
2024-08-14
Link to episode

Gaudi processors & Intel's AI portfolio

There is increasing desire for, and effort towards, GPU alternatives for AI workloads and the ability to run GenAI models on CPUs. Ben and Greg from Intel join us in this episode to help us understand Intel's strategy as it relates to AI, along with related projects, hardware, and developer communities. We dig into Intel's Gaudi processors, open source collaborations with Hugging Face, and AI on CPU/Xeon processors.
2024-08-07
Link to episode

Broccoli AI at its best

We discussed "Broccoli AI" a couple of weeks ago, which is the kind of AI that is actually good/healthy for a real-world business. Bengsoon Chuah, a data scientist working in the energy sector, joins us to discuss developing and deploying NLP pipelines in that environment. We talk about good/healthy ways of introducing AI in a company that uses on-prem infrastructure, has few data science professionals, and operates in high-risk environments.
2024-07-31
Link to episode

Hyperventilating over the Gartner AI Hype Cycle

This week Daniel & Chris hang with repeat guest and good friend Demetrios Brinkmann of the MLOps Community. Together they review, debate, and poke fun at the 2024 Gartner Hype Cycle chart for Artificial Intelligence. You are invited to join them in this light-hearted fun conversation about the state of hype in artificial intelligence.
2024-07-24
Link to episode

The first real-time voice assistant

In the midst of the demos & discussion about OpenAI's GPT-4o voice assistant, Kyutai swooped in to release the *first* real-time AI voice assistant model and a pretty slick demo (Moshi). Chris & Daniel discuss what this more open approach to a voice assistant might catalyze. They also discuss recent changes to Gartner's ranking of GenAI on their hype cycle.
2024-07-18
Link to episode

Vectoring in on Pinecone

Daniel & Chris explore the advantages of vector databases with Roie Schwaber-Cohen of Pinecone. Roie starts with a very lucid explanation of why you need a vector database in your machine learning pipeline, and then goes on to discuss Pinecone's vector database, designed to facilitate efficient storage, retrieval, and management of vector data.
2024-07-10
Link to episode

Stanford's AI Index Report 2024

We've had representatives from Stanford's Institute for Human-Centered Artificial Intelligence (HAI) on the show in the past, but we were super excited to talk through their 2024 AI Index Report after such a crazy year in AI! Nestor from HAI joins us in this episode to talk about some of the main takeaways, including how AI makes workers more productive, how the US is sharply increasing regulations, and how industry continues to dominate frontier AI research.
2024-07-02
Link to episode

Apple Intelligence & Advanced RAG

Daniel & Chris engage in an impromptu discussion of the state of AI in the enterprise. Then they dive into the recent _Apple Intelligence_ announcement to explore its implications. Finally, Daniel leads a deep dive into a new topic - Advanced RAG - covering everything you need to know to be practical & productive.
2024-06-25
Link to episode

The perplexities of information retrieval

Daniel & Chris sit down with Denis Yarats, Co-founder & CTO at Perplexity, to discuss Perplexity's sophisticated AI-driven answer engine. Denis outlines some of the deficiencies in search engines, and how Perplexity's approach to information retrieval improves on traditional search engine systems, with a focus on accuracy and validation of the information provided.
2024-06-19
Link to episode

Using edge models to find sensitive data

We've all heard about breaches of privacy and leaks of protected health information (PHI). For healthcare providers and those storing this data, knowing where all the sensitive data is stored is non-trivial. Ramin, from Tausight, joins us to discuss how they have deployed edge AI models to help companies search through billions of records for PHI.
2024-06-13
Link to episode

Rise of the AI PC & local LLMs

We've seen a rise in interest recently and a number of major announcements related to local LLMs and AI PCs. NVIDIA, Apple, and Intel are getting into this along with models like the Phi family from Microsoft. In this episode, we dig into local AI tooling, frameworks, and optimizations to help you navigate this AI niche, and we talk about how this might impact AI adoption in the longer term.
2024-06-04
Link to episode

AI in the U.S. Congress

At the age of 72, U.S. Representative Don Beyer of Virginia enrolled at GMU to pursue a Master's degree in C.S. with a concentration in Machine Learning. Rep. Beyer is Vice Chair of the bipartisan Artificial Intelligence Caucus & Vice Chair of the NDC's AI Working Group. He is the author of the AI Foundation Model Transparency Act & a lead cosponsor of the CREATE AI Act, the Federal Artificial Intelligence Risk Management Act & the Artificial Intelligence Environmental Impacts Act. We hope you tune into this inspiring, nonpartisan conversation with Rep. Beyer about his decision to dive into the deep end of the AI pool & his leadership in bringing that expertise to Capitol Hill.
2024-05-29
Link to episode

First impressions of GPT-4o

Daniel & Chris share their first impressions of OpenAI's newest LLM, GPT-4o, and Daniel tries to bring the model into the conversation with humorously mixed results. Together, they explore the implications of Omni's new feature set - the speed, the voice interface, and the new multimodal capabilities.
2024-05-22
Link to episode

Full-stack approach for effective AI agents

There's a lot of hype about AI agents right now, but developing robust agents isn't yet a reality in general. Imbue is leading the way towards more robust agents by taking a full-stack approach, from hardware innovations through to user interface. In this episode, Josh, Imbue's CTO, tells us more about their approach and some of what they have learned along the way.
2024-05-15
Link to episode

Autonomous fighter jets?!

Yep, you heard that right. Autonomous fighter jets are in the news. Chris and Daniel discuss a modified F-16 known as the X-62A VISTA and autonomous vehicles/systems more generally. They also comment on the Linux Foundation's new Open Platform for Enterprise AI.
2024-05-08
Link to episode

Private, open source chat UIs

We recently gathered some Practical AI listeners for a live webinar with Danny from LibreChat to discuss the future of private, open source chat UIs. During the discussion we hear about the motivations behind LibreChat, why enterprise users are hosting their own chat UIs, and how Danny (and the LibreChat community) is creating amazing features (like RAG and plugins).
2024-04-30
Link to episode

Mamba & Jamba

First there was Mamba... now there is Jamba from AI21. This is a model that combines the best non-transformer goodness of Mamba with good ol' attention layers. This results in a highly performant and efficient model that AI21 has open sourced! We hear all about it (along with a variety of other LLM things) from AI21's co-founder Yoav.
2024-04-24
Link to episode

Udio & the age of multi-modal AI

2024 promises to be the year of multi-modal AI, and we are already seeing some amazing things. In this "fully connected" episode, Chris and Daniel explore the new Udio product/service for generating music. Then they dig into the differences between recent multi-modal efforts and more "traditional" ways of combining data modalities.
2024-04-16
Link to episode

RAG continues to rise

Daniel & Chris delight in conversation with "the funniest guy in AI", Demetrios Brinkmann. Together they explore the results of the MLOps Community's latest survey. They also preview the upcoming AI Quality Conference.
2024-04-10
Link to episode

Should kids still learn to code?

In this fully connected episode, Daniel & Chris discuss NVIDIA GTC keynote comments from CEO Jensen Huang about teaching kids to code. Then they dive into the notion of "community" in the AI world, before discussing challenges in the adoption of generative AI by non-technical people. They finish by addressing the evolving balance between generative AI interfaces and search engines.
2024-04-02
Link to episode

AI vs software devs

Daniel and Chris are out this week, so we're bringing you conversations all about AI's complicated relationship to software developers from other Changelog pods: JS Party, Go Time & The Changelog.
2024-03-26
Link to episode

Prompting the future

Daniel & Chris explore the state of the art in prompt engineering with Jared Zoneraich, the founder of PromptLayer. PromptLayer is the first platform built specifically for prompt engineering. It can visually manage prompts, evaluate models, log LLM requests, search usage history, and help your organization collaborate as a team. Jared provides expert guidance on how to implement prompt engineering, but he also illustrates how we got here and where we're likely to go next.
2024-03-20
Link to episode

Generating the future of art & entertainment

Runway is an applied AI research company shaping the next era of art, entertainment & human creativity. Chris sat down with Runway co-founder / CTO, Anastasis Germanidis, to discuss the company's rise and how it's defining the future of the creative landscape with its text & image to video models. We hope you find Anastasis's founder story as inspiring as Chris did.
2024-03-12
Link to episode

YOLOv9: Computer vision is alive and well

While everyone is super hyped about generative AI, computer vision researchers have been working in the background on significant advancements in deep learning architectures. YOLOv9 was just released with some noteworthy advancements relevant to parameter-efficient models. In this episode, Chris and Daniel dig into the details and also discuss advancements in parameter-efficient LLMs, such as Microsoft's 1-bit LLMs and Qualcomm's new AI Hub.
2024-03-06
Link to episode

Representation Engineering (Activation Hacking)

Recently, we briefly mentioned the concept of "Activation Hacking" in the episode with Karan from Nous Research. In this fully connected episode, Chris and Daniel dive into the details of this model control mechanism, also called "representation engineering". Of course, they also take time to discuss the new Sora model from OpenAI.
2024-02-28
Link to episode

Leading the charge on AI in National Security

Chris & Daniel explore AI in national security with Lt. General Jack Shanahan (USAF, Ret.). The conversation reflects Jack's unique background as the only senior U.S. military officer responsible for standing up and leading two organizations in the United States Department of Defense (DoD) dedicated to fielding artificial intelligence capabilities: Project Maven and the DoD Joint AI Center (JAIC). Together, Jack, Daniel & Chris dive into the fascinating details of Jack's recent written testimony to the U.S. Senate's AI Insight Forum on National Security, in which he provides the U.S. government with thoughtful guidance on how to achieve the best path forward with artificial intelligence.
2024-02-20
Link to episode

Gemini vs OpenAI

Google has been releasing a ton of new GenAI functionality under the name "Gemini", and they've officially rebranded Bard as Gemini. We take some time to talk through Gemini compared with offerings from OpenAI, Anthropic, Cohere, etc. We also discuss the recent FCC decision to ban the use of AI voices in robocalls and what the decision might mean for government involvement in AI in 2024.
2024-02-14
Link to episode

Data synthesis for SOTA LLMs

Nous Research has been pumping out some of the best open access LLMs using SOTA data synthesis techniques. Their Hermes family of models is incredibly popular! In this episode, Karan from Nous talks about the origins of Nous as a distributed collective of LLM researchers. We also get into fine-tuning strategies and why data synthesis works so well.
2024-02-06
Link to episode

Large Action Models (LAMs) & Rabbits

Recently the release of the rabbit r1 device resulted in huge interest in both the device and "Large Action Models" (or LAMs). What is an LAM? Is this something new? Did these models come out of nowhere, or are they related to other things we are already using? Chris and Daniel dig into LAMs in this episode and discuss neuro-symbolic AI, AI tool usage, multimodal models, and more.
2024-01-30
Link to episode

Collaboration & evaluation for LLM apps

Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the confusion around proper evaluation of LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been diving into these problems, and, in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
2024-01-23
Link to episode

Advent of GenAI Hackathon recap

Recently, Intel's Liftoff program for startups and Prediction Guard hosted the first-ever "Advent of GenAI" hackathon. 2,000 people from all around the world participated in GenAI-related challenges over 7 days. In this episode, we discuss the hackathon, some of the creative solutions, the idea behind it, and more.
2024-01-17
Link to episode

AI predictions for 2024

We scoured the internet to find all the AI related predictions for 2024 (at least from people that might know what they are talking about), and, in this episode, we talk about some of the common themes. We also take a moment to look back at 2023 commenting with some distance on a crazy AI year.
2024-01-10
Link to episode
A tiny webapp by I'm With Friends.
Updated daily with data from Apple Podcasts.