Today we're joined by Penousal Machado, Associate Professor and Head of the Computational Design and Visualization Lab in the Center for Informatics at the University of Coimbra.
In our conversation with Penousal, we explore his research in Evolutionary Computation, and how that work coincides with his passion for images and graphics. We also discuss the link between creativity and humanity, and have an interesting sidebar about the philosophy of Sci-Fi in popular culture.
Finally, we dig into Penousal's evolutionary machine learning research, primarily in the context of the evolution of various animal species' mating habits and practices.
The complete show notes for this episode can be found at twimlai.com/go/459.
Today we're joined by Arul Menezes, a Distinguished Engineer at Microsoft.
Arul, a 30-year veteran of Microsoft, manages machine translation research and products in the Azure Cognitive Services group. In our conversation, we explore the historical evolution of machine translation, including breakthroughs in seq2seq and the emergence of transformer models.
We also discuss how they're using multilingual transfer learning and combining what they've learned in translation with pre-trained language models like BERT. Finally, we explore what they're doing to achieve domain-specific improvements in their models, and what excites Arul about the translation architecture going forward.
The complete show notes for this series can be found at twimlai.com/go/458.
Today we're joined by Luna Dong, Sr. Principal Scientist at Amazon.
In our conversation with Luna, we explore Amazon's expansive product knowledge graph, and the various roles that machine learning plays throughout it. We also talk through the differences and synergies between the media and retail product knowledge graph use cases and how ML comes into play in search and recommendation use cases. Finally, we explore the similarities to relational databases and efforts to standardize product knowledge graphs across the company and more broadly in the research community.
The complete show notes for this episode can be found at https://twimlai.com/go/457.
Today we're joined by Sarah Brown, an Assistant Professor of Computer Science at the University of Rhode Island.
In our conversation with Sarah, whose research focuses on fairness in AI, we discuss why a "systems-level" approach is necessary when thinking about ethical and fairness issues in models and algorithms. We also explore Wiggum, a fairness forensics tool that surfaces bias and allows for regular auditing of data, as well as her ongoing collaboration with a social psychologist to explore how people perceive ethics and fairness.
Finally, we talk through the role of tools in assessing fairness and bias, and the importance of understanding the decisions the tools are making.
The complete show notes can be found at twimlai.com/go/456.
Today we're joined by Andrew Trister, Deputy Director for Digital Health Innovation at the Bill & Melinda Gates Foundation.
In our conversation with Andrew, we explore some of the AI use cases at the foundation, with the goal of bringing "community-based" healthcare to underserved populations in the global south. We focus on the COVID-19 response, improving the accuracy of malaria testing with a Bayesian framework, and a few other projects, as well as challenges like scaling these systems and building out infrastructure so that communities can begin to support themselves.
We also touch on Andrew's previous work at Apple, where he helped develop what is now known as ResearchKit, the ML-for-health tools now seen in Apple devices like phones and watches.
The complete show notes for this episode can be found at https://twimlai.com/go/455.
Today we're joined by Drago Anguelov, Distinguished Scientist and Head of Research at Waymo.
In our conversation, we explore the state of the autonomous vehicle space broadly and at Waymo, including how AVs have improved in the last few years, their focus on Level 4 driving, and Drago's thoughts on the direction of the industry going forward. Drago breaks down their core ML use cases, Perception, Prediction, Planning, and Simulation, and how their work has led to a fully autonomous vehicle being deployed in Phoenix.
We also discuss the socioeconomic and environmental impact of self-driving cars, a few research papers submitted to NeurIPS 2020, and whether the sophistication of AV systems will lend itself to the development of tomorrow's enterprise machine learning systems.
The complete show notes for this episode can be found at twimlai.com/go/454.
Today we're joined by Ya Xu, Head of Data Science at LinkedIn, and TWIMLcon: AI Platforms 2021 keynote speaker.
We cover a ton of ground with Ya, starting with her experiences prior to becoming Head of Data Science, as one of the architects of the LinkedIn platform. We discuss her "three phases" (building, adoption, and maturation) to keep in mind when building out a platform, and how to avoid "hero syndrome" early in the process.
Finally, we dig into the various tools and platforms that give LinkedIn teams leverage, their organizational structure, as well as the emergence of differential privacy for security use cases and whether it's ready for prime time.
The complete show notes for this episode can be found at https://twimlai.com/go/453.
Today we're joined by Jesse Engel, Staff Research Scientist at Google, working on the Magenta Project.
In our conversation with Jesse, we explore the current landscape of creative AI, and the role Magenta plays in helping express creativity through ML and deep learning. We dig deep into their Differentiable Digital Signal Processing (DDSP) library, which "lets you combine the interpretable structure of classical DSP elements (such as filters, oscillators, reverberation, etc.) with the expressivity of deep learning."
Finally, Jesse walks us through some of the other projects that the Magenta team undertakes, including NLP and language modeling, and what he wants to see come out of the work that he and others are doing in creative AI research.
The complete show notes for this episode can be found at twimlai.com/go/452.
Today we're joined by return guest Francisco Webber, CEO & Co-founder of Cortical.io.
Francisco was originally a guest over four years and 400 episodes ago, when we discussed his company Cortical.io and their unique approach to natural language processing. In this conversation, Francisco gives us an update on Cortical.io, including their applications and toolkit, which covers semantic extraction, classification, and search use cases. We also discuss GPT-3 and how it compares to semantic folding, the unreasonable amount of data needed to train these models, and the difference between the GPT approach and semantic modeling for language understanding.
The complete show notes for this episode can be found at twimlai.com/go/451.
Today we're joined by Gurdeep Pall, Corporate Vice President at Microsoft.
Gurdeep, whom we had the pleasure of speaking with on his 31st anniversary at the company, has had a hand in creating quite a few influential projects, including Skype for Business (and Teams), and was part of the first team to ship Wi-Fi as part of a general-purpose operating system.
In our conversation with Gurdeep, we discuss Microsoft's acquisition of Bonsai and how it fits into the toolchain for creating brains for autonomous systems with "machine teaching," as well as other practical applications of machine teaching in autonomous systems. We also explore the challenges of simulation, and how simulations have evolved to make the problems the physical world brings more tenable. Finally, Gurdeep shares concrete use cases for autonomous systems, how to get the best ROI on those investments, and of course, what's next in the very broad space of autonomous systems.
The complete show notes for this episode can be found at twimlai.com/go/450.
Today we're joined by Bryan Carstens, a professor in the Department of Evolution, Ecology, and Organismal Biology & Head of the Tetrapod Division in the Museum of Biological Diversity at The Ohio State University.
In our conversation with Bryan, who comes from a traditional biology background, we cover a ton of ground, including a foundational layer of understanding of the vast known unknowns in species and biodiversity, and how he came to apply machine learning to his lab's research.
We explore a few of his lab's projects, including applying ML to genetic data to understand the geographic and environmental structure of DNA, the factors that keep machine learning from being used more frequently in biology, and what's next for his group.
The complete show notes for this episode can be found at twimlai.com/go/449.
Today we're joined by Jason Gauci, a Software Engineering Manager at Facebook AI.
In our conversation with Jason, we explore their reinforcement learning platform, ReAgent (formerly Horizon). We discuss the role of decision making and game theory in the platform and the types of decisions they're using ReAgent to make, from ranking and recommendations to their eCommerce marketplace.
Jason also walks us through the differences between online/offline and on-policy/off-policy model training, and where ReAgent sits on this spectrum. Finally, we discuss the concept of counterfactual causality, and how they ensure safety in the results of their models.
The complete show notes for this episode can be found at twimlai.com/go/448.
Today we're joined by Saiph Savage, a Visiting Professor at the Human-Computer Interaction Institute at CMU, director of the HCI Lab at WVU, and co-director of the Civic Innovation Lab at UNAM.
We caught up with Saiph during NeurIPS, where she delivered an insightful invited talk, "A Future of Work for the Invisible Workers in A.I." In our conversation with Saiph, we gain a better understanding of the "invisible workers," the people doing the labeling work behind machine learning and AI systems, and some of the issues that come with these jobs, including lack of economic empowerment and emotional trauma.
We discuss ways that we can empower these workers, and push the companies that employ them to do the same. Finally, we discuss Saiph's participatory design work with rural workers in the global south.
The complete show notes for this episode can be found at twimlai.com/go/447.
Today we're back with the final episode of AI Rewind, joined by Michael Bronstein, a professor at Imperial College London and the Head of Graph Machine Learning at Twitter.
In our conversation with Michael, we touch on his thoughts about the year in Machine Learning overall, including GPT-3 and Implicit Neural Representations, but spend a major chunk of time on the sub-field of Graph Machine Learning.
We talk through the application of Graph ML across domains like physics and bioinformatics, and the tools to look out for. Finally, we discuss what Michael thinks is in store for 2021, including Graph ML applied to molecule discovery and non-human communication translation.
Today we continue the 2020 AI Rewind series, joined by friend of the show Sameer Singh, an Assistant Professor in the Department of Computer Science at UC Irvine.
We last spoke with Sameer at our natural language processing office hours back at TWIMLfest, and he was the perfect person to help us break down 2020 in NLP. Sameer tackles the review in four main categories: Massive Language Modeling, Fundamental Problems with Language Models, Practical Vulnerabilities with Language Models, and Evaluation.
We also explore the impact of GPT-3 and transformer models, the intersection of vision and language models, the injection of causal thinking and modeling into language models, and much more.
The complete show notes for this episode can be found at twimlai.com/go/445.
AI Rewind continues today as we're joined by Pavan Turaga, Associate Professor in the Departments of Arts, Media, and Engineering and Electrical Engineering, and Interim Director of the School of Arts, Media, and Engineering at Arizona State University.
Pavan, who joined us back in June to talk through his CVPR '20 work, Invariance, Geometry and Deep Neural Networks, is back to walk us through the trends he's seen in computer vision over the last year. We explore the revival of physics-based thinking about scenes, differentiable rendering, the best papers, and where the field is going in the near future.
The complete show notes for this episode can be found at twimlai.com/go/444.
Today we kick off our annual AI Rewind series joined by friend of the show Pablo Samuel Castro, a Staff Research Software Developer at Google Brain.
Pablo joined us earlier this year for a discussion about music and AI and his geometric perspective on reinforcement learning, as well as our RL office hours during the inaugural TWIMLfest. In today's conversation, we explore some of the latest and greatest RL advancements coming out of the major conferences this year, broken down into a few major themes: Metrics/Representations, Understanding and Evaluating Deep Reinforcement Learning, and RL in the Real World.
This was a very fun conversation, and we encourage you to check out all the great papers and other resources available on the show notes page.
Today we close out our NeurIPS series joined by Aravind Rajeswaran, a PhD Student in machine learning and robotics at the University of Washington.
At NeurIPS, Aravind presented his paper MOReL: Model-Based Offline Reinforcement Learning. In our conversation, we explore model-based reinforcement learning, and whether models are a "prerequisite" to achieving something analogous to transfer learning. We also dig into MOReL and the recent progress in offline reinforcement learning, the differences between developing MOReL models and traditional RL models, and the theoretical results they're seeing from this research.
The complete show notes for this episode can be found at twimlai.com/go/442.
As we continue our NeurIPS 2020 series, we're joined by friend-of-the-show Charles Isbell, Dean, John P. Imlay, Jr. Chair, and professor at the Georgia Tech College of Computing.
Charles gave an invited talk at this year's conference, You Can't Escape Hyperparameters and Latent Variables: Machine Learning as a Software Engineering Enterprise. In our conversation, we explore the success of the Georgia Tech Online Masters program in CS, which now has over 11k students enrolled, and the importance of making education accessible to as many people as possible. We spend quite a bit of time speaking about the impact machine learning is beginning to have on the world, and how we should move from thinking of ourselves as compiler hackers and begin to see the possibilities and opportunities that have been ignored.
We also touch on the fallout from Timnit Gebru being "resignated," the importance of having diverse voices and different perspectives "in the room," and what the future holds for machine learning as a discipline.
The complete show notes for this episode can be found at twimlai.com/go/441.
Today we kick off our NeurIPS 2020 series joined by Taco Cohen, a Machine Learning Researcher at Qualcomm Technologies.
In our conversation with Taco, we discuss his current research in equivariant networks and video compression using generative models, as well as his paper "Natural Graph Networks," which explores the concept of "naturality, a generalization of equivariance," suggesting that weaker constraints will allow for a "wider class of architectures."
We also discuss some of Taco's recent research on neural compression and a very interesting visual demo for equivariant CNNs that Taco and the Qualcomm team released during the conference.
The complete show notes for this episode can be found at twimlai.com/go/440.
Today we close out our re:Invent series joined by Edgar Bahilo Rodriguez, Lead Data Scientist in the industrial applications division of Siemens Energy.
Edgar spoke at this year's re:Invent conference about productionizing R workloads, and the resurgence of R for machine learning in production. In our conversation with Edgar, we explore the fundamentals of building a strong machine learning infrastructure, and how they're breaking down applications and using mixed technologies to build models.
We also discuss their industrial applications, including wind and power production management, systems intended to decrease the environmental impact of pre-existing installations, and their extensive use of time-series forecasting across these use cases.
The complete show notes can be found at twimlai.com/go/439.
Today we continue our re:Invent series with Srivathsan Canchi, Head of Engineering for the Machine Learning Platform team at Intuit.
As we teased earlier this week, one of the major announcements coming from AWS at re:Invent was the release of the SageMaker Feature Store. To our pleasant surprise, we came to learn that our friends at Intuit are the original architects of this offering and partnered with AWS to productize it at a much broader scale. In our conversation with Srivathsan, we explore the focus areas supported by the Intuit machine learning platform across various teams, including QuickBooks and Mint, TurboTax, and Credit Karma, and his thoughts on why companies should be investing in feature stores.
We also discuss why the concept of the "feature store" has seemingly exploded in the last year, and how you know when your organization is ready to deploy one. Finally, we dig into the specifics of the feature store, including the popularity of GraphQL and why they chose to include it in their pipelines, the similarities (and differences) between the two versions of the store, and much more!
The complete show notes for this episode can be found at twimlai.com/go/438.
Today we're kicking off our annual re:Invent series joined by Swami Sivasubramanian, VP of Artificial Intelligence at AWS.
During re:Invent last week, Amazon made a ton of announcements on the machine learning front, including quite a few advancements to SageMaker. In this roundup conversation, we discuss the motivation for hosting the first-ever machine learning keynote at the conference, and a bunch of details surrounding tools like Pipelines for workflow management, Clarify for bias detection, and JumpStart for easy-to-use algorithms and notebooks, among many others.
We also discuss the emphasis placed on DevOps and MLOps tools in these announcements, and how the tools are all interconnected. Finally, we briefly touch on the announcement of the AWS feature store, but be sure to check back later this week for a more in-depth discussion on that particular release!
The complete show notes for this episode can be found at twimlai.com/go/437.
Today we're joined by Subarna Sinha, Machine Learning Engineering Leader at 23andMe.
23andMe handles a massive amount of genomic data every year from its core ancestry business but also uses that data for disease prediction, which is the core use case we discuss in our conversation.
Subarna talks us through an initial use case, evaluating polygenic scores, and how that led them to build an ML pipeline and platform. We talk through the tools and tech stack used to operationalize their platform, the use of synthetic data, the internal pushback that came with the changes being made, and what's next for her team and the platform.
The complete show notes for this episode can be found at twimlai.com/go/436.
Today we're joined by Daan Odijk, Data Science Manager at RTL.
In our conversation with Daan, we explore the RTL MLOps journey, and their need to put platform infrastructure in place for ad optimization and forecasting, personalization, and content understanding use cases. Daan walks us through some of the challenges on both the modeling and engineering sides of building the platform, as well as the inherent challenges of video applications.
Finally, we discuss the current state of their platform, the benefits they've seen from having this infrastructure in place, and why building a custom platform was worth the investment.
The complete show notes for this episode can be found at twimlai.com/go/435.
Today we're joined by Peter Mattson, General Chair at MLPerf, a Staff Engineer at Google, and President of MLCommons.
In our conversation with Peter, we discuss MLCommons and MLPerf, the former an open engineering group with the goal of accelerating machine learning innovation, and the latter a set of standardized machine learning benchmarks used to measure things like model training speed and inference throughput.
We explore the target user for the MLPerf benchmarks, the need for benchmarks in the ethics, bias, and fairness space, and how they're approaching this through the "People's Speech" dataset. We also walk through the MLCommons best practices for getting a model into production, why it's so difficult, and how MLCube can make the process easier for researchers and developers.
The complete show notes page for this episode can be found at twimlai.com/go/434.
Today we're joined by Charlene Chambliss, Machine Learning Engineer at Primer AI.
Charlene, who we also had the pleasure of hosting at NLP Office Hours during TWIMLfest, is back to share some of the work she's been doing in NLP. In our conversation, we explore her experiences working with newer NLP models and tools like BERT and Hugging Face, as well as what she's learned along the way about word embeddings, labeling tasks, debugging, and more. We also focus on a few of her projects, like her popular multilingual BERT project and a COVID-19 classifier.
Finally, Charlene shares her experience getting into data science and machine learning from a non-technical background, what the transition was like, and tips for people looking to make a similar shift.
In this special episode of the podcast, we're joined by Kevin Stumpf, Co-Founder and CTO of Tecton, Willem Pienaar, an engineering lead at Gojek and founder of the Feast Project, and Maxime Beauchemin, Founder & CEO of Preset, for a discussion on Feature Stores for Accelerating AI Development.
In this panel discussion, Sam and our guests explore how organizations can increase value and decrease time-to-market for machine learning using feature stores, MLOps, and open source. We also discuss the main data challenges of AI/ML, and the role of the feature store in solving those challenges.
The complete show notes for this episode can be found at twimlai.com/go/432.
In this special edition of the podcast, we're joined by Shalini Kantayya, the director of Coded Bias, and Deb Raji and Meredith Broussard, who both contributed to the film.
In this panel discussion, Sam and our guests explore the societal implications of the biases embedded within AI algorithms. The conversation covers examples of AI systems with disparate impact across industries and communities, what can be done to mitigate this disparity, and opportunities to get involved.
Our panelists Shalini, Meredith, and Deb each share insight into their experiences working on and researching bias in AI systems, and the oppressive and dehumanizing impact these systems can have on people in the real world.
The complete show notes for this episode can be found at twimlai.com/go/431.
Today we're joined by Dileep George, Founder and CTO of Vicarious.
Dileep, who was also a co-founder of Numenta, works at the intersection of AI research and neuroscience, and famously pioneered hierarchical temporal memory. In our conversation, we explore the importance of mimicking the brain when looking to achieve artificial general intelligence, the nuance of "language understanding," and how all the tasks that fall underneath it are interconnected, with or without language.
We also discuss his work with Recursive Cortical Networks, Schema Networks, and what's next on the path towards AGI!
Today we're joined by Sushil Thomas, VP of Engineering for Machine Learning at Cloudera.
Over the summer, I had the pleasure of hosting Sushil and a handful of business leaders across industries at the Cloudera Virtual Roundtable. In this conversation with Sushil, we recap the roundtable, exploring some of the topics discussed and insights gained from those conversations. Sushil gives us a look at how COVID-19 has impacted businesses throughout the year, and how the pandemic is shaping enterprise decision making moving forward.
We also discuss some of the key trends he's seeing as organizations try to scale their machine learning and AI efforts, including understanding best practices, and learning how to hybridize the engineering side of ML with the scientific exploration of the tasks. Finally, we explore whether organizational models like hub vs. centralized are still organization-specific or whether that's changed in recent years, as well as how to attract and retain good ML talent with giant companies like Google and Microsoft looming large.
The complete show notes for this episode can be found at https://twimlai.com/go/429.
Today we're joined by Devin Singh, a Physician Lead for Clinical Artificial Intelligence & Machine Learning in Pediatric Emergency Medicine at the Hospital for Sick Children (SickKids) in Toronto, and Founder and CEO of Hero AI.
In our conversation with Devin, we discuss some of the interesting ways Devin is deploying machine learning within the SickKids hospital, and the current structure of academic research, including how heavily research and publications are incentivized, how few of those research projects actually make it to deployment, and how Devin is working to flip that system on its head.
We also talk about his work at Hero AI, where he is commercializing and deploying his academic research to build out infrastructure and deploy AI solutions within hospitals, creating an automated pipeline connecting patients, caregivers, and EHS companies. Finally, we discuss Devin's thoughts on how he'd approach bias mitigation in these systems, and the importance of proper stakeholder engagement and design methodology when building ML systems.
The complete show notes for this episode can be found at twimlai.com/go/428.
Today we're joined by Roland Memisevic, return podcast guest and Co-Founder & CEO of Twenty Billion Neurons.
We last spoke to Roland in 2018, and just earlier this year TwentyBN made a sharp pivot to a surprising use case: a companion app called Fitness Ally, an interactive, personalized fitness coach on your phone.
In our conversation with Roland, we explore the progress TwentyBN has made on their goal of training deep neural networks to understand physical movement and exercise. We also discuss how they've taken their research on understanding video context and awareness and applied it in their app, including how recent advancements have allowed them to deploy their neural net locally while preserving privacy, and Roland's thoughts on the enormous opportunity that lies in the merging of language and video processing.
The complete show notes for this episode can be found at twimlai.com/go/427.
Today we're joined by Jon Wang, a medical student at UCSF, and former Gates Scholar and AI researcher at the Bill and Melinda Gates Foundation.
In our conversation with Jon, we explore a few of the different ways he's attacking various public health issues, including improving the electronic health record system by automating clinical order sets, and how the lack of literature, AI talent, and good data in the non-profit and healthcare spaces has further marginalized undersupported communities.
We also discuss his work at the Gates Foundation, which included understanding how AI can be helpful in lower-resource and lower-income countries and building digital infrastructure, and much more.
The complete show notes for this episode can be found at twimlai.com/go/426.
Digital imagery is pervasive today. More than a billion images per day are produced and uploaded to social media sites, with many more embedded within websites, apps, digital documents, and eBooks. Engaging with digital imagery has become fundamental to participating in contemporary society, including education, the professions, e-commerce, civics, entertainment, and social interactions.
However, most digital images remain inaccessible to the 39 million people worldwide who are blind. AI and computer vision technologies hold the potential to increase image accessibility for people who are blind, through technologies like automated image descriptions.
The speakers share their perspectives as people who are both technology experts and are blind, providing insight into future directions for the field of computer vision for describing images and videos for people who are blind.
A video of this panel is available via the show notes page.
The complete show notes for this episode can be found at twimlai.com/go/425.
Today we're joined by Frank Zhao, Senior Director of Quantamental Research at S&P Global Market Intelligence.
In our conversation with Frank, we explore how he came to work at the intersection of ML and finance, and how he navigates the relationship between data science and domain expertise. We also discuss the rise of data science in the investment management space, examining the largely under-explored technique of using unstructured data to gain insights into equity investing, and the edge it can provide for investors.
Finally, Frank gives us a look at how he uses natural language processing with textual data of earnings call transcripts and walks us through the entire pipeline.
The complete show notes for this episode can be found at twimlai.com/go/424.
In the final #TWIMLfest keynote interview, we're joined by Salman Khan, Founder of Khan Academy.
In our conversation with Sal, we explore the amazing origin story of the academy, and how the coronavirus is shaping the future of education and remote and distance learning, for better and for worse. We also explore Sal's perspective on machine learning and AI being used broadly in education, the potential of injecting a platform like Khan Academy with ML and AI for course recommendations, and whether they're planning to implement these features in the future.
Finally, Sal shares some great stories about the impact of community and opportunity, and what advice he has for learners within the TWIML community and beyond!
The complete show notes for this episode can be found at twimlai.com/go/423.
In this special #TWIMLfest keynote episode, we're joined by Milind Tambe, Director of AI for Social Good at Google Research India, and Director of the Center for Research in Computation and Society (CRCS) at Harvard University.
In our conversation, we explore Milind's various research interests, most of which fall under the umbrella of AI for Social Impact, including his work in public health, both stateside and abroad, his conservation work in South Asia and Africa, and his thoughts on the ways that those interested in social impact can get involved.
The complete show notes for this episode can be found at twimlai.com/go/422.
In this special #TWIMLfest episode of the podcast, we're joined by Jeremy Howard, Founder of fast.ai.
In our conversation with Jeremy, we discuss his career path, including his journey through the consulting world and how those experiences led him down the path to ML education, his thoughts on the current state of the machine learning adoption cycle, and whether we're at maximum capacity for deep learning use and capability.
Of course, we dig into the newest version of the fast.ai framework and course, the reception of Jeremy's book "Deep Learning for Coders with fastai and PyTorch: AI Applications Without a PhD," and what's missing from the machine learning education landscape. If you've missed our previous conversations with Jeremy, I encourage you to check them out here and here.
The complete show notes for this episode can be found at https://twimlai.com/go/421.
Today we're joined by Mike del Balso, co-Founder and CEO of Tecton.
Mike, who you might remember from our last conversation on the podcast, was a foundational member of the Uber team that created their ML platform, Michelangelo. Since his departure from the company in 2018, he has been busy building up Tecton, and their enterprise feature store.
In our conversation, Mike walks us through why he chose to focus on the feature store aspects of the machine learning platform, the journey, personal and otherwise, to operationalizing machine learning, and the capabilities that more mature platforms teams tend to look for or need to build. We also explore the differences between standalone components and feature stores, if organizations are taking their existing databases and building feature stores with them, and what a dynamic, always available feature store looks like in deployment.
Finally, we explore what sets Tecton apart from other vendors in this space, including enterprise cloud providers who are throwing their hat in the ring.
The complete show notes for this episode can be found at twimlai.com/go/420.
Thanks to our friends at Tecton for sponsoring this episode of the podcast! Find out more about what they're up to at tecton.ai.
In this special #TWIMLfest episode, we're joined by Suzana Ilić, a computational linguist at Causaly and founder of Machine Learning Tokyo (MLT).
Suzana joined us as a keynote speaker to discuss the origins of the MLT community, but we cover a lot of ground in this conversation. We briefly discuss Suzana's work at Causaly, touching on her experiences transitioning from linguist and domain expert to working with causal modeling, balancing her role as both product manager and leader of the development team for their causality extraction module, and the unique ways that she thinks about UI in relation to their product.
We also spend quite a bit of time exploring MLT, including how they've achieved exponential growth within the community over the past few years, when Suzana knew MLT was moving beyond just a personal endeavor, her experiences publishing papers at major ML conferences as an independent organization, and what inspires her within the broader ML/AI community. And of course, we answer quite a few great questions from our live audience!
In this special #TWIMLfest edition of the podcast, we're joined by Shakir Mohamed, a Senior Research Scientist at DeepMind.
Shakir is also a leader of Deep Learning Indaba, a non-profit organization whose mission is to strengthen African machine learning and artificial intelligence. In our conversation with Shakir, we discuss his recent paper "Decolonial AI" and the distinction between decolonizing AI and ethical AI, while also exploring the origin of the Indaba, the phases of community, and much more.
The complete show notes for this episode can be found at twimlai.com/go/418.
Today we're joined by Adina Trufinescu, Principal Program Manager at Microsoft, to discuss some of the computer vision updates announced at Ignite 2020.
We focus on the technical innovations that went into their recently announced spatial analysis software, and the software?s use cases including the movement of people within spaces, distance measurements (social distancing), and more.
We also discuss the "responsible AI guidelines" put in place to curb bad actors potentially using this software for surveillance, what techniques are being used to do object detection and image classification, and the challenges to productizing this research.
The complete show notes for this episode can be found at twimlai.com/go/417.
Today we're joined by Cha Zhang, a Partner Engineering Manager at Microsoft Cloud & AI.
Cha's work at MSFT is focused on exploring ways that new technologies can be applied to optical character recognition, or OCR, pushing the boundaries of what has been seen as an otherwise "solved" problem. In our conversation with Cha, we explore some of the traditional challenges of doing OCR in the wild, and the ways in which deep learning algorithms are being applied to transform these solutions.
We also discuss the difficulties of using an end-to-end pipeline for OCR work, whether there is a semi-supervised framing that could be used for OCR, the role of techniques like neural architecture search, how advances in NLP could influence the advancement of OCR problems, and much more.
The complete show notes for this episode can be found at twimlai.com/go/416.
The complete show notes for this episode can be found at twimlai.com/go/415.
Today we're joined by Jeff Gehlhaar, VP of Technology at Qualcomm, and Zahra Koochak, Staff Machine Learning Engineer at Qualcomm AI Research.
If you haven't had a chance to listen to our first interview with Jeff, I encourage you to check it out here! In this conversation, we catch up with Jeff and Zahra to get an update on what the company has been up to since our last conversation, including the Snapdragon 865 chipset and Hexagon Neural Network Direct.
We also discuss open-source projects like the AI efficiency toolkit and Tensor Virtual Machine compiler, and how these projects fit in the broader Qualcomm ecosystem. Finally, we talk through their vision for on-device federated learning.
The complete show notes for this episode can be found at twimlai.com/go/414.
Today we're joined by Sasha Luccioni, a Postdoctoral Researcher at the MILA Institute, and moderator of our upcoming TWIMLfest Panel, "Machine Learning in the Fight Against Climate Change."
We were first introduced to Sasha's work through her paper "Visualizing the Consequences of Climate Change Using Cycle-Consistent Adversarial Networks," and we're excited to pick her brain about the ways ML is currently being leveraged to help the environment. In our conversation, we explore the use of GANs to visualize the consequences of climate change, the evolution of the different approaches she used, and the challenges of training GANs using an end-to-end pipeline.
Finally, we talk through Sasha's goals for the aforementioned panel, which is scheduled for Friday, October 23rd at 1 pm PT. Register for all of the great TWIMLfest sessions at twimlfest.com!
The complete show notes for this episode can be found at twimlai.com/go/413.
Today we're joined by Burr Settles, Research Director at Duolingo. Most would acknowledge that one of the most effective ways to learn is one on one with a tutor, and Duolingo's main goal is to replicate that at scale.
In our conversation with Burr, we dig into how the business model has changed over time, the properties that make a good tutor, and how those features translate to the AI tutor they've built. We also discuss the Duolingo English Test, and the challenges they've faced in maintaining the platform while adding languages and courses.
Check out the complete show notes for this episode at twimlai.com/go/412.
Today we're joined by Artur Yakimovich, Co-Founder at Artificial Intelligence for Life Sciences and a visiting scientist in the Lab for Molecular Cell Biology at University College London. In our conversation with Artur, we explore the gulf that exists between life science researchers and the tools and applications used by computer scientists.
While Artur's background is in viral chemistry, he has since transitioned to a career in computational biology to "see where chemistry stopped, and biology started." We discuss his work in that middle ground, looking at several of his recent projects applying deep learning and advanced neural networks, like capsule networks, to his research problems.
Finally, we discuss his efforts building the Artificial Intelligence for Life Sciences community, a non-profit organization he founded to bring scientists from different fields together to share ideas and solve interdisciplinary problems.
Check out the complete show notes at twimlai.com/go/411.
Today we're joined by Kavita Bala, the Dean of Computing and Information Science at Cornell University.
Kavita, whose research explores the overlap of computer vision and computer graphics, joined us to discuss a few of her projects, including GrokStyle, a startup that was recently acquired by Facebook and is currently being deployed across their Marketplace features. We also talk about StreetStyle/GeoStyle, projects focused on using social media data to find style clusters across the globe.
Kavita shares her thoughts on the privacy and security implications, progress with integrating privacy-preserving techniques into vision projects like the ones she works on, and what's next for Kavita's research.
The complete show notes for this episode can be found at twimlai.com/go/410.