Towards Data Science

A Medium publication sharing concepts, ideas, and codes.

Website

towardsdatascience.com/

Episodes

72. Margot Gerritsen - Does AI have to be understandable to be ethical?

As AI systems have become more ubiquitous, people have begun to pay more attention to their ethical implications. Those implications are potentially enormous: Google's search algorithm and Twitter's recommendation system each have the ability to meaningfully sway public opinion on just about any issue. As a result, Google and Twitter's choices have an outsized impact, not only on their immediate user base, but on society in general.

That kind of power comes with risk of intentional misuse (for example, Twitter might choose to boost tweets that express views aligned with their preferred policies). But while intentional misuse is an important issue, equally challenging is the problem of avoiding unintentionally bad outputs from AI systems.

Unintentionally bad AIs can lead to various biases that make algorithms perform better for some people than for others, or more generally to systems that are optimizing for things we actually don't want in the long run. For example, platforms like Twitter and YouTube have played an important role in the increasing polarization of their US (and worldwide) user bases. They never intended to do this, of course, but their effect on social cohesion is arguably the result of internal cultures based on narrow metric optimization: when you optimize for short-term engagement, you often sacrifice long-term user well-being.

The unintended consequences of AI systems are hard to predict, almost by definition. But their potential impact makes them very much worth thinking and talking about, which is why I sat down with Stanford professor, co-director of the Women in Data Science (WiDS) initiative, and host of the WiDS podcast Margot Gerritsen for this episode of the podcast.

2021-02-24
Link to episode

71. Ben Garfinkel - Superhuman AI and the future of democracy and government

As we continue to develop more and more sophisticated AI systems, an increasing number of economists, technologists and futurists have been trying to predict what the likely end point of all this progress might be. Will human beings be irrelevant? Will we offload all of our decisions (from what we want to do with our spare time, to how we govern societies) to machines? And what does the emergence of highly capable and highly general AI systems mean for the future of democracy and governance?

These questions are impossible to answer completely and directly, but it may be possible to get some hints by taking a long-term view of the history of human technological development. That's the strategy that my guest, Ben Garfinkel, is applying in his research on the future of AI. Ben is a physicist and mathematician who now does research on forecasting risks from emerging technologies at Oxford's Future of Humanity Institute.

Apart from his research on forecasting the future impact of technologies like AI, Ben has also spent time exploring some classic arguments for AI risk, many of which he disagrees with. Since we've had a number of guests on the podcast who do take these risks seriously, I thought it would be worth speaking to Ben about his views as well, and I'm very glad I did.

2021-02-17
Link to episode

70. Sarah Williams - What does ethical AI even mean?

There's no question that AI ethics has received a lot of well-deserved attention lately. But ask the average person what AI ethics means, and you're as likely as not to get a blank stare. I think that's largely because every data science or machine learning problem comes with a unique ethical context, so it can be hard to pin down ethics principles that generalize to a wide class of AI problems.

Fortunately, there are researchers who focus on just this issue, and my guest today, Sarah Williams, is one of them. Sarah is an associate professor of urban planning and the director of the Civic Data Design Lab at MIT's School of Architecture and Planning. Her job is to study applications of data science to urban planning, and to work with policymakers on applying AI in an ethical way. Through that process, she's distilled several generalizable AI ethics principles that have practical and actionable implications.

This episode was a wide-ranging discussion about everything from the way our ideologies can colour our data analysis to the challenges governments face when trying to regulate AI.

2021-02-10
Link to episode

69. Anders Sandberg - Answering the Fermi Question: Is AI our Great Filter?

The apparent absence of alien life in our universe has been a source of speculation and controversy in scientific circles for decades. If we assume that there's even a tiny chance that intelligent life might evolve on a given planet, it seems almost impossible to imagine that the cosmos isn't brimming with alien civilizations. So where are they?

That's what Anders Sandberg calls the "Fermi Question": given the unfathomable size of the universe, how come we have seen no signs of alien life? Anders is a researcher at the University of Oxford's Future of Humanity Institute, where he tries to anticipate the ethical, philosophical and practical questions that human beings are going to have to face as we approach what could be a technologically unbounded future. That work focuses to a great extent on superintelligent AI and the existential risks it might create. As part of that work, he's studied the Fermi Question in great detail, and what it implies for the scarcity of life and the value of the human species.

2021-02-03
Link to episode

68. Silvia Milano - Ethical problems with recommender systems

One of the consequences of living in a world where we have every kind of data we could possibly want at our fingertips is that we have far more data available to us than we could possibly review. Wondering which university program you should enter? You could visit any one of a hundred thousand websites that each offer helpful insights, or take a look at ten thousand different program options on hundreds of different universities' websites. The only snag is that, by the time you finish that review, you probably could have graduated.

Recommender systems allow us to take controlled sips from the information fire hose that's pointed our way every day of the week, by highlighting a small number of particularly relevant or valuable items from a vast catalog. And while they're incredibly valuable pieces of technology, they also have some serious ethical failure modes, many of which arise because companies tend to build recommenders to reflect user feedback, without thinking of the broader implications these systems have for society and human civilization.

Those implications are significant, and growing fast. Recommender algorithms deployed by Twitter and Google regularly shape public opinion on the key moral issues of our time, sometimes intentionally, and sometimes even by accident. So rather than allowing society to be reshaped in the image of these powerful algorithms, perhaps it's time we asked some big questions about the kind of world we want to live in, and worked backward to figure out what our answers would imply for the way we evaluate recommendation engines.

That's exactly why I wanted to speak with Silvia Milano, my guest for this episode of the podcast. Silvia is an expert on the ethics of recommender systems, and a researcher at Oxford's Future of Humanity Institute and at the Oxford Internet Institute, where she's been involved in work aimed at better understanding the hidden impact of recommendation algorithms, and what can be done to mitigate their more negative effects. Our conversation led us to consider complex questions, including the definition of identity, the human right to self-determination, and the interaction of governments with technology companies.

2021-01-27
Link to episode

67. Joaquin Quiñonero-Candela - Responsible AI at Facebook

Facebook routinely deploys recommendation systems and predictive models that affect the lives of billions of people every day. That kind of reach comes with great responsibility: among other things, the responsibility to develop AI tools that are ethical, fair and well characterized.

This isn't an easy task. Human beings have spent thousands of years arguing about what "fairness" and "ethics" mean, and haven't come close to a consensus. That's precisely why the responsible AI community has to involve as many disparate perspectives as possible in determining what policies to explore and recommend, a practice that Facebook's Responsible AI team has embraced.

For this episode of the podcast, I'm joined by Joaquin Quiñonero-Candela, the Distinguished Tech Lead for Responsible AI at Facebook. Joaquin has been at the forefront of the AI ethics and fairness movements for years, and has overseen the formation of Facebook's responsible AI team. As a result, he's one of relatively few people with hands-on experience making critical AI ethics decisions at scale, and seeing their effects.

Our conversation covered a lot of ground, from philosophical questions about the definition of fairness, to practical challenges that arise when implementing certain ethical AI frameworks.

2021-01-20
Link to episode

66. Owain Evans - Predicting the future of AI

Most researchers agree we'll eventually reach a point where our AI systems begin to exceed human performance at virtually every economically valuable task, including the ability to generalize from what they've learned to take on new tasks that they haven't seen before. These artificial general intelligences (AGIs) would in all likelihood have transformative effects on our economies, our societies and even our species.

No one knows what these effects will be, or when AGI systems will be developed that can bring them about. But that doesn't mean these things aren't worth predicting or estimating. The more we know about the amount of time we have to develop robust solutions to important AI ethics, safety and policy problems, the more clearly we can think about what problems should be receiving our time and attention today.

That's the thesis that motivates a lot of work on AI forecasting: the attempt to predict key milestones in AI development, on the path to AGI and super-human artificial intelligence. It's still early days for this space, but it's received attention from an increasing number of AI safety and AI capabilities researchers. One of those researchers is Owain Evans, whose work at Oxford University's Future of Humanity Institute is focused on techniques for learning about human beliefs, preferences and values from observing human behavior or interacting with humans. Owain joined me for this episode of the podcast to talk about AI forecasting, the problem of inferring human values, and the ecosystem of research organizations that support this type of research.

2021-01-13
Link to episode

65. Helen Toner - The strategic and security implications of AI

With every new technology comes the potential for abuse. And while AI is clearly starting to deliver an awful lot of value, it's also creating new systemic vulnerabilities that governments now have to worry about and address. Self-driving cars can be hacked. Speech synthesis can make traditional ways of verifying someone's identity less reliable. AI can be used to build weapons systems that are less predictable.

As AI technology continues to develop and become more powerful, we'll have to worry more about safety and security. But competitive pressures risk encouraging companies and countries to focus on capabilities research rather than responsible AI development. Solving this problem will be a big challenge, and it's probably going to require new national AI policies, and international norms and standards that don't currently exist.

Helen Toner is Director of Strategy at the Center for Security and Emerging Technology (CSET), a US policy think tank that connects policymakers to experts on the security implications of new technologies like AI. Her work spans national security, technology policy and international AI competition, and she's become an expert on AI in China in particular. Helen joined me for a special AI policy-themed episode of the podcast.

2021-01-06
Link to episode

64. David Krueger - Managing the incentives of AI

What does a neural network system want to do?

That might seem like a straightforward question. You might imagine that the answer is "whatever the loss function says it should do." But when you dig into it, you quickly find that the answer is much more complicated than that might imply.

In order to accomplish their primary goal of optimizing a loss function, algorithms often develop secondary objectives (known as instrumental goals) that are tactically useful for that main goal. For example, a computer vision algorithm designed to tell faces apart might find it beneficial to develop the ability to detect noses with high fidelity. Or in a more extreme case, a very advanced AI might find it useful to monopolize the Earth's resources in order to accomplish its primary goal, and it's been suggested that this might actually be the default behavior of powerful AI systems in the future.

So, what does an AI want to do? Optimize its loss function, perhaps. But a sufficiently complex system is likely to also manifest instrumental goals. And if we don't develop a deep understanding of AI incentives, and reliable strategies to manage those incentives, we may be in for an unpleasant surprise when unexpected and highly strategic behavior emerges from systems with simple and desirable primary goals. Which is why it's a good thing that my guest today, David Krueger, has been working on exactly that problem. David studies deep learning and AI alignment at MILA, and joined me to discuss his thoughts on AI safety, and his work on managing the incentives of AI systems.

2020-12-30
Link to episode

63. Geordie Rose - Will AGI need to be embodied?

The leap from today's narrow AI to a more general kind of intelligence seems likely to happen at some point in the next century. But no one knows exactly how: at the moment, AGI remains a significant technical and theoretical challenge, and expert opinion about what it will take to achieve it varies widely. Some think that scaling up existing paradigms (like deep learning and reinforcement learning) will be enough, but others think these approaches are going to fall short.

Geordie Rose is one of them, and his voice is one that's worth listening to: he has deep experience with hard tech, from founding D-Wave (the world's first quantum computing company) to building Kindred Systems, a company pioneering applications of reinforcement learning in industry that was recently acquired for $350 million.

Geordie is now focused entirely on AGI. Through his current company, Sanctuary AI, he's working on an exciting and unusual thesis. At the core of this thesis is the idea that one of the easiest paths to AGI will be to build embodied systems: AIs with physical structures that can move around in the real world and interact directly with objects. Geordie joined me for this episode of the podcast to discuss his AGI thesis, as well as broader questions about AI safety and AI alignment.

2020-12-23
Link to episode

62. Nicolai Baldin - AI meets the law: Bias, fairness, privacy and regulation

The fields of AI bias and AI fairness are still very young. And just like most young technical fields, they're dominated by theoretical discussions: researchers argue over what words like "privacy" and "fairness" mean, but don't do much in the way of applying these definitions to real-world problems.

Slowly but surely, this is all changing though, and government oversight has had a big role to play in that process. Laws like GDPR, passed by the European Union in 2016, are starting to impose concrete requirements on companies that want to use consumer data, or build AI systems with it. There are pros and cons to legislating machine learning, but one thing's for sure: there's no looking back. At this point, it's clear that government-endorsed definitions of "bias" and "fairness" in AI systems are going to be applied to companies (and therefore to consumers), whether they're well-developed and thoughtful or not.

Keeping up with the philosophy of AI is a full-time job for most, but actually applying that philosophy to real-world corporate data is its own additional challenge. My guest for this episode of the podcast is doing just that: Nicolai Baldin is a former Cambridge machine learning researcher, and now the founder and CEO of Synthesized, a startup that specializes in helping companies apply privacy, AI fairness and bias best practices to their data. Nicolai is one of relatively few people working on concrete problems in these areas, and has a unique perspective on the space as a result.

2020-12-16
Link to episode

61. Ben Goertzel - The unorthodox path to AGI

No one knows for sure what it's going to take to make artificial general intelligence work. But that doesn't mean that there aren't prominent research teams placing big bets on different theories: DeepMind seems to be hoping that a brain emulation strategy will pay off, whereas OpenAI is focused on achieving AGI by scaling up existing deep learning and reinforcement learning systems with more data and more compute.

Ben Goertzel, a pioneering AGI researcher and the guy who literally coined the term "AGI", doesn't think either of these approaches is quite right. His alternative approach is the strategy currently being used by OpenCog, an open-source AGI project he first released in 2008. Ben is also a proponent of decentralized AI development, due to his concerns about centralization of power through AI as the technology improves. For that reason, he's currently working on building a decentralized network of AIs through SingularityNET, a blockchain-powered AI marketplace that he founded in 2017.

Ben has some interesting and contrarian views on AGI, AI safety, and consciousness, and he was kind enough to explore them with me on this episode of the podcast.

2020-12-09
Link to episode

60. Rob Miles - Why should I care about AI safety?

Progress in AI capabilities has consistently surprised just about everyone, including the very developers and engineers who build today's most advanced AI systems. AI can now match or exceed human performance in everything from speech recognition to driving, and one question that's increasingly on people's minds is: when will AI systems be better than humans at AI research itself?

The short answer, of course, is that no one knows for sure, but some have taken educated guesses, including Nick Bostrom and Stuart Russell. One common hypothesis is that once AI systems are better than humans at improving their own performance, we can expect at least some of them to do so. In the process, these self-improving systems would become even more powerful than they were previously, and therefore even more capable of further self-improvement. With each additional self-improvement step, improvements in a system's performance would compound. Where this all ultimately leads, no one really has a clue, but it's safe to say that if there's a good chance that we're going to be creating systems that are capable of this kind of stunt, we ought to think hard about how we should be building them.
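
To make the compounding intuition concrete, here's a toy back-of-the-envelope sketch (my own illustration with made-up numbers, not something discussed in the episode):

```python
# Toy illustration of compounding self-improvement: assume (hypothetically)
# that each improvement cycle makes the system 10% more capable, including
# more capable at improving itself.
capability = 1.0   # arbitrary units; 1.0 = starting level
rate = 0.10        # assumed fixed gain per cycle (made-up number)

for cycle in range(1, 11):
    capability *= 1 + rate
    print(f"cycle {cycle:2d}: capability = {capability:.2f}")

# After 10 cycles, capability is ~2.6x the starting level. If the gain per
# cycle itself grows as the system improves, growth is faster than exponential.
```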

This concern, among many others, has led to the development of the rich field of AI safety, and my guest for this episode, Robert Miles, has been involved in popularizing AI safety research for more than half a decade through two very successful YouTube channels, Robert Miles and Computerphile. He joined me on the podcast to discuss how he's thinking about AI safety, what AI means for the course of human evolution, and what our biggest challenges will be in taming advanced AI.

2020-12-02
Link to episode

59. Matthew Stewart - Tiny ML and the future of on-device AI

When it comes to machine learning, we're often led to believe that bigger is better. It's now pretty clear that all else being equal, more data, more compute, and larger models add up to give more performance and more generalization power. And cutting-edge language models have been growing at an alarming rate, by up to 10X each year.

But size isn't everything. While larger models are certainly more capable, they can't be used in all contexts: take, for example, the case of a cell phone or a small drone, where on-device memory and processing power just isn't enough to accommodate giant neural networks or huge amounts of data. The art of doing machine learning on small devices with significant power and memory constraints is pretty new, and it's now known as "tiny ML". Tiny ML unlocks an awful lot of exciting applications, but also raises a number of safety and ethical questions.
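
For a flavour of what that art looks like in practice, here's a minimal, hedged sketch of one common tiny ML step, post-training quantization with TensorFlow Lite (the model here is a placeholder, not one from the episode):

```python
# Sketch: shrinking a Keras model for a memory-constrained device using
# TensorFlow Lite post-training quantization. Real tiny ML work also involves
# hardware-specific tooling; this shows only the model-compression step.
import tensorflow as tf

model = tf.keras.Sequential([            # stand-in model, purely illustrative
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)                # this compact flatbuffer ships to the device
```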

And that's why I wanted to sit down with Matthew Stewart, a Harvard PhD researcher focused on applying tiny ML to environmental monitoring. Matthew has worked with many of the world's top tiny ML researchers, and our conversation focused on the possibilities and potential risks associated with this promising new field.

2020-11-25
Link to episode

58. David Duvenaud - Using generative models for explainable AI

In the early 1900s, all of our predictions were the direct product of human brains. Scientists, analysts, climatologists, mathematicians, bankers, lawyers and politicians did their best to anticipate future events, and plan accordingly.

Take physics, for example, where every task we think of as part of the learning process, from data collection to cleaning to feature selection to modeling, had to happen inside a physicist's head. When Einstein introduced gravitational fields, what he was really doing was proposing a new feature to be added to our model of the universe. And the gravitational field equations that he put forward at the same time were an update to that very model.

Einstein didn't come up with his new model (or "theory", as physicists call it) of gravity by running model.fit() in a Jupyter notebook. In fact, he never outsourced any of the computations that were needed to develop it to machines.

Today, that's somewhat unusual, and most of the predictions that the world runs on are generated in part by computers. But only in part. Until we have fully general artificial intelligence, machine learning will always be a mix of two things: first, the constraints that human developers impose on their models, and second, the calculations that go into optimizing those models, which we outsource to machines.
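
As a minimal sketch of that division of labour (my illustration, using scikit-learn on toy data): the human picks the features and the model family, and the machine finds the parameters.

```python
# The two ingredients described above, in miniature.
import numpy as np
from sklearn.linear_model import LinearRegression

# Human-imposed constraints: one hand-chosen feature, and a linear model family.
X = np.array([[1.0], [2.0], [3.0], [4.0]])   # toy inputs
y = np.array([2.1, 3.9, 6.2, 7.8])           # toy observations

# Outsourced to the machine: optimizing the parameters within those constraints.
model = LinearRegression().fit(X, y)
print(model.coef_, model.intercept_)         # the fitted slope and offset
```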

The human touch is still a necessary and ubiquitous component of every machine learning pipeline, but it's ultimately limiting: the more of the learning pipeline that can be outsourced to machines, the more we can take advantage of computers' ability to learn faster and from far more data than human beings. But designing algorithms that are flexible enough to do that requires serious outside-of-the-box thinking, exactly the kind of thinking that University of Toronto professor and researcher David Duvenaud specializes in. I asked David to join me for the latest episode of the podcast to talk about his research on more flexible and robust machine learning strategies.

2020-11-18
Link to episode

57. Dylan Hadfield-Menell - Humans in the loop

Human beings are collaborating with artificial intelligences on an increasing number of high-stakes tasks. I'm not just talking about robot-assisted surgery or self-driving cars here: every day, social media apps recommend content to us that quite literally shapes our worldviews and our cultures. And very few of us even have a basic idea of how these all-important recommendations are generated.

As time goes on, we're likely going to become increasingly dependent on our machines, outsourcing more and more of our thinking to them. If we aren't thoughtful about the way we do this, we risk creating a world that doesn't reflect our current values or objectives. That's why the domain of human/AI collaboration and interaction is so important, and it's the reason I wanted to speak to Berkeley AI researcher Dylan Hadfield-Menell for this episode of the Towards Data Science podcast. Dylan's work is focused on designing algorithms that could allow humans and robots to collaborate more constructively, and he's one of a small but growing cohort of AI researchers focused on the area of AI ethics and AI alignment.

2020-11-11
Link to episode

56. Annette Zimmermann - The ethics of AI

As AI systems have become more powerful, they've been deployed to tackle an increasing number of problems.

Take computer vision. Less than a decade ago, one of the most advanced applications of computer vision algorithms was to classify hand-written digits on mail. And yet today, computer vision is being applied to everything from self-driving cars to facial recognition and cancer diagnostics.

Practically useful AI systems have now firmly moved from "what if?" territory to "what now?" territory. And as more and more of our lives are run by algorithms, an increasing number of researchers from domains outside computer science and engineering are starting to take notice. Most notable among these are philosophers, many of whom are concerned about the ethical implications of outsourcing our decision-making to machines whose reasoning we often can't understand or even interpret.

One of the most important voices in the world of AI ethics has been that of Dr Annette Zimmermann, a Technology & Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University, and a Lecturer in Philosophy at the University of York. Annette has focused a lot of her work on exploring the overlap between algorithms, society and governance, and I had the chance to sit down with her to discuss her views on bias in machine learning, algorithmic fairness, and the big picture of AI ethics.

2020-11-04
Link to episode

55. Rohin Shah - Effective altruism, AI safety, and learning human preferences from the state of the world

If you walked into a room filled with objects that were scattered around somewhat randomly, how important or expensive would you assume those objects were?

What if you walked into the same room, and instead found those objects carefully arranged in a very specific configuration that was unlikely to happen by chance?

These two scenarios hint at something important: human beings have shaped our environments in ways that reflect what we value. You might just learn more about what I value by taking a 10-minute stroll through my apartment than by spending 30 minutes talking to me as I try to put my life philosophy into words.

And that's a pretty important idea, because as it turns out, one of the most important challenges in advanced AI today is finding ways to communicate our values to machines. If our environments implicitly encode part of our value system, then we might be able to teach machines to observe it, and learn about our preferences without our having to express them explicitly.

The idea of deriving human values from the state of a human-inhabited environment was first developed in a paper co-authored by Berkeley PhD and incoming DeepMind researcher Rohin Shah. Rohin has spent the last several years working on AI safety, and publishes the widely read AI alignment newsletter. He was kind enough to join us for this episode of the Towards Data Science podcast, where we discussed his approach to AI safety, and his thoughts on risk mitigation strategies for advanced AI systems.

2020-10-28
Link to episode

54. Tim Rocktäschel - Deep reinforcement learning, symbolic learning and the road to AGI

Reinforcement learning can do some pretty impressive things. It can optimize ad targeting, help run self-driving cars, and even win StarCraft games. But current RL systems are still highly task-specific. Tesla's self-driving car algorithm can't win at StarCraft, and DeepMind's AlphaZero algorithm can win Go matches against grandmasters, but can't optimize your company's ad spend.

So how do we make the leap from narrow AI systems that leverage reinforcement learning to solve specific problems, to more general systems that can orient themselves in the world? Enter Tim Rocktäschel, a Research Scientist at Facebook AI Research London and a Lecturer in the Department of Computer Science at University College London. Much of Tim's work has been focused on ways to make RL agents learn with relatively little data, using strategies known as sample-efficient learning, in the hopes of improving their ability to solve more general problems. Tim joined me for this episode of the podcast.

2020-10-15
Link to episode

53. Edouard Harris - Emerging problems in machine learning: making AI "good"

Where do we want our technology to lead us? How are we falling short of that target? What risks might advanced AI systems pose to us in the future, and what potential do they hold? And what does it mean to build ethical, safe, interpretable, and accountable AI that's aligned with human values?

That's what this year is going to be about for the Towards Data Science podcast. I hope you join us for that journey, which starts today with an interview with my brother Ed who, apart from being a colleague who's worked with me as part of a small team to build the SharpestMinds data science mentorship program, is also collaborating with me on a number of AI safety, alignment and policy projects. I thought he'd be a perfect guest to kick off this new year for the podcast.

2020-10-08
Link to episode

52. Sanyam Bhutani - Networking like a pro in data science

Networking is the most valuable career advancement skill in data science. And yet, almost paradoxically, most data scientists don't spend any time on it at all. In some ways, that's not terribly surprising: data science is a pretty technical field, and technical people often prefer not to go out of their way to seek social interactions. We tend to think of networking with other "primates who code" as a distraction at best, and an anxiety-inducing nightmare at worst.

So how can data scientists overcome that anxiety, tap into the value of network-building, and develop a brand for themselves in the data science community? That's the question that brings us to this episode of the podcast. To answer it, I spoke with repeat guest Sanyam Bhutani, a top Kaggler, host of the Chai Time Data Science Show, and Machine Learning Engineer and AI Content Creator at H2O.ai, about the unorthodox networking strategies that he's leveraged to become a fixture in the machine learning community, and to land his current role.

2020-09-23
Link to episode

51. Adrien Treuille and Tim Conkling - Streamlit Is All You Need

We've talked a lot about "full stack" data science on the podcast. To many, going full-stack is one of those long-term goals that we never get to. There are just too many algorithms and data structures and programming languages to know, and not enough time to figure out software engineering best practices around deployment and building app front-ends.

Fortunately, a new wave of data science tooling is now making full-stack data science much more accessible by allowing people with no software engineering background to build data apps quickly and easily. And arguably no company has had more explosive success at building this kind of tooling than Streamlit, which is why I wanted to sit down with Streamlit founder Adrien Treuille and gamification expert Tim Conkling to talk about their journey, and the importance of building flexible, full-stack data science apps.
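
For a sense of just how little code a full data app can take, here's a minimal Streamlit example (a generic sketch of the tool's style, not something from the episode):

```python
# app.py: a complete interactive data app in a few lines of Streamlit.
# Run with `streamlit run app.py`; no front-end code required.
import numpy as np
import pandas as pd
import streamlit as st

st.title("Random walk explorer")
steps = st.slider("Number of steps", min_value=10, max_value=1000, value=100)

walk = pd.DataFrame({"position": np.random.randn(steps).cumsum()})
st.line_chart(walk)  # Streamlit re-runs the script and re-renders on each input change
```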

2020-09-16
Link to episode

50. Ken Jee - Building your brand in data science

It's no secret that data science is an area where brand matters a lot.

In fact, if there's one thing I've learned from A/B testing ways to help job-seekers get hired at SharpestMinds, it's that blogging, having a good presence on social media, making open-source contributions, podcasting and speaking at meetups are among the best ways to get noticed by employers.

Brand matters. And if there's one person who has a deep understanding of the value of brand in data science, and how to build one, it's data scientist and YouTuber Ken Jee. Ken not only has experience as a data scientist and sports analyst, having worked at DraftKings and GE, but he's also founded a number of companies, and his YouTube channel, with over 60,000 subscribers, is one of his main projects today.

For today's episode, I spoke to Ken about brand-building strategies in data science, as well as job search tips for anyone looking to land their first data-related role.

2020-09-09
Link to episode

49. Catherine Zhou - The data science of learning

If you're interested in upping your coding game, or your data science game in general, then it's worth taking some time to understand the process of learning itself.

And if there's one company that's studied the learning process more than almost anyone else, it's Codecademy. With over 65 million users, Codecademy has developed a deep understanding of what it takes to get people to learn how to code, which is why I wanted to speak to their Head of Data Science, Cat Zhou, for this episode of the podcast.

2020-09-02
Link to episode

48. Emmanuel Ameisen - Beyond the jupyter notebook: how to build data science products

Data science is about much more than Jupyter notebooks, because data science problems are about more than machine learning.

What data should I collect? How good does my model need to be to be "good enough" to solve my problem? What form should my project take for it to be useful? Should it be a dashboard, a live app, or something else entirely? How do I deploy it? How do I make sure something awful and unexpected doesn't happen when it's deployed in production?

None of these questions can be answered by importing sklearn and pandas and hacking away in a Jupyter notebook. Data science problems take a unique combination of business savvy and software engineering know-how, and that's why Emmanuel Ameisen wrote a book called Building Machine Learning Powered Applications: Going from Idea to Product. Emmanuel is a machine learning engineer at Stripe, and formerly worked as Head of AI at Insight Data Science, where he oversaw the development of dozens of machine learning products.

Our conversation was focused on the missing links in most online data science education: business instinct, data exploration, model evaluation and deployment.

2020-08-26
Link to episode

47. Goku Mohandas - Industry research and how to show off your projects

Project-building is the single most important activity that you can get up to if you're trying to keep your machine learning skills sharp or break into data science. But a project won't do you much good unless you can show it off effectively and get feedback to iterate on it, and until recently, there weren't many places you could turn to do that.

A recent open-source initiative called MadeWithML is trying to change that, by creating an easily shareable repository of crowdsourced data science and machine learning projects. Its creator, former Apple ML researcher and startup founder Goku Mohandas, sat down with me for this episode of the TDS podcast to discuss data science projects, his experiences doing research in industry, and the MadeWithML project.

2020-08-19
Link to episode

46. Ihab Ilyas - Data cleaning is finally being automated

It's cliché to say that data cleaning accounts for 80% of a data scientist's job, but it's directionally true.

That's too bad, because fun things like data exploration, visualization and modelling are the reason most people get into data science. So it's a good thing that there's a major push underway in industry to automate data cleaning as much as possible.

One of the leaders of that effort is Ihab Ilyas, a professor at the University of Waterloo and founder of two companies, Tamr and Inductiv, both of which are focused on the early stages of the data science lifecycle: data cleaning and data integration. Ihab knows an awful lot about data cleaning and data engineering, and has some really great insights to share about the future direction of the space, including what work is left for data scientists once you automate away data cleaning.

2020-08-12
Link to episode

45. Kenny Ning - Is data science merging with data engineering?

There's been a lot of talk about the future direction of data science, and for good reason. The space is finally coming into its own, and as the Wild West phase of the mid-2010s well and truly comes to an end, there's keen interest among data professionals to stay ahead of the curve, and understand what their jobs are likely to look like 2, 5 and 10 years down the road.

And amid all the noise, one trend is clearly emerging, and has already materialized to a significant degree: as more and more of the data science lifecycle is automated or abstracted away, data professionals can afford to spend more time adding value to companies in more strategic ways. One way to do this is to invest your time deepening your subject matter expertise, and mastering the business side of the equation. Another is to double down on technical skills, and focus on owning more and more of the data stack, particularly the productionization and deployment stages.

My guest for today's episode of the Towards Data Science podcast has been down both of these paths, first as a business-focused data scientist at Spotify, where he spent his time defining business metrics and evaluating products, and second as a data engineer at Better.com, where his focus has shifted towards productionization and engineering. During our chat, Kenny shared his insights about the relative merits of each approach, and the future of the field.

2020-08-05
Link to episode

44. Jakob Foerster - Multi-agent reinforcement learning and the future of AI

Reinforcement learning has gotten a lot of attention recently, thanks in large part to systems like AlphaGo and AlphaZero, which have highlighted its immense potential in dramatic ways. And while the RL systems we've developed have accomplished some impressive feats, they've done so in a fairly naive way. Specifically, they haven't tended to confront multi-agent problems, which require collaboration and competition. But even when multi-agent problems have been tackled, they've been addressed using agents that just assume other agents are an uncontrollable part of the environment, rather than entities with rich internal structures that can be reasoned and communicated with.

That's all finally changing, with new research into the field of multi-agent RL, led in part by OpenAI, Oxford and Google alum, and current FAIR research scientist, Jakob Foerster. Jakob's research is aimed specifically at understanding how reinforcement learning agents can learn to collaborate better and navigate complex environments that include other agents, whose behavior they try to model. In essence, Jakob is working on giving RL agents a theory of mind.

2020-07-29
Link to episode

43. Ian Scott - Data science at Deloitte

Data science can look very different from one company to the next, and it's generally difficult to get a consistent opinion on the question of what a data scientist really is.

That's why it's so important to speak with data scientists who apply their craft at different organizations, from startups to enterprises. Getting exposure to the full spectrum of roles and responsibilities that data scientists are called on to execute is the only way to distill data science down to its essence.

That's why I wanted to chat with Ian Scott, Chief Science Officer at Deloitte Omnia, Deloitte's AI practice. Ian was doing data science as far back as the late 1980s, when he was applying statistical modeling to data from experimental high energy physics as part of his PhD work at Harvard. Since then, he's occupied strategic roles at a number of companies, most recently including Deloitte, where he leads significant machine learning and data science projects.

2020-07-22
Link to episode

42. Will Grathwohl - Energy-based models and the future of generative algorithms

Machine learning in grad school and machine learning in industry are very different beasts. In industry, deployment and data collection become key, and the only thing that matters is whether you can deliver a product that real customers want, fast enough to meet internal deadlines. In grad school, there's a different kind of pressure, focused on algorithm development and novelty. It's often difficult to know which path you might be best suited for, but that's why it can be so useful to speak with people who've done both, and bonus points if their academic research experience comes from one of the top universities in the world.

For today's episode of the Towards Data Science podcast, I sat down with Will Grathwohl, a PhD student at the University of Toronto, student researcher at Google AI, and alum of MIT and OpenAI. Will has seen cutting-edge machine learning research in industry and academic settings, and has some great insights to share about the differences between the two environments. He's also recently published an article on the fascinating topic of energy-based models, in which he and his co-authors propose a unique way of thinking about generative models that achieves state-of-the-art performance in computer vision tasks.

2020-07-15
Link to episode

41. Solmaz Shahalizadeh - Data science in high-growth companies

One of the themes that I've seen come up increasingly in the past few months is the critical importance of product thinking in data science. As new and aspiring data scientists deepen their technical skill sets and invest countless hours doing practice problems on leetcode, product thinking has emerged as a pretty serious blind spot for many applicants. That blind spot has become increasingly critical as new tools have emerged that abstract away a lot of what used to be the day-to-day gruntwork of data science, allowing data scientists more time to develop subject matter expertise and focus on the business value side of the product equation.

If there's one company that's made a name for itself by leading the way on product-centric thinking in data science, it's Shopify. And if there's one person at Shopify who's spent the most time thinking about product-centered data science, it's Shopify's Head of Data Science and Engineering, Solmaz Shahalizadeh. Solmaz has had an impressive career arc, which included joining Shopify in its pre-IPO days, back in 2013, and seeing the Shopify data science team grow from a handful of people to a pivotal organization-wide effort that tens of thousands of merchants rely on to earn a living today.

2020-07-08
Link to episode

40. David Meza - Data science at NASA

Machine learning isn't rocket science, unless you're doing it at NASA. And if you happen to be doing data science at NASA, you have something in common with David Meza, my guest for today's episode of the podcast.

David has spent his NASA career focused on optimizing the flow of information through NASA's many databases, and ensuring that that data is harnessed with machine learning and analytics. His current focus is on people analytics, which involves tracking the skills and competencies of employees across NASA, to detect people who have abilities that could be used in new or unexpected ways to meet needs that the organization has or might develop.

2020-07-01
Link to episode

39. Nick Pogrebnyakov - Data science at Reuters, and remote work after the coronavirus

Nick Pogrebnyakov is a Senior Data Scientist at Thomson Reuters, an Associate Professor at Copenhagen Business School, and the founder of Leverness, a marketplace where experienced machine learning developers can find contract work with companies. He's a busy man, but he agreed to sit down with me for today's TDS podcast episode, to talk about his day job at Reuters, as well as the machine learning and data science job landscape.

2020-06-24
Link to episode

38. Matthew Stewart - Data privacy and machine learning in environmental science

One Thursday afternoon in 2015, I got a spontaneous notification on my phone telling me how long it would take to drive to my favourite restaurant under current traffic conditions. This was alarming, not only because it implied that my phone had figured out what my favourite restaurant was without ever asking explicitly, but also because it suggested that my phone knew enough about my eating habits to realize that I liked to go out to dinner on Thursdays specifically.

As our phones, our laptops and our Amazon Echos collect increasing amounts of data about us (and impute even more), data privacy is becoming a greater and greater concern for research as well as government and industry applications. That's why I wanted to speak to Harvard PhD student and frequent Towards Data Science contributor Matthew Stewart, to get an introduction to some of the key principles behind data privacy. Matthew is a prolific blogger, and his research work at Harvard is focused on applications of machine learning to environmental sciences, a topic we also discuss during this episode.

2020-06-17
Link to episode

37. Sean Knapp - The brave new world of data engineering

There's been a lot of talk in data science circles about techniques like AutoML, which are dramatically reducing the time it takes for data scientists to train and tune models, and create reliable experiments. But that trend towards increased automation, greater robustness and reliability doesn't end with machine learning: increasingly, companies are focusing their attention on automating earlier parts of the data lifecycle, including the critical task of data engineering.

Today, many data engineers are unicorns: they not only have to understand the needs of their customers, but also how to work with data, and what software engineering tools and best practices to use to set up and monitor their pipelines. Pipeline monitoring in particular is time-consuming, and just as important, isn't a particularly fun thing to do. Luckily, people like Sean Knapp, a former Googler turned founder of the data engineering startup Ascend.io, are leading the charge to make automated data pipeline monitoring a reality.

We had Sean on this latest episode of the Towards Data Science podcast to talk about data engineering: where it's at, where it's going, and what data scientists should really know about it to be prepared for the future.

2020-06-10
Link to episode

36. Max Welling - The future of machine learning

For the last decade, advances in machine learning have come from two things: improved compute power and better algorithms. These two areas have become somewhat siloed in most people's thinking: we tend to imagine that there are people who build hardware, and people who make algorithms, and that there isn't much overlap between the two.

But this picture is wrong. Hardware constraints can and do inform algorithm design, and algorithms can be used to optimize hardware. Increasingly, compute and modelling are being optimized together, by people with expertise in both areas.

My guest today is one of the world's leading experts on hardware/software integration for machine learning applications. Max Welling is a former physicist and currently works as VP Technologies at Qualcomm, a world-leading chip manufacturer, in addition to which he's also a machine learning researcher with affiliations at UC Irvine, CIFAR and the University of Amsterdam.

2020-06-03
Link to episode

35. Rubén Harris - Learning and looking for jobs in quarantine

Coronavirus quarantines fundamentally change the dynamics of learning, and the dynamics of the job search. Just a few months ago, in-person bootcamps and college programs, and live networking events where people exchanged handshakes and business cards, were the way the world worked. Now, no longer. With that in mind, many aspiring techies are asking themselves how they should be adjusting their game plan to keep up with learning or land that next job, given the constraints of an ongoing pandemic and impending economic downturn.

That's why I wanted to talk to Rubén Harris, CEO and co-founder of Career Karma, a startup that helps aspiring developers find the best coding bootcamp for them. He's got a great perspective to share on the special psychological and practical challenges of navigating self-learning and the job search, and he was kind enough to make the time to chat with me for this latest episode of the Towards Data Science podcast.

2020-05-27
Link to episode

34. Denise Gosnell and Matthias Broecheler - You should really learn about graph databases. Here's why.

One great way to get ahead in your career is to make good bets on what technologies are going to become important in the future, and to invest time in learning them. If that sounds like something you want to do, then you should definitely be paying attention to graph databases.

Graph databases aren't exactly new, but they've become increasingly important as graph data (data that describe interconnected networks of things) has become more widely available than ever. Social media, supply chains, mobile device tracking, economics and many more fields are generating more graph data than ever before, and buried in these datasets are potential solutions for many of our biggest problems.
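
To make "interconnected networks of things" concrete, here's a tiny illustrative sketch of mine (not from the book or the episode) using the networkx library; dedicated graph databases apply the same idea at enterprise scale:

```python
# Graph data in miniature: nodes are entities, edges are relationships.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"),       # e.g., a social tie or a supply-chain link
    ("bob", "carol"),
    ("carol", "dave"),
])

# Questions that are awkward to express over tables become one-liners on a graph:
print(nx.shortest_path(g, "alice", "dave"))   # ['alice', 'bob', 'carol', 'dave']
```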

That's why I was so excited to speak with Denise Gosnell and Matthias Broecheler, respectively the Chief Data Officer and Chief Technologist at DataStax, a company specializing in solving data engineering problems for enterprises. Apart from their extensive experience working with graph databases at DataStax, Denise and Matthias have also recently written a book called The Practitioner's Guide to Graph Data, and were kind enough to make the time for a discussion about the basics of data engineering and graph data for this episode of the Towards Data Science Podcast.

2020-05-20
Link to episode

33. Roland Memisevic - Machines that can see and hear

One of the most interesting recent trends in machine learning has been the combination of different types of data in order to be able to unlock new use cases for deep learning. If the 2010s were the decade of computer vision and voice recognition, the 2020s may very well be the decade we finally figure out how to make machines that can see and hear the world around them, making them that much more context-aware and potentially even humanlike.

The push towards integrating diverse data sources has received a lot of attention, from academics as well as companies. One of those companies is Twenty Billion Neurons, whose founder, Roland Memisevic, is our guest for this latest episode of the Towards Data Science podcast. Roland is a former academic who's been knee-deep in deep learning since well before the hype that was sparked by AlexNet in 2012. His company has been working on deep learning-powered developer tools, as well as an automated fitness coach that combines video and audio data to keep users engaged throughout their workout routines.

2020-05-13
Link to episode

32. Bahador Khaleghi - Explainable AI and AI interpretability

If I were to ask you to explain why you're reading this blog post, you could answer in many different ways.

For example, you could tell me "it's because I felt like it", or "because my neurons fired in a specific way that led me to click on the link that was advertised to me". Or you might go even deeper and relate your answer to the fundamental laws of quantum physics.

The point is, explanations need to be targeted to a certain level of abstraction in order to be effective.

That's true in life, but it's also true in machine learning, where explainable AI is getting more and more attention as a way to ensure that models are working properly, in a way that makes sense to us. Understanding explainability and how to leverage it is becoming increasingly important, and that's why I wanted to speak with Bahador Khaleghi, a data scientist at H2O.ai whose technical focus is on explainability and interpretability in machine learning.

2020-05-06
Link to episode

31. Russell Pollari - Building habits and breaking into data science

Most of us want to change our identities. And we usually have an idealized version of ourselves that we aspire to become: one who's fitter, smarter, healthier, more famous, wealthier, more centered, or whatever.

But you can't change your identity in a fundamental way without also changing what you do in your day-to-day life. You don't get fitter without working out regularly. You don't get smarter without studying regularly.

To change yourself, you must first change your habits. But how do you do that?

Recently, books like Atomic Habits and Deep Work have focused on answering that question in general terms, and they're definitely worth reading. But habit formation in the context of data science, analytics, machine learning, and startups comes with a unique set of challenges, and deserves attention in its own right. And that's why I wanted to sit down with today's guest, Russell Pollari.

Russell may now be the CTO of the world's largest marketplace for income share mentorships (and the very same company I work at every day!) but he was once, and not too long ago, a physics PhD student with next to no coding ability and a classic case of the grad school blues. To get to where he is today, he's had to learn a lot, and in his quest to optimize that process, he's focused a lot of his attention on habit formation and self-improvement in the context of tech, data science and startups.

2020-04-29
Link to episode

30. Interviewing the Medium data science team

Revenues drop unexpectedly, and management pulls aside the data science team into a room. The team is given its marching orders: "your job," they're told, "is to find out what the hell is going on with our purchase orders."

That's a very open-ended question, of course, because revenues and signups could drop for any number of reasons. Prices may have increased. A new user interface might be confusing potential customers. Seasonality effects might have to be considered. The source of the problem could be, well, anything.

That's often the position data scientists find themselves in: rather than having a clear A/B test to analyze, they're frequently in the business of combing through user funnels to ensure that each stage is working as expected.

It takes a very detail-oriented and business-savvy team to pull off an investigation with that broad a scope, but that's exactly what Medium has: a group of product-minded data scientists dedicated to investigating anomalies and identifying growth opportunities hidden in heaps of user data. They were kind enough to chat with me and talk about how Medium does data science for this episode of the Towards Data Science podcast.

2020-04-22
Link to episode

29. Cameron Davidson-Pilon - Data science at Shopify

If you want to know where data science is heading, it helps to know where it's been. Very few people have that kind of historical perspective, and even fewer combine it with an understanding of cutting-edge tooling that hints at the direction the field might be taking in the future.

Luckily for us, one of them is Cameron Davidson-Pilon, the former Director of Data Science at Shopify. Cameron has been knee-deep in data science and estimation theory since 2012, when the space was still coming into its own. He's got a great high-level perspective not only on technical issues but also on hiring and team-building, and he was kind enough to join us for today's episode of the Towards Data Science podcast.

2020-04-15
Link to episode

28. Emily Robinson - Building a Career in Data Science

It's easy to think of data science as a purely technical discipline: after all, it exists at the intersection of a number of genuinely technical topics, from statistics to programming to machine learning.

But there's much more to data science and analytics than solving technical problems, and there's much more to the data science job search than coding challenges and Kaggle competitions as well. Landing a job or a promotion as a data scientist calls on a ton of career skills and soft skills that many people don't spend nearly enough time honing.

On this episode of the podcast, I spoke with Emily Robinson, an experienced data scientist and blogger with a pedigree that includes Etsy and DataCamp, about career-building strategies. Emily's got a lot to say on the topic, particularly since she just finished writing a book entitled "Build a Career in Data Science" with her co-author Jacqueline Nolis. The book explores a lot of great, practical strategies for moving data science careers forward, many of which we discussed during our conversation.

2020-04-07
Link to episode

27. Alayna Kennedy - AI safety, AI ethics and the AGI debate

Most of us believe that decisions that affect us should be made rationally: they should be reached by following a reasoning process that combines data we trust with a logic that we find acceptable.

As long as human beings are making these decisions, we can probe at that reasoning to find out whether we agree with it. We can ask why we were denied that bank loan, or why a judge handed down a particular sentence, for example.

Today, however, machine learning is automating away more and more of these important decisions, and as a result our lives are increasingly governed by decision-making processes that we can't interrogate or understand. Worse, machine learning algorithms can exhibit bias or make serious mistakes, so a black-box-ocracy risks becoming more dystopian than even the most imperfect human-designed systems we have today.
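
One way to keep such decisions interrogable is to favor models whose reasoning can be read off directly. Here's a minimal sketch of that idea (an illustration of the general point, not anything from the episode, with made-up toy data): with a linear model like logistic regression, each feature's contribution to a loan decision can be reported as a "reason code".

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan data: [income (in $10k), debt-to-income ratio, years of credit history]
X = np.array([[5, 0.4, 2], [9, 0.1, 10], [3, 0.6, 1],
              [7, 0.2, 6], [4, 0.5, 3], [8, 0.15, 8]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = loan approved

model = LogisticRegression().fit(X, y)

# For one applicant, each feature's contribution to the decision is
# just coefficient * value, which can be read out as a reason code.
applicant = np.array([4, 0.55, 2])
contributions = model.coef_[0] * applicant
for name, c in zip(["income", "debt_ratio", "history"], contributions):
    print(f"{name}: {c:+.2f}")
print("approval probability:", model.predict_proba([applicant])[0, 1])
```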

That's why AI ethics and AI safety have drawn so much attention in recent years, and why I was so excited to talk to Alayna Kennedy, a data scientist at IBM whose work is focused on the ethics of machine learning, and the risks associated with ML-based decision-making. Alayna has consulted with key players in the US government's AI effort, and has expertise applying machine learning in industry as well, through previous work on neural network modelling and fraud detection.

2020-03-30
Link to episode

26. Jeremy Howard - Coronavirus: the data behind the disease

In mid-January, China launched an official investigation into a string of unusual pneumonia cases in Hubei province. Within two months, that cluster of cases would snowball into a full-blown pandemic, with hundreds of thousands (perhaps even millions) of infections worldwide and the potential to unleash a wave of economic damage not seen since the 1918 Spanish influenza or the Great Depression.

The exponential growth that led us from a few isolated infections to where we are today is profoundly counterintuitive. It poses many challenges for the epidemiologists who need to pin down the transmission characteristics of the coronavirus, and for the policy makers who must act on their recommendations and convince a generally complacent public to implement life-saving social distancing measures.
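
To see just how counterintuitive that growth is, consider a back-of-the-envelope calculation (the doubling time here is purely illustrative, not an estimate from the episode): if cases double every five days, a hundred cases become over a hundred thousand in under two months.

```python
# Back-of-the-envelope exponential growth: cases(t) = c0 * 2**(t / doubling_time)
c0 = 100            # initial confirmed cases
doubling_time = 5   # days per doubling (illustrative only)

for day in range(0, 61, 10):
    cases = c0 * 2 ** (day / doubling_time)
    print(f"day {day:2d}: ~{cases:,.0f} cases")
```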

With the pandemic in full swing, I thought now would be a great time to reach out to Jeremy Howard, co-founder of the incredibly popular Fast.ai machine learning education site. Along with his co-founder Rachel Thomas, Jeremy authored a now-viral report outlining a data-driven case for concern regarding the coronavirus.

2020-03-20
Link to episode

25. Chris Parmer - Plotly founder on what data science is, and where it's going

It's easy to think of data scientists as "people who explore and model data". But in reality, the job description is much more flexible: your job as a data scientist is to solve problems that people actually have with data.

You'll notice that I wrote "problems that people actually have" rather than "build models". It's relatively rare that the problems people have actually need to be solved using a predictive model. Instead, a good visualization or interactive chart is almost always the first step of the problem-solving process, and can often be the last as well.
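
As a concrete example of how little code that first step takes, here's a minimal sketch using Plotly's plotly.express API (the bundled Gapminder demo dataset is chosen arbitrarily, not taken from the episode):

```python
import plotly.express as px

# Load one of Plotly's bundled demo datasets and draw an interactive
# scatter plot: hover, zoom, and pan come for free.
df = px.data.gapminder().query("year == 2007")
fig = px.scatter(
    df, x="gdpPercap", y="lifeExp",
    size="pop", color="continent",
    hover_name="country", log_x=True,
)
fig.show()
```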

And you know who understands visualization strategy really, really well? Plotly, that's who. Plotly is a company that builds a ton of great open-source visualization, exploration and data infrastructure tools (and some proprietary commercial ones, too). Today, their tooling is used by over 50 million people worldwide, and they've developed a number of tools and libraries that are now industry standard. So you can imagine how excited I was to speak with Plotly co-founder and Chief Product Officer Chris Parmer.

Chris had some great insights to share about data science and analytics tooling, including the future direction he sees the space moving in. But as his job title suggests, he's also focused on another key characteristic that all great data scientists develop early on: product instinct (AKA "knowing what to build next").

2020-03-18
Link to episode

24. Xander Steenbrugge - Machine learning as a creative tool, and the quest for artificial general intelligence

Most machine learning models are used in roughly the same way: they take a complex, high-dimensional input (like a data table, an image, or a body of text) and return something very simple (a classification or regression output, or a set of cluster centroids). That makes machine learning ideal for automating repetitive tasks that might historically have been carried out only by humans.
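
To make that input-output asymmetry concrete, here's a toy sketch using scikit-learn's bundled digits dataset (my own illustration, not anything discussed on the episode): a complex 64-dimensional image goes in, and a single digit label comes out.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

# Complex input (an 8x8 image flattened to 64 numbers) in,
# very simple output (one of ten digit labels) out.
X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=2000).fit(X[:-10], y[:-10])
print(model.predict(X[-10:]))  # ten held-out images -> ten labels
```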

But this strategy may not be the most exciting application of machine learning in the future: increasingly, researchers and even industry players are experimenting with generative models that produce much more complex outputs, like images and text, from scratch. These models are effectively carrying out a creative process, and mastering that process hugely widens the scope of what can be accomplished by machines.
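
As a rough illustration of that inversion (a short prompt in, a complex open-ended output out), here's a sketch using Hugging Face's transformers pipeline; the choice of GPT-2 is just a small, convenient default, not a model from the episode:

```python
from transformers import pipeline

# A generative model flips the usual mapping: simple prompt in,
# complex, open-ended body of text out.
generator = pipeline("text-generation", model="gpt2")
result = generator("The future of machine learning is", max_length=40)
print(result[0]["generated_text"])
```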

My guest today is Xander Steenbrugge, and his focus is on the creative side of machine learning. In addition to consulting with large companies to help them put state-of-the-art machine learning models into production, he's focused a lot of his work on more philosophical and interdisciplinary questions, including the interaction between art and machine learning. For that reason, our conversation went in an unusually philosophical direction, covering everything from the structure of language, to what makes natural language comprehension more challenging than computer vision, to the emergence of artificial general intelligence, and how all these things connect to the current state of the art in machine learning.

2020-03-10
Link to episode

23. Iain Harlow - Leaving academia for industry and optimizing how you learn

I can't remember how many times I've forgotten something important.

I'm sure it's a regular occurrence though: I constantly forget valuable life lessons, technical concepts and useful bits of statistical theory. What's worse, I often forget these things after working bloody hard to learn them, so my forgetfulness is just a giant waste of time and energy.

That's why I jumped at the chance to chat with Iain Harlow, VP of Science at Cerego, a company that helps businesses build training courses for their employees by optimizing the way information is served to maximize retention and learning outcomes.
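
Cerego's scheduling engine is proprietary, but the general idea behind retention-optimized review is spaced repetition. Here's a minimal sketch of one classic approach, an SM-2-style interval update (my own illustration, not Cerego's algorithm):

```python
def next_interval(interval_days: float, ease: float, quality: int) -> tuple[float, float]:
    """SM-2-style update: grow the review interval when recall succeeds,
    reset it when recall fails. quality is a 0-5 self-rating."""
    if quality < 3:                      # recall failed: start over
        return 1.0, ease
    # Recall succeeded: adjust the ease factor, then stretch the interval.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days * ease, ease

# Example: an item reviewed successfully three times in a row.
interval, ease = 1.0, 2.5
for q in [5, 4, 5]:
    interval, ease = next_interval(interval, ease, q)
    print(f"next review in ~{interval:.1f} days (ease {ease:.2f})")
```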

Iain knows a lot about learning and has some great insights to share about how you can optimize your own learning, but he's also got a lot of expertise solving data science problems and hiring data scientists, two things he focuses on in his work at Cerego. He's a veteran of the academic world as well, and has some interesting observations to share about the difference between research in academia and research in industry.

2020-03-03
Link to episode