Stanford Digital Economy Lab / February 26, 2026

Q&A | Demystifying Machine Learning with Tom Mitchell

Tom Mitchell’s new podcast isn’t so much a lesson in machine learning as it is a celebration of the people and ideas behind it.
In our Q&A, he discusses why he wanted to honor the “passion, curiosity, and humanity” of the pioneers of machine learning.

by Matty Smith

Tom Mitchell is a Digital Fellow at the Stanford Digital Economy Lab and Founders University Professor at Carnegie Mellon University, where he founded the world’s first Machine Learning Department. He is also the author of Machine Learning, a foundational textbook for the field.

Machine Learning: How Did We Get Here? is his new podcast, featuring a series of conversations on machine learning history with Nobel Prize winners, groundbreaking researchers, and industry leaders.

 

How would you describe your background in the field of machine learning?

 

I’ve done research on machine learning since my days as a PhD student at Stanford in the 1970s, both inventing new approaches and applying machine learning to different problems. Two of my favorite applications have been applying it to brain image data (to learn to decode which noun you’re thinking about from your fMRI brain image), and applying it to online education (where our algorithms were able to learn which hint to provide to students stuck on a problem–and they have now served hints to millions of online students!).

 

What is it about right now that made it a good time to finally do the podcast?

 

A lot of people hear about machine learning in the news and are curious. I hope the podcast can demystify machine learning a bit, and also show the human face of some of the individuals who made big contributions.

 

What would you say is the audience for the podcast: long-time researchers, those who may be familiar with AI but want a deeper dive, or both?

 

I’ve thought about this question a lot, and tried to build episodes that would be understandable to everybody, but still have some surprises for professionals working in AI.   

 

Are there any particular moments, stories, or details that came up in your conversations that stand out?

 

There are a lot of memorable quotes and stories in the episode conversations, but what stands out to me are the personalities. In the interviews you can see their passion for what they were doing, their curiosity, their humanity.  

You can see that they really were having fun as they put in the hours, months, and years working on the question of how to make machines learn. And you get a flavor of the camaraderie of the research community–everybody was in it together, sharing openly what they were discovering.

 

In your opening lecture (episode one), you discuss questioning authority as an important theme throughout machine learning’s history. Can you explain that a bit in the context of your conversations?

 

It’s important to ask what we should learn from the history of machine learning.  I think one of the most important lessons is that big advances often came from people questioning the conventional wisdom of the field, and even redefining the problem–questioning authority.

For example, one of the podcast episodes features Dean Pomerleau who was interested in self-driving cars during the 1980s.  The conventional wisdom at the time was that we should manually construct the computer programs to drive, and that’s what researchers were all doing.  But as a PhD student Dean said no–I’m going to try training a neural network to do it.  He did, and he ended up far outperforming the state of the art at the time, and changing the direction of research on that problem.

Another podcast episode features Kai-Fu Lee, who as a PhD student working on speech recognition decided to take a then-novel machine learning approach, applying Hidden Markov Models. Again, he changed the direction of that field. I think one of the most dangerous things a newcomer to the field can do is to blindly accept the prevailing problem definitions and paradigms–doing that robs you of the opportunity to take a fresh look at what is possible.

“What stands out to me are the personalities. In the interviews you can see their passion for what they were doing, their curiosity, their humanity.”

Tom Mitchell, Digital Fellow

Can you explain what you mean when you say machine learning is a blend of “technical forces and social forces?”

 

When I say the field of machine learning is driven by “technical forces,” I mean that the cold hard facts of how well different machine learning methods perform–how accurately they can learn–really do drive where the field goes. In fact, one of the fortunate features of our field is that researchers share datasets, so that groups with different approaches can acquire those cold hard facts and compare their performance on the same data. But there are also significant social forces at work. Well-known senior researchers get their papers read more often than others, and when they suggest a particular question to work on or a particular approach, the field is more likely to follow them.

Beyond that, part of getting one’s ideas accepted by other researchers is being a good communicator and persuader.  After all, in the end the research community is a group of human beings with all of the incentives and social forces at work that we see in many other groups.  Now it’s interesting to ask whether these social forces help or hinder progress in the field.  I think the social forces serve a role–we probably should listen a bit more to the researchers who have a track record of being successful (i.e., the senior researchers), than to others.  But taking their suggestions as gospel without questioning them is clearly unwise.  

Overall, I think the combination of technical and social forces is a good thing. It’s the technical forces and cold hard facts that win out in the long term, but it seems the social forces have a strong influence on what problems and approaches the field focuses on in the short term. And social interactions with other researchers certainly make research a lot more fun!

 

Why did you start your lecture by discussing philosophers from long before “machine learning” existed?

 

Long before we thought “machine learning” was a thing, philosophers thought “human learning” was a thing to understand. And over the centuries they came to some pretty interesting conclusions about human learning that turn out to apply to machines as well!  The most fundamental idea they came up with is that when people come to a general conclusion like “all swans are white” after looking at just some of the examples, that step cannot really be justified without making some other assumptions — it’s possible of course that there’s a black swan out there that we just haven’t seen yet.  

As the philosopher David Hume observed in the 1700s, people do make this kind of leap to general conclusions, but it is just a habit, not a provably correct logical inference. He was right, and this is just as true when machines do it as when people do. For us humans the habit has been pretty useful, and it’s very interesting to understand why we get away with leaping to general conclusions — what are the implicit assumptions we are making, and when are those actually satisfied? I think we need to ask the same questions about our learning programs, since they are also leaping to general conclusions that go beyond the data they are trained on. Why, when, and how can that be justified? If we understood the answer to this question, I bet we could design better machine learning algorithms.

 

Machine Learning: How Did We Get Here? is available now on podcast platforms such as Apple Podcasts and Spotify, as well as the Lab’s YouTube channel.

 


Tom Mitchell

Digital Fellow

Tom M. Mitchell is the Founders University Professor at Carnegie Mellon University, where he founded the world’s first Machine Learning Department, and served as Interim Dean of the School of Computer Science (2018-2019). He is also a Digital Fellow at the Digital Economy Lab at Stanford. He has worked on machine learning and AI ever since his 1979 Stanford Ph.D., and he remains optimistic about its future. In 2010 Mitchell was elected to the U.S. National Academy of Engineering “For pioneering contributions and leadership in the methods and applications of machine learning.”
