On October 17, 2022, MIT’s Max Tegmark joined Erik Brynjolfsson to present his talk, “Extracting Machine-Learned Knowledge.”
If a human uses her biological neural network to figure something out, she can share her acquired knowledge with us through a symbolic representation such as English or mathematics. How can we do the same with knowledge that’s been machine-learned by an artificial neural network? This would be useful for making AI systems more efficient, robust, safe, and trustworthy. I present examples of progress in this regard, where describable patterns are discovered in data — patterns ranging from symbolic formulas to hidden symmetries, modularity, and conservation laws. The methods I present exploit numerous ideas from physics to recursively simplify neural networks, ranging from symmetries to differentiable manifolds, curvature, and topological defects, and also take advantage of mathematical insights from knot theory and graph modularity.
Max Tegmark’s research focuses on precision cosmology, i.e., combining theoretical work with new measurements to place sharp constraints on cosmological models and their free parameters. During his first quarter-century as a physics researcher, this interest led him to work mainly on cosmology and quantum information. Although he continues his cosmology work with the HERA collaboration, the main focus of his current research is the physics of intelligence: using physics-based techniques to better understand biological and artificial intelligence (AI).