Geometry, the 1872 Erlangen programme, quantum machine learning (+ life update)

Hello! I’ve had something of a break from writing. It looks like my last post here was in August 2020.
My excuse is that I recently started in a new role as a data scientist with the UK government, and so I’ve had to prioritise different things. Even so, writing regularly is important to me. I definitely don’t have the time anymore to read every academic paper I think I’d find interesting; I don’t even have the time to read everything relevant to my job. But I have been reading more recently, and there are enough ideas floating around my mind now that I’d like to start sharing them again.
Let’s get started.
A Lie group (pronounced ‘Lee’) is a group that is also a smooth manifold, with the group operations themselves being smooth. By group, we mean a ‘symmetry group’, which is the way that mathematicians describe symmetry, and by manifold, we mean a topological space that locally looks like Euclidean space, but globally may be something more complex.

A useful analogy for this local/global distinction is the surface of the Earth: locally, it looks flat to us, but go up high enough and you start to see that the Earth really is curved. Whether you need to account for that curvature depends on whether you’re interested in local or global properties.
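To make this concrete, here’s a toy sketch of my own (not from any particular source) using the simplest interesting Lie group, SO(2), the rotations of the plane: the group elements are 2×2 rotation matrices, the group operation is matrix multiplication, and the underlying manifold is the circle, parametrised by the rotation angle.

```python
import numpy as np

def rotation(theta):
    """A point on the Lie group SO(2): rotation of the plane by angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

a, b = rotation(0.3), rotation(1.1)

# Group structure: composing two rotations gives another rotation,
# and the angles simply add.
assert np.allclose(a @ b, rotation(0.3 + 1.1))

# Inverses exist: rotating by -theta undoes rotating by theta.
assert np.allclose(a @ rotation(-0.3), np.eye(2))
```

Composing rotations just adds their angles, so locally the group looks like the real line; globally the angle wraps around, which is exactly the circle.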
A far more exotic example is the Lie group below, called E8. E8 is 248-dimensional, but its rank, the dimension of something known as its maximal torus, is 8, hence the name E8.
The figures below (from Wikimedia Commons) show E8 projected into various planes so that we can get a good look at it.
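If you’d like to play with projections like these yourself, the 240 roots of E8 are straightforward to generate: every permutation of (±1, ±1, 0, …, 0), plus every vector (±1/2, …, ±1/2) with an even number of minus signs. Here’s a rough numpy sketch; note that the projection plane below is an arbitrary one of my choosing, not the carefully chosen Petrie projection used in the famous figures.

```python
import itertools
import numpy as np

roots = []

# Type 1: all permutations of (+-1, +-1, 0, ..., 0), giving 112 roots.
for i, j in itertools.combinations(range(8), 2):
    for si in (1.0, -1.0):
        for sj in (1.0, -1.0):
            v = np.zeros(8)
            v[i], v[j] = si, sj
            roots.append(v)

# Type 2: (+-1/2, ..., +-1/2) with an even number of minus signs, 128 roots.
for signs in itertools.product((0.5, -0.5), repeat=8):
    if sum(s < 0 for s in signs) % 2 == 0:
        roots.append(np.array(signs))

roots = np.array(roots)
assert roots.shape == (240, 8)

# Project onto a random plane in R^8 to get a 2D picture of the root system.
rng = np.random.default_rng(0)
plane, _ = np.linalg.qr(rng.normal(size=(8, 2)))
projected = roots @ plane  # shape (240, 2), ready for a scatter plot
```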
Cool, right? Mathematicians are rather fond of E8. It shows up in all sorts of places: on the front cover of a popular textbook, and even in a TED talk where a prominent physicist-slash-surfer claimed that E8 is the key to a new theory of everything. Is it? The short answer is no: E8 will not solve all your problems, and it probably won’t be showing up in the high school physics curriculum anytime soon.
Anyway. I actually want to talk about geometry.
Recently I’ve been interested in using tools from mathematics to help illuminate new ways to think about data science, machine learning, and artificial intelligence. One example of this is topological data analysis, which is the use of ideas from topology to structure data in new, interpretable ways.
For example, there’s a nice application of topological data analysis to understanding brain dynamics using fMRI data.
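To give a flavour of the machinery underneath (a toy sketch of my own, nothing to do with that particular study): the simplest piece of topological data analysis is zero-dimensional persistent homology, which tracks how the connected components of a point cloud merge as you grow balls around each point. Components that survive over a long range of scales suggest genuine structure rather than noise.

```python
import numpy as np

def zero_dim_persistence(points):
    """Death scales of connected components of a point cloud
    (zero-dimensional persistent homology via union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Process pairwise distances in increasing order; each merge of two
    # components "kills" one of them at that distance scale.
    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # a component born at scale 0 dies at scale d
    return deaths

# Two well-separated clusters: the final, outlying death value reflects
# the gap between them.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.1, (10, 2)),
                   rng.normal(5, 0.1, (10, 2))])
print(zero_dim_persistence(cloud)[-1])  # roughly 7, the cluster separation
```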
Another example is geometric deep learning, a subfield of machine learning that uses ideas from differential geometry to develop powerful tools for doing deep learning on manifolds. I had the pleasure of watching a talk by one of the leading academics involved in this project, Michael Bronstein, at the UCL Gatsby Computational Neuroscience Unit, on geometric deep learning and network data science at Twitter.
Bronstein’s big idea is to take formalisms from mathematics, particularly Riemannian geometry and related areas, and use these to help push at the frontier of what deep learning can do. For a 15-minute version of these ideas, I would highly recommend his recent Medium post on the geometric foundations of deep learning and the Erlangen programme, which I thoroughly enjoyed. If you still want more, Bronstein and collaborators have been working on a proto-textbook of these ideas, Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, which has just been published on arXiv. I almost certainly do not have the required technical background to digest it all, at least not yet.
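That said, the core computation in much of graph-based geometric deep learning is easy to state: each node repeatedly aggregates features from its neighbours and passes them through a learned transformation. Here’s a bare-bones sketch of one such message-passing layer; the symmetric normalisation is in the style of Kipf and Welling’s graph convolutional networks, but the code and names are mine, not from Bronstein’s text.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution-style layer: average features over each node's
    neighbourhood with symmetric normalisation, then apply a shared linear
    map and a ReLU nonlinearity."""
    a_hat = adjacency + np.eye(len(adjacency))  # add self-loops
    deg = a_hat.sum(axis=1)
    norm = a_hat / np.sqrt(np.outer(deg, deg))  # D^{-1/2} (A + I) D^{-1/2}
    return np.maximum(norm @ features @ weights, 0.0)

# Tiny example: 4 nodes in a path graph, 3 input features, 2 output features.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
weights = rng.normal(size=(3, 2))
print(gcn_layer(adjacency, features, weights).shape)  # (4, 2)
```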
Through a citation in the above post, I was interested to see the connection between the differential geometry approach to deep learning and Shun-ichi Amari’s work on information geometry. In a recent reading list I made, itself a response to the helpful post ‘A gentle introduction to information geometry’, I quoted Amari, who, in his recent textbook on the subject, called information geometry “a method of exploring the world of information by means of modern geometry”.
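To give the barest flavour of what that means: the central object in information geometry is the Fisher information metric, which turns a family of probability distributions into a Riemannian manifold. As a toy sanity check of my own, for a Bernoulli distribution with parameter θ the Fisher information is 1/(θ(1−θ)), and a Monte Carlo estimate agrees.

```python
import numpy as np

def fisher_information_bernoulli(theta, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of E[(d/dtheta log p(x; theta))^2]
    for a Bernoulli(theta) distribution."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples) < theta  # Bernoulli(theta) samples
    # Score function: d/dtheta log p(x; theta) = x/theta - (1 - x)/(1 - theta)
    score = np.where(x, 1 / theta, -1 / (1 - theta))
    return np.mean(score ** 2)

theta = 0.3
print(fisher_information_bernoulli(theta))  # approximately 4.76
print(1 / (theta * (1 - theta)))            # analytic value: 4.7619...
```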
Initially, I wasn’t especially enthusiastic about the project of information geometry, which seemed like something of a dead-end academic field; in particular, it didn’t seem to have any real connection to modern machine learning. That may be changing now.
Back to geometry.
The 1872 Erlangen programme of Felix Klein, which is discussed with much enthusiasm by Bronstein in the Medium post above, was an influential moment in the history of mathematics. It seems to have done a lot to bring geometry and algebra, especially group theory, closer together.
Felix Klein is a particularly interesting figure for a number of other reasons too. You might have heard of something called a Klein bottle, which is a two-dimensional manifold that is non-orientable, i.e., the inside is indistinguishable from the outside. Klein is also known for working in Göttingen with David Hilbert and Emmy Noether, certainly two of the most interesting mathematicians of the past few centuries, on a better algebraic and geometric understanding of Einstein’s new general theory of relativity. A recent book on this episode in the history of mathematics, The Noether Theorems, helps to place this work in its proper context.
As physicists continue to use advanced mathematics to formulate the laws of physics, and mathematicians increasingly draw inspiration from physics in return, it seems as if the relationship between mathematics and physics has never been stronger. But another relationship I’ve recently noticed growing stronger is that between advanced physics and machine learning.
In The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning, Roberto Bondesan and Max Welling develop a new formalism for machine learning, particularly quantum machine learning, using quantum field theory. The authors argue that a candidate for the next disruption in artificial intelligence is quantum computing, and go on to show how quantum field theory can be of use to machine learning research.
My own naive view is that quantum computing is indeed an extremely exciting area of research, but then so is machine learning, and both fields are at such early stages of development that it’s hard to know exactly where the next great disruption might come from. It’s difficult to know whether to chase the buzzword hot topics of the moment or to spend more time generating interesting new subfields for further study. That said, I have recently been trying to learn the absolute basics of quantum computing. Some useful resources I’ve found are the Quantum Computing course on Brilliant.org and Quantum Country, the interactive flashcard textbook by Michael Nielsen, one of the most famous names in quantum computing research. I would particularly recommend the first essay in the book, Quantum computing for the very curious, which comes with in-text quizzes and flashcards to help you actually learn the material.
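One thing those resources get across is how little machinery you need to simulate the basics yourself. As a sketch of my own (not taken from either resource): a qubit is just a two-dimensional complex unit vector, gates are unitary matrices, and measurement probabilities are squared amplitudes.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # the |0> state

# Hadamard gate: puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
print(np.abs(state) ** 2)  # [0.5 0.5]: measuring gives 0 or 1 equally often

# Applying H twice returns the qubit to |0>: the amplitudes interfere.
print(np.abs(H @ H @ ket0) ** 2)  # [1. 0.]
```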
(A side note: Michael Nielsen is among the names that Dominic Cummings has suggested to lead the UK’s proposed advanced research projects agency, sometimes known as the ‘UK ARPA’, but he is thought to have turned down the offer. You can find out more about his ideas for the agency here and even watch the whole 2-hour committee meeting where he gave evidence here. It’s actually pretty good; it could almost be listened to as a podcast. I’d recommend it if you like thinking about the limits and future of science policy and funding for basic research.)
If you’ve read this far, I hope you have a new appreciation for mathematics and the various ways it can be applied in fields such as physics and artificial intelligence. Or, if this material was all very familiar to you, maybe you at least came across a new factoid or two.
This has been a long post about too many different things, and when it comes to updating this blog it’s unlikely I’ll have the same post cadence as I did before. But when I do feel like I have something to say—despite the perfectionism that usually holds me back from putting anything at all out in the public sphere anymore—you’ll be able to read it here.
I’ve been struggling with my mental health a lot recently, as have so many people during the pandemic. The generalised advice from those I’ve spoken to about the matter has been to keep things simple, focus on doing a small number of things really well, say ‘no’ to things as well as ‘yes’, but also to keep doing the things that bring joy.
Thanks for reading.