Category theory is a way of thinking and of structuring one's knowledge, grounded in the idea of compositionality. Originating in abstract mathematics, it is a formal language that has since spread to numerous fields, becoming a topic of interest for a growing number of researchers. It has helped build rigorous bridges between seemingly disparate scientific areas, showing great potential as a cohesive force in the scientific world. These fields include physics, chemistry, computer science, game theory, systems theory, database theory, and, most importantly for us, machine learning, where its use has seen steady growth.
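As a small illustration of what "compositionality" means in practice, the sketch below (our own toy example, not taken from the lectures) treats maps as building blocks that can be plugged together, with composition being associative, much like layers in a neural network:

```python
# Illustrative sketch: maps as composable building blocks.
# The "layers" here are hypothetical numeric functions standing in
# for neural network layers.

def compose(f, g):
    """Return the composite map g . f (apply f first, then g)."""
    return lambda x: g(f(x))

double = lambda x: 2 * x
increment = lambda x: x + 1
square = lambda x: x * x

# A "network" is just a composite of simpler maps: x -> 2x + 1.
network = compose(double, increment)
print(network(3))  # -> 7

# Associativity: either grouping yields the same composite map.
left = compose(compose(double, increment), square)
right = compose(double, compose(increment, square))
assert left(3) == right(3) == 49
```

This associativity is exactly the kind of structural law that category theory makes precise and exploits.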
Many machine learning concepts have started to be distilled into category theory: from general components of gradient-based learning, through specific architectures such as graph and recurrent neural networks, to equivariant learning, automatic differentiation, Bayesian learning, topological data analysis, and more.
Despite its steady growth and pervasive use in 21st-century mathematics and physics, most machine learning research is done without the use of this rigorous compositional language. This lecture series aims to help bridge that gap by providing an accessible introduction to category theory targeted at deep learning practitioners. We hope to show that machine learning can benefit greatly from these compositional ideas, and that category theory will become an essential element in the machine learning researcher's toolbox.
This course is aimed at machine learning researchers, but it is approachable to anyone with a basic understanding of linear algebra and differential calculus. The material is self-contained, and all the necessary background will be introduced along the way.
We will have a Zulip server where everyone can ask questions and discuss the content of the lecture series with other participants.
Design by Mike Pierce