The lecture series consists of two parts:
Weekly introductory lectures, where we taught the basics of category theory with a focus on applications to Machine Learning.
A more irregular schedule of deep dives into specific topics of Category Theory, some already showing applications to Machine Learning and some which have not yet been applied.
The lectures are finished for the moment, but you can still check out their recordings!
Week of October 10 | Week 1: Why Category Theory? - Recording link and Slides
Bruno Gavranović
By the end of this week you will:
Week of October 17 | Week 2: Essential building blocks: Categories and Functors - Recording link and Slides
Petar Veličković
By the end of this week you will:
Week of October 24 | Week 3: Categorical Dataflow: Optics and Lenses as data structures for backpropagation - Recording link and Slides
Bruno Gavranović
By the end of this week you will:
Week of October 31 | Week 4: Geometric Deep Learning & Naturality - Recording link and Slides
Pim de Haan
By the end of this week you will:
Week of November 7 | Week 5: Monoids, Monads, Mappings, and LSTMs - Recording link and Slides
Andrew Dudzik
By the end of this week you will:
November 14 | Neural network layers as parametric spans - Recording link and Slides
Pietro Vertechi
Properties such as composability and automatic differentiation have made artificial neural networks a pervasive tool in applications. Tackling more challenging problems has caused neural networks to become progressively more complex and thus difficult to define from a mathematical perspective. In this talk, we will discuss a general definition of linear layer arising from a categorical framework based on the notions of integration theory and parametric spans. This definition generalizes and encompasses classical layers (e.g., dense, convolutional), while guaranteeing the existence and computability of the layer's derivatives for backpropagation.
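A minimal NumPy sketch of one possible discrete reading of this setup (our illustration, not the talk's actual construction): a span E ← S → B together with a kernel on S defines a linear layer that sums the input over the fibres of S → B, weighted by the kernel. The names span_layer, S_dense, and S_conv below are made up for the example; they recover an ordinary dense layer and a 1-D convolution as special cases.

```python
import numpy as np

# Sketch of a "layer from a span"  E <-p- S -q-> B  with a kernel k on S.
# The layer sums the input over the fibres of q, weighted by the kernel:
#   (L f)(b) = sum over s in q^{-1}(b) of  k(s) * f(p(s)).
def span_layer(S, p, q, k, f, B_size):
    out = np.zeros(B_size)
    for s in S:
        out[q(s)] += k(s) * f[p(s)]
    return out

rng = np.random.default_rng(0)
E_size, B_size = 4, 3
x = rng.normal(size=E_size)

# Dense layer: S = E x B, and the kernel is the full weight matrix W.
W = rng.normal(size=(B_size, E_size))
S_dense = [(e, b) for e in range(E_size) for b in range(B_size)]
y_dense = span_layer(S_dense, p=lambda s: s[0], q=lambda s: s[1],
                     k=lambda s: W[s[1], s[0]], f=x, B_size=B_size)
assert np.allclose(y_dense, W @ x)

# 1-D convolution (no padding): S = {(b + d, b)}, and the kernel depends
# only on the offset d = e - b.
w = rng.normal(size=2)
out_size = E_size - len(w) + 1
S_conv = [(b + d, b) for b in range(out_size) for d in range(len(w))]
y_conv = span_layer(S_conv, p=lambda s: s[0], q=lambda s: s[1],
                    k=lambda s: w[s[0] - s[1]], f=x, B_size=out_size)
assert np.allclose(y_conv, np.convolve(x, w[::-1], mode="valid"))
```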
November 21 | Causal Model Abstraction & Grounding via Category Theory - Recording link and Slides
Taco Cohen
Causal models are used in many areas of science to describe data-generating processes and to reason about the effect of changes to these processes (interventions). Causal models are typically highly abstracted representations of the underlying process, consisting of only a few carefully selected variables and the causal mechanisms between them. This simplifies causal reasoning, but the relation between the model and the underlying system is never described in mathematical terms, and this has led to considerable philosophical confusion. Furthermore, it has made it hard to understand how causal modeling relates to other fields such as physics (where systems are described by dynamical laws without reference to causes), dynamical systems, and agent-centric frameworks such as Markov Decision Processes (MDPs). In this talk we study this idea of abstraction from a categorical perspective, focusing on two questions in particular:
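As a rough, hedged illustration of what an abstraction between causal models can look like (our own toy example, not a definition from the talk): a map from a fine-grained model to a coarse-grained one that commutes with interventions. Here two low-level effect variables are merged into one high-level variable, and the check at the end verifies that intervening and then abstracting agrees with abstracting and then intervening.

```python
import numpy as np

# Low-level model:   A ~ noise;  B1 = A + n1;  B2 = 2*A + n2.
# High-level model:  A ~ noise;  B  = 3*A + n, where B abstracts B1 + B2.
# The abstraction map tau sends a low-level state (a, b1, b2) to (a, b1 + b2).

rng = np.random.default_rng(0)

def low_level(a_noise, n1, n2, do_a=None):
    a = a_noise if do_a is None else do_a      # optional intervention do(A = do_a)
    return a, a + n1, 2 * a + n2

def high_level(a_noise, n, do_a=None):
    a = a_noise if do_a is None else do_a
    return a, 3 * a + n

def tau(state):                                # abstraction: merge B1, B2 into B
    a, b1, b2 = state
    return a, b1 + b2

# Abstraction commutes with intervention: tau(low level under do(A=1)) equals
# the high level under do(A=1), provided the high-level noise is n = n1 + n2.
a_noise, n1, n2 = rng.normal(size=3)
low = low_level(a_noise, n1, n2, do_a=1.0)
high = high_level(a_noise, n1 + n2, do_a=1.0)
assert np.allclose(tau(low), high)
```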
December 12 | Category Theory Inspired by LLMs - Recording link and Slides
Tai-Danae Bradley
The success of today's large language models (LLMs) is striking, especially given that the training data consists of raw, unstructured text. In this talk, we'll see that category theory can provide a natural framework for investigating this passage from texts (and probability distributions on them) to a more semantically meaningful space. To motivate the mathematics involved, we will open with a basic, yet curious, analogy between linear algebra and category theory. We will then define a category of expressions in language enriched over the unit interval and afterwards pass to enriched copresheaves on that category. We will see that the latter setting has rich mathematical structure and comes with ready-made tools to begin exploring that structure.
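A toy sketch in the spirit of this construction (our own simplification; the talk's precise definitions may differ): expressions form a category enriched over the unit interval, with the hom value from x to y read as the probability that an occurrence of x extends to the longer expression y. The corpus and the hom function below are illustrative; the printed inequality at the end is the enriched composition condition.

```python
from collections import Counter

# Toy corpus; count every contiguous word-level sub-expression of every text.
corpus = [
    "red apple", "red apple pie", "red rose", "green apple",
    "red apple pie", "red rose garden",
]
counts = Counter()
for text in corpus:
    words = text.split()
    for i in range(len(words)):
        for j in range(i + 1, len(words) + 1):
            counts[" ".join(words[i:j])] += 1

def hom(x, y):
    """[0,1]-valued hom: relative frequency of y among occurrences of x."""
    if x not in y or counts[x] == 0:
        return 0.0
    return counts[y] / counts[x]

print(hom("red", "red apple"))                 # how often "red" extends to "red apple"
print(hom("red apple", "red apple pie"))
# Enriched composition: hom(x, y) * hom(y, z) <= hom(x, z) whenever x is
# contained in y and y in z (the chain rule of conditional probability).
print(hom("red", "red apple") * hom("red apple", "red apple pie")
      <= hom("red", "red apple pie"))
```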
TBA | Polynomial Functors |
David Spivak
TBA