In the previous two posts, “Category theory notes 14: Yoneda lemma (Part 1)” and “Category theory notes 15: Yoneda lemma (Part 2),” I started the task of deciphering the Yoneda lemma. I recorded my frustrations with this significant yet difficult category-theoretic result but also showed that the lemma was actually not that insurmountable, at least on the conceptual level. In this post I’ll continue deciphering the assembly-language-like Yoneda. Since my aim is to explain things down to every detail—because a main reason I had found the lemma difficult to follow was that the textbooks I used omitted a lot of nontrivial details—this post is going to be longer than usual (in fact it may be the longest in the series). So bear with me.😬

## Deciphering the assembly-language-like Yoneda (continued)

### The full-fledged assembly language

Once we have the naturality square, we can turn to the details. One reason why programming in assembly language isn’t fashionable is that it’s too tedious. I’ve noticed programmers lamenting this in the following way:

Angoid: Assembly language is not difficult to learn, but it is fiddly.

Peter Hand: Assembly language is not difficult, but it’s finicky.

The situation is similar with the assembly-language-like facet of the Yoneda lemma. While its naturality square is clear enough, to verify that the square is indeed natural one needs to go through a laborious amount of low-level calculations—the thing that computers are good at but humans hate!

So, if you’ve made your way to this point and understood the gist of the Yoneda lemma, you can choose not to carry out the calculations yourself but simply accept the naturality of the square—it’s a well-established lemma in official textbooks after all… But if you’re determined to go over the calculations by yourself, then there are three logical steps you need to follow. I’ve found it helpful to keep the three steps in mind and revisit them from time to time in order not to get lost in the lengthy calculation. The three steps are:

• Step 1: Define functions. Since the Yoneda isomorphism is a morphism in $\mathbf{Sets},$ it’s essentially a set-theoretic function, namely a bijection. And since a bijection consists of two functions that are mutually invertible, we need to find those two functions. This step is the trickiest part of understanding the assembly-language-like Yoneda, and it’s also the least well explained in textbooks!
• Step 2: Prove bijection. After finding the two functions, the next step is to prove algebraically that they indeed form a bijection. There’s nothing fancy in this step but it’s really tedious…
• Step 3: Prove naturality. After finding the two functions and proving their mutual invertibility, the final step towards a full conquest of the Yoneda lemma is to prove that the naturality square really is natural in its two arguments (i.e., $\bullet$ and $-,$ or $F$ and $C$ following Awodey’s formulation). This step isn’t conceptually challenging either, because all we need to do is chase the diagram and verify that the two parallel paths between $\mathrm{Hom}(y(C),F)$ and $G(D)$ are indeed equal. But again the calculation is rather tedious because there are two variables, which means we need to algebraically chase two diagrams (one with $F$ fixed and the other with $C$ fixed)… And if you’re a really meticulous person who will only feel safe after verifying that the squares commute in both directions specified by the isomorphism (i.e., the two-headed arrows), then your calculation load doubles again (i.e., you have four squares to check)!😝 Most textbooks aren’t so meticulous, though, and only check one direction, the one from the hom-functor to the evaluation map.

I’m too lazy to go over the entire calculation process at the moment 😌, so I’ll just comment on the first step, because that’s where I got stuck and found textbooks/tutorials least helpful. The problem is that textbooks usually throw the correct functions in readers’ faces without explaining how or why those particular functions were chosen. This is extremely frustrating (at least for me) because the two functions look quite contrived and surely aren’t the first thing beginners would remotely think of, which leads to the question “How did textbook authors (or whoever discovered them in the first place) think of them? Out of thin air?”🤨

Anyway, the two functions are (take $\eta_{F, C}$ for example):

$$\eta_{F, C}\colon \alpha \mapsto \alpha_C(1_C),$$

$$\eta_{F, C}^{-1}\colon a \mapsto \big(f \mapsto F(f)(a)\big) \quad\text{for any } \mathbb{C}^\mathrm{op}\text{-arrow } f\colon C\rightarrow C'.$$

N.b. $F(f)(a)$ isn’t the full natural transformation induced by $a$ (there’s no way to express that in a single formula) but merely the result of applying its component at $C'$ (i.e., a $\mathbf{Sets}$-morphism from $\mathrm{Hom}(C',C)$ to $FC'$) to an element in the source object of that component (i.e., $\mathrm{Hom}(C',C)$). Since both $C'$ and $f$ are freely chosen in $\mathbb{C}^\mathrm{op},$ specifying this particular application result of this particular component effectively amounts to an indirect specification of the entire natural transformation.
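Before dissecting these choices, it may help to see them running. Below is a minimal, hedged sketch in Python of the covariant analogue of the two maps, taking $F$ to be the list functor and modeling a natural transformation from $\mathrm{Hom}(C,-)$ to $F$ as a function that accepts an arbitrary arrow out of $C$; the names `eta` and `eta_inv` are mine, not standard.

```python
# Hedged sketch: covariant Yoneda analogue with F = the list functor.
# A "natural transformation" alpha: Hom(C,-) -> F is modeled as a Python
# function taking any arrow f: C -> X and returning a list of X's.

def eta(alpha):
    """alpha |-> alpha_C(1_C): feed alpha the identity arrow on C."""
    return alpha(lambda x: x)

def eta_inv(a):
    """a |-> (f |-> F(f)(a)); for the list functor, F(f) is 'map f'."""
    return lambda f: [f(x) for x in a]

# Round trip: an element of F(C) survives eta_inv followed by eta.
print(eta(eta_inv([1, 2, 3])))  # [1, 2, 3]
```

The other round trip (checking that `eta_inv(eta(alpha))` agrees with `alpha` on every arrow) is the tedious half of Step 2.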

In order to figure out why these functions work, we can take a step back and remind ourselves of the definition of a function.

A function is a relation between sets that associates to every element of a first set exactly one element of the second set. (Wikipedia)

So, our task of finding two back-and-forth functions that together define $\eta_{F,C}$ can be reduced to finding two total and single-valued maps between the source and target objects (i.e., sets) of $\eta_{F,C}.$ There may be many such maps, but the task is complete as long as we can find one. As such, what we are faced with is an “existential problem” (couldn’t avoid the pun😏). The two functions above are chosen exactly under this guideline.

### First function

First, let’s find a function from $\mathrm{Hom}(y(C),F)$ to $F(C);$ that is, a map that for each element in $\mathrm{Hom}(y(C),F)$ yields a unique element in $F(C).$ In other words, what we need to do is express an element in $F(C)$ (where $C$ is fixed and therefore a constant instead of a variable) solely in terms of an element in $\mathrm{Hom}(y(C),F).$ But doesn’t that amount to finding an element in the target object of the $C$-component of a natural transformation in $\mathrm{Hom}(y(C),F)$? Why? Because the target object of the $C$-component of any such natural transformation is $F(C)$!

To flesh out the above “coincidence”—though it’s really not a coincidence but firmly built into the design of the Yoneda configuration—let’s zoom in on the set of natural transformations $\mathrm{Hom}(y(C),F).$ Recall from above that both $y(C)$ and $F$ are presheaves (i.e., functors from $\mathbb{C}^\mathrm{op}$ to $\mathbf{Sets}$), so a natural transformation $\rho$ between them can be displayed as follows:

As we can see, the target object of the $X$-component of $\rho$ for any $X$ in $\mathbb{C}^\mathrm{op}$ is just $F(X),$ and so the target object of the $C$-component of $\rho$ is $F(C).$ Therefore, as long as we can express a unique element in $F(C)$ deterministically in terms of any $\rho$ we are given, our task is completed. And the choice of the function $\alpha \mapsto \alpha_C(1_C)$ (where $\rho$ is replaced with $\alpha$) is precisely based on this “isomorphism between tasks”!😎

But there may be numerous elements in $y(C)(C)$—which are just endomorphisms from $C$ to $C$—and by what criterion should we pick one among them to yield the $F(C)$ value we desire? Well, it doesn’t really matter, because remember that our original mission is just to show that whatever $\rho$ we are given, we can always use that $\rho$ to determine a unique element in $F(C).$ So, the requirement is just that the element be uniquely determined; it imposes no restriction on which element it is, and that turns this choose-an-element task into a multi-solution problem. Whichever element we choose, we can always meet the requirement and complete the task. And that’s why $1_C$ is used in the textbook function—since we can pick any element, why not be lazy and pick the easiest one, one that we know must exist whichever $C$ (or $X$ if you’re still looking at the above diagram) we fix? That “easy pick” is just the identity arrow on $C$…

In sum, whichever $\rho$ and $C$ we are given, we can always use these data to “cook up” a particular unique element in $F(C)$ by the formula $\rho_C(1_C),$ and thus we have found a candidate function for $\eta_{F,C}.$ Since our task is to find a function rather than the function, we don’t need to do any more work in this thread 🙃, and we can happily move on to the next thread, namely finding a candidate function for $\eta_{F,C}^{-1}$.
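As a toy illustration of this “easy pick” (hedged: Python, the covariant analogue with the list functor, and a hand-rolled transformation that is mine, not from any textbook): whatever $\rho$ we are handed, feeding it the identity arrow deterministically extracts one element of $F(C).$

```python
# A hand-rolled "natural transformation" rho from Hom(C,-) to the list
# functor, modeled as a function accepting any arrow f out of C.
rho = lambda f: [f(10), f(20)]

# rho |-> rho_C(1_C): the identity arrow is the one argument guaranteed
# to exist for every C, so feeding it to rho extracts an element of F(C).
element = rho(lambda x: x)
print(element)  # [10, 20]
```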

### Second function

The second function we want goes in the opposite direction from the first one; namely, from $F(C)$ to $\mathrm{Hom}(y(C), F).$ This means that for each element $a$ in $F(C)$ we need to be able to express a natural transformation in $\mathrm{Hom}(y(C), F)$ solely in terms of $a$ (together with any pre-fixed constants). But as mentioned above, there’s no way to specify a natural transformation without specifying its components. Therefore, to complete this task we need to be able to use $a$ to express the result of applying an arbitrary component of the desired natural transformation to an arbitrary element in the source object of that component. Suppose the arbitrary component is at $C'$; given fixed $F$ and $C,$ the natural transformation $\eta_{F,C}^{-1}(a)$ is displayed below:

See the solution? It’s readily in the diagram! Since the action of the Yoneda functor $y$ is to map an object $X$ in $\mathbb{C}$ to the hom-functor $\mathbb{C}(-,X)$ (also written $\mathrm{Hom}_\mathbb{C}(-,X)$), applying such a hom-functor to another $\mathbb{C}$-object $Y$ yields a hom-set $\mathrm{Hom}_\mathbb{C}(Y,X).$ In other words, $y(X)(Y) = \mathrm{Hom}_\mathbb{C}(Y,X),$ and so $y(C)(C')=\mathrm{Hom}_\mathbb{C}(C',C),$ which is the hom-set from $C'$ to $C$ in $\mathbb{C}.$ But that’s just the hom-set from $C$ to $C'$ in $\mathbb{C}^\mathrm{op},$ namely the set $f$ lives in! Since our $f$ is chosen arbitrarily—that is, it stands for any arrow from $C$ to $C'$ in $\mathbb{C}^\mathrm{op}$—we can feel free to use it to formulate our desired element in $F(C').$

But how? The crucial thing to realize is that $f$ is not only an element in $y(C)(C')$ but also an arrow in the source category of $F.$ As such, we can lift $f$ by $F$ into $\mathbf{Sets},$ which gives precisely $F(f)\colon F(C)\rightarrow F(C')$—and voilà, this arrow is already in our commutative diagram above! So, for any given $a\in F(C)$ we have two paths to reach an element in $F(C')$: one via the component $(\eta_{F,C}^{-1}(a))_{C'}$ and the other via $F(f).$ Hence, a readily available element in $F(C')$ expressed in terms of $a\in F(C)$ is just $F(f)(a).$ In other words, $(\eta_{F,C}^{-1}(a))_{C'}(f) = F(f)(a).$ This is exactly the second textbook function!
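To see how a single element forces all the components at once, here is another hedged Python sketch, this time with the pairing functor $F(X) = X\times X$ (so $F(f)$ applies $f$ to both coordinates); the function name is mine.

```python
# From one element a in F(C), with F(X) = X x X, build the transformation
# whose component at any C' sends f: C -> C' to F(f)(a) = (f(x), f(y)).
def transformation_from(a):
    x, y = a
    return lambda f: (f(x), f(y))

alpha = transformation_from((1, 2))
# Different arrows f probe different components, all forced by a alone:
print(alpha(lambda n: n * 10))  # (10, 20)
print(alpha(str))               # ('1', '2')
```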

Well, to be really sure that we have the right answer we still need to verify that the naturality square above commutes. But it surely does. With our $f\in y(C)(C')$ and the way $y(C)(f)$ works (i.e., syntactic postcomposition¹ with $f,$ which is often indicated by an upper asterisk in textbooks, as $f^*$), we can deduce that in order to get $f$ via $y(C)(f)$ we need an argument $x$ such that $f^* x = x\circ f = f.$ An obvious way to achieve this is to set $x$ to the identity arrow on the target object of $f.$ And what’s that? Here’s an easy pitfall for beginners: we’ve defined $f$ as $C\rightarrow C'$ in $\mathbb{C}^\mathrm{op},$ but the hom-sets yielded by applying hom-functors are in the original, non-$op$ category, so the $f$ used in $y(C)(f)$ should be $C'\rightarrow C.$ Therefore, $x=1_C,$ which we know must exist in $y(C)(C).$ With this new datum, we can now chase it around the diagram:

$$\big(F(f)\circ(\eta_{F,C}^{-1}(a))_{C}\big)(1_C) = \big((\eta_{F,C}^{-1}(a))_{C'}\circ y(C)(f)\big)(1_C),$$

i.e.,

$$F(f)\big((\eta_{F,C}^{-1}(a))_{C}(1_C)\big) = (\eta_{F,C}^{-1}(a))_{C'}\big((y(C)(f))(1_C)\big),$$

i.e.,

$$F(f)\big(F(1_C)(a)\big) = (\eta_{F,C}^{-1}(a))_{C'}(f),$$

i.e.,

$$F(f)\big(1_{F(C)}(a)\big) = F(f)(a),$$

i.e.,

$$F(f)(a) = F(f)(a).$$

In particular, we know $F(1_C) = 1_{F(C)}$ by the unit law in the definition of a functor; namely, identity arrows are mapped to identity arrows (see my Sep 1 post if you need to refresh your memory). So, with the above calculation the commutativity of our square diagram is verified, which in turn confirms that our second function is correct. And by now we have completed the first step of implementing the assembly-language-like Yoneda lemma (i.e., defining functions)…
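This kind of commutativity can also be spot-checked numerically. A hedged Python sketch (the covariant analogue, with $F$ = the list functor): naturality of the transformation built from $a$ means that applying $F(g)$ after the $f$-component agrees with taking the $(g\circ f)$-component directly.

```python
# Spot-check a naturality square for alpha = eta_inv(a) with F = list:
# F(g)(alpha(f)) should equal alpha(g . f) for all arrows f and g.
def eta_inv(a):
    return lambda f: [f(x) for x in a]

alpha = eta_inv([1, 2, 3])
f = lambda x: x + 1        # an arrow C  -> C'
g = lambda x: x * 2        # an arrow C' -> C''

lhs = [g(y) for y in alpha(f)]    # through the f-component, then F(g)
rhs = alpha(lambda x: g(f(x)))    # through the (g . f)-component
print(lhs, rhs, lhs == rhs)       # [4, 6, 8] [4, 6, 8] True
```

One numeric example proves nothing in general, of course, but it is a cheap sanity check while grinding through the algebra.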

The assembly-language-like Yoneda lemma is intimidating because it’s full of dazzling calculations like the above!

As I declared above, I’m too lazy to implement all three steps, so I’ll directly jump to the next section.😅

## Bridging the two Yonedas

Above I have painstakingly demonstrated one third of the assembly-language-like Yoneda lemma, but if you compare that with the zen-like Yoneda lemma, you’ll realize that there’s literally no connection between the two… How have mathematicians abstracted the “Yoneda philosophy” from that chunk of low-level clutter?🧐

I’ve found Bradley’s blog article very helpful in bridging the two Yonedas. Some other sources also provide explanations but I haven’t seen anything as beginner-friendly as Bradley’s narration. In a nutshell, the two Yonedas are bridged via two corollaries of the Yoneda lemma:

• Corollary 1: The Yoneda functor is full, faithful, and injective on objects; namely, it’s an embedding (for this reason the Yoneda functor is also called the Yoneda embedding).
• Corollary 2: In the Yoneda embedding configuration, two objects in the source category are isomorphic if and only if their functorial images in the target category are isomorphic.

I won’t present the proofs of the two corollaries here as they’re usually given wherever they’re taught and not too difficult to follow once you’ve understood the Yoneda lemma. Again I’ve found Bradley’s blog article very helpful in selling the proofs in a beginner-friendly way.

What I’d like to comment on is how the Yoneda lemma becomes the Yoneda philosophy via the Yoneda corollaries. Observe the Yoneda embedding: via $y(C)$ an arbitrary $\mathbb{C}$-object $C$ is lifted to the presheaf recording the arrows from all other $\mathbb{C}$-objects to it. If we conceive a categorical arrow as a sort of relationship between objects, then the hom-sets in question can be conceived as the sets of all relationships established from other objects to $C,$ or in more fashionable terms, all the ways other objects view $C.$ This is somewhat reminiscent of Marx’s theory of human nature:

Aber das menschliche Wesen ist kein, dem einzelnen Individuum innewohnendes Abstraktum. In seiner Wirklichkeit ist es das Ensemble der gesellschaftlichen Verhältnisse.
“But the essence of man is no abstraction inherent in each single individual. In reality, it is the ensemble of the social relations.” (the sixth of the Theses on Feuerbach; German)

So, under the Marxist view the Yoneda embedding enables us to virtually define a $\mathbb{C}$-object by the ensemble of its “in-arrows” (or equivalently by its “out-arrows” by a dual version of the Yoneda embedding). This definition is completely reliable because

1. we can define all $\mathbb{C}$-objects in this way (by corollary 1), and
2. all nonisomorphic $\mathbb{C}$-objects can be adequately distinguished via their in-arrow ensembles (by corollary 2).

There’s only one prerequisite for the application of the Yoneda embedding (and the Yoneda lemma)—the category $\mathbb{C}$ in question must be locally small; otherwise it’s meaningless to talk about hom-sets (recall from my Aug 29 post that non-locally-small categories have no hom-sets). This is also why there’s a locally-small condition in the Yoneda lemma’s statement!

In short, the zen-like Yoneda doesn’t resemble the assembly-language-like Yoneda because it isn’t directly abstracted from the Yoneda lemma but is more closely based on the Yoneda corollaries.

## Takeaway

• There are two angles to perceive the Yoneda lemma: a zen-like (philosophical) angle and an assembly-language-like (technical) angle.
• The zen-like angle is more closely related to the corollaries of the Yoneda lemma than to the lemma per se. It basically says categorical objects can be completely determined by their relationships with all other objects in their locality. This is also reminiscent of Marx’s view of human nature.
• The assembly-language-like angle is very mind-bending because it’s full of tedious and lengthy calculations. But the core idea is clear enough—to implement the Yoneda lemma from scratch one must complete three tasks: (i) define functions; (ii) prove bijection; (iii) prove naturality. Among the three tasks the first one is the trickiest because textbooks usually just give out the correct functions without motivating the particular choices.
• Since the Yoneda lemma and its corollaries are well established and needn’t be proved from scratch every time they are used (and shouldn’t be, considering how lengthy the proofs are!), perhaps the best learning strategy for beginners, especially nonmathematicians, is to focus on the big picture and aim for a good conceptual understanding of the various Yoneda-related results. When the proof of some theorem relies on Yoneda, just cite it as a lemma, as its name says!😜

1. This is simply called “postcomposition” in textbooks, but I added a “syntactic” here to emphasize that the post- in the term is based on a purely syntactic criterion just like the left/right terms I mentioned in my Sep 5 post. So, to apply a function $f$ to an argument $g$ (which should itself be a function) by “postcomposition” (i.e., $f^* g$) simply means writing the function symbol after the argument symbol in the application result, namely $g\circ f.$
Dually, there's also a term “precomposition,” denoted by a lower asterisk (e.g., $f_*$), which means writing the function symbol and the argument symbol as is without reordering, so $f_* g = f\circ g.$ It's important to bear in mind that the criterion for the “post/pre-” prefixes is a purely syntactic one; otherwise there's a high risk of confusion because a syntactic postcomposition is, unfortunately, a diagrammatic precomposition (i.e., apply the arrow denoted by the function symbol before the arrow denoted by the argument symbol)...
In my own learning process I've invented a mnemonic to help myself remember the above horribly designed terminology, but unfortunately the mnemonic is in Chinese and so doesn't make sense to non-Chinese-speakers. Anyway, the mnemonic is 上反下正 or simply 上反, meaning that if you see an upper asterisk you should swap the syntactic order of the function symbol and the argument symbol (and don't swap the ordering if you see a lower asterisk).
Some textbooks don't adopt the asterisk notations but use a hom-set-like notation, writing $\mathrm{Hom}(f,-)(g)$ for $f^* g$ and $\mathrm{Hom}(-,f)(g)$ for $f_* g$ (the $-$ placeholder is usually some identity morphism). My mnemonic also has a variant for this kind of notation, namely 左反右正 or simply 左反, meaning that if you see the function symbol being written as the left-hand component in a $\mathrm{Hom}(\text{LEFT}, \text{RIGHT})$ notation then you should swap the syntactic order of the function symbol and the argument symbol in the application result (and don't swap the ordering if the function symbol is written as the right-hand component). So, $\mathrm{Hom}(f,-)(g)=f^* g=g\circ f$ (左反/上反) and $\mathrm{Hom}(-,f)(g) = f_* g = f\circ g$ (右正/下正). These Chinese mnemonics have worked pretty well for me and helped me quickly get the correct application results.
