Functors Between Metric-Enriched Categories: Is This A Stupid Idea?

Note: I’m hoping to get some comments about this, so if you have some thoughts, please leave a comment.

Let Met denote the category whose objects are metric spaces (for convenience, we allow metrics to take the value infinity) and whose morphisms are (weakly) contractive maps (aka short maps, nonexpansive maps, 1-Lipschitz maps), that is, functions satisfying d(f(x),f(y)) \leq d(x,y). Met is a monoidal category where X\otimes Y is the product space with metric given by d((x_1,y_1),(x_2,y_2)) = d_X(x_1, x_2) + d_Y(y_1, y_2) (the identity object is the singleton).
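As a concrete (if trivial) illustration, here is a short Python sketch of the sum metric on a product and a finite-sample check of the short-map condition; the function names are my own and purely illustrative:

```python
import math

def product_metric(d_X, d_Y):
    """Metric on X x Y given by d_X(x1, x2) + d_Y(y1, y2)."""
    def d(p, q):
        (x1, y1), (x2, y2) = p, q
        return d_X(x1, x2) + d_Y(y1, y2)
    return d

def is_short(f, d_X, d_Y, sample):
    """Check d_Y(f(x), f(y)) <= d_X(x, y) on all pairs from a finite sample."""
    return all(d_Y(f(x), f(y)) <= d_X(x, y) for x in sample for y in sample)

# The real line with its usual metric.
d_R = lambda s, t: abs(s - t)

# The monoidal product metric on R x R.
d_sum = product_metric(d_R, d_R)
print(d_sum((0.0, 0.0), (1.0, 2.0)))  # 3.0

# sin is 1-Lipschitz, hence a morphism in Met; doubling is not.
print(is_short(math.sin, d_R, d_R, [i / 10 for i in range(-50, 51)]))  # True
print(is_short(lambda t: 2 * t, d_R, d_R, [0.0, 1.0]))  # False
```

Of course a finite sample can only refute the short-map condition, not verify it; the point is just to make the definitions tangible.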

It is my (perhaps mistaken) opinion that a lot of functional analysis can be done using categories enriched in Met. (Actually, I think that for more generality, one ought to replace metric spaces with pseudometric spaces, in which distinct elements can have zero distance. It may also be that Lawvere metric spaces are the appropriate choice here, but I’m not yet convinced of this.) Of course, one “trivial” yet important example uses the discrete metric: for any two objects A, B in a locally small category, we can define a metric on \operatorname{Hom}(A,B) by d(f,g) = 1 if f\neq g and d(f,g) = 0 if f = g.

A category enriched in Met enables one to talk about approximately commuting diagrams. This has been explored in approximate Fraïssé limits, though I don’t know enough logic to understand it.

Given two categories \mathcal{C}, \mathcal{D} enriched in Met, we can define a type of continuity (except that word’s already taken) where a functor F: \mathcal{C} \to \mathcal{D} is “continuous” if for any \epsilon > 0, there exists \delta > 0 such that for any objects A,B\in \mathcal{C} and any morphisms f,g\in \operatorname{Hom}(A,B), if d(f,g) < \delta, then d(F(f),F(g)) < \epsilon.

Apologies for the underdeveloped ideas; I have been thinking about this for quite a while (throughout my time as a graduate student), and I have some trouble formulating what I want to say. Talking to a few people about this did not generate a great amount of interest, but I was curious whether people had any insight as to:

  1. Has there been any work done along these lines?
  2. Do you think this might be a potentially interesting idea?

My category theory is limited, and what I’ve learned does not seem to have this type of idea in mind, but I’d love to hear some thoughts!


Some Thoughts on “A Breakthrough in Higher Dimensional Spheres”

Since I have an interest in the popularization of math, I was excited to hear about a new PBS series called “Infinite Series” and their first YouTube video “A Breakthrough in Higher Dimensional Spheres.” So I decided to write down some thoughts I’ve had about this video and some general trends in pop math.

My general impression of this video is that it’s good. It strikes a good balance between being correct and understandable for the layperson. Many articles about math tend to fail at this, using inappropriate analogies to try to illustrate the point. I am slightly disappointed that some details weren’t provided (more on this later), but it’s extremely hard to balance the level of detail with being entertaining.

To get to the details, the video starts with an introduction to the show and its premise, which seems to be current progress in mathematics. It’s not a bad idea, but since most mathematical ideas are built from older ones, this will either limit the topics covered or require a portion of each episode to go over background material. Based on how this episode went, I’m going to guess both factors will come into play. I don’t expect to see an episode on the Kadison-Singer problem or the classification of nuclear C*-algebras, for example.

The introduction of spheres and Euclidean space of arbitrary dimensions was good. The sphere packing problem was stated well, but I thought the examples and explanations lacked some important details. The 2d sphere packings shown are based on square and hexagonal grids; even outlining this fact in the image would have gone a long way toward illustrating how the packings work, and it would also help in understanding the higher-dimensional generalizations. The 3d case likewise does little to explain how the sphere packing works and the shapes involved. It’s also unclear whether, as the picture seems to indicate, the 8d case is a “duplication” of the hexagonal sphere packing. When discussing the general case, I think it would be helpful to know where the difficulty lies. Is it that we have candidates for other dimensions but can’t prove they’re optimal, as was long the situation in the 3d case? Overall, I felt that the sphere packing problem and its solutions were inadequately explored.

The counterintuitive nature of higher dimensional spheres, on the other hand, I thought was explained well. The ratio of the volume of the sphere to the volume of the circumscribing cube going to zero was interesting. (This might make an interesting worksheet in a calculus class; I should make a note of it.) The analogy with the basketball court and the grain of sand was good, since an object with all those properties is impossible to conceive of in our three-dimensional space.
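That vanishing ratio is easy to compute from the standard volume formula for the unit n-ball, \pi^{n/2}/\Gamma(n/2+1), against the circumscribing cube [-1,1]^n of volume 2^n. A quick Python sketch:

```python
import math

def unit_ball_volume(n):
    """Volume of the unit n-ball: pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def ratio_ball_to_cube(n):
    """Ratio of the unit n-ball's volume to that of its circumscribing
    cube [-1, 1]^n, which has volume 2^n."""
    return unit_ball_volume(n) / 2 ** n

# The ratio decreases rapidly to zero: pi/4 in 2d, pi/6 in 3d, and
# already below 10^-7 by dimension 20.
for n in [2, 3, 5, 10, 20]:
    print(n, ratio_ball_to_cube(n))
```

In other words, in high dimensions almost all of the cube’s volume lies in its “corners,” outside the inscribed ball.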

Despite some flaws, I think the video was pretty good. Its discussion of the sphere packing problem itself was, I think, only moderately successful, but the discussion of higher-dimensional spheres worked well to highlight their counterintuitiveness. I am looking forward to seeing more videos from this program, and I hope you’ll join me.

Arrow’s Impossibility Theorem

One of the few “useful” theorems that I know about is Arrow’s impossibility theorem. It also highlights a unique feature of mathematics: the ability to demonstrate that something is outright impossible. The colloquial statement of the theorem is that no voting system can be “perfect,” where the word “perfect” is made precise by a specific list of criteria.

My initial plan today was to read Arrow’s original paper on the subject, “A Difficulty in the Concept of Social Welfare,” and give my thoughts, a layman-friendly statement, and maybe comments on the proof. But since I already know most of the material, albeit in abbreviated form, I did not manage to read through the paper. So I will just try to provide a layman-friendly(ish) introduction to the statement of the theorem (which you can probably find better explained on YouTube).

The first underlying assumption is that voters have “rational” preferences. This word “rational” is severely (and frustratingly) misunderstood. Here the word means something very specific. Namely that every voter has their own preference ordering for the candidates of an election, which satisfy two conditions:

  1. For each pair of candidates a and b, one of the following is true: a is preferable to b, b is preferable to a, or there’s an indifference between a and b.
  2. For any three candidates a, b, and c, if a is preferable to b (or there’s indifference between them) and b is preferable to c (or there’s indifference between them), then a is preferable to c (or there’s indifference between them).

So the first condition states that everyone has a preference or is indifferent between every two possibilities. I’m not a huge fan of the first condition, but I’m willing to go with it since it makes more sense in the context of voting. The second says that your preferences are actually consistent; if you say that you prefer chocolate to strawberry and strawberry to vanilla, then it wouldn’t make sense to go around and claim that you like vanilla more than chocolate.
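The two axioms are easy to check mechanically. Here is a minimal Python sketch (using the ice-cream example above), where `weakly_prefers(a, b)` encodes “a is at least as good as b,” with indifference meaning both directions hold:

```python
from itertools import product

def is_rational(candidates, weakly_prefers):
    """Check the two rationality axioms for a weak preference relation."""
    # Completeness: for every pair, at least one direction holds
    # (indifference = both directions hold).
    complete = all(weakly_prefers(a, b) or weakly_prefers(b, a)
                   for a, b in product(candidates, repeat=2))
    # Transitivity: a >= b and b >= c imply a >= c.
    transitive = all(not (weakly_prefers(a, b) and weakly_prefers(b, c))
                     or weakly_prefers(a, c)
                     for a, b, c in product(candidates, repeat=3))
    return complete and transitive

flavors = ["chocolate", "strawberry", "vanilla"]
ranks = {"chocolate": 0, "strawberry": 1, "vanilla": 2}  # lower = better

# Preferences induced by a ranking are always rational.
print(is_rational(flavors, lambda a, b: ranks[a] <= ranks[b]))  # True

# The cyclic preference from the text (vanilla above chocolate) is not.
cycle = {("chocolate", "strawberry"), ("strawberry", "vanilla"),
         ("vanilla", "chocolate")}
print(is_rational(flavors, lambda a, b: (a, b) in cycle))  # False
```

Equivalently, a rational preference is exactly one that can be represented by assigning each candidate a rank, with ties allowed.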

Now, it becomes a little clearer what we mean by “voting system”: it’s a function that takes as input the preference orderings of the voters and outputs an ordering for society as a whole. In other words, it’s a process that aggregates everyone’s preferences. Our goal is to make a process that is consistent and reflects, at least a little, the preferences of the voters. So we would want a few criteria; the list that Arrow conceived is as follows:

  1. The voting system produces a result for every possible combination of voter preferences.
  2. If one candidate x does better (or at least doesn’t do worse) in every voter’s opinion and was previously doing better than y, then x is still doing better than y.
  3. Adding or removing candidates should not affect the relative standing of the rest of the candidates in the total vote.
  4. There are no forced results, or in other words, every candidate has some path to victory.
  5. The election result is not determined by a single person (a dictator).

Arrow’s impossibility theorem states that, when there are three or more candidates, no voting system can satisfy all five conditions.

Now, the voting system most natural to most people is probably plurality (or first-past-the-post) voting, where everyone votes for one candidate and the one with the most votes wins. The problem here, as most people have already guessed, is with the third condition. If there are two candidates a and b, where a is preferred by more people to b, but then a candidate c, whose policies resemble a’s, enters the race, then the vote is split and b wins.
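The vote-splitting scenario is easy to simulate. A small Python sketch (the candidates and vote counts are hypothetical):

```python
from collections import Counter

def plurality_winner(ballots):
    """Each ballot ranks candidates from most to least preferred;
    plurality counts only first choices."""
    counts = Counter(ballot[0] for ballot in ballots)
    return counts.most_common(1)[0][0]

# 60 voters prefer a over b, 40 prefer b over a: a wins head-to-head.
before = [["a", "b"]] * 60 + [["b", "a"]] * 40
print(plurality_winner(before))  # a

# A similar candidate c enters and splits a's 60 supporters 35/25;
# b's 40 votes now beat both, even though a majority still ranks b last.
after = ([["a", "c", "b"]] * 35 + [["c", "a", "b"]] * 25
         + [["b", "a", "c"]] * 40)
print(plurality_winner(after))  # b
```

Nothing about the voters’ relative ranking of a and b changed; only the menu of candidates did, which is exactly the violation of the third condition.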

But the problem isn’t just with a single voting system, but with all of them. We have to sacrifice at least one of these conditions whenever we decide on a voting system.

Is Calculus a Subset of Algebra?

Or a more accurate title would be “Is the way we teach calculus a subset of algebra?” I was reading L.D. Nel’s paper “Differential Calculus Founded on an Isomorphism”, and while I have little commentary on the paper, it reminded me of a question I had while I was teaching calculus. To summarize: for differential calculus, and especially for integral calculus, the standard teaching method seems to be to introduce the analytic definitions (e.g. the derivative in terms of limits and the integral in terms of Riemann sums) and then to abandon the analysis as quickly as possible, once we have developed enough algebraic tools to do the computations that are ostensibly the main point of the subject. The analytic roots are the first thing, in my experience, that students forget. If they remember anything from those classes, it will likely be the power rule.

So how much of what we teach can be boiled down to solving algebraic problems in a certain (rather large) D-module? From what I can recall, there are applications of derivatives in terms of tangents, velocity, rates of growth, and optimization, and applications of integrals in terms of area and volume. But in terms of mathematical content, there doesn’t seem to be much analytic content beyond the definitions and initial geometric interpretations. The only examples that come to mind are Newton’s method for approximating solutions of equations and the trapezoidal and Simpson’s rules for approximating definite integrals. Of course, much of what I say doesn’t apply to sequences and series, which have a much more analytic flavor with their concerns about convergence.
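For reference, Newton’s method, one of the few genuinely analytic survivors mentioned above, fits in a few lines; a sketch:

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of x^2 - 2, starting from 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # converges to sqrt(2) ~ 1.41421356...
```

The algebraic manipulations per step are trivial; all the substance (why and how fast it converges) is analysis, which is perhaps the point.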

But then again, to what extent is calculus part of analysis? The original texts seem more algebraic (and geometric) than its current form. The formal definitions of limits and the like were developed much later. And though it is clear that analysis was born out of calculus, whether the parent belongs to the same club as the child is harder to tell.

But then again, what makes analysis not a part of algebra? The definitions are fairly algebraic. Look at the definition of a limit. Where’s the analysis? Is it the absolute value? The inequality? Certainly it’s not the quantifiers. I think I come back to this question because it’s not clear to me where the lines between the mathematical subjects lie and what makes them what they are. The art of epsilon-delta is supposed to be the dividing line, but dissecting that animal doesn’t clarify the issue.

Application of Approximate Diagonalization of Commutative C*-Algebras to Invariants

The premise of my dissertation is founded on the idea that matrices over C*-algebras are important and that approximate diagonalization would make dealing with such matrices easier. I have yet to find any useful application of my own result, though I have found some mildly amusing applications of results on matrices over commutative C*-algebras.

So we start with the main definition:

Definition. Let A be a C*-algebra and let n be a positive integer. A normal matrix a\in M_n(A) is approximately diagonalizable if for every \varepsilon > 0, there exist elements a_1, \dotsc, a_n\in A and a unitary u\in M_n(A) such that

\lVert uau^{*} - \mathrm{diag}(a_1,\dotsc,a_n) \rVert < \varepsilon.
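In the degenerate scalar case A = \mathbb{C} (i.e. X a single point), this reduces to the classical fact that a normal matrix is exactly unitarily diagonalizable, so \varepsilon can be taken arbitrarily small with a single unitary. A pure-Python sketch for a real symmetric 2×2 matrix, where one rotation suffices (the function name is my own):

```python
import math

def diagonalize_symmetric_2x2(a11, a12, a22):
    """Diagonalize the symmetric matrix [[a11, a12], [a12, a22]] by a
    rotation R(theta); returns the diagonal entries of R^T A R and the
    off-diagonal entry (zero up to rounding)."""
    # The Jacobi angle kills the off-diagonal entry of R^T A R.
    theta = 0.5 * math.atan2(2 * a12, a11 - a22)
    c, s = math.cos(theta), math.sin(theta)
    d1 = c * c * a11 + 2 * s * c * a12 + s * s * a22
    d2 = s * s * a11 - 2 * s * c * a12 + c * c * a22
    off = (c * c - s * s) * a12 + s * c * (a22 - a11)
    return d1, d2, off

# Example: u a u^* is diagonal with u the rotation by theta.
d1, d2, off = diagonalize_symmetric_2x2(2.0, 1.0, 3.0)
print(off)  # ~0: the off-diagonal entry vanishes up to rounding
```

The content of the definition, of course, only begins when A is noncommutative or X is bigger than a point; this is merely the base case it generalizes.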

Next, we present the theorem that we will be applying:

Theorem. Let X be a compact metrizable space. Every self-adjoint matrix in M_n(C(X)) is approximately diagonalizable if and only if \dim(X) \leq 2 and \check{H}^2(X) = 0.

The first mildly interesting application concerns the connection between K-theory and approximate diagonalization of matrices over commutative C*-algebras.

Theorem. (Theorem 3.4 and 4.1 of [3]) Let X be a compact metrizable space. If for every positive integer n, every projection in M_n(C(X)) is approximately diagonalizable, then K_0(C(X)) \cong C(X,\mathbb{Z}).

Proof. First, note that the K_0 class of any diagonal projection is an element of C(X,\mathbb{Z}). This is because projections in C(X) are characteristic (indicator) functions of clopen subsets, and so the K_0-class of a diagonal projection is a sum of indicator functions of clopen subsets, which is a continuous, (non-negative) integer-valued function.

Given an approximately diagonalizable projection p, there exist projections p_1, p_2, \dotsc, p_n \in C(X) and a unitary u\in M_n(C(X)) such that

\lVert upu^{*} - \mathrm{diag}(p_1, p_2, \dotsc, p_n)\rVert < 1.

Note that p_1, p_2, \dotsc, p_n can be chosen to be projections by the stability of the projection relations (i.e. being self-adjoint and idempotent; see Lemma 2.5.4 of [1] for details). Since close projections (norm distance strictly less than 1) are unitarily equivalent (see Lemma 2.5.1 of [1]), their K_0-classes are the same. So every projection has the same K_0-class as a diagonal projection, and thus its class belongs to C(X, \mathbb{Z}). \Box

The observant reader will see that we actually proved that every approximately diagonalizable projection is in fact diagonalizable. Combining this with Xue’s theorem, we get the corollary:

Corollary. Let X be a compact metrizable space such that \dim(X) \leq 2 and \check{H}^2(X) = 0. Then K_0(C(X)) \cong C(X,\mathbb{Z}).

Perhaps of slightly more interest is the fact that we can use this method to prove the following:

Theorem. Let X be a compact metrizable space. If for every positive integer n, every positive matrix in M_n(C(X)) is approximately diagonalizable, then W(C(X)) \cong \mathrm{lsc}(X,\mathbb{N}\cup \{0\}).

Here W(A) = (A\otimes M_{\infty})/\sim denotes the Cuntz semigroup, where \sim denotes Cuntz equivalence. The proof of this theorem falls out in the same way as that of the previous theorem: the Cuntz classes of positive elements of C(X) correspond to characteristic functions of open sets, which sum to non-negative integer-valued lower semicontinuous functions, and it is clear that every approximately diagonalizable matrix is Cuntz equivalent to a diagonal matrix.

When we combine this theorem with Xue’s theorem, we have the following corollary:

Corollary. Let X be a compact metrizable space such that \dim(X) \leq 2 and \check{H}^2(X) = 0. Then W(C(X)) \cong \mathrm{lsc}(X,\mathbb{N}\cup\{0\}).

This is a finite and unital version of Theorem 1.3 of [2]. Perhaps the only real upside is that this proof deals directly with positive elements, in contrast to Robert’s proof using Hilbert modules.

I’m still looking for some applications of approximate diagonalization, especially my dissertation results. Any help in this respect would be appreciated.

[1] Lin, Huaxin. An Introduction to the Classification of Amenable C*-Algebras. World Scientific, 2001.

[2] Robert, Leonel. “The Cuntz semigroup of some spaces of dimension at most two.” C. R. Math. Acad. Sci. Soc. R. Can. 35:1 (2013), pp. 22–32.

[3] Xue, Yifeng. “Approximate Diagonalization of Self-Adjoint Matrices over C(M).” Funct. Anal. Approx. Comput. 2:1 (2010), pp. 53–65.

Broken Promises

It’s strange how things seem to never go as planned. At the beginning of the year, I made several resolutions about this blog. None of them were accomplished. For the first seven months of this year, I did not write a single post. As the months started to pass, it became increasingly difficult to write. It became more and more difficult to make a post that would “make up” for my absence. Of course no one really holds such expectations, and if they did, would postponing blogging make the situation any better? No. It’s clear now that I was the only person holding me back.

But why is this the case? Why is it so difficult to do this, especially since I want to? Perhaps that’s the problem. Whenever I resolve to do something and to do it well, something I want to do, it becomes that much harder to do it. I want to do research. I want to prepare my lectures. I want to write blog posts. Yet, each resolution, each striving gives way to inaction, inability. Why?

Is it perfectionism? The gap between achievement and potential always exists. It’s easier to imagine the perfect solution, the perfect lecture, the perfect blog post. Actually producing it, by construction, is impossible. “Brain crack,” a phrase coined by Ze Frank, illustrates this problem perfectly. What my mind imagines outweighs what reality will ever produce. Nothing ever produced is without flaw. Each lecture will have awkward phrases and unexplained concepts. Best that all my thoughts stay in my mind, where they can persist unchallenged. Best that my effort remain minimal, lest I realize the limits of my ability. Best to be captivated by my own imaginings of success than be let down by the reality of actual action.

And then at the same time, am I afraid of success? If this blog attracts attention, then I will have readers with expectations to meet. Nothing is more challenging to the status quo than an escalation of quality. If I give a good lecture one day, a drop in quality will be all the more noticeable. If I prove a result, people might think I actually know something. If you show people what you’re capable of, then people will expect that much of you. Despite how the movies go, life isn’t made up of a climax. There’s no big showdown at the end to showcase your best. No big game at the end. It doesn’t end. After the talk, there will be another. And another. What if I tire? What if I succeed only to eventually fail?

I don’t have the answers. I never do. But nonetheless, when I don’t find myself thinking about doing things, I go out and do them. Not because I will never fail. Or because my imagination matches reality. But because what I want to do is worth being done.

Thoughts on MaBloWriMo

At some point when Thanksgiving break began, I felt exhausted and didn’t want to do any blogging. Forcing myself to blog was beneficial for the first two weeks, but after some point, it seemed counterproductive. Taking an excessive amount of time to think and develop ideas reaps no benefits, as the long hiatus of this blog demonstrates. But forcing every newly-thought idea out before it’s fully conceived results in some poor writing. I would like to go back one day to revise some of the blog posts I made. Even so, attempting to blog every day has taught me valuable lessons about what I want to do with this blog. So hopefully this project will produce some fruit in terms of new posts. Thanks for reading!