Mackey functor undergrad research

I recently finished an Undergraduate Research program run by the University of Sheffield mathematics department.

My subject was the highly abstract concept of “Mackey functors”. These are a niche but important part of many advanced areas of mathematics, including stable homotopy theory and representation theory.

Their invention was motivated by the desire for a formal mathematical object that captures morphisms relating properties of a group and its subgroups. For instance, we may want to find a map relating the orders of a group’s subgroups. We may write a sketch to demonstrate what these mappings between subgroups should look like:

How do we formalise this into a concrete mathematical idea?

Category theory

In order to have any hope of defining such an object, we must explore a bit of category theory. But don’t worry, the initial definition is not actually too hard to understand!

A category \mathcal{C} consists of two types of things: a collection of objects (\text{Obj}(\mathcal{C})), and a collection of “morphisms” between these objects (\text{Hom}_{\mathcal{C}}(A, B) denotes the morphisms from A to B).

These are subject to two criteria, or axioms:

  1. For each object A in \mathcal{C}, there must exist an \textbf{identity} morphism \operatorname{id}_A such that f \circ \operatorname{id}_A = f = \operatorname{id}_A \circ f for all compatible morphisms f.
  2. \textbf{Composition} must be well-defined. That is, where we have morphisms f in \operatorname{Hom}_{\mathcal{C}}(A, B) and g in \operatorname{Hom}_{\mathcal{C}}(B, C), we can find g \circ f in \operatorname{Hom}_{\mathcal{C}}(A, C).

Some examples of categories include:

  • \mathbf{Set} – the category of sets. Contains all sets and all possible set mappings between these sets.
  • \mathbf{Ab} – the category of abelian groups. All abelian groups and all homomorphisms between these abelian groups.
  • \mathbf{Grp} – the category of groups. All groups and all homomorphisms between these groups.
  • \mathbf{Cat} – the category of all small categories. Morphisms are “functors” (maps between categories).
  • \mathbf{Graph} – the category of graphs. All possible graphs and all graph homomorphisms between them.

Definition by a function and axioms

One of the ways to define a Mackey functor is to use a function and then axiomatically enforce the existence of morphisms which go “up” subgroups and morphisms which go “down” subgroups. As the previously presented diagram suggests, we want our object to contain morphisms to and from each pair of subgroups.

Consider a group G. A G-Mackey functor \underline{M} is a function from the subgroups of G to objects in the category of abelian groups, \mathbf{Ab}.

    \[\underline{M}: \{\text{subgroups of}~ G\} \to \mathbf{Ab}\]

This function must come equipped with the following morphisms of abelian groups…

    \[\text{Transfer} \colon ~~~ \text{tr}_{K}^{H} \colon \underline{M}(K) \to \underline{M}(H)\]

    \[\text{Restriction}\colon ~~~ \text{res}_{K}^{H} \colon \underline{M}(H) \to \underline{M}(K)\]

    \[\text{Conjugation}\colon ~~~ \text{c}_{g} \colon \underline{M}(H) \to \underline{M}(^{g}H)\]

for all subgroups K \leq H of G and all elements g \in G. The morphisms are subject to the following axioms:

  1. \text{tr}_{H}^{H}, \text{res}_{H}^{H}, \text{c}_{h} : \underline{M}(H) \to \underline{M}(H) are identity morphisms.
  2. \text{res}_{J}^{K}\text{res}_{K}^{H} = \text{res}_{J}^{H}
  3. \text{tr}_{K}^{H}\text{tr}_{J}^{K} = \text{tr}_{J}^{H}
  4. \text{c}_{g}\text{c}_{h} = \text{c}_{gh} for all g, h \in G.
  5. \text{res}_{^{g}K}^{^{g}H} \text{c}_g = \text{c}_{g} \text{res}_{K}^{H}
  6. \text{tr}_{^{g}K}^{^{g}H} \text{c}_g = \text{c}_{g} \text{tr}_{K}^{H}
  7. \text{res}_{J}^{H} \text{tr}_{K}^{H} = \sum_{x \in [J\setminus H / K]} \text{tr}_{J \cap ^{x}K}^{J} \text{c}_{x} \text{res}_{J^{x} \cap K}^{K} for all subgroups J, K \le H.

Most of these rules express quite simple ideas that mathematicians are already very familiar with, such as the idea that conjugation has to play well with the other morphisms. For instance, if we restrict from H to K, then restrict again from K to J, we would expect that to be the same as restricting directly from H to J (this is what the second axiom says).

The axiom that is far less obvious is the notorious double coset formula – the last one. The story of why this axiom is included is quite interesting. It was originally included because it was a property that many examples of Mackey functor-like objects exhibited. Only later, in a clever reformulation of the Mackey functor, was it observed that its inclusion is undeniable.

Example: the constant Mackey functor

Let’s consider the constant C_4-Mackey functor.

We must define a mapping from each of the subgroups of C_4 to an abelian group. By definition, the constant Mackey functor assigns the same abelian group (we can call it A) to each of the subgroups.

We can then determine the restriction and transfer morphisms.

Restriction

The restriction morphism is \text{res}_{K}^{H} \colon A \to A, so in this case we simply define it as the inclusion map. As the domain of this function is a subset of the codomain (here they coincide), we can define a map which takes every element in the domain to the same element in the codomain. We can denote this as:

    \[\text{res}_{K}^{H} \colon \underline{M}(H) \xhookrightarrow{} \underline{M}(K)\]

Transfer / Conjugation

The transfer map is a bit more interesting. In the case of the constant Mackey functor, it happens to be defined as follows:

For a transfer \text{tr}_{K}^{H}, we define it as multiplication of an element a \in A by the index of K in H ([H : K]). This is quite similar to what we were doing in the original example I gave of what a Mackey functor does – it reveals the change in size as we move up subgroups.

In this case conjugation is also just the identity morphism: all subgroups of C_4 are normal, so conjugation does nothing to them.

You may be wondering how we decided these functions are the transfer and restriction morphisms. It all comes down to the axioms. We define this Mackey functor to take all subgroups to the same abelian group (hence “constant”), and then we look at the axioms. The restriction, transfer and conjugation we choose must satisfy the axioms, and the axiom that imposes the most limitation on what these can be is the double coset formula. By taking the double coset formula, we can cleverly choose subgroups so that the formula simplifies down and reveals some information about the morphism we are looking at. It is via this method that the correct function actually pops out.
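This can also be checked mechanically. Below is a small Python sketch of my own (not from the original), with the subgroups of C_4 identified by their orders. Since C_4 is abelian, conjugation is trivial, |J \cap K| = \gcd(|J|, |K|), |JK| = \operatorname{lcm}(|J|, |K|), and the number of (J, K)-double cosets in H is [H : JK]:

```python
from math import gcd

H = 4  # order of C4; subgroups of a cyclic group correspond to divisors
subgroup_orders = [1, 2, 4]

def lcm(a, b):
    return a * b // gcd(a, b)

def res(h, k, a):
    """Restriction res_K^H for the constant Mackey functor: the identity."""
    return a

def tr(k, h, a):
    """Transfer tr_K^H for the constant Mackey functor: multiply by [H:K]."""
    return (h // k) * a

# Check the double coset formula (axiom 7) for every pair of subgroups.
a = 5
for j in subgroup_orders:
    for k in subgroup_orders:
        lhs = res(H, j, tr(k, H, a))                      # res_J^H tr_K^H
        n_cosets = H // lcm(j, k)                         # [H : JK]
        rhs = n_cosets * tr(gcd(j, k), j, res(k, gcd(j, k), a))
        assert lhs == rhs, (j, k)
print("double coset formula holds for the constant C4-Mackey functor")
```

Every pair of subgroups passes, which is exactly the calculation that forces transfer to be multiplication by the index.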

Taking it further

This definition of a Mackey functor using a function is nice enough, but you may have noticed that it’s called a Mackey functor, not a Mackey function. Let’s look at what a functor is.

Functors

A functor is a mapping between categories. Given two categories \mathcal{C} and \mathcal{D}, a functor F creates an association from every object A in \text{Obj}(\mathcal{C}) to a corresponding object F(A) in \text{Obj}(\mathcal{D}), and similarly, for each morphism f in \text{Hom}_{\mathcal{C}}(A, B), it provides a morphism F(f) in \text{Hom}_{\mathcal{D}}(F(A), F(B)).

These maps must respect composition. The functor must map composed morphisms in \mathcal{C} to the composition of the images of the individual morphisms, i.e. F(g \circ f) = F(g) \circ F(f), and it must send identity morphisms to identity morphisms. This can be represented by the following commutative diagram:

A commutative diagram illustrating the concept of functors in category theory, showing the relationships between objects A, B, and C as well as their corresponding mappings through a functor F.

Okay, so with functors we can map from one category to another. So instead of mapping from the subgroups of G to the category of abelian groups and then axiomatically enforcing the existence of the morphisms we want, it would be nice to find a special category in which we can make a sensible assignment from each morphism in said category to restriction, transfer and conjugation.

That category turns out to be \text{Span}(\mathbf{FinGSet}), the category of spans of finite G-sets. Using this object, we can create a full definition of a Mackey functor in just a few lines:

A Mackey functor is a functor

    \[\underline{F} \colon \text{Span}(\mathbf{FinGSet}) \to \mathbf{Ab}\]


which preserves products.

Thanks for reading!

Wacky Factorials

We all know the definition of the factorial: n! = n(n-1)(n-2) \cdots 2 \cdot 1. But there are even more definitions of factorials that mathematicians have invented.

Double Factorial

The double factorial looks like this: n!!. It can be used to easily denote the product of the odd or even numbers less than or equal to n:

    \[n!! = \begin{cases} n(n-2)(n-4)\cdots 2, & n~\text{even} \\ n(n-2)(n-4)\cdots 1, & n~\text{odd} \end{cases}\]
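The piecewise definition translates directly into a loop that steps down by 2 (a quick sketch of my own):

```python
def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ... down to 2 (n even) or 1 (n odd)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

print(double_factorial(7))  # 7*5*3*1 = 105
print(double_factorial(8))  # 8*6*4*2 = 384
```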

Subfactorial

The subfactorial has a really intuitive practical application. Imagine you have n tokens in defined positions in an array. You take all the tokens out and want to place every token back into the array so that none of the tokens ends up in the position it was just in. Such a rearrangement is called a derangement.

For instance, take {1, 2, 3}. You can arrange this set in the following ways: [1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1].

Only two of these arrangements satisfy the above criterion: [2, 3, 1] and [3, 1, 2].

Therefore we say !3 = 2
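A short sketch of mine comparing the standard derangement recurrence !n = (n-1)(!(n-1) + !(n-2)) against brute-force counting:

```python
from itertools import permutations

def subfactorial(n):
    """!n via the recurrence !n = (n-1) * (!(n-1) + !(n-2))."""
    if n == 0:
        return 1
    a, b = 1, 0  # !0 = 1, !1 = 0
    for k in range(2, n + 1):
        a, b = b, (k - 1) * (a + b)
    return b

def count_derangements(n):
    """Brute force: count permutations with no fixed point."""
    return sum(
        all(p[i] != i for i in range(n))
        for p in permutations(range(n))
    )

for n in range(7):
    assert subfactorial(n) == count_derangements(n)
print(subfactorial(3))  # the two derangements [2, 3, 1] and [3, 1, 2]
```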

Primorial

The primorial is denoted n\# and is the product of the primes less than or equal to n:

    \[n\# = \prod_{p \leq n,\ p~\text{prime}} p\]

Primorials are used in the search for large prime numbers. Each primorial has more distinct prime factors than any number smaller than it.
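A direct sketch of my own, using naive trial division (fine for small n):

```python
def primorial(n):
    """n# = product of all primes p <= n."""
    def is_prime(p):
        return p >= 2 and all(p % d for d in range(2, int(p ** 0.5) + 1))
    result = 1
    for p in range(2, n + 1):
        if is_prime(p):
            result *= p
    return result

print(primorial(10))  # 2*3*5*7 = 210
```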

Superfactorial (Sloane)

This is the more common definition of the superfactorial. It is defined as:

    \[sf(n) = \prod_{k=1}^{n}k!\]

For example: sf(4) = 4! \cdot 3! \cdot 2! \cdot 1! = 288

You can look at it another way by expanding the factorials: sf(4) = 4 \cdot 3^{2} \cdot 2^{3} \cdot 1^{4}

And so:

    \[sf(n) = \prod_{k=1}^{n}k^{n-k+1}\]
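Both product forms can be checked against each other in a few lines (my own sketch):

```python
from math import factorial, prod

def sf(n):
    """sf(n) = 1! * 2! * ... * n!"""
    return prod(factorial(k) for k in range(1, n + 1))

def sf_power_form(n):
    """Equivalent form: product of k^(n-k+1) for k = 1..n."""
    return prod(k ** (n - k + 1) for k in range(1, n + 1))

assert sf(4) == sf_power_form(4) == 288
print(sf(4))
```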

Superfactorial (Pickover)

Another definition of the Superfactorial uses tetration.

    \[n\$ = {}^{n!}(n!)\]

It grows insanely fast.

    \[4\$ = 24^{24\cdots^{24}}\]

Exponential Factorial

Annoyingly also denoted

    \[n\$\]

the exponential factorial is like a normal factorial but exponentiated instead of multiplied:

    \[n\$ = n^{(n-1)^{(n-2)^{\cdots^{1}}}}\]
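Because the power tower is evaluated from the top down, a sketch of mine builds the result starting from the innermost exponent 1:

```python
def exponential_factorial(n):
    """n$ = n^((n-1)^((n-2)^(...^1))), a right-associative power tower."""
    result = 1
    for k in range(2, n + 1):
        result = k ** result
    return result

print(exponential_factorial(4))  # 4^(3^(2^1)) = 4^9 = 262144
```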

Hyperfactorial

Finally the hyperfactorial is defined like this:

    \[H(n) = n^{n}(n-1)^{n-1} \cdots 2^{2} \cdot 1^{1}\]

Or:

    \[H(n)=\prod_{k=1}^{n}k^{k}\]

And hence is very similar to the Sloane definition of the superfactorial:

    \[H(4) = 4^{4} \cdot 3^{3} \cdot 2^{2} \cdot 1^{1} = 27648\]
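And as a short sketch of my own:

```python
from math import prod

def hyperfactorial(n):
    """H(n) = 1^1 * 2^2 * ... * n^n"""
    return prod(k ** k for k in range(1, n + 1))

print(hyperfactorial(4))  # 4^4 * 3^3 * 2^2 * 1^1 = 27648
```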

Differential Equations

What is a Differential Equation? (DE)

A differential equation is an equation which contains the derivative of one or more of the variables.

They are incredibly useful because they allow models to describe the rate of change of values rather than the size of the values.

First order Differential Equations

Some first order differential equations can simply be solved by separating variables and integrating. This is a normal maths technique so I will be skipping over it.


Linear first order differential equations are in the form:

    \[\frac{dy}{dx} + P(x)y = Q(x)\]

To solve these, you can observe how an application of the product rule would have produced the expression on the left side.

For instance, take the linear first order differential equation:

Observe that by applying the product rule to the expression

Gives the same expression as on the left of the above equation, taking into account the need for implicit differentiation.

After spotting this, integrating both sides with respect to x will yield a solution, but don’t forget a + c on one of the sides!

DONE! Don’t forget to apply the division by x to the c as well.
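To illustrate the steps above with an equation of my own choosing (not necessarily the one from the original example), consider:

    \[x\frac{dy}{dx} + y = 3x^{2}\]

By the product rule, \frac{d}{dx}(xy) = x\frac{dy}{dx} + y, which is exactly the left-hand side. Integrating both sides with respect to x gives

    \[xy = x^{3} + c \quad\Longrightarrow\quad y = x^{2} + \frac{c}{x}\]

and the division by x does indeed apply to the c as well.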

Integration Factor

Sometimes the equation will not immediately be susceptible to the reversed product rule attack, but we can change the equation to force it to.

First, a particular expression must be found, called the Integrating Factor (IF). This can be found using the formula:

    \[\text{IF} = e^{\int P(x)\,dx}\]

where P(x) is the coefficient of y when the equation is written in the standard linear form.

Multiply the entire equation by this expression and the trick described earlier will always work.
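To see the method work end to end, here is a sketch of my own (the equation is my choice, not from the original): for dy/dx + 2y = e^{-x}, the integrating factor is e^{2x}, giving (e^{2x}y)' = e^{x} and hence y = e^{-x} + ce^{-2x}. A Runge–Kutta integration agrees with that closed form:

```python
import math

# Illustrative ODE: dy/dx + 2y = e^(-x)
# IF = e^(int 2 dx) = e^(2x), so (e^(2x) y)' = e^x, and y = e^(-x) + c e^(-2x).

def f(x, y):
    return math.exp(-x) - 2 * y  # the ODE rearranged for dy/dx

def exact(x, c=1.0):
    return math.exp(-x) + c * math.exp(-2 * x)

# Integrate numerically from y(0) = 2 (which forces c = 1) using classic RK4.
h, y = 0.001, 2.0
for i in range(1000):
    x = i * h
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

assert abs(y - exact(1.0)) < 1e-8
print("RK4 agrees with the integrating-factor solution")
```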

Second Order Differential Equations

Second order differential equations are differential equations which contain a second derivative.

Homogeneous

Here is how to solve differential equations in the form:

    \[a\frac{d^{2}y}{dx^{2}} + b\frac{dy}{dx} + cy = 0\]

This is called a homogeneous equation because the right-hand side is equal to zero.

The first step in solving these equations is to form the Auxiliary Equation (A.E.):

    \[am^{2} + bm + c = 0\]

There are three scenarios to follow depending on the solutions to this quadratic:

Scenario 1: Distinct Real Solutions

In the case where the quadratic has two distinct real solutions m_1 and m_2, use the equation:

    \[y = Ae^{m_{1}x} + Be^{m_{2}x}\]

Scenario 2: Repeated Real Solution

In the case of a single repeated real solution m, use the equation:

    \[y = (A + Bx)e^{mx}\]

Scenario 3: Imaginary Solutions

Where the roots are of the form m = \alpha \pm \beta i, use the equation:

    \[y = e^{\alpha x}(A\cos(\beta x) + B\sin(\beta x))\]

These expressions are called the Complementary Function (C.F.).


Select the correct equation and sub in the roots and you have the General Solution (G.S).
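A numerical spot check with an illustrative equation of my own: for y'' + y' − 6y = 0 the auxiliary equation m² + m − 6 = 0 has distinct real roots 2 and −3, so the general solution should be y = Ae^{2x} + Be^{−3x}. Central differences confirm the residual is tiny:

```python
import math

# Illustrative example: y'' + y' - 6y = 0
# A.E.: m^2 + m - 6 = 0  =>  m = 2, m = -3  (distinct real roots)
# G.S.: y = A*e^(2x) + B*e^(-3x)

def y(x, A=1.5, B=-0.5):
    return A * math.exp(2 * x) + B * math.exp(-3 * x)

h = 1e-4
for x in [-1.0, 0.0, 0.7]:
    y1 = (y(x + h) - y(x - h)) / (2 * h)            # central first derivative
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # central second derivative
    residual = y2 + y1 - 6 * y(x)
    assert abs(residual) < 1e-3
print("general solution satisfies the ODE")
```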

Non-homogeneous

You need to add an extra step if the differential equation is not equal to 0, but rather to a function of x.

The first step is to get the Complementary Function like usual.

You then have to find the Particular Integral (P.I.) which satisfies the differential equation. First, observe the form of the function of x on the right side of the equation. Depending on its form, select one of the following forms of expression.

    Form of f(x)               Form of P.I.
    k                          λ
    ax + b                     λx + μ
    ax² + bx + c               λx² + μx + ν
    m cos(ωx)                  λ sin(ωx) + μ cos(ωx)
    m sin(ωx)                  λ sin(ωx) + μ cos(ωx)
    m sin(ωx) + n cos(ωx)      λ sin(ωx) + μ cos(ωx)

Once one of these expressions has been chosen, set it equal to y and find its first and second derivatives.

Then sub these three expressions into the differential equation. Compare coefficients to find the values of the unknowns.

You have now found the Particular Integral! Simply add this to the Complementary Function to find the complete general solution!

y = Complementary Function + Particular Integral
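The whole procedure can be spot-checked numerically with an illustrative equation of my own: for y'' − y' − 2y = 4x, the A.E. m² − m − 2 = 0 gives roots 2 and −1, so the C.F. is Ae^{2x} + Be^{−x}; trying the P.I. λx + μ and comparing coefficients gives λ = −2, μ = 1:

```python
import math

# Illustrative example: y'' - y' - 2y = 4x
# A.E.: m^2 - m - 2 = 0 => m = 2, m = -1, so C.F. = A*e^(2x) + B*e^(-x)
# f(x) = 4x, so try P.I. = lam*x + mu; substituting gives lam = -2, mu = 1.

def y(x, A=0.3, B=-1.2):
    return A * math.exp(2 * x) + B * math.exp(-x) + (-2 * x + 1)

h = 1e-4
for x in [-0.5, 0.0, 1.0]:
    y1 = (y(x + h) - y(x - h)) / (2 * h)            # central first derivative
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # central second derivative
    assert abs(y2 - y1 - 2 * y(x) - 4 * x) < 1e-3
print("C.F. + P.I. satisfies the non-homogeneous equation")
```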


Reflected Gradient Formula

The inverse of a function can be found by swapping x and y in an equation (and then rearranging back into terms of x). This has the effect of reflecting the original function in the line y = x, as you are essentially just swapping the x and y axes.


Some mates and I sat down a few weeks ago and asked the question: how do we reflect in a different line, not just y = x? We set out to construct a robust description of reflection, so we could get the equation of the reflection of any function in any other function!


We decided that we would need to find an equation giving the gradient of a line based on two other lines: the reflectant (which can be imagined as the incident ray) and the reflector (the mirror or boundary). I set out to find this formula and here is what I found:

Variation 1

This is the second version of the formula for the gradient of reflection (the first being a method similar to this but with an unnecessary step, making it less simplified).

The aim is to have two equations:

Reflectant: y=m1x

Reflector: y=m2x

And we want to find the gradient of the line which is formed when the reflectant is reflected in the reflector. We know from physics that the angle of incidence (the angle the reflectant makes with the normal at the point of collision with the reflector) is equal to the angle of reflection:

Angle of Incidence = Angle of Reflection

Now we’ve got that essential aspect out of the way, let me explain how I derived the formula:

First, know that we can represent a line through the origin as a complex number 1 + im, where m is the gradient of the line we want to convert to this complex representation. So for example the line y = x can be represented as the complex number 1 + i, and the line y = -3x can be represented as 1 - 3i.

This works because when the real part is 1, the imaginary part is equal to the gradient.

Now, consider this example:

The reflector line is x = 0 (let’s ignore that this is not possible in the form I said the lines would be in earlier), and the reflectant is y = -x.

Imagining the reflectant line as the complex number (1 - i), to get the reflected line we must take its complex conjugate, and then convert it back into the equation of a line. So (1 - i) goes to (1 + i), which is the complex number representing the line y = x, giving us our reflected line.

Okay, so let’s add taking the complex conjugate to our list of things we need to do.

Second consider the setup: Reflector: y = x, Reflectant: x=0.


Imagining again the reflectant line as the complex number (0 - i) and the reflector as (1 + i), let’s call (0 - i) z_1 and (1 + i) z_2. When you multiply two complex numbers together, you can imagine it as two properties of the complex numbers interacting:

The Argument of a complex number is the angle the line between the point and origin makes with the positive part of the real axis.

The Modulus of a complex number can be described as the distance from the number to the origin.

When you multiply two complex numbers, the resulting complex number will have a modulus that is the product of the two original moduli, and an argument that is the sum of the two original arguments.


Going back to our original scenario, observe that the argument of z_2 is π/4 (45⁰) and the argument of z_1 is -π/2 (-90⁰). The line we want to get is going to be y = 0, represented by the number 1 + 0i. So to get this, we need to multiply z_1 by z_2 TWO TIMES, to get a number with an argument of 0 (-90 + 2 × 45 = 0).

Okay, so let’s add multiplying by z_2 two times as another one of the things we need to do.

This gives us the formula:

    \[\overline{z_1}\, z_2^{2} = (1 - im_1)(1 + im_2)^{2}\]

where m_1 and m_2 are defined above.

Okay, let’s expand this:

    \[(1 - m_2^{2} + 2m_1m_2) + i(2m_2 - m_1 + m_1m_2^{2})\]

We now have this complex number, but it’s not useful to us yet, as the real part is not equal to 1.

So if we multiply the complex number by 1 / \text{Re}(z), we scale the entire thing so the real part is 1 and the imaginary part is whatever it lines up with.
Also at this point we can do a special trick: removing the i. Treat the imaginary part as the “y” and the real part as the “x”. This way the equation can be generalised to the Cartesian plane.

This gives us the final form of the equation:

    \[m = \frac{2m_2 - m_1 + m_1m_2^{2}}{1 - m_2^{2} + 2m_1m_2}\]

The first version of the gradient of reflection formula
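The formula can be tested against a direct vector reflection. This is a sketch of mine, assuming the rational form that the conjugate-and-square derivation produces, m = (2m₂ − m₁ + m₁m₂²)/(1 − m₂² + 2m₁m₂):

```python
import math
import random

def reflect_gradient(m1, m2):
    """Gradient of the line with gradient m1 reflected in the line with
    gradient m2 (both through the origin), using the derived formula."""
    return (2 * m2 - m1 + m1 * m2 ** 2) / (1 - m2 ** 2 + 2 * m1 * m2)

def reflect_gradient_vector(m1, m2):
    """Independent check: reflect the direction vector (1, m1) across the
    line through the origin with direction (1, m2) via v' = 2(v.u)u - v."""
    norm = math.hypot(1, m2)
    ux, uy = 1 / norm, m2 / norm
    vx, vy = 1, m1
    d = vx * ux + vy * uy
    rx, ry = 2 * d * ux - vx, 2 * d * uy - vy
    return ry / rx

random.seed(0)
for _ in range(100):
    m1, m2 = random.uniform(-3, 3), random.uniform(-3, 3)
    if abs(1 - m2 ** 2 + 2 * m1 * m2) < 1e-3:  # reflected line near-vertical
        continue
    assert math.isclose(reflect_gradient(m1, m2),
                        reflect_gradient_vector(m1, m2),
                        rel_tol=1e-7, abs_tol=1e-9)
print("formula agrees with direct vector reflection")
```

Note the guard: when the denominator vanishes, the reflected line is vertical and has no finite gradient.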

Variation 2

There is more that can be done – a slightly different approach…

A complex number can be written in the form

    \[z = re^{i\theta}\]

where r is the modulus of the complex number and θ is the argument of the complex number.

So if we convert our formula at the earliest step, we get this:

    \[e^{-i\arg(z_1)}\left(e^{i\arg(z_2)}\right)^{2}\]

We can simplify this by changing the arg functions to \arctan(m_1) and \arctan(m_2) (as the real parts of z_1 and z_2 are just 1, the imaginary part is simply the gradient of the line the number represents). We can also use index laws to first put the power of two into the second e’s exponent, and then add the exponents of both e factors, as they have the same base. We now get this:

    \[e^{i(2\arctan(m_2) - \arctan(m_1))}\]

It has been slightly rearranged so the negative term comes second.

To go further, we must figure out how to get this formula to work on the Cartesian plane, and thus must convert the formula into a complex number in the form a + bi.

To do this, we can use the mod-arg form of a complex number:

    \[z = r(\cos\theta + i\sin\theta)\]

First, factor the i out of the exponent:

    \[e^{i(2\arctan(m_2) - \arctan(m_1))} = e^{i\theta}\]

And see that now:

    \[\theta = 2\arctan(m_2) - \arctan(m_1)\]

So now sub theta into the mod-arg form of the complex number (with r = 1):

    \[\cos(2\arctan(m_2) - \arctan(m_1)) + i\sin(2\arctan(m_2) - \arctan(m_1))\]

And now we can just do what we did in the prior variation of this formula. Imagine we are scaling the real part up to 1, dragging the imaginary part along so it becomes the gradient of the new line. Do this by multiplying the imaginary part by 1 over the real part (dividing by the real part). Let’s drop the i now as well:

    \[m = \frac{\sin(2\arctan(m_2) - \arctan(m_1))}{\cos(2\arctan(m_2) - \arctan(m_1))}\]

Now you will see that this can be simplified by the trig identity \tan = \sin / \cos, and we get the final formula!

    \[m = \tan(2\arctan(m_2) - \arctan(m_1))\]

The second formula
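A quick check of mine that the two variations agree (assuming variation 1 takes the rational form derived earlier):

```python
import math
import random

def reflect_gradient_v1(m1, m2):
    """Variation 1 (rational form)."""
    return (2 * m2 - m1 + m1 * m2 ** 2) / (1 - m2 ** 2 + 2 * m1 * m2)

def reflect_gradient_v2(m1, m2):
    """Variation 2: m = tan(2*arctan(m2) - arctan(m1))."""
    return math.tan(2 * math.atan(m2) - math.atan(m1))

random.seed(1)
for _ in range(100):
    m1, m2 = random.uniform(-2, 2), random.uniform(-2, 2)
    if abs(1 - m2 ** 2 + 2 * m1 * m2) < 1e-3:  # skip near-vertical results
        continue
    assert math.isclose(reflect_gradient_v1(m1, m2),
                        reflect_gradient_v2(m1, m2),
                        rel_tol=1e-6, abs_tol=1e-6)
print("both variations agree")
```

This agreement is just the compound-angle identity for tan applied to 2\arctan(m_2) - \arctan(m_1).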

You can play around with

Variation One: https://www.desmos.com/calculator/vtoxqqta11

and

Variation Two: https://www.desmos.com/calculator/377n9twxwl

On Desmos

Thank you for reading.
