# Summary

In today's class we finished talking about convolutions, with the highlight being the proof and a few applications of the Mobius Inversion Formula. Afterwards we talked about quadratic congruences, ultimately defining and playing around with so-called quadratic residues.

# A Review of Convolutions

In class on Wednesday we defined the convolution of two arithmetic functions f and g as the arithmetic function $f*g$ defined by

(1)
\begin{align} (f*g)(n) = \sum_{d \mid n}f\left(\frac{n}{d}\right)g(d). \end{align}

#### Example: Convolution in practice

Just to get a little exercise with convolutions, let's try to compute $(\sigma * \phi)(6)$:

(2)
\begin{split} (\sigma * \phi)(6) &= \sum_{d \mid 6} \sigma\left(\frac{6}{d}\right)\phi(d)\\ &=\sigma\left(\frac{6}{1}\right)\phi(1)+\sigma\left(\frac{6}{2}\right)\phi(2)+\sigma\left(\frac{6}{3}\right)\phi(3)+\sigma\left(\frac{6}{6}\right)\phi(6)\\ &=\sigma(6)\phi(1)+\sigma(3)\phi(2)+\sigma(2)\phi(3)+\sigma(1)\phi(6)\\ &=12\cdot 1 + 4\cdot 1 + 3\cdot 2 + 1\cdot 2 = 24. \end{split}

$\square$
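To see formula (1) mechanically, here is a small Python sketch (an addition, not part of the original notes) that recomputes the worked example by brute force; the helper names `divisors`, `sigma`, and `phi` are ad hoc choices.

```python
from math import gcd

def divisors(n):
    """All positive divisors of n (brute force)."""
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n):
    """Sum-of-divisors function sigma(n)."""
    return sum(divisors(n))

def phi(n):
    """Euler's totient: how many 1 <= k <= n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def convolve(f, g, n):
    """(f*g)(n) = sum over d | n of f(n/d) * g(d), as in (1)."""
    return sum(f(n // d) * g(d) for d in divisors(n))

print(convolve(sigma, phi, 6))  # → 24, matching the hand computation
```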

After we defined convolutions, we talked about a few important identities, like

• $(f*I)(n) = f(n)$
• $(P_0*\mu)(n) = I(n)$

(The functions $I$, $P_k$ and $\mu$ were all defined last class period.) It was this last identity which led to the Mobius Inversion Formula.

# Mobius Inversion

Last class period we finished by stating the following

Theorem (MIF): If f and g are arithmetic functions, then

$\displaystyle f(n) = \sum_{d \mid n}g(d) \quad \mbox{ iff } \quad g(n) = \sum_{d \mid n} \mu\left(\frac{n}{d}\right) f(d).$

Proof: We know that

(3)
\begin{align} f(n) = \sum_{d \mid n} g(d) = \sum_{d\mid n} 1\cdot g(d) =\sum_{d \mid n}\left(\frac{n}{d}\right)^0\cdot g(d) = \sum_{d \mid n}P_0\left(\frac{n}{d}\right)\cdot g(d). \end{align}

Hence our hypothesis is that $f = P_0 * g$. But then we can convolve with $\mu$ on both sides to get

(4)
\begin{align} \mu*f = \mu*P_0*g = (\mu*P_0)*g=I*g=g. \end{align}

If we substitute the definition for the convolution on the left hand side of this expression, it tells us that

(5)
\begin{align} g(n) = \sum_{d\mid n}\mu\left(\frac{n}{d}\right)f(d). \end{align}

Going the other way, suppose that we are told

(6)
\begin{align} g(n) = \sum_{d\mid n}\mu\left(\frac{n}{d}\right)f(d). \end{align}

This can be reexpressed as the convolution identity $g = \mu*f$. Now convolving on both sides by $P_0$ gives

(7)
\begin{align} P_0*g = P_0*\mu*f = (P_0*\mu)*f=I*f=f. \end{align}

By substituting in the definition of convolution on the left hand side, this tells us that

(8)
\begin{align} f(n) = \sum_{d \mid n}P_0\left(\frac{n}{d}\right)\cdot g(d) = \sum_{d \mid n}\left(\frac{n}{d}\right)^0\cdot g(d) = \sum_{d\mid n} 1\cdot g(d) = \sum_{d \mid n} g(d), \end{align}

as desired. $\square$
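The proof is short enough to sanity-check numerically. The sketch below (an addition, not from the notes) builds f from an arbitrary g via the divisor sum and then recovers g by the inversion formula; `mu` is a straightforward trial-division implementation of the Mobius function.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    """Mobius function: 0 if a square divides n, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # p^2 divided the original n
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def g(n):                       # an arbitrary arithmetic function to invert
    return n * n + 1

def f(n):                       # f(n) = sum_{d | n} g(d)
    return sum(g(d) for d in divisors(n))

def g_recovered(n):             # MIF: g(n) = sum_{d | n} mu(n/d) f(d)
    return sum(mu(n // d) * f(d) for d in divisors(n))

print(all(g_recovered(n) == g(n) for n in range(1, 60)))  # → True
```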

#### Example: Inverting Phi

We know from a long time ago that

(9)
\begin{align} n = \sum_{d \mid n} \phi(d). \end{align}

The left hand side is the function $P_1(n)$ (since $n = n^1 = P_1(n)$), and the right hand side is $P_0 * \phi$. MIF therefore tells us that $\phi = \mu*P_1$, which means we have

(10)
\begin{align} \phi(n) = \sum_{d \mid n}\mu\left(\frac{n}{d}\right)P_1(d) = \sum_{d \mid n}\mu\left(\frac{n}{d}\right)d. \end{align}

To see this in action, let's test it out for $n=12$:

(11)
\begin{split} \phi(12) &= \sum_{d \mid 12}\mu\left(\frac{12}{d}\right)d\\ &=\mu\left(\frac{12}{1}\right)\cdot 1 +\mu\left(\frac{12}{2}\right)\cdot 2+\mu\left(\frac{12}{3}\right)\cdot 3+\mu\left(\frac{12}{4}\right)\cdot 4+\mu\left(\frac{12}{6}\right)\cdot 6+\mu\left(\frac{12}{12}\right)\cdot 12\\&=0\cdot 1 + 1\cdot 2 + 0\cdot 3 + (-1)\cdot 4 + (-1)\cdot 6 + 1\cdot 12 \\&= 4. \end{split}

It works!
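A quick script (an addition using ad hoc helpers, not from the notes) confirms the inverted formula for many values of n at once:

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    """Mobius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def phi(n):
    """Euler's totient, brute force."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def phi_inverted(n):
    """Formula (10): phi(n) = sum_{d | n} mu(n/d) * d."""
    return sum(mu(n // d) * d for d in divisors(n))

print(phi_inverted(12))                                       # → 4
print(all(phi_inverted(n) == phi(n) for n in range(1, 200)))  # → True
```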

#### Example: Inverting Sigmak

We know that $\sigma_k$ is defined as $\sum_{d \mid n}d^k$. But notice this means that

(12)
\begin{align} \sigma_k(n) = \sum_{d \mid n}1\cdot d^k = \sum_{d \mid n}\left(\frac{n}{d}\right)^0d^k = \sum_{d \mid n}P_0\left(\frac{n}{d}\right)P_k(d) = (P_0*P_k)(n). \end{align}

MIF therefore tells us that $\mu*\sigma_k = \mu*P_0*P_k = (\mu*P_0)*P_k = I*P_k = P_k$, so we have

(13)
\begin{align} n^k = P_k(n) = \sum_{d \mid n}\mu\left(\frac{n}{d}\right)\sigma_k(d). \end{align}

To see this in action, let's take $n=6$. We get:

(14)
\begin{split} 6^k &= \sum_{d \mid 6} \mu\left(\frac{6}{d}\right)\sigma_k(d)\\ &= \mu\left(\frac{6}{1}\right)\sigma_k(1)+\mu\left(\frac{6}{2}\right)\sigma_k(2)+\mu\left(\frac{6}{3}\right)\sigma_k(3)+\mu\left(\frac{6}{6}\right)\sigma_k(6)\\ &=1\cdot(1^k) + (-1)(1^k+2^k) + (-1)(1^k+3^k) + (1)(1^k+2^k+3^k+6^k). \end{split}

Awesome!
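Formula (13) also checks out numerically across many n and k; here is a verification sketch (an addition, not from the notes):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    """Mobius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def sigma_k(n, k):
    """sigma_k(n) = sum of d^k over the divisors d of n."""
    return sum(d ** k for d in divisors(n))

def p_k_inverted(n, k):
    """Formula (13): n^k = sum_{d | n} mu(n/d) * sigma_k(d)."""
    return sum(mu(n // d) * sigma_k(d, k) for d in divisors(n))

print(all(p_k_inverted(n, k) == n ** k
          for n in range(1, 40) for k in range(4)))  # → True
```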

#### Bonus Example: Inverting to Find Mu Squared

For an integer n, we'll let $\omega(n)$ denote the number of distinct prime factors of n. With this notation, we claim that

(15)
\begin{align} \sum_{d \mid n}\mu^2(d) = 2^{\omega(n)}. \end{align}

(Here the function $\mu^2(d)$ means $\mu(d)\cdot\mu(d)$.) To see why this is true, notice that $\mu^2$ takes on only the values 0 and 1: if $\mu(d) = \pm 1$, then $\mu^2(d) = (\pm 1)^2 = 1$, and if $\mu(d) = 0$, then $\mu^2(d) = 0^2 = 0$. Hence those divisors d with $\mu(d) = 0$ contribute nothing to the sum above, whereas those divisors d with $\mu(d) = \pm 1$ each contribute 1. To prove the identity, then, we need to show that there are exactly $2^{\omega(n)}$ divisors d of n with $\mu(d) = \pm 1$.

For this, let's write $n = p_1^{e_1}\cdots p_{\omega(n)}^{e_{\omega(n)}}$ for the prime factorization of n (note that there are $\omega(n)$ primes in this factorization, since $\omega(n)$ counts the distinct prime divisors of n). This means that any divisor d of n has

(16)
\begin{align} d = p_1^\square p_2^\square\cdots p_{\omega(n)}^\square, \end{align}

where the $\square$ above each $p_i$ denotes the number of times $p_i$ appears in the factorization of d.

Now if the exponent for $p_i$ in the factorization of d is 2 or larger, then this means $p_i^2 \mid d$, and so we'd have $\mu^2(d) = 0$. On the other hand, if all the exponents are 0 or 1, then this means that $\mu^2(d) = 1$. Hence to count the divisors d for which $\mu^2(d) = 1$, we just need to count the number of ways to fill in the $\square$'s above with either 0 or 1. Since each $\square$ can be filled with either 0 or 1, and since there are $\omega(n)$ squares total, this means that the total number of ways to fill in these boxes is $2^{\omega(n)}$. Hence there are $2^{\omega(n)}$ divisors d of n so that $\mu(d) = \pm 1$, and so we get

(17)
\begin{align} \sum_{d \mid n}\mu^2(d) = 2^{\omega(n)} \end{align}

as promised.

Now that we know this equality is true, note that we can rewrite this equality as

(18)
\begin{align} P_0 * \mu^2 = 2^{\omega}. \end{align}

MIF then tells us that $\mu*2^{\omega} = \mu*P_0*\mu^2 = (\mu*P_0)*\mu^2 = I* \mu^2 = \mu^2$. Hence we get the interesting identity

(19)
\begin{align} \mu^2(n) = \sum_{d \mid n}\mu\left(\frac{n}{d}\right)2^{\omega(d)}. \end{align}

Awesome! $\square$
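Both identities (15) and (19) are easy to verify computationally; the sketch below (an addition, not from the notes) checks them for the first few hundred integers:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mu(n):
    """Mobius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def omega(n):
    """Number of distinct prime factors of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

# (15): sum_{d|n} mu(d)^2 == 2^omega(n)
ok15 = all(sum(mu(d) ** 2 for d in divisors(n)) == 2 ** omega(n)
           for n in range(1, 300))
# (19): mu(n)^2 == sum_{d|n} mu(n/d) * 2^omega(d)
ok19 = all(mu(n) ** 2 == sum(mu(n // d) * 2 ** omega(d) for d in divisors(n))
           for n in range(1, 300))
print(ok15, ok19)  # → True True
```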

Now that we've finished chapter 3, we're going to change course slightly. Our starting point is to generalize the study of linear congruences from chapter 2. You will remember that in chapter 2 we were able to solve equations of the form

(20)
\begin{align} ax \equiv b \mod{m} \end{align}

for integers a, b and m (solvable exactly when $\gcd(a,m) \mid b$). Having solved this problem, it is natural to ask whether we can solve quadratic congruence equations:

(21)
\begin{align} ax^2+bx+c \equiv 0 \mod{m}. \end{align}

Now we know from the Chinese Remainder Theorem that solving a congruence equation of this form will rely on solving equations of the form

(22)
\begin{align} ax^2+bx+c \equiv 0 \mod{p^k}, \end{align}

where here p is a prime number and $k \geq 1$. It turns out, though, that except in the case $p=2$, this problem can in turn be solved by solving the seemingly easier congruence

(23)
\begin{align} ax^2 + bx+c \equiv 0 \mod{p}. \end{align}

The key to this reduction is called Hensel's Lemma, but we're not going to go into how Hensel's Lemma works or why $p=2$ creates problems for applying Hensel. For now, we'll just take this on faith. You'll also have to take on faith (for now, at least), that being able to solve this last congruence is tantamount to solving a congruence of the form

(24)
\begin{align} y^2 \equiv d \mod{p}, \end{align}

where here y is a variable that depends on x,a,b and c from the equation above, and d is a constant which depends on a,b and c.
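For what it's worth, this last reduction is just completing the square. Assuming p is odd and $p \nmid a$ (so the congruence is genuinely quadratic and $4a$ is invertible mod p; these assumptions are mine, the notes leave the reduction unstated), multiplying through by $4a$ gives

\begin{align} 4a(ax^2+bx+c) = (2ax+b)^2 - (b^2-4ac) \equiv 0 \mod{p}, \end{align}

so taking $y = 2ax+b$ and $d = b^2-4ac$ turns the quadratic congruence into $y^2 \equiv d \mod{p}$.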

The moral of the story is this: if you want to solve general quadratic congruences, you need to first be able to solve the "simpler" problem of determining which residues modulo p are actually squares. This leads to our motivating

Definition: Let p be an odd prime. Then n is a quadratic residue modulo p if $p \nmid n$ and

$x^2 \equiv n \mod{p}$

has at least one solution. Numbers n with $p \nmid n$ but for which the above equation has no solutions are called quadratic nonresidues.

#### Example: Is it a quadratic residue?

Suppose that someone asks you whether 2 is a quadratic residue modulo 7. To answer this question, let's see what the square of every residue is modulo 7; if 2 shows up on this list, then we'll know it's a quadratic residue, and if it doesn't show up on the list, then we'll know it's not a quadratic residue (i.e., a quadratic nonresidue).

| $x$ mod 7 | $x^2$ mod 7 |
|:---:|:---:|
| 0 | 0 |
| 1 | 1 |
| 2 | 4 |
| 3 | 2 |
| 4 | 2 |
| 5 | 4 |
| 6 | 1 |
Since 2 shows up as the square of 3 (and 4) mod 7, this means that 2 is a quadratic residue mod 7. $\square$
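The table above is easy to regenerate for any modulus; this throwaway snippet (an addition, not in the notes) lists the distinct nonzero squares mod p:

```python
def quadratic_residues(p):
    """Distinct nonzero quadratic residues modulo p."""
    return sorted({x * x % p for x in range(1, p)})

print(quadratic_residues(7))       # → [1, 2, 4]
print(2 in quadratic_residues(7))  # → True
```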

To finish off our introduction to quadratic residues, let's count the number of solutions to equations of the form

(25)
\begin{align} x^2 \equiv n \mod{p} \end{align}

when $p \nmid n$.

Lemma: If $p \nmid n$, then the equation $x^2 \equiv n \mod{p}$ has either 0 solutions or 2 solutions.

Proof: First, if n is a quadratic nonresidue, then by definition $x^2 \equiv n \mod{p}$ has no solutions.

Now if n is a quadratic residue, then by definition this means $x^2 \equiv n \mod{p}$ does have at least one solution. Let's choose one solution $x_0$. Notice that we then also have

(26)
\begin{align} (-x_0)^2 \equiv (-1)^2(x_0)^2 \equiv n \mod{p}, \end{align}

so that $-x_0$ is also a solution. Furthermore, we know that $x_0 \not\equiv -x_0 \mod{p}$, since otherwise we'd have $p \mid 2x_0$, implying either $p \mid 2$ (impossible since p is odd) or $p \mid x_0$ (impossible since this would mean $n \equiv x_0^2 \equiv 0 \mod{p}$, which is ruled out since $p\nmid n$ by hypothesis). So this means that $x_0,-x_0$ are distinct solutions to the equation.

Are there any more? If $x_1$ were some solution to $x^2 \equiv n \mod{p}$, then this would mean that

(27)
\begin{align} x_0^2 \equiv n \equiv x_1^2 \mod{p}. \end{align}

This in turn implies that $p \mid x_0^2 - x_1^2 = (x_0-x_1)(x_0+x_1)$, and then Euclid's Lemma says that $p \mid x_0 - x_1$ or $p \mid x_0+x_1$. In the former case we'd have $x_1 \equiv x_0 \mod{p}$, whereas in the latter we'd have $x_1 \equiv -x_0 \mod{p}$. Hence we see that any solution is equivalent to one of the two we have already produced, and so there aren't any new solutions. This makes the total number of solutions in this case equal to 2. $\square$
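The lemma's 0-or-2 dichotomy can be checked by brute force for small odd primes (a verification sketch, not in the notes):

```python
def solution_count(n, p):
    """Number of x in 0..p-1 with x^2 ≡ n (mod p)."""
    return sum(1 for x in range(p) if (x * x - n) % p == 0)

# For every odd prime p below 50 and every n with p ∤ n,
# the solution count should be 0 or 2.
odd_primes = [p for p in range(3, 50)
              if all(p % q for q in range(2, p))]
counts = {solution_count(n, p) for p in odd_primes for n in range(1, p)}
print(counts)  # → {0, 2}
```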