Mathematica et cetera


A proof of Euler’s Reflection Formula

I just found the following proof of Euler’s reflection formula. The thing I like about it is that all it requires is dumbly marching forward; there is no trickery.

When s and 1-s are both positive, that is, when 0<s<1,

    \[\Gamma(s)\Gamma(1-s)=\int_0^\infty t^{s-1}e^{-t}dt\int_0^\infty t^{-s}e^{-t}dt.\]

I’m going to join those integrals into one double integral (the integrand is positive, so Fubini’s theorem applies), giving a different name to each copy of t,

    \[=\int_0^\infty\int_0^\infty x^{s-1}e^{-x}y^{-s}e^{-y}dxdy\]

    \[=\int_0^\infty\int_0^\infty \frac 1x \left(\frac xy\right)^s e^{-(x+y)}dxdy.\]

Now change variables to u=\frac xy and v=x+y. The inverse transformation is x=\frac{uv}{u+1} and y=\frac v{u+1}, and the Jacobian determinant is \frac v{(u+1)^2}, so

    \[=\int_0^\infty\int_0^\infty\frac{u+1}{uv} u^s e^{-v}\frac v{(u+1)^2}dudv\]

    \[=\int_0^\infty\int_0^\infty\frac{u^{s-1}}{u+1}e^{-v}dudv=\int_0^\infty\frac{u^{s-1}}{u+1}du.\]

To evaluate this integral, split it into two parts

    \[\int_0^\infty\frac{u^{s-1}}{u+1}du=\int_0^1\frac{u^{s-1}}{u+1}du+\int_1^\infty\frac{u^{s-1}}{u+1}du\]

    \[=\int_0^1\frac{u^{s-1}}{1+u}du+\int_1^\infty\frac{u^{s-2}}{1+\frac 1u}du.\]

Now expand the denominators as geometric series

    \[=\int_0^1u^{s-1}\sum_{h=0}^\infty (-1)^hu^hdu+\int_1^\infty u^{s-2}\sum_{h=0}^\infty (-1)^hu^{-h}du\]

and now change the order of the sums and integrals (the partial sums of the alternating geometric series lie between 0 and 1, so dominated convergence justifies the swap)

    \[=\sum_{h=0}^\infty\int_0^1(-1)^hu^{s+h-1}du+\sum_{h=0}^\infty\int_1^\infty (-1)^hu^{s-h-2}du\]

    \[=\sum_{h=0}^\infty\frac {(-1)^h}{s+h}+\frac {(-1)^h}{h+1-s}\]

and shifting the index in the second sum and pairing the terms gives

    \[=\frac 1s+\sum_{h=1}^\infty \frac {(-1)^h 2s}{s^2-h^2}.\]

Now define the function

    \[f(x)=x\prod_{n=1}^\infty \frac{4n^2-x^2}{4n^2}\frac{(2n-1)^2}{(2n-1)^2-x^2}\]

and observe that

    \[f(x)=\left(x\prod_{n=1}^\infty \left(1-\frac{x^2}{4n^2}\right)\right)\left(\prod_{n=0}^\infty \left(1-\frac{x^2}{(2n+1)^2}\right)\right)^{-1}\]

and use Euler’s products for the sine and cosine

    \[\sin(\pi x)=\pi x\prod_{n=1}^\infty \left(1-\frac{x^2}{n^2}\right)\quad \cos\left(\frac\pi 2 x\right)=\prod_{n=0}^\infty \left(1-\frac{x^2}{(2n+1)^2}\right)\]

to see that

    \[f(x)=\frac 2\pi \sin\left(\frac\pi 2 x\right)\frac 1{\cos\left(\frac\pi 2 x\right)}=\frac 2\pi\tan\left(\frac\pi 2 x\right)\]

Now take the logarithmic derivative, computing it from the closed form on one side and from the defining product on the other:

    \[(\log f(x))'=\frac{\pi}{\sin(\pi x)}=\frac 1x+\sum_{n=1}^\infty \frac{2x}{(2n-1)^2-x^2}-\frac{2x}{4n^2-x^2}\]

The right hand side is exactly the sum we found before (with the even and odd h separated), with s in place of x; therefore

    \[\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin(\pi s)}.\]

We have proven the equality for 0<s<1, and since both sides are holomorphic away from their poles at the integers and coincide on an open set, they must be equal everywhere.
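As a quick numerical sanity check (no substitute for the proof), here is a minimal C++ snippet comparing the two sides with the standard library’s gamma function:

#include <cmath>
#include <cstdio>

int main() {
    // compare Gamma(s)*Gamma(1-s) with pi/sin(pi*s) for a few s in (0,1)
    const double pi = std::acos(-1.0);
    for (double s : {0.1, 0.25, 0.5, 0.9}) {
        double lhs = std::tgamma(s) * std::tgamma(1.0 - s);
        double rhs = pi / std::sin(pi * s);
        std::printf("s = %.2f: %.12f vs %.12f\n", s, lhs, rhs);
    }
}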

A346587

OEIS link. This is the sequence of k such that

    \[\frac{k}{\lambda(k)}\]

is incrementally largest, where \lambda is the Carmichael function.

I wrote a math stackexchange post which I’ll reproduce below:

If we fix a list of primes and write n=2^{a_0} p_1^{a_1}\dots p_k^{a_k}, where only a_0 can be 0, we can find what the exponents should be. There are four cases: a_0=0, a_0=1, a_0=2, and a_0>2. I’ll do the last one; the others are analogous.

    \[\lambda(n)=\text{lcm}(2^{a_0-2},(p_1-1)p_1^{a_1-1},\dots,(p_k-1)p_k^{a_k-1})\]

and now we choose each a_i to be at most one (2 for a_0) plus the largest exponent of p_i which occurs in any of the other parentheses, because choosing a_i greater than that would multiply both n and \lambda(n) by the same power of p_i, which cancels out in the ratio.

So this means that any given list of primes appears in the sequence only a finite number of times, and we can find what the exponents are. For example, if we want the primes that divide n to be among 2,3,5,7, the possibilities for n are 105,210,315,420,630,840,1260,2520, and of those only 1260 appears in the sequence. One thought I had is that we would want n to be divisible by some prime p such that p-1 has a prime raised to a large power in its factorization, but, for example, no n divisible only by 2 and 17 appears in the sequence. Larger Fermat primes don’t work either. Some of the terms in the sequence are primorials, as happens for the totient function, but some primorials don’t appear; for example 30 and 210 don’t.
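For small n the records can be brute-forced straight from the definition. Here is a minimal self-contained sketch (mine, not the program behind the OEIS entry), with \lambda computed by trial division:

#include <cstdint>
#include <cstdio>
#include <numeric>

// Carmichael's function: lambda(2)=1, lambda(4)=2, lambda(2^k)=2^(k-2)
// for k>=3, lambda(p^k)=p^(k-1)(p-1) for odd p, and lambda of a product
// of coprime prime powers is the lcm of their lambdas.
uint64_t carmichael(uint64_t n) {
    uint64_t result = 1;
    int k = 0;
    while (n % 2 == 0) { n /= 2; ++k; }
    if (k == 2) result = 2;
    else if (k >= 3) result = uint64_t(1) << (k - 2);
    for (uint64_t p = 3; p * p <= n; p += 2) {
        if (n % p != 0) continue;
        uint64_t pk = 1;
        while (n % p == 0) { n /= p; pk *= p; }
        result = std::lcm(result, (pk / p) * (p - 1));
    }
    if (n > 1) result = std::lcm(result, n - 1);  // leftover prime factor
    return result;
}

int main() {
    // print n whenever n/lambda(n) sets a new record
    double record = 0;
    for (uint64_t n = 1; n <= 1000000; ++n) {
        double ratio = double(n) / carmichael(n);
        if (ratio > record) {
            record = ratio;
            std::printf("%llu\n", (unsigned long long)n);
        }
    }
}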


The question of describing the numbers which appear on the sequence feels like it shouldn’t be hard, but I haven’t been able to prove anything about it, except the result above. It also seems like the sequence encodes something about the sequence of numbers one less than the primes, but it’s unclear what that is. Alas, a seemingly interesting problem I cannot solve.

Counting the number of chess positions

I recently came across a github repo by John Tromp with an exceedingly simple premise: generate a bunch of random chess positions, verify whether they’re legal, and use statistics to approximate how many legal positions there are. Clearly that idea has the precise simplicity and novelty to make one feel like an idiot for not having come up with it oneself. He did that with a sample of 1000 positions and found a 95% confidence interval of (4\pm 1.1)\cdot 10^{44} legal positions. Then he decided to crowdsource a sample of 10000 positions, which should pin down one more significant digit of the estimate. I proved that 8 of them are legal by finding games that result in the random position. It was… odd. I expected a randomly generated chess position to be weird, but I wasn’t prepared for just how weird they were going to be. I think that’s sort of like how old movies imagined aliens to be little green men: the weirdness we come up with is within the bounds of what we know. Here’s an example:

Anarchy! How many queens are there? And why does black have five knights? Anyway, the position is legal, as I found a game that produces it. As of writing, there are still 132 positions missing proofs that they’re either legal or illegal, so I encourage everyone reading this to go there and try some. The procedure I came up with to produce the proof games is:

  1. Count the number of black and white pieces. Subtract those from 16 to get the number of pieces you need to capture from each side.
  2. Find out whether either side is missing a piece they started with, for example a light-squared bishop. If so, you have to capture that piece; otherwise, capturing pawns is usually more convenient.
  3. If either player has more than one light-squared or dark-squared bishop, make sure you can produce the extra one by promotion.
  4. Get the pawns out of each other’s way for promotion. This is the trickiest part in my opinion. In the example above I had to preserve the white b- and g-pawns and the black c- and g-pawns. I also had to make room for 5 black and 4 white promotions, and get rid of the black dark-squared bishop. I had finished this step by move 11, the last capture of the game, when the board looked like this:

  5. Get the kings into their final positions, and if possible get the pieces around them into place too, so you don’t produce too many checks when moving everything else.
  6. Do all the necessary promotions.
  7. Get all the pieces into position.

Edit: the ten thousand position sample is complete. It yielded a 95% confidence interval of (4.45\pm 0.37)\cdot 10^{44} legal positions.
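The statistics involved are just a binomial proportion scaled by the size of the space being sampled from. Here is a minimal sketch of the computation; the constants are illustrative assumptions chosen to be consistent with the first interval quoted above, not values taken from the repo:

#include <cmath>
#include <cstdio>

int main() {
    // Illustrative assumptions: U is roughly the number of positions the
    // ranking scheme can encode, n the sample size, k the number of
    // sampled positions that turned out to be legal.
    const double U = 8.7e45;
    const double n = 1000, k = 46;

    double p = k / n;                        // estimated legal fraction
    double se = std::sqrt(p * (1 - p) / n);  // binomial standard error
    std::printf("estimate: %.2e legal positions\n", U * p);
    std::printf("95%% CI half-width: %.2e\n", 1.96 * U * se);
}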

Consistent Legendre Symbols in LaTeX

Quadratic reciprocity tells us that

    \[\left(\frac 3p\right)\left(\frac p3\right)=(-1)^{(p-1)/2}\]

which is just terrible: the parentheses are different sizes, and in the Legendre symbol on the left the 3 and p are closer together than on the right. Normally this doesn’t come up, because you rarely have two nearly identical fractions side by side and enclosed by parentheses, but when talking about quadratic residues it’s quite common. One solution is to change the \left and \right tags to \bigg

    \[\bigg(\frac 3p\bigg)\bigg(\frac p3\bigg)=(-1)^{(p-1)/2}.\]

It looks good enough, but if you’re going to the trouble, you might as well define a command to do it for you. I copied one from the TeX StackExchange and modified it with the fix above:

\makeatletter
% Draw the dashed bar of the symbol: a box of width #1, filled with
% 1pt rule segments separated by 0.5pt gaps, pulled in by #2pt per side.
\def\legendre@dash#1#2{\hb@xt@#1{%
\kern-#2\p@
\cleaders\hbox{\kern.5\p@
\vrule\@height.2\p@\@depth.2\p@\@width\p@
\kern.5\p@}\hfil
\kern-#2\p@
}}
% Typeset the barless fraction with \genfrac (#1 = math style, #3 = an
% optional \vphantom to equalize heights), measure its width, overlay
% the dashed bar, and wrap it in fixed-size \bigg parentheses (the fix).
\def\@legendre#1#2#3#4#5{\mathopen{}\bigg(
\sbox\z@{$\genfrac{}{}{0pt}{#1}{#3#4}{#3#5}$}%
\dimen@=\wd\z@
\kern-\p@\vcenter{\box0}\kern-\dimen@\vcenter{\legendre@dash\dimen@{#2}}\kern-\p@
\bigg)\mathclose{}}
% Pick the right variant for display/text/script/scriptscript style.
\newcommand\legendre[2]{\mathchoice
{\@legendre{0}{1}{}{#1}{#2}}
{\@legendre{1}{.5}{\vphantom{1}}{#1}{#2}}
{\@legendre{2}{0}{\vphantom{1}}{#1}{#2}}
{\@legendre{3}{0}{\vphantom{1}}{#1}{#2}}
}
\def\dlegendre{\@legendre{0}{1}{}}
\def\tlegendre{\@legendre{1}{0.5}{\vphantom{1}}}
\makeatother

It looks like this

\legendre{p}{3}\legendre{3}{p}=(-1)^{(p-1)/2}

    \[\legendre{p}{3}\legendre{3}{p}=(-1)^{(p-1)/2}.\]

Which is nice, since it isn’t identical to a fraction. It doesn’t, however, fix the issue with the spacing. I think that’s because the bounding box of the p in this font has some whitespace on top and the 3 has some on the bottom. But you can fix that by changing the font.

The Gregorian Calendar is Wrong

Caesar!

Picture this: it is January, 48 BC. Caesar crossed the Rubicon nearly a year ago. Since then he has marched on Rome, gone to Spain to destroy the forces loyal to the Senate there, and come back. Now he is in Italy, while Pompey and most of the Senate are in Greece assembling an army to meet him. There is a real possibility that Pompey will use the resources of the richer Roman East to outnumber and defeat Caesar. Furthermore, the Senate’s navy, led by Bibulus, controls the Mediterranean and is blockading the Strait of Otranto.

However, Caesar knows something Bibulus doesn’t. The Roman calendar is a real mess. It is a lunisolar calendar with common years of 355 days and leap years of 377 or 378 days. The Pontifex Maximus, that is, Caesar, was responsible for manually adding in leap years to keep the calendar in sync. But Caesar had been busy conquering Gaul for the last ten years, so the calendar had drifted so far that it said January when it was actually early autumn. It was too dangerous to sail in winter, so Bibulus assumed Caesar wouldn’t dare a crossing. Caesar, of course, did dare the crossing. Then he proceeded to destroy Pompey’s army at Pharsalus. Pompey fled to Egypt, where king Ptolemy XIII had him beheaded. I think the lesson here is not to mess with the guy who owns the calendar.

After winning the civil war, Caesar proceeded to fix the calendar so nobody could pull this trick ever again. The calendar he instituted, known as the Julian calendar, had the familiar 365-day common years and 366-day leap years every 4 years, for an average year length of 365.25 days. It was used into the XXth century in places like Russia and the Ottoman Empire. But in the XVIth century, Pope Gregory XIII noticed the calendar was 10 days out of sync, which was messing up the calculation of the date of Easter. The Pope didn’t use this knowledge to win any battles, but he did fix the calendar to the precision of a day for the next 7700 years, which seems good enough. The Gregorian calendar has an average year length of 365.2425 days. In the XIXth century, John Herschel proposed modifying the Gregorian calendar by removing a leap year every 4000 years, which would bring it yet closer to the true tropical year. I will propose a better solution.

The true length of a tropical year is 365.242189 days, and its continued fraction is

    \[365.242189=365+\frac 1{4+\frac 1{7+\frac 1{1+\frac 1{3+\frac 1\ddots}}}}=[365;4,7,1,3,40,2,3,5].\]

This means that we can crop the continued fraction and get a good rational approximation of the length of the year. Cropping just before a large coefficient yields an especially good approximation compared to the size of the denominator, so I propose a calendar with year length [365;4,7,1,3]=365+\frac{31}{128}=365+\frac{1}{4}-\frac 1{128}: common years of 365 days, intercalated with leap years of 366 days every 4 years, except once every 128 years, which stays a common year. This is simpler than the rules for the Gregorian calendar and yet it achieves a smaller error

    \begin{equation*} \begin{split} 365.242189-365-\frac{1}{4}+\frac{1}{128} & = \phantom{-}0.0000015 \\ 365.242189-365-\frac{1}{4}+\frac{1}{100}-\frac 1{400} & = -0.000311 \end{split} \end{equation*}

So we’d be on the correct date for the next six hundred thousand years, which is close to the maximum length of time we can predict that sort of thing over anyway.
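Here is a minimal sketch that recomputes the continued fraction coefficients and convergents from above, using the standard recurrence h_n=a_nh_{n-1}+h_{n-2} (and likewise for the denominators); the double-precision input limits how many terms come out right:

#include <cmath>
#include <cstdio>

int main() {
    // continued fraction of the tropical year, with convergents
    double x = 365.242189;
    long long h_prev = 0, h = 1, k_prev = 1, k = 0;
    for (int i = 0; i < 6; ++i) {
        long long a = (long long)std::floor(x);
        long long h_next = a * h + h_prev, k_next = a * k + k_prev;
        std::printf("a = %lld, convergent = %lld/%lld = %.7f\n",
                    a, h_next, k_next, double(h_next) / k_next);
        h_prev = h; h = h_next; k_prev = k; k = k_next;
        x = 1.0 / (x - double(a));  // loses precision as i grows
    }
}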

A344202

OEIS link. This is the sequence of primes p such that

    \[\gcd(\text{ord}_p(2), \text{ord}_p(3)) = 1.\]

I first thought of this sequence after reading a mathoverflow post about the Diophantine equation

    \[(2^x-1)(3^y-1)=2z^2.\]

The sequence arises when you consider solving the equation with x,y coprime. You might then also expect 2^x-1 and 3^y-1 to be coprime, in which case you would have

    \[2^x-1=p^2\quad 3^y-1=2q^2\]

for integers p,q, because when a product of coprime integers equals twice a square, one factor must be a square and the other twice a square (and 2^x-1 is odd, so it is the square). But you would be wrong to think that: a prime p can divide both 2^x-1 and 3^y-1 for coprime x,y precisely when \gcd(\text{ord}_p(2),\text{ord}_p(3))=1, since p divides 2^x-1 exactly when \text{ord}_p(2) divides x. However, the primes that can make that conjecture fail are somewhat rare: 683, 599479, 108390409, 149817457, 666591179, 2000634731, 4562284561, 14764460089, 24040333283, 2506025630791, 5988931115977, and no more up to 10^{13}. OEIS generates these nice graphs:

So it looks like the terms are more or less linearly distributed on the logarithmic graph, which is to say the sequence grows roughly exponentially.

Someone called sheerluck/Andrew and I wrote a program to calculate these numbers, which is somewhat interesting in its own right. It uses the Sieve of Eratosthenes to generate batches of primes and then spawns a thread to check all the primes in a batch. The version on github is configured to use 4 threads, and it took about a month to check all the way up to 10^{13}, but you can edit it to use as many as you want. Each thread checks each prime in its designated batch one by one, by calculating \text{ord}_p(2) and then checking whether \text{ord}_p(3) shares a factor with it. This is more efficient than calculating both orders, because when checking the order of 3 we can nearly always return early. The function that calculates the order of 2 is as follows:

// Returns the primes dividing ord_p(2), given the factorization of p - 1
// as a map {prime -> exponent}.
std::vector<uint64_t> order_two(const std::map<uint64_t, uint64_t>& factors, uint64_t p) {
    namespace view = std::ranges::views;
    uint64_t group_order = p - 1;
    std::vector<uint64_t> order;
    for (const auto& [P, e] : factors)
    {
        // Keep dividing P out of the exponent; if 2^exponent stops being
        // 1 (mod p) before P is fully divided out, P divides ord_p(2).
        uint64_t exponent = group_order;
        for (const auto f : view::iota(0ull, e + 1))
        {
            if (modpow_two(exponent, p) != 1)  // modpow_two(e, p) = 2^e mod p
            {
                order.push_back(P);
                break;
            }
            exponent /= P;
        }
    }
    return order;
}

It uses the fact that the subgroup of (\mathbb{Z}/p\mathbb{Z})^\times generated by 2 has, by Lagrange’s theorem, order dividing p-1. So we don’t have to check whether all the powers of two are congruent to 1, just those whose exponents divide p-1.
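The early return for 3 works the same way: since \text{ord}_p(3) also divides p-1, a prime P divides \text{ord}_p(3) exactly when 3^m\not\equiv 1 \pmod p, where m is p-1 with every factor of P divided out. A minimal sketch of that check (assuming a generic modpow helper; the actual program’s functions differ):

#include <cstdint>
#include <vector>

uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod);  // assumed helper

// Given the primes dividing ord_p(2), decide whether ord_p(3) shares one,
// returning as soon as the first shared factor is found.
bool orders_share_factor(const std::vector<uint64_t>& primes_of_ord2, uint64_t p) {
    for (uint64_t P : primes_of_ord2) {
        uint64_t m = p - 1;
        while (m % P == 0) m /= P;  // strip every factor of P from p - 1
        if (modpow(3, m, p) != 1)
            return true;            // P divides ord_p(3) too, so gcd > 1
    }
    return false;
}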

Also, the reason why I calculate the order of 2 and then check whether it has a common factor with the order of 3, and not the other way around, is that calculating the order of 2 is slightly faster using this trick. I also profiled several ways of multiplying 64-bit unsigned ints without overflowing: peasant multiplication, 128-bit ints (__uint128_t from gcc), some inline assembly, and a trick based on the fact that a * b, the C++ multiplication of unsigned 64-bit integers, equals the integer product ab reduced mod 2^{64}. The one using assembly turned out to be fastest.
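For reference, the __uint128_t variant is essentially a one-liner (the assembly version performs the same 64×64→128-bit multiply and reduction explicitly):

#include <cstdint>

// multiply two 64-bit residues mod m, widening to gcc's 128-bit integers
// so the intermediate product cannot overflow
uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (uint64_t)((__uint128_t)a * b % m);
}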

I conjecture that the sequence is infinite. If the analogous sequences for all pairs of naturals (\alpha,\beta) (not just (2,3), as we’ve considered so far) were all finite, some real coincidences would have to happen. So I suspect they’re all infinite.
