Sunday, November 23, 2025

An N person class?! Now that's really deranged!

If there are $N$ students in the class, where $N$ is some large number, how likely is it that they form a single loop that includes the entire class, in terms of $N$?

In the classic problem, we actually went through the various types of assignments that could work, but even there we showed roughly enough to know that the number of single loop gift exchange assignments is $(N-1)!$. The somewhat trickier piece is the total number of all acceptable gift exchange assignments. Here, if you were paying attention to the titles of these posts, you might have been alerted to the fact that an acceptable gift exchange assignment, that is, a permutation where no element is assigned to itself, is called a derangement. The number of derangements of $N$ elements is sometimes called the sub-factorial, denoted $!N,$ which can be shown to have the formula $$!N = N!\sum_{i=0}^N \frac{(-1)^i}{i!}.$$ Thus, we see that the probability of having a single loop for some large number $N$ of students is $$\frac{(N-1)!}{N! \sum_{i=0}^N \frac{(-1)^i}{i!} } \approx \frac{e}{N}$$ as $N \to \infty,$ since $\sum_{i=0}^\infty \frac{(-1)^i}{i!} = e^{-1}.$

You know, it honestly takes a smidgeon of the fun out of knowing the solution, rather than doing much work for it. So, I dunno, maybe let's derive the formula $$!N = N! \sum_{i=0}^N \frac{(-1)^i}{i!}.$$ Let $U$ be the set of all inappropriate gift exchange assignments, that is, where someone receives herself as the giftee. In particular, for each $k = 1, \dots, N,$ let $U_k$ be the set of assignments where $k$ receives herself as the giftee. Using the inclusion/exclusion principle, we have \begin{align*}|U| = |U_1 \cup U_2 \cup \dots \cup U_N| &= \sum_{i=1}^N |U_i| -\sum_{1 \leq i \lt j \leq N} |U_i \cap U_j| \\ &\quad\quad\quad+ \sum_{1 \leq i \lt j \lt k \leq N} |U_i \cap U_j \cap U_k | + \dots \\ &\quad\quad\quad\quad+ (-1)^{N+1} |U_1 \cap U_2 \cap \dots \cap U_N|.\end{align*} Now a generic term in the $k$th sum of the above formula, say $|U_{i_1} \cap U_{i_2} \cap \dots \cap U_{i_k}|,$ counts all permutations where $i_1 \lt i_2 \lt \dots \lt i_k$ receive themselves as giftees, leaving the other $N-k$ students free to explore all possible permutations. Therefore, we have $|U_{i_1} \cap \dots \cap U_{i_k}| = (N-k)!$, which is coupled with the fact that there are $\binom{N}{k}$ ways of choosing $i_1, \dots, i_k.$ So we have \begin{align*}|U| &= \binom{N}{1} (N-1)! - \binom{N}{2} (N-2)! + \dots + (-1)^{N+1} 0! \\ &= \sum_{i=1}^N (-1)^{i+1} \binom{N}{i} (N-i)! \\ &= N! \sum_{i=1}^N \frac{(-1)^{i+1}}{i!}.\end{align*} Since the number of derangements is the number of all permutations minus the number of non-derangements, we get $$!N = N! - |U| = N! \left(1 - \sum_{i=1}^N \frac{(-1)^{i+1}}{i!}\right) = N! \left( 1 + \sum_{i=1}^N \frac{(-1)^i}{i!} \right) = N! \sum_{i=0}^N \frac{(-1)^i}{i!}.$$ Pretty neat, if not anticlimactic, I guess.
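As a quick sanity check on both the subfactorial formula and the $e/N$ limit, here's a short Python sketch. It uses the standard recursion $!n = (n-1)\left(!(n-1) + !(n-2)\right)$, which is equivalent to the alternating-sum formula but stays in exact integer arithmetic.

```python
from math import factorial, e

def subfactorial(n):
    # derangement count via !n = (n - 1) * (!(n-1) + !(n-2)), with !0 = 1, !1 = 0
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

# probability that a class of N forms a single loop: (N-1)! / !N, versus e/N
for N in [5, 10, 100, 1000]:
    print(N, factorial(N - 1) / subfactorial(N), e / N)
```

Already at $N = 100$ the exact ratio and $e/N$ agree to many decimal places.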

A five person class! That's deranged!

You are participating in a holiday gift exchange with your classmates. You each write down your own name on a slip of paper and fold it up. Then, all the students place their names into a single hat. Next, students pull a random name from the hat, one at a time. If at any point someone pulls their own name from the hat, the whole class starts over, with everyone returning the names to the hat.

Once the whole process is complete, each student purchases a gift for the classmate whose name they pulled. Gifts are handed out at a big holiday party at the end of the year. At this party, you observe that there are “loops” of gift-giving within the class. For example, student A might have gotten a gift for B, who got a gift for C, who got a gift for D, who got a gift for A. In this case, A, B, C and D would form a loop of length four. Another way to have a loop of length four is if student A got a gift for C, who got a gift for B, who got a gift for D, who got a gift for A. And of course, there are other ways.

If there are a total of five students in the class, how likely is it that they form a single loop that includes the entire class?

For this particular case, we can just go through all of the possible outcomes. Let's first enumerate all of the possible single loops for all five students. Let's use the permutation notation $(ABCDE)$ to stand for the loop where $A \to B \to C \to D \to E \to A.$ It doesn't necessarily matter where on this loop you start, so let's always assume that we start with $A$. Then we are free to choose any ordering of $B,$ $C$, $D,$ and $E$ to form a loop, so there are $4! = 24$ such loops.

On the other hand, since we cannot ever have a one person loop, where a person receives herself as the giftee, the only other acceptable assignments for our gift exchange are those consisting of a 2-loop together with a 3-loop. Here we see that there are $\binom{5}{2} = 10$ ways of choosing the two students to be in the 2-loop, which can be arranged in only one way. By analogy with the count above, there are $(3-1)! = 2$ ways of arranging the remaining three students in the 3-loop. So there are a total of $\binom{5}{2} \cdot 2 = 20$ other acceptable gift exchange assignments.

Therefore, the probability that all of the giftees form a single loop when there are $5$ students is $$\frac{24}{24+20} = \frac{6}{11} = 54.5454545454\dots\%.$$
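Since $5! = 120$ is tiny, we can also brute-force the count in Python: enumerate every permutation, keep the derangements, and check which ones are a single 5-cycle.

```python
from itertools import permutations

def loop_length_from(perm, start=0):
    # follow start -> perm[start] -> ... until we return to start
    i, steps = perm[start], 1
    while i != start:
        i, steps = perm[i], steps + 1
    return steps

derangements = [p for p in permutations(range(5)) if all(p[i] != i for i in range(5))]
single_loops = [p for p in derangements if loop_length_from(p) == 5]
print(len(single_loops), len(derangements))  # 24 and 44, so 24/44 = 6/11
```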

Monday, November 17, 2025

Millions of Peaches, Peaches for Free, .... or at least for Extra Credit

As before, your assistant intends to pick two random points along the circumference of the garden and run a hose straight between them.

This time, you’ve decided to contribute to the madness yourself by picking a random point inside the garden to plant a second peach tree. On average, how far can you expect this point to be from the nearest part of the hose?

Here let's again assume that the assistant's hose starts at $(1,0)$ and ends at $(\cos \theta, \sin \theta)$ for some uniform random $\theta \sim U(0,2\pi).$ This time, however, we assume that there is a peach tree at a point $(u,v)$ drawn uniformly from the disk, with density $$f(u,v) = \frac{1}{\pi}\chi ( u^2+v^2 \leq 1 ).$$ We can again use the line $y = x\tan \frac{\theta}{2},$ which is the perpendicular bisector of the chord traced by the hose, or more precisely we will use the parallel copies of it that pass through the chord's endpoints. Let's subdivide the circular area into three different regions: \begin{align*}\Omega_1(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, \left| v - u \tan \frac{\theta}{2} \right| \leq \tan \frac{\theta}{2} \right\} \\ \Omega_2(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, v - u\tan \frac{\theta}{2} \leq -\tan \frac{\theta}{2} \right\} \\ \Omega_3(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, v - u \tan \frac{\theta}{2} \geq \tan \frac{\theta}{2} \right\}.\end{align*}

In general we get \begin{align*}\lambda^*(u,v,\theta) & = \arg\min \left\{ \| ( 1 - \lambda + \lambda \cos \theta - u, \lambda \sin \theta - v ) \|_2 \mid 0 \leq \lambda \leq 1 \right\} \\ &= \arg \min \left\{ (1-u)^2 + v^2 - 2 \lambda ( (1-u)(1-\cos \theta) + v\sin \theta ) \right.\\&\quad\quad\quad\quad\quad\quad\quad\quad \left.+ 2(1-\cos \theta) \lambda^2 \mid 0 \leq \lambda \leq 1 \right\} \\ &= \begin{cases} \frac{1}{2} \left( 1 - u + v \cot \frac{\theta}{2} \right), & \text{ if $(u,v) \in \Omega_1(\theta)$; } \\ 0, &\text{ if $(u,v) \in \Omega_2(\theta)$;} \\ 1, &\text{if $(u,v) \in \Omega_3(\theta).$}\end{cases}\end{align*} Plugging this back in, we see that we have $$d(u,v,\theta) = \begin{cases} \frac{ \left| (u-1) \sin \theta + v(1-\cos \theta) \right| }{ 2 \sin \frac{\theta}{2} }, &\text{if $(u,v) \in \Omega_1(\theta)$;} \\ \sqrt{ (u-1)^2 + v^2 }, &\text{if $(u,v) \in \Omega_2(\theta)$;} \\ \sqrt{ (u- \cos \theta)^2 + (v - \sin \theta)^2 }, &\text{ if $(u,v) \in \Omega_3(\theta).$}\end{cases}$$

Let's first attack $A(\theta) = \iint_{\Omega_1(\theta)} d(u,v,\theta) f(u,v) \,du\,dv.$ We see that, so long as $0 \leq \theta \leq \pi$, by rotating the plane counterclockwise about the origin by $\frac{\pi-\theta}{2}$ that the chord is now parallel to the $x$-axis and stretches from $(-\sin \frac{\theta}{2}, \cos \frac{\theta}{2})$ to $(\sin \frac{\theta}{2}, \cos \frac{\theta}{2}).$ In this case, it is clear to see that the distance $d(u^\prime, v^\prime, \theta) = |v^\prime - \cos \frac{\theta}{2}|.$ So the transformed integral is \begin{align*}A(\theta) &= \frac{2}{\pi} \int_0^{\sin \theta/2} \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \left|y - \cos \frac{\theta}{2}\right| \,dy \,dx \\ &= \frac{2}{\pi} \int_0^{\sin \theta/2} \left( \int_{-\sqrt{1-x^2}}^{\cos \theta/2} \left(\cos \frac{\theta}{2} - y \right)\,dy + \int_{\cos \theta/2}^{\sqrt{1-x^2}} \left( y - \cos \frac{\theta}{2} \right) \,dy \right) \,dx \\ &= \frac{2}{\pi} \int_0^{\sin \theta/2} \left(1 + \cos^2 \frac{\theta}{2} - x^2\right) \,dx \\ &= \frac{2}{\pi} \sin \frac{\theta}{2} \left( 1 + \cos^2 \frac{\theta}{2} \right) - \frac{2}{3\pi} \sin^3 \frac{\theta}{2} \\ &= \frac{4}{3\pi} \sin \frac{\theta}{2} \left( 1 + 2 \cos^2 \frac{\theta}{2} \right).\end{align*}

We see from symmetry that $\iint_{\Omega_2(\theta)} d(u,v,\theta) f(u,v)\,du\,dv = \iint_{\Omega_3(\theta)} d(u,v,\theta) f(u,v) \,du\,dv,$ so let's let \begin{align*}B(\theta) = \frac{2}{\pi} \iint_{\Omega_2(\theta)} d(u,v,\theta)\,du\,dv &= \frac{2}{\pi} \int_{-\cos \theta}^1 \int_{-\sqrt{1-x^2}}^{\tan \frac{\theta}{2} (x - 1)} \sqrt{ (x-1)^2 + y^2 }\,dy\,dx \\& = \frac{2}{\pi} \int_{\pi + \theta/2}^{3\pi/2} \int_0^{-2\cos \phi} \rho^2 \,d\rho \,d\phi,\end{align*} where the final change of variables represents the transformation $(u,v)$ into $(1 + \rho \cos \phi, \rho \sin \phi)$ for $\phi \in ( \pi + \theta/2, 3\pi/2)$ and $0 \leq \rho \leq -2\cos \phi.$ Therefore, we see that $$B(\theta) = \frac{2}{\pi} \int_{\pi + \theta/2}^{3\pi/2} \left(-\frac{8}{3} \cos^3 \phi\right) \,d\phi = \frac{2}{\pi} \left(\frac{16}{9} - \frac{8}{3} \sin \frac{\theta}{2} + \frac{8}{9} \sin^3 \frac{\theta}{2}\right).$$

Putting these two together we see that the average distance from the random point $(u,v)$ to the chord from $(1,0)$ to $(\cos \theta, \sin \theta)$ is given by $d(\theta) = A(\theta) + B(\theta).$ Therefore, through symmetry, we see that the average distance is given by $$\hat{d} = \frac{1}{\pi} \int_0^\pi d(\theta) \,d\theta = \frac{1}{\pi} \int_0^\pi A(\theta) \,d\theta + \frac{1}{\pi} \int_0^\pi B(\theta)\, d\theta.$$ We have \begin{align*}\frac{1}{\pi} \int_0^\pi A(\theta) \,d\theta &= \frac{4}{3\pi^2} \int_0^\pi \sin \frac{\theta}{2} \left( 1 + 2 \cos^2 \frac{\theta}{2} \right)\,d\theta \\ &= \frac{8}{3\pi^2} \left.\left( -\cos \frac{\theta}{2} - \frac{2}{3} \cos^3 \frac{\theta}{2} \right)\right|_0^\pi = \frac{40}{9\pi^2}.\end{align*} Similarly, we have \begin{align*}\frac{1}{\pi} \int_0^\pi B(\theta) \,d\theta &= \frac{2}{\pi^2} \int_0^\pi \left(\frac{16}{9} - \frac{8}{3} \sin \frac{\theta}{2} + \frac{8}{9}\sin^3 \frac{\theta}{2} \right) \,d\theta \\ &= \frac{2}{\pi^2} \left.\left( \frac{16}{9} \theta + \frac{32}{9} \cos \frac{\theta}{2} + \frac{16}{27} \cos^3 \frac{\theta}{2} \right)\right|_0^\pi = \frac{96\pi - 224}{27\pi^2}.\end{align*} Combining these together gives the average distance between the randomly placed hose and your randomly placed peach tree, a slightly larger $$\hat{d} = \frac{40}{9\pi^2} + \frac{96\pi - 224}{27\pi^2} = \frac{96\pi - 104}{27\pi^2} \approx 0.741494295364\dots$$ furlongs.
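That closed form is fiddly enough that it deserves a Monte Carlo cross-check: sample a hose chord and a uniform point in the disk, compute the distance to the segment directly, and compare with $(96\pi - 104)/(27\pi^2).$

```python
import math
import random

def dist_to_segment(px, py, ax, ay, bx, by):
    # clamp the projection parameter lambda to [0, 1], mirroring the Omega cases
    dx, dy = bx - ax, by - ay
    denom = dx * dx + dy * dy
    if denom == 0.0:
        return math.hypot(px - ax, py - ay)
    lam = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / denom))
    return math.hypot(px - (ax + lam * dx), py - (ay + lam * dy))

random.seed(1)
n, total = 200_000, 0.0
for _ in range(n):
    theta = random.uniform(0.0, 2.0 * math.pi)
    while True:  # rejection-sample a uniform point in the unit disk
        u, v = random.uniform(-1, 1), random.uniform(-1, 1)
        if u * u + v * v <= 1.0:
            break
    total += dist_to_segment(u, v, 1.0, 0.0, math.cos(theta), math.sin(theta))

exact = (96 * math.pi - 104) / (27 * math.pi ** 2)
print(total / n, exact)  # both should be near 0.7415
```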

Peaches, peaches, peaches, peaches, peaches … I likely won’t water you

You and your assistant are planning to irrigate a vast circular garden, which has a radius of 1 furlong. However, your assistant is somewhat lackadaisical when it comes to gardening. Their plan is to pick two random points on the circumference of the garden and run a hose straight between them.

You’re concerned that different parts of your garden—especially your prized peach tree at the very center—will be too far from the hose to be properly irrigated.

On average, how far can you expect the center of the garden to be from the nearest part of the hose?

Without loss of generality, let's assume that we choose a coordinate system such that one end of the hose is located at the point $(1,0)$ and the other end is located at $(\cos \theta, \sin\theta)$ for some uniformly random $\theta \sim U(0,2\pi).$ The minimal distance between the origin, where the prized peach tree is, and the chord traced by the hose occurs along the ray that perpendicularly bisects the chord. We can arrive here either by calculus, finding $$\lambda^* = \arg\min \{ \| (1-\lambda + \lambda \cos \theta, \lambda \sin \theta ) \|_2 \mid 0\leq \lambda \leq 1 \} = \frac{1}{2},$$ or by spatial reasoning about Lagrange multipliers and optimizers occurring when contours and constraints are normal to one another, or by any other means, I suppose. Whichever Feynman-esque path we take to arrive at the perpendicular bisector, we then see through some trigonometry that this minimal distance is $$d(\theta) = \left| \cos \frac{\theta}{2}\right|,$$ as a function of the random $\theta.$

So, the average distance between the randomly placed hose and your precious peach tree is \begin{align*}\bar{d} &= \frac{1}{2\pi} \int_0^{2\pi} d(\theta) d\theta \\ &= \frac{1}{\pi} \int_0^\pi \cos \frac{\theta}{2} \,d\theta \\ &= \frac{2 \sin \frac{\pi}{2} - 2 \sin 0}{\pi} = \frac{2}{\pi}\approx 0.636619772\dots\end{align*} furlongs, which is about 420.2 feet. This doesn't seem too terribly bad, but given that the spread of an average peach tree is only about 20 feet (according to a quick Googling), your assistant's method is not expected to provide a large amount of water to your peaches.
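A tiny simulation backs this up: drop two uniform endpoints on the circle and measure the center's distance to the chord (the foot of the perpendicular from the center is the chord's midpoint, so the distance is just the midpoint's norm).

```python
import math
import random

def dist_origin_to_chord(a, b):
    # the perpendicular from the center bisects the chord, so use the midpoint
    mx = (math.cos(a) + math.cos(b)) / 2.0
    my = (math.sin(a) + math.sin(b)) / 2.0
    return math.hypot(mx, my)

random.seed(0)
n = 500_000
total = sum(
    dist_origin_to_chord(random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi))
    for _ in range(n)
)
print(total / n, 2 / math.pi)  # the sample mean should be close to 2/pi
```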

Monday, November 10, 2025

Even more so, seems like an actual uniform generator would be simpler…

Randy has an updated suggestion for how the button should behave at door 2. What hasn’t changed is that if a contestant is at door 2 and moves to an adjacent door, that new door will be 1 or 3 with equal probability.

But this time, on the first, third, fifth, and other odd button presses that happen to be at door 2, there’s a 20 percent chance the contestant remains at door 2. On the second, fourth, sixth, and other even button presses that happen to be at door 2, there’s a 50 percent chance the contestant remains at door 2.

Meanwhile, the button’s behavior at doors 1 and 3 should in no way depend on the number of times the button has been pressed.

As the producer, you want the chances of winding up at each of the three doors—after a large even number of button presses—to be nearly equal. If a contestant presses the button while at door 1 (or door 3), what should the probability be that they remain at that door?

In this case, let $q$ be the probability of remaining at door 1 (or at door 3). Then we can treat the two different behaviors at door 2 sequentially in order to come up with the two-step transition matrix \begin{align*}Q & = \begin{pmatrix} q & 1-q & 0 \\ 0.4 & 0.2 & 0.4 \\ 0 & 1-q & q \end{pmatrix} \begin{pmatrix} q & 1-q & 0 \\ 0.25 & 0.5 & 0.25 \\ 0 & 1-q & q \end{pmatrix} \\ & = \begin{pmatrix} q^2 -\frac{1}{4}q +\frac{1}{4} & -q^2 + \frac{1}{2} q +\frac{1}{2} & -\frac{1}{4}q + \frac{1}{4}\\ \frac{2}{5}q + \frac{1}{20} & -\frac{4}{5}q + \frac{9}{10} & \frac{2}{5}q + \frac{1}{20}\\ -\frac{1}{4}q + \frac{1}{4} & -q^2 + \frac{1}{2} q + \frac{1}{2} & q^2 -\frac{1}{4}q + \frac{1}{4}\end{pmatrix}.\end{align*}

We will lean upon our own great (?) shoulders from the Classic problem to show that we need to solve for $q$ that makes the transition matrix symmetric. In this case, that requirement yields $$\frac{2}{5}q + \frac{1}{20} = -q^2 + \frac{1}{2} q + \frac{1}{2},$$ or equivalently, $$q^2 -\frac{1}{10} q - \frac{9}{20} = 0.$$ Solving this quadratic for the positive root (since after all we need $q\in[0,1]$ as a probability), gives that the appropriate probability to remain at door 1 in this even more complicated Markov scheme is $$q=\frac{ \frac{1}{10} + \sqrt{ \frac{1}{100} + 4 \frac{9}{20} } }{2} = \frac{1+\sqrt{181}}{20} \approx 0.722681202354\dots$$
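We can sanity-check this root numerically; here's a sketch with numpy that builds the two one-press matrices, verifies the two-step matrix $Q$ comes out symmetric at this $q$, and iterates the chain from door 1.

```python
import numpy as np

q = (1 + np.sqrt(181)) / 20  # ~0.7227, the root found above
P_odd  = np.array([[q, 1 - q, 0], [0.4,  0.2, 0.4 ], [0, 1 - q, q]])
P_even = np.array([[q, 1 - q, 0], [0.25, 0.5, 0.25], [0, 1 - q, q]])
Q = P_odd @ P_even  # one odd press followed by one even press

pi = np.array([1.0, 0.0, 0.0])  # contestant starts at door 1
for _ in range(100):            # 200 button presses in total
    pi = pi @ Q
print(pi)  # should be essentially (1/3, 1/3, 1/3)
```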

Seems like an actual uniform generator would be simpler…

You are a producer on a game show hosted by Randy “Random” Hall (no relation to Monty Hall). The show has three doors labeled 1 through 3 from left to right, and behind them are various prizes.

Contestants pick one of the three doors at which to start, and then they press an electronic button many, many times in rapid succession. Each time they press the button, they either stay at their current door or move to an adjacent door. If they’re at door 2 and move to an adjacent door, that new door will be 1 or 3 with equal probability.

Randy has decided that when a contestant presses the button while at door 2, there should be a 20 percent chance they remain at door 2.

As the producer, you want the chances of a contestant ultimately winding up at each of the three doors to be nearly equal after many button presses. Otherwise, mathematicians will no doubt write you nasty letters complaining about how your show is rigged.

If a contestant presses the button while at door 1 (or door 3), what should the probability be that they remain at that door?

Firstly, so true ... if there were a meaningful bias in the supposedly uniformly random process of selecting which door, I would take notice. Though rather than calling to complain I would be more likely to try to get on the show to exploit this bias, but I guess potato-potato.

Moving on, per Randy's specifications and your desire for the appearance of uniform randomness, we have a three state Markov chain setup where the transition matrix $$P= \begin{pmatrix} p & 1-p & 0 \\ 0.4 & 0.2 & 0.4 \\ 0 & 1-p & p \end{pmatrix}$$ and we are wondering for which values of $p$ will $P^n \to U,$ as $n\to \infty$, where the limiting matrix $U$ is the $3\times 3$ matrix where each entry is $\frac{1}{3}.$

There are many ways to solve for $p$ here. For instance, though we would obviously start with some specific position, e.g., $\pi_0 = (1,0,0)$, if $P^n \to U,$ as $n\to \infty$, then we would necessarily need to have $u=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ satisfy $uP=u$. We could obviously just solve here and get three linear equations (that hopefully collapse well since there is only one unknown), but instead let's math out!

Since $P$ is a transition matrix, its rows sum to one, so we see that $Pu^T=u^T,$ which, combined with $uP=u,$ implies that $$Pu^T= u^T = (uP)^T = P^Tu^T.$$ So $P$ must be symmetric $(P=P^T)$, which leaves the much easier linear equation $1-p=0.4$; that is, in order to provide the illusion of a uniform distribution of the contestant's final door through this elaborate Markov scheme, you must have the probability of remaining at door 1 when the button is pressed be $p=60\%.$

Of course, armed with the knowledge that we have a symmetric transition matrix, we can then justify that this works by using the Spectral Theorem. We have already seen that $\lambda=1$ and $u^T$ is an eigenpair. We could certainly additionally calculate the other two eigenpairs as well, or simply argue that the remaining eigenvalues must have absolute value less than $1$ and that the eigenvectors can be chosen to be an orthonormal basis of $\mathbb{R}^3$, such that $P=VDV^T,$ where $D = diag(1, \lambda_2, \lambda_3)$ and $$V= \begin{pmatrix} \sqrt{3}/3 & v_{12} & v_{13} \\ \sqrt{3}/3 & v_{22} & v_{23} \\ \sqrt{3}/3 & v_{32} & v_{33} \end{pmatrix}$$ satisfies $V^TV=VV^T=I.$ Since this $V$ represents an orthonormal basis we can change bases and represent any initial $\pi_0$ in $V$-coordinates, where the first coordinate is also $\sqrt{3}/3$ whenever $\pi_0$ sums to $1.$ Let's generically assume that $\pi_0 v_2 = c_2 \in \mathbb{R}$ and $\pi_0 v_3 = c_3 \in \mathbb{R}.$ Then we see that \begin{align*}\pi_n = \pi_{n-1} P &= \pi_{n-1} VDV^T \\ &= \cdots = \pi_0 V D^n V^T \\ &= u + c_2 \lambda_2^n v_2^T + c_3 \lambda_3^n v_3^T.\end{align*} Since $|\lambda_2|, |\lambda_3| \lt 1,$ we see that for any value of $\pi_0$ we get $\pi_n \to u,$ as desired.
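For the skeptical, a few lines of numpy confirm both the spectrum and the convergence when $p = 0.6$ (the eigenvalues work out to $1$, $0.6$, and $-0.2$):

```python
import numpy as np

p = 0.6
P = np.array([[p, 1 - p, 0], [0.4, 0.2, 0.4], [0, 1 - p, p]])
assert np.allclose(P, P.T)  # symmetric, hence doubly stochastic

print(np.sort(np.linalg.eigvalsh(P)))  # eigenvalues: -0.2, 0.6, 1.0
print(np.linalg.matrix_power(P, 50))   # every entry approaches 1/3
```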

Sunday, November 2, 2025

Extra Credit swinging the probabilities, or ... Hey, how'd the Catalan numbers show up here???

Instead of a best-of-seven series, now suppose the series is much, much longer. In particular, the first team to win $N$ games wins the series, so technically this is a best-of-($2N−1$) series, where $N$ is some very, very large number.

In the limit of large $N$, what is the probability swing for Game 1 in terms of $N$?

Applying the same logic used in the Classic Fiddler problem, we want to first find $p_{1,N} = \mathbb{P} \{ \text{win best of (2N-1) series} \mid \text{win game 1} \},$ from which we get the probability swing of game 1 in a best of $(2N-1)$ series as $\Delta_N = 2p_{1,N} - 1.$ Again following in the Classic Fiddler's solution's footsteps, if you win games $1$ and $k$, then there are $\binom{k-2}{N-2}$ ways of arranging another $N-2$ wins in the other $k-2$ games, so $$p_{1,k,N} = \mathbb{P} \{ \text{winning a best of $(2N-1)$ series in $k$ games} \mid \text{win game 1} \} = \binom{k-2}{N-2} \frac{1}{2^{k-1}}.$$ Summing over all possible values of $k = N, N+1, \dots, 2N-1,$ we get $$p_{1,N} = \sum_{k=N}^{2N-1} \binom{k-2}{N-2} \frac{1}{2^{k-1}}.$$ We could try to go further and define some generating function $f_N$, but this would lead to an escalating number of derivatives, which gets messy fast.

Instead let's set up a recursive formula. We note that $p_{1,1} = 1,$ which makes sense since it is a winner-takes-all one game playoff. For some $N \geq 1,$ let's take a look at $$p_{1,N+1} = \sum_{k=N+1}^{2N+1} \binom{k-2}{N-1} \frac{1}{2^{k-1}}.$$ The standard binomial coefficient recursion formula (which comes from Pascal's triangle) gives $$\binom{k-2}{N-1} = \binom{k-3}{N-1} + \binom{k-3}{N-2},$$ so we have \begin{align*} p_{1,N+1} & = \sum_{k=N+1}^{2N+1} \left(\binom{k-3}{N-1} + \binom{k-3}{N-2} \right) \frac{1}{2^{k-1}} \\ &= \left( \sum_{k=N}^{2N} \binom{k-2}{N-1} \frac{1}{2^k} \right) + \left( \sum_{k=N}^{2N} \binom{k-2}{N-2} \frac{1}{2^k} \right) \\ &= \frac{1}{2} \left( \sum_{k=N+1}^{2N} \binom{k-2}{N-1} \frac{1}{2^{k-1}} \right) + \frac{1}{2} \left( \sum_{k=N}^{2N-1} \binom{k-2}{N-2} \frac{1}{2^{k-1}} \right) + \binom{2N-2}{N-2} \frac{1}{2^{2N}} \\ &= \frac{1}{2} p_{1,N+1} - \binom{2N-1}{N-1} \frac{1}{2^{2N+1}} + \frac{1}{2} p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N}}.\end{align*} Pulling the copy of $\frac{1}{2}p_{1,N+1}$ back onto the lefthand side and then multiplying by 2, we get the recursion formula \begin{align*} p_{1,N+1} &= p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N-1}} - \binom{2N-1}{N-1} \frac{1}{2^{2N}} \\ &= p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N-1}} - \left( \binom{2N-2}{N-1} + \binom{2N-2}{N-2} \right) \frac{1}{2^{2N}} \\ &= p_{1,N} - \frac{1}{4^N} \left( \binom{2N-2}{N-1} - \binom{2N-2}{N-2} \right) \\ &= p_{1,N} - \frac{1}{4^N} C_{N-1}, \end{align*} where $C_n = \frac{1}{n+1} \binom{2n}{n} = \binom{2n}{n} - \binom{2n}{n+1},$ for $n \in \mathbb{N}$ is the standard Catalan number.

Since we start with $p_{1,1} = 1,$ we then see that $$p_{1,N} = 1 - \sum_{k=1}^{N-1} \frac{C_{k-1}}{4^k} = 1 - \frac{1}{4} \sum_{k=0}^{N-2} \frac{C_k}{4^k}, \,\, \forall N \in \mathbb{N}.$$ We can rely on the fact that the generating function of the Catalan numbers is $$c(x) = \sum_{n=0}^\infty C_n x^n = \frac{1 - \sqrt{1-4x}}{2x},$$ so that $$\frac{1}{4} \sum_{k=0}^\infty \frac{C_k}{4^k} = \frac{1}{4} c(\frac{1}{4}) = \frac{1}{4} \frac{1 - \sqrt{1 - 4 \cdot \frac{1}{4}}}{2 \cdot \frac{1}{4}} = \frac{1}{2}.$$ Therefore, we see that $$p_{1,N} = 1 - \frac{1}{4} \sum_{k=0}^{N-2} \frac{C_k}{4^k} = 1 - \frac{1}{4} \sum_{k=0}^\infty \frac{C_k}{4^k} + \frac{1}{4}\sum_{k=N-1}^\infty \frac{C_k}{4^k} = \frac{1}{2} + \frac{1}{4} \sum_{k=N-1}^\infty \frac{C_k}{4^k},$$ for all $N \in \mathbb{N}.$ Now when $k$ is large we have $$C_k \sim \frac{4^k}{k^{3/2} \sqrt{\pi}},$$ from repeated applications of Stirling's approximation, so when $N$ is sufficiently large we have $$\frac{1}{4} \sum_{k=N-1}^\infty \frac{C_k}{4^k} \approx \frac{1}{4\sqrt{\pi}} \sum_{k=N-1}^\infty k^{-3/2} \approx \frac{1}{2\sqrt{\pi (N-1)}},$$ where the last approximation is due to the fact that $\sum_{k=N}^\infty k^{-p} \sim \int_N^\infty x^{-p} \,dx.$ Therefore, in a fairly concise way, we have $$p_{1,N} \approx \frac{1}{2} + \frac{1}{2\sqrt{\pi(N-1)}},$$ when $N$ is large, so the probability swing of winning the first game is $$\Delta_N = 2p_{1,N} -1 \approx \frac{1}{\sqrt{\pi(N-1)}}$$ when $N$ is large.
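The exact sum and the asymptotic estimate are easy to compare in Python:

```python
from math import comb, pi, sqrt

def swing(N):
    # exact swing: Delta_N = 2 * p_{1,N} - 1, with p_{1,N} the binomial sum above
    p1 = sum(comb(k - 2, N - 2) / 2 ** (k - 1) for k in range(N, 2 * N))
    return 2 * p1 - 1

for N in [4, 50, 500]:
    print(N, swing(N), 1 / sqrt(pi * (N - 1)))
```

By $N = 500$ the two columns agree to within a fraction of a percent.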

Swinging the probabilities

You and your opponent are beginning a best-of-seven series, meaning the first team to win four games wins the series. Both teams are evenly matched, meaning each team has a 50 percent chance of winning each game, independent of the outcomes of previous games.

As the team manager, you are trying to motivate your team as to the criticality of the first game in the series (i.e., “Game 1”). You’d specifically like to educate them regarding the “probability swing” coming out of Game 1—that is, the probability of winning the series if they win Game 1 minus the probability of winning the series if they lose Game 1. (For example, the probability swing for a winner-take-all Game 7 is 100 percent.)

What is the probability swing for Game 1?

Let's break this down as follows. Let $p_1 = \mathbb{P} \{ \text{win series} \mid \text{win game 1} \}.$ In order to win the series, you must win it in $k$ games for some $k=4, 5, 6, 7,$ so let's further let $$p_{1,k} = \mathbb{P} \{ \text{win series in $k$ games} \mid \text{win game 1} \},$$ where here we see that $p_1 = \sum_{k=4}^7 p_{1,k}.$ Now, the total number of ways to win the first and $k$th games and two others somewhere in games $2$ through $k-1$ is given by $\binom{k-2}{2}$ and the overall probability of any particular combination of $k$ games is $\frac{1}{2^k},$ so $$p_{1,k} = \frac{\mathbb{P} \{ \text{win series in $k$ games and win game $1$} \}}{\mathbb{P} \{ \text{win game 1 } \}} = \binom{k-2}{2} \frac{1}{2^{k-1}}.$$ Therefore, $$p_1 = \sum_{k=4}^7 \binom{k-2}{2} \frac{1}{2^{k-1}} = \frac{1}{2} \sum_{k=4}^7 \binom{k-2}{2} \frac{1}{2^{k-2}} = \frac{1}{2} \sum_{k=2}^5 \binom{k}{2} \frac{1}{2^k}.$$

Now one way of computing $p_1$ would be using some generating function wizardry. Define the function $f(x) = \sum_{k=2}^5 \binom{k}{2} x^k,$ in which case, $p_1 = \frac{1}{2} f(\frac{1}{2}).$ Now we also see that \begin{align*} f(x) &= \frac{1}{2} x^2 \sum_{k=2}^5 k(k-1) x^{k-2} \\ &= \frac{1}{2} x^2 \frac{d^2}{dx^2} \left( \frac{1-x^6}{1-x} \right) \\ &= \frac{1}{2} x^2 \frac{d}{dx} \left( \frac{1-6x^5+5x^6}{(1-x)^2} \right) \\ &= \frac{1}{2} x^2 \frac{2(1-15x^4+24x^5-10x^6)}{(1-x)^3} \\ &= \frac{ x^2 ( 1 - 15 x^4 + 24x^5 -10x^6) }{(1-x)^3}.\end{align*} So we have $$p_1 = \frac{1}{2} f(\frac{1}{2}) = \frac{1}{2} \frac{ \frac{1}{4} \left( 1 - \frac{15}{16} + \frac{24}{32} - \frac{10}{64} \right) }{ \frac{1}{8} } = \frac{ 42}{64} = \frac{21}{32}.$$

Now from symmetry, we see that the probability of you winning the series having lost the first game, let's say $q_1 = \mathbb{P} \{ \text{win series} \mid \text{lose game 1} \},$ is the same as the probability of your opponent losing the series having won the first game. That is, $q_1 = 1 - p_1.$ So the probability swing of the first game is $$\Delta = p_1 - q_1 = p_1 - (1- p_1) = 2p_1 - 1 = 2 \cdot \frac{21}{32} - 1 = \frac{5}{16} = 31.25\%.$$
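One more way to see the $\frac{21}{32}$: imagine all seven games are always played, which is harmless since the series winner is still whoever reaches four wins. Given a Game 1 win, you win the series exactly when you take at least three of the remaining six games.

```python
from math import comb

# given a game-1 win, win the series iff you take >= 3 of the remaining 6 games
p1 = sum(comb(6, j) for j in range(3, 7)) / 2 ** 6
print(p1, 2 * p1 - 1)  # 0.65625 (= 21/32) and 0.3125 (= 5/16)
```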

Sunday, October 26, 2025

Mob rules: implied odds for binary outcomes based on bet frequency

Suppose there are two leading candidates, A and B, for MVP in the Fiddler Baseball League.

Part 1:

Assume the odds for A winning the award have been set to $+100x$, where $x \gt 1$. Let $f$ represent the fraction of dollars wagered in favor of A. For many values of $f$, the oddsmaker can set the odds for B so that they’ll make the same amount of money regardless of whether A or B wins the award. However, below a certain value of $f$, it’s impossible for the oddsmaker to do this. What is this critical value of $f$?

In this case, if A wins then the oddsmaker must pay out $-xf$ to the bettors who supported A but receive $+(1-f)$ from the losing bettors who supported B, so the total cashflow if A wins is $$\pi_A(x,f) = 1-(x+1)f.$$ Let's assume that if B wins that the oddsmaker will have to pay out $-\rho(1-f),$ for some $\rho \gt 0,$ while they would receive $+f$ from all of the losing A bettors. In this case, the total cashflow if B wins is $$\pi_B(f,\rho) = f - \rho(1-f).$$ If the oddsmaker wants to make sure that $$\pi = \pi_A = 1 - (x+1)f = f - \rho(1-f) = \pi_B,$$ then we would need to set the payoff ratio for B to be $$\rho = \rho(x,f) = - \frac{1 - (2+x)f}{1-f}.$$ Since we need to have $\rho \gt 0,$ and $1-f \gt 0$ by definition since $f \in (0,1),$ we must have $1 - (2+x) f \lt 0$ or equivalently $$f \gt \frac{1}{2+x}.$$ Ideally, the bookmaker wants to make a profit so we would also want to make sure that $\pi = 1 - (x+1)f \gt 0,$ so $$f \lt \frac{1}{1+x}.$$ Now we can convert the payoff ratio as follows: if $\rho \geq 1,$ then the odds for B should be $+100\rho;$ whereas, if $\rho \in (0,1)$ then the odds for B should be $-100 / \rho.$ Therefore, if the odds for A are $+100x,$ then as long as $$f \in \left(\frac{1}{2+x}, \frac{1}{1+x}\right)$$ the oddsmaker can make a profit of $\pi = 1-(1+x)f \gt 0,$ regardless of outcome.
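To make this concrete, here's a small Python sketch using an arbitrary illustrative choice of $x = 1.5$ (odds of $+150$ for A), where the feasible window is $\left(\frac{1}{2+x}, \frac{1}{1+x}\right) = \left(\frac{2}{7}, \frac{2}{5}\right)$:

```python
def b_ratio(x, f):
    # rho that equalizes the bookmaker's cashflow, from 1 - (x+1)f = f - rho(1-f)
    return ((2 + x) * f - 1) / (1 - f)

x = 1.5
for f in [0.25, 0.31, 0.38]:
    rho = b_ratio(x, f)
    pi_a = 1 - (x + 1) * f       # cashflow if A wins
    pi_b = f - rho * (1 - f)     # cashflow if B wins
    print(f, rho, pi_a, pi_b)    # rho < 0 (infeasible) for f below 1/(2+x) = 2/7
```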

Part 2:

Now, assume the odds for A winning the award have been set to $-100y$, where $y \gt 1$. Again, for many values of $f$, the oddsmaker can set the odds for B so they’ll make the same amount whether A or B wins the award. What is the critical value of $f$ below which this isn’t possible?

In this case, if A wins the oddsmaker must pay out $-\frac{f}{y}$ to the bettors who supported A but receive $+(1-f)$ from the losing bettors who supported B, so the total cashflow if A wins is $$\pi_A(y,f) = 1 - \left(1 + \frac{1}{y}\right) f.$$ Let's assume that if B wins that the oddsmaker will have to pay out $-\varrho (1-f),$ for some $\varrho \gt 0,$ while they would receive $+f$ from all of the losing A bettors. In this case, the total cashflow if B wins is $$\pi_B(f,\varrho) = f - \varrho(1-f).$$ If the oddsmaker wants to make sure that $$\pi = \pi_A = 1 - \left(1+\frac{1}{y}\right)f = f - \varrho(1-f) = \pi_B,$$ then we would need to set the payoff ratio for B to be $$\varrho = \varrho(y,f) = - \frac{y - (2y+1)f}{y(1-f)}.$$ Since we need to have $\varrho \gt 0,$ and $1-f \gt 0$ by definition since $f \in (0,1),$ we must have $y - (2y+1) f \lt 0$ or equivalently $$f \gt \frac{y}{2y+1}.$$ Ideally, the bookmaker wants to make a profit so we would also want to make sure that $\pi = 1 - \left(1+\frac{1}{y}\right)f \gt 0,$ so $$f \lt \frac{y}{1+y}.$$ Now we can convert the payoff ratio as follows: if $\varrho \geq 1,$ then the odds for B should be $+100\varrho;$ whereas, if $\varrho \in (0,1)$ then the odds for B should be $-100 / \varrho.$ Therefore, if the odds for A are $-100y,$ then as long as $$f \in \left(\frac{y}{2y+1}, \frac{y}{1+y}\right)$$ the oddsmaker can make a profit of $\pi = 1-\left(1+\frac{1}{y}\right)f \gt 0,$ regardless of outcome.
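The same kind of check works here, with an illustrative $y = 1.5$ (odds of $-150$ for A), where the feasible window is $\left(\frac{y}{2y+1}, \frac{y}{1+y}\right) = (0.375, 0.6)$:

```python
def b_ratio_minus(y, f):
    # varrho that equalizes cashflows when A's odds are -100y
    return ((2 * y + 1) * f - y) / (y * (1 - f))

y, f = 1.5, 0.5
varrho = b_ratio_minus(y, f)
pi_a = 1 - (1 + 1 / y) * f     # cashflow if A wins
pi_b = f - varrho * (1 - f)    # cashflow if B wins
print(varrho, pi_a, pi_b)      # both cashflows come out to 1/6
```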

MAWGWTWFMVP

With the regular season over, there are two clear favorites for baseball’s American League Most Valuable Player (MVP) award according to ESPN:

  • Aaron Judge of the New York Yankees, whose odds are $-150$.
  • Cal Raleigh of the Seattle Mariners, whose odds are $+110$.

While these betting lines may be informed by an assessment of Judge’s and Raleigh’s real chances, they may also be informed by how much money people are betting on each player. Suppose all bettors have wagered on either Judge or Raleigh with the odds above. Some fraction $f$ of dollars wagered have been in favor of Judge, while $1−f$ has been wagered on Raleigh. For what fraction $f$ will the oddsmaker earn the same amount of money, regardless of which player earns the MVP award?

Let's normalize everything so that there is one unit of total dollars wagered. When the Baseball Writers Association of America rightfully crowns Cal Raleigh as the AL MVP, the bookmaker will happily take the $+f$ from all the sad crybaby losers who were enthralled by everything New York Yankees. On the other hand in this case, the bookmaker would have to pay out a total of $-1.1(1-f) = 1.1f - 1.1$ to the rational baseball aficionados who saw through the glitz, glamour and sour grapes of the East Coast media blitz for Judge and bet on Raleigh. So the total cashflow for the oddsmaker is $+f + 1.1f - 1.1 = 2.1f-1.1.$

On the other hand, just from a pure devil's advocacy perspective, there could be some happenstance where Raleigh could theoretically lose the AL MVP (tvu!, tvu!, tvu!, bli ayin hara!), ranging from the incompetent (the BBWAA forgot to change over its ballots from last year), to the pathetic (the geriatric writers couldn't stay up past their East Coast bedtimes to watch the pure and utter dominance of Raleigh), to any number of more nefarious, perverse chicanery. In this purely theoretical scenario, the oddsmaker would have a total payout of $-\frac{f}{1.5} = -\frac{2f}{3}$ to those delusional Judge patrons, while they would take the $+(1-f)$ from the hard-working, salt-of-the-earth, righteous Raleigh boosters. In this infinitesimally remote scenario, the total cashflow for the oddsmaker is $-\frac{2f}{3} + 1 -f = 1 - \frac{5f}{3}.$

A sober, dispassionate bookmaker would want these two cashflows to align, no matter how justified he might be in skewing it towards Raleigh's side. In this case, the only justification for having the oddsmaker set Raleigh as the underdog in this AL MVP race is that the two sides would net the same amount of positive cashflow if $$2.1f - 1.1 = 1 - \frac{5f}{3},$$ that is, $$f = \frac{2.1}{2.1 + \frac{5}{3}} = \frac{63}{113} \approx 0.55752212389\dots$$ of all bets were for Judge, for some ungodly reason.
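A quick check of this arithmetic with exact rational numbers (a sketch; the per-dollar payouts $1.1$ and $\frac{2}{3}$ come directly from the odds $+110$ and $-150$):

```python
from fractions import Fraction

f = Fraction(63, 113)                          # fraction of money bet on Judge
raleigh_wins = f - Fraction(11, 10) * (1 - f)  # +f from Judge bettors, -1.1 per Raleigh dollar
judge_wins = (1 - f) - Fraction(2, 3) * f      # +(1-f) from Raleigh bettors, -2/3 per Judge dollar
assert raleigh_wins == judge_wins == Fraction(8, 113)
```

Either way, the oddsmaker nets $\frac{8}{113} \approx 7.08\%$ of the total money wagered.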

Sunday, October 19, 2025

Average Distance from Center of a Unit Cube to Its Surface

Let’s raise the stakes by a dimension. Now, you start at the center of a unit cube. Again, you pick a random direction to move in, with all directions being equally likely. You move along this direction until you reach a point on the surface of the unit cube. On average, how far can you expect to have traveled?

So unlike shifting the shape that we are trying to hit the perimeter of, as in the Extra Credit problem, we instead increase the dimensionality. Here $$B_\infty = \left\{ (x,y,z) \in \mathbb{R}^3 \mid \max \{ |x|, |y|, |z| \} = \frac{1}{2} \right\}.$$ Also, instead of parameterizing the direction of travel by a single bearing with respect to the positive $x$-axis, here we need to have two angles, one $\theta \in [0, \pi]$ with respect to the positive $z$-axis and a second $\varphi \in [0,2\pi)$ with respect to the positive $x$-axis. Since the differential of the solid angle of a region of the surface of the sphere is $d\Omega = \sin \theta \,d\varphi \,d\theta$ and the entire surface area of a unit sphere is $4\pi,$ we see that the joint distribution function for $\theta$ and $\varphi$ is $$f(\theta, \varphi) = \frac{\sin \theta}{4\pi}.$$ Now, again assuming a unit speed, we get the position vector $$r(t) = ( t \cos \varphi \sin \theta, t \sin \varphi \sin \theta, t \cos \theta ).$$

Since you will hit $B_\infty$ whenever $t \max \{ |\cos \varphi \sin \theta|, |\sin \varphi \sin \theta|, |\cos \theta| \} = \frac{1}{2},$ the distance traveled is \begin{align*} d(\theta, \varphi) &= \frac{1}{2 \max \{ |\cos \varphi \sin \theta|, |\sin \varphi \sin \theta|, |\cos \theta| \}} \\ & = \frac{1}{2} \min \{ |\sec \varphi| \csc \theta, |\csc \varphi| \csc \theta, |\sec \theta | \}, \end{align*} where the final equation removes $\csc \theta$ from the absolute values since $\sin \theta, \csc \theta \geq 0$ for all $\theta \in [0,\pi].$

Therefore, the average distance is $$\hat{d} = \frac{1}{4\pi} \int_0^\pi \int_0^{2\pi} d(\theta, \varphi) \sin \theta \,d\varphi \, d\theta.$$ Now we can decompose this integral into three regions, two of which seem to have analytical solutions, so let's try even though we could just stop here and do the larger integral right here and right now.

Let's only worry about the upper hemisphere, that is $0 \leq \theta \leq \frac{\pi}{2}$. Here we can then break the range into the following three sections:

  • Region I - when $0 \leq \theta \leq \frac{\pi}{4},$ where you will eventually hit the top of the unit cube no matter what the value of $\varphi$;
  • Region II - when $\cos^{-1} \frac{1}{\sqrt{3}} \leq \theta \leq \frac{\pi}{2},$ where you will eventually hit one of the vertical sides of the unit cube no matter what the value of $\varphi$; and
  • Region III - when $\frac{\pi}{4} \leq \theta \leq \cos^{-1} \frac{1}{\sqrt{3}},$ where depending on the value of $\varphi$ you will either hit the top or the sides.

In other words, in Region I, $$d_1(\theta, \varphi) = \frac{1}{2} |\sec \theta|, \forall \varphi \in [0,2\pi),$$ so we have \begin{align*}I_1 &= \frac{1}{2\pi} \int_0^{\pi/4} \int_0^{2\pi} d_1(\theta, \varphi) \,d\varphi \, \sin \theta \,d\theta \\ &= \frac{1}{2\pi} \int_0^{\pi/4} \pi \tan \theta \, d\theta = \left.-\frac{1}{2} \ln | \cos \theta | \right|^{\pi/4}_0 \\ &= \frac{1}{4} \ln 2 \approx 0.17328679514\dots.\end{align*} In Region II, $$d_2(\theta, \varphi) = \frac{\csc \theta}{2} \min \{ |\sec \varphi|, |\csc \varphi| \},$$ so we have \begin{align*} I_2 &= \frac{1}{2\pi} \int_{\cos^{-1} (1/\sqrt{3})}^{\pi/2} \int_0^{2\pi} \frac{1}{2} \min \{ |\sec \varphi|, |\csc \varphi| \} \,d\varphi \csc \theta \sin \theta \,d\theta \\ &= \int_{\cos^{-1} (1/\sqrt{3})}^{\pi/2} \left( \frac{1}{2\pi} \int_0^{2\pi} \frac{1}{2} \min \{ |\sec \varphi|, |\csc \varphi| \} \,d\varphi \right) \,d\theta\\ & = \frac{2}{\pi} \ln ( 1 + \sqrt{2} ) \left( \frac{\pi}{2} - \cos^{-1} \left(\frac{1}{\sqrt{3}}\right) \right)\\ & = \left( 1 - \frac{ 2 \cos^{-1} \left( \frac{1}{\sqrt{3}} \right) }{\pi} \right) \ln (1 + \sqrt{2}) \approx 0.345345573653\dots,\end{align*} since the integral with respect to $\varphi$ is none other than the expected distance traveled inside a square, which we found the solution for in the Classic Problem. The trickier third region has integral $$I_3 = \frac{1}{2\pi} \int_{\pi/4}^{\cos^{-1} (1/\sqrt{3})} \int_0^{2\pi} d(\theta,\varphi) \,d\varphi \sin \theta \, d\theta \approx 0.0920550322989\dots,$$ which does not seem to have a readily available analytical solution. Putting these regions together we get that the overall average distance traveled to the surface of the unit cube when uniformly randomly choosing a direction is $$\hat{d} = I_1 + I_2 + I_3 = \frac{1}{4} \ln 2 + \left( 1 - \frac{ 2 \cos^{-1} \left( \frac{1}{\sqrt{3}} \right) }{\pi} \right) \ln (1 + \sqrt{2}) + I_3 \approx 0.610687401568\dots.$$
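Since $I_3$ resisted a closed form, it's worth confirming the final value independently. Here's a small Monte Carlo sketch (my own naming; it uses the standard fact that sampling $\cos\theta$ uniformly on $[-1,1]$ and $\varphi$ uniformly on $[0,2\pi)$ gives a uniformly random direction):

```python
import math
import random

def mean_distance_to_cube_surface(n_samples=200_000, seed=0):
    """Monte Carlo estimate of the average distance from the center of a
    unit cube to its surface along a uniformly random direction.  The ray
    hits the boundary at t = 1 / (2 * max(|x|, |y|, |z|)) for the unit
    direction (x, y, z)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        x, y = s * math.cos(phi), s * math.sin(phi)
        total += 0.5 / max(abs(x), abs(y), abs(z))
    return total / n_samples
```

With $200{,}000$ samples, the estimate lands comfortably within a few thousandths of $0.610687\dots$.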

Average Distance from Center of a Unit Square to Its Perimeter

You start at the center of the unit square and then pick a random direction to move in, with all directions being equally likely. You move along this chosen direction until you reach a point on the perimeter of the unit square. On average, how far can you expect to have traveled?

Let's assume that you start at the origin, which is the center of the unit square $B_\infty = \{ (x,y) \in \mathbb{R}^2 \mid \max \{ |x|, |y| \} = \frac{1}{2} \}.$ Further, let's assume that your randomly chosen bearing makes an angle of $\theta$ with the positive $x$-axis, so $\theta \sim U(0,2\pi),$ and let's assume that you move at some unit speed, so your position through time will be $r(t) = (t \cos \theta, t \sin \theta).$

Since you will hit the $B_\infty$ exactly when $t \max \{ |\cos \theta|, |\sin \theta| \} = \frac{1}{2}$, the distance you would travel from the origin is given by $$d(\theta) = \frac{1}{2 \max \{ |\cos \theta|, |\sin \theta| \} } = \frac{1}{2} \min \{ |\sec \theta|, |\csc \theta| \}.$$ Relying on the periodicity of $d(\theta)$ to break the entire integral into 8 equal parts, we get that the expected distance is therefore \begin{align*}\hat{d} &= \frac{1}{2\pi} \int_0^{2\pi} d(\theta) \,d\theta \\ &= \frac{1}{2\pi} \left( 8 \int_0^{\pi/4} \frac{1}{2} \sec \theta \,d\theta \right) \\ &= \frac{2}{\pi} \Biggl.\ln \left| \sec \theta + \tan \theta \right| \Biggr|^{\pi/4}_0 \\ &= \frac{2}{\pi} \ln \left( 1 + \sqrt{2} \right) \approx 0.561099852339\dots.\end{align*}

We can extend this problem to the slightly different problem of starting at the origin, pointing in a random direction and traveling at a uniform, unit Euclidean speed until we reach the perimeter of a half-unit ball, say $B_p = \{ (x,y) \in \mathbb{R}^2 \mid |x|^p + |y|^p = \frac{1}{2^p} \}.$ As you could possibly surmise, the official question was $B_\infty,$ which is the limit as $p \to \infty.$ In this case, we would get $$d_p (\theta) = \frac{1}{2 \sqrt[p]{\cos^p \theta + \sin^p \theta}}$$ and $$\bar{d}_p = \frac{2}{\pi} \int_0^{\pi/4} \frac{d\theta}{\sqrt[p]{\cos^p \theta + \sin^p \theta}} = \frac{4}{\pi} \int_0^{\sqrt{2}-1} \frac{dt}{\sqrt[p]{(1-t^2)^p + 2^p t^p}},$$ where the second integral is obtained through the mystical tangent half-angle substitution. In general, there is no analytical answer, though in certain cases it simplifies. Obviously, for the case of $p=2$ everything simplifies to $d_2(\theta) = \frac{1}{2}, \forall \theta \in [0,2\pi)$ and so $\bar{d}_2 = \frac{1}{2}$ as well. For $p=1$, we have $$d_1(\theta) = \frac{1}{2 (\cos \theta + \sin \theta)}$$ and \begin{align*}\bar{d}_1 &= \frac{4}{\pi} \int_0^{\sqrt{2}-1} \frac{dt}{(1-t^2) + 2t}\\ &= \frac{4}{\pi} \int_0^{\sqrt{2}-1} \frac{dt}{1+2t-t^2}\\ &= \frac{\sqrt{2}}{\pi} \int_0^{\sqrt{2}-1} \frac{1}{1 + \sqrt{2} - t} + \frac{1}{\sqrt{2} - 1 + t} \,dt \\ &= \frac{\sqrt{2}}{\pi} \left. \ln \left| \frac{\sqrt{2} - 1 + t}{\sqrt{2} + 1 - t} \right| \right|_0^{\sqrt{2}-1}\\ & = \frac{\sqrt{2}}{\pi} \left( \ln \left(\frac{2\sqrt{2}-2}{2} \right) - \ln \left( \frac{\sqrt{2}-1}{\sqrt{2}+1} \right) \right)\\ &= \frac{\sqrt{2}}{\pi} \ln ( 1 + \sqrt{2} ) \approx 0.396757510512\dots.\end{align*} For completeness's sake, below is a table for various other values of $p.$

$p$ $\bar{d}_p$
$1$ $0.396757510512\dots$
$1.5$ $0.466430018867\dots$
$2$ $0.500000000000\dots$
$2.5$ $0.518545527769\dots$
$3$ $0.529817160992\dots$
$4$ $0.542203450927\dots$
$5$ $0.548481398011\dots$
$10$ $0.557672594762\dots$
$100$ $0.561063099606\dots$
$1000$ $0.56109948237\dots$
$\infty$ $0.561099852339\dots$
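The table above can be reproduced with a simple midpoint-rule quadrature of the $\bar{d}_p$ integral (a numerical sketch; `dbar` is my own naming):

```python
import math

def dbar(p, n=100_000):
    """Midpoint rule for (2/pi) * integral over [0, pi/4] of
    (cos^p t + sin^p t)^(-1/p), the average distance from the center
    of the half-unit p-ball B_p to its boundary."""
    h = (math.pi / 4) / n
    total = sum(
        (math.cos((i + 0.5) * h) ** p + math.sin((i + 0.5) * h) ** p) ** (-1.0 / p)
        for i in range(n)
    )
    return (2 / math.pi) * total * h
```

For instance, `dbar(1)` matches $\frac{\sqrt{2}}{\pi}\ln(1+\sqrt{2})$ and `dbar(2)` returns $\frac{1}{2}$, as derived above.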

Monday, October 13, 2025

Let's Make a Tic-Tac-Deal, Extra Credit

In the actual game, you get five rolls instead of three. But as with rolling a 2 or 12, rolling a number that you have already rolled is a wasted turn. With five rolls of the dice, what are your chances of getting three Xs in a row, either horizontally, vertically, or diagonally?

Let's just simulate our way out of this. We have the same 8 possible winning sets, which we will encode as sets in Python, and then, after simulating 5 rolls of two dice, we can check whether the resulting set of roll outcomes contains any of the winning sets. See the code snippet below that we can use to encode playing a random round of Tic-Tac-Deal.
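The original snippet isn't reproduced in this excerpt, so the following is a reconstruction along the lines described above (names like `play_round` are my own):

```python
import random

# the 8 winning rows, columns, and diagonals of the Tic-Tac-Deal grid
WINS = [{3, 4, 5}, {6, 7, 8}, {9, 10, 11}, {3, 6, 9},
        {4, 7, 10}, {5, 8, 11}, {3, 7, 11}, {5, 7, 9}]

def play_round(rolls=5, rng=random):
    """Roll two dice `rolls` times, mark the sums (2s and 12s are wasted),
    and return True if any winning line is fully marked."""
    marked = set()
    for _ in range(rolls):
        total = rng.randint(1, 6) + rng.randint(1, 6)
        if total not in (2, 12):
            marked.add(total)
    return any(win <= marked for win in WINS)

def estimate(rolls=5, n=1_000_000, seed=0):
    rng = random.Random(seed)
    return sum(play_round(rolls, rng) for _ in range(n)) / n
```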

Running $1$ million simulations, we infer that the probability of winning the game with $5$ dice rolls is about $36.0907\dots\%.$ In particular, the $95\%$ confidence interval is $(35.996568\%, 36.184832\%).$ Further, note that if we run these simulations with the parameter $\textsf{rolls} = 3,$ then we get a $95\%$ confidence interval of $(5.725294\%, 5.816706\%)$ which certainly contains our analytic answer of $\frac{113}{1944}.$

Let's Make a Tic-Tac-Deal!

The game of Tic-Tac-Deal $2.0$ has a $3$-by-$3$ square grid with the numbers $3$ through $11$, arranged as follows:

$3$ $4$ $5$
$6$ $7$ $8$
$9$ $10$ $11$

You start by rolling a standard pair of six-sided dice and add the two numbers rolled. You place an X on the board on the square that contains the sum. If the sum is a $2$ or $12$, your roll is wasted. If you have exactly three rolls of the dice, what are your chances of getting three Xs in a row (either horizontally, vertically, or diagonally)?

First, we see that $$p_n = \frac{\min\{n-1,13-n\}}{36}$$ is the probability of getting a value of $n$ on a roll of two standard dice. Secondly, we see that there are a total of 8 winning configurations: $\{3,4,5\}$, $\{3,6,9\}$, $\{3,7,11\}$, $\{4,7,10\}$, $\{5,7,9\}$, $\{5,8,11\}$, $\{6,7,8\}$, and $\{9,10,11\}.$

Since order doesn't matter here, the probability of getting a winning configuration $W=\{w_1, w_2, w_3\}$ is given by $$p(W)= 3! \prod_{i=1}^3 p_{w_i}.$$ So the overall probability of winning in three rolls is \begin{align*}P=\sum_W p(W)&= 3! \left( p_3p_4p_5 + p_3p_6p_9 + p_3p_7p_{11}\right. \\ &\quad\quad +p_4p_7p_{10} + p_5p_7p_9 + p_5p_8p_{11} \\ &\left.\quad\quad\quad + p_6p_7p_8 + p_9p_{10}p_{11} \right)\\ &= \frac{\left( 144 + 240 + 144 + 324 + 576 + 240 + 900 + 144 \right)}{46656} \\ &= \frac{113}{1944} \approx 5.812757\dots\%\end{align*}
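This computation is easy to verify with exact rational arithmetic (a sketch):

```python
from fractions import Fraction
from math import factorial

# probability of rolling a sum of n with two standard dice
p = {n: Fraction(min(n - 1, 13 - n), 36) for n in range(3, 12)}

wins = [{3, 4, 5}, {6, 7, 8}, {9, 10, 11}, {3, 6, 9},
        {4, 7, 10}, {5, 8, 11}, {3, 7, 11}, {5, 7, 9}]

# each winning set can be rolled in any of its 3! orders
P = sum(factorial(3) * p[a] * p[b] * p[c]
        for a, b, c in (sorted(w) for w in wins))
assert P == Fraction(113, 1944)
```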

Monday, October 6, 2025

What is Anita's 𝝋?

It’s time for you to check Anita’s work. What was the measure of angle $\varphi$?

Ok, so as we saw in the prior Classic Fiddler problem, Anita will be crossing the $1$-inch segment while she is on her $4$-inch line segment, that is when she is traveling from $$X_3 = (1+2\cos \varphi+3\cos 2\varphi, 2\sin \varphi + 3\sin 2\varphi)$$ to $$X_4 = (1 + 2 \cos \varphi + 3 \cos 2\varphi + 4\cos 3\varphi, 2 \sin \varphi + 3 \sin 2\varphi + 4 \sin 3 \varphi).$$ We have already somewhat loosely located $\varphi \in ( \frac{3\pi}{4}, \frac{4\pi}{5} ).$

We see that the line from $X_3$ to $X_4$ is given by $$y - 2 \sin \varphi - 3 \sin 2\varphi = \tan 3\varphi \left( x - 1 - 2 \cos \varphi - 3 \cos 2\varphi \right).$$ Additionally we note that since for large angles (e.g., $\varphi = \pi$) the crossing point is near $(0,0)$ and since the crossing point for the smaller $\varphi = \frac{4\pi}{5}$ is closer towards the $(1,0)$ endpoint of the initial segment, it stands to reason that for the optimal angle $\varphi$ the crossing point would be precisely at the endpoint $(1,0).$ In this case we would have $$ - 2 \sin \varphi - 3 \sin 2 \varphi = \tan 3\varphi \left( -2 \cos \varphi - 3 \cos 2\varphi \right).$$ Since $$\tan 3 \varphi = \frac{ 3 \tan \varphi - \tan^3 \varphi }{ 1 - 3 \tan^2 \varphi } = \tan \varphi \frac{ 3 - \tan^2 \varphi }{ 1 - 3 \tan^2 \varphi } = \tan \varphi \frac{ 4 \cos^2 \varphi - 1}{ 4 \cos^2 \varphi - 3},$$ $$-2 \cos \varphi - 3 \cos 2\varphi = 3 - 2 \cos \varphi - 6 \cos^2 \varphi,$$ and $$2 \sin\varphi + 3 \sin 2\varphi = 2 \tan \varphi \cos \varphi ( 1 + 3 \cos \varphi ),$$ the above equality is equivalent (with much algebraic expansion and reduction) to \begin{align*} 0 &= \tan \varphi \frac{4 \cos^2 \varphi - 1}{4 \cos^2 \varphi - 3} ( 3 - 2\cos \varphi - 6 \cos^2 \varphi) + 2 \tan \varphi \cos \varphi (1 + 3 \cos \varphi) \\ &= \frac{\tan \varphi}{4 \cos^2 \varphi - 3} \left( (4 \cos^2 \varphi - 1) ( 3 - 2 \cos\varphi - 6 \cos^2 \varphi ) + 2 \cos \varphi ( 4 \cos^2 \varphi - 3 ) (1 + 3 \cos \varphi) \right) \\ &= - \frac{\tan \varphi (4 \cos \varphi + 3)}{4 \cos^2 \varphi - 3},\end{align*} so we have that $\varphi = \cos^{-1} \left(-\frac{3}{4}\right) \approx 2.41885840578\dots.$
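We can double-check numerically that with $\varphi = \cos^{-1}\left(-\frac{3}{4}\right)$ the $4$-inch segment really does pass through the endpoint $(1,0)$:

```python
import math

phi = math.acos(-3 / 4)
# endpoints of Anita's 4-inch segment
x3 = 1 + 2 * math.cos(phi) + 3 * math.cos(2 * phi)
y3 = 2 * math.sin(phi) + 3 * math.sin(2 * phi)
x4 = x3 + 4 * math.cos(3 * phi)
y4 = y3 + 4 * math.sin(3 * phi)
# parameter along the segment where it crosses the x-axis
t = -y3 / (y4 - y3)
x_cross = x3 + t * (x4 - x3)
```

The crossing parameter $t$ lands strictly inside $(0,1)$ and `x_cross` comes out as $1$ to machine precision.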

Anita's Walk

Anita the ant is going for a walk in the sand, leaving a trail as she goes. First, she walks $1$ inch in a straight line. Then she rotates counterclockwise by an angle $\varphi$, after which she walks another $2$ inches. She rotates counterclockwise an angle $\varphi$ again, after which she walks $3$ inches. She keeps doing this over and over again, rotating counterclockwise an angle $\varphi$ and then walking $1$ inch farther than she did in the previous segment.

At some point during her journey, she crosses over her initial $1$-inch segment. By “cross over,” I am including the two end points of that first segment. Anita realizes that $\varphi$ was the smallest possible angle such that she crossed over her $1$-inch segment. (Among the ants, she’s known for her mathematical prowess.) How long was the segment along which she first crossed over the $1$-inch segment? Your answer should be a whole number of inches.

Let's let Anita's position on the $xy$-plane with origin centered at her trail's beginning after completing her line segment of length $n$, be $X_n = (x_n, y_n)$. In particular we then see that $$x_n = \sum_{k=1}^n k \cos ((k-1) \varphi) \,\,\text{and}\,\, y_n = \sum_{k=1}^n k \sin ((k-1) \varphi).$$ One thing we notice immediately is that as $n \to \infty,$ $X_n$ does not converge and in fact $\|X_n\| \to \infty.$ Therefore, if there is crossing over the initial segment it would have to happen in the first handful of steps, rather than after some extremely large cycle.

We see that if $\phi=\pi,$ then obviously Anita would de facto double back into her first line segment in her $n=2$ step, since she would start retracing her steps immediately. Let's try to see whether or not there can be some $\phi \lt \pi$ such that Anita crosses the first line segment in her $n=3$ step. In this case we see that $X_2 = (1 + 2\cos \phi, 2\sin \phi)$ and $X_3 = (1 + 2 \cos \phi + 3 \cos 2\phi, 2\sin \phi + 3\sin 2\phi),$ so the line connecting $X_2$ and $X_3$ is given by $$y - 2\sin \phi = \tan 2\phi \left( x - 1 - 2 \cos \phi \right).$$ In particular, if we assume that Anita crossed at some point $(h, 0)$ on the initial line segment, that is with $0 \leq h \leq 1,$ then we would get the equality \begin{align*}0 &= 2 \sin \phi \left( 1 - \frac{ \cos \phi \left( 2 \cos \phi + 1 - h \right) }{ \cos 2\phi } \right)\\ &= \frac{2 \sin \phi}{\cos 2\phi} \left( \cos 2\phi - \cos \phi \left( 2\cos \phi + 1 - h \right) \right) \\ &= \frac{2 \sin \phi}{\cos 2\phi} \left( 2 \cos^2 \phi - 1 - \left(2 \cos^2 \phi + (1-h) \cos \phi\right) \right) \\ &= -\frac{2\sin \phi}{\cos 2\phi} \left( 1 + (1-h) \cos \phi \right)\end{align*} For $h \in [0,1),$ having $1 + (1-h) \cos \phi = 0$ would require $\cos \phi = -\frac{1}{1-h} \leq -1,$ which is only possible when $h = 0$ and $\phi = \pi,$ while for $h = 1$ the factor is identically $1.$ Therefore, besides the case of $\phi = \pi$ as we saw before, there are no solutions where Anita crosses the $1$-inch segment on her $n=3$ step.

Let's look at the case of say $\phi = \frac{4\pi}{5}.$ In this case, we see that $X_3 = (0.309017\dots, -1.677599\dots)$ and $X_4=(1.545085\dots, 2.126627\dots),$ so the $n=4$ line segment crosses the $1$-inch line segment at $(h,0)$ for $h\approx 0.854102\dots.$ So there is some value of $\phi$ for which Anita would cross the $1$-inch segment on the $n=4$ segment.

Let's look at another case where $\phi=\frac{3\pi}{4}.$ Here we see that $X_3=(1-\sqrt{2}, \sqrt{2}-3)$ and $X_4=(1+\sqrt{2},3\sqrt{2}-3).$ The line segment between $X_3$ and $X_4$ narrowly misses the $1$-inch segment, crossing the $x$-axis at $(4-2\sqrt{2},0)$. We can show (or at least vigorously wave our hands) that for any angle $\phi \leq \frac{3\pi}{4}$ there are similarly no instances where Anita would cross the $1$-inch segment.

Therefore, we see that the length of the segment along which Anita first crossed the initial path was $4$ inches.

Monday, September 29, 2025

Riskier Risk, Now with Wilds!

The full deck of Risk cards also contains two wildcards, which can be used as any of the three types of cards (infantry, cavalry, and artillery) upon trading them in. Thus, the full deck consists of $44$ cards.

You must have at least three cards to have any shot at trading them in. Meanwhile, having five cards guarantees that you have three you can trade in.

If you are randomly dealt cards from a complete deck of $44$ one at a time, how many cards would you need, on average, until you can trade in three? (Your answer should be somewhere between three and five. And no, it’s not four.)

Let $S$ be the random number of cards you have when you can first turn in a set of three cards. Obviously we have $3\leq S\leq 5,$ so let's go about calculating $p_s = \mathbb{P} \{ S= s\}$ for $s \in \{3, 4, 5\},$ from which we can then compute $$\mathbb{E} [S] = \sum_{s=3}^5 sp_s.$$

From the Classic wildless Risk problem, we see that there remain $3836$ possible combinations of three card sets without using any wilds. To this we must add $\binom{2}{1} \binom{42}{2} = 1722$ possible combinations of three card sets with one wild card and $\binom{2}{2}\binom{42}{1}=42$ possible combinations of three card sets with two wild cards. In total, there are $5600$ possible sets of three cards, out of a total $\binom{44}{3}=13244$ possible hands of three Risk cards, so we have $$p_3 = \frac{5600}{13244}=\frac{200}{473}\approx 42.283\dots\%$$

Rather than the messiness of figuring out the exact combinations for $S=4$, let's instead tackle $S=5.$ We see that if you do not get a set on your fourth card then you will automatically get a set by your fifth card, so we only need to figure out what situations lead you to not have a set after four cards. This can only happen whenever you have two of one type and two of a different type (with no wilds, obviously). There are $\binom{14}{2}\binom{14}{2}\binom{3}{2}=24843$ possible hands that have exactly two cards in two different types. Since there are $\binom{44}{4} = 135751$ possible combinations of four distinct Risk cards, we thus have $$p_5=\frac{24843}{135751}=\frac{3549}{19393}\approx 18.300\dots\%$$

Since we can reason that $p_4=1-p_3-p_5 = \frac{7644}{19393}\approx 39.416\dots\%,$ we have the expected number of Risk cards needed to turn in a set is $$\mathbb{E}[S]= 3 \frac{200}{473} + 4 \frac{7644}{19393} + 5 \frac{3549}{19393} = \frac{72921}{19393} \approx 3.760171\dots$$
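All of these counts and the expectation can be verified by brute-force enumeration over actual hands (a sketch; the single-letter card encoding is my own):

```python
from fractions import Fraction
from functools import lru_cache
from itertools import combinations, product
from math import comb

# 14 infantry, 14 cavalry, 14 artillery, 2 wilds
deck = ['I'] * 14 + ['C'] * 14 + ['A'] * 14 + ['W'] * 2

@lru_cache(maxsize=None)
def is_set(cards):
    """True if three cards form a tradeable set, trying every wild assignment."""
    wild_idx = [i for i, c in enumerate(cards) if c == 'W']
    for repl in product('ICA', repeat=len(wild_idx)):
        hand = list(cards)
        for i, r in zip(wild_idx, repl):
            hand[i] = r
        if len(set(hand)) in (1, 3):   # three of a kind, or one of each
            return True
    return False

n3 = sum(is_set(h) for h in combinations(deck, 3))
n4_stuck = sum(not any(is_set(s) for s in combinations(h, 3))
               for h in combinations(deck, 4))
p3 = Fraction(n3, comb(44, 3))
p5 = Fraction(n4_stuck, comb(44, 4))
expected = 3 * p3 + 4 * (1 - p3 - p5) + 5 * p5
```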

Risk without Wilds

There are $42$ territory cards in the deck—$14$ that depict an infantry unit, $14$ that depict a cavalry unit, and $14$ that depict an artillery unit. Once you have three cards that either (1) all depict the same kind of unit, or (2) all depict different kinds of units, you can trade them in at the beginning of your next turn in exchange for some bonus units to be placed on the board.

If you are randomly dealt three cards from the $42$, what is the probability that you can trade them in?

In this case, we see that there are $\binom{42}{3}=11480$ possible hands of three Risk cards if you remove the wild cards. There are $\binom{14}{3}=364$ possible combinations of three cards within each kind, and there are three different kinds (that is, infantry, cavalry and artillery), so a total of $1092$ different three-of-a-kind sets can be formed. There are also $14^3 = 2744$ different one-of-each sets. So there are a total of $3836$ sets of three Risk cards without wild cards, which implies that the probability of being able to trade in a randomly dealt set of three cards when the wild cards have been removed is $$p = \frac{3836}{11480} = \frac{137}{410} \approx 33.41463\dots \%$$
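As a quick check of the counting (a sketch):

```python
from fractions import Fraction
from math import comb

three_of_a_kind = 3 * comb(14, 3)   # 1092: all three cards from one of the 3 kinds
one_of_each = 14 ** 3               # 2744: one card from each kind
p = Fraction(three_of_a_kind + one_of_each, comb(42, 3))
assert p == Fraction(137, 410)
```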

Monday, September 15, 2025

Letters Boxed, Like Surgeons General

Consider the following diagram, which consists of eight points (labeled $A$ through $H$), two on each side of the square. A valid “letter boxed” sequence starts at any of the eight points, and proceeds through all of the other points exactly once. However, adjacent points in the sequence can never be on the same side of the square. The first and last points in the sequence can be on the same side, but do not have to be.

As an illustration, $AFBCHEDG$ is a valid sequence of points. However, $AFBCHGED$ is not a valid sequence, since $H$ and $G$ are adjacent in the sequence and on the same side of the square. How many distinct valid “letter boxed” sequences are there that include all eight points on the square?

Well in total, there are $8! = 40,320$ different words that can be made up of the letters $A$ through $H$ with no repeated letters. One possible way to get this answer is to just enumerate all $40,320$ possible words and throw away any words with forbidden doublets, like $AB$, $EF$ or $HG$. This is not too terribly time consuming in a short Python script, which yields that there are a total of $13,824$ different letters boxed that include all eight points on the square.
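The enumeration just described can be sketched in a few lines, pairing up same-side points as $(A,B)$, $(C,D)$, $(E,F)$, $(G,H)$, consistent with the forbidden doublets $AB$, $EF$ and $HG$ mentioned above:

```python
from itertools import permutations

# side of the square each point lies on
SIDE = {'A': 0, 'B': 0, 'C': 1, 'D': 1, 'E': 2, 'F': 2, 'G': 3, 'H': 3}

# count sequences where no two consecutive points share a side
count = sum(all(SIDE[a] != SIDE[b] for a, b in zip(word, word[1:]))
            for word in permutations('ABCDEFGH'))
assert count == 13824
```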

Instead of two points on each side of the square (and eight points in total), now there are three points on each side (and twelve points in total), labeled A through L in the diagram below. How many distinct valid “letter boxed” sequences are there that include all $12$ points on the square?

While $8!$ might be well within a modern programming language's wheelhouse to try to iterate through, $12!$ is more than $500$ million possible words that are composed of the letters $A$ through $L$ with no repeated letters. So unless we want to run a script all weekend, let's try a different approach. We can encode the 12 points on the square and the adjacency of the letter boxed game into a graph $\mathcal{G},$ where for instance there is an edge from $A$ to each letter $D, E, F, \dots, L,$ but no edge between $A$ and $B$, $A$ and $C$ or $B$ and $C,$ since they all share the same side of the square. A letter boxed that contains all twelve letters would then be a Hamiltonian path on $\mathcal{G},$ so let's liberally borrow the Python3 code provided by user Gaslight Deceive Subvert from this StackOverflow post related to enumerating all Hamiltonian paths on a graph and get to counting. Since I don't particularly care about the paths themselves and just the number of them, instead of printing the entire list of all Hamiltonian paths, I modified it to just print the number of distinct Hamiltonian paths.

To confirm that everything seemed above board, I started out with the graph from the Classic problem given by \begin{align*}\textsf{G} &\textsf{= \{'A':'CDEFGH', 'B':'CDEFGH', 'C':'ABEFGH', 'D':'ABEFGH',}\\ &\quad\quad \textsf{'E':'ABCDGH', 'F':'ABCDGH','G':'ABCDEF', 'H' : 'ABCDEF'\}}.\end{align*} Thankfully, the code retrieved the answer of $13,824$ letters boxed, so I then felt confident enough to encode the Extra Credit graph as \begin{align*}\tilde{G} &\textsf{= \{'A':'DEFGHIJKL', 'B':'DEFGHIJKL', 'C':'DEFGHIJKL',}\\ &\quad\quad\quad \textsf{'D':'ABCGHIJKL', 'E':'ABCGHIJKL', 'F':'ABCGHIJKL',}\\ &\quad\quad\quad\quad \textsf{'G':'ABCDEFJKL', 'H' : 'ABCDEFJKL', 'I':'ABCDEFJKL',}\\ &\quad\quad\quad\quad\quad \textsf{ 'J' : 'ABCDEFGHI', 'K' : 'ABCDEFGHI', 'L' : 'ABCDEFGHI'\}}.\end{align*} Here, after running the code for roughly 15 minutes, we get a grand total of $53,529,984$ letters boxed with all $12$ points on the square.
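For what it's worth, the $12$-point count can also be had instantly, without the 15-minute enumeration, via inclusion/exclusion: glue each same-side group of $3$ letters into $j$ ordered blocks, counted by the Lah numbers $L(3,j) = \binom{2}{j-1}\frac{3!}{j!}$ with sign $(-1)^{3-j}$, and then permute all the resulting blocks. A sketch of this alternative approach:

```python
from itertools import product
from math import comb, factorial

def lah(n, j):
    # ways to split n distinct items into j unordered, internally ordered blocks
    return comb(n - 1, j - 1) * factorial(n) // factorial(j)

total = 0
for js in product((1, 2, 3), repeat=4):        # blocks per side of the square
    sign = (-1) ** sum(3 - j for j in js)
    ways = 1
    for j in js:
        ways *= lah(3, j)
    total += sign * ways * factorial(sum(js))  # arrange all blocks in a row
assert total == 53529984
```

The same computation with groups of $2$ per side recovers the Classic answer of $13{,}824$.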

Monday, September 8, 2025

High-Low Hijinks Extra Credit

Your friend is playing an epic game of “high-low” and has made it incredibly far, having racked up a huge number of points. Given this information, and only this information, what is the probability that your friend wins the next round of the game?

Let's define this problem as the following probabilistic statement, what is the conditional probability that $N \geq n+1$ given that $N \geq n,$ when $n$ is large? That is, find $$\lambda = \lim_{n \to \infty} \mathbb{P} \left \{ N \geq n+1 \mid N \geq n \right\} = \lim_{n\to \infty} \frac{\mathbb{P} \{ N \geq n+1 \}}{\mathbb{P} \{ N \geq n \} } = \lim_{n\to\infty} \frac{p_{n+1}}{p_n}.$$

One method is to just compute successive values of $p_n,$ as per the following table:

$n$ $p_n$
1 $\frac{3}{4}$
2 $\frac{13}{24}$
3 $\frac{25}{64}$
4 $\frac{541}{1920}$
5 $\frac{1561}{7680}$
6 $\frac{47293}{322560}$
7 $\frac{36389}{344064}$
8 $\frac{7087261}{92897280}$
9 $\frac{34082521}{619315200}$
10 $\frac{1622632573}{40874803200}$

Even by this point, it seems that the ratio of $p_{n+1} / p_n$ approaches roughly $0.72134752\dots,$ but let's see if we can make sense of this otherwise mysterious limit.

Let's focus on the recursion $h_{n+1}(t) = \int_0^1 I(t,x) h_n(x) \,dx$ that we saw in the Classic problem. Let's assume that we have $h_{n+1} (x) \approx \lambda h_n (x),$ for large values of $n \gg 1.$ Then the recursion formula would settle down to something like $$\lambda h^*(t) = \int_0^1 I(t,x) h^*(x) \,dx = \begin{cases} \int_t^1 h^*(x)\,dx, &\text{if $t \lt \frac{1}{2};$}\\ \int_0^t h^*(x) \,dx, &\text{if $t \gt \frac{1}{2}.$}\end{cases}$$ Differentiating both sides of the equation, we get $$\lambda \frac{d}{dt} h^*(t) = \begin{cases} -h^*(t), &\text{if $t \lt \frac{1}{2};$}\\ h^*(t), &\text{if $t \gt \frac{1}{2}.$}\end{cases}$$ Let's focus on when $t \gt \frac{1}{2},$ in which case we have $\lambda \frac{d}{dt} h^*(t) = h^*(t),$ so we would have $h^*(t) = C \exp \left( \frac{t}{\lambda} \right)$ for $t \gt \frac{1}{2},$ with $h^*(t) = h^*(1-t) = C \exp \left( \frac{ 1-t}{\lambda} \right)$ for $t \lt \frac{1}{2}.$ In this case, plugging back in, if our recursion formula still holds for $t \gt \frac{1}{2},$ we would need to have \begin{align*}\lambda C e^{t/\lambda} &= \int_0^\frac{1}{2} C \exp \left( \frac{1-x}{\lambda} \right)\,dx + \int_\frac{1}{2}^t C\exp \left(\frac{x}{\lambda}\right) \,dx \\ &= \lambda C \left( e^{1/\lambda} - e^{1/(2\lambda)} \right) + \lambda C \left( e^{t/\lambda} - e^{1/(2\lambda)} \right) \\ &= \lambda C e^{t/\lambda} + \lambda C \left( e^{1/\lambda} - 2 e^{1/(2\lambda)} \right).\end{align*} Therefore, we must have either $\lambda = 0$ or $e^{1/\lambda} - 2 e^{1/(2\lambda)} = 0$ in order for the recursion formula to hold, which implies that the conditional probability of winning the next round in a long game of high-low is $$\lambda = \lim_{n \to \infty} \frac{p_{n+1}}{p_n} = \frac{1}{2\ln 2} \approx 0.721347520444\dots.$$
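Since each $h_n$ is a polynomial on $[\frac{1}{2},1]$ with $h_n(t) = h_n(1-t),$ the recursion can also be carried out exactly with rational coefficients, which reproduces the table of $p_n$ values and confirms the limit numerically (a sketch; the polynomial bookkeeping is my own):

```python
from fractions import Fraction
import math

def evalp(coeffs, x):
    """Evaluate a polynomial (coeffs[k] is the t^k coefficient) at x."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

half = Fraction(1, 2)
h = [Fraction(0), Fraction(1)]   # h_1(t) = t on [1/2, 1]
p = []
for n in range(11):
    anti = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(h)]  # antiderivative of h_n
    H_half = evalp(anti, half)
    A = evalp(anti, Fraction(1)) - H_half   # A = integral of h_n over [1/2, 1]
    p.append(2 * A)                         # p_n = integral over [0, 1] = 2A by symmetry
    anti[0] = A - H_half                    # h_{n+1}(t) = A + integral of h_n over [1/2, t]
    h = anti
```

The ratio $p_{11}/p_{10}$ already matches $\frac{1}{2\ln 2}$ to better than six decimal places.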

High-Low Hijinks

You’re playing a game of “high-low,” which proceeds as follows:

  • First, you are presented with a random number, $x_1$, which is between $0$ and $1.$
  • A new number, $x_2$, is about to be randomly selected between $0$ and $1$, independent of the first number. But before it’s selected, you must guess how $x_2$ will compare to $x_1$. If you think $x_2$ will be greater than $x_1$ you guess “high.” If you think $x_2$ will be less than $x_1$, you guess “low.” If you guess correctly, you earn a point and advance to the next round. Otherwise, the game is over.
  • If you correctly guessed how $x_2$ compared to $x_1$ then another random number, $x_3$, will be selected between $0$ and $1$. This time, you must compare $x_3$ to $x_2$, guessing whether it will be “high” or “low.” If you guess correctly, you earn a point and advance to the next round. Otherwise, the game is over.

You continue playing as many rounds as you can, as long as you keep guessing correctly. You quickly realize that the best strategy is to guess “high” whenever the previous number is less than $0.5,$ and “low” whenever the previous number is greater than $0.5.$ With this strategy, what is the probability you will earn at least two points? That is, what are your chances of correctly comparing $x_2$ to $x_1$ and then also correctly comparing $x_3$ to $x_2$?

Let's define the indicator function $$I(s,t) = \begin{cases} 1, &\text{if $(t-s)(s-\frac{1}{2}) \leq 0;$} \\ 0, &\text{otherwise,}\end{cases}$$ such that $I(x_1,x_2) = 1$ whenever you successfully earned a point in the first round and $I(x_2,x_3) = 1$ whenever you successfully earned a second point. Let $N = N(x_1, x_2, \dots, x_n, \dots)$ be the random variable that corresponds to your score from playing this game of high-low. We see that based on this setup we have $$\mathbb{P} \{ N \geq n \} = \int_0^1 \int_0^1 \cdots \int_0^1 \prod_{i=1}^n I(x_i, x_{i+1}) \,dx_{n+1} \,dx_n \,dx_{n-1}\, \cdots \,dx_2 \,dx_1.$$ Further, let's define $h_0(t) = 1$ and $$h_1(t) = \int_0^1 I(t, x) \,dx = \begin{cases} \int_t^1 \,dx = 1-t, &\text{if $t \lt \frac{1}{2};$}\\ \int_0^t \,dx = t, &\text{if $t \geq \frac{1}{2};$} \end{cases} = \max \{ t, 1-t \}.$$ Finally, for any $n \in \mathbb{N},$ define $$h_{n+1}(t) = \int_0^1 I(t,x) h_n(x) \,dx.$$

In particular, the probability of having a score of at least $2$ is \begin{align*}p_2 = \mathbb{P} \{ N \geq 2 \} &= \int_0^1 \int_0^1 \int_0^1 I(x_1,x_2) I(x_2, x_3) \,dx_3\,dx_2\,dx_1 \\ &= \int_0^1 \int_0^1 I(x_1,x_2) \left(\int_0^1 I(x_2, x_3) \,dx_3 \right) \,dx_2\,dx_1 \\ &= \int_0^1 \int_0^1 I(x_1,x_2) h_1(x_2) \,dx_2 \,dx_1\\ &= \int_0^1 h_2(x_1) \,dx_1.\end{align*} If we extrapolate further, which we will need primarily for the Extra Credit problem, we see that $$p_n = \mathbb{P} \{ N \geq n \} = \int_0^1 h_n(t) \,dt, \,\, \forall n \in \mathbb{N}.$$

Returning to the Classic problem here, we have $h_1(t) = \max \{ t, 1-t \}$ and \begin{align*}h_2(t) = \int_0^1 I(t,x) \max \{x, 1-x\} \,dx &= \begin{cases} \int_t^1 \max \{ x, 1 - x\} \,dx, &\text{if $t \lt \frac{1}{2};$} \\ \int_0^t \max \{x, 1-x\}\,dx, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \begin{cases} \int_t^\frac{1}{2} (1-x) \,dx + \int_\frac{1}{2}^1 x\,dx, &\text{if $t \lt \frac{1}{2};$} \\ \int_0^\frac{1}{2} (1-x) \,dx + \int_\frac{1}{2}^t x \,dx, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \begin{cases} \frac{3}{4} - t + \frac{t^2}{2}, &\text{if $t \lt \frac{1}{2};$} \\ \frac{1}{4} + \frac{t^2}{2}, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \max \left\{ \frac{3}{4} -t + \frac{t^2}{2}, \frac{1}{4} + \frac{t^2}{2} \right\}.\end{align*} So we have the probability of earning at least two points is \begin{align*}p_2 = \mathbb{P} \{ N \geq 2 \} &= \int_0^1 h_2(t) \,dt\\ &= \int_0^\frac{1}{2} \left( \frac{3}{4} - t + \frac{t^2}{2} \right) \,dt + \int_\frac{1}{2}^1 \left( \frac{1}{4} + \frac{t^2}{2} \right) \,dt\\ &= \left[ \frac{3}{4}t -\frac{1}{2}t^2 + \frac{1}{6} t^3 \right]^{t=\frac{1}{2}}_{t=0} + \left[ \frac{1}{4} t + \frac{1}{6} t^3 \right]^{t=1}_{t=\frac{1}{2}} \\ &= \left(\frac{3}{8} - \frac{1}{8} + \frac{1}{48}\right) + \left(\frac{1}{4} + \frac{1}{6}\right) - \left( \frac{1}{8} + \frac{1}{48} \right) \\ &= \frac{13}{24}.\end{align*}
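As a quick sanity check on $p_2 = \frac{13}{24} \approx 0.5417,$ a direct simulation of two rounds of the cutoff strategy (my own naming):

```python
import random

def estimate_p2(n=100_000, seed=7):
    """Fraction of games where both 'high/low at 0.5' guesses succeed."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        x1, x2, x3 = rng.random(), rng.random(), rng.random()
        first = x2 > x1 if x1 < 0.5 else x2 < x1
        second = x3 > x2 if x2 < 0.5 else x3 < x2
        wins += first and second
    return wins / n
```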

Monday, August 25, 2025

Sundown Trail Race with a Mulligan

Now let’s add one more wrinkle. At some point during the race, if you’re unhappy with the loop you’ve just been randomly assigned, you’re granted a “mulligan,” allowing you to get another random assignment. (Note that there’s a $25$ percent chance you’ll be assigned the same loop again.) You don’t have to use your mulligan, but you can’t use it more than once.

As before, the time is $5$:$55$ p.m. You have just completed a loop, and you haven’t used your mulligan yet. With an optimal strategy (i.e., using the mulligan at the right moment, if at all), on average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

As we saw in the Classic post, let's use the absorption probabilities matrix $B = \left(B(t,s)\right)_{t \in \mathfrak{T}, s \in \mathfrak{S}},$ where $B(t,s)$ is the probability of starting at transient state $t$ and ending up at the final, absorbing state $s.$ This will give us the expected score when starting at transient state $t$ with no mulligans, $$E(t) = \mathbb{E} [ S \mid S_0 = t ] = \sum_{s \in \mathfrak{S}} s B(t,s).$$ In particular, indexing in the order of $\mathfrak{T} = \{0, 1, 2, 3, 3.5, 4, 4.5, 5, 5.5\},$ we have the vector $$E = \left(E(t)\right)_{t \in \mathfrak{T}} = \begin{pmatrix} \frac{19933}{4096} \\ \frac{5245}{1024} \\ \frac{1437}{256} \\ \frac{317}{64} \\ \frac{293}{64} \\ \frac{69}{16} \\ \frac{77}{16} \\ \frac{21}{4} \\ \frac{23}{4} \end{pmatrix}.$$

We can only use a single mulligan, but it is enough to change the overall expected score. Let's denote $$\tilde{E}(t) = \tilde{\mathbb{E}}[ S \mid S_0 = t ]$$ as the expected score when starting at state $t$ with the mulligan still in hand. In particular, let's say that we are currently finishing up our lap to attain the transient score of $t \in \mathfrak{T},$ we haven't yet used our mulligan, and we must quickly decide whether to use it on the newly assigned lap. Suppose the randomly assigned lap would move us from $t$ to $t^\prime \in \mathfrak{T}.$ If we use our mulligan on this turn, then we will end up with an expected score of $E(t);$ whereas if we don't use the mulligan, then we would end up with an expected score of $\tilde{E}(t^\prime),$ since we would still have our mulligan remaining. So the optimal choice when moving from $t$ to $t^\prime \in \mathfrak{T}$ gives us $\max \{ E(t), \tilde{E}(t^\prime) \}.$ Similarly, if we are randomly assigned to move from $t$ to some $s \in \mathfrak{S},$ then using our mulligan gives an expected score of $E(t)$ versus a score of $s$ if we don't. So if we would move from $t$ to $s \in \mathfrak{S},$ the optimal choice gives us $\max \{E(t), s \}.$ Putting this all together, using the corresponding transition probabilities from the $Q$ and $R$ matrices given in the Classic problem, we get the recursive formula $$\tilde{E}(t) = \sum_{t^\prime \in \mathfrak{T}} Q(t, t^\prime) \max \{ E(t), \tilde{E}(t^\prime) \} + \sum_{s \in \mathfrak{S}} R(t,s) \max \{ E(t), s \}.$$

So let's go recursing .... \begin{align*} \tilde{E}(5.5) &= \frac{3}{4} \max\{ E(5.5), 5.5 \} + \frac{1}{4} \max \{ E(5.5), 6.5 \} = \frac{3}{4} E(5.5) + \frac{1}{4} 6.5 = \frac{95}{16} \\ \tilde{E}(5) &= \frac{3}{4} \max \{ E(5), 5 \} + \frac{1}{4} \max \{ E(5), 6 \} = \frac{3}{4} E(5) + \frac{1}{4} 6 = \frac{87}{16} \\ \tilde{E}(4.5) &= \frac{1}{4} \max \{ E(4.5), \tilde{E}(5.5) \} + \frac{3}{4} \max \{ E(4.5), 4.5 \} = \frac{1}{4} \tilde{E}(5.5) + \frac{3}{4} E(4.5) = \frac{163}{32} \\ \tilde{E}(4) &= \frac{1}{4} \max \{ E(4), \tilde{E}(5) \} + \frac{3}{4} \max \{ E(4), 4 \} = \frac{1}{4} \tilde{E}(5) + \frac{3}{4} E(4) = \frac{147}{32} \\ \tilde{E}(3.5) &= \frac{1}{4} \max \{ E(3.5), \tilde{E}(4.5) \} + \frac{1}{2} \max \{ E(3.5), 3.5 \} + \frac{1}{4} \max \{ E(3.5), 6.5 \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(4.5) + \frac{1}{2} E(3.5) + \frac{1}{4} 6.5 = \frac{83}{16} \\ \tilde{E}(3) &= \frac{1}{4} \max \{ E(3), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(3), 3 \} + \frac{1}{4} \max \{ E(3), 6 \} + \frac{1}{4} \max \{ E(3), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(3) + \frac{1}{4} E(3) + \frac{1}{4} 6 + \frac{1}{4} 6.5 = \frac{717}{128}\\ \tilde{E}(2) &= \frac{1}{4} \max \{ E(2), \tilde{E}(3) \} + \frac{1}{4} \max\{ E(2), \tilde{E}(5) \} + \frac{1}{4} \max \{ E(2), \tilde{E}(5.5) \} + \frac{1}{4} \max \{ E(2), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(2) + \frac{1}{4} E(2) + \frac{1}{4} \tilde{E}(5.5) + \frac{1}{4} 6.5 = \frac{3029}{512}\\ \tilde{E}(1) &= \frac{1}{4} \max \{ E(1), \tilde{E}(2) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4.5) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(5.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(2) + \frac{1}{4} E(1) + \frac{1}{4} E(1) + \frac{1}{4} \tilde{E}(5.5) = \frac{5657}{1024} \end{align*}

This leaves us with one more step to show that the total expected score accrued between $5$:$55$ pm and $7$ pm, going at a constant $10$ minute per mile pace with a single mulligan, is \begin{align*} \tilde{E}(0) &= \frac{1}{4} \max \{ E(0), \tilde{E}(1) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3.5) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(4.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(1) + \frac{1}{4} \tilde{E}(3) + \frac{1}{4} \tilde{E}(3.5) + \frac{1}{4} \tilde{E}(4.5)\\ &\quad\quad = \frac{21921}{4096} = 5.351806640625 \end{align*}
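Since every probability here is a dyadic rational, the whole recursion can be verified in exact arithmetic. Below is a minimal sketch assuming the setup from the Classic problem (score increments $\{1, 3, 3.5, 4.5\}$, cutoff $6.5$); `E` is the no-mulligan expectation and `Et` is $\tilde{E}$.

```python
from fractions import Fraction as F
from functools import lru_cache

LOOPS = [F(1), F(3), F(7, 2), F(9, 2)]  # loop lengths in miles
CAP = F(13, 2)                          # 6.5 miles runnable in 65 minutes

@lru_cache(maxsize=None)
def E(t):
    """Expected final score from current score t, no mulligan left."""
    # A loop that would overrun 7 p.m. freezes the score at t.
    return sum(E(t + d) if t + d <= CAP else t for d in LOOPS) / 4

@lru_cache(maxsize=None)
def Et(t):
    """Expected final score from score t with the mulligan still in hand."""
    total = F(0)
    for d in LOOPS:
        if t + d > CAP:          # assigned loop overruns: frozen at t if kept
            keep = t
        elif t + d > CAP - 1:    # no further loop fits: race ends at t + d
            keep = t + d
        else:                    # still transient: the mulligan carries over
            keep = Et(t + d)
        total += max(E(t), keep)  # re-roll (use mulligan) vs. keep assignment
    return total / 4

print(E(F(0)))   # 19933/4096
print(Et(F(0)))  # 21921/4096
```

This reproduces both the Classic answer $E(0) = \frac{19933}{4096}$ and the mulligan answer $\tilde{E}(0) = \frac{21921}{4096}.$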

Sundown Trail Race

You’re participating in a trail run that ends at sundown at $7$ p.m. The run consists of four loops: $1$ mile, $3$ miles, $3.5$ miles, and $4.5$ miles. After completing any given loop, you are randomly assigned another loop to run—this next loop could be the same as the previous one you just ran, or it could be one of the other three. Being assigned your next loop doesn’t take a meaningful amount of time; assume all your time is spent running.

Your “score” in the race is the total distance you run among all completed loops. If you’re still out on a loop at $7$ p.m., any completed distance on that loop does not count toward your score!

It is now $5$:$55$ p.m. and you have just completed a loop. So far, you’ve been running $10$-minute miles the whole way. You’ll maintain that pace until $7$ p.m. On average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

OK, let's agree on how to parametrize this. There are $65$ minutes left in the race and you are running at a clean $10$ minutes per mile, so let's just divide everything by the $10$-minute-per-mile pace and deal with things entirely in miles, or score space. In this space, the problem looks like this: your score is currently $0$, and you will be assigned a random score increment $\delta S$ which is uniformly distributed over the set $\Delta S = \{1 , 3, 3.5, 4.5 \}.$ If $S + \delta S \leq 6.5,$ then your new score is $S+\delta S.$ Otherwise, if $S + \delta S \gt 6.5,$ then your final score will be $S.$ For clarity, we see that the possible final scores are $\mathfrak{S} = \{ 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5\}.$

With the transformation complete, we could go through the combinatorial enumeration of all possible paths to each final score, say $S=3.5$. In that particular case, we see that in order to end with a score of $3.5$ you must first be assigned $3.5$ on your first loop and then be assigned either $3.5$ or $4.5$ on your second loop. This would then have a total probability of $\frac{1}{4} \times \frac{1}{2} = \frac{1}{8}.$ This can definitely be done for each of the other elements of $\mathfrak{S},$ but for the Extra Credit answer we will want to take advantage of modeling this as an absorbing Markov chain .... so let's do that here too.

The total number of states in this Markov chain is $17$: there are nine (9) transient states $\mathfrak{T} = \{0, 1, 2, 3, 3.5, 4, 4.5, 5, 5.5\}$ and then the eight final, absorbing states in $\mathfrak{S}.$ Note that there are two possible states where the current score is, say, $3.5$: one in $\mathfrak{T},$ which represents a state where some $\delta S \lt 3.5$ could still be added with the race continuing, and one in $\mathfrak{S},$ where the race is over. If at any time we have to differentiate between the transient and absorbing states that have the same numerical values, we will use $T$ and $A$ superscripts, e.g., $3.5^T \in \mathfrak{T}$ and $3.5^A \in \mathfrak{S}.$ If no superscript is provided, then let's safely assume that we are talking about the final, absorbing state.

OK, now let's get the canonical form of the transition matrix $P = \begin{pmatrix} Q & R \\ 0 & I_8 \end{pmatrix},$ where $Q$ is the $9 \times 9$ matrix representing the transition probabilities entirely within the transient states $\mathfrak{T}$ and $R$ is the $9 \times 8$ matrix representing the transition probabilities that start at transient states in $\mathfrak{T}$ and end in absorbing states $\mathfrak{S}.$ In particular, we have \begin{align*} Q &= \begin{pmatrix} 0 & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \\ R & = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ \frac{1}{4} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix} \end{align*} Based on the matrix $Q,$ we can compute the fundamental matrix $$N = (I_9 - Q)^{-1} = \begin{pmatrix} 1 & \frac{1}{4} & \frac{1}{16} & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} & \frac{49}{1024} & \frac{11}{64} \\ 0 & 1 & \frac{1}{4} & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} \\ 0 & 0 & 1 & \frac{1}{4} & 0 & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

Using the fundamental matrix, we can get the absorption probabilities as $$B = NR = \begin{pmatrix} \frac{17}{256} & \frac{1}{8} & \frac{99}{1024} & \frac{9}{32} & \frac{147}{4096} & \frac{33}{256} & \frac{321}{4096} & \frac{3}{16} \\ \frac{1}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{99}{1024} & \frac{9}{32} & \frac{49}{1024} & \frac{11}{64} \\ \frac{1}{16} & 0 & \frac{3}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{33}{256} & \frac{3}{8} \\ \frac{1}{4} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} \\ 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix},$$ where $B(t,s)$ is the probability of starting at transient state $t$ and eventually absorbing into the state $s \in \mathfrak{S}.$ For the purpose of the Classic problem, we are most interested in using the first row of $B,$ that is $B_{0,s},$ which is the probability of starting with a score of $0$ and ending at score $s \in \mathfrak{S}$. More specifically, the expected score of running at a constant $10$-minute-mile pace from $5$:$55$ pm until $7$ pm is \begin{align*}E(0) = \mathbb{E}[S \mid S_0 = 0] &= \sum_{s \in \mathfrak{S}} s B(0,s)\\ &= 3 \cdot \frac{17}{256} + 3.5 \cdot \frac{1}{8} + 4 \cdot \frac{99}{1024} + 4.5 \cdot \frac{9}{32} + 5 \cdot \frac{147}{4096} \\ &\quad\quad\quad+ 5.5 \cdot \frac{33}{256} + 6 \cdot \frac{321}{4096} + 6.5 \cdot \frac{3}{16} \\ &= \frac{19933}{4096} = 4.866455078125.\end{align*}
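Because every loop strictly increases the score, $Q$ is nilpotent ($Q^9 = 0$), so the fundamental matrix is just the finite sum $N = I + Q + \cdots + Q^8$ and everything can be done in exact rational arithmetic. Here is a minimal sketch that builds $Q$ and $R$ directly from the dynamics rather than transcribing the matrices:

```python
from fractions import Fraction as F

# T: transient scores, S: absorbing (final) scores, as in the post.
T = [F(0), F(1), F(2), F(3), F(7, 2), F(4), F(9, 2), F(5), F(11, 2)]
S = [F(3), F(7, 2), F(4), F(9, 2), F(5), F(11, 2), F(6), F(13, 2)]
loops = [F(1), F(3), F(7, 2), F(9, 2)]

Q = [[F(0)] * 9 for _ in range(9)]
R = [[F(0)] * 8 for _ in range(9)]
for i, t in enumerate(T):
    for d in loops:
        if t + d > F(13, 2):          # overrun: frozen at score t
            R[i][S.index(t)] += F(1, 4)
        elif t + d in T:              # still time for another loop
            Q[i][T.index(t + d)] += F(1, 4)
        else:                         # race ends exactly at score t + d
            R[i][S.index(t + d)] += F(1, 4)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Q is strictly upper triangular (scores only increase), so Q^9 = 0 and
# N = (I - Q)^{-1} is the finite sum I + Q + ... + Q^8.
N = [[F(i == j) for j in range(9)] for i in range(9)]
P = [[F(i == j) for j in range(9)] for i in range(9)]
for _ in range(8):
    P = matmul(P, Q)
    N = [[N[i][j] + P[i][j] for j in range(9)] for i in range(9)]

B = matmul(N, R)
E0 = sum(s * b for s, b in zip(S, B[0]))
print(E0)  # 19933/4096
```

The first row of $B$ gives the absorption probabilities from a score of $0$, and the expected-score check reproduces $\frac{19933}{4096}.$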

Monday, August 18, 2025

Probable Free Money

You have the same $\$55$ worth of vouchers from the casino in the same denominations. But this time, you’re not interested in guaranteed winnings. Instead, you set your betting strategy so that you will have at least a $50$ percent chance of winning $W$ dollars or more.

What is the maximum possible value of W? In other words, what is the greatest amount of money you can have at least a 50 percent chance of winning from the outset, with an appropriate strategy? And what is that betting strategy?

Ok, well to explore this space, let's first work out the distribution of winnings under the optimal strategy from the Classic answer. If we first bet one $\$10$ voucher on each side, then we surely end up with $2$ remaining $\$10$ vouchers and the $\$25$ voucher, with $W = 10.$ At this point the bet is $\$20$ on heads and $\$25$ on tails, so with probability $50\%$ you end up with $2$ $\$10$ vouchers and $W=30$. At this point in this strategy you would bet one $\$10$ voucher on heads and the other on tails, to reach the state of a single $\$10$ voucher and $W=40,$ again with a total probability of $50\%$. At this point you would bet your single $\$10$ voucher over and over again until you lose, which means you can attain $W=40+10n$ with probability $$p_n = \mathbb{P}\{W = 40+10n\} = \frac{1}{2^{n+2}}.$$

On the other hand, there is another possibility when you bet the $2$ $\$10$ vouchers against the single $\$25$ voucher, which is that, with probability $50\%,$ you end up with a single $\$25$ voucher and $W=35.$ In this case, you would just choose to wager your single $\$25$ voucher over and over again until you lose. So we have $W = 35+25n$ with probability $$q_n = \mathbb{P} \{W = 35+25n\} = \frac{1}{2^{n+2}}.$$ Putting these together we get the full distribution $$\mathbb{P} \{ W = w \} = \begin{cases} \frac{1}{4^{n+1}}, &\text{if $w=35+50n;$}\\ \frac{1}{2\cdot 4^{n+1}} + \frac{2}{32^{n+1}} = \frac{4\cdot 8^n +2}{32^{n+1}}, &\text{if $w = 60+50n$;} \\ \frac{1}{4 \cdot 2^n}, &\text{if $w = 40+10n$ and $n \not\equiv 2 \pmod 5$;}\\ 0, &\text{otherwise.}\end{cases}$$ In this case, we see that $$W^* = \max \{ w \in \mathbb{N} \mid \mathbb{P} \{ W \geq w \} = \frac{1}{2} \} = 50.$$ Additionally, we confirm that $\mathbb{P} \{W \geq 35\} = 1,$ as we surmised in the Classic answer.
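A quick check of this distribution in exact arithmetic, truncating each geometric branch at $n = 60$ (the neglected mass is on the order of $2^{-60}$):

```python
from fractions import Fraction as F

# Truncated exact distribution of W under the Classic strategy: after the
# sure first round, the 50/50 second bet splits into a $10-voucher branch
# (W = 40 + 10n) and a $25-voucher branch (W = 35 + 25n).
dist = {}
for n in range(60):
    for w in (40 + 10 * n, 35 + 25 * n):
        dist[w] = dist.get(w, F(0)) + F(1, 2 ** (n + 2))

def tail(w):
    """P{W >= w}, exact up to the truncation of each branch."""
    return sum(p for v, p in dist.items() if v >= w)

print(float(tail(35)), float(tail(50)), float(tail(60)))
```

The tails confirm $\mathbb{P}\{W \geq 35\} = 1,$ $\mathbb{P}\{W \geq 50\} = \frac{1}{2},$ and $\mathbb{P}\{W \geq 60\} = \frac{3}{8} \lt \frac{1}{2},$ so $W^* = 50.$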

However, as expected, we can do way ......... better. For instance, even if you bet all of your vouchers on heads every time, you have a $50\%$ probability of winning at least $\$55,$ since under that greedy strategy your probability distribution is $$\hat{\mathbb{P}} \{ W = w \} = \begin{cases} \frac{1}{2^{n+1}}, &\text{if $w = 55n$ for some $n \geq 0$;} \\ 0, &\text{otherwise.}\end{cases}$$

However, let's combine the greediness of throwing everything on one side and hoping for the best with the optimal strategy for guaranteeing as high a minimum as possible. In this case, you bet all $\$55$ on heads, let's say, in the first round. If you lose, well, that sucks. But if you win then you will be at the state $(3,1,55)$ with probability $50\%.$ Again, since we are looking to maximize the value of $\max \{ w \in \mathbb{N} \mid \mathbb{P}^* \{ W \geq w \} \geq \frac{1}{2} \}$ and we already spent $50\%$ of the probability mass on the first wager, we see that our problem has now transformed into finding the maximum guaranteed winnings starting from the state $(3,1,55),$ since guaranteed winnings from this point would still have a greater than or equal to $50\%$ overall probability from $(3,1,0).$ Thankfully, we've already solved this problem in the Classic Fiddler, and we see that the greatest amount of money you can have at least a $50$ percent chance of winning from the outset is $$V(3,1,55) = 55+35 = \$90,$$ which employs the strategy of first betting everything on one side, say heads, and then, if you win that first bet, employing the strategy from the Classic Fiddler.

Sunday, August 17, 2025

Guaranteed Free Money

A casino offers you $\$55$ worth of “free play vouchers.” You specifically receive three $\$10$ vouchers and one $\$25$ voucher.

You can play any or all vouchers on either side of an even-money game (think red vs. black in roulette, without those pesky green pockets) as many times as you want (or can). You keep the vouchers wagered on any winning bet and get a corresponding cash amount equal to the vouchers for the win. But you lose the vouchers wagered on any losing bet, with no cash award. Vouchers cannot be split into smaller amounts.

What is the guaranteed minimum amount of money you can surely win, no matter how bad your luck? And what betting strategy always gets you at least that amount?

Let $S = (S_1, S_2, W)$ be the state of the system where you currently have $S_1$ $\$10$ vouchers, $S_2$ $\$25$ vouchers and $W$ winnings in cash. Further define $V(S) = V(S_1,S_2,W)$ to be the guaranteed minimum amount of money you can surely win if you start with state $S = (S_1,S_2,W).$

We can define $V(1,0,W) = V(0,1,W) = W,$ since when you only have one voucher left, which cannot be split, the fair game could go against your bet and you would end up with no more money than you started with.

Let's now consider $V(1,1,W).$ If you were to bet one $\$10$ voucher on, say, heads and one $\$25$ voucher on tails, then with a $50\%$ probability you would win $\$10$ and move to the state $S^\prime = (1,0,W+10)$ and with $50\%$ probability you would win $\$25$ and move to state $\tilde{S} = (0,1,W+25)$. Since $V$ measures the guaranteed minimum winnings we have $$\tilde{V}(1,1,W) = \min \{ V(1,0,W+10), V(0,1,W+25) \} = \min \{ W + 10, W + 25 \} = W+10.$$ If we were to pick another strategy, say only wager one $\$10$ voucher on heads, then we would get $$\bar{V}(1,1,W) = \min \{ V(1,1,W+10), V(0,1,W) \} = W,$$ which is sub-optimal with respect to the strategy of betting, say, $\$10$ on heads and $\$25$ on tails. So we see that the optimal strategy gives $$V(1,1,W) = W+10.$$

For $V(2,0,W),$ the optimal guaranteed strategy is to bet one voucher on heads and another on tails, since this gives $\tilde{V}(2,0,W) = V(1,0,W+10) = W+10,$ which is better than if you only wager on one side and get $\hat{V}(2,0,W) = W.$ So we get $$V(2,0,W) = \max \{ \tilde{V}(2,0,W), \hat{V}(2,0,W) \} = W+10.$$ Similarly, we get that $$V(3,0,W) = W+20,$$ by successively wagering one voucher on heads and another on tails, twice.

For $V(2,1,W)$, an optimal guaranteed strategy is to bet, say, two $\$10$ vouchers on heads and the $\$25$ voucher on tails. This gives $$\tilde{V}(2,1,W) = \min \{ V(2,0,W+20), V(0,1,W+25) \} = \min \{ W+30, W+25 \}= W+25.$$ This is better than first betting one $\$10$ on heads and another on tails, and then $$\bar{V}(2,1,W) = V(1,1,W+10) = W+20.$$ Both of these are better than betting only one $\$10$ voucher on heads and the $\$25$ voucher on tails which gives $$\hat{V}(2,1,W) = \min \{ V(2,0,W+10), V(1,1,W+25) \} = \min \{ W +20, W+25 \} = W + 20.$$ So we have $$V(2,1,W) = \max \{ \tilde{V}(2,1,W), \hat{V}(2,1,W), \bar{V}(2,1,W) \} = W+25.$$

Finally, for $V(3,1,W),$ we have an optimal guaranteed strategy of betting one $\$10$ voucher on heads and another on tails, which gets us $$\tilde{V}(3,1,W) = V(2,1,W+10) = W+35.$$ Another strategy involves betting two $\$10$ vouchers on heads and the $\$25$ voucher on tails, which equivalently gives $$\bar{V}(3,1,W) = \min \{ V(3,0,W+20), V(1,1,W+25) \} = W+35.$$ Sub-optimal strategies include bets on only one side, or betting only one $\$10$ voucher on heads and the $\$25$ voucher on tails, which gives $$\hat{V}(3,1,W) = \min \{ V(3,0,W+10), V(2,1,W+25)\} = W+30.$$ So we have $$V(3,1,W) = \max \{ \tilde{V}(3,1,W), \bar{V}(3,1,W), \hat{V}(3,1,W) \} = W+35.$$ Therefore, the guaranteed minimum amount of money you can surely win is $V(3,1,0) = \$35,$ which you can get by first betting one $\$10$ voucher on each side of the game, then in the second round betting the two remaining $\$10$ vouchers on one side of the game and the $\$25$ voucher on the other.
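The case-by-case analysis above can be automated with a memoized recursion over the remaining vouchers. This is a sketch under the adversarial reading of "guaranteed" (the coin always does the worst possible thing), using the fact that cash is additive, so $V(a,b,W) = W + v(a,b)$:

```python
from functools import lru_cache

# Guaranteed winnings v(a, b) holding a $10 vouchers and b $25 vouchers.
@lru_cache(maxsize=None)
def v(a, b):
    best = 0  # you can always just stop betting
    # h10/h25 wagered on heads, t10/t25 on tails; the rest sit out.
    for h10 in range(a + 1):
        for t10 in range(a - h10 + 1):
            for h25 in range(b + 1):
                for t25 in range(b - h25 + 1):
                    if h10 + h25 == 0 or t10 + t25 == 0:
                        continue  # one-sided bets never help the guarantee
                    # Winning side keeps its vouchers and earns matching cash;
                    # the losing side's vouchers are gone.
                    heads = 10 * h10 + 25 * h25 + v(a - t10, b - t25)
                    tails = 10 * t10 + 25 * t25 + v(a - h10, b - h25)
                    best = max(best, min(heads, tails))
    return best

print(v(3, 1))  # 35
```

This reproduces $v(1,1) = 10,$ $v(2,1) = 25,$ and $v(3,1) = 35,$ matching the hand derivation.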

Monday, August 11, 2025

Coldplay Canoodling - Extra Credit

Now, everyone at the concert spends at least some time canoodling. In particular, each member of a couple wants to spend some fraction of the time canoodling, where this fraction is randomly and uniformly selected between $0$ and $1$. This value is chosen independently for the two members of each couple, and the actual time spent canoodling is the product of these values. For example, if you want to canoodle during half the concert and your partner wants to canoodle during a third of the concert, you will actually canoodle during a sixth of the concert.

Meanwhile, the camera operators love to show canoodling couples. So instead of randomly picking couples to show on the jumbotron, they randomly pick from among the currently canoodling couples. (The time shown on the jumbotron is very short, so a couple’s probability of being selected is proportional to how much time they spend canoodling.)

Looking around the concert, you notice that the kinds of couples who most frequently appear on the jumbotron aren’t constantly canoodling, since there are very few such couples. Indeed, the couples who most frequently appear on the jumbotron spend a particular fraction $C$ of the concert canoodling. What is the value of $C$?

In this Extra Credit problem, we want to find the mode of the following distribution $$f_S(C) = \mathbb{P} \{ \text{shown canoodling}, \text{canoodling ratio} = C\} = k C \, \mathbb{P} \{ \text{canoodling ratio} = C \},$$ where the final equation comes from the assertion that a couple's probability of being selected is proportional to how much time they spend canoodling.

Let $X, Y \sim U(0,1)$ represent the canoodling propensities of the two members of the couple, then \begin{align*} \mathbb{P} \{ \text{canoodling ratio} = C \} &= \mathbb{P} \{ X \cdot Y = C \} \\ &= \frac{d}{dC} \mathbb{P} \{ X \cdot Y \leq C \} \\ &= \frac{d}{dC} \left( \int_0^1 \int_0^{\min \{ 1, C / x \}} \,dy \, dx \right) \\ &= \frac{d}{dC} \left( \int_0^1 \min \{ 1, \frac{C}{x} \} \,dx \right) \\ &= \frac{d}{dC} \left( \int_0^C 1\,dx + \int_C^1 \frac{C}{x} dx \right) \\ &= \frac{d}{dC} \left( C - C\ln C \right) \\ &= - \ln C.\end{align*} We already have $f_S(C) = - k C \ln C,$ and though it is irrelevant for answering the question, out of an abundance of rigor (??), let's find $k.$ From the law of total probability we get $$1 = \int_0^1 f_S(C) \,dC = k \int_0^1 (- C \ln C) \,dC = k \left( \left[ -\frac{C^2}{2} \ln C \right]_{C=0}^{C=1} + \int_0^1 \frac{C^2}{2} \,\frac{dC}{C} \right) = \frac{k}{4},$$ so $k = 4.$
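The intermediate CDF $\mathbb{P}\{XY \leq C\} = C - C \ln C$ is easy to spot-check by simulation; the seed and sample size below are arbitrary choices.

```python
import math
import random

# Monte Carlo spot-check of the CDF of the product of two independent
# uniforms: P{XY <= C} = C - C ln C.
rng = random.Random(7)
n = 200_000
results = []
for c in (0.1, 0.3, 0.7):
    emp = sum(rng.random() * rng.random() <= c for _ in range(n)) / n
    results.append((c, emp))
    print(c, round(emp, 3), round(c - c * math.log(c), 3))
```

Each empirical frequency should match the closed form to within a few tenths of a percent at this sample size.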

So the distribution is $f_S(C) = -4 C \ln C$ and we want to find the mode of this distribution. Since $$f^\prime_S(C) = -4 \ln C -4 = -4(1 + \ln C)$$ has only one zero in $[0,1]$ and $f_S(0) = f_S(1) = 0,$ we see that the most frequently appearing canoodling ratio on the jumbotron is $$C^* = \frac{1}{e} \approx 0.367879441171\dots,$$ which gives $$f_S(C) \leq f_S(C^*) = \frac{4}{e}, \,\forall C \in [0,1].$$
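As a numerical sanity check on both the normalization $k = 4$ and the mode, here is a small grid computation (the grid resolution is an arbitrary choice):

```python
import math

# The size-biased density f_S(C) = -4 C ln C on (0, 1): its Riemann sum
# should be ~1 (normalization) and its argmax should be ~1/e (the mode).
def f(c):
    return -4 * c * math.log(c)

grid = [i / 100_000 for i in range(1, 100_000)]
integral = sum(f(c) for c in grid) / 100_000
mode = max(grid, key=f)
print(round(integral, 4), round(mode, 4))  # → 1.0 0.3679
```

Both checks agree with $k = 4$ and $C^* = 1/e$ to grid precision.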