Monday, August 25, 2025

Sundown Trail Race with a Mulligan

Now let’s add one more wrinkle. At some point during the race, if you’re unhappy with the loop you’ve just been randomly assigned, you’re granted a “mulligan,” allowing you to get another random assignment. (Note that there’s a $25$ percent chance you’ll be assigned the same loop again.) You don’t have to use your mulligan, but you can’t use it more than once.

As before, the time is $5$:$55$ p.m. You have just completed a loop, and you haven’t used your mulligan yet. With an optimal strategy (i.e., using the mulligan at the right moment, if at all), on average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

As we saw in the Classic post, let's use the absorption probabilities matrix $B = \left(B(t,s)\right)_{t \in \mathfrak{T}, s \in \mathfrak{S}},$ whose entry $B(t,s)$ is the probability of starting at transient state $t$ and ending up at the final, absorbing state $s.$ This gives us the expected score when starting at transient state $t$ with no mulligan remaining: $$E(t) = \mathbb{E} [ S \mid S_0 = t ] = \sum_{s \in \mathfrak{S}} s B(t,s).$$ In particular, we have the vector $$E = \left(E(t)\right)_{t \in \mathfrak{T}} = \begin{pmatrix} \frac{19933}{4096} \\ \frac{5245}{1024} \\ \frac{1437}{256} \\ \frac{317}{64} \\ \frac{293}{64} \\ \frac{69}{16} \\ \frac{77}{16} \\ \frac{21}{4} \\ \frac{23}{4} \end{pmatrix}.$$

We can only use a single mulligan, but it is enough to change the overall expected score. Let's denote by $$\tilde{E}(t) = \tilde{\mathbb{E}}[ S \mid S_0 = t ]$$ the expected score when starting at state $t$ with the mulligan still in hand. In particular, let's say that we are currently finishing up our lap to attain the transient score of $t \in \mathfrak{T},$ we haven't yet used our mulligan, and we must quickly decide whether to use it on the newly assigned lap. Suppose the randomly assigned lap would move us from $t$ to $t^\prime \in \mathfrak{T}.$ If we use our mulligan on this assignment, then we receive a fresh random assignment with no mulligan remaining, for an expected score of $E(t);$ whereas if we don't use the mulligan, then we end up with an expected score of $\tilde{E}(t^\prime),$ since we would still have our mulligan in hand. So if we are assigned a lap moving us from $t$ to $t^\prime \in \mathfrak{T},$ the optimal choice gives us $\max \{ E(t), \tilde{E}(t^\prime) \}.$ Similarly, if the assigned lap would move us from $t$ to some $s \in \mathfrak{S},$ then using the mulligan yields an expected score of $E(t)$ versus a final score of $s$ if we don't. So in that case the optimal choice gives us $\max \{E(t), s \}.$ Putting this all together with the corresponding transition probabilities from the $Q$ and $R$ matrices given in the Classic problem, we get the recursive formula $$\tilde{E}(t) = \sum_{t^\prime \in \mathfrak{T}} Q(t, t^\prime) \max \{ E(t), \tilde{E}(t^\prime) \} + \sum_{s \in \mathfrak{S}} R(t,s) \max \{ E(t), s \}.$$

So let's go recursing .... \begin{align*} \tilde{E}(5.5) &= \frac{3}{4} \max\{ E(5.5), 5.5 \} + \frac{1}{4} \max \{ E(5.5), 6.5 \} = \frac{3}{4} E(5.5) + \frac{1}{4} 6.5 = \frac{95}{16} \\ \tilde{E}(5) &= \frac{3}{4} \max \{ E(5), 5 \} + \frac{1}{4} \max \{ E(5), 6 \} = \frac{3}{4} E(5) + \frac{1}{4} 6 = \frac{87}{16} \\ \tilde{E}(4.5) &= \frac{1}{4} \max \{ E(4.5), \tilde{E}(5.5) \} + \frac{3}{4} \max \{ E(4.5), 4.5 \} = \frac{1}{4} \tilde{E}(5.5) + \frac{3}{4} E(4.5) = \frac{163}{32} \\ \tilde{E}(4) &= \frac{1}{4} \max \{ E(4), \tilde{E}(5) \} + \frac{3}{4} \max \{ E(4), 4 \} = \frac{1}{4} \tilde{E}(5) + \frac{3}{4} E(4) = \frac{147}{32} \\ \tilde{E}(3.5) &= \frac{1}{4} \max \{ E(3.5), \tilde{E}(4.5) \} + \frac{1}{2} \max \{ E(3.5), 3.5 \} + \frac{1}{4} \max \{ E(3.5), 6.5 \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(4.5) + \frac{1}{2} E(3.5) + \frac{1}{4} 6.5 = \frac{83}{16} \\ \tilde{E}(3) &= \frac{1}{4} \max \{ E(3), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(3), 3 \} + \frac{1}{4} \max \{ E(3), 6 \} + \frac{1}{4} \max \{ E(3), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(3) + \frac{1}{4} E(3) + \frac{1}{4} 6 + \frac{1}{4} 6.5 = \frac{717}{128}\\ \tilde{E}(2) &= \frac{1}{4} \max \{ E(2), \tilde{E}(3) \} + \frac{1}{4} \max\{ E(2), \tilde{E}(5) \} + \frac{1}{4} \max \{ E(2), \tilde{E}(5.5) \} + \frac{1}{4} \max \{ E(2), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(2) + \frac{1}{4} E(2) + \frac{1}{4} \tilde{E}(5.5) + \frac{1}{4} 6.5 = \frac{3029}{512}\\ \tilde{E}(1) &= \frac{1}{4} \max \{ E(1), \tilde{E}(2) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4.5) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(5.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(2) + \frac{1}{4} E(1) + \frac{1}{4} E(1) + \frac{1}{4} \tilde{E}(5.5) = \frac{5657}{1024} \end{align*} (Note that in computing $\tilde{E}(3)$ it is actually optimal to use the mulligan when assigned the lap that would move you to $4,$ since $E(3) = \frac{317}{64} \gt \tilde{E}(4) = \frac{147}{32};$ similarly, in computing $\tilde{E}(2)$ we have $E(2) = \frac{1437}{256} \gt \tilde{E}(3) = \frac{1434}{256},$ though only just barely.)

This leaves us with one more step to show that the total expected score accrued between $5$:$55$ p.m. and $7$ p.m., going at a constant $10$-minute-per-mile pace with one single mulligan, is \begin{align*} \tilde{E}(0) &= \frac{1}{4} \max \{ E(0), \tilde{E}(1) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3.5) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(4.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(1) + \frac{1}{4} \tilde{E}(3) + \frac{1}{4} \tilde{E}(3.5) + \frac{1}{4} \tilde{E}(4.5)\\ &\quad\quad = \frac{21921}{4096} = 5.351806640625 \end{align*}
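The whole recursion is easy to verify with exact rational arithmetic. Here is a quick sketch in Python (the function names and memoization scheme are mine):

```python
from fractions import Fraction

loops = [Fraction(1), Fraction(3), Fraction(7, 2), Fraction(9, 2)]
cap = Fraction(13, 2)   # 65 minutes at 10 minutes per mile = 6.5 miles of score budget

_E, _Et = {}, {}

def E(s):
    """Expected final score from current score s with no mulligan left."""
    if s not in _E:
        _E[s] = sum((E(s + d) if s + d <= cap else s) for d in loops) / 4
    return _E[s]

def Et(s):
    """Expected final score from s with the mulligan in hand: after seeing
    the assigned lap, either keep it or redo (a redo is worth E(s))."""
    if s not in _Et:
        _Et[s] = sum(max(Et(s + d) if s + d <= cap else s, E(s)) for d in loops) / 4
    return _Et[s]

print(E(Fraction(0)), Et(Fraction(0)))  # 19933/4096 21921/4096
```

Scores at or past $6$ are handled automatically here: from such a score every lap overshoots, so both functions return the score itself, exactly as the absorbing states do.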

Sundown Trail Race

You’re participating in a trail run that ends at sundown at $7$ p.m. The run consists of four loops: $1$ mile, $3$ miles, $3.5$ miles, and $4.5$ miles. After completing any given loop, you are randomly assigned another loop to run—this next loop could be the same as the previous one you just ran, or it could be one of the other three. Being assigned your next loop doesn’t take a meaningful amount of time; assume all your time is spent running.

Your “score” in the race is the total distance you run among all completed loops. If you’re still out on a loop at $7$ p.m., any completed distance on that loop does not count toward your score!

It is now $5$:$55$ p.m. and you have just completed a loop. So far, you’ve been running $10$-minute miles the whole way. You’ll maintain that pace until $7$ p.m. On average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

OK, let's agree on how to parametrize this. There are $65$ minutes left in the race and you are running at a clean $10$ minutes per mile, so let's just divide everything by the $10$-minutes-per-mile pace and deal with things entirely in miles, that is, in score space. In this space, the problem looks like this: your score is currently $S=0$, and you will be assigned a random score increment $\delta S$ which is uniformly distributed over the set $\Delta S = \{1 , 3, 3.5, 4.5 \}.$ If $S + \delta S \leq 6.5,$ then your new score is $S+\delta S.$ Otherwise, if $S + \delta S \gt 6.5,$ then your final score will be $S.$ For clarity, we see that the possible final scores are $\mathfrak{S} = \{ 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5\}.$
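A quick Monte Carlo sketch of these score-space dynamics (the simulation code and seed are mine) should land near the exact answer derived below:

```python
import random

def race_score(rng):
    """Run one race: add uniform loop lengths until the next loop overshoots 6.5."""
    s = 0.0
    while True:
        d = rng.choice([1, 3, 3.5, 4.5])
        if s + d > 6.5:
            return s        # still out on a loop at 7 p.m., so it doesn't count
        s += d

rng = random.Random(2025)
n = 200_000
avg = sum(race_score(rng) for _ in range(n)) / n
print(round(avg, 2))  # ≈ 4.87
```

All the loop lengths are exact in binary floating point, so the comparison against $6.5$ is safe here.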

With the transformation complete, we could go through the combinatorial enumeration of all possible paths to a given final score of, say, $S=3.5$. In that particular case, we see that in order to end with a score of $3.5$ you must first be assigned $3.5$ on your first lap and then be assigned either $3.5$ or $4.5$ on your second lap, which has a total probability of $\frac{1}{4} \times \frac{1}{2} = \frac{1}{8}.$ This can certainly be done for each of the other elements of $\mathfrak{S},$ but for the Extra Credit answer we will want to take advantage of modeling this as an absorbing Markov chain .... so let's do that here too.

The total number of states in this Markov chain is $17$: there are nine transient states $\mathfrak{T} = \{0, 1, 2, 3, 3.5, 4, 4.5, 5, 5.5\}$ and then the eight final, absorbing states in $\mathfrak{S}.$ Note that there are two possible states where the current score is, say, $3.5$: one in $\mathfrak{T},$ representing a situation where some small enough $\delta S$ could still be added with the race continuing, and one in $\mathfrak{S},$ where the race is over. If at any time we have to differentiate between transient and absorbing states that share the same numerical value, we will use $T$ and $A$ superscripts, e.g., $3.5^T \in \mathfrak{T}$ and $3.5^A \in \mathfrak{S}.$ If no superscript is provided, then let's safely assume that we are talking about the final, absorbing state.

OK, now let's get the canonical form of the transition matrix $P = \begin{pmatrix} Q & R \\ 0 & I_8 \end{pmatrix},$ where $Q$ is the $9 \times 9$ matrix representing the transition probabilities entirely within the transient states $\mathfrak{T}$ and $R$ is the $9 \times 8$ matrix representing the transition probabilities that start at transient states in $\mathfrak{T}$ and end in absorbing states $\mathfrak{S}.$ In particular, we have \begin{align*} Q &= \begin{pmatrix} 0 & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \\ R & = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ \frac{1}{4} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix} \end{align*} Based on the matrix $Q,$ we can compute the fundamental matrix $$N = (I_9 - Q)^{-1} = \begin{pmatrix} 1 & \frac{1}{4} & \frac{1}{16} & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} & \frac{49}{1024} & \frac{11}{64} \\ 0 & 1 & \frac{1}{4} & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} \\ 0 & 0 & 1 & \frac{1}{4} & 0 & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$ Since the score strictly increases each lap, no transient state can be visited twice, so the entry $N(t,t^\prime)$ is simply the probability of ever reaching $t^\prime$ when starting from $t.$

Using the fundamental matrix, we can get the absorption probabilities as $$B = NR = \begin{pmatrix} \frac{17}{256} & \frac{1}{8} & \frac{99}{1024} & \frac{9}{32} & \frac{147}{4096} & \frac{33}{256} & \frac{321}{4096} & \frac{3}{16} \\ \frac{1}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{99}{1024} & \frac{9}{32} & \frac{49}{1024} & \frac{11}{64} \\ \frac{1}{16} & 0 & \frac{3}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{33}{256} & \frac{3}{8} \\ \frac{1}{4} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} \\ 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix},$$ where $B(t,s)$ is the probability of starting at transient state $t$ and eventually absorbing into the state $s \in \mathfrak{S}.$ For the purpose of the Classic problem, we are most interested in the first row of $B,$ that is $B(0,s),$ which is the probability of starting with a score of $0$ and ending at score $s \in \mathfrak{S}$. More specifically, the expected score when running at a constant $10$-minute-mile pace from $5$:$55$ p.m. until $7$ p.m. is \begin{align*}E(0) = \mathbb{E}[S \mid S_0 = 0] &= \sum_{s \in \mathfrak{S}} s B(0,s)\\ &= 3 \cdot \frac{17}{256} + 3.5 \cdot \frac{1}{8} + 4 \cdot \frac{99}{1024} + 4.5 \cdot \frac{9}{32} + 5 \cdot \frac{147}{4096} \\ &\quad\quad\quad+ 5.5 \cdot \frac{33}{256} + 6 \cdot \frac{321}{4096} + 6.5 \cdot \frac{3}{16} \\ &= \frac{19933}{4096} = 4.866455078125.\end{align*}
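All of this linear algebra can be reproduced with exact rational arithmetic. Here's a sketch in Python (the state bookkeeping and the nilpotent-series computation of $N$ are mine):

```python
from fractions import Fraction as F

# Transient states and absorbing scores, in the order used in the post.
T = [F(0), F(1), F(2), F(3), F(7, 2), F(4), F(9, 2), F(5), F(11, 2)]
A = [F(3), F(7, 2), F(4), F(9, 2), F(5), F(11, 2), F(6), F(13, 2)]
loops = [F(1), F(3), F(7, 2), F(9, 2)]
cap = F(13, 2)

n, m = len(T), len(A)
Q = [[F(0)] * n for _ in range(n)]
R = [[F(0)] * m for _ in range(n)]
for i, t in enumerate(T):
    for d in loops:
        s = t + d
        if s in T:                       # race continues
            Q[i][T.index(s)] += F(1, 4)
        elif s <= cap:                   # finish exactly in time: absorb at s
            R[i][A.index(s)] += F(1, 4)
        else:                            # overshoot: absorb at the current score t
            R[i][A.index(t)] += F(1, 4)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Scores strictly increase each lap, so Q is nilpotent and N = I + Q + Q^2 + ...
N = [[F(int(i == j)) for j in range(n)] for i in range(n)]
power = [row[:] for row in N]            # running power of Q, starting at I
while any(any(row) for row in power):
    power = matmul(power, Q)
    for i in range(n):
        for j in range(n):
            N[i][j] += power[i][j]

B = matmul(N, R)
E0 = sum(s * B[0][j] for j, s in enumerate(A))
print(E0)         # 19933/4096
print(float(E0))  # 4.866455078125
```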

Monday, August 18, 2025

Probable Free Money

You have the same $\$55$ worth of vouchers from the casino in the same denominations. But this time, you’re not interested in guaranteed winnings. Instead, you set your betting strategy so that you will have at least a $50$ percent chance of winning $W$ dollars or more.

What is the maximum possible value of $W$? In other words, what is the greatest amount of money you can have at least a $50$ percent chance of winning from the outset, with an appropriate strategy? And what is that betting strategy?

OK, well, to explore this space, let's first work out the distribution of winnings under the optimal strategy from the Classic answer. We first bet one $\$10$ voucher on each side, so we surely end up with two remaining $\$10$ vouchers and the $\$25$ voucher, with $W = 10.$ At this point the bet is $\$20$ (the two $\$10$ vouchers) on heads and the $\$25$ voucher on tails, so with probability $50\%$ you end up with the two $\$10$ vouchers and $W=30$. At this point in this strategy you would bet one $\$10$ voucher on heads and the other on tails, to reach the state of a single $\$10$ voucher and $W=40,$ again with a total probability of $50\%$. From there you would bet your single $\$10$ voucher over and over again until you lose, which means you can attain $W=40+10n$ with probability $$p_n = \mathbb{P}\{W = 40+10n\} = \frac{1}{2^{n+2}}.$$

On the other hand, there is another possibility when you bet the two $\$10$ vouchers against the single $\$25$ voucher: with probability $50\%$ you end up with a single $\$25$ voucher and $W=35.$ In this case, you would just wager your single $\$25$ voucher over and over again until you lose, so we have $W = 35+25n$ with probability $$q_n = \mathbb{P} \{W = 35+25n\} = \frac{1}{2^{n+2}}.$$ Putting these all together we get the full distribution $$\mathbb{P} \{ W = w \} = \begin{cases} \frac{1}{4^{n+1}}, &\text{if $w=35+50n;$}\\ \frac{1}{2\cdot 4^{n+1}} + \frac{2}{32^{n+1}} = \frac{4\cdot 8^n +2}{32^{n+1}}, &\text{if $w = 60+50n$;} \\ \frac{1}{4 \cdot 2^n}, &\text{if $w = 40+10n$ and $n \not\equiv 2 \pmod 5$;}\\ 0, &\text{otherwise.}\end{cases}$$ In this case, we see that $$W^* = \max \left\{ w \in \mathbb{N} \,\middle|\, \mathbb{P} \{ W \geq w \} \geq \frac{1}{2} \right\} = 50.$$ Additionally, we confirm that $\mathbb{P} \{W \geq 35\} = 1,$ as we surmised in the Classic answer.
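The tail probabilities of this distribution are easy to check by enumerating the two branches directly (the enumeration and truncation bound are mine):

```python
from fractions import Fraction as F

# Classic-strategy outcomes: branch A gives W = 40 + 10n, branch B gives
# W = 35 + 25n, each with probability 1/2^(n+2); overlaps accumulate.
dist = {}
for n in range(60):
    for w in (40 + 10 * n, 35 + 25 * n):
        dist[w] = dist.get(w, F(0)) + F(1, 2 ** (n + 2))

def tail(w):
    """P{W >= w}; exact for small w, since every outcome below w is enumerated."""
    return 1 - sum(p for v, p in dist.items() if v < w)

print(tail(35), tail(50), tail(51))  # 1 1/2 3/8
```

Since $\mathbb{P}\{W \geq 50\} = \frac{1}{2}$ exactly while $\mathbb{P}\{W \geq 51\} = \frac{3}{8},$ this confirms $W^* = 50$ for the Classic strategy.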

However, as expected, we can do way better. For instance, even if you bet all of your vouchers on heads every time, you have a $50\%$ probability of winning at least $\$55,$ since under that greedy strategy your probability distribution is $$\hat{\mathbb{P}} \{ W = w \} = \begin{cases} \frac{1}{2^{n+1}}, &\text{if $w = 55n$ for some $n \in \mathbb{N}$;} \\ 0, &\text{otherwise,}\end{cases}$$ corresponding to winning your first $n$ bets and then losing the next.

However, let's combine the greediness of throwing everything on one side and hoping for the best with the optimal strategy for guaranteeing as high a minimum as possible. In this case, you bet all $\$55$ worth of vouchers on heads, let's say, in the first round. If you lose, well, that sucks. But if you win, then you will be at the state $(3,1,55)$ with probability $50\%.$ Since we are looking to maximize the value of $\max \{ w \in \mathbb{N} \mid \mathbb{P}^* \{ W \geq w \} \geq \frac{1}{2} \}$ and we have already spent $50\%$ of the probability mass on the first wager, our problem has now transformed into finding the maximum guaranteed winnings starting from the state $(3,1,55),$ since winnings guaranteed from this point onward still occur with overall probability at least $50\%$ starting from $(3,1,0).$ Thankfully, we've already solved this problem in the Classic Fiddler, and we see that the greatest amount of money you can have at least a $50$ percent chance of winning from the outset is $$V(3,1,55) = 55+35 = \$90,$$ which employs the strategy of first betting everything on one side, say heads, and then, if you win that first bet, employing the strategy from the Classic Fiddler.
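As a sanity check on that $50\%$ claim, here's a quick simulation of the combined strategy (the play-by-play encoding and seed are mine):

```python
import random

def combined_strategy(rng):
    """One play-through: all-in on heads first; if that wins, follow the
    Classic guaranteed-$35 line from state (3, 1, 55). Returns final cash."""
    if rng.random() < 0.5:
        return 0                  # lost the opening all-in bet
    cash = 55                     # won $55; all vouchers survive
    cash += 10                    # $10 on each side always nets $10
    if rng.random() < 0.5:        # two $10s on heads vs the $25 on tails
        cash += 20                # heads: +$20 cash, keep the two $10 vouchers
        cash += 10                # then $10 vs $10 nets another $10
        stake = 10                # left with a single $10 voucher
    else:
        cash += 25                # tails: +$25 cash, keep the $25 voucher
        stake = 25
    while rng.random() < 0.5:     # ride the last voucher until it loses
        cash += stake
    return cash

rng = random.Random(11)
trials = 100_000
share = sum(combined_strategy(rng) >= 90 for _ in range(trials)) / trials
print(round(share, 2))  # ≈ 0.5
```

Conditional on winning the opening bet, the worst cases of the two branches are $55+10+20+10 = 95$ and $55+10+25 = 90,$ so $W \geq 90$ happens exactly when the first coin flip goes your way.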

Sunday, August 17, 2025

Guaranteed Free Money

A casino offers you $\$55$ worth of “free play vouchers.” You specifically receive three $\$10$ vouchers and one $\$25$ voucher.

You can play any or all vouchers on either side of an even-money game (think red vs. black in roulette, without those pesky green pockets) as many times as you want (or can). You keep the vouchers wagered on any winning bet and get a corresponding cash amount equal to the vouchers for the win. But you lose the vouchers wagered on any losing bet, with no cash award. Vouchers cannot be split into smaller amounts.

What is the guaranteed minimum amount of money you can surely win, no matter how bad your luck? And what betting strategy always gets you at least that amount?

Let $S = (S_1, S_2, W)$ be the state of the system where you currently have $S_1$ $\$10$ vouchers, $S_2$ $\$25$ vouchers and $W$ winnings in cash. Further define $V(S) = V(S_1,S_2,W)$ to be the guaranteed minimum amount of money you can surely win if you start with state $S = (S_1,S_2,W).$

We can define $V(1,0,W) = V(0,1,W) = W$: when you have only one voucher left, you cannot split it, so the fair game could always go against your bet and you cannot guarantee ending up with any more money than you currently have.

Let's now consider $V(1,1,W).$ If you were to bet the $\$10$ voucher on, say, heads and the $\$25$ voucher on tails, then with a $50\%$ probability you would win $\$10$ and move to the state $S^\prime = (1,0,W+10)$ and with $50\%$ probability you would win $\$25$ and move to state $\tilde{S} = (0,1,W+25)$. Since $V$ measures the guaranteed minimum winnings we have $$\tilde{V}(1,1,W) = \min \{ V(1,0,W+10), V(0,1,W+25) \} = \min \{ W + 10, W + 25 \} = W+10.$$ If we were to pick another strategy, say only wagering the $\$10$ voucher on heads, then we would get $$\bar{V}(1,1,W) = \min \{ V(1,1,W+10), V(0,1,W) \} = W,$$ which is sub-optimal with respect to the strategy of betting $\$10$ on heads and $\$25$ on tails. So we see that the optimal strategy gives $$V(1,1,W) = W+10.$$

For $V(2,0,W),$ the optimal guaranteed strategy is to bet one voucher on heads and another on tails, since this gives $\tilde{V}(2,0,W) = V(1,0,W+10) = W+10,$ which is better than if you only wager on one side and get $\hat{V}(2,0,W) = W.$ So we get $$V(2,0,W) = \max \{ \tilde{V}(2,0,W), \hat{V}(2,0,W) \} = W+10.$$ Similarly, we get that $$V(3,0,W) = W+20,$$ by successively wagering one voucher on heads and another on tails, twice.

For $V(2,1,W)$, an optimal guaranteed strategy is to bet the two $\$10$ vouchers on heads and the $\$25$ voucher on tails. This gives $$\tilde{V}(2,1,W) = \min \{ V(2,0,W+20), V(0,1,W+25) \} = \min \{ W+30, W+25 \}= W+25.$$ This is better than first betting one $\$10$ voucher on heads and another on tails, which gives $$\bar{V}(2,1,W) = V(1,1,W+10) = W+20,$$ and also better than betting only one $\$10$ voucher on heads and the $\$25$ voucher on tails, which gives $$\hat{V}(2,1,W) = \min \{ V(2,0,W+10), V(1,1,W+25) \} = \min \{ W +20, W+25 \} = W + 20.$$ So we have $$V(2,1,W) = \max \{ \tilde{V}(2,1,W), \hat{V}(2,1,W), \bar{V}(2,1,W) \} = W+25.$$

Finally, for $V(3,1,W),$ an optimal guaranteed strategy is to bet one $\$10$ voucher on heads and another on tails, which gives $$\tilde{V}(3,1,W) = V(2,1,W+10) = W+35.$$ Another strategy is to bet two $\$10$ vouchers on heads and the $\$25$ voucher on tails, which equivalently gives $$\bar{V}(3,1,W) = \min \{ V(3,0,W+20), V(1,1,W+25) \} = W+35.$$ Sub-optimal strategies include betting on only one side, or betting only one $\$10$ voucher on heads and the $\$25$ voucher on tails, which gives $$\hat{V}(3,1,W) = \min \{ V(3,0,W+10), V(2,1,W+25)\} = W+30.$$ So we have $$V(3,1,W) = \max \{ \tilde{V}(3,1,W), \bar{V}(3,1,W), \hat{V}(3,1,W) \} = W+35.$$ Therefore, the guaranteed minimum amount of money you can surely win is $V(3,1,0) = \$35,$ which you can get by first betting one $\$10$ voucher on each side of the game, then in the second round betting the two remaining $\$10$ vouchers on one side of the game and the $\$25$ voucher on the other.
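We only compared a handful of betting splits above, so it's worth brute-forcing every split to confirm nothing better exists. A short dynamic-programming sketch (the function `v` and the dominance argument for one-sided bets are mine): since winnings separate as $V(a,b,W) = W + v(a,b),$ it suffices to compute the guaranteed extra cash $v(a,b)$ from $a$ $\$10$ vouchers and $b$ $\$25$ vouchers.

```python
from functools import lru_cache
from itertools import product

# One-sided bets are dominated (the coin can always go against you), so we
# only consider splits with at least one voucher wagered on each side.
@lru_cache(maxsize=None)
def v(a, b):
    best = 0  # you may always stop betting
    for h, H, t, T in product(range(a + 1), range(b + 1), repeat=2):
        if h + t > a or H + T > b or h + H == 0 or t + T == 0:
            continue
        heads = 10 * h + 25 * H + v(a - t, b - T)  # heads: tails-side vouchers lost
        tails = 10 * t + 25 * T + v(a - h, b - H)  # tails: heads-side vouchers lost
        best = max(best, min(heads, tails))
    return best

print(v(1, 1), v(2, 1), v(3, 1))  # 10 25 35
```

Each branch strictly reduces the voucher count (the losing side's vouchers vanish), so the recursion terminates, and it reproduces every value derived above.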

Monday, August 11, 2025

Coldplay Canoodling - Extra Credit

Now, everyone at the concert spends at least some time canoodling. In particular, each member of a couple wants to spend some fraction of the time canoodling, where this fraction is randomly and uniformly selected between $0$ and $1$. This value is chosen independently for the two members of each couple, and the actual time spent canoodling is the product of these values. For example, if you want to canoodle during half the concert and your partner wants to canoodle during a third of the concert, you will actually canoodle during a sixth of the concert.

Meanwhile, the camera operators love to show canoodling couples. So instead of randomly picking couples to show on the jumbotron, they randomly pick from among the currently canoodling couples. (The time shown on the jumbotron is very short, so a couple’s probability of being selected is proportional to how much time they spend canoodling.)

Looking around the concert, you notice that the kinds of couples who most frequently appear on the jumbotron aren’t constantly canoodling, since there are very few such couples. Indeed, the couples who most frequently appear on the jumbotron spend a particular fraction $C$ of the concert canoodling. What is the value of $C$?

In this Extra Credit problem, we want to find the mode of the following distribution $$f_S(C) = \mathbb{P} \{ \text{shown canoodling}, \text{canoodling ratio} = C\} = k C \, \mathbb{P} \{ \text{canoodling ratio} = C \},$$ where the final equality comes from the assertion that a couple's probability of being selected is proportional to how much time they spend canoodling.

Let $X, Y \sim U(0,1)$ represent the canoodling propensity of the two members of the couple, then \begin{align*} \mathbb{P} \{ \text{canoodling ratio} = C \} &= \mathbb{P} \{ X \cdot Y = C \} \\ &= \frac{d}{dC} \mathbb{P} \{ X \cdot Y \leq C \} \\ &= \frac{d}{dC} \left( \int_0^1 \int_0^{\min \{ 1, C / x \}} \,dy \, dx \right) \\ &= \frac{d}{dC} \left( \int_0^1 \min \{ 1, \frac{C}{x} \} \,dx \right) \\ &= \frac{d}{dC} \left( \int_0^C 1\,dx + \int_C^1 \frac{C}{x} dx \right) \\ &= \frac{d}{dC} \left( C - C\ln C \right) \\ &= - \ln C.\end{align*} We already have $f_S(C) = - k C \ln C,$ and though it is irrelevant for answering the question, out of an abundance of rigor let's find $k.$ From the law of total probability we get $$1 = \int_0^1 f_S(C) \,dC = k \int_0^1 (- C \ln C) \,dC = k \left( \left[ -\frac{C^2}{2} \ln C \right]_{C=0}^{C=1} + \int_0^1 \frac{C}{2} \,dC \right) = \frac{k}{4},$$ so $k = 4.$

So the distribution is $f_S(C) = -4 C \ln C$ and we want to find its mode. Since $$f^\prime_S(C) = -4 \ln C -4 = -4(1 + \ln C)$$ has only one zero in $[0,1]$ and $f_S(0) = f_S(1) = 0,$ we see that the canoodling ratio that most frequently appears on the jumbotron is $$C^* = \frac{1}{e} \approx 0.367879441171\dots,$$ which gives $$f_S(C) \leq f_S(C^*) = \frac{4}{e}, \,\forall C \in [0,1].$$
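Both the normalization and the location of the peak are easy to confirm numerically (the grid check below is mine):

```python
import math

# f(C) = -4 C ln C should integrate to 1 over (0, 1) and peak at C = 1/e
# with height 4/e; check both on a uniform grid of midpoints.
f = lambda c: -4 * c * math.log(c)
n = 10**5
mids = [(i + 0.5) / n for i in range(n)]
c_star = max(mids, key=f)
area = sum(f(c) for c in mids) / n       # midpoint rule
print(abs(c_star - 1 / math.e) < 1e-4, round(area, 4))  # True 1.0
```

Because $f$ is strictly concave ($f'' = -4/C \lt 0$), the grid maximizer must be one of the two midpoints flanking $1/e.$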

Coldplay Canoodling

All the many attendees at a particular Coldplay concert are couples. As the CEO of Astrometrics, Inc., you are in attendance with your romantic partner, who is definitely not the head of HR at Astrometrics, Inc. During the concert, the two of you spend half the time canoodling.

The camera operators love to show people on the jumbotron during the concert, but time is limited and there are many attendees. As a result, the camera operators show just $1$ percent of couples during the concert. Couples are chosen randomly, but never repeat at any given concert.

You and your partner are shy when it comes to public displays of affection. While you don’t mind being shown on the jumbotron, you don’t want to be shown canoodling on the jumbotron. How many Coldplay shows can the two of you expect to attend without having more than a $50$ percent chance of ever being shown canoodling on the jumbotron?

For the Classic problem, we first want to find the probability of not being shown canoodling on screen during any individual concert. Let's say that $p_s = 1\%$ is the probability of being shown on screen during a show and $p_c=50\%$ is the fraction of the show spent canoodling, which is also the probability of canoodling at the moment you happen to be shown. So the probability of not being shown while canoodling at any individual show is $P=1-p_cp_s=99.5\%.$

The probability of not being shown canoodling in any of $N$ consecutive shows is $P^N,$ so the maximum number of shows that can be attended without there being a greater than $50\%$ chance of being shown is \begin{align*}N^* &= \max \{n \in \mathbb{N} \mid P^n \geq \frac{1}{2}\}\\ &= \max \{ n \in \mathbb{N} \mid n \ln P \geq - \ln 2\}\\ &= \left\lfloor -\frac{\ln 2}{\ln P}\right\rfloor = 138\end{align*} Coldplay shows.
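The final floor computation can be checked in one line (variable names mine):

```python
import math

# n * ln(P) >= -ln(2) with P = 1 - 0.01 * 0.5 = 0.995
P = 1 - 0.01 * 0.5
N_star = math.floor(-math.log(2) / math.log(P))
print(N_star)  # 138
```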