Monday, September 29, 2025

Riskier Risk, Now with Wilds!

The full deck of Risk cards also contains two wildcards, which can be used as any of the three types of cards (infantry, cavalry, and artillery) upon trading them in. Thus, the full deck consists of $44$ cards.

You must have at least three cards to have any shot at trading them in. Meanwhile, having five cards guarantees that you have three you can trade in.

If you are randomly dealt cards from a complete deck of $44$ one at a time, how many cards would you need, on average, until you can trade in three? (Your answer should be somewhere between three and five. And no, it’s not four.)

Let $S$ be the random number of cards you have when you can first turn in a set of three cards. Obviously we have $3\leq S\leq 5,$ so let's go about calculating $p_s = \mathbb{P} \{ S= s\}$ for $s \in \{3, 4, 5\},$ from which we can then compute $$\mathbb{E} [S] = \sum_{s=3}^5 sp_s.$$

From the Classic wildless Risk problem, we see that there are $3836$ possible three-card sets that use no wilds. To this we must add $\binom{2}{1} \binom{42}{2} = 1722$ possible three-card sets with one wild card and $\binom{2}{2}\binom{42}{1}=42$ possible three-card sets with two wild cards. In total, there are $5600$ possible sets of three cards, out of a total of $\binom{44}{3}=13244$ possible hands of three Risk cards, so we have $$p_3 = \frac{5600}{13244}=\frac{200}{473}\approx 42.283\dots\%$$

Rather than the messiness of figuring out the exact combinations for $S=4$, let's instead tackle $S=5.$ We see that if you do not get a set on your fourth card then you will automatically get a set by your fifth card, so we only need to figure out what situations leave you without a set after four cards. This can only happen whenever you have two cards of one type and two of a different type (with no wilds, obviously). There are $\binom{14}{2}\binom{14}{2}\binom{3}{2}=24843$ possible hands consisting of exactly two cards of each of two different types. Since there are $\binom{44}{4} = 135751$ possible combinations of four distinct Risk cards, we thus have $$p_5=\frac{24843}{135751}=\frac{3549}{19393}\approx 18.300\dots\%$$

Since we can reason that $p_4=1-p_3-p_5 = \frac{7644}{19393}\approx 39.416\dots\%,$ the expected number of Risk cards needed to turn in a set is $$\mathbb{E}[S]= 3 \cdot \frac{200}{473} + 4 \cdot \frac{7644}{19393} + 5 \cdot \frac{3549}{19393} = \frac{72921}{19393} \approx 3.760171\dots$$
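As a sanity check, here is a quick Monte Carlo sketch: it shuffles the $44$-card deck and deals until a tradeable triple appears. The card labels and helper names are just my own illustrative choices; the sample mean should hover around $3.7602.$

```python
import random
from statistics import mean

# 14 infantry, 14 cavalry, 14 artillery, plus 2 wilds (labels are arbitrary)
DECK = ['I'] * 14 + ['C'] * 14 + ['A'] * 14 + ['W'] * 2

def can_trade(hand):
    """True if some three cards are all one kind or all different kinds, wilds counting as any kind."""
    wilds = hand.count('W')
    counts = [hand.count(kind) for kind in 'ICA']
    three_of_a_kind = max(counts) + wilds >= 3
    one_of_each = sum(c > 0 for c in counts) + wilds >= 3
    return three_of_a_kind or one_of_each

def cards_until_set(rng):
    deck = DECK[:]
    rng.shuffle(deck)
    for n in range(3, 6):            # a set is guaranteed by the fifth card
        if can_trade(deck[:n]):
            return n

rng = random.Random(2025)
print(mean(cards_until_set(rng) for _ in range(10**6)))   # ≈ 72921/19393 ≈ 3.7602
```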

Risk without Wilds

There are $42$ territory cards in the deck—$14$ that depict an infantry unit, $14$ that depict a cavalry unit, and $14$ that depict an artillery unit. Once you have three cards that either (1) all depict the same kind of unit, or (2) all depict different kinds of units, you can trade them in at the beginning of your next turn in exchange for some bonus units to be placed on the board.

If you are randomly dealt three cards from the $42$, what is the probability that you can trade them in?

In this case, we see that there are $\binom{42}{3}=11480$ possible hands of three Risk cards once the wild cards are removed. There are $\binom{14}{3}=364$ possible combinations of three cards within any single kind, and there are three kinds (infantry, cavalry and artillery), so a total of $1092$ different three-of-a-kind sets can be formed. There are also $14^3 = 2744$ different one-of-each sets. So there are a total of $3836$ sets of three Risk cards without wild cards, which implies that the probability of being able to trade in a randomly dealt hand of three cards when the wild cards have been removed is $$p = \frac{3836}{11480} = \frac{137}{410} \approx 33.41463\dots \%$$
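This count is small enough to confirm by brute force; here is a minimal sketch (the kind labels are arbitrary) that enumerates all $11480$ hands and checks each one.

```python
from fractions import Fraction
from itertools import combinations

deck = ['I'] * 14 + ['C'] * 14 + ['A'] * 14            # the 42 wildless cards
hands = list(combinations(range(42), 3))                # all 11480 three-card hands, by position
good = sum(1 for h in hands if len({deck[i] for i in h}) in (1, 3))
print(good, Fraction(good, len(hands)))                  # 3836 and 137/410
```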

Monday, September 15, 2025

Letters Boxed, Like Surgeons General

Consider the following diagram, which consists of eight points (labeled $A$ through $H$), two on each side of the square. A valid “letter boxed” sequence starts at any of the eight points, and proceeds through all of the other points exactly once. However, adjacent points in the sequence can never be on the same side of the square. The first and last points in the sequence can be on the same side, but do not have to be.

As an illustration, $AFBCHEDG$ is a valid sequence of points. However, $AFBCHGED$ is not a valid sequence, since $H$ and $G$ are adjacent in the sequence and on the same side of the square. How many distinct valid “letter boxed” sequences are there that include all eight points on the square?

Well, in total there are $8! = 40,320$ different words that can be made up of the letters $A$ through $H$ with no repeated letters. One possible way to get this answer is to just enumerate all $40,320$ possible words and throw away any words with forbidden doublets, like $AB$, $EF$ or $HG$. This is not too terribly time consuming in a short Python script, which yields that there are a total of $13,824$ different letters boxed that include all eight points on the square.
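A minimal version of that sort of script might look like the following, using the same side pairings ($AB$, $CD$, $EF$, $GH$) as the Classic adjacency dictionary shown further below.

```python
from itertools import permutations

# which side of the square each point lies on: AB, CD, EF, GH
side = {p: i for i, pair in enumerate(('AB', 'CD', 'EF', 'GH')) for p in pair}

def valid(word):
    """No two consecutive points may lie on the same side of the square."""
    return all(side[a] != side[b] for a, b in zip(word, word[1:]))

print(sum(valid(w) for w in permutations('ABCDEFGH')))   # 13824
```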

Instead of two points on each side of the square (and eight points in total), now there are three points on each side (and twelve points in total), labeled A through L in the diagram below. How many distinct valid “letter boxed” sequences are there that include all $12$ points on the square?

While $8!$ might be well within a modern programming language's wheelhouse to iterate through, $12!$ is nearly $480$ million possible words composed of the letters $A$ through $L$ with no repeated letters. So unless we want to run a script all weekend, let's try a different approach. We can encode the $12$ points on the square and the adjacency of the letter boxed game into a graph $\mathcal{G},$ where for instance there is an edge from $A$ to each of the letters $D, E, F, \dots, L,$ but no edge between $A$ and $B$, $A$ and $C$, or $B$ and $C,$ since they all share the same side of the square. A letter boxed that contains all twelve letters would then be a Hamiltonian path on $\mathcal{G},$ so let's liberally borrow the Python3 code provided by user Gaslight Deceive Subvert from this StackOverflow post related to enumerating all Hamiltonian paths on a graph and get to counting. Since I don't particularly care about the paths themselves, just the number of them, instead of printing the entire list of all Hamiltonian paths, I modified the code to just print the number of distinct Hamiltonian paths.

To confirm that everything seemed above board, I started out with the graph from the Classic problem given by \begin{align*}\textsf{G} &\textsf{= \{'A':'CDEFGH', 'B':'CDEFGH', 'C':'ABEFGH', 'D':'ABEFGH',}\\ &\quad\quad \textsf{'E':'ABCDGH', 'F':'ABCDGH','G':'ABCDEF', 'H' : 'ABCDEF'\}}.\end{align*} Thankfully, the code retrieved the answer of $13,824$ letters boxed, so I then felt confident enough to encode the Extra Credit graph as \begin{align*}\tilde{G} &\textsf{= \{'A':'DEFGHIJKL', 'B':'DEFGHIJKL', 'C':'DEFGHIJKL',}\\ &\quad\quad\quad \textsf{'D':'ABCGHIJKL', 'E':'ABCGHIJKL', 'F':'ABCGHIJKL',}\\ &\quad\quad\quad\quad \textsf{'G':'ABCDEFJKL', 'H' : 'ABCDEFJKL', 'I':'ABCDEFJKL',}\\ &\quad\quad\quad\quad\quad \textsf{ 'J' : 'ABCDEFGHI', 'K' : 'ABCDEFGHI', 'L' : 'ABCDEFGHI'\}}.\end{align*} Here, after running the code for roughly 15 minutes, we get a grand total of $53,529,984$ letters boxed with all $12$ points on the square.
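For what it's worth, the 15-minute wait can be avoided: a bitmask dynamic program over subsets counts Hamiltonian paths on a $12$-vertex graph essentially instantly. This is a sketch of that alternative approach (not the StackOverflow code referenced above), and it reproduces both counts.

```python
def count_hamiltonian_paths(G):
    """Count directed Hamiltonian paths in a graph given as an adjacency dict."""
    verts = sorted(G)
    index = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    nbrs = [sum(1 << index[u] for u in G[v]) for v in verts]   # neighbor bitmasks
    # dp[mask][v] = number of paths visiting exactly the vertices in mask and ending at v
    dp = [[0] * n for _ in range(1 << n)]
    for v in range(n):
        dp[1 << v][v] = 1
    for mask in range(1 << n):
        for v in range(n):
            ways = dp[mask][v]
            if ways:
                free = nbrs[v] & ~mask            # unvisited neighbors of v
                while free:
                    u = (free & -free).bit_length() - 1
                    dp[mask | (1 << u)][u] += ways
                    free &= free - 1
    return sum(dp[(1 << n) - 1])                  # paths that visit every vertex

sides = ('ABC', 'DEF', 'GHI', 'JKL')
G12 = {v: [u for t in sides if t != s for u in t] for s in sides for v in s}
print(count_hamiltonian_paths(G12))               # 53,529,984
```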

Monday, September 8, 2025

High-Low Hijinks Extra Credit

Your friend is playing an epic game of “high-low” and has made it incredibly far, having racked up a huge number of points. Given this information, and only this information, what is the probability that your friend wins the next round of the game?

Let's define this problem as the following probabilistic statement: what is the conditional probability that $N \geq n+1$ given that $N \geq n,$ when $n$ is large? That is, find $$\lambda = \lim_{n \to \infty} \mathbb{P} \left \{ N \geq n+1 \mid N \geq n \right\} = \lim_{n\to \infty} \frac{\mathbb{P} \{ N \geq n+1 \}}{\mathbb{P} \{ N \geq n \} } = \lim_{n\to\infty} \frac{p_{n+1}}{p_n}.$$

One approach is to just compute successive values of $p_n,$ as per the following table:

$$\begin{array}{c|c} n & p_n \\ \hline 1 & \frac{3}{4} \\ 2 & \frac{13}{24} \\ 3 & \frac{25}{64} \\ 4 & \frac{541}{1920} \\ 5 & \frac{1561}{7680} \\ 6 & \frac{47293}{322560} \\ 7 & \frac{36389}{344064} \\ 8 & \frac{7087261}{92897280} \\ 9 & \frac{34082521}{619315200} \\ 10 & \frac{1622632573}{40874803200} \end{array}$$

Even by this point, it seems that the ratio of $p_{n+1} / p_n$ approaches roughly $0.72134752\dots,$ but let's see if we can make sense of this otherwise mysterious limit.
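One quick way to sanity-check this limit numerically is to discretize the recursion $h_{n+1}(t) = \int_0^1 I(t,x) h_n(x)\,dx$ (defined in the Classic solution below) on a fine grid and watch the ratio of successive $p_n$ values. The grid size below is an arbitrary choice; this is only a rough numerical sketch.

```python
import numpy as np

M = 200_000                                  # grid resolution (arbitrary)
t = (np.arange(M) + 0.5) / M                 # midpoints of (0, 1)
h = np.ones(M)                               # h_0 ≡ 1
p_prev = 1.0
for n in range(1, 13):
    upper = np.cumsum(h[::-1])[::-1] / M     # ≈ ∫_t^1 h_{n-1}(x) dx
    lower = np.cumsum(h) / M                 # ≈ ∫_0^t h_{n-1}(x) dx
    h = np.where(t < 0.5, upper, lower)      # this is now h_n
    p = h.mean()                             # p_n = ∫_0^1 h_n(t) dt
    print(n, round(p, 6), round(p / p_prev, 8))
    p_prev = p

print(1 / (2 * np.log(2)))                   # 0.72134752...
```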

Let's focus on the recursion $h_{n+1}(t) = \int_0^1 I(t,x) h_n(x) \,dx$ that we saw in the Classic problem. Let's assume that we have $h_{n+1} (x) \approx \lambda h_n (x)$ for large values of $n \gg 1.$ Then the recursion formula would settle down to something like $$\lambda h^*(t) = \int_0^1 I(t,x) h^*(x) \,dx = \begin{cases} \int_t^1 h^*(x)\,dx, &\text{if $t \lt \frac{1}{2};$}\\ \int_0^t h^*(x) \,dx, &\text{if $t \gt \frac{1}{2}.$}\end{cases}$$ Differentiating both sides of the equation, we get $$\lambda \frac{d}{dt} h^*(t) = \begin{cases} -h^*(t), &\text{if $t \lt \frac{1}{2};$}\\ h^*(t), &\text{if $t \gt \frac{1}{2}.$}\end{cases}$$ Let's focus on when $t \gt \frac{1}{2},$ in which case we have $\lambda \frac{d}{dt} h^*(t) = h^*(t),$ so we would have $h^*(t) = C \exp \left( \frac{t}{\lambda} \right)$ for $t \gt \frac{1}{2},$ with $h^*(t) = h^*(1-t) = C \exp \left( \frac{ 1-t}{\lambda} \right)$ for $t \lt \frac{1}{2}.$ In this case, plugging back in, if our recursion formula still holds for $t \gt \frac{1}{2},$ we would need to have \begin{align*}\lambda C e^{t/\lambda} &= \int_0^\frac{1}{2} C \exp \left( \frac{1-x}{\lambda} \right)\,dx + \int_\frac{1}{2}^t C\exp \left(\frac{x}{\lambda}\right) \,dx \\ &= \lambda C \left( e^{1/\lambda} - e^{1/(2\lambda)} \right) + \lambda C \left( e^{t/\lambda} - e^{1/(2\lambda)} \right) \\ &= \lambda C e^{t/\lambda} + \lambda C \left( e^{1/\lambda} - 2 e^{1/(2\lambda)} \right).\end{align*} Therefore, since $C \neq 0,$ we must have either $\lambda = 0$ or $e^{1/\lambda} - 2 e^{1/(2\lambda)} = 0$ in order for the recursion formula to hold. The latter gives $e^{1/(2\lambda)} = 2,$ that is, $\frac{1}{2\lambda} = \ln 2,$ which implies that the conditional probability of winning the next round in a long game of high-low is $$\lambda = \lim_{n \to \infty} \frac{p_{n+1}}{p_n} = \frac{1}{2\ln 2} \approx 0.721347520444\dots.$$

High-Low Hijinks

You’re playing a game of “high-low,” which proceeds as follows:

  • First, you are presented with a random number, $x_1$, which is between $0$ and $1.$
  • A new number, $x_2$, is about to be randomly selected between $0$ and $1$, independent of the first number. But before it’s selected, you must guess how $x_2$ will compare to $x_1$. If you think $x_2$ will be greater than $x_1$ you guess “high.” If you think $x_2$ will be less than $x_1$, you guess “low.” If you guess correctly, you earn a point and advance to the next round. Otherwise, the game is over.
  • If you correctly guessed how $x_2$ compared to $x_1$ then another random number, $x_3$, will be selected between $0$ and $1$. This time, you must compare $x_3$ to $x_2$, guessing whether it will be “high” or “low.” If you guess correctly, you earn a point and advance to the next round. Otherwise, the game is over.

You continue playing as many rounds as you can, as long as you keep guessing correctly. You quickly realize that the best strategy is to guess “high” whenever the previous number is less than $0.5,$ and “low” whenever the previous number is greater than $0.5.$ With this strategy, what is the probability you will earn at least two points? That is, what are your chances of correctly comparing $x_2$ to $x_1$ and then also correctly comparing $x_3$ to $x_2$?

Let's define the indicator function $$I(s,t) = \begin{cases} 1, &\text{if $(t-s)(s-\frac{1}{2}) \leq 0;$} \\ 0, &\text{otherwise,}\end{cases}$$ such that $I(x_1,x_2) = 1$ whenever you successfully earned a point in the first round and $I(x_2,x_3) = 1$ whenever you successfully earned a second point. Let $N = N(x_1, x_2, \dots, x_n, \dots)$ be the random variable that corresponds to your score from playing this game of high-low. We see that based on this setup we have $$\mathbb{P} \{ N \geq n \} = \int_0^1 \int_0^1 \cdots \int_0^1 \prod_{i=1}^n I(x_i, x_{i+1}) \,dx_{n+1} \,dx_n \,dx_{n-1}\, \cdots \,dx_2 \,dx_1.$$ Further, let's define $h_0(t) = 1$ and $$h_1(t) = \int_0^1 I(t, x) \,dx = \begin{cases} \int_t^1 \,dx = 1-t, &\text{if $t \lt \frac{1}{2};$}\\ \int_0^t \,dx = t, &\text{if $t \geq \frac{1}{2};$} \end{cases} = \max \{ t, 1-t \}.$$ Finally, for any $n \in \mathbb{N},$ define $$h_{n+1}(t) = \int_0^1 I(t,x) h_n(x) \,dx.$$

In particular, the probability of having a score of at least $2$ is \begin{align*}p_2 = \mathbb{P} \{ N \geq 2 \} &= \int_0^1 \int_0^1 \int_0^1 I(x_1,x_2) I(x_2, x_3) \,dx_3\,dx_2\,dx_1 \\ &= \int_0^1 \int_0^1 I(x_1,x_2) \left(\int_0^1 I(x_2, x_3) \,dx_3 \right) \,dx_2\,dx_1 \\ &= \int_0^1 \int_0^1 I(x_1,x_2) h_1(x_2) \,dx_2 \,dx_1\\ &= \int_0^1 h_2(x_1) \,dx_1.\end{align*} If we extrapolate further, which we will need primarily for the Extra Credit problem, we see that $$p_n = \mathbb{P} \{ N \geq n \} = \int_0^1 h_n(t) \,dt, \,\, \forall n \in \mathbb{N}.$$

Returning to the Classic problem here, we have $h_1(t) = \max \{ t, 1-t \}$ and \begin{align*}h_2(t) = \int_0^1 I(t,x) \max \{x, 1-x\} \,dx &= \begin{cases} \int_t^1 \max \{ x, 1 - x\} \,dx, &\text{if $t \lt \frac{1}{2};$} \\ \int_0^t \max \{x, 1-x\}\,dx, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \begin{cases} \int_t^\frac{1}{2} (1-x) \,dx + \int_\frac{1}{2}^1 x\,dx, &\text{if $t \lt \frac{1}{2};$} \\ \int_0^\frac{1}{2} (1-x) \,dx + \int_\frac{1}{2}^t x \,dx, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \begin{cases} \frac{3}{4} - t + \frac{t^2}{2}, &\text{if $t \lt \frac{1}{2};$} \\ \frac{1}{4} + \frac{t^2}{2}, &\text{if $t \geq \frac{1}{2};$}\end{cases} \\ &= \max \left\{ \frac{3}{4} -t + \frac{t^2}{2}, \frac{1}{4} + \frac{t^2}{2} \right\}.\end{align*} So we have the probability of earning at least two points is \begin{align*}p_2 = \mathbb{P} \{ N \geq 2 \} &= \int_0^1 h_2(t) \,dt\\ &= \int_0^\frac{1}{2} \left( \frac{3}{4} - t + \frac{t^2}{2} \right) \,dt + \int_\frac{1}{2}^1 \left( \frac{1}{4} + \frac{t^2}{2} \right) \,dt\\ &= \left[ \frac{3}{4}t -\frac{1}{2}t^2 + \frac{1}{6} t^3 \right]^{t=\frac{1}{2}}_{t=0} + \left[ \frac{1}{4} t + \frac{1}{6} t^3 \right]^{t=1}_{t=\frac{1}{2}} \\ &= \left(\frac{3}{8} - \frac{1}{8} + \frac{1}{48}\right) + \left(\frac{1}{4} + \frac{1}{6}\right) - \left( \frac{1}{8} + \frac{1}{48} \right) \\ &= \frac{13}{24}.\end{align*}
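A quick Monte Carlo sketch of the game (the function name and seed are arbitrary) agrees with this: with the threshold strategy, roughly $54.17\%$ of plays reach at least two points.

```python
import random

def points(rng):
    """Play one game of high-low with the threshold-at-1/2 strategy; return the score."""
    x, score = rng.random(), 0
    while True:
        y = rng.random()
        won = (y > x) if x < 0.5 else (y < x)
        if not won:
            return score
        score, x = score + 1, y

rng = random.Random(13)
trials = 10**6
print(sum(points(rng) >= 2 for _ in range(trials)) / trials)   # ≈ 13/24 ≈ 0.5417
```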

Monday, August 25, 2025

Sundown Trail Race with a Mulligan

Now let’s add one more wrinkle. At some point during the race, if you’re unhappy with the loop you’ve just been randomly assigned, you’re granted a “mulligan,” allowing you to get another random assignment. (Note that there’s a $25$ percent chance you’ll be assigned the same loop again.) You don’t have to use your mulligan, but you can’t use it more than once.

As before, the time is $5$:$55$ p.m. You have just completed a loop, and you haven’t used your mulligan yet. With an optimal strategy (i.e., using the mulligan at the right moment, if at all), on average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

As we saw in the Classic post, let's use the absorption probabilities matrix $B = \left(B(t,s)\right)_{t \in \mathfrak{T}, s \in \mathfrak{S}},$ which is the probability of starting at transient state $t$ and ending up at the final, absorbing state $s.$ This gives us the expected score when starting at transient state $i$ with no mulligan available, $$E(i) = \mathbb{E} [ S \mid S_0 = i ] = \sum_{s \in \mathfrak{S}} s B(i,s).$$ In particular, with entries ordered according to $\mathfrak{T} = \{0, 1, 2, 3, 3.5, 4, 4.5, 5, 5.5\}$ (as in the Classic solution below), we have the vector $$E = \left(E(t)\right)_{t \in \mathfrak{T}} = \begin{pmatrix} \frac{19933}{4096} \\ \frac{5245}{1024} \\ \frac{1437}{256} \\ \frac{317}{64} \\ \frac{293}{64} \\ \frac{69}{16} \\ \frac{77}{16} \\ \frac{21}{4} \\ \frac{23}{4} \end{pmatrix}.$$

We can only use a single mulligan, but it is enough to change the overall expected score. Let's denote by $$\tilde{E}(t) = \tilde{\mathbb{E}}[ S \mid S_0 = t ]$$ the expected score under optimal play when starting at state $t$ with the mulligan still available. In particular, let's say that we are currently finishing up our lap to attain the transient score of $t \in \mathfrak{T},$ where we haven't yet used our mulligan, and we quickly decide for which new lap assignments we would want to use it. Say the randomly assigned lap would move us from $t$ to $t^\prime \in \mathfrak{T}.$ If we use our mulligan on this assignment then we will end up with an expected score of $E(t),$ since we redraw and have no mulligan left; whereas if we don't use the mulligan, then we would end up with an expected score of $\tilde{E}(t^\prime),$ since we would still have our mulligan remaining. So the optimal choice if we end up moving from $t$ to $t^\prime \in \mathfrak{T}$ would give us $\max \{ E(t), \tilde{E}(t^\prime) \}.$ Similarly, if we are randomly assigned to move from $t$ to some $s \in \mathfrak{S},$ then if we use our mulligan we end up with an expected score of $E(t),$ versus a score of $s$ if we don't. So if we would move from $t$ to $s \in \mathfrak{S},$ the optimal choice gives us $\max \{E(t), s \}.$ Putting this all together, using the corresponding transition probabilities from the $Q$ and $R$ matrices given in the Classic problem, we get the recursive formula $$\tilde{E}(t) = \sum_{t^\prime \in \mathfrak{T}} Q(t, t^\prime) \max \{ E(t), \tilde{E}(t^\prime) \} + \sum_{s \in \mathfrak{S}} R(t,s) \max \{ E(t), s \}.$$

So let's go recursing .... \begin{align*} \tilde{E}(5.5) &= \frac{3}{4} \max\{ E(5.5), 5.5 \} + \frac{1}{4} \max \{ E(5.5), 6.5 \} = \frac{3}{4} E(5.5) + \frac{1}{4} 6.5 = \frac{95}{16} \\ \tilde{E}(5) &= \frac{3}{4} \max \{ E(5), 5 \} + \frac{1}{4} \max \{ E(5), 6 \} = \frac{3}{4} E(5) + \frac{1}{4} 6 = \frac{87}{16} \\ \tilde{E}(4.5) &= \frac{1}{4} \max \{ E(4.5), \tilde{E}(5.5) \} + \frac{3}{4} \max \{ E(4.5), 4.5 \} = \frac{1}{4} \tilde{E}(5.5) + \frac{3}{4} E(4.5) = \frac{163}{32} \\ \tilde{E}(4) &= \frac{1}{4} \max \{ E(4), \tilde{E}(5) \} + \frac{3}{4} \max \{ E(4), 4 \} = \frac{1}{4} \tilde{E}(5) + \frac{3}{4} E(4) = \frac{147}{32} \\ \tilde{E}(3.5) &= \frac{1}{4} \max \{ E(3.5), \tilde{E}(4.5) \} + \frac{1}{2} \max \{ E(3.5), 3.5 \} + \frac{1}{4} \max \{ E(3.5), 6.5 \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(4.5) + \frac{1}{2} E(3.5) + \frac{1}{4} 6.5 = \frac{83}{16} \\ \tilde{E}(3) &= \frac{1}{4} \max \{ E(3), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(3), 3 \} + \frac{1}{4} \max \{ E(3), 6 \} + \frac{1}{4} \max \{ E(3), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(3) + \frac{1}{4} E(3) + \frac{1}{4} 6 + \frac{1}{4} 6.5 = \frac{717}{128}\\ \tilde{E}(2) &= \frac{1}{4} \max \{ E(2), \tilde{E}(3) \} + \frac{1}{4} \max\{ E(2), \tilde{E}(5) \} + \frac{1}{4} \max \{ E(2), \tilde{E}(5.5) \} + \frac{1}{4} \max \{ E(2), 6.5 \} \\ &\quad\quad = \frac{1}{4} E(2) + \frac{1}{4} E(2) + \frac{1}{4} \tilde{E}(5.5) + \frac{1}{4} 6.5 = \frac{3029}{512}\\ \tilde{E}(1) &= \frac{1}{4} \max \{ E(1), \tilde{E}(2) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(4.5) \} + \frac{1}{4} \max \{ E(1), \tilde{E}(5.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(2) + \frac{1}{4} E(1) + \frac{1}{4} E(1) + \frac{1}{4} \tilde{E}(5.5) = \frac{5657}{1024} \end{align*}

This leaves us with one more step to show that the total expected score accrued between $5$:$55$ pm and $7$ pm, going at a constant $10$-minute-per-mile pace with a single mulligan, is \begin{align*} \tilde{E}(0) &= \frac{1}{4} \max \{ E(0), \tilde{E}(1) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(3.5) \} + \frac{1}{4} \max \{ E(0), \tilde{E}(4.5) \} \\ &\quad\quad = \frac{1}{4} \tilde{E}(1) + \frac{1}{4} \tilde{E}(3) + \frac{1}{4} \tilde{E}(3.5) + \frac{1}{4} \tilde{E}(4.5)\\ &\quad\quad = \frac{21921}{4096} = 5.351806640625 \end{align*}
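The maxima in these recursions are easy to slip up on by hand, so here is an exact-arithmetic sketch that double-checks both $E$ and $\tilde{E}$ by working backward from the largest transient score. The state bookkeeping and names are my own; the dynamics follow the problem statement.

```python
from fractions import Fraction as F

LOOPS = [F(1), F(3), F(7, 2), F(9, 2)]   # loop lengths in miles
CAP = F(13, 2)                            # 6.5 miles fit in the remaining 65 minutes
TRANSIENT = [F(0), F(1), F(2), F(3), F(7, 2), F(4), F(9, 2), F(5), F(11, 2)]

def successors(t):
    """Yield (next state, still transient?) for each of the four equally likely loops."""
    for d in LOOPS:
        s = t + d
        if s > CAP:                      # loop can't be finished: stuck at score t
            yield t, False
        elif s + min(LOOPS) > CAP:       # loop finishes, but nothing more fits: absorb at s
            yield s, False
        else:
            yield s, True

E, Emul = {}, {}                          # no-mulligan and mulligan-in-hand expected scores
for t in sorted(TRANSIENT, reverse=True):            # successors always have larger scores
    E[t] = sum(E[s] if trans else s for s, trans in successors(t)) / 4
    Emul[t] = sum(max(E[t], Emul[s] if trans else s) for s, trans in successors(t)) / 4

print(E[F(0)], Emul[F(0)])                # 19933/4096 and 21921/4096
```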

Sundown Trail Race

You’re participating in a trail run that ends at sundown at $7$ p.m. The run consists of four loops: $1$ mile, $3$ miles, $3.5$ miles, and $4.5$ miles. After completing any given loop, you are randomly assigned another loop to run—this next loop could be the same as the previous one you just ran, or it could be one of the other three. Being assigned your next loop doesn’t take a meaningful amount of time; assume all your time is spent running.

Your “score” in the race is the total distance you run among all completed loops. If you’re still out on a loop at $7$ p.m., any completed distance on that loop does not count toward your score!

It is now $5$:$55$ p.m. and you have just completed a loop. So far, you’ve been running $10$-minute miles the whole way. You’ll maintain that pace until $7$ p.m. On average, what score can you expect to earn between $5$:$55$ p.m. and $7$ p.m.?

OK, let's agree on how to parametrize this. There are $65$ minutes left in the race and you are running at a clean $10$ minutes per mile, so let's just divide everything by the $10$-minute-per-mile pace and deal with things entirely in miles, or score space. In this space, the problem looks like this: your score is currently $0$, and you will be assigned a random score increment $\delta S$ which is uniformly distributed over the set $\Delta S = \{1 , 3, 3.5, 4.5 \}.$ If $S + \delta S \leq 6.5,$ then your new score is $S+\delta S.$ Otherwise, if $S + \delta S \gt 6.5,$ then your final score will be $S.$ For clarity, we see that the possible final scores are $\mathfrak{S} = \{ 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5\}.$

With the transformation complete, we could go through the combinatorial enumeration of all possible paths to get to a final score of, say, $S=3.5$. In that particular case, we see that in order to end with a score of $3.5$ you must first be assigned $3.5$ in your first loop and then be assigned either $3.5$ or $4.5$ in your second lap (neither of which can be completed before $7$ p.m.). This would then have a total probability of $\frac{1}{4} \times \frac{1}{2} = \frac{1}{8}.$ This can definitely be done for each of the other elements in $\mathfrak{S},$ but for the Extra Credit answer we will want to take advantage of modeling this as an absorbing Markov chain .... so let's do that here too.

The total number of states in this Markov chain is $17$: there are nine transient states $\mathfrak{T} = \{0, 1, 2, 3, 3.5, 4, 4.5, 5, 5.5\}$ and then the eight final, absorbing states in $\mathfrak{S}.$ Note that there are two possible states where the current score is, say, $3.5$: one in $\mathfrak{T},$ which represents a score that could still have some $\delta S \lt 3.5$ added to it, and the final one in $\mathfrak{S},$ where the race is over. If at any time we have to differentiate between the transient and absorbing states that have the same numerical values, we will use $T$ and $A$ superscripts, e.g., $3.5^T \in \mathfrak{T}$ and $3.5^A \in \mathfrak{S}.$ If no superscript is provided, then let's safely assume that we are talking about the final, absorbing state.

OK, now let's get the canonical form of the transition matrix $P = \begin{pmatrix} Q & R \\ 0 & I_8 \end{pmatrix},$ where $Q$ is the $9 \times 9$ matrix representing the transition probabilities entirely within the transient states $\mathfrak{T}$ and $R$ is the $9 \times 8$ matrix representing the transition probabilities that start at transient states in $\mathfrak{T}$ and end in absorbing states $\mathfrak{S}.$ In particular, we have \begin{align*} Q &= \begin{pmatrix} 0 & \frac{1}{4} & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & \frac{1}{4} & 0 & 0 & \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \\ R & = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ \frac{1}{4} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & 0 & 0 & 0 & 0 & \frac{1}{4} \\ 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix} \end{align*} Based on the matrix $Q,$ we can compute the fundamental matrix $$N = (I_9 - Q)^{-1} = \begin{pmatrix} 1 & \frac{1}{4} & \frac{1}{16} & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} & \frac{49}{1024} & \frac{11}{64} \\ 0 & 1 & \frac{1}{4} & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} & \frac{33}{256} & \frac{3}{8} \\ 0 & 0 & 1 & \frac{1}{4} & 0 & \frac{1}{16} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$$

Using the fundamental matrix, we can get the absorption probabilities as $$B = NR = \begin{pmatrix} \frac{17}{256} & \frac{1}{8} & \frac{99}{1024} & \frac{9}{32} & \frac{147}{4096} & \frac{33}{256} & \frac{321}{4096} & \frac{3}{16} \\ \frac{1}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{99}{1024} & \frac{9}{32} & \frac{49}{1024} & \frac{11}{64} \\ \frac{1}{16} & 0 & \frac{3}{64} & 0 & \frac{51}{256} & \frac{3}{16} & \frac{33}{256} & \frac{3}{8} \\ \frac{1}{4} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} & \frac{1}{4} \\ 0 & \frac{1}{2} & 0 & \frac{3}{16} & 0 & \frac{3}{64} & 0 & \frac{17}{64} \\ 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} & 0 \\ 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{3}{16} & 0 & \frac{1}{16} \\ 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} & 0 \\ 0 & 0 & 0 & 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \end{pmatrix},$$ where $B(t,s)$ is the probability of starting at transient state $t$ and eventually absorbing into the state $s \in \mathfrak{S}.$ For the purpose of the Classic problem, we are most interested in the first row of $B,$ that is $B(0,s),$ which is the probability of starting with a score of $0$ and ending at score $s \in \mathfrak{S}.$ More specifically, the expected score from running constant $10$-minute miles from $5$:$55$ pm until $7$ pm is \begin{align*}E(0) = \mathbb{E}[S \mid S_0 = 0] &= \sum_{s \in \mathfrak{S}} s B(0,s)\\ &= 3 \cdot \frac{17}{256} + 3.5 \cdot \frac{1}{8} + 4 \cdot \frac{99}{1024} + 4.5 \cdot \frac{9}{32} + 5 \cdot \frac{147}{4096} \\ &\quad\quad\quad+ 5.5 \cdot \frac{33}{256} + 6 \cdot \frac{321}{4096} + 6.5 \cdot \frac{3}{16} \\ &= \frac{19933}{4096} = 4.866455078125.\end{align*}
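Rather than inverting the $9\times 9$ matrix by hand, here is a sketch that builds $Q$ and $R$ programmatically from the race dynamics and lets sympy compute $N$ and $B$ exactly. The state bookkeeping is my own choice; the resulting matrices should match those written out above.

```python
import sympy as sp

loops = [sp.Rational(1), sp.Rational(3), sp.Rational(7, 2), sp.Rational(9, 2)]
cap = sp.Rational(13, 2)
trans = [sp.Rational(x) for x in ('0', '1', '2', '3', '7/2', '4', '9/2', '5', '11/2')]
absorb = [sp.Rational(x) for x in ('3', '7/2', '4', '9/2', '5', '11/2', '6', '13/2')]

Q = sp.zeros(9, 9)
R = sp.zeros(9, 8)
for i, t in enumerate(trans):
    for d in loops:
        s = t + d
        if s > cap:                       # loop can't be finished: absorbed at the current score t
            R[i, absorb.index(t)] += sp.Rational(1, 4)
        elif s + min(loops) > cap:        # no further loop fits: absorbed at s
            R[i, absorb.index(s)] += sp.Rational(1, 4)
        else:                             # s is still transient
            Q[i, trans.index(s)] += sp.Rational(1, 4)

N = (sp.eye(9) - Q).inv()                 # fundamental matrix
B = N * R                                 # absorption probabilities
E0 = sum(B[0, j] * absorb[j] for j in range(8))
print(E0)                                 # 19933/4096
```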