Monday, November 17, 2025

Millions of Peaches, Peaches for Free, .... or at least for Extra Credit

As before, your assistant intends to pick two random points along the circumference of the garden and run a hose straight between them.

This time, you’ve decided to contribute to the madness yourself by picking a random point inside the garden to plant a second peach tree. On average, how far can you expect this point to be from the nearest part of the hose?

Here let's again assume that the assistant's hose starts at $(1,0)$ and ends at $(\cos \theta, \sin \theta)$ for some uniformly random $\theta \sim U(0,2\pi).$ This time, however, we assume that there is a peach tree at a uniformly random point $(u,v)$ in the disk, with density $$f(u,v) = \frac{1}{\pi}\chi ( u^2+v^2 \leq 1 ).$$ We can again use the line $y = x\tan \frac{\theta}{2},$ which is the perpendicular bisector of the chord traced by the hose, or more precisely we will use parallel copies of it: the copies through the endpoints $(1,0)$ and $(\cos \theta, \sin \theta)$ are the lines $v - u \tan \frac{\theta}{2} = -\tan \frac{\theta}{2}$ and $v - u \tan \frac{\theta}{2} = \tan \frac{\theta}{2},$ respectively. Let's subdivide the circular area into three regions: \begin{align*}\Omega_1(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, \left| v - u \tan \frac{\theta}{2} \right| \leq \tan \frac{\theta}{2} \right\} \\ \Omega_2(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, v - u\tan \frac{\theta}{2} \geq \tan \frac{\theta}{2} \right\} \\ \Omega_3(\theta) &= \left\{ (u,v) \mid u^2 + v^2 \leq 1, v - u \tan \frac{\theta}{2} \leq - \tan \frac{\theta}{2} \right\}.\end{align*}

In general, since minimizing the distance is the same as minimizing its square, we get \begin{align*}\lambda^*(u,v,\theta) & = \arg\min \left\{ \| ( 1 - \lambda + \lambda \cos \theta - u, \lambda \sin \theta - v ) \|_2 \mid 0 \leq \lambda \leq 1 \right\} \\ &= \arg \min \left\{ (1-u)^2 + v^2 - 2 \lambda ( (1-u)(1-\cos \theta) + v\sin \theta ) \right.\\&\quad\quad\quad\quad\quad\quad\quad\quad \left.+ 2(1-\cos \theta) \lambda^2 \mid 0 \leq \lambda \leq 1 \right\} \\ &= \begin{cases} \frac{1}{2} \left( 1 - u + v \cot \frac{\theta}{2} \right), & \text{ if $(u,v) \in \Omega_1(\theta)$; } \\ 1, &\text{ if $(u,v) \in \Omega_2(\theta)$;} \\ 0, &\text{if $(u,v) \in \Omega_3(\theta).$}\end{cases}\end{align*} Plugging this back in we see that we have $$d(u,v,\theta) = \begin{cases} \frac{ \left| (1-u) \sin \theta - v(1-\cos \theta) \right| }{ 2 \sin \frac{\theta}{2} }, &\text{if $(u,v) \in \Omega_1(\theta)$;} \\ \sqrt{ (u- \cos \theta)^2 + (v - \sin \theta)^2 }, &\text{if $(u,v) \in \Omega_2(\theta)$;} \\ \sqrt{ (u-1)^2 + v^2 }, &\text{ if $(u,v) \in \Omega_3(\theta).$}\end{cases}$$
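As a sanity check on this case analysis, note that $\lambda^*$ is just the unconstrained minimizer clamped to $[0,1]$. Here's a quick Python sketch (the function names are mine, not from the puzzle) comparing that clamped formula against a brute-force grid search over $\lambda$:

```python
import math

def dist_to_hose(u, v, theta):
    """Distance from (u, v) to the chord from (1, 0) to (cos theta, sin theta)."""
    # unconstrained minimizer of the quadratic in lambda, clamped to [0, 1];
    # the clamp is exactly the three-region case analysis
    b = (1 - u) * (1 - math.cos(theta)) + v * math.sin(theta)
    lam = min(1.0, max(0.0, b / (2 * (1 - math.cos(theta)))))
    return math.hypot(1 - lam + lam * math.cos(theta) - u, lam * math.sin(theta) - v)

def dist_brute(u, v, theta, n=20001):
    # brute-force minimum over a fine grid of lambda values
    return min(
        math.hypot(1 - lam + lam * math.cos(theta) - u, lam * math.sin(theta) - v)
        for lam in (i / (n - 1) for i in range(n))
    )
```

The two agree to within the grid resolution for random trees and random chords.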

Let's first attack $A(\theta) = \iint_{\Omega_1(\theta)} d(u,v,\theta) f(u,v) \,du\,dv.$ We see that, so long as $0 \leq \theta \leq \pi$, rotating the plane counterclockwise about the origin by $\frac{\pi-\theta}{2}$ leaves the chord parallel to the $x$-axis, stretching from $(-\sin \frac{\theta}{2}, \cos \frac{\theta}{2})$ to $(\sin \frac{\theta}{2}, \cos \frac{\theta}{2}).$ In this case, it is easy to see that the distance is $d(u^\prime, v^\prime, \theta) = |v^\prime - \cos \frac{\theta}{2}|.$ So the transformed integral is \begin{align*}A(\theta) &= \frac{2}{\pi} \int_0^{\sin \theta/2} \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \left|y - \cos \frac{\theta}{2}\right| \,dy \,dx \\ &= \frac{2}{\pi} \int_0^{\sin \theta/2} \left( \int_{-\sqrt{1-x^2}}^{\cos \theta/2} \left(\cos \frac{\theta}{2} - y \right)\,dy + \int_{\cos \theta/2}^{\sqrt{1-x^2}} \left( y - \cos \frac{\theta}{2} \right) \,dy \right) \,dx \\ &= \frac{2}{\pi} \int_0^{\sin \theta/2} \left(1 + \cos^2 \frac{\theta}{2} - x^2\right) \,dx \\ &= \frac{2}{\pi} \sin \frac{\theta}{2} \left( 1 + \cos^2 \frac{\theta}{2} \right) - \frac{2}{3\pi} \sin^3 \frac{\theta}{2} \\ &= \frac{4}{3\pi} \sin \frac{\theta}{2} \left( 1 + 2 \cos^2 \frac{\theta}{2} \right).\end{align*}
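If you'd rather trust silicon than trigonometry, here's a small Python check (names are mine) of this closed form for $A(\theta)$ against a direct midpoint Riemann sum of $|y - \cos\frac{\theta}{2}|/\pi$ over the rotated strip:

```python
import math

def A_closed(theta):
    s, c = math.sin(theta / 2), math.cos(theta / 2)
    return 4 / (3 * math.pi) * s * (1 + 2 * c * c)

def A_numeric(theta, n=300):
    # midpoint Riemann sum of |y - cos(theta/2)| / pi over the rotated strip
    s, c = math.sin(theta / 2), math.cos(theta / 2)
    total = 0.0
    for i in range(n):
        x = -s + 2 * s * (i + 0.5) / n
        half = math.sqrt(1 - x * x)  # vertical extent of the disk at this x
        for j in range(n):
            y = -half + 2 * half * (j + 0.5) / n
            total += abs(y - c) * (2 * s / n) * (2 * half / n)
    return total / math.pi
```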

We see from symmetry that $\iint_{\Omega_2(\theta)} d(u,v,\theta) f(u,v)\,du\,dv = \iint_{\Omega_3(\theta)} d(u,v,\theta) f(u,v) \,du\,dv,$ so let's define \begin{align*}B(\theta) = \frac{2}{\pi} \iint_{\Omega_3(\theta)} d(u,v,\theta)\,du\,dv &= \frac{2}{\pi} \int_{-\cos \theta}^1 \int_{-\sqrt{1-x^2}}^{\tan \frac{\theta}{2} (x - 1)} \sqrt{ (x-1)^2 + y^2 }\,dy\,dx \\& = \frac{2}{\pi} \int_{\pi + \theta/2}^{3\pi/2} \int_0^{-2\cos \phi} \rho^2 \,d\rho \,d\phi,\end{align*} where the factor of $2$ accounts for both regions, $\frac{1}{\pi}$ is the density $f$, and the final change of variables represents the transformation of $(u,v)$ into $(1 + \rho \cos \phi, \rho \sin \phi)$ for $\phi \in ( \pi + \theta/2, 3\pi/2)$ and $0 \leq \rho \leq -2\cos \phi.$ Therefore, we see that $$B(\theta) = \frac{2}{\pi} \int_{\pi + \theta/2}^{3\pi/2} \left(-\frac{8}{3} \cos^3 \phi\right) \,d\phi = \frac{2}{\pi} \left(\frac{16}{9} - \frac{8}{3} \sin \frac{\theta}{2} + \frac{8}{9} \sin^3 \frac{\theta}{2}\right).$$
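Again we can spot-check the closed form, this time with a Monte Carlo estimate (a Python sketch; names and sample sizes are mine) of the integral over the cap of the disk nearest $(1,0)$:

```python
import math, random

def B_closed(theta):
    s = math.sin(theta / 2)
    return 2 / math.pi * (16 / 9 - 8 / 3 * s + 8 / 9 * s ** 3)

def B_monte_carlo(theta, n=200_000, seed=0):
    # sample the bounding square; the cap near (1, 0) is where
    # the point lies in the disk with v <= (u - 1) tan(theta / 2)
    rng = random.Random(seed)
    t = math.tan(theta / 2)
    acc = 0.0
    for _ in range(n):
        u, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if u * u + v * v <= 1 and v <= (u - 1) * t:
            acc += math.hypot(u - 1, v)
    # the square has area 4; B carries the 2/pi prefactor from the text
    return 2 / math.pi * 4 * acc / n
```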

Putting these two together we see that the average distance from the random point $(u,v)$ to the chord from $(1,0)$ to $(\cos \theta, \sin \theta)$ is given by $d(\theta) = A(\theta) + B(\theta).$ Therefore, through symmetry, we see that the average distance is given by $$\hat{d} = \frac{1}{\pi} \int_0^\pi d(\theta) \,d\theta = \frac{1}{\pi} \int_0^\pi A(\theta) \,d\theta + \frac{1}{\pi} \int_0^\pi B(\theta)\, d\theta.$$ We have \begin{align*}\frac{1}{\pi} \int_0^\pi A(\theta) \,d\theta &= \frac{4}{3\pi^2} \int_0^\pi \sin \frac{\theta}{2} \left( 1 + 2 \cos^2 \frac{\theta}{2} \right)\,d\theta \\ &= \frac{8}{3\pi^2} \left.\left( -\cos \frac{\theta}{2} - \frac{2}{3} \cos^3 \frac{\theta}{2} \right)\right|_0^\pi = \frac{40}{9\pi^2}.\end{align*} Similarly, we have \begin{align*}\frac{1}{\pi} \int_0^\pi B(\theta) \,d\theta &= \frac{2}{\pi^2} \int_0^\pi \left(\frac{16}{9} - \frac{8}{3} \sin \frac{\theta}{2} + \frac{8}{9}\sin^3 \frac{\theta}{2} \right) \,d\theta \\ &= \frac{2}{\pi^2} \left.\left( \frac{16}{9} \theta + \frac{32}{9} \cos \frac{\theta}{2} + \frac{16}{27} \cos^3 \frac{\theta}{2} \right)\right|_0^\pi = \frac{96\pi - 224}{27\pi^2}.\end{align*} Combining these together gives the average distance between the randomly placed hose and your randomly placed peach tree, slightly larger than the answer for the central tree, at $$\hat{d} = \frac{40}{9\pi^2} + \frac{96\pi - 224}{27\pi^2} = \frac{96\pi - 104}{27\pi^2} \approx 0.741494295364\dots$$ furlongs.
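As a final check on the Extra Credit answer, here is a Monte Carlo sketch in Python (names are mine) that draws a random chord and a random tree, then measures the exact distance to the segment:

```python
import math, random

def average_tree_to_hose(n=300_000, seed=2025):
    # Monte Carlo: random chord endpoint angle, random tree in the disk,
    # exact distance to the segment via a clamped projection
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        theta = rng.uniform(0, 2 * math.pi)
        while True:  # uniform point in the unit disk by rejection
            u, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
            if u * u + v * v <= 1:
                break
        cx, cy = math.cos(theta) - 1, math.sin(theta)  # chord vector from (1, 0)
        denom = cx * cx + cy * cy
        lam = 0.0 if denom == 0.0 else min(1.0, max(0.0, ((u - 1) * cx + v * cy) / denom))
        acc += math.hypot(1 + lam * cx - u, lam * cy - v)
    return acc / n
```

With 300,000 samples the estimate lands comfortably within a hundredth of $\frac{96\pi-104}{27\pi^2}$.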

Peaches, peaches, peaches, peaches, peaches … I likely won’t water you

You and your assistant are planning to irrigate a vast circular garden, which has a radius of 1 furlong. However, your assistant is somewhat lackadaisical when it comes to gardening. Their plan is to pick two random points on the circumference of the garden and run a hose straight between them.

You’re concerned that different parts of your garden—especially your prized peach tree at the very center—will be too far from the hose to be properly irrigated.

On average, how far can you expect the center of the garden to be from the nearest part of the hose?

Without loss of generality, let's assume that we choose a coordinate system such that one end of the hose is located at the point $(1,0)$ and the other end is located at $(\cos \theta, \sin\theta)$ for some uniformly random $\theta \sim U(0,2\pi).$ The minimal distance from the origin, where the prized peach tree is, to the chord traced by the hose occurs along the ray that perpendicularly bisects the chord. We can arrive here either by calculus, finding $$\lambda^* = \arg\min \{ \| (1-\lambda + \lambda \cos \theta, \lambda \sin \theta ) \|_2 \mid 0\leq \lambda \leq 1 \} = \frac{1}{2},$$ or by spatial reasoning about Lagrange multipliers and optimizers occurring when contours and constraints are normal to one another, or by any other means, I suppose. Whichever Feynman-esque path we take to arrive at the perpendicular bisector, we then see through some trigonometry that this minimal distance is $$d(\theta) = \left| \cos \frac{\theta}{2}\right|,$$ as a function of the random $\theta.$
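For the skeptical, a tiny brute-force Python check (names are mine) that the nearest point of the hose to the origin is indeed at distance $\left|\cos\frac{\theta}{2}\right|$:

```python
import math

def center_to_chord(theta, n=20001):
    # brute-force minimal distance from the origin to the hose,
    # parameterized as (1 - lam + lam cos(theta), lam sin(theta))
    return min(
        math.hypot(1 - lam + lam * math.cos(theta), lam * math.sin(theta))
        for lam in (i / (n - 1) for i in range(n))
    )
```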

So, the average distance between the randomly placed hose and your precious peach tree is \begin{align*}\bar{d} &= \frac{1}{2\pi} \int_0^{2\pi} d(\theta) d\theta \\ &= \frac{1}{\pi} \int_0^\pi \cos \frac{\theta}{2} \,d\theta \\ &= \frac{2 \sin \frac{\pi}{2} - 2 \sin 0}{\pi} = \frac{2}{\pi}\approx 0.6366197723\dots\end{align*} furlongs, which is about 420.2 feet. This doesn't seem too terribly bad, but given that the spread of an average peach tree is only about 20 feet (according to a quick Googling), your assistant's method is not expected to provide a large amount of water to your peaches.
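A quick Monte Carlo sketch in Python (names are mine), sampling both endpoints rather than fixing one at $(1,0)$, agrees with $2/\pi$:

```python
import math, random

def average_center_distance(n=500_000, seed=7):
    # two uniform angles on the circle; the distance from the center to the
    # chord between them is |cos of half the angular gap|
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
        acc += abs(math.cos((b - a) / 2))
    return acc / n
```

This also confirms that fixing one endpoint at $(1,0)$ really was without loss of generality.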

Monday, November 10, 2025

Even more so, seems like an actual uniform generator would be simpler…

Randy has an updated suggestion for how the button should behave at door 2. What hasn’t changed is that if a contestant is at door 2 and moves to an adjacent door, that new door will be 1 or 3 with equal probability.

But this time, on the first, third, fifth, and other odd button presses that happen to be at door 2, there’s a 20 percent chance the contestant remains at door 2. On the second, fourth, sixth, and other even button presses that happen to be at door 2, there’s a 50 percent chance the contestant remains at door 2.

Meanwhile, the button’s behavior at doors 1 and 3 should in no way depend on the number of times the button has been pressed.

As the producer, you want the chances of winding up at each of the three doors—after a large even number of button presses—to be nearly equal. If a contestant presses the button while at door 1 (or door 3), what should the probability be that they remain at that door?

In this case, if we let $q$ be the probability of remaining at door 1 (or at door 3), then we can treat the two different behaviors at door 2 sequentially in order to come up with the two-step transition matrix \begin{align*}Q & = \begin{pmatrix} q & 1-q & 0 \\ 0.4 & 0.2 & 0.4 \\ 0 & 1-q & q \end{pmatrix} \begin{pmatrix} q & 1-q & 0 \\ 0.25 & 0.5 & 0.25 \\ 0 & 1-q & q \end{pmatrix} \\ & = \begin{pmatrix} q^2 -\frac{1}{4}q +\frac{1}{4} & -q^2 + \frac{1}{2} q +\frac{1}{2} & -\frac{1}{4}q + \frac{1}{4}\\ \frac{2}{5}q + \frac{1}{20} & -\frac{4}{5}q + \frac{9}{10} & \frac{2}{5}q + \frac{1}{20}\\ -\frac{1}{4}q + \frac{1}{4} & -q^2 + \frac{1}{2} q + \frac{1}{2} & q^2 -\frac{1}{4}q + \frac{1}{4}\end{pmatrix}.\end{align*}

We will lean upon our own great (?) shoulders from the Classic problem, where we showed that we need to find the $q$ that makes the two-step transition matrix symmetric. In this case, that requirement yields $$\frac{2}{5}q + \frac{1}{20} = -q^2 + \frac{1}{2} q + \frac{1}{2},$$ or equivalently, $$q^2 -\frac{1}{10} q - \frac{9}{20} = 0.$$ Solving this quadratic for the positive root (since after all we need $q\in[0,1]$ as a probability) gives that the appropriate probability to remain at door 1 in this even more complicated Markov scheme is $$q=\frac{ \frac{1}{10} + \sqrt{ \frac{1}{100} + 4 \frac{9}{20} } }{2} = \frac{1+\sqrt{181}}{20} \approx 0.722681202354\dots$$
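To double-check, we can build the two-step matrix numerically and iterate it from a corner (a Python sketch; names are mine):

```python
import math

def two_step(q):
    # odd press (20% stay at door 2) followed by even press (50% stay)
    P_odd  = [[q, 1 - q, 0], [0.4, 0.2, 0.4], [0, 1 - q, q]]
    P_even = [[q, 1 - q, 0], [0.25, 0.5, 0.25], [0, 1 - q, q]]
    return [[sum(P_odd[i][k] * P_even[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def distribution_after(q, pairs, start=(1.0, 0.0, 0.0)):
    # push a row vector through `pairs` applications of the two-step matrix
    Q, pi = two_step(q), list(start)
    for _ in range(pairs):
        pi = [sum(pi[i] * Q[i][j] for i in range(3)) for j in range(3)]
    return pi

q_star = (1 + math.sqrt(181)) / 20  # the positive root found above
```

Starting at door 1, the distribution after many press-pairs is uniform to within floating-point noise.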

Seems like an actual uniform generator would be simpler…

You are a producer on a game show hosted by Randy “Random” Hall (no relation to Monty Hall). The show has three doors labeled 1 through 3 from left to right, and behind them are various prizes.

Contestants pick one of the three doors at which to start, and then they press an electronic button many, many times in rapid succession. Each time they press the button, they either stay at their current door or move to an adjacent door. If they’re at door 2 and move to an adjacent door, that new door will be 1 or 3 with equal probability.

Randy has decided that when a contestant presses the button while at door 2, there should be a 20 percent chance they remain at door 2.

As the producer, you want the chances of a contestant ultimately winding up at each of the three doors to be nearly equal after many button presses. Otherwise, mathematicians will no doubt write you nasty letters complaining about how your show is rigged.

If a contestant presses the button while at door 1 (or door 3), what should the probability be that they remain at that door?

Firstly, so true ... if there were a meaningful bias in the supposedly uniformly random process of selecting which door, I would take notice. Though rather than calling to complain, I would be more likely to try to get on the show to exploit that bias. But I guess potato-potato.

Moving on, per Randy's specifications and your desire for the appearance of uniform randomness, we have a three state Markov chain setup where the transition matrix is $$P= \begin{pmatrix} p & 1-p & 0 \\ 0.4 & 0.2 & 0.4 \\ 0 & 1-p & p \end{pmatrix}$$ and we are wondering for which values of $p$ we have $P^n \to U,$ as $n\to \infty$, where the limiting matrix $U$ is the $3\times 3$ matrix where each entry is $\frac{1}{3}.$

There are many ways to solve for $p$ here. For instance, though we would obviously start with some specific position, e.g., $\pi_0 = (1,0,0)$, if $P^n \to U,$ as $n\to \infty$, then we would necessarily need to have $u=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$ satisfy $uP=u$. We could obviously just solve here and get three linear equations (that hopefully collapse well since there is only one unknown), but instead let's math out!

Since $P$ is a transition matrix its rows sum to one, so we see that $Pu^T=u^T,$ which if we combine with $uP=u$ implies that $$Pu^T= u^T = (uP)^T = P^Tu^T.$$ So $P^T$ must also fix $u^T$; that is, each column of $P$ must sum to one as well, making $P$ doubly stochastic. For our $P$ the first column sums to $p + 0.4,$ which leaves the much easier linear equation $1-p=0.4.$ That is, in order to provide the illusion of a uniform distribution of the final door through this elaborate Markov scheme, you must have the probability of remaining at door 1 when the button is pressed be $p=60\%,$ which incidentally makes $P$ symmetric $(P = P^T)$.
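A quick Python sketch (names are mine) confirming that $p = 0.6$ drives any starting distribution to uniform:

```python
def press(pi, p=0.6):
    # one button press; p is the stay-probability at doors 1 and 3
    P = [[p, 1 - p, 0], [0.4, 0.2, 0.4], [0, 1 - p, p]]
    return [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

pi = [1.0, 0.0, 0.0]  # start at door 1
for _ in range(100):
    pi = press(pi)
```

After 100 presses, `pi` is numerically indistinguishable from $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$, regardless of the starting door.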

Of course, armed with the knowledge that we have a symmetric transition matrix, we can then justify that this works by using the Spectral Theorem. We have already seen that $\lambda=1$ and $u^T$ is an eigenpair. We could certainly calculate the other two eigenpairs as well, or simply argue that the remaining eigenvalues must have absolute value less than $1$ and that the eigenvectors can be chosen to be an orthonormal basis of $\mathbb{R}^3$, such that $P=VDV^T,$ where $D = \operatorname{diag}(1, \lambda_2, \lambda_3)$ and $$V= \begin{pmatrix} \sqrt{3}/3 & v_{12} & v_{13} \\ \sqrt{3}/3 & v_{22} & v_{23} \\ \sqrt{3}/3 & v_{32} & v_{33} \end{pmatrix}$$ satisfies $V^TV=VV^T=I.$ Since this $V$ represents an orthonormal basis we can change bases and represent any initial $\pi_0$ in $V$-coordinates, where the first coordinate is also $\sqrt{3}/3$ whenever $\pi_0$ sums to $1.$ Let's generically assume that $\pi_0 v_2 = c_2 \in \mathbb{R}$ and $\pi_0 v_3 = c_3 \in \mathbb{R}.$ Then we see that \begin{align*}\pi_n = \pi_{n-1} P &= \pi_{n-1} VDV^T \\ &= \cdots = \pi_0 V D^n V^T \\ &= u + c_2 \lambda_2^n v_2^T + c_3 \lambda_3^n v_3^T.\end{align*} Since $|\lambda_2|, |\lambda_3| \lt 1,$ we see that for any value of $\pi_0$ we get $\pi_n \to u,$ as desired.

Sunday, November 2, 2025

Extra Credit swinging the probabilities, or ... Hey, how'd the Catalan numbers show up here???

Instead of a best-of-seven series, now suppose the series is much, much longer. In particular, the first team to win $N$ games wins the series, so technically this is a best-of-($2N-1$) series, where $N$ is some very, very large number.

In the limit of large $N$, what is the probability swing for Game 1 in terms of $N$?

Applying the same logic used in the Classic Fiddler problem, we want to first find $p_{1,N} = \mathbb{P} \{ \text{win best of (2N-1) series} \mid \text{win game 1} \},$ from which we get the probability swing of game 1 in a best of $(2N-1)$ series as $\Delta_N = 2p_{1,N} - 1.$ Again following in the Classic Fiddler solution's footsteps, if you win games $1$ and $k$, then there are $\binom{k-2}{N-2}$ ways of arranging another $N-2$ wins in the other $k-2$ games, so $$p_{1,k,N} = \mathbb{P} \{ \text{winning a best of $(2N-1)$ series in $k$ games} \mid \text{win game 1} \} = \binom{k-2}{N-2} \frac{1}{2^{k-1}}.$$ Summing over all possible values of $k = N, N+1, \dots, 2N-1,$ we get $$p_{1,N} = \sum_{k=N}^{2N-1} \binom{k-2}{N-2} \frac{1}{2^{k-1}}.$$ We could try to go further and define some generating function $f_N$, but this would lead to an escalating number of derivatives, which gets messy fast.

Instead let's set up a recursive formula. We note that $p_{1,1} = 1,$ which makes sense since it is a winner-takes-all one-game playoff. For some $N \geq 1,$ let's take a look at $$p_{1,N+1} = \sum_{k=N+1}^{2N+1} \binom{k-2}{N-1} \frac{1}{2^{k-1}}.$$ The standard binomial coefficient recursion formula (which comes from Pascal's triangle) gives $$\binom{k-2}{N-1} = \binom{k-3}{N-1} + \binom{k-3}{N-2},$$ so we have \begin{align*} p_{1,N+1} & = \sum_{k=N+1}^{2N+1} \left(\binom{k-3}{N-1} + \binom{k-3}{N-2} \right) \frac{1}{2^{k-1}} \\ &= \left( \sum_{k=N}^{2N} \binom{k-2}{N-1} \frac{1}{2^k} \right) + \left( \sum_{k=N}^{2N} \binom{k-2}{N-2} \frac{1}{2^k} \right) \\ &= \frac{1}{2} \left( \sum_{k=N+1}^{2N} \binom{k-2}{N-1} \frac{1}{2^{k-1}} \right) + \frac{1}{2} \left( \sum_{k=N}^{2N-1} \binom{k-2}{N-2} \frac{1}{2^{k-1}} \right) + \binom{2N-2}{N-2} \frac{1}{2^{2N}} \\ &= \frac{1}{2} p_{1,N+1} - \binom{2N-1}{N-1} \frac{1}{2^{2N+1}} + \frac{1}{2} p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N}}.\end{align*} Pulling the copy of $\frac{1}{2}p_{1,N+1}$ back onto the left-hand side and then multiplying by 2, we get the recursion formula \begin{align*} p_{1,N+1} &= p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N-1}} - \binom{2N-1}{N-1} \frac{1}{2^{2N}} \\ &= p_{1,N} + \binom{2N-2}{N-2} \frac{1}{2^{2N-1}} - \left( \binom{2N-2}{N-1} + \binom{2N-2}{N-2} \right) \frac{1}{2^{2N}} \\ &= p_{1,N} - \frac{1}{4^N} \left( \binom{2N-2}{N-1} - \binom{2N-2}{N-2} \right) \\ &= p_{1,N} - \frac{1}{4^N} C_{N-1}, \end{align*} where $C_n = \frac{1}{n+1} \binom{2n}{n} = \binom{2n}{n} - \binom{2n}{n+1},$ for $n \in \mathbb{N},$ is the $n$th Catalan number.
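The recursion is easy to verify against the direct binomial sum (a Python sketch; the names are mine, and `math.comb` does the heavy lifting):

```python
from math import comb

def p1_direct(N):
    # direct sum over series length k = N, ..., 2N - 1 (valid for N >= 2)
    return sum(comb(k - 2, N - 2) / 2 ** (k - 1) for k in range(N, 2 * N))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def p1_recursive(N):
    p = 1.0  # p_{1,1} = 1: a winner-takes-all one-game playoff
    for m in range(1, N):
        p -= catalan(m - 1) / 4 ** m
    return p
```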

Since we start with $p_{1,1} = 1,$ we then see that $$p_{1,N} = 1 - \sum_{k=1}^{N-1} \frac{C_{k-1}}{4^k} = 1 - \frac{1}{4} \sum_{k=0}^{N-2} \frac{C_k}{4^k}, \,\, \forall N \in \mathbb{N}.$$ We can rely on the fact that the generating function of the Catalan numbers is $$c(x) = \sum_{n=0}^\infty C_n x^n = \frac{1 - \sqrt{1-4x}}{2x},$$ so that $$\frac{1}{4} \sum_{k=0}^\infty \frac{C_k}{4^k} = \frac{1}{4} c(\frac{1}{4}) = \frac{1}{4} \frac{1 - \sqrt{1 - 4 \cdot \frac{1}{4}}}{2 \cdot \frac{1}{4}} = \frac{1}{2}.$$ Therefore, we see that $$p_{1,N} = 1 - \frac{1}{4} \sum_{k=0}^{N-2} \frac{C_k}{4^k} = 1 - \frac{1}{4} \sum_{k=0}^\infty \frac{C_k}{4^k} + \frac{1}{4}\sum_{k=N-1}^\infty \frac{C_k}{4^k} = \frac{1}{2} + \frac{1}{4} \sum_{k=N-1}^\infty \frac{C_k}{4^k},$$ for all $N \in \mathbb{N}.$ Now when $k$ is large we have $$C_k \sim \frac{4^k}{k^{3/2} \sqrt{\pi}},$$ from Stirling's approximation, so when $N$ is sufficiently large we have $$\frac{1}{4} \sum_{k=N-1}^\infty \frac{C_k}{4^k} \approx \frac{1}{4\sqrt{\pi}} \sum_{k=N-1}^\infty k^{-3/2} \approx \frac{1}{2\sqrt{\pi (N-1)}},$$ where the last approximation is due to the fact that $\sum_{k=N}^\infty k^{-p} \sim \int_N^\infty x^{-p} \,dx.$ Therefore, in a fairly concise way, we have $$p_{1,N} \approx \frac{1}{2} + \frac{1}{2\sqrt{\pi(N-1)}},$$ when $N$ is large, so the probability swing of winning the first game is $$\Delta_N = 2p_{1,N} -1 \approx \frac{1}{\sqrt{\pi(N-1)}}$$ when $N$ is large.
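Comparing the exact swing against the asymptotic formula numerically (a Python sketch; names are mine) shows they agree to well under a percent already at $N = 1000$:

```python
import math
from math import comb

def swing_exact(N):
    # Delta_N = 2 p_{1,N} - 1 via the exact binomial sum (N >= 2)
    p1 = sum(comb(k - 2, N - 2) / 2 ** (k - 1) for k in range(N, 2 * N))
    return 2 * p1 - 1

def swing_approx(N):
    return 1 / math.sqrt(math.pi * (N - 1))
```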

Swinging the probabilities

You and your opponent are beginning a best-of-seven series, meaning the first team to win four games wins the series. Both teams are evenly matched, meaning each team has a 50 percent chance of winning each game, independent of the outcomes of previous games.

As the team manager, you are trying to motivate your team as to the criticality of the first game in the series (i.e., “Game 1”). You’d specifically like to educate them regarding the “probability swing” coming out of Game 1—that is, the probability of winning the series if they win Game 1 minus the probability of winning the series if they lose Game 1. (For example, the probability swing for a winner-take-all Game 7 is 100 percent.)

What is the probability swing for Game 1?

Let's break this down as follows. Let $p_1 = \mathbb{P} \{ \text{win series} \mid \text{win game 1} \}.$ In order to win the series, you must win it in $k$ games for some $k=4, 5, 6, 7,$ so let's further let $$p_{1,k} = \mathbb{P} \{ \text{win series in $k$ games} \mid \text{win game 1} \},$$ where here we see that $p_1 = \sum_{k=4}^7 p_{1,k}.$ Now, the total number of ways to win the first and $k$th games and two others somewhere in games $2$ through $k-1$ is given by $\binom{k-2}{2}$ and the overall probability of any particular combination of $k$ games is $\frac{1}{2^k},$ so $$p_{1,k} = \frac{\mathbb{P} \{ \text{win series in $k$ games and win game $1$} \}}{\mathbb{P} \{ \text{win game 1 } \}} = \binom{k-2}{2} \frac{1}{2^{k-1}}.$$ Therefore, $$p_1 = \sum_{k=4}^7 \binom{k-2}{2} \frac{1}{2^{k-1}} = \frac{1}{2} \sum_{k=4}^7 \binom{k-2}{2} \frac{1}{2^{k-2}} = \frac{1}{2} \sum_{k=2}^5 \binom{k}{2} \frac{1}{2^k}.$$

Now one way of computing $p_1$ would be using some generating function wizardry. Define the function $f(x) = \sum_{k=2}^5 \binom{k}{2} x^k,$ in which case, $p_1 = \frac{1}{2} f(\frac{1}{2}).$ Now we also see that \begin{align*} f(x) &= \frac{1}{2} x^2 \sum_{k=2}^5 k(k-1) x^{k-2} \\ &= \frac{1}{2} x^2 \frac{d^2}{dx^2} \left( \frac{1-x^6}{1-x} \right) \\ &= \frac{1}{2} x^2 \frac{d}{dx} \left( \frac{1-6x^5+5x^6}{(1-x)^2} \right) \\ &= \frac{1}{2} x^2 \frac{2(1-15x^4+24x^5-10x^6)}{(1-x)^3} \\ &= \frac{ x^2 ( 1 - 15 x^4 + 24x^5 -10x^6) }{(1-x)^3}.\end{align*} So we have $$p_1 = \frac{1}{2} f(\frac{1}{2}) = \frac{1}{2} \frac{ \frac{1}{4} \left( 1 - \frac{15}{16} + \frac{24}{32} - \frac{10}{64} \right) }{ \frac{1}{8} } = \frac{ 42}{64} = \frac{21}{32}.$$
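Since always playing out all seven games doesn't change who reaches four wins first, a brute-force enumeration in Python (variable names are mine) confirms $p_1 = \frac{21}{32}$:

```python
from itertools import product

# Playing all seven games changes nothing: the first team to four wins is
# exactly the team that wins at least four of the seven. Having won game 1,
# we need at least three more wins among games 2 through 7.
wins = sum(1 for rest in product((0, 1), repeat=6) if 1 + sum(rest) >= 4)
p1 = wins / 2 ** 6
```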

Now from symmetry, the probability of winning the series having lost the first game, say $q_1 = \mathbb{P} \{ \text{win series} \mid \text{lose game 1} \},$ equals the probability that your opponent, having won the first game, goes on to lose the series. That is, $q_1 = 1 - p_1.$ So the probability swing of the first game is $$\Delta = p_1 - q_1 = p_1 - (1- p_1) = 2p_1 - 1 = 2 \cdot \frac{21}{32} - 1 = \frac{5}{16} = 31.25\%.$$