
Monday, March 31, 2025

Boosting busts more seeds in bigger tourneys

Instead of four teams, now there are 2^6, or 64, teams seeded from 1 through 64. The power index of each team is equal to 65 minus that team's seed.

The teams play in a traditional seeded tournament format. That is, in the first round, the sum of opponents' seeds is 2^6 + 1, or 65. If the stronger team always advances, then the sum of opponents' seeds in the second round is 2^5 + 1, or 33, and so on.

Once again, the underdog in every match gets a power index boost B, where B is some positive non-integer. Depending on the value of B, different teams will win the tournament. Of the 64 teams, how many can never win, regardless of the value of B?

After setting up the 64-team bracket, we can choose various values for B and then compute the winners directly. Since the outcome remains the same for all non-integer values of B in (b, b+1), we need only select a total of 64 representative values, e.g., B = b + 0.5 for b = 0, 1, \dots, 63. The outcomes of the 64-team tournament bracket are summarized in the table below. We note that there are 27 seeds (namely, 7, 13-15, and 25-47) that never win the tournament. Most unlucky among them are seeds 7, 15, and 47: each has a set of B values of measure 3 for which it ends up as the runner-up of the tournament, though it never wins it.

B Winning Seed Runner Up
(0,1) 1 2
(1,2) 1 3
(2,3) 3 1
(3,4) 2 1
(4,5) 6 5
(5,6) 5 3
(6,7) 5 3
(7,8) 4 3
(8,9) 12 11
(9,10) 11 9
(10,11) 11 9
(11,12) 10 9
(12,13) 10 9
(13,14) 9 7
(14,15) 9 7
(15,16) 8 7
(16,17) 24 23
(17,18) 23 21
(18,19) 23 21
(19,20) 22 21
(20,21) 22 21
(21,22) 21 19
(22,23) 21 19
(23,24) 20 19
(24,25) 20 19
(25,26) 19 17
(26,27) 19 17
(27,28) 18 17
(28,29) 18 17
(29,30) 17 15
(30,31) 17 15
(31,32) 16 15
(32,33) 48 47
(33,34) 49 47
(34,35) 49 47
(35,36) 50 49
(36,37) 50 49
(37,38) 51 49
(38,39) 51 49
(39,40) 52 51
(40,41) 52 51
(41,42) 53 51
(42,43) 53 51
(43,44) 54 53
(44,45) 54 53
(45,46) 55 53
(46,47) 55 53
(47,48) 56 55
(48,49) 56 55
(49,50) 57 55
(50,51) 57 55
(51,52) 58 57
(52,53) 58 57
(53,54) 59 57
(54,55) 59 57
(55,56) 60 59
(56,57) 60 59
(57,58) 61 59
(58,59) 61 59
(59,60) 62 61
(60,61) 62 61
(61,62) 63 61
(62,63) 63 61
>63 64 63
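The scan behind this table is straightforward to code. The sketch below (function names are my own) builds the traditional bracket, in which each round's opponents' seeds sum to 2^k + 1, applies the underdog boost, and tries one representative B per unit interval:

```python
def bracket_order(n):
    """Seed order of a traditional bracket: in every round the two
    opponents' seeds sum to 2^k + 1 (e.g. [1, 4, 2, 3] for n = 4)."""
    order = [1]
    while len(order) < n:
        m = 2 * len(order) + 1
        order = [s for seed in order for s in (seed, m - seed)]
    return order

def champion(n, B):
    """Winner of an n-team tournament where the power index is
    n + 1 - seed and the underdog in each match is boosted by B."""
    teams = bracket_order(n)
    while len(teams) > 1:
        nxt = []
        for a, b in zip(teams[::2], teams[1::2]):
            pa, pb = n + 1 - a, n + 1 - b
            if pa < pb:
                pa += B
            else:
                pb += B
            nxt.append(a if pa > pb else b)
        teams = nxt
    return teams[0]

# One representative B per unit interval suffices, since the winner is
# constant for non-integer B within (b, b + 1).
winners = {champion(64, b + 0.5) for b in range(64)}
never = sorted(set(range(1, 65)) - winners)
print(len(never), never)   # 27 seeds that can never win
```

Note that an underdog seeded s_u facing a favorite seeded s_f wins exactly when s_u - s_f < B, which is what the comparison of boosted power indices implements.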

Boosting will cause the 2 seed to always bust

Once again, there are four teams remaining in a bracket: the 1-seed, the 2-seed, the 3-seed, and the 4-seed. In the first round, the 1-seed faces the 4-seed, while the 2-seed faces the 3-seed. The winners of these two matches then face each other in the regional final.

Also, each team possesses a “power index” equal to 5 minus that team’s seed. In other words:

  • The 1-seed has a power index of 4.
  • The 2-seed has a power index of 3.
  • The 3-seed has a power index of 2.
  • The 4-seed has a power index of 1.

In any given matchup, the team with the greater power index would emerge victorious. However, March Madness fans love to root for the underdog. As a result, the team with the lower power index gets an effective “boost” B, where B is some positive non-integer. For example, B could be 0.5, 133.7, or 2π, but not 1 or 42.

As an illustration, consider the matchup between the 2- and 3-seeds. The favored 2-seed has a power index of 3, while the underdog 3-seed has a power index of 2+B. When B is greater than 1, the 3-seed will defeat the 2-seed in an upset.

Depending on the value of B, different teams will win the tournament. Of the four teams, how many can never win, regardless of the value of B?

As shown in the prompt, if B<1 then the 2-seed beats the 3-seed in the first round. On the other side of the bracket, the 1-seed beats the 4-seed, since 4 > 1+B in this case; then in the final round the 1-seed beats the 2-seed, since again 4 > 3+B when B<1. So whenever B<1, the 1-seed wins the championship. On the other hand, whenever B>1, the 2-seed loses to the 3-seed in the first round. Therefore, the 2-seed can never win the championship.

Whenever B \in (2,3), the 3-seed beats the 1-seed for the championship, since 2+B > 4. Whenever B > 3, the 4-seed beats the 1-seed in the first round and then goes on to beat the 3-seed for the championship, since 1+B > 2. Thus, there are values of B for which the 1-, 3-, and 4-seeds each win the championship. So out of the four remaining teams, only one of them (the 2-seed) can never win.
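This case analysis is easy to verify by brute force. A minimal sketch (the helper names are mine) that plays out the four-team bracket for one sample B per unit interval:

```python
def champ4(B):
    """Play the four-team bracket (1 vs 4, 2 vs 3, winners meet);
    power index is 5 - seed, and the underdog gets the boost B."""
    def match(a, b):
        pa, pb = 5 - a, 5 - b
        if pa < pb:
            pa += B
        else:
            pb += B
        return a if pa > pb else b
    return match(match(1, 4), match(2, 3))

# One sample B per unit interval of non-integer boosts.
print({B: champ4(B) for B in (0.5, 1.5, 2.5, 3.5, 4.5)})
# {0.5: 1, 1.5: 1, 2.5: 3, 3.5: 4, 4.5: 4} -- the 2-seed never appears
```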

Sunday, March 16, 2025

An Extra Credit π day π-cnic in π-land

Suppose the island of π-land, as described above, has a radius of 1 mile. That is, Diametric Beach has a length of 2 miles. Again, you are picking a random point on the island for a picnic. On average, what will be the expected shortest distance to shore?

Here we want to calculate the average minimum distance to shore E = \frac{1}{A} \int_{-R}^R \int_0^{\sqrt{R^2 - x^2}} \min \left\{ d_D, d_S \right\} \,dy \,dx, where d_D and d_S are defined as in the Classic Fiddler answer. As we saw in the Classic Fiddler answer, the region where d_D \leq d_S is given by \Omega, so we can break the integral above into two sections E = \frac{1}{A} \int_{-R}^R \int_0^{\frac{R^2 - x^2}{2R}} y \,dy\,dx + \frac{1}{A} \int_{-R}^R \int_{\frac{R^2-x^2}{2R}}^{\sqrt{R^2 - x^2}} \left(R - \sqrt{x^2+y^2}\right) \,dy \,dx := E_1 + E_2. The first integral, E_1, is relatively straightforward to do, \begin{align*}E_1 = \frac{1}{A} \int_{-R}^R \int_0^{\frac{R^2 - x^2}{2R}} y \, dy \,dx &= \frac{4}{\pi R^2} \int_0^R \left( \frac{y^2}{2} \right)_{y=0}^{y= \frac{R^2 - x^2}{2R}} \,dx\\ &= \frac{4}{\pi R^2} \int_0^R \frac{1}{2} \left( \frac{R^2 - x^2 }{2R} \right)^2 \,dx \\ &= \frac{4}{\pi R^2} \int_0^R \frac{1}{8R^2} \left( x^4 - 2R^2 x^2 + R^4 \right) \,dx \\ &= \frac{1}{2\pi R^4} \left[ \frac{x^5}{5} - \frac{2R^2x^3}{3} + R^4 x \right]_{x=0}^{x=R} \\ &= \frac{1}{2\pi R^4} \left( \frac{R^5}{5} - \frac{2R^5}{3} + R^5 \right) \\ &= \frac{R}{2\pi} \left( \frac{1}{5} - \frac{2}{3} + 1 \right) \\ &= \frac{R}{2\pi} \cdot \frac{3 - 10 + 15}{15} = \frac{R}{2\pi} \cdot \frac{8}{15} = \frac{4R}{15 \pi} \end{align*}

For the second integral, it will be easier to switch to polar coordinates, then do some trig-substitutions to get a handle on the resulting integral. The curve y = \frac{R^2 - x^2}{2R} is equivalent to the equation x^2 + 2R y - R^2 = 0 which is given by r^2 \cos^2 \theta + 2R r \sin \theta - R^2 = 0 in polar coordinates. Solving the quadratic in terms of r and choosing the positive solution when \sin \theta \geq 0 since \theta \in [0,\pi], we get \begin{align*}r &= \frac{ -2R \sin \theta + \sqrt{ 4R^2 \sin^2 \theta + 4R^2 \cos^2 \theta} }{2 \cos^2 \theta} \\ &= \frac{-2R \sin \theta + 2R}{2 \cos^2 \theta} \\ &= \frac{ R \left( 1 - \sin \theta \right) }{ \cos^2 \theta } \\ &= \frac{R}{1 + \sin \theta},\end{align*} since \cos^2 \theta = 1 - \sin^2 \theta = (1 - \sin \theta) (1 + \sin \theta). Therefore, we have \begin{align*}E_2 &= \frac{1}{A} \int_{-R}^R \int_{\frac{R^2 - x^2}{2R}}^{\sqrt{R^2 - x^2}} \left( R - \sqrt{x^2 + y^2} \right) \,dy \,dx \\ &= \frac{2}{\pi R^2} \int_0^\pi \int_{\frac{R}{1 + \sin \theta}}^R (R - r) r \,dr \, d\theta \\ &= \frac{2}{\pi R^2} \int_0^\pi \left[ \frac{Rr^2}{2} - \frac{r^3}{3} \right]^{r=R}_{r = \frac{R}{1 + \sin \theta}} \, d\theta \\ &= \frac{2}{\pi R^2} \int_0^\pi \left( \frac{R^3}{2} - \frac{R^3}{3} \right) - \left( \frac{R^3}{2(1 + \sin \theta)^2} - \frac{R^3}{3(1 + \sin \theta)^3} \right) \,d\theta \\ &= \frac{R}{3} - \frac{R}{3\pi} \int_0^\pi \frac{3 \sin \theta + 1}{(1 + \sin \theta)^3} \,d\theta = \frac{R}{3} - I.\end{align*} In order to calculate the last integral I, we need to do a trigonometric half-angle substitution with u = \tan \frac{\theta}{2}. Here we see that du = \frac{1}{2} \sec^2 \frac{\theta}{2} \,d\theta = \frac{1}{2} \left(1 + \tan^2 \frac{\theta}{2} \right) \,d\theta = \frac{1 + u^2}{2} \,d\theta and further that \sin \theta = 2 \sin \frac{\theta}{2} \cos \frac{\theta}{2} = 2 \tan \frac{\theta}{2} \cos^2 \frac{\theta}{2} = \frac{2 \tan \frac{\theta}{2}}{1 + \tan^2 \frac{\theta}{2}} = \frac{2u}{1+u^2}. 
We additionally see that when \theta = 0, then u = 0, whereas as \theta \to \pi, we have u = \tan \frac{\theta}{2} \to \infty. Therefore we get \begin{align*} I &= \frac{R}{3\pi} \int_0^\pi \frac{3 \sin \theta + 1}{(1 + \sin \theta)^3} \,d\theta \\ &= \frac{R}{3\pi} \int_0^\infty \frac{ 3 \frac{2u}{1+u^2} + 1 }{\left(1 + \frac{2u}{1 + u^2}\right)^3} \frac{2 \,du}{1 + u^2} \\ &= \frac{2R}{3\pi} \int_0^\infty \frac{ (1 + 6u + u^2) ( 1 + u^2 ) }{ (1 + u)^6 } \, du.\end{align*} Making a further substitution of v = 1 + u, we then get \begin{align*} I = \frac{2R}{3\pi} \int_0^\infty \frac{ (1 + 6u + u^2) ( 1 + u^2 ) }{ (1 + u)^6 } \, du &= \frac{2R}{3\pi} \int_1^\infty \frac{ (1 + 6(v-1) + (v-1)^2) ( 1 + (v-1)^2 ) }{v^6} \, dv \\ &= \frac{2R}{3\pi} \int_1^\infty \frac{ (v^2 + 4v - 4)(v^2 - 2v + 2) }{v^6} \,dv \\ &= \frac{2R}{3\pi} \int_1^\infty v^{-2} + 2v^{-3} - 10v^{-4} + 16v^{-5} -8v^{-6} \,dv \\ &= \frac{2R}{3\pi} \left[ -\frac{1}{v} -\frac{1}{v^2} + \frac{10}{3v^3} -\frac{4}{v^4} + \frac{8}{5v^5} \right]_{v=1}^{v=\infty} \\ &= -\frac{2R}{3\pi} \left( -1 -1 + \frac{10}{3} -4 + \frac{8}{5} \right) = \frac{32R}{45\pi} \end{align*} Putting this all together we get E_2 = \frac{R}{3} - \frac{32R}{45\pi} = \frac{R(15\pi - 32)}{45 \pi}.

Therefore, the average minimal distance to the beach on \pi-land if the picnic spot is uniformly randomly chosen over the area of \pi-land is E = E_1 + E_2 = \frac{4R}{15\pi} + \frac{R(15 \pi - 32)}{45 \pi} = \frac{R(15\pi -20)}{45 \pi} = \frac{R(3\pi - 4)}{9\pi}. So in particular, if R = 1, then we get an average distance to the beach of E = \frac{3 \pi - 4}{9\pi} \approx 0.191862272807\dots.
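As a numerical check on E = \frac{R(3\pi - 4)}{9\pi}, we can approximate the double integral directly with a midpoint rule on a fine grid over the unit-radius semi-disk (taking R = 1):

```python
import math

# Midpoint-rule approximation of E = mean of min(d_D, d_S) over the
# unit-radius semi-disk (R = 1): d_D = y, d_S = 1 - sqrt(x^2 + y^2).
n = 1000
total, count = 0.0, 0
for i in range(n):
    x = -1 + 2 * (i + 0.5) / n
    for j in range(n // 2):
        y = (j + 0.5) / (n // 2)
        if x * x + y * y <= 1:
            total += min(y, 1 - math.hypot(x, y))
            count += 1
mean = total / count
exact = (3 * math.pi - 4) / (9 * math.pi)
print(mean, exact)   # both approximately 0.192
```

Dividing by the number of in-disk grid cells (rather than a fixed area) keeps the estimate self-normalizing, so boundary-cell misclassification largely cancels.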

A π day π-cnic on π-land

You are planning a picnic on the remote tropical island of \pi-land. The island’s shape is a perfect semi-disk with two beaches, as illustrated below: Semicircular Beach (along the northern semicircular edge of the disk) and Diametric Beach (along the southern diameter of the disk).

If you pick a random spot on \pi-land for your picnic, what is the probability that it will be closer to Diametric Beach than to Semicircular Beach? (Unlike the illustrative diagram above, assume the beaches have zero width.)

The local \pi-landers typically measure everything from the midpoint of the Diametric Beach, so let's assume that point is the origin of our xy-plane, with Diametric Beach coinciding with the x-axis. In this case, assuming that \pi-land has a radius of R, then the entire area of \pi-land is A = \frac{1}{2} \pi R^2. At any point (x,y) on the \pi-land the distance to the Diametric Beach is given by d_D (x,y) = y, while the distance to the Semicircular Beach is d_S (x,y) = R - \sqrt{x^2 + y^2}. So the region of \pi-land that is closer to Diametric Beach than to Semicircular Beach is given by \begin{align*} \Omega &= \left\{ (x,y) \in \mathbb{R}^2_+ \mid x^2 + y^2 \leq R^2, d_D(x,y) \leq d_S(x,y) \right\} \\ &= \left\{ (x,y) \in \mathbb{R}_+^2 \mid y \leq R - \sqrt{x^2+ y^2} \right\} \\ &= \left\{ (x,y) \in \mathbb{R}^2_+ \mid x^2 + y^2 \leq (R-y)^2 \right\} \\ &= \left\{ (x,y) \in \mathbb{R}^2_+ \mid x^2 \leq R^2 -2Ry \right\} \\ &= \left\{ (x,y) \in \mathbb{R}^2_+ \mid y \leq \frac{R^2 - x^2}{2R} \right\} \end{align*}

The area of \Omega is given by the integral A_\Omega = \int_{-R}^R \frac{R^2 - x^2}{2R} \,dx = 2 \left[ \frac{Rx}{2} - \frac{x^3}{6R} \right]^{x=R}_{x=0} = 2 \left( \frac{R^2}{2} - \frac{R^2}{6} \right) = \frac{2R^2}{3}. Therefore, the probability of randomly choosing a \pi-land picnic spot closer to Diametric Beach than to Semicircular Beach, that is, in \Omega, is given by p = \frac{A_\Omega}{A} = \frac{ \frac{2R^2}{3} }{ \frac{\pi R^2}{2} } = \frac{4}{3\pi} \approx 0.424413181578....
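This probability is also easy to confirm by simulation. The sketch below rejection-samples uniform picnic spots on the unit semi-disk (R = 1) and compares the two shore distances:

```python
import math
import random

random.seed(20250314)
# Rejection-sample uniform picnic spots on the unit semi-disk and count
# how often the spot is closer to Diametric Beach (distance y) than to
# Semicircular Beach (distance 1 - sqrt(x^2 + y^2)).
trials = closer = 0
while trials < 200_000:
    x, y = random.uniform(-1, 1), random.uniform(0, 1)
    if x * x + y * y > 1:
        continue
    trials += 1
    if y <= 1 - math.hypot(x, y):
        closer += 1
print(closer / trials, 4 / (3 * math.pi))   # both approximately 0.42
```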

Sunday, March 9, 2025

Well below average domino placement

You are placing many, many dominoes in a straight line, one at a time. However, each time you place a domino, there is a 1 percent chance that you accidentally tip it over, causing a chain reaction that tips over all dominoes you’ve placed. After a chain reaction, you start over again.

If you do this many, many times, what can you expect the median (note: not the average) number of dominoes placed when a chain reaction occurs (including the domino that causes the chain reaction)?

Let's abstract to a generic probability p \gt 0 of accidentally tipping over all of the dominoes. Let D_p be the random variable representing the total number of dominoes placed (including the domino that you tip over) when each individual domino has probability p \gt 0 of being accidentally tipped; then we have \mathbb{P} \left[ D_p = d \right] = p (1-p)^{d-1}, since in order to tip over the dth domino you must first not have knocked over any of the first (d-1) dominoes, each placed safely with probability (1-p), and then knock over the dth domino, with probability p.

So for any d \gt 0, the probability of D_p \leq d is equal to \mathbb{P} \left[ D_p \leq d \right] = \sum_{m=1}^{d} \mathbb{P} \left[ D_p = m \right] = \sum_{m=1}^d p(1-p)^{m-1} = p \cdot \frac{1 - (1-p)^d}{1-(1-p)} = 1 - (1-p)^d. Alternatively, we have \mathbb{P} \left[ D_p \gt d \right] = 1 - \mathbb{P} \left[ D_p \leq d \right] = (1-p)^d. The integer M_p is the median of the distribution of D_p if and only if we have \mathbb{P} \left[ D_p \leq M_p \right] = 1 - (1 - p)^{M_p} \geq \frac{1}{2} and \mathbb{P} \left[ D_p \geq M_p \right] = \mathbb{P} \left[ D_p \gt M_p \right] + \mathbb{P} \left[ D_p = M_p \right] = (1-p)^{M_p-1} \geq \frac{1}{2}. Therefore, combining these two inequalities we need (1-p)^{M_p} \leq \frac{1}{2} \lt (1-p)^{M_p - 1}. Taking natural logarithms of all sides we get M_p \ln (1-p) \leq -\ln 2 \lt (M_p - 1) \ln(1-p), or equivalently M_p - 1 \lt -\frac{\ln 2}{\ln (1-p)} \leq M_p, or equivalently M_p = \Big\lceil -\frac{\ln 2}{\ln (1-p)} \Big\rceil.

Therefore, if p = 0.01, then we get M_{0.01} = \Big\lceil -\frac{\ln 2}{\ln 0.99} \Big\rceil = \lceil 68.9675639365\dots \rceil = 69 as the median number of dominoes that you will have placed when they all get knocked down. For further confirmation, we see that \mathbb{P} \left[ D_{0.01} \leq 68 \right] = 0.495114111213\dots \lt \frac{1}{2} \lt \mathbb{P} \left[ D_{0.01} \leq 69 \right] = 0.500162970101\dots.
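The median formula and the two CDF inequalities that characterize it are a one-liner to verify numerically (the function name below is my own):

```python
import math

def median_dominoes(p):
    """Median of the number of dominoes placed when each placement has
    probability p of triggering the chain reaction: the smallest
    integer M with (1 - p)**M <= 1/2."""
    return math.ceil(-math.log(2) / math.log(1 - p))

M = median_dominoes(0.01)
cdf = lambda d: 1 - (1 - 0.01) ** d   # P[D_p <= d] for p = 0.01
print(M, cdf(M - 1), cdf(M))   # 69, just below 1/2, just above 1/2
```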

Though the note said to explicitly ignore the expectation, we see that \begin{align*}E_p = \mathbb{E} \left[ D_p \right] &= \sum_{m=1}^\infty m \mathbb{P} \left[ D_p = m \right]\\ &= \sum_{m=1}^\infty m p (1-p)^{m-1} \\ &= p \sum_{m=1}^\infty m (1-p)^{m-1} \\ &= p \left. \frac{d}{dt} \left( \frac{1}{1-t} \right) \right|_{t=1-p} \\ &= p \left. \frac{1}{(1-t)^2} \right|_{t = 1-p} \\ &= p \cdot \frac{1}{(1 - (1-p))^2} = p \cdot \frac{1}{p^2} = \frac{1}{p}.\end{align*} In particular, we see that for the Classic question, we have E_{0.01} = 100 \gt M_{0.01} = 69.

The Extra Credit question looks at the limit of M_p / E_p = p M_p as p \downarrow 0. Given the formulae above, we see that the limit of the ratio of the median to the average number of dominoes placed is given by \lim_{p \to 0} p M_p = \lim_{p \to 0} p \cdot \left( - \frac{\ln 2}{\ln ( 1- p) } \right) = \lim_{p \to 0} \frac{ p\ln 2}{ - \ln (1-p) } = \lim_{p \to 0} \ln 2 \, (1-p) = \ln 2, where the first equal sign comes from the squeeze theorem (since px \leq p \lceil x \rceil \lt p(x+1)) and the second-to-last equal sign is an application of L'Hôpital's Rule. So not only do we have M_p \lt E_p in the case of p = 0.01, but also for all sufficiently small p.

Monday, March 3, 2025

Magical Rabbit Pulling

I have a hat with six small toy rabbits: two are orange, two are green, and two purple. I shuffle the rabbits around and randomly draw them out one at a time without replacement (i.e., once I draw a rabbit out, I never put it back in again).

Your job is to guess the color of each rabbit I draw out. For each guess, you know the history of the rabbits I’ve already drawn. So if we’re down to the final rabbit in the hat, you should be able to predict its color with certainty.

Every time you correctly predict the color of the rabbit I draw, you earn a point. If you play optimally (i.e., to maximize how many points you get), how many points can you expect to earn on average?

Let's define the Markov chain on \mathbb{N}^4 with states X = (g, o, p, s), where g is the number of green rabbits, o is the number of orange rabbits, p is the number of purple rabbits and s is the current score. The state X = (g, o, p, s) in this magical rabbit pulling Markov chain is adjacent to the following states: (g-1, o, p, s), (g-1, o, p, s+1), (g, o-1, p, s), (g, o-1, p, s+1), (g, o, p-1, s) and (g,o, p-1, s+1), provided that the resulting quadruple remains in \mathbb{N}^4. The optimal strategy is to guess one of the most numerous remaining colors; writing k = \left| \arg\max \{ g, o, p \} \right| for the number of colors tied for the maximum (with ties broken uniformly at random), the transition probabilities are given by the following: \begin{align*} p( X^{n+1} = (g-1,o,p,s) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{g}{g+o+p} \left( 1 - \frac{1}{k} \right), & \text{if $g = \max \{ g, o, p \};$}\\ \frac{g}{g+o+p}, &\text{if $g \lt \max \{ g, o, p \}$}\end{cases}\\ p( X^{n+1} = (g-1,o,p,s+1) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{g}{g+o+p} \cdot \frac{1}{k}, & \text{if $g = \max \{ g, o, p \};$}\\ 0, &\text{if $g \lt \max \{ g, o, p \}$}\end{cases} \\ p( X^{n+1} = (g,o-1,p,s) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{o}{g+o+p} \left( 1 - \frac{1}{k} \right), & \text{if $o = \max \{ g, o, p \};$}\\ \frac{o}{g+o+p}, &\text{if $o \lt \max \{ g, o, p \}$}\end{cases}\\ p( X^{n+1} = (g,o-1,p,s+1) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{o}{g+o+p} \cdot \frac{1}{k}, & \text{if $o = \max \{ g, o, p \};$}\\ 0, &\text{if $o \lt \max \{ g, o, p \}$}\end{cases} \\ p( X^{n+1} = (g,o,p-1,s) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{p}{g+o+p} \left( 1 - \frac{1}{k} \right), & \text{if $p = \max \{ g, o, p \};$}\\ \frac{p}{g+o+p}, &\text{if $p \lt \max \{ g, o, p \}$}\end{cases}\\ p( X^{n+1} = (g,o,p-1,s+1) \mid X^n = (g,o,p,s) ) &= \begin{cases} \frac{p}{g+o+p} \cdot \frac{1}{k}, & \text{if $p = \max \{ g, o, p \};$}\\ 0, &\text{if $p \lt \max \{ g, o, p \}$}\end{cases} \end{align*} Here again we are only going to define the probabilities to the extent that these states remain in \mathbb{N}^4. In order to fully specify this as a Markov chain, let's assert that p( X^{n+1} = (0,0,0,s) \mid X^n = (0,0,0,s) ) = 1 for each s \in \mathbb{N} and that all transitions not explicitly stated have probability zero.

Using these transition probabilities we can define the function f : \mathbb{N}^4 \to \mathbb{Q} to be the expected total score once there are no more rabbits left in the hat, conditional on starting at X^0 = (g, o, p, s); that is, f(g, o, p, s) = \mathbb{E} \left[ e_4^T X^\infty \mid X^0 = (g,o,p,s) \right], where \{ X^n \}_{n \in \mathbb{N}} is the Markov chain specified above. Given the transition probabilities, we have the boundary conditions f(0,0,0,s) = s, for all s \in \mathbb{N}, with the recursion formula \begin{align*}f(g,o,p,s) &= I ( g \geq 1 ) \left( p( (g,o,p,s) \to (g-1, o, p, s) ) f(g-1, o, p, s) + p( (g, o, p, s) \to (g-1, o, p, s+1) ) f(g-1,o,p,s+1) \right) \\ & \quad + I ( o \geq 1 ) \left( p( (g,o,p,s) \to (g, o-1, p, s) ) f(g, o-1, p, s) + p( (g, o, p, s) \to (g, o-1, p, s+1) ) f(g,o-1,p,s+1) \right) \\ & \quad + I(p \geq 1) \left( p( (g,o,p,s) \to (g, o, p-1, s) ) f(g, o, p-1, s) + p( (g, o, p, s) \to (g, o, p-1, s+1) ) f(g,o,p-1,s+1) \right) \end{align*}

For the case where we start out with g = o = p = 2 rabbits and an initial score of s = 0, we can apply this formula by hand repeatedly and eventually work out an average score of f(2,2,2,0) = \frac{101}{30} = 3.3\overline{6} if playing optimally.

For the case of starting with g = o = p = 10 rabbits and s = 0, there are too many cases for most people (or at least for me) to keep track of by hand, so we can appeal to a quick recursive function with memoization to make the computation relatively fast. For instance, see the quick Python implementation below. This then gives an average score in the Extra Credit case of f(10,10,10,0) = 13.703132371084621\dots if playing optimally.
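Here is one such implementation, a sketch using exact rational arithmetic. Since the score coordinate s only shifts the expectation additively, it is dropped from the state; and since any most-numerous color is a correct guess with probability \max\{g,o,p\}/(g+o+p) regardless of tie-breaking, that term can be pulled out of the transition sum:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def f(g, o, p):
    """Expected further points under optimal play with g, o, p rabbits
    left.  Guessing one of the most numerous colors is correct with
    probability max(g, o, p) / (g + o + p), whatever the tie-break,
    so the score coordinate s can be dropped from the state."""
    total = g + o + p
    if total == 0:
        return Fraction(0)
    expected = Fraction(max(g, o, p), total)   # chance this guess scores
    if g:
        expected += Fraction(g, total) * f(g - 1, o, p)
    if o:
        expected += Fraction(o, total) * f(g, o - 1, p)
    if p:
        expected += Fraction(p, total) * f(g, o, p - 1)
    return expected

print(f(2, 2, 2))            # 101/30
print(float(f(10, 10, 10)))  # approximately 13.7031
```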

Sunday, February 23, 2025

Extra Credit LearnedLeague Defensive Efficiency

Now suppose your opponent is equally likely to get one, two, three, four, or five questions correct. As before, you randomly apply the six point values (0, 1, 1, 2, 2, 3) to the six questions. What is the probability that your defensive efficiency will be greater than 50 percent?

In the classic problem we constructed a table for N_2. Here we need to construct similar tables for N_1, N_3, N_4 and N_5. For i=1, there is exactly one way apiece to score 0 or 3 and two ways apiece to score 1 or 2, so we have the following table:

N_1 DE_1 \mathbb{P} \{ N_1 \}
0 1 \frac{1}{6}
1 \frac{2}{3} \frac{1}{3}
2 \frac{1}{3} \frac{1}{3}
3 0 \frac{1}{6}

Similarly, we can construct the other tables as follows:

N_3 DE_3 \mathbb{P} \{ N_3 \}
2 1 \frac{1}{20}
3 \frac{4}{5} \frac{1}{5}
4 \frac{3}{5} \frac{1}{4}
5 \frac{2}{5} \frac{1}{4}
6 \frac{1}{5} \frac{1}{5}
7 0 \frac{1}{20}

N_4 DE_4 \mathbb{P} \{ N_4 \}
4 1 \frac{2}{15}
5 \frac{3}{4} \frac{1}{5}
6 \frac{1}{2} \frac{1}{3}
7 \frac{1}{4} \frac{1}{5}
8 0 \frac{2}{15}

N_5 DE_5 \mathbb{P} \{ N_5 \}
6 1 \frac{1}{6}
7 \frac{2}{3} \frac{1}{3}
8 \frac{1}{3} \frac{1}{3}
9 0 \frac{1}{6}

In particular we see that \mathbb{P} \{ DE_i \gt \frac{1}{2} \} = \begin{cases} \frac{1}{2}, &\text{if $i \in \{1,3,5\}$;}\\ \frac{1}{3}, &\text{if $i \in \{2,4\}$.}\end{cases} So if your opponent answers I questions where I is uniformly distributed on the set \{1, 2,3,4,5\}, then the probability of having a defensive efficiency score greater than 50\% is equal to \begin{align*} \mathbb{P} \{ DE_I \gt \frac{1}{2} \} &= \sum_{i=1}^5 \mathbb{P} \{ DE_i \gt \frac{1}{2} \} \mathbb{P} \{ I = i \} \\ &= \frac{1}{5} \sum_{i=1}^5 \mathbb{P} \{ DE_i \gt \frac{1}{2} \} \\ &= \frac{1}{5} \left( \frac{1}{2} + \frac{1}{3} + \frac{1}{2} + \frac{1}{3} + \frac{1}{2} \right) \\ &= \frac{13}{30} = 43.\bar{3}\%.\end{align*}
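These tables and the final sum can be checked by enumerating all \binom{6}{i} equally likely sets of answered questions. The sketch below assumes, consistent with the tables above, that DE_i rescales the opponent's score N linearly onto [0,1] (1 for the minimum possible score, 0 for the maximum):

```python
from fractions import Fraction
from itertools import combinations

VALUES = (0, 1, 1, 2, 2, 3)   # the six point values, assigned at random

def p_defense_over_half(i):
    """P(DE_i > 1/2) when the opponent answers exactly i questions:
    enumerate all C(6, i) equally likely sets of answered questions,
    treating the two 1s and two 2s as distinct positions."""
    scores = [sum(c) for c in combinations(VALUES, i)]
    lo, hi = min(scores), max(scores)
    good = sum(1 for n in scores if Fraction(hi - n, hi - lo) > Fraction(1, 2))
    return Fraction(good, len(scores))

probs = [p_defense_over_half(i) for i in range(1, 6)]
overall = sum(probs) / 5
print(probs, overall)   # probabilities 1/2, 1/3, 1/2, 1/3, 1/2; overall 13/30
```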