## 4.3 Uniformly Most Powerful Test

In the simple-versus-simple setting, the Neyman-Pearson theorem provides a theoretical justification for the choice of critical region: it identifies the best hypothesis test at a given significance level.

However, when the hypotheses are no longer simple, the situation becomes more complicated.

To extend the result of Neyman-Pearson to non-simple cases, we need the following notion.

**Definition 4.7 (Uniformly Most Powerful Test) **The critical region \(C\) is a *Uniformly Most Powerful* (UMP) critical region
of size \(\alpha\) for testing the simple hypothesis \(H_0\) against an
alternative composite hypothesis \(H_1\) if the set \(C\) is a best critical
region of size \(\alpha\) for testing \(H_0\) against each simple
hypothesis in \(H_1\).

A test defined by this critical region \(C\) is called a uniformly most powerful test, with significance level \(\alpha\), for testing the simple hypothesis \(H_0\) against the alternative composite hypothesis \(H_1\).

**Example 4.3 **Let \(X_1, \dots, X_n\) be a random sample from \(N(\mu, 1)\).
Let \(H_0: \mu = 1\) and \(H_1: \mu > 1\) be our hypotheses.
Suppose we want the significance level to be \(\alpha = 0.9\);
use whatever software you can to find the uniformly most
powerful critical region for this significance level and \(n = 10\).
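Since the ratio of \(N(\mu, 1)\) likelihoods is monotone in \(\bar{x}\), the UMP test rejects for large values of the sample mean. One possible computation (a sketch assuming `scipy` is available; the cutoff \(c\) solves \(P_{\mu_0}(\bar{X} > c) = \alpha\)):

```python
import math

from scipy.stats import norm

n, alpha, mu0 = 10, 0.9, 1.0
# Under H0, Xbar ~ N(mu0, 1/n).  The UMP critical region is
# {Xbar > c} with P_{mu0}(Xbar > c) = alpha, i.e.
# c = mu0 + z_{1-alpha} / sqrt(n).
c = mu0 + norm.ppf(1 - alpha) / math.sqrt(n)
print(round(c, 4))
```

Note that with the unusually large \(\alpha = 0.9\) the cutoff lies below \(\mu_0 = 1\), so the test rejects very often under \(H_0\).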

**Example 4.4 **Let \(X_1, \dots, X_n\) be a random sample from \(N(0, \theta)\).
Let \(H_0: \theta = 3\) and \(H_1: \theta > 3\) be our hypotheses.
Suppose we want the significance level to be \(\alpha = 0.9\);
use whatever software you can to find the uniformly most
powerful critical region for this significance level and \(n = 10\).
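Assuming \(\theta\) denotes the variance, the likelihood ratio is monotone in \(\sum x_i^2\), so the UMP test rejects for large \(\sum X_i^2\). A sketch of the computation (again using `scipy`):

```python
from scipy.stats import chi2

n, alpha, theta0 = 10, 0.9, 3.0
# Assuming theta is the variance: under H0, sum(X_i^2) / theta0 follows a
# chi-squared distribution with n degrees of freedom, so the UMP cutoff
# solves P_{theta0}(sum X_i^2 > c) = alpha.
c = theta0 * chi2.ppf(1 - alpha, df=n)
print(round(c, 3))
```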

**Example 4.5 **There is no uniformly most powerful critical region for the two-tailed test of the Gaussian mean (\(H_0: \mu = \mu_0\) versus \(H_1: \mu \neq \mu_0\)): the best critical region against a simple alternative \(\mu > \mu_0\) lies in the upper tail, while the best region against \(\mu < \mu_0\) lies in the lower tail, so no single region is best against every simple alternative in \(H_1\).

**Definition 4.8 **We say that the likelihood \(\mathcal{L}(\theta;\mathbf{x})\) has *monotone likelihood ratio (mlr)* in the
statistic \(y = u (\mathbf{x})\) if, for \(\theta_1 < \theta_2\), the ratio
\[ \frac{\mathcal{L}(\theta_1;\mathbf{x} )}{\mathcal{L}( \theta_2;\mathbf{x} )}\]
is monotonic with respect to \(y\).
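As a quick sanity check of the definition, the sketch below evaluates the ratio for \(N(\theta, 1)\) likelihoods at samples with increasing \(y = \sum x_i\) (the sample values are arbitrary illustrations); the ratio \(\mathcal{L}(\theta_1)/\mathcal{L}(\theta_2)\) comes out decreasing in \(y\):

```python
import math

def normal_likelihood(theta, xs):
    # Likelihood of an i.i.d. N(theta, 1) sample xs
    return math.prod(math.exp(-(x - theta) ** 2 / 2) / math.sqrt(2 * math.pi)
                     for x in xs)

theta1, theta2 = 0.0, 1.0                        # theta1 < theta2
samples = [[-1.0, 0.0], [0.0, 0.5], [1.0, 2.0]]  # increasing y = sum(x_i)
ratios = [normal_likelihood(theta1, xs) / normal_likelihood(theta2, xs)
          for xs in samples]
print(ratios)
```

Here the ratio equals \(\exp(1 - y)\) for \(n = 2\), a strictly decreasing function of \(y\).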

In the language of randomized tests, the monotone likelihood ratio property provides a sufficient condition for a UMP test to exist.

**Theorem 4.3 (Keener, Theorem 12.9) **Suppose the family of densities has monotone likelihood ratio in the statistic \(T\). Then:

1. The test \(\phi^*\) given by \[ \phi^*(x) = \begin{cases} 1, & \text{if } T(x) > c; \\ \gamma, & \text{if } T(x) = c; \\ 0, & \text{if } T(x) < c, \end{cases} \] is uniformly most powerful for testing \(H_0 : \theta \leq \theta_0\) versus \(H_1 : \theta > \theta_0\) and has level \(\alpha = E_{\theta_0} \phi^*\). Moreover, the constants \(c \in \mathbb{R}\) and \(\gamma \in [0,1]\) can be adjusted to achieve any desired significance level \(\alpha \in (0,1)\).

2. The power function \(\beta(\theta) = E_{\theta} \phi^*\) of this test is nondecreasing, and strictly increasing wherever \(\beta(\theta) \in (0,1)\).

3. If \(\theta_1 < \theta_0\), then \(\phi^*\) minimizes \(E_{\theta_1} \phi\) among all tests \(\phi\) with \(E_{\theta_0} \phi = \alpha\).
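For the Gaussian mean test of Example 4.3 the power function has a closed form, \(\beta(\mu) = 1 - \Phi(\sqrt{n}(c - \mu))\), so the monotonicity claim can be checked directly. A sketch (with a hypothetical level \(\alpha = 0.05\) rather than the \(\alpha\) of the example):

```python
import math

from scipy.stats import norm

n, alpha, mu0 = 10, 0.05, 1.0   # hypothetical alpha for illustration
# UMP cutoff for H0: mu <= mu0 vs H1: mu > mu0, rejecting when Xbar > c
c = mu0 + norm.ppf(1 - alpha) / math.sqrt(n)
# Power beta(mu) = P_mu(Xbar > c) = 1 - Phi(sqrt(n) * (c - mu))
power = [1 - norm.cdf(math.sqrt(n) * (c - mu)) for mu in (1.0, 1.25, 1.5, 2.0)]
print([round(b, 3) for b in power])
```

The power equals \(\alpha\) at the boundary \(\mu = \mu_0\) and increases strictly as \(\mu\) moves into the alternative.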

**Example 4.6 (Hogg et al., Example 8.2.5) **Let \(X_1, X_2, \ldots, X_n\) be a random sample from a Bernoulli
distribution with parameter \(p = \theta\), where \(0 < \theta < 1\). Let
\(\theta' < \theta''\). Consider the ratio of likelihoods

\[ \frac{\mathcal{L}(\theta'; x_1, x_2, \ldots, x_n)}{\mathcal{L}(\theta''; x_1, x_2, \ldots, x_n)} = \frac{(\theta')^{\sum x_i} (1 - \theta')^{n-\sum x_i}}{(\theta'')^{\sum x_i} (1 - \theta'')^{n-\sum x_i}} = \left(\frac{\theta'(1 - \theta'')}{\theta''(1 - \theta')}\right)^{\sum x_i} \left(\frac{1 - \theta'}{1 - \theta''}\right)^n. \]

Since \(\frac{\theta'}{\theta''} < 1\) and \(\frac{(1 - \theta'')}{(1 - \theta')} < 1\), so that \(\frac{\theta'(1 - \theta'')}{\theta''(1 - \theta')} < 1\), the ratio is a decreasing function of \(y = \sum x_i\). Thus we have a monotone likelihood ratio in the statistic \(Y = \sum X_i\).

Consider the hypotheses

\[ H_0 : \theta \leq \theta' \quad \text{versus} \quad H_1 : \theta > \theta'. \]

By our discussion above, the UMP level \(\alpha\) decision rule for testing \(H_0\) versus \(H_1\) is given by

\[ \text{Reject } H_0 \text{ if } Y = \sum_{i=1}^n X_i \geq c, \]

where \(c\) is such that \(\alpha = P_{\theta'}[Y \geq c]\).
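Because \(Y\) is discrete, an exact level \(\alpha\) generally requires the randomized form of Theorem 4.3: reject when \(Y > c\), and reject with probability \(\gamma\) when \(Y = c\). A sketch with hypothetical values \(n = 20\), \(\theta' = 0.3\), \(\alpha = 0.05\):

```python
from scipy.stats import binom

n, alpha, theta0 = 20, 0.05, 0.3   # hypothetical values
# Under theta = theta0, Y ~ Binomial(n, theta0).  Find the smallest c with
# P(Y > c) <= alpha, then the randomization probability gamma so that
# P(Y > c) + gamma * P(Y = c) = alpha exactly.
c = 0
while binom.sf(c, n, theta0) > alpha:   # sf(c) = P(Y > c)
    c += 1
gamma = (alpha - binom.sf(c, n, theta0)) / binom.pmf(c, n, theta0)
print(c, round(gamma, 3))
```

With \(\gamma = 0\) this reduces to the nonrandomized rule above, at the cost of a level slightly below \(\alpha\).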