
Solution to Exercise 6.22.

  (a) The theoretical mean is a function of $\theta$:

\begin{displaymath}
E[X] \; = \; \int \, x f(x; \theta) \, dx
 \; = \; \int_0^1 \, x (\theta +1) x^{\theta} \, dx\end{displaymath}

\begin{displaymath}
\; = \; (\theta +1) \int_0^1 \, x^{\theta +1} \, dx
 \; = \; (\theta +1) \left[ \frac{x^{\theta +2}}{\theta +2}
\right]_{x=0}^1
 \; = \; \frac{\theta +1}{\theta +2} .\end{displaymath}
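As a sanity check, this integral can be evaluated symbolically. Here is a minimal Python sketch using SymPy (an aside; the original solution used Minitab):

\begin{verbatim}
import sympy as sp

theta, x = sp.symbols('theta x', positive=True)
pdf = (theta + 1) * x**theta              # the given density on (0, 1)
mean = sp.integrate(x * pdf, (x, 0, 1))   # E[X]
print(sp.simplify(mean))                  # (theta + 1)/(theta + 2)
\end{verbatim}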

For the method of moments estimator with one parameter, we set the sample mean equal to the theoretical mean:

\begin{displaymath}
\bar{x} \; = \; \frac{\theta +1}{\theta +2} ,\end{displaymath}

and solve for $\theta$. Multiplying through by $\theta +2$ we get

\begin{displaymath}
\bar{x} (\theta +2) \; = \; \theta +1 ,\end{displaymath}

or

\begin{displaymath}
\bar{x} \theta + 2 \bar{x} \; = \; \theta +1 ,\end{displaymath}

which gives

\begin{displaymath}
(\bar{x} -1) \theta \; = \; 1 - 2 \bar{x} ,\end{displaymath}

and thus our estimator is

\begin{displaymath}
\hat{\theta}_{mom} \; = \; \frac{2 \bar{x} -1}{1 - \bar{x}} .\end{displaymath}

Entering the given data into Minitab and computing the sample mean, I get

\begin{displaymath}
\bar{x} \; = \; 0.80 .\end{displaymath}

(Must be made-up data.) Plugging into the formula above,

\begin{displaymath}
\hat{\theta}_{mom} \; = \; \frac{2 (0.8) -1}{1 - 0.8}
 \; = \; \frac{0.6}{0.2} \; = \; 3.00 .\end{displaymath}
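The arithmetic is easy to reproduce outside Minitab. A minimal Python sketch, with data standing in for the ten observations (which are not reproduced here):

\begin{verbatim}
import statistics

def theta_mom(data):
    """Method-of-moments estimate: solve xbar = (theta+1)/(theta+2)."""
    xbar = statistics.mean(data)
    return (2 * xbar - 1) / (1 - xbar)

# With the sample mean 0.80 reported above:
# (2*0.80 - 1) / (1 - 0.80) = 0.6 / 0.2 = 3.00
\end{verbatim}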

(b) The joint density is

\begin{displaymath}
f(x_1 , \ldots , x_{10} ; \theta ) \; = \; 
\prod_{i=1}^{n} (\theta +1) x_i^{\theta}
 \; = \; (\theta +1)^n \left( \prod_{i=1}^n x_i \right)^{\theta} ,\end{displaymath}

where the sample size is $n = 10$ for the given data. The log likelihood (the logarithm of the joint density) is

\begin{displaymath}
L(\theta) \; = \; 
n \log (\theta +1) \, + \, \theta \sum_{i=1}^n \log x_i .\end{displaymath}

To find the value of $\theta$ that maximizes this, take the derivative with respect to $\theta$ and set it equal to zero:

\begin{displaymath}
\frac{dL}{d \theta}(\theta) \; = \; 
\frac{n}{\theta +1} \, + \, \sum_{i=1}^n \log x_i \; = \; 0 .\end{displaymath}

This gives

\begin{displaymath}
\frac{n}{\theta +1} \; = \; - \sum_{i=1}^n \log x_i\end{displaymath}

or

\begin{displaymath}
\frac{\theta +1}{n} \; = \; \frac{-1}{\sum_{i=1}^n \log x_i} ,\end{displaymath}

or

\begin{displaymath}
\theta \; = \; - 1 \, - \, 
\frac{n}{\sum_{i=1}^n \log x_i} .\end{displaymath}

Letting $y_i = \log x_i$, we see that

\begin{displaymath}
\hat{\theta}_{mle} \; = \; - 1 \, - \, 
\frac{1}{\bar{y}}\end{displaymath}

where of course

\begin{displaymath}
\bar{y} \; = \; \frac{1}{n} \sum_{i=1}^n y_i
 \; = \; \frac{1}{n} \sum_{i=1}^n \log x_i .\end{displaymath}

Back to Minitab: under Calc $\rightarrow$ Calculator I selected the ``Natural log'' function (you have to find it in the list of functions), applied it to C1 (where the original data is), and stored the output in C2. The mean of C2 is

\begin{displaymath}
\bar{y} \; = \; -0.2430 .\end{displaymath}

Plugging this into the formula above, we see that the estimate is

\begin{displaymath}
\hat{\theta}_{mle} \; = \; -1 \, - \, 
\frac{1}{-0.2430} \; = \; 
3.12 .\end{displaymath}
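As with the method of moments, this calculation is easy to reproduce in code. A companion Python sketch (again with data standing in for the ten observations):

\begin{verbatim}
import math
import statistics

def theta_mle(data):
    """Maximum likelihood estimate: -1 - 1/ybar, ybar = mean of log x_i."""
    ybar = statistics.mean(math.log(x) for x in data)
    return -1 - 1 / ybar

# With ybar = -0.2430 as computed above:
# -1 - 1/(-0.2430) = 3.115..., which rounds to 3.12
\end{verbatim}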

Note: The Maximum Likelihood and Method of Moments estimates agree pretty well. Which one should we use? To assess the accuracy of the two methods, let's perform a simulation experiment. We simulate samples from the given distribution for a couple of parameter values near where we think the true parameter is (we'll just use our two estimates), compute the MOM (Method of Moments) and MLE (Maximum Likelihood) estimates for each sample, and repeat this, say, 100 times. Then we can see how accurate each estimator is by looking at bias, variance, and mean squared error.

How can we simulate from this distribution? There is a general technique for simulating from a distribution using the inverse c.d.f.: if U is a uniform r.v. on [0,1] and F is a c.d.f., then $X = F^{-1}(U)$ is a r.v. whose distribution has the given c.d.f. F. This was already discussed in class, and the proof is easy. The c.d.f. of X is defined to be

\begin{displaymath}
F_X (x) \; = \; P[X \le x] .\end{displaymath}

Now

\begin{displaymath}
P[X \le x] \; = \; P[ F^{-1} (U) \le x ] \; = \; 
P[ U \le F(x) ] \; = \; F(x) ,\end{displaymath}

since the c.d.f. for U is $F_U (u) = u$, for $0 \le u \le 1$. For our given distribution, the c.d.f. is

\begin{displaymath}
F (x) \; = \; \int_0^x \, (\theta +1) y^{\theta} \, dy
 \; = \; x^{\theta +1} , \quad 0 \le x \le 1 ,\end{displaymath}

so

\begin{displaymath}
F^{-1} (u) \; = \; u^{1/(\theta+1)} .\end{displaymath}
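Putting the pieces together, here is a minimal Python sketch of the simulation experiment described above. It reuses theta_mom and theta_mle from the earlier sketches; the replication count follows the plan above, while the fixed seed is a choice made here, not part of the original Minitab analysis:

\begin{verbatim}
import math
import random
import statistics

def simulate_sample(theta, n, rng):
    """Draw n values via the inverse c.d.f.: X = U**(1/(theta+1))."""
    return [rng.random() ** (1 / (theta + 1)) for _ in range(n)]

def summarize(estimates, theta):
    """Bias, standard error, and RMSE of a list of estimates."""
    bias = statistics.mean(estimates) - theta
    std_err = statistics.stdev(estimates)
    rmse = math.sqrt(statistics.mean((e - theta) ** 2 for e in estimates))
    return bias, std_err, rmse

rng = random.Random(0)               # fixed seed, an arbitrary choice
theta_true, n, reps = 3.00, 10, 100
moms, mles = [], []
for _ in range(reps):
    sample = simulate_sample(theta_true, n, rng)
    moms.append(theta_mom(sample))
    mles.append(theta_mle(sample))

print("MOM bias, SE, RMSE:", summarize(moms, theta_true))
print("MLE bias, SE, RMSE:", summarize(mles, theta_true))
\end{verbatim}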

The results are shown in Table 1. We see that the standard errors of the two estimators are the same (to the precision reported), but the MOM has slightly smaller bias and RMSE (Root Mean Squared Error). In this example, the Method of Moments appears to be the better estimator.


 
 
Table 1: Simulation results on the accuracy of the Method of Moments (MOM) and Maximum Likelihood (MLE) estimators.

Estimator        MOM     MLE     MOM     MLE
True $\theta$    3.00    3.00    3.12    3.12
Bias             0.33    0.40    0.53    0.61
Std. Err.        1.66    1.66    2.54    2.54
RMSE             1.68    1.70    2.58    2.60


Dennis Cox
3/24/2001