
Spring 1997 John Rust
Economics 551b 37 Hillhouse, Rm. 27

PROBLEM SET 2

Bayesian Basics (II)

(Due: February 10, 1997)

QUESTION 1 Derive the convergence of the posterior distribution when the support of the prior does not include the true parameter $\theta^*$. Show that

$$\lim_{N \rightarrow \infty} \mu_N(\theta \mid x_1, \ldots, x_N) = \delta_{\theta_0}(\theta),$$

where $\delta_{\theta_0}$ is a point mass at $\theta_0$ and $\theta_0$ is defined by:

$$\theta_0 = \mathop{\rm argmin}_{\theta \in \,{\rm supp}(\mu)} \; E\left\{ \log\left[ \frac{f(x \mid \theta^*)}{f(x \mid \theta)} \right] \right\},$$

where the expectation is taken with respect to the true density $f(x \mid \theta^*)$; that is, $\theta_0$ is the point in the support of the prior $\mu$ closest to $\theta^*$ in Kullback-Leibler distance.
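(For intuition only, not as part of the derivation: the short Python/NumPy sketch below illustrates the result numerically. The $N(\theta,1)$ likelihood, the discrete uniform prior on a grid excluding $\theta^* = 0$, and all numerical values are illustrative assumptions, not part of the problem.)

import numpy as np

# Illustration: posterior concentration at the Kullback-Leibler minimizer
# when the prior's support excludes the true parameter. Likelihood is
# N(theta, 1); the true value theta* = 0 is NOT on the prior's grid.
rng = np.random.default_rng(0)
theta_star = 0.0
grid = np.array([-2.0, -1.0, 0.5, 1.0, 2.0])    # support of the prior

x = rng.normal(theta_star, 1.0, size=5000)      # data drawn from f(x | theta*)
for N in (10, 100, 5000):
    # log likelihood of the first N observations at each grid point
    loglik = -0.5 * ((x[:N, None] - grid[None, :]) ** 2).sum(axis=0)
    post = np.exp(loglik - loglik.max())        # uniform prior cancels
    post /= post.sum()
    print(N, np.round(post, 3))
# As N grows the posterior piles up on theta_0 = 0.5, the grid point
# closest to theta* in Kullback-Leibler distance (for this normal family,
# squared Euclidean distance).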

QUESTION 2 Derive the convergence of the posterior distribution when there exists a $\theta'$ observationally equivalent to $\theta^*$, i.e. where $f(x \mid \theta') = f(x \mid \theta^*)$ for almost all $x$ (i.e. except for $x$ in a set of Lebesgue measure zero).
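(Again for intuition only: a Python/NumPy sketch of what happens under observational equivalence. The $N(\theta^2,1)$ likelihood, under which $\theta$ and $-\theta$ are observationally equivalent, and all numerical values are illustrative assumptions.)

import numpy as np

# Illustration: x ~ N(theta^2, 1), so theta' = -theta* is observationally
# equivalent to theta* = 1. The posterior cannot separate the two points.
rng = np.random.default_rng(1)
theta_star = 1.0
grid = np.array([-1.5, -1.0, -0.5, 0.5, 1.0, 1.5])  # uniform prior support

x = rng.normal(theta_star ** 2, 1.0, size=5000)
loglik = -0.5 * ((x[:, None] - grid[None, :] ** 2) ** 2).sum(axis=0)
post = np.exp(loglik - loglik.max())
post /= post.sum()
print(np.round(post, 3))
# Mass splits evenly between theta = 1.0 and theta = -1.0: the limiting
# posterior is the prior restricted to the observationally equivalent set.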

QUESTION 3 Do the extra-credit problem on an example of computing an invariant density for a Markov process. This is optional but recommended.

QUESTION 4 This question asks you to employ the Gibbs-sampling algorithm in a simple example taken from the book Bayesian Data Analysis by Gelman et al. Here the data consist of a single observation $y = (y_1, y_2)$ from a bivariate normal distribution with unknown mean $\theta = (\theta_1, \theta_2)$ and known covariance matrix $\Sigma$ given by

$$\Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$

With a uniform prior distribution over $\theta$, the posterior distribution of $\theta$ is bivariate normal with mean $y$ and covariance matrix $\Sigma$. Although it is trivial to sample directly from this bivariate normal posterior, the purpose of this question is to have you use the Gibbs sampler to draw from the posterior and to compare how well the resulting samples approximate direct draws from the bivariate normal posterior.

1.
If $y = (2,1)'$ and $\rho = 0.8$, use the normal random number generator in Gauss (or some other language you are comfortable with) to draw a random sample of 500 observations from the posterior ``directly''. Report the sample mean and covariance matrix from this random sample. (Hint: you can draw from a bivariate normal with covariance matrix $\Sigma$ by drawing a bivariate normal with covariance matrix $I$ from Gauss's random normal generator (i.e. two independent univariate normal draws $(z_1, z_2)$), computing the Cholesky decomposition $\Sigma = \Lambda \Lambda'$, and multiplying this ``square root'' matrix $\Lambda$ by $z = (z_1, z_2)'$ to get a bivariate random draw with mean $(0,0)'$ and covariance matrix $\Sigma$.) A sketch of this procedure follows.
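A minimal sketch of this direct-sampling procedure in Python/NumPy (the problem suggests Gauss, but any language is acceptable); the seed and variable names are incidental choices.

import numpy as np

# Direct draws from the posterior N(y, Sigma) via the Cholesky factorization.
rng = np.random.default_rng(0)
y = np.array([2.0, 1.0])
rho = 0.8
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])

Lam = np.linalg.cholesky(Sigma)        # Sigma = Lam @ Lam'
z = rng.standard_normal((500, 2))      # 500 pairs of independent N(0,1) draws
theta = y + z @ Lam.T                  # each row is a draw from N(y, Sigma)

print("sample mean:", theta.mean(axis=0))
print("sample covariance:")
print(np.cov(theta, rowvar=False))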

2.
Now use the Gibbs sampler to generate a random sample from the posterior. Starting from 500 points drawn randomly and uniformly over the square of width and length 1 centered at (2,1), run the Gibbs sampler for 50 iterations. The Gibbs sampler is constructed by generating a draw from the normal conditional density of $\theta_1$ given $\theta_2$:

$$\theta_1 \mid \theta_2, y \sim N\left(y_1 + \rho(\theta_2 - y_2),\; 1 - \rho^2\right).$$

Given the realized value of $\theta_1$, draw a value of $\theta_2$ from the conditional density of $\theta_2$ given $\theta_1$:

$$\theta_2 \mid \theta_1, y \sim N\left(y_2 + \rho(\theta_1 - y_1),\; 1 - \rho^2\right).$$

Starting from the 500 different randomly drawn initial values of $\theta^0 = (\theta_1^0, \theta_2^0)$, perform T=50 loops of the above Gibbs-sampling algorithm and save the 500 final draws of $\theta^{50}$, one for each initial condition. Use these random draws to compute the sample mean and covariance matrix of the posterior. How well do they compare to the true mean and covariance matrix of the posterior? Is T=50 a sufficient number of iterations for the Gibbs sampler to converge to the invariant density? (A sketch of the sampler follows.)
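A minimal Python/NumPy sketch of this sampler, with the 500 chains updated in parallel; the seed is an incidental choice.

import numpy as np

# Gibbs sampler for the bivariate normal posterior, run as 500 parallel
# chains started uniformly on the unit square centered at (2,1).
rng = np.random.default_rng(0)
y1, y2, rho = 2.0, 1.0, 0.8
sd = np.sqrt(1.0 - rho ** 2)           # conditional standard deviation

n_chains, T = 500, 50
theta1 = rng.uniform(y1 - 0.5, y1 + 0.5, n_chains)  # initial conditions
theta2 = rng.uniform(y2 - 0.5, y2 + 0.5, n_chains)

for _ in range(T):
    theta1 = rng.normal(y1 + rho * (theta2 - y2), sd)  # theta1 | theta2, y
    theta2 = rng.normal(y2 + rho * (theta1 - y1), sd)  # theta2 | theta1, y

draws = np.column_stack([theta1, theta2])              # 500 final draws
print("sample mean:", draws.mean(axis=0))
print("sample covariance:")
print(np.cov(draws, rowvar=False))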

3.
Repeat part 2, but now use T=1000 iterations of the Gibbs sampler for each of the 500 randomly drawn initial conditions (save the initial conditions from part 2 in a file so you can use the same initial conditions when comparing the results of parts 2 and 3). Do you think T=1000 iterations is sufficient for the Gibbs sampler to have converged to the invariant density, i.e. the bivariate normal posterior distribution for $\theta$?

4.
Instead of using 500 different randomly chosen initial conditions, run the Gibbs sampler only once from a single initial condition $\theta^0$ for T=2500 iterations. Discard the first 2000 values of $\theta$ and save the last 500 values $\theta^t$, $t = 2001, \ldots, 2500$. Compute the sample mean and covariance matrix of these $\theta^t$'s. How well do they approximate the true mean and covariance matrix of the posterior density? (Here we are illustrating the principle of ergodicity: if the Gibbs sampler has converged to its invariant density, then time-series averages of the random draws of the Markov process converge to their expectations, i.e. their ``long-run'' expectations with respect to the invariant density. Thus, even though successive draws $\theta^t$ are not independent, sample averages of these values should be close to the corresponding expectations when T is large, i.e.

$$\lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} h(\theta^t) = \int h(\theta)\, p(\theta)\, d\theta,$$

where h is some function that has finite expectation with respect to the invariant density p of the Markov process $\{\theta^t\}$ generated by the Gibbs-sampling algorithm.) A sketch of this single-chain calculation follows.
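A minimal Python/NumPy sketch of the single-chain calculation; the initial condition and seed are incidental choices.

import numpy as np

# One long Gibbs chain: 2500 iterations, first 2000 discarded as burn-in,
# time-series averages computed from the last 500 draws.
rng = np.random.default_rng(0)
y1, y2, rho = 2.0, 1.0, 0.8
sd = np.sqrt(1.0 - rho ** 2)

theta1, theta2 = 0.0, 0.0              # an arbitrary initial condition
kept = []
for t in range(2500):
    theta1 = rng.normal(y1 + rho * (theta2 - y2), sd)
    theta2 = rng.normal(y2 + rho * (theta1 - y1), sd)
    if t >= 2000:                      # keep the last 500 draws
        kept.append((theta1, theta2))

kept = np.array(kept)
print("ergodic mean:", kept.mean(axis=0))
print("ergodic covariance:")
print(np.cov(kept, rowvar=False))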



