Solutions Manual: Bayesian Statistical Methods (2019, 1st Ed., CRC Press / Taylor & Francis Group), Brian J. Reich and Sujit K. Ghosh. Last modified: 09/16/2019.
Chapter 1: Basics of Bayesian Inference
(1) The integral over the PDF is and therefore if the PDF is valid.

(2a) Since for all and , the PDF is valid.

(2b) The mean is and the variance is .

(3) Since the parameter must be positive we select a Gamma(a, b) prior. Setting the mean and variance of the prior to the desired values gives a/b = 5 and a/b^2 = 3. Solving for a and b gives a = 25/3 and b = 5/3. We can check these answers using MC sampling:
x <- rgamma(10000000, 25/3, 5/3)
mean(x); var(x)
## [1] 5.000535
## [1] 2.998441
(4a) with and .

(4b) with and .

(4c) and similarly , which defines the conditional distribution of given . Given , , and similarly .

(4d) and . Given , and . The conditional distribution of given is the same as .

(4e) No, and are not independent because the conditional distribution of changes with .

(5a) The marginal PDF is
and therefore is standard normal.

(5b) Before computing the conditional distribution, we note that if then , where . Therefore, to find the conditional distribution we rearrange the PDF to have this form and solve for and , where and . Therefore, .

(6a) Note that the plots for x2 = -1, -2, and -3 are the same as those for x2 = 1, 2, and 3, and thus do not appear.

pdf <- function(x1,x2){ (1/(2*pi))*(1+x1^2+x2^2)^(-3/2) }
x1 <- seq(-5,5,0.1)
plot(NA,xlim=range(x1),ylim=c(0,0.05),xlab="x1",ylab="Conditional PDF")
x2 <- seq(-3,3,1)
for(j in 1:7){
  d <- pdf(x1,x2[j])
  lines(x1,d/sum(d),col=j)
}
legend("topright",paste("x2 =",seq(0,3,1)),lty=1,col=4:7,bty="n")
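As a numerical check of the claims in (6b) and (6c) below (a sketch, not part of the original solution; the grid range and spacing are my own choices), we can verify that the conditional mean of x1 given x2 is zero for every x2, while the conditional density itself still changes with x2:

```r
# Joint PDF from Problem 6
pdf <- function(x1, x2) (1/(2*pi)) * (1 + x1^2 + x2^2)^(-3/2)
x1 <- seq(-20, 20, 0.01)   # truncated grid (my choice, not from the text)

# Discretized conditional mean of x1 given x2
cond_mean <- function(x2) {
  w <- pdf(x1, x2) / sum(pdf(x1, x2))
  sum(w * x1)
}
means <- sapply(-3:3, cond_mean)   # all numerically zero -> no correlation

# Normalized conditional density of x1 at 0, as a function of x2
cond_at0 <- function(x2) pdf(0, x2) / (sum(pdf(x1, x2)) * 0.01)
c(cond_at0(0), cond_at0(3))        # unequal -> x1 and x2 are dependent
```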
(6b) They do not appear to be correlated, as the mean of x1 given x2 does not change with x2 (it is always zero).

(6c) They are not independent because the conditional PDF of x1 given x2 changes with x2.

(7) First, the overall probability of a car being stolen ( = Raleigh, = Durham) is

Applying Bayes' rule,

(8a) Let if there is a convention and otherwise. The prior is . The commute time is distributed and . Bayes' rule gives

(8b) is only consistent with and , so .

(9) The data for each word is the number of keystroke errors, and the likelihood is Binomial. Therefore, the data are

words = c("fun","sun","sit","sat","fan","for")
e_sun = c(1,0,2,2,2,3)
e_the = c(3,3,3,3,3,3)
e_foo = c(2,3,3,3,2,1)

These vectors give the number of errors between each typed word and each word in the dictionary.

(9a)
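The two-step calculation in (7) (law of total probability, then Bayes' rule) can be sketched as follows. The city proportions and theft rates below are hypothetical stand-ins, not the numbers from the exercise:

```r
# Hypothetical inputs (the exercise's actual probabilities are not reproduced here)
p_R   <- 0.6    # P(car is from Raleigh)
p_D   <- 0.4    # P(car is from Durham)
p_S_R <- 0.01   # P(stolen | Raleigh)
p_S_D <- 0.02   # P(stolen | Durham)

# Law of total probability: overall probability a car is stolen
p_S <- p_R * p_S_R + p_D * p_S_D

# Bayes' rule: probability a stolen car is from Raleigh
p_R_S <- p_R * p_S_R / p_S
c(p_S, p_R_S)
```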
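The spell-check setup in (9) can be sketched as follows: a posterior over candidate intended words given the error counts for the typed word "sun". The per-keystroke error probability and the uniform prior below are hypothetical, since the exercise's actual values are not reproduced here:

```r
# Data from the text: error counts between the typed word "sun" and each candidate
words <- c("fun","sun","sit","sat","fan","for")
e_sun <- c(1,0,2,2,2,3)

theta <- 0.1          # hypothetical per-keystroke error probability
prior <- rep(1/6, 6)  # hypothetical uniform prior over the six candidates

# Binomial likelihood, assuming 3 keystrokes per word (all words have 3 letters)
like <- dbinom(e_sun, size = 3, prob = theta)

# Posterior over the intended word
post <- prior * like / sum(prior * like)
round(setNames(post, words), 3)
```

Under these hypothetical inputs the posterior concentrates on "sun", the candidate with zero errors.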