In Bayesian inference, complete knowledge about a vector of model parameters, $\theta \in \Theta$, obtained by fitting a model $\mathcal{M}$ to observed data $y_{\mathrm{obs}} \in \mathcal{Y}$, is contained in the posterior distribution. Here, prior beliefs about the model parameters, as expressed through the prior distribution $\pi(\theta)$, are updated by observing the data through the likelihood function $p(y_{\mathrm{obs}} \mid \theta)$ of the model. By Bayes' theorem, the resulting posterior distribution is

$$\pi(\theta \mid y_{\mathrm{obs}}) = \frac{p(y_{\mathrm{obs}} \mid \theta)\,\pi(\theta)}{\int_\Theta p(y_{\mathrm{obs}} \mid \theta)\,\pi(\theta)\,d\theta}.$$

In practice, the posterior is typically approximated by samples $\theta^{(1)}, \ldots, \theta^{(N)}$ drawn from it, so that

$$\pi(\theta \mid y_{\mathrm{obs}}) \approx \frac{1}{N}\sum_{i=1}^{N} \delta_{\theta^{(i)}}(\theta),$$

where $\delta_{\theta^{(i)}}(\theta)$ denotes the Dirac point mass at $\theta^{(i)}$.
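As a concrete illustration (not from the text), the following minimal Python sketch uses a conjugate Beta-Bernoulli model, for which the posterior is available in closed form, to show the sample-based approximation above: posterior expectations become simple averages over the draws $\theta^{(1)}, \ldots, \theta^{(N)}$. The prior parameters, sample sizes, and data-generating probability are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: 50 Bernoulli trials with unknown success
# probability theta (true value 0.7, used only to simulate y_obs).
y_obs = rng.binomial(1, 0.7, size=50)

# Prior: theta ~ Beta(a, b); likelihood: y_i ~ Bernoulli(theta).
a, b = 1.0, 1.0

# Bayes' theorem in closed form (conjugacy):
# posterior is Beta(a + sum(y), b + n - sum(y)).
a_post = a + y_obs.sum()
b_post = b + len(y_obs) - y_obs.sum()

# Draw N samples theta^(1), ..., theta^(N) from the posterior; the empirical
# measure (1/N) * sum_i delta_{theta^(i)} approximates pi(theta | y_obs).
N = 10_000
theta_samples = rng.beta(a_post, b_post, size=N)

# Under the empirical approximation, any posterior expectation is estimated
# by a sample average, e.g. the posterior mean of theta:
print("Monte Carlo posterior mean:", theta_samples.mean())
print("Exact posterior mean:      ", a_post / (a_post + b_post))
```

The conjugate model is chosen only so the Monte Carlo estimate can be checked against the exact posterior; the same sample-average recipe applies whenever posterior draws are available, whatever sampler produced them.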