There has been a dramatic increase in the application of the Bayesian approach throughout all areas of Statistics in recent years, and this has had a considerable impact on the analysis of complex data. Bayesian statistics uses prior information, representing all that is known before observing the data, in addition to the data themselves. Bayesian inferences are derived from the posterior (the probability distribution of the population/model parameters given the data), and treat the data as known and therefore fixed (non-random). In Bayesian statistics, anything unknown is a random variable. Bayesian statistics is now used extensively in every area where inferences must be drawn from observed data and the associated uncertainty evaluated, for example in the Social Sciences, Economics, Ecology, Biostatistics and Epidemiology.

The mural shows the equation for the posterior distribution *π(θ|x)* of the parameters given the data, which is proportional to the product of the likelihood *ƒ(x|θ)* of the data given the parameters and the prior probability *p(θ)* of the parameters.
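In standard notation, the relationship shown on the mural can be written out as follows, where the denominator is the normalising constant that makes the posterior integrate to one:

```latex
\pi(\theta \mid x) \;=\; \frac{f(x \mid \theta)\, p(\theta)}{\int f(x \mid \theta)\, p(\theta)\, d\theta} \;\propto\; f(x \mid \theta)\, p(\theta)
```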

Bayesian inference is not based on various 'procedures', such as Least Squares Estimation (LSE), Maximum Likelihood Estimation (MLE) or the calculation of p-values. It is based solely on the conditional probability formula, which is used to derive the posterior distribution. The conditional probability formula, in the context of Bayesian statistics, where it relates data and model parameters, is known as Bayes' Theorem. Thus, if one knows some elementary probability theory, one already knows quite a lot about the fundamentals of Bayesian Statistics!
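To see Bayes' Theorem at work, here is a minimal sketch of a posterior computed by grid approximation. The setting is an invented example, not from the mural: estimating a coin's probability of heads *θ* after observing 7 heads in 10 tosses, with a flat prior on [0, 1].

```python
import numpy as np

# Hypothetical data: 7 heads in 10 tosses; prior: uniform on [0, 1].
theta = np.linspace(0, 1, 1001)          # grid of parameter values
prior = np.ones_like(theta)              # p(theta): flat prior
likelihood = theta**7 * (1 - theta)**3   # f(x|theta), up to a constant
unnormalised = likelihood * prior        # numerator of Bayes' Theorem
posterior = unnormalised / unnormalised.sum()  # normalise over the grid

print(theta[np.argmax(posterior)])       # posterior mode: 0.7
```

With a flat prior the posterior is proportional to the likelihood, so its mode coincides with the maximum likelihood estimate 7/10; an informative prior would pull the posterior away from the data alone.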

Research by Dr. Cornelia Oedekoven