There has been a dramatic increase in the application of the Bayesian approach throughout all areas of Statistics in recent years, and this has had a considerable impact on the analysis of complex data. Bayesian statistics uses prior information, which represents all that is known in addition to the data. Bayesian inferences are derived from the posterior (the probability distribution of the population/model parameters given the data), and treat the data as known and therefore fixed (non-random). In Bayesian statistics, anything unknown is a random variable. Bayesian statistics is now extensively used in every area where it is necessary to make inferences after observing data and to evaluate the associated uncertainty, for example in the Social Sciences, Economics, Ecology, Biostatistics and Epidemiology.
The mural shows the equation for the posterior distribution π(θ|x) of the parameters given the data, which is proportional to the product of the likelihood ƒ(x|θ) of the data given the parameters and the prior distribution p(θ) of the parameters.
Bayesian inference is not based on various 'procedures', such as Least Squares Estimation (LSE), Maximum Likelihood Estimation (MLE) or the calculation of p-values. It is based only on the conditional probability formula, which is used to derive the posterior distribution. The conditional probability formula, in the context of Bayesian statistics, when it involves data and model parameters, is known as Bayes' Theorem. Thus, if one knows some elementary probability theory, they already know quite a lot about the fundamentals of Bayesian Statistics!
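The relation on the mural, π(θ|x) ∝ ƒ(x|θ) p(θ), can be seen in action with a simple sketch. The example below is an illustrative grid approximation for a made-up coin-flipping experiment (the data and the uniform prior are assumptions chosen for illustration, not taken from the text): the posterior over the success probability θ is obtained by multiplying likelihood and prior pointwise, then normalising.

```python
import numpy as np

# Grid of candidate values for theta, the unknown probability of success
theta = np.linspace(0, 1, 101)

# Prior p(theta): uniform, i.e. every value of theta equally plausible a priori
prior = np.ones_like(theta) / len(theta)

# Hypothetical data: 7 successes observed in 10 Bernoulli trials
successes, trials = 7, 10

# Likelihood f(x | theta), up to a constant: theta^7 * (1 - theta)^3
likelihood = theta**successes * (1 - theta)**(trials - successes)

# Bayes' Theorem: posterior proportional to likelihood times prior, then normalise
posterior = likelihood * prior
posterior /= posterior.sum()

# The posterior peaks near the observed proportion of successes, 0.7
print(theta[np.argmax(posterior)])
```

With the flat prior used here, the posterior mode coincides with the maximum likelihood estimate; a more informative prior would pull the posterior towards the prior's own centre, which is exactly how prior information enters the inference.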