Rules for Creating Priors

Hi Jérôme (and everyone!).

For the first time, I’m trying to create my own priors for an assessment using historical data in a real-world application (no identifiable change in conditions, so it seems appropriate for prior creation, etc.). Until now I haven’t been brave enough to do it outside of personal study and playing around.

My intention is to:

  1. Take the historical data and run it through Expostats using the default priors.
  2. Take the posterior distributions for the lognormal shape parameters and use them, completely unchanged, as new priors.
  3. Use Expostats again with my new priors to analyse this year’s data (a toy sketch of what I mean is below).
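
Roughly, here is a toy sketch of what I have in mind. It is not the actual Expostats machinery (which fits a full Bayesian lognormal model by MCMC); it just uses a conjugate normal model on the log-concentrations with an assumed known log-scale SD, and all the numbers are hypothetical:

```r
historical <- c(0.21, 0.35, 0.18, 0.42, 0.27)  # hypothetical results, mg/m3
y_hist <- log(historical)
sigma  <- 0.8  # log-scale SD, assumed known here for simplicity

# Step 1: vague "default" prior on the log-mean
mu0  <- 0
tau0 <- 10

# Posterior for the log-mean after the historical data
n    <- length(y_hist)
tau1 <- 1 / sqrt(1 / tau0^2 + n / sigma^2)
mu1  <- tau1^2 * (mu0 / tau0^2 + sum(y_hist) / sigma^2)

# Steps 2-3: that posterior, unchanged, becomes the prior for this year
y_new <- log(c(0.30, 0.25, 0.40))  # hypothetical new results, mg/m3
m     <- length(y_new)
tau2  <- 1 / sqrt(1 / tau1^2 + m / sigma^2)
mu2   <- tau2^2 * (mu1 / tau1^2 + sum(y_new) / sigma^2)
c(posterior_log_gm = mu2, posterior_sd = tau2)
```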

The historical data appears to be of good quality, but small in quantity.

Using the above steps, would you expect the priors’ informativeness to be proportional to the number of historical samples? Is it wise (or appropriate) to try to “de-inform” the created priors?

I remember reading “there are no ‘rules’ to priors, only what you can justify to a skeptical audience”. Would love a sanity check.

Thanks,

John

Hello John,

Just a technical point: which Expostats/Webexpo R function would you use for step 3? (The online apps do not allow any customization of the priors yet.)

As for the philosophy: if I am not mistaken, what you propose is equivalent to putting all your data into Expostats at the same time, so the old and new data will be weighted exactly the same. The advantage of doing the sequence “historical, then add new” instead of just “all at once” is that you can visualize the information in the old data and see how it evolves with the newer data.
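
To illustrate the point with a toy model (a conjugate normal model on log-concentrations with a known log-scale SD, not the actual Expostats model, and hypothetical numbers), the “historical then new” sequence and the “all at once” pass land on exactly the same posterior:

```r
sigma <- 0.8                                   # known log-scale SD (toy value)
old   <- log(c(0.21, 0.35, 0.18, 0.42, 0.27))  # hypothetical historical data
new   <- log(c(0.30, 0.25, 0.40))              # hypothetical new data

# One conjugate update: returns c(posterior mean, posterior SD) of the log-mean
post_update <- function(mu, tau, y) {
  tau_post <- 1 / sqrt(1 / tau^2 + length(y) / sigma^2)
  mu_post  <- tau_post^2 * (mu / tau^2 + sum(y) / sigma^2)
  c(mu_post, tau_post)
}

p_old    <- post_update(0, 10, old)               # historical data first...
p_seq    <- post_update(p_old[1], p_old[2], new)  # ...then the new data
p_pooled <- post_update(0, 10, c(old, new))       # everything at once
all.equal(p_seq, p_pooled)                        # TRUE: same weighting either way
```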

If you want to downweight the historical data, you can use the “past data” option in the Webexpo library. Since this option asks for the log-transformed GM and GSD of the historical data, as well as the sample size, you can tweak the sample size to your liking. One warning, though: I think you have to standardize the log-GM with the OEL beforehand (I haven’t used it in some time).
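
As a rough sketch of how the past-data summaries might be prepared (the numbers are hypothetical and I am not reproducing the actual Webexpo argument names here, so check the library documentation):

```r
past <- c(0.12, 0.34, 0.21, 0.08, 0.26)  # hypothetical past results, mg/m3
oel  <- 0.5                              # hypothetical OEL, mg/m3

log_gm_std <- mean(log(past / oel))  # log-GM expressed as a fraction of the OEL
log_gsd    <- sd(log(past))          # log-GSD (the OEL cancels out here)
n_eff      <- 2                      # tweaked sample size, to downweight
c(log_gm_std = log_gm_std, log_gsd = log_gsd, n_eff = n_eff)
```

Setting the effective sample size below the true number of past measurements is exactly what weakens the prior’s pull on the posterior.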


Very helpful! I appreciate you taking the time to help, as always. Thanks Jérôme.

Hey Jérôme,

A follow-up question: is there anything to be careful of when using exposure models (e.g. ART, IHMOD) as priors?

Hello John,

I would say there are two things to be careful about:

  1. The shape of the prior: i.e. what kind of exposure estimate you get, and how the associated uncertainty is described. If you use the ART Bayesian function, the calculation is automated and the Tier 2 model prior is processed automatically. If, e.g., you want to use ART’s exposure model output as a prior for your own Bayesian model, then you need to back-transform the point estimate and CI into a prior distribution that can be handled by your model (see the sketch after this list). The same issue applies to IHMOD.

  2. The information in the prior: the more informative the prior is, the more influence it will have on the posterior, depending on the number of measurements available for the likelihood part. This is a very tricky part, which has not been explored much yet: e.g. we would like to be able to say “this prior is roughly equivalent to 5 measurements”, but we can’t yet.
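
For point 1, here is a minimal sketch of the back-transformation, assuming the point estimate is a median and the CI is symmetric on the log scale; actual ART/IHMOD outputs may need different handling, and the numbers are hypothetical:

```r
est <- 0.15  # hypothetical point estimate (median), mg/m3
lo  <- 0.05  # hypothetical lower 90% CI bound
hi  <- 0.45  # hypothetical upper 90% CI bound

mu    <- log(est)                                 # log-scale location
sigma <- (log(hi) - log(lo)) / (2 * qnorm(0.95))  # log-scale SD from CI width

# Check: the implied 90% interval reproduces (lo, hi)
qlnorm(c(0.05, 0.95), meanlog = mu, sdlog = sigma)
```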

For IHMOD, there is a paper by a student of G. Ramachandran that linked IHMOD to a Bayesian model, in which the emission model parameters themselves were “updated” with the measurements.

Cheers