Hello John,

The easy part first: no difference between Bayesian and frequentist for this question.

More difficult: is this important?

In theory it is: it makes sense that if you have repeated measurements on a worker, and "worker" is likely to be at least partly a determinant of exposure levels, then two repeats on the same worker will be closer to each other than to measurements on other workers.

Then there is "in practice": we'll focus on group exceedance. You can estimate it two ways: using the simple model (one lognormal distribution, Tool 1), which assumes independent samples; or using the more complex model (ANOVA, Tool 2), which also assumes independent samples, but only within workers.

Here's one of the WEBEXPO examples (example 5):

worker-1: 31, 60.1, 133, 27.1

worker-2: 61.1, 5.27, 30.4, 31.7

worker-3: 20.5, 16.5, 15.5, 71.5

Simplified analysis (Tool 1): GM = 31, GSD = 2.4

More complex analysis (Tool 2, group tab): GM = 31, GSD = 2.6
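As a quick sanity check, the pooled (simple-model) point estimates can be reproduced in a few lines. Note this is a plain frequentist sketch: it will not match Tool 1's Bayesian output exactly (the naive GSD comes out around 2.3 rather than 2.4, since the posterior is also shaped by the prior).

```python
import math

# WEBEXPO example 5 measurements, pooled across workers
values = [31, 60.1, 133, 27.1,     # worker-1
          61.1, 5.27, 30.4, 31.7,  # worker-2
          20.5, 16.5, 15.5, 71.5]  # worker-3

logs = [math.log(v) for v in values]
n = len(logs)
mean_log = sum(logs) / n
# sample standard deviation of the log-transformed values
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))

gm = math.exp(mean_log)   # geometric mean
gsd = math.exp(sd_log)    # geometric standard deviation
print(f"GM = {gm:.1f}, GSD = {gsd:.2f}")
```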

In that case the complex model estimates quite a low within-worker correlation (0.13), and we see no big difference.
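For intuition on where that correlation comes from, a rough frequentist analogue is the intraclass correlation from a one-way random-effects ANOVA on the log values. This is only a method-of-moments sketch: with just 3 workers it is very unstable and comes out near zero rather than the tool's Bayesian 0.13, which is exactly the prior-influence issue discussed next.

```python
import math

# WEBEXPO example 5, grouped by worker, log-transformed
groups = {
    "worker-1": [31, 60.1, 133, 27.1],
    "worker-2": [61.1, 5.27, 30.4, 31.7],
    "worker-3": [20.5, 16.5, 15.5, 71.5],
}
logs = {w: [math.log(v) for v in vals] for w, vals in groups.items()}

k = len(logs)   # number of workers
n = 4           # measurements per worker (balanced design)
grand = sum(x for g in logs.values() for x in g) / (k * n)

# one-way ANOVA sums of squares on the log scale
ss_between = sum(n * (sum(g) / n - grand) ** 2 for g in logs.values())
ss_within = sum((x - sum(g) / n) ** 2 for g in logs.values() for x in g)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# method-of-moments variance components (truncated at zero)
var_between = max((ms_between - ms_within) / n, 0.0)
icc = var_between / (var_between + ms_within)
print(f"within-worker correlation (ICC) = {icc:.3f}")
```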

There is also the consideration of sample size: the more complex model is more costly because it adds one parameter to estimate. So with a low sample size, its results will be more heavily influenced by the prior, which might offset its theoretical advantage. In the frequentist world there is no prior, but the estimates will be more variable/unstable.

If we take a much bigger sample (10 workers × 10 measurements per worker, with a high correlation of 0.66; example 4 in the WEBEXPO report), we get the following:

Simple model: GM = 29, GSD = 2.5

Complex model: GM = 29, GSD = 2.6

This seems close too, but (not shown here) uncertainty is appreciably higher with the more complex model: the 95th percentile and its UCL are 135 and 175 for the simpler model versus 135 and 300 for the more complex one, so there is definitely a potential impact on the decision. This is the main theoretical issue with not taking correlation into account: it causes underestimation of variance and uncertainty.
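One way to see why the uncertainty grows: with within-worker correlation rho, m measurements on the same worker carry less information than m independent ones. The classic survey-statistics design effect, DEFF = 1 + (m - 1) × rho, gives a rough effective sample size for example 4. To be clear, this is a generic heuristic for illustration, not the WEBEXPO calculation itself.

```python
# design effect for clustered sampling: 10 workers x 10 measurements, rho = 0.66
workers, m, rho = 10, 10, 0.66
n_total = workers * m
deff = 1 + (m - 1) * rho        # variance inflation factor for the mean
n_effective = n_total / deff    # roughly equivalent number of independent samples
print(f"DEFF = {deff:.2f}, effective n = {n_effective:.1f} (instead of {n_total})")
```

So the 100 correlated measurements behave more like ~14 independent ones, which is why the intervals widen so much.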

Bottom line (in my opinion): try both, since Expostats allows it. If the conclusions would be dramatically different, maybe lean towards the simpler model if you have few samples, and towards the more complex / theoretically sounder model if you have a heftier sample size.

Did I just increase the general level of confusion?