Sample number planning

Hi all,

Is there a standard / commonly accepted Bayesian approach to calculating sample numbers? I’ve always been taught to use a pre-determined sampling strategy such as the '77 NIOSH tables or a calculation with t-stat, GM, & SD.

I’m reading “Doing Bayesian Data Analysis” by John Kruschke.

It explains how to estimate desired sample numbers:

  1. Create a hypothetical informed prior.
    (In practice, I understood that this prior would be informed in the same way you inform any prior - exposure results collected from the same task last year, results from a preliminary test (EN689), literature, etc.)

  2. Generate a representative data set based on a particular plan - i.e. as if N samples were collected

  3. Tally whether the goal was reached by this data set
    (In practice, is the overexposure risk below the overexposure risk threshold?)

  4. Repeat many times to approximate the power if N samples were collected

I guess you then play around with how large N is in this simulation until the desired power is reached. (What level of power there should be when calculating the overexposure risk… I don’t know.)
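
For concreteness, here is a minimal sketch of how I picture steps 2-3 for a single simulated data set (all values are assumptions, and the overexposure risk uses a rough noninformative-prior closed form as a stand-in for a full Bayesian run):

```r
# Minimal sketch of steps 2-3 for one simulated survey (assumed values).
oel <- 100                                    # hypothetical exposure limit
n   <- 6                                      # candidate sample size
# step 2: generate one representative data set from a plausible lognormal profile
x <- rlnorm(n, meanlog = log(30), sdlog = log(2.5))
# step 3: overexposure risk = probability that the true P95 exceeds the OEL,
# here via a noncentral-t closed form (noninformative prior) as a stand-in
y <- log(x)
risk <- 1 - pt(sqrt(n) * (log(oel) - mean(y)) / sd(y),
               df = n - 1, ncp = qnorm(0.95) * sqrt(n))
goal_reached <- risk < 0.05                   # assumed overexposure risk threshold (5%)
```

Step 4 would then just repeat this many times and average goal_reached to approximate the “power” for that N.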

Am I way off? Is there reading anyone can recommend on this topic in the context of occupational hygiene?

Diagram of power analysis taken from the mentioned textbook.

Hello John,

Interesting question !

Indeed, in theory, sample size should be a function of the hypothesized truth (location and variability), the degree of confidence in the conclusion, and power (the chance that you will actually reach the desired conclusion).

There were some papers about that in occupational hygiene in the mid-90s (some refs below). The recommended sample sizes were often much higher than typical practice, but they might become more reasonable with the 70% vs 95% discussion.

The current approaches in most guidelines offer fixed sample sizes because of the associated simplicity, but of course these sizes are too high for extreme situations, and too low closer to the OEL.

Your post made me think we could probably offer a sample size calculator in expostats at some point, based on what you describe of Kruschke (I need to go have a long look there). If you play with R we could even work together on setting that up :slight_smile:

Jérôme

Lyles, R.H.; Kupper, L.L. (1996) On Strategies for Comparing Occupational Exposure Data to Limits. American Industrial Hygiene Association Journal 57(1):6-15

Thanks Jérôme!

I’m definitely going to play around in R.

When (if) I eventually get my head around it all, I’ll come back and share what I come up with. :grin:

Hey Jérôme,

I have attempted to make a Power/Sample Size calculator based on the general ideas in Kruschke.

I would love if you could have a look! (I have emailed the R files to you)

Does it make sense? Can you recommend improvements?

I made two calculators:

  • First one calculates the power of the user-determined sample size
  • Second one calculates the sample size required to achieve user-determined power (this is better designed and easier to read)

Things to note:

  • It’s my first ever R project, so forgive the coding etiquette

  • It’s REALLY slow. As it runs hundreds/thousands of WebExpo Global JAGS calcs, the second calculator took 7.5 hours in its current setup
    – To save you that pain, I had it record the calculated representative parameter values and P95 70% UCL values for each test, for reference

  • Some of the 70% UCLs of simulated data are INSANELY high! I’m not sure if they are realistic…

Ideas for improvement:

  • Find efficiencies in the looping MCMC calculations to reduce run times - reduce the number of tests per sample size? Reduce MCMC steps (down from 25,000)? Modify how information is stored?

  • Have an option for a “Routine Monitoring” sample calculator - that is, use the ‘historical data’ posterior as the prior for the ‘simulated data’ calculations

Thank you for any feedback :smiley:

Hi Jérôme (and Peter :slight_smile: )

I have a working script! And it runs within seconds.

The only problem now is that the power calculations are inconsistent from run to run. I have ideas, but I’m not sure what the solution is.

If you had time to look/discuss, that would be great. If not, can you recommend anyone to reach out to? I’ve been offering compensation for people’s time but no takers yet.

GitHub link included here.

Hello John. I’ll have a look this week. At last my agenda is clearing up a bit.

Hi again, I had a look at the Word document. Here’s my take:

I haven’t had the time to read Kruschke’s section on power, but from your description I think I understand what he proposes.

In traditional power calculation, in essence you need to state your hypothesis about the true target distribution, i.e. you need to select a single value for mu and sigma. In the papers I mentioned before, you needed to choose a GSD and then the ratio AM/OEL or P95/OEL. Once this is done, in some cases it’s possible to simply use a standard table to estimate power, but it is also quite simple to use simulation: draw n.sim samples of size n from the target distribution, apply the test, and see how often you achieve significance. It’s a guesstimate, but I would surmise n.sim = 1000 to 5000 would give a relatively stable estimate of power. Using your test (70% UCL on the P95), the estimated UCLs will vary a lot across the n.sim iterations, but to a reasonable extent, because the data is sampled from the same exact target population. 1000 to 5000 iterations would be enough to average out this sampling variability.
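
In R that could look something like the minimal sketch below (all values are assumed for illustration; the 70% UCL uses a frequentist tolerance-limit formula as a simple stand-in for the Bayesian WebExpo estimate you actually use):

```r
# Minimal sketch: traditional power simulation for a fixed hypothesized
# lognormal target population (all values below are assumptions).
oel  <- 100        # hypothetical OEL
gm   <- 30         # hypothesized geometric mean
gsd  <- 2.5        # hypothesized geometric standard deviation
n    <- 9          # candidate sample size
nsim <- 2000       # number of simulated surveys

# 70% UCL on the 95th percentile via a one-sided tolerance limit
# (stand-in for the Bayesian 70% upper credible limit)
ucl70_p95 <- function(x) {
  y <- log(x); m <- length(x)
  k <- qt(0.70, df = m - 1, ncp = qnorm(0.95) * sqrt(m)) / sqrt(m)
  exp(mean(y) + k * sd(y))
}

compliant <- replicate(nsim, {
  x <- rlnorm(n, meanlog = log(gm), sdlog = log(gsd))
  ucl70_p95(x) < oel        # apply the compliance test to this simulated survey
})
mean(compliant)             # proportion of surveys achieving significance = power
```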

In the case of Kruschke’s approach, you don’t hypothesize a fixed target population, but each time sample from a plausible target distribution according to your initial Bayesian analysis. So in order to obtain a stable estimate of power, you need to average out not only sampling variability (i.e. sampling repeatedly from the same distribution), but also the variability introduced by Kruschke’s approach, i.e. using an “uncertain” target population. I am therefore not surprised at what you report.
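
To make the contrast concrete, a rough sketch of that extra layer (assumed names: mu.chain and sigma.chain stand for the log-scale MCMC draws of mu and sigma from your initial Bayesian analysis), reusing the ucl70_p95 helper from the sketch above:

```r
# Rough sketch: each simulated survey is drawn from a *different* plausible
# target population, sampled from the posterior of the initial analysis.
power_uncertain_target <- function(mu.chain, sigma.chain, n, nsim, oel) {
  hits <- replicate(nsim, {
    i <- sample.int(length(mu.chain), 1)                   # pick one plausible (mu, sigma)
    x <- rlnorm(n, meanlog = mu.chain[i], sdlog = sigma.chain[i])
    ucl70_p95(x) < oel                                     # same compliance test as before
  })
  mean(hits)        # averages over sampling variability AND target-population uncertainty
}
```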

Finally, I think I need to think a bit more about the first stage of your procedure. In the traditional approach to power calculation, the essence is: if my true population is such, I have “power”% chance of making the demonstration using this sample size.

In what you describe, is it something of the kind: given what is suggested by my initial survey, the chances that I would demonstrate compliance (e.g. 70% UCL below OEL) when I perform a full survey (with n samples) are XX%? If that’s the case, the probability might often be low whatever the sample size, if the suggested true population is non-compliant.

Instead of power, I would rather call this “probability of demonstrating compliance”, given my initial knowledge, a certain sample size, and my compliance criteria.

Am I making any sense ?

That makes complete sense. And thank you for taking the time to read it.

One correction would be that it doesn’t calculate just the chances of demonstrating compliance. If the initial survey data is non-compliant, it will calculate the chances of demonstrating non-compliance. That way the required sample size should decrease as the UCL gets further from the OEL, whether it’s greater or lower than the OEL.

Yes, Kruschke states that he took some liberty using the term “power”. Moreover, he was still finding the “power” for the case where an interval excludes a ‘null-hypothesis’ value, which is a bit different from what I’m proposing.

So, increasing the nsim to average out the variability makes sense - I’ve got it running tests for different nsim’s now to see if it helps.

But how might I manage the variability of an uncertain target population… I wonder… Would truncating the sigma and mu distributions be an OK thing to do? Limiting the possible data-generating parameter values, and in turn the simulated data and the corresponding UCLs?

Hi John,

Two points :

  1. I think there might be a problem with your initial definition of compliance. It stems partly from the definition of compliance in EN689, which is a statistical test: compliance = achieving significance in rejecting the hypothesis that P95 >= OEL, with alpha = 30%.

When you use your initial data to estimate its “compliance”, I don’t think you should make a statistical test, but only use the point estimate of P95 and compare it to the OEL: in the vocabulary I use in expostats, we use the initial data to guesstimate overexposure or not in the population, then we apply the simulation to see what kind of survey design might enable us to demonstrate this state of overexposure (or no overexposure) with sufficient confidence (70%) and frequently enough (80%). (A small sketch of this first step follows after the second point below.)

  2. RE managing the variability of an uncertain target pop: you have entire control over that. A power simulation’s main input is your belief of what is actually going on. In traditional methods, we choose a mu and sigma (or a P95 and sigma). Kruschke proposes to use a pilot study to estimate them, which is more realistic, but now we are unhappy with the variability. To me your options would be to play with the priors in the initial Bayesian model, especially maybe tightening the GSD prior, a driver of most of the variability; small samples will never help much with variability. You could either try a higher log-sigma-prec in the informedvar model, or go even further with a uniform à la IHDA using the “uninformative” model.
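
To illustrate the first point, a small sketch (hypothetical numbers) of what I mean by using only the point estimate at this stage:

```r
# Small sketch (hypothetical numbers): classify the initial data using only
# the point estimate of the P95, with no confidence interval or test.
oel <- 100
x0  <- c(12, 45, 27)                              # hypothetical preliminary measurements
y0  <- log(x0)
p95_hat <- exp(mean(y0) + qnorm(0.95) * sd(y0))   # plug-in point estimate of the P95
state   <- if (p95_hat > oel) "overexposure" else "no overexposure"
# the simulation then estimates how often a survey of n samples would
# demonstrate this state with 70% confidence
```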

Beautiful. Your guidance is much appreciated! I feel like I have a lot to go and play with.

very selfish :slight_smile:

I can visualize a tab in expostats saying “given this data, you would need XX additional samples to demonstrate EN689 compliance with 80% probability” - the “JohnP” tab!

It works!

I’m not sure what I’ve done is correct… but it works.

What I did:

  • Kept the default informedvar prior (I found it worked best)
  • Seeded JAGS
  • Truncated the posterior parameter distributions to their 20% HDI (i.e. the 5,000 most likely values of mu and sigma from the 25,000-step MCMC) - see the sketch after this list
  • Seeded the sampling of parameter values
  • Repeated the simulations multiple times and ran a linear regression (power ~ sample size) to find the sample size at 80% ‘power’.
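
A rough sketch of the truncation step (assumed names: mu.chain and sigma.chain are the log-scale posterior draws from JAGS; a central interval stands in here for the HDI I actually used):

```r
# Rough sketch (assumed names): keep only the most plausible posterior draws
# of mu and sigma before sampling data-generating parameter values.
keep_central <- function(chain, keep = 0.20) {
  lim <- quantile(chain, probs = c((1 - keep) / 2, 1 - (1 - keep) / 2))
  chain[chain >= lim[1] & chain <= lim[2]]      # central `keep` mass, stand-in for the HDI
}
mu.trunc    <- keep_central(mu.chain)           # mu.chain / sigma.chain: posterior draws from JAGS
sigma.trunc <- keep_central(sigma.chain)
```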

Seeding stuff didn’t change the results, just made them more consistent.

It now gives the same result 95%+ of the time; otherwise it may vary by +/- 1 sample.

Is the Shiny code for expostats public? Could you share ‘tool1 simplified’?

I want to make a basic page for the calculator as a proof of concept, but learning Shiny is making me crazy :dizzy_face:

Hello John,

Yep, it’s public, though it is a bit messy because of the bilingual structure.

See here, this is a copy of the shinyapp.

The files useful to you would include app.R (which contains the UI and SERVER parts) and the script folder, which includes the functions.

Everywhere you see “gett”: it’s basically a call to the bilingual part, each code being a sentence or part of it.

Jérôme

Can I get your opinion again?

  1. Where the initial P95 is greater than the OEL, the test should be whether 80% of the simulations put the 70% LOWER credible limit above the OEL. Does that make sense? Strangely, more samples doesn’t move the LCL the way it does the UCL.

  2. Unsurprisingly, the sample size gets really large as the initial P95 approaches the OEL. My two ideas are to either implement an arbitrary maximum sample size (50?) or change the test when the P95 is very close to the OEL (such as requiring a credible interval of a certain proportion). Thoughts?

Thanks again.

P.S. I figured out Shiny, so I have a working prototype shinyapps.io app. I just want to make it prettier before sharing.

Hi John

For point 1, well, on the one hand it makes sense: when you initially estimate the P95 below the OEL, how many samples do you need to demonstrate the true value is below the OEL with 70% confidence; conversely, when you initially estimate the P95 above the OEL, how many samples do you need to demonstrate the true value is above the OEL with 70% confidence.
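
For the “above the OEL” case, the mirrored limit could be sketched like this (same frequentist tolerance-limit stand-in as earlier in the thread, rather than the Bayesian credible limit your script actually computes):

```r
# 70% LOWER confidence limit on the 95th percentile (tolerance-limit stand-in)
lcl70_p95 <- function(x) {
  y <- log(x); m <- length(x)
  k <- qt(0.30, df = m - 1, ncp = qnorm(0.95) * sqrt(m)) / sqrt(m)
  exp(mean(y) + k * sd(y))
}
# demonstrating overexposure with 70% confidence: lcl70_p95(x) > oel
```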

On the other hand, if you refer to EN689 or the AIHA definitions of “compliance”, they define compliance as being sure that the true P95 is below the OEL with 70% confidence. If you can’t make this demonstration, you are “non-compliant”. If you use this framework, then if the point estimate is above the OEL, it would be almost impossible to achieve compliance (except if your initial estimate was really high compared to reality). In that case, the advice would be “do something about it and forget measurements”.

So it all depends how you frame the notion of compliance.

For the “more samples for the LCL” point, I am not surprised: lognormal CIs are not symmetrical, the LCL usually being closer to the estimate, and probably needing more samples to move noticeably.

For point 2: how many samples do you think you could convince your boss to invest in? I think you probably know this better than I do. I’d probably report “>50” whenever n is above that. That said, you could also right away add “level of confidence” as a customizable parameter. However, I would not mention it only when the P95 is close to the OEL: that would imply it’s OK to lower our confidence criteria when the situation appears “dark orange”. IMO anyway.

Cheers

Jérôme

Working beta for the calculator -

padgham.shinyapps.io/samplesize

Still lots of work to do:

  • The result should be explained clearly (perhaps graphically).
  • The labels need to be fixed
  • More testing for different situations…

I provide this beta in celebration of being accepted to present this calculator at the AIOH conference this year. :partying_face:

I took your advice for non-compliance (prelim P95 > OEL = no samples recommended).

As for the “dark orange” situations, I’ve capped the sample size at 50 - I’m surveying other hygienists and it seems 50 samples is very high already… so I think it will do as a max. Changing the confidence gets really tricky - I don’t have faith that the average user wouldn’t just adjust it to justify smaller sample sizes, which defeats the intended point of the calculator.

I’m getting there… slowly but surely. :smiley:

If anyone does get to play before I fix it… the first box is meant to be the OEL, the second box is the preliminary data (at least 3) separated by commas (e.g. 21.2, 39, 28).

Hello,

The nerd stuff (most important for me :slight_smile:) does not show up when I click on the tab

Also, I tried with the following data: OEL = 500 plus the default expostats dataset. I tried twice; both times, after ~1 min, when the progress bar reached 10, I got disconnected.

Jé.

Apologies! Both issues are now fixed.

Still deciding how to explain it all. But I think the main points are there.

padgham.shinyapps.io/samplesize