When designing a scientific experiment, a key decision is the sample size needed for the results to be meaningful.
How many cells do I need to measure? How many people should I interview? How many patients should I try my new drug on?
This is especially important for quantitative studies, where we use statistics to determine whether a treatment or condition has an effect. Indeed, when we test a drug on a (small) number of patients, we do so in the hope that the results will generalise to any patient, because it would be impossible to test it on everyone.
The solution is to perform a "power analysis": a calculation that tells us whether, given our experimental design, the statistical test we plan to use is able to detect an effect of a certain magnitude, if that effect is really there. In other words, it tells us whether the experiment we're planning could give us meaningful results.
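As a rough illustration of how such a calculation works (a minimal sketch, not the method used in the paper): the per-group sample size for a two-sample t-test can be approximated from the desired effect size (Cohen's d), significance level, and power using the standard normal approximation. The function name and defaults below are illustrative choices.

```python
# Sketch: approximate per-group sample size for a two-sided, two-sample t-test,
# using the normal approximation n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
import math
from scipy.stats import norm

def sample_size_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Smallest n per group needed to detect a standardised effect of
    `effect_size` (Cohen's d) at significance level `alpha` with the
    given power, under the normal approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = norm.ppf(power)           # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# A "large" effect (d = 0.8) needs far fewer subjects per group
# than a "small" one (d = 0.2):
print(sample_size_per_group(0.8))  # → 25
print(sample_size_per_group(0.2))  # → 393
```

Note how strongly the required sample size depends on the effect size we decide we want to detect, which is exactly why that choice matters.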
But to do a power analysis we first need to decide what size of effect we would like to detect. So... do scientists actually do that?
We explored this question in the context of the chronic variable stress literature.
We found that only a few studies gave a clear justification for the sample size used, and of those that did, only a very small fraction used a biologically meaningful effect size as part of the sample size calculation. We discuss the challenges of identifying a biologically meaningful effect size and ways to overcome them.
Read more here!
https://physoc.onlinelibrary.wiley.com/doi/10.1113/EP092884
#experiments #ExperimentalDesign #effectsize #statistics #stress #research #article #power #biology