An overview of 67 different effect size estimators, including confidence intervals, for two-group comparisons:
https://journals.sagepub.com/doi/full/10.1177/25152459251323186
The authors have also developed a Shiny web app to evaluate these.
#statstab #305 The Fallacy of Employing Standardized Regression Coefficients and Correlations as Measures of Effect
Thoughts: Everyone loves effect sizes, but mind how you compute and interpret them.
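A tiny R sketch of the core problem (all numbers hypothetical): the standardized coefficient is beta = b * sd(x)/sd(y), so it shifts with predictor spread even when the raw effect b is identical.
```r
b <- 2                        # identical raw slope in two hypothetical studies
sd_x1 <- 1.0; sd_x2 <- 2.0    # predictor spread differs by design/sampling
sd_y  <- 5.0                  # held fixed here for simplicity
beta1 <- b * sd_x1 / sd_y     # 0.4
beta2 <- b * sd_x2 / sd_y     # 0.8: same effect, double the "effect size"
```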
#statstab #294 So You Think You Can Graph - effectiveness of presenting the magnitude of an effect
Thoughts: Competition in the many ways to display effect magnitude. Some cool ideas.
#dataviz #stats #effectsize #effects #plots #figures #cohend
https://amplab.colostate.edu/SYTYCG_S1/SYTYCG_Season1_Results.html
#statstab #281 Correcting Cohen's d for Measurement Error (A Method!)
Thoughts: Scale reliability can be incorporated into the effect size computation (i.e., to remove attenuation).
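A one-line sketch of the textbook disattenuation idea (the linked paper's exact method may differ): divide the observed d by the square root of the scale's reliability.
```r
d_obs  <- 0.40                 # observed Cohen's d (hypothetical)
rel    <- 0.80                 # scale reliability, e.g. Cronbach's alpha
d_corr <- d_obs / sqrt(rel)    # ~0.447, attenuation removed
```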
An even better solution would be a table where you could select which type of #effectSize measure to show (calculated using e.g. these calculations https://www.escal.site/). If anyone has the skills to implement that in #wikipedia #markup, please do so!
It always takes me some minutes to look up the interpretation guidelines for various effect size measures (yes, I know the rules of thumb are somewhat arbitrary). Today I edited Wikipedia to show three different guidelines for four different measures in the same table. Hopefully this can save some time for other researchers.
#methodology #psychometrics #EffectSize #OpenScience #wikipedia
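Relatedly, the {effectsize} R package encodes several of these rule-of-thumb sets, so you can look them up programmatically; a quick sketch (rule names as in its docs):
```r
library(effectsize)
interpret_cohens_d(0.45, rules = "cohen1988")   # "small" (medium starts at 0.5)
interpret_r(0.25, rules = "cohen1988")          # "small" (medium starts at 0.3)
```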
#statstab #265 The limited epistemic value of “variation analysis” (R^2)
Thoughts: Interesting post and comments on what we can and can't say from an r2 metric.
#stats #r2 #effectsize #variance #modelcomparison #models #causalinference
https://larspsyll.wordpress.com/2023/05/23/the-limited-epistemic-value-of-variation-analysis/
#statstab #260 Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data
Thoughts: "A_w and d_r were generally robust to these violations"
#robust #effectsize #ttest #2groups #metaanalysis #assumptions #cohend
#statstab #256 Rule of three (95%CI for no event)
Thoughts: Sometimes you have 0 recorded events, so how do you compute a Confidence Interval? Using the rule of 3!
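The rule in one line: with 0 events in n independent trials, an approximate one-sided 95% upper bound for the event probability is 3/n. A quick R check against the exact bound:
```r
n <- 100
3 / n               # rule-of-three upper bound: 0.03
1 - 0.05^(1 / n)    # exact bound from solving (1 - p)^n = 0.05: ~0.0295
```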
#statstab #254 Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations
Thoughts: I share tutorial papers, as people resonate with different writing styles and explanations.
#statstab #243 Approaches to Calculating Number Needed to Treat (NNT) with Meta-Analysis
Thoughts: Ppl love a one-number summary. NNT has won out in medical/clinical settings. So, here are some ways to compute them (for what they're worth); basic version sketched below.
#NNT #metaanalysis #R #effectsize #statistics #clinical #clinicaltrials
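The single-study version is just the reciprocal of the absolute risk reduction; the linked paper covers the trickier meta-analytic variants (numbers hypothetical):
```r
p_control   <- 0.20             # control-group event risk
p_treatment <- 0.12             # treatment-group event risk
arr <- p_control - p_treatment  # absolute risk reduction: 0.08
ceiling(1 / arr)                # NNT = 13: treat ~13 patients to prevent one event
```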
#statstab #230 Power and Sample Size Determination
Thoughts: Frequentist power is a complicated and non-intuitive thing, so it's good to read various tutorials/papers until you find one that sticks.
#stats #poweranalysis #power #NHST #effectsize
https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_power/bs704_power_print.html
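For a concrete anchor, base R's power.t.test reproduces the classic "64 per group for d = 0.5 at 80% power" result:
```r
# Two-sample t test; with sd = 1 the raw difference delta equals Cohen's d.
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)
# n = 63.77 -> 64 participants per group
```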
#statstab #226 Standardization and other approaches to meta-analyze differences in means
Thoughts: "standardization after meta-analysis...can be used to assess magnitudes of a meta-analyzed mean effect"
#statstab #210 Effect Sizes for ANOVAs {effectsize}
Thoughts: ANOVAs are rarely what ppl want to report, but if it is then report an effect size! Just mind the % for the CIs (see the sketch below).
#ANOVA #effectsize #APA #reporting #nonparametric #eta2 #ordinal
https://easystats.github.io/effectsize/articles/anovaES.html
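A minimal usage sketch (defaults can change across {effectsize} versions; recent ones report a one-sided CI for eta2 by default, hence the warning about the %):
```r
library(effectsize)
m <- aov(mpg ~ factor(cyl), data = mtcars)
eta_squared(m, partial = FALSE)   # classical eta2 with its CI
```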
#malcolmGladwell has another book out, I guess trying to rescue his much-nitpicked #TippingPoint.
IDK if he's a net positive force in the world or not. As a #psychologist I've occasionally looked up the original #research he cites. He tends to portray findings in black-and-white terms, like "People do X in Y situation!" when, most often, I've found the research best supports something like "In some studies 12% of people did X in Y situation despite previous #models predicting it should only be 7%" or "The mean of the P group was 0.3 standard deviations higher than the mean of the Q group".
I see many of his grand arguments as built more or less on a house of cards. Or rather, built on a house of semi-firm jell-o that he treats as if it were solid bricks.
I'm not knocking (most of) the #behavioralScience he cites; hell, I'm a behavioral scientist and I think this meta-field has a ton to offer. I just think it's important to keep #EffectSize and #PracticalSignificance built into any more complex theories or models that rely on the relevant research, instead of assuming that #StatisticalSignificance means "Everything at 100%". I'm sure there's some concise way to say this.
Overall, I think he plays fast and loose with a lot of scientific facts, stacking them up as if they were all Absolutely Yes when they're actually Kinda Maybe or Probably Sort Of. I don't think the weight of the stack can be borne, given the accumulated uncertainty and partial applicability indicated by the component research.
So I take everything he says with huge grains of salt and sometimes grimaces, even though I think sometimes he identifies really interesting perspectives or trends.
But is it overall good to have someone presenting behavioral research, heavily oversimplified to fit the author's pet theory? It gets behavioral science into the public eye. It helps many people with no connection to behavioral science understand the potential usefulness and perhaps scale of these fields. It also sets everyone--especially behavioral scientists--up for a fall. It's only a matter of time after each of his books before people who understand the research far better than he does show up to set the record straight, and then what happens to public confidence in behavioral science?
Meh.
#statstab #159 Evaluation of various estimators for standardized mean difference in meta-analysis
Thoughts: Hedges' g is not a good choice for meta-analysis. Cohen's d may be better. Paper has code for various packages.
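For reference, the usual small-sample link between the two (g shrinks d slightly; see the paper for why that correction can misbehave in meta-analysis):
```r
# g = d * J, with the common approximation J = 1 - 3 / (4 * df - 1).
hedges_g <- function(d, n1, n2) {
  df <- n1 + n2 - 2
  d * (1 - 3 / (4 * df - 1))
}
hedges_g(0.5, 20, 20)   # ~0.490 for two groups of 20
```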
#statstab #157 Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data
Thoughts: You can never have enough (confusing) effect size measures. At least make them appropriate for your data.
#statstab #153 Difference between Cohen's d and beta coefficient in a standardized regression
Thoughts: This relationship between beta and d may be useful in some reporting edge cases, like #metaanalysis
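The textbook version of that relationship, for a two-group 0/1 predictor with equal group sizes (where the standardized beta from a simple regression equals the point-biserial r):
```r
d_to_r <- function(d) d / sqrt(d^2 + 4)       # equal-n two-group case
r_to_d <- function(r) 2 * r / sqrt(1 - r^2)
d_to_r(0.5)            # ~0.243
r_to_d(d_to_r(0.5))    # 0.5, round trip
```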
"Rodent chronic variable stress procedures: a disjunction between stress entity and impact on behaviour"
https://www.biorxiv.org/content/10.1101/2024.07.04.602063v1.full.pdf+html
We systematically investigated 350+ studies using chronic variable stress procedures in rodents and assessed the characteristics of each procedure (how many stressors were used, how many different types, and for how long), then measured the reported effect sizes for those using behavioural tests as an outcome.
Some key disconcerting findings from our study:
- the large majority of articles use a unique protocol, and articles featuring the same protocol were from the same authors (aside from one case)
- 91% of articles don't provide any justification for their choice of procedure
This is scientifically and ethically troubling given that CVS procedures deliberately impose suffering on animals.
- when looking at the outcome behavioural procedures measured in the studies (some of which impose further stress on the animals), we found very little correlation between effect size and the characteristics (e.g., length, strength) of the stress protocol. When there was a statistically significant effect, it was generally very small.
We conclude:
"Most of the studies in our review sought evidence for interventions that would prevent or reverse the effects of chronic stress. But if we are to have any confidence that translational CVS studies provide a foundation for potential clinical interventions, we must take an evidence- and ethics-informed approach to their design."
#chronicVariableStress #stress #reproducibility #ethics #effectSize #translationalResearch #physiology
#statstab #113 Effect Sizes for ANOVAs w/ {effectsize}
Thoughts: An easy package for reporting various effect sizes for linear models. Includes eta2, eta2-partial, omega2, epsilon2, and eta2-generalized; also for ordinal models.
https://easystats.github.io/effectsize/articles/anovaES.html
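Companion sketch to the #210 post above: the same package exposes the less biased alternatives listed here.
```r
library(effectsize)
m <- aov(mpg ~ factor(cyl), data = mtcars)
omega_squared(m)     # omega2, less biased than eta2 in small samples
epsilon_squared(m)   # epsilon2
```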