#PsychMethods

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2024-11-04

Do professors with less-PC views self-censor?

Clark et al. reported only the linear relationship: the less PC their view, the more reluctant professors were to share it. Kudos to Clark et al. for publishing their data so Luke could detect a better-fitting non-linear, non-unified explanation: most professors were not self-censoring; they were either uncertain or else unreluctant to share.

doi.org/10.31234/osf.io/ab34v
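For intuition, here's a minimal sketch of the kind of linear-versus-non-linear model comparison at issue (simulated placeholder variables, not Clark et al.'s data or Luke's actual reanalysis):

# Compare a straight-line fit to one that allows curvature, using AIC.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
pc_score = rng.uniform(-2, 2, 500)                      # how "PC" the view is (placeholder)
reluctance = 0.5 * pc_score**2 + rng.normal(0, 1, 500)  # non-linear ground truth (placeholder)
df = pd.DataFrame({"pc_score": pc_score, "reluctance": reluctance})

linear = smf.ols("reluctance ~ pc_score", df).fit()
nonlinear = smf.ols("reluctance ~ pc_score + I(pc_score**2)", df).fit()
print(linear.aic, nonlinear.aic)  # lower AIC indicates the better-fitting model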

#edu #higherEd #psychMethods #logic #replicability #manyAnalysts #metaScience

Pages 7 and 8, showing Figure 8.
Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2024-01-25

Surprised this conclusion survived #peerReview: a "program succeeded in promoting positive attitudes and beliefs" about "#implicitBias #education ...among ...police" (N = 145).

The 1st survey was online, but the 2nd was in-person. And the 1st survey's questions weren't about the same trainings as the 2nd survey's.

So any differences in answers are as explainable by differences between the surveys as they are by one #education program.

doi.org/10.1080/09515089.2023.

#psychMethods #logic #psychology

Pages 13 and 14, showing information about the samples at Time 1 (T1) and Time 2 (T2).
Pages 15 and 16, showing information about the online survey at Time 1 (T1).
Pages 17 and 18, showing information about the in-person survey at Time 2 (T2).
Pages 19 and 22, showing how officers' answers to some questions at Time 1 (T1) compared to their answers to different questions at Time 2 (T2).
Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-10-17

When people ask me how to estimate the sample size needed for their research question, my answers fall broadly into two buckets: power analysis and precision for planning analysis. But there seem to be other options as well.

What's your preferred method?
Preferred software? (Or software package?)

qr.ae/pKnFql
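For concreteness, here's a minimal sketch of both buckets (the effect size d = 0.5, 80% power, and 0.2 margin of error are placeholders, and the precision step uses a rough normal approximation to SE(d) rather than a full AIPE routine):

# Two common ways to choose n per group for a two-group comparison.
import numpy as np
from statsmodels.stats.power import TTestIndPower

# 1) Power analysis: n per group to detect d = 0.5 with 80% power at alpha = .05.
n_power = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)

# 2) Precision for planning: n per group so the 95% CI for d has a half-width
#    of about 0.2, using the approximation SE(d) ~ sqrt(2/n).
z, margin = 1.96, 0.2
n_precision = np.ceil((z * np.sqrt(2) / margin) ** 2)

print(round(n_power), int(n_precision))  # roughly 64 vs. 193 per group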

#Stats #QuantPsych #PsychMethods #R #TheNewStats

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-10-10

Are you more likely to fall for trick (reflection test) questions on a smartphone or PC?

Turned out it didn't make a difference unless you let people self-select which device they used — and even that difference was better explained by gender and self-reported intuitive decision style.

doi.org/10.1080/07421222.2023.

#decisionScience #cogSci #PsychMethods #UX #tech

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-09-14

Remember that "...WEIRDest people in the world" paper?

Now #xPhi has one: Of "171 experimental philosophy studies [from] 2017 [to] 2023 [including one of mine] most ...tested only Western populations but generalized beyond them without justification."

Incentives may be part of the issue: "studies with broader conclusions ...had higher citation impact."

doi.org/10.1017/psa.2023.109

#xPhi #PsychMethods #Culture #Demography #PhilSci

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-09-08

"Deontological and absolutist moral dilemma judgments convey self-righteousness" in U.S., German-speaking, and British participants (N = 1254).

In the Journal of Experimental Social Psychology: doi.org/10.1016/j.jesp.2023.10

#ProcessDissociation #DecisionScience #psychMethods #moralPsychology #xPhi

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-08-30

#Civicbase brings upvoting and downvoting to preference measurement—but with a budget.

Participants can select agree or disagree buttons (up to 7 times) to allocate a limited budget of voting credits (which carry over to future studies?).

May reveal priorities that Likert scales and ranked-choice formats cannot.

doi.org/10.1002/aaai.12103

Presumably, this could be used for all sorts of preferences (beyond civics/politics).
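Here is a rough sketch of how a budgeted allocation could work; the 100-credit budget and the quadratic cost rule are assumptions for illustration, not necessarily Civicbase's exact implementation:

BUDGET = 100  # total credits per participant (assumed placeholder)

def cost(votes: int) -> int:
    """Casting `votes` agree-or-disagree votes on one item costs votes**2 credits (assumed rule)."""
    return votes ** 2

def allocate(votes_by_item: dict[str, int], budget: int = BUDGET) -> int:
    """Return credits remaining, or raise if the allocation exceeds the budget."""
    spent = sum(cost(abs(v)) for v in votes_by_item.values())
    if spent > budget:
        raise ValueError(f"Allocation costs {spent} credits; budget is {budget}.")
    return budget - spent

# Strong support for one item, milder positions on two others (negative = disagree):
remaining = allocate({"policy_a": 7, "policy_b": -3, "policy_c": 2})  # 49 + 9 + 4 = 62 credits
print(remaining)  # 38 credits left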

#measurement #PsychMethods #openSource #decisionScience #poliSci #cogSci #gamification

Nick Byrd, Ph.D. (@ByrdNick@nerdculture.de)
2023-08-28

How do we know what participants thought when we presented our stimuli?

#ProcessTracing can reveal what people saw (e.g., eye-tracking), consciously thought (e.g., concurrent think-aloud), etc.

Combining those two methods revealed:
(1) thinking aloud didn't impact gaze or word count
(2) retrospective think-aloud left out thoughts that were mentioned concurrently
(3) retrospective think-aloud introduced thoughts unmentioned concurrently

doi.org/10.1007/978-3-319-1495

#PsychMethods #CogSci #xPhi

2023-01-05

Planning a longitudinal study? Here are four questions you should ask (a toy sketch of the first follows the list):

🔹 How should time be scaled?

🔹 How many assessments are needed?

🔹 How frequently should assessments occur?

🔹 When should assessments happen?
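
For the first question, here's a toy sketch (simulated placeholder data, not from Hopwood et al.) of how the choice of time metric, wave number versus elapsed months, changes what a growth model's slope means:

# The same unevenly spaced assessments modeled with two different time scalings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, months = 100, np.array([0, 1, 3, 9])  # four assessments, unevenly spaced
df = pd.DataFrame({
    "id": np.repeat(np.arange(n), len(months)),
    "wave": np.tile(np.arange(len(months)), n),
    "months": np.tile(months, n),
})
df["y"] = 0.2 * df["months"] + rng.normal(0, 1, len(df))  # placeholder outcome

m_wave = smf.mixedlm("y ~ wave", df, groups=df["id"]).fit()      # slope per wave
m_months = smf.mixedlm("y ~ months", df, groups=df["id"]).fit()  # slope per month
print(m_wave.params["wave"], m_months.params["months"])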

Hopwood et al. (2022). “Connecting theory to methods in longitudinal research”:
doi.org/10.1177/17456916211008

Author on Mastodon: @aidangcw

#Stats
#Statistics
#Methodology
#Psychology
#PsychMethods
#ResearchDesign
#LongitudinalResearch

Advances in methods for longitudinal data collection and analysis have prompted a surge of research on psychological processes. However, decisions about how to time assessments are often not explicitly tethered to theories about psychological processes but are instead justified on methodological (e.g., power) or practical (e.g., feasibility) grounds. In many cases, methodological decisions are not explicitly justified at all. The disconnect between theories about processes and the timing of assessments in longitudinal research has contributed to misspecified models, interpretive errors, mixed findings, and nonspecific conclusions. In this article, we argue that higher demands should be placed on researchers to connect theories to methods in longitudinal research. We review instances of this disconnect and offer potential solutions as they pertain to four general questions for longitudinal researchers: how time should be scaled, how many assessments are needed, how frequently assessments should occur, and when assessments should happen.
2022-12-28

New paper provides a history of “voodoo science,” discussing the controversy surrounding Vul et al.’s (2009) article “Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition.”

Five quotes follow: 🧵👉

🔓 doi.org/10.3390/socsci12010015

#MetaScience
#Neuroscience
#Neuroimaging
#MetaResearch
#PsychMethods
#ReplicationCrisis
#PhilosophyOfScience
#PhilSci
#Fmri
#VoodooCorrelations
#UseNovelty
#MultipleTesting

“Voodoo” Science in Neuroimaging: How a Controversy Transformed into a Crisis

Abstract: Since the 1990s, functional magnetic resonance imaging (fMRI) techniques have continued to advance, which has led researchers and non specialists alike to regard this technique as infallible. However, at the end of 2008, a scientific controversy and the related media coverage called functional neuroimaging practices into question and cast doubt on the capacity of fMRI studies to produce reliable results. The purpose of this article is to retrace the history of this contemporary controversy and its treatment in the media. Then, the study stands at the intersection of the history of science, the epistemology of statistics, and the epistemology of science. Arguments involving actors (researchers, the media) and the chronology of events are presented. Finally, the article reveals that three groups fought through different arguments (false positives, statistical power, sample size, etc.), reaffirming the current scientific norms that separate the true from the false. Replication, forming this boundary, takes the place of the most persuasive argument. This is how the voodoo controversy joined the replication crisis.
2022-12-21

Critical Metascience:

2022 has been a bumper year for what I’d call “critical metascience” - work that takes a step back and offers a critical perspective in the field.

My Top 10 papers of 2022 in this area are, in alphabetical order… 🥁 🧵👉

#OpenScience
#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#SociologyofScience
#ScienceofScience
#PhilosophyOfScience
#PhilSci
#PhilScidon

1/12

2022-12-15

Replicability and Theory:

“Our results suggest that many of the practices that have been proposed as a means to improve the replicability of psychological research—such as open data and methods…preregistration and Registered Reports…and basing conclusions on Bayesian inference…or p < .005 rather than p < .05…—do indeed improve confidence in replicability among our sample.”

Continued 🙂 🧵👉

#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#PhilosophyOfScience
#PhilSci
#PhilScidon

2022-12-14

@MarkRubin This is massively simplistic. Hypotheses include the criteria for delineating phenomena in need of explanation, satisfaction criteria for success, disciplinary standards and practices, and taxonomies of subjects under investigation. IMO.
#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#PhilosophyOfScience
#PhilSci
#PhilScidon

2022-12-14

What’s a hypothesis?

“A hypothesis is not simply a guess about the result of an experiment. It is a proposed explanation that can predict the outcome of an experiment. A hypothesis has two components: (1) an explanation and (2) a prediction. A prediction simply isn’t useful on its own.” (Haroz, 2014)

Blog post: steveharoz.com/blog/2014/myste

#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#PhilosophyOfScience
#PhilSci
#PhilScidon

2022-12-13

A “quietist” response to the replication crisis:

“The quietist approach proposes that we should just accept that it is in the nature of science that we get things wrong, and that this is particularly true with sciences in early stages of development.”

Bird (2021). Understanding the replication crisis as a base rate fallacy.

🔒 doi.org/10.1093/bjps/axy051

🔓 kclpure.kcl.ac.uk/portal/files

#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#PhilosophyOfScience
#PhilSci
#PhilScidon
@philosophy

5.1. Quietism: This is the nature of science

The quietist approach proposes that we should just accept that it is in the nature of science that we get things wrong, and that this is particularly true with sciences in early stages of development. Popper ([1963]), for example, urged scientists to formulate bold hypotheses. But bold hypotheses are likely to be false. So we should, like Popper himself, expect to find that our hypotheses, although they may pass some tests, will be falsified in due course. Thus a corollary of the quietist position is that one should have a corresponding lowered credence even in hypotheses that have passed the tests that we have set them. That in turn should influence how we think of new hypotheses. In the light of the falsity feedback effect, one should be wary of placing too much prior confidence in new hypotheses that are modelled on other, apparently successful hypotheses. The quietist must accept that the difference between passing and failing a single test does not correlate that closely with the difference between truth and falsity. Consequently the quietist should value replication studies much more than they are currently valued in many areas of science. Klein ([2014], p. 327), for example, reports that the Journal of Personality and Social Psychology does not publish replication studies as a matter of policy, even when the replication concerns an alleged finding of considerable significance. The quietist should deplore this.
2022-12-10

Bad Stats / Poor Methods:

Qualitative study finds 39.8% of 548 psychology researchers believe that statistics and/or research methods are misused and/or misunderstood in the field.

Miranda et al. (May 2022). How do researchers in psychology perceive the field? A qualitative exploration of critiques and defenses. Collabra: Psychology.

doi.org/10.1525/collabra.35711

#Psychology
#Stats
#Statistics
#OpenScience
#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis

Abstract. Table of criticisms.
2022-12-10

No evidence of p-hacking in imaging research:

Analysis of 4,105 randomly sampled p-values finds no evidence of p-hacking in work published in over 100 imaging journals since 1972.

Rooprai et al. (2022): doi.org/10.1177/08465371221139
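
For a sense of what one simple check can look like, here's a caliper-style sketch on placeholder p-values (uniform, i.e., no p-hacking); this is illustrative only and not necessarily the authors' method:

# Are p-values just below .05 over-represented relative to those just above it?
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
p_values = rng.uniform(0, 1, 4105)  # placeholder for the sampled p-values

below = int(np.sum((p_values > 0.04) & (p_values <= 0.05)))
above = int(np.sum((p_values > 0.05) & (p_values <= 0.06)))

# Absent p-hacking, these two narrow bins should be roughly equally populated;
# a surplus just below .05 is a warning sign.
result = binomtest(below, n=below + above, p=0.5, alternative="greater")
print(below, above, result.pvalue)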

#Stats
#Statistics
#OpenScience
#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis

Abstract
Amanda Kay Montoya (@akmontoya@mstdn.social)
2022-12-09

Do you have #MissingData? (Yes, we all do.) Do you worry your imputation model may not be correct? (Yes, we all do!) Check out this paper by colleagues and an alum from #UCLA for a description of methods to evaluate the compatibility of your imputation model! #Statistics #PsychMethods #QuantMethods doi.org/10.3758/s13428-021-017
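
A minimal sketch of the general idea, using chained-equation imputation and a crude observed-versus-imputed comparison on fabricated toy data; this is an illustrative diagnostic, not the specific compatibility methods from the linked paper:

# Impute with MICE, then compare observed and imputed values on one variable.
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData

rng = np.random.default_rng(1)
df = pd.DataFrame({"x": rng.normal(size=200), "y": rng.normal(size=200)})
df.loc[rng.choice(200, 40, replace=False), "y"] = np.nan  # inject missingness

mask = df["y"].isna().to_numpy()  # remember which rows were missing
imp = MICEData(df)
imp.update_all(10)  # run 10 cycles of chained-equation imputation

observed = df["y"].to_numpy()[~mask]
imputed = imp.data["y"].to_numpy()[mask]
print(observed.mean(), imputed.mean())  # a large gap hints at a poor imputation model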

2022-12-03

Looks like a great talk from Stephan Guttinger on Questionable Research Practices

“What should be abandoned is not the idea of questioning practice, but the idea that there is a class of questionable research practices.”

Slides: philstatwars.files.wordpress.c

#OpenScience
#MetaScience
#MetaResearch
#PsychMethods
#ReplicationCrisis
#QRPs

Slide from presentation:

SUMMARY:

1. Hard to point to individual practices that are inside or outside of the class of QRPs. Label does not serve as a good guide to complex and shifting landscape of experimental practice (empirically inadequate)

2. Label has become supercharged —“QRP” now stands for detrimental practices, dishonest science, etc. Encourages blanket exclusions of certain practices and homogenised methodological landscape

We have a label that is an inaccurate guide but which wields great normative power → potentially damaging to science
2022-11-30

“We are not only in a replication but an interpretation crisis, a crisis of theory building.”

Benjamin Krämer (@benjkraemer) (2022, November). Why are most published research findings under-theorized? In Questions of Communicative Change and Continuity.

🔓 nomos-elibrary.de/10.5771/9783

#OpenScience
#MetaScience
#PsychMethods
#ReplicationCrisis
#ScienceofScience
#PhilosophyOfScience
#PhilSci
#PhilScidon
#Communication

Abstract. Extract from article.
