#MetaSci

2025-03-13

Our #APC analysis preprint was cited in a Wall Street Journal story today, along with Elsevier's response.

wsj.com/business/media/scienti

#AcademicPublishing #MetaSci #ScholCommLab

A study recently estimated that the top five publishers earned nearly $9 billion in article-processing charges between 2019 and 2023. Elsevier alone collected an estimated $583 million in such charges in 2023, the authors calculated. 

An Elsevier spokesman disputed the estimates but didn't provide alternative figures for journal revenue.

2025-02-12

I've co-authored a piece calling on the open science community to lead by example in defending science. We are well positioned to combat the efforts by Trump, Musk & others that are anti-science, anti-evidence and anti-inclusion.

This is also a restatement of the importance of diversity.

Time to practice what we preach!

Read it here: upstream.force11.org/open-scie

#academicsky #openscience #metasci #ScholComm

"Reliability and validity are important properties for research methods to have. Yet, we also show that reliability and validity fall short of other epistemic virtues that are crucial to the quality of research methods" (Ventura, 2025). doi.org/10.1007/s112... #Methodology #MetaSci #PhilSci

How should we measure the quality of experimental research? With talk of a looming “replicability crisis”, this question has gained additional significance. Yet, common measures of research quality based on reliability and validity do not always track core epistemic virtues. To remedy this issue, we draw on information theory and propose a measure of research quality based on mutual information. Mutual information measures how much information an experimental method carries about the world. We show that this measure tracks epistemic virtues that reliability and validity do not. We conclude by discussing implications of this information-theoretic measure of research quality and address some limitations of this approach.
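The mutual-information idea can be made concrete with a toy binary experiment. This is only an illustrative sketch (not the authors' implementation): a method with sensitivity P(R=1|W=1) and specificity P(R=0|W=0) carries I(W;R) bits about a binary world state W.

```python
import math

def mutual_information(prior, sensitivity, specificity):
    """I(W;R) in bits between a binary world state W and a binary
    experimental result R, given P(W=1) = prior, the method's
    sensitivity P(R=1|W=1), and its specificity P(R=0|W=0)."""
    # Joint distribution P(W, R)
    p = {
        (1, 1): prior * sensitivity,
        (1, 0): prior * (1 - sensitivity),
        (0, 1): (1 - prior) * (1 - specificity),
        (0, 0): (1 - prior) * specificity,
    }
    p_r = {r: p[(1, r)] + p[(0, r)] for r in (0, 1)}  # marginal of R
    p_w = {1: prior, 0: 1 - prior}                    # marginal of W
    mi = 0.0
    for (w, r), p_wr in p.items():
        if p_wr > 0:
            mi += p_wr * math.log2(p_wr / (p_w[w] * p_r[r]))
    return mi

# A highly reliable and valid method carries more information
# about the world than a near-chance one:
informative = mutual_information(0.5, 0.99, 0.99)
uninformative = mutual_information(0.5, 0.6, 0.6)
```

A perfectly diagnostic method (sensitivity = specificity = 1) yields the full entropy of the prior, while a chance-level method yields zero bits, which is the sense in which the measure "tracks how much information an experimental method carries about the world."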

"It should be emphasized that citation counts should not be the primary motivation for using RRs." (RRs: Registered Reports.) #OpenScience #MetaSci #Methodology

See also my discussion of Van Drimmelen et al.’s work here #MetaSci #STS #AcademicSky #Methodology 🧪

The Preregistration Prescripti...

“Theories in psychological science will always be vague, weak, and lacking in parsimony, falsifiability, and predictiveness because of the complexity and lack of generality of most social and behavioral phenomena.” #MetaSci #PhilSci #Psychology 🧪

And for my recent preprint on Popper and preregistration, see… #MetaSci #PhilSci #HPS 🧪

RE: https://bsky.app/profile/did:plc:4jb3re5tvklsvhuc3lkerj5q/post/3l3kml4vgmw2g

“It is an essential feature of exploratory research that it unfolds across a variety of research activities and projects over time rather than within the bounds of a single project or research article.” #MetaSci #PhilSci

Arie W. Kruglanski & Sophia Moskalenko (20 Dec 2024). Social psychology in the age of uncertainty: A tale of three quandaries, European Review of Social Psychology. doi.org/10.1080/1046... #SocialPsyc #MetaSci #PhilSci 🧪

We examine the current state of social psychology in terms of three major quandaries that challenge our field: Value, Trust, and Purpose. The Value quandary involves balancing commitments to truth and social justice without conflating the two. The Trust quandary centers on restoring the field’s credibility amid issues like data falsification and the replication crisis, addressed through evidence-based, theory-guided research. The Purpose quandary highlights the need to balance theory development and empirical research by incorporating formal theory training into social psychology curricula. As a discipline at the intersection of individuals and society, social psychology plays a critical role in addressing global social challenges. Resolving these quandaries is essential for the field’s advancement and its capacity to fulfill its potential.

"Only 28% of sampled manuscripts [n = 4] adhered to their analysis plan or transparently disclosed all deviations." New survey of preregistration in autism studies. Open Access: doi.org/10.1177/1362... BSky author: @fsedgewick.bsky.social #MetaSci #OpenScience 🧪

Pre-registration refers to the practice of researchers preparing a time-stamped document describing the plans for a study. This open research tool is used to improve transparency, so that readers can evaluate the extent to which the researcher adhered to their original plans and tested their theory appropriately. In the current study, we conducted an audit of pre-registration in autism research through a review of manuscripts published across six autism research journals between 2011 and 2022. We found that 192 publications were pre-registered, approximately 2.23% of publications in autism journals during this time frame. We also conducted a quality assessment of a sample of the pre-registrations, finding that specificity in the pre-registrations was low, particularly in the design and analysis components of the pre-registration. In addition, only 28% of sampled manuscripts adhered to their analysis plan or transparently disclosed all deviations. Autism researchers conducting confirmatory, quantitative research should consider pre-registering their work, reporting any changes in plans transparently in the published manuscript. We outline recommendations for researchers and journals to improve the transparency and robustness of the field.

“We need to scale back the maximalist attitudes, directives, and language we see about OS in science policy” #OpenScience #PhilSci #MetaSci 🧪

Generative Adversarial Collaborations "Before we aim to empirically arbitrate and ‘kill’ theories in our young field, we want them to mature through discussion, discovery, and community education." doi.org/10.1016/j.ti... Few quotes 👉🧵 #Methodology #MetaSci 🧪

Science progresses when ideas clash, leaving the most successful to survive and move us closer to the truth. In this ideal hypothetico-deductive approach [1], science is dynamic and fluid, with theories constantly tested and replaced. In reality, however, many opposing theories rarely meet. Scientists instead often work in entrenched paradigms or research programs – focused on their own frameworks, language, and methods – which resist direct comparison and evolve incrementally at a generational timescale rather than through confrontations [2,3]. Adversarial collaborations offer a promising alternative to accelerate scientific progress: a way to bring together researchers from different camps to rigorously compare and test their competing views [4,5].

Hi #MetaSci feed. I’ve removed “prereg,” “open science,” and “#openscience” as tag phrases for this feed to try to make its posts a bit more specific and relevant. The feed will continue to pick up other metascience-related terms, with the main hashtag being #metasci (not case sensitive)!

RE: https://bsky.app/profile/did:plc:4jb3re5tvklsvhuc3lkerj5q/feed/aaaicubeccmeq

“In practice, the question that is answered true or false in confirmatory scientific research is ‘do scientists adequately understand the theoretical hypothesis and associated research methods to reliably make accurate verifiable predictions?’” dx.doi.org/10.1037/met0... #Methodology #MetaSci 🧪

Falsifiable research is a basic goal of science and is needed for science to be self-correcting. However, the methods for conducting falsifiable research are not widely known among psychological researchers. Describing the effect sizes that can be confidently investigated in confirmatory research is as important as describing the subject population. Power curves or operating characteristics provide this information and are needed for both frequentist and Bayesian analyses. These evaluations of inferential error rates indicate the performance (validity and reliability) of the planned statistical analysis. For meaningful, falsifiable research, the study plan should specify a minimum effect size that is the goal of the study. If any tiny effect, no matter how small, is considered meaningful evidence, the research is not falsifiable and often has negligible predictive value. Power ≥ .95 for the minimum effect is optimal for confirmatory research and .90 is good. From a frequentist perspective, the statistical model for the alternative hypothesis in the power analysis can be used to obtain a p value that can reject the alternative hypothesis, analogous to rejecting the null hypothesis. However, confidence intervals generally provide more intuitive and more informative inferences than p values. The preregistration for falsifiable confirmatory research should include (a) criteria for evidence the alternative hypothesis is true, (b) criteria for evidence the alternative hypothesis is
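The paper's recommendation of power ≥ .95 for a prespecified minimum effect can be illustrated with a rough normal-approximation sketch (a hypothetical example, not the paper's procedure; the effect size d = 0.5 is made up):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test for a
    standardized effect size d (normal approximation to the t-test)."""
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality parameter
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

# Smallest per-group n reaching the recommended power >= .95
# for a hypothetical minimum effect of interest d = 0.5:
n = 2
while power_two_sample(0.5, n) < 0.95:
    n += 1
```

Sweeping d instead of n yields the power curve the abstract describes: it shows which effect sizes the planned analysis can confidently detect, and hence what counts as a falsifying result for the study.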

2024-12-11

If you suspect that 4 studies on the same drug produce results that are too consistent, how would you test this? #stats #metasci
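One standard answer: compute Cochran's Q across the studies and ask whether the observed spread is improbably *low* under a shared true effect — the same left-tail logic as Fisher's "too good to be true" analysis of Mendel's data. A sketch with made-up effect estimates and standard errors:

```python
import math

def cochran_q(estimates, std_errors):
    """Cochran's Q: precision-weighted squared deviations of study
    estimates from their weighted mean. Under a common true effect,
    Q ~ chi-square with k - 1 degrees of freedom."""
    w = [1 / se ** 2 for se in std_errors]
    mean = sum(wi * xi for wi, xi in zip(w, estimates)) / sum(w)
    return sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, estimates))

def chi2_cdf_df3(x):
    """Closed-form chi-square CDF for df = 3 (4 studies => k - 1 = 3)."""
    return math.erf(math.sqrt(x / 2)) - math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

# Hypothetical effect estimates and standard errors for the 4 studies:
estimates = [0.50, 0.51, 0.50, 0.49]
std_errors = [0.10, 0.12, 0.09, 0.11]

q = cochran_q(estimates, std_errors)
# Left-tail p-value: a very small value means the studies agree more
# closely than sampling error alone would predict.
p_too_consistent = chi2_cdf_df3(q)
```

With these numbers the left-tail p is tiny: estimates this tightly clustered, given standard errors around 0.1, are themselves evidence of something odd (selective reporting, dependence between the studies, or understated uncertainty).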

Call for Papers A topical issue of the European Journal for Philosophy of Science will consider: “The Pursuitworthiness of Experiments Across the Sciences” #MetaSci 🧪

RE: https://bsky.app/profile/did:plc:2uwrynll2xcbgrsx6gnb3bay/post/3lcdnyvgsrk2q

Vague Psychology Concepts "The vagueness of psychological concepts is often considered a bug that needs to be fixed. In contrast, we argue that it is not a bug, it is a feature." Hutmacher & Franz (2024): doi.org/10.1037/amp0... #Psychology #MetaSci #PhilSci 🧪

Psychology is currently facing a multilayered crisis stemming from the fact that the results of many psychological studies cannot be replicated (replication crisis), that psychological research has neglected cross-cultural and cross-temporal variation (universality crisis), and that many psychological theories are ill-developed and underspecified (theory crisis). In the present article, we use ideas derived from debates in theoretical and philosophical psychology as a basis for responding to all three crises. In short, we claim that psychological concepts are inherently vague in the sense that their meanings and the rules for their application are indeterminate. This does not imply that psychological concepts are ineffable or lack meaning. It implies, however, that hoping to arrive at a finite set of necessary and sufficient criteria that define psychological concepts once and for all is an illusion. From this, we deduce four recommendations for responding to psychology’s crises. First, we argue that the replication crisis could be approached by paying more attention to the context conditions under which psychological realities and knowledge about these realities are being created. Second, we claim that the universality crisis can be alleviated by putting more effort into exploring variability across times and cultures. Third, we contend that acknowledging the language dependence of psychological research could be a fruitful way of addressing the theory crisis. Last, we show
