#knowledgeSystem

2025-09-26

Predatory journals publishing AI slop in the name of famous academics to confer legitimacy

I find it almost impossible to believe this short article in the International Journal of Sociology Civics Research was produced by Anthony Giddens and Robert Putnam. Everything about it, from venue through to methodology and style, screams bullshit:

They have also published ‘new’ papers by Zygmunt Bauman, Pierre Bourdieu and Michael Burawoy (all deceased) in 2025. What’s going on here? Is a predatory journal faking authorship, through AI-generated papers, in order to confer legitimacy? What’s spectacularly lazy about it is that the papers look nothing like anything these authors would have written, even if they were alive and interested in publishing in this journal.

#AISlop #fraud #knowledgeSystem #predatoryJournals #publishing #scholarlyPublishing

2025-01-19

Call for papers: Peer Review in the Age of Large Language Models

Looking forward to doing a keynote at this workshop in May 👇

‘Peer Review in the Age of Large Language Models’ is an interdisciplinary workshop taking place on 14th May 2025 at the University of Bath. Dr. Harish Tayyar Madabushi from the University of Bath and Dr. Mark Carrigan from the University of Manchester will be giving keynotes. We invite you to submit an abstract for the workshop by Friday 14th February 2025; please see details below:

Call for Abstracts

With the emergence of large language models (LLMs), some scholars have begun to experiment with the use of these tools in various academic tasks, including peer review (Hosseini et al., 2023; Wedel et al., 2023; Liang et al., 2024). Recent studies have suggested LLMs could play some legitimate role in peer review processes (Dickinson and Smith, 2023; Kousha and Thelwall, 2024). However, significant concerns have been raised about potential biases; violations of privacy and confidentiality; insufficient robustness and reliability; and the undermining of peer review as an inherently social process.

Despite the large volume of literature on peer review (Bedeian 2004; Batagelj et al., 2017; Tennant and Ross-Hellauer, 2020; Hug, 2022), we still know relatively little about key issues including: the decision-making practices of editors and reviewers; how understandings of the purposes and qualities of peer review vary between journals, disciplines, and individuals; and the measurable impact of peer review in advancing knowledge. Tennant and Ross-Hellauer (2020) suggest there is “a lack of consensus about what peer review is, what it is for and what differentiates a ‘good’ review from a ‘bad’ review, or how to even begin to define review ‘quality’.” Many commentators have also noted the negative effects of time and productivity pressures on the quality and integrity of peer review in practice. LLMs enter into a context of peer review fraught with both ambiguity and (time) scarcity.

Recently, many relevant entities, including the Committee on Publication Ethics (COPE), have published specific guidance on the use of AI tools in decision-making in scholarly publication. These frameworks address issues such as accountability, transparency, and the need for human oversight. The adoption of such guidance raises important questions about whether and how LLM technologies can be used responsibly to promote knowledge production and evaluation. Can these policies and guidelines, for example, fully address the technical limitations of LLMs? Can the use of LLMs ever be compatible with the purposes and qualities of academic research, writing, and authorship? What potential oversight responsibilities should editors have?

The aim of this workshop is to provide an opportunity to collectively and critically explore these possibilities and limitations from various disciplinary vantage points. We welcome scholars from all career stages, particularly doctoral researchers and early career academics. We welcome contributions on a wide range of topics, from across all disciplines, related to the use of LLMs in the peer review process. Topics may include, but are not limited to:

  • Empirical studies examining the nature and extent of LLM adoption and use in peer review;
  • Studies examining variations in disciplinary orientations towards LLMs in peer review;
  • Theoretical discussions of the limits and compatibility of LLMs as tools in peer review;
  • Papers considering LLMs in peer review from the perspective of social epistemology;
  • Papers considering LLMs in peer review from the perspective of Science and Technology Studies (STS);
  • Critical reflections on the ethics of LLM adoption and use in peer review;
  • Papers engaging with the politics and political economy of LLM use in peer review;
  • Sociotechnical evaluations of LLM systems used for peer review;
  • Value-sensitive design of peer review LLM tools;
  • Methods for audit and assurance of LLMs in peer review;
  • Proposals for the development of policies and standards for ethical and responsible use of LLMs in peer review.

Selected authors will be invited to present on a panel at the workshop. Each panel will have a chair (who will introduce the panel and lead the audience Q&A), and a discussant (who will ask questions, having read papers in advance). Following the acceptance of their abstracts, participants will then be asked to send draft papers no later than three weeks before the workshop to give the panel discussants enough time to prepare questions and feedback. Draft workshop papers should be approximately 5,000 words. We do not expect papers to be in final, publishable format. The aim of the workshop is to provide constructive and timely feedback on draft papers. Following the workshop, and in consultation with participants, the organisers will consider the most suitable options for future collaboration (e.g., a network, a Special Issue, or an edited volume).

A travel stipend is available to support participants who do not have access to funding to support conference attendance through their own institutions. If you wish to apply for this stipend, please state this on your application and indicate where you would be travelling from. Unfortunately, we cannot cover the costs of major international travel.

Please submit your title, abstract (200-300 words) and a short bio (~150 words) to ai-in-peer-review@bath.ac.uk by Friday 14th February 2025. The organising committee will communicate decisions by Friday 7th March 2025. Workshop papers (approx. 5,000 words) should be sent by Wednesday 23rd April 2025.

#generativeAI #knowledgeSystem #scholarlyPublishing

2024-11-29

The dizzying scale of malpractice by behavioural scientists in business schools

I wrote earlier in the year about the extent of malpractice within behavioural science, particularly in business schools. There’s an incredibly cutting article in a recent issue of The Atlantic that goes deep into a crisis which is still very much in motion:

Business-school psychologists are scholars, but they aren’t shooting for a Nobel Prize. Their research doesn’t typically aim to solve a social problem; it won’t be curing anyone’s disease. It doesn’t even seem to have much influence on business practices, and it certainly hasn’t shaped the nation’s commerce. Still, its flashy findings come with clear rewards: consulting gigs and speakers’ fees, not to mention lavish academic incomes. Starting salaries at business schools can be $240,000 a year—double what they are at campus psychology departments, academics told me.

The research scandal that has engulfed this field goes far beyond the replication crisis that has plagued psychology and other disciplines in recent years. Long-standing flaws in how scientific work is done—including insufficient sample sizes and the sloppy application of statistics—have left large segments of the research literature in doubt. Many avenues of study once deemed promising turned out to be dead ends. But it’s one thing to understand that scientists have been cutting corners. It’s quite another to suspect that they’ve been creating their results from scratch.

https://www.theatlantic.com/magazine/archive/2025/01/business-school-fraud-research/680669/

What happens when you introduce generative AI into this toxic situation? It provides potent new tools for research misconduct but also potent new tools for document forensics. We’re in for an interesting few years 🍿

#behaviouralScience #fraud #knowledgeSystem #malpractice #psychology #publishing

2024-06-07

But upon closer inspection this quote seemingly isn’t in Table Talk after all, suggesting that Claude’s initial response was right. However, what’s interesting is how they both immediately backed down when challenged. They also made claims about their searching (as if they were consulting a database) which simply aren’t true:

Because when I eventually went back to Google and found a search term that helped me get through the masses of low-quality quotation sites, I found that it was actually in Coleridge’s Biographia Literaria 🎯

But it’s not going to do that, is it? It’s once again affecting a capability it doesn’t have. That’s wonderfully productive in a generative interlocutor but actively dangerous if people aren’t using these systems in the weirdly abstract way I’ve been advocating. The problem is that they are directly and indirectly contributing to the degradation of Google search, in a way that will lead people to lean on them instead because of their seeming utility.

I recalled an effort a few years ago to identify the source of the phrase ‘making the familiar strange’. As Ash Watson references here, it was a distributed team effort which eventually concluded that the phrase actually goes back to the German Romantic poet Novalis in the late 1700s, in contrast to a tendency to attribute it to C. Wright Mills (by sociologists) or to T.S. Eliot and the Russian formalists (by many literary scholars). This experience of a distributed, ad hoc search process, led by me but with many willing contributors united by nothing other than shared curiosity, represents the lost promise of social media. In the information environment we are now entering, it’s really sad that this promise will never be realised.

https://markcarrigan.net/2024/06/07/why-you-cant-use-chatgpt-and-claude-to-answer-a-factual-question/

#attribution #coleridge #conversationalAgents #information #knowledgeSystem #makingTheFamiliarStrange #quotations #search
