May 22, 2024

ChatGPT use 'polarises' peer reviewers – Times Higher Education

Less than a third of researchers believe artificial intelligence (AI) will have a positive impact on peer review, according to a new poll that reveals scholars’ deep unease about the technology.
In a survey of more than 3,000 academics by IOP Publishing, the publishing arm of the UK’s Institute of Physics, just 29 per cent of respondents felt generative AI would improve peer review. Some suggested AI could help to check manuscripts for plagiarism and English language quality, which would help editors to filter out problematic or low-quality manuscripts before they go out for peer review.
However, a larger proportion – 35 per cent – believe generative AI would have a harmful impact on peer review, with several respondents viewing the technology in an entirely negative light.
“AI is so far very unintelligent… it could never answer my questions properly online,” commented one respondent, who felt the technology was “a tool of corrupting human moral standard”. Another described AI as “a destructive tool for mankind” which should be “completely banned in academia and research”, even if it was useful for some industries.
Another replied simply: “AI is evil, burn it with fire!”
Some 36 per cent of respondents to the survey, part of the IOP’s State of Peer Review 2024 report, published on 14 May, said the technology would have a neutral effect or no impact whatsoever on peer review.
The report’s authors describe the responses towards AI as “extremely diverse” with the issue “polarising” reviewers.
Many respondents claimed the technology, which sparked controversy in the past year when a peer reviewer recommended a ChatGPT-generated reading list containing imaginary scholars, could play a role in approving or rejecting papers, although human critique of papers would remain essential.
According to the report, the “most common response to this question was that generative AI tools can provide some useful outputs, but expert human verification and editing is always required before any AI-generated text is used in the peer review process”.
The use of generative AI to write or augment peer review reports raised “a number of ethical issues, including data protection and breaches of confidentiality, and concerns about the veracity and accuracy of reviewer reports”, says the report.
Currently, IOP Publishing does not allow the use of generative AI to write or augment peer review reports, nor does it allow generative AI tools to be named as authors of manuscripts.
The report also highlights, however, the “opportunities” of AI, including “language editing”, stating that “publisher policies around generative AI need to be adaptable and fair”.
The IOP report also quizzed scholars about the volume of peer review requests they receive, with exactly 50 per cent stating the number of requests had increased in the past three years, 11 per cent saying the volume had decreased and 39 per cent saying it had remained constant.
Academics in Europe are more likely to receive substantial numbers of peer review requests, with 24 per cent receiving three or more per month. That compares with China and India, where only 16 per cent and 15 per cent of academics respectively received three or more requests per month.