AI and ‘paper mills’: a big ethics battle
Author: Mafalda Cabana
Artificial intelligence (AI) has attracted considerable attention from the media and the public in recent months. Despite being a powerful tool, the technology has raised multiple concerns. From cheating on school essays to data-privacy violations, its downsides have recently come into focus. Scientific papers are no exception, and the credibility of scientific research may be compromised in the near future.
‘Paper mills’, businesses that offer the “production of fake scientific papers to order”, are the target of an ongoing fight within the scientific community. Data can be forged or fabricated to produce a seemingly significant paper that stands out among the millions published each year. The reason behind the surge? Recent technological developments have made it ever easier to fake data and write deceitful papers.
Lately, with the release of OpenAI’s chatbot, ChatGPT, the fight has become even more challenging. According to research-integrity analysts, AI can generate even more plausible fake data, as well as fake microscopy images. And they are harder to detect than you may think.
Fight fire with fire
The problem is serious. In the first half of 2023 alone, roughly as many ‘paper mill’ cases were identified as in the whole of 2022. The rate has doubled, and so has the quality of the AI software.
So, experts want to fight fire with fire. Dedicated software has been developed to identify ChatGPT-generated responses. But another AI tool can modify the data slightly, making it undetectable to the integrity software. Validating papers thus becomes a never-ending loop.
Nevertheless, AI is built to keep learning and improving its responses. It may be hard to code and develop, but a fraud-detecting AI appears to be the strongest opponent to face the problem.
Solutions for journals
While such a tool is being developed, journals are already implementing short-term solutions. These include giving reviewers access to the raw data to rule out fabricated results. But enforcing this rule across journals has proven harder and slower than it should be, according to the International Association of Scientific, Technical and Medical Publishers (STM).
Paper mills thus remain an ongoing concern in science, one that multiple institutions work hard to combat every day. Specialists stress that both the scientific community and the public need to be made aware of the problem to stop the wave of ‘paper mills’ now empowered by AI software. AI is a great tool, but left unchecked it could harm science on a pronounced scale.