Artificial Intelligence and the SETAC Journals
Jen Lynch, Jenny Shaw, Erin Nelson and Sabine Apitz, SETAC Journals
It seems that everyone is talking about artificial intelligence (AI), text generation and how it fits (or whether it fits at all) in a scientific setting. This is especially true at the SETAC journals. As ChatGPT exploded onto the scene in early 2023, editors at Integrated Environmental Assessment and Management (IEAM) and Environmental Toxicology and Chemistry (ET&C) began researching the issue and evaluating how best to address it. Before we could even begin to weigh risks and benefits, we set about learning the logistics, exploring the ethics, and reviewing the AI policies of other scientific publishing houses. Here, we summarize what we’ve learned thus far and outline the current requirements for authors publishing in the SETAC journals.
What Are AI Tools?
Very briefly and rather simply, ChatGPT and its kin are AI language-processing tools called large language models (LLMs), which are trained on vast amounts of text scraped from the internet and generate new text by predicting which words are likely to come next. Though AI-generated content is not new, the latest generation of LLMs can produce remarkably fluent, intelligent-sounding content. Other AI tools already in the wild that should also be on our radar include text-to-image generators, such as DALL-E and Midjourney. While many people are exploring what these tools can do by generating nonsense poems or bizarre images, others have been putting AI tools to more academic uses. Authors have even listed ChatGPT as a co-author on papers.
“The best way to think about [ChatGPT] is you are chatting with an omniscient, eager-to-please intern who sometimes lies to you.” ~ Ethan Mollick, University of Pennsylvania’s Wharton School of Business
AI Tools and Publishing
As editors, we naturally wondered how this technology might help or hinder scientific communication and, of course, came up with even more questions. Both ET&C and IEAM strive to publish high-quality content, including peer-reviewed, original research articles. While other article types, such as perspectives or letters to the editor, which are opinion pieces based on a person’s experience, do not undergo traditional peer review, they are still reviewed by our journal editors. With that in mind, we asked ourselves: What are the impacts of AI on publishing research? Will we be able to identify AI-generated material? How can we provide guidance to our editors and reviewers? Are there tools to screen manuscripts and detect whether they are machine-generated? The answer to that last question is a qualified yes (detection tools are in development), but for the rest, we are still working toward answers.
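For the curious, one common heuristic behind such detection tools is perplexity: how statistically predictable a passage is under a language model, with machine-generated prose tending to score as more predictable than human writing. The sketch below illustrates the idea using the openly available GPT-2 model via the Hugging Face transformers library; this is our illustration of the general approach, not the screening tool any journal actually uses.

```python
# A minimal sketch of perplexity-based screening, one heuristic behind
# AI-text detectors. Assumes the "transformers" and "torch" packages;
# GPT-2 is a stand-in model, not any journal's actual detection tool.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Score the text under the model; lower perplexity means the model
    # found the text more predictable, a weak hint of machine authorship.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

print(perplexity("Example passage from a manuscript under review."))
```

Scores like these are noisy, sensitive to text length and subject matter, and easy to fool with light editing, which is one more reason human review remains essential.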
If there are so many unanswered questions, why bother working with AI systems at all? Are there any benefits? Some argue that LLMs could relieve some of the burden of writing the boilerplate required in grant proposals. Others have wondered whether they could help with grammar and syntax for those who struggle with writing, especially if English is not their first language. Both are possibilities, but in either case, most of the editors who contributed to these musings agreed that an AI tool such as ChatGPT needs human oversight. The AI text generator could certainly provide a short passage from which to start, but it could not be a named author. A person would have to work with the output, verify the statements, and refine the material into something far beyond what the tool spit out. Ultimately, it is the human authors who are fully responsible for all content in a publication, however it was generated.
AI Tools and Copyright
One of the risks of employing these kinds of AI tools in publishing is that they draw on material mined from across the internet to “create” what you ask for, and they are not limited to openly licensed sources. Recent lawsuits allege that AI tools have ingested copyrighted material without permission or attribution. This could be rectified in time, but it remains to be seen whether doing so will blunt the power of the tools and reduce the value of what they can produce. The SETAC journals have tools in place to detect plagiarism, but AI-generated text can evade them, leaving journals vulnerable and compromising their reputation. A further concern is that, even when ChatGPT provides legitimate-seeming references (complete with DOIs), these can be fabricated, making it clear that reviewers and editors, as well as authors, will need to be ever more vigilant about the validity and appropriateness of references cited. Although this is not altogether a new problem, this new technology could super-charge the issue.
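One concrete, low-effort check for fabricated references is to confirm that each cited DOI actually resolves and that the registered title matches the citation. The sketch below does this against the public Crossref REST API; the function name and the placeholder DOI are ours, for illustration only.

```python
# A minimal sketch of checking whether a cited DOI resolves, using the
# public Crossref REST API (https://api.crossref.org).
import requests

def check_doi(doi: str):
    # A 404 means Crossref has no record of the DOI, which flags the
    # reference for human follow-up; 200 returns the registered metadata.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

# Placeholder DOI for illustration; substitute one from a reference list.
print(check_doi("10.1000/xyz123"))
```

Note that a miss is not proof of fabrication: DOIs registered with agencies other than Crossref (DataCite, for example) will not appear here, so a human still has to make the final call.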
The Way Forward
At the end of the day, any time we consider a new policy at SETAC, we ground our thinking in SETAC’s organizational mission and values. We value transparency, scientific integrity and equity. From everything we know about AI content generators to date, there is very little transparency around these tools. We don’t truly understand how they work and, what’s more concerning, neither do their creators. When the Bing AI search engine was “interviewed” by reporters, it produced erratic, unsettling responses. The people who built the system were unable to articulate why that happened.
Since chatbots draw on material crawled from the web, including public forums and platforms such as Reddit, there is also the question of bias. Enough red flags have been raised by AI-generated content that U.S. President Joe Biden recently met with his council of science and technology advisors to discuss the risks and opportunities of AI tools. And thousands of AI experts and industry executives, including Elon Musk, who co-founded the company that makes ChatGPT, have signed an open letter calling for a six-month pause in the frenetic race to create ever more powerful AI tools, using the time instead to develop safety protocols to guide further design and development. So, while software engineers work out the kinks in their programs, and policymakers wrestle with the ethics and regulation of AI-generated content, we feel it is of the utmost importance that our authors, editors and reviewers practice transparency and adhere to standards of scientific integrity. To that end, the SETAC journals are working on an AI policy and will be employing AI-detection tools, seeking to be both responsive and flexible on this fast-moving issue.
Given the rationale above, what does this mean for anyone who is interested in publishing in the SETAC journals at this time?
- AI text generators, such as ChatGPT, cannot be authors.
- Authors remain completely responsible for the material in their submissions, regardless of source.
- Authors must disclose the use of AI in two locations in the paper:
  - Methods: Authors must describe four details: 1) the AI platform that was used, 2) the prompts used to generate the content, 3) the sections of the article generated using AI, and 4) how they applied quality control to verify the accuracy of the AI-generated content and to avoid potential intellectual property or copyright infringement in the material used.
  - Disclaimer: Include a brief disclosure statement of the tool’s usage in a Disclaimer section, for example, “This article contains [content/material/etc.] generated using artificial intelligence. Please see the Methods for details.”
- The journals reserve the right to reject or retract papers if concerns about these issues remain or are discovered after acceptance or publication.
- The Editors reserve the right to reject any article in which it is clear that large tracts of material are not original, including AI-generated figures or tables.
SETAC editors and editorial staff are continuing to craft and fine-tune policies around these emerging technologies. While we do, we invite SETAC members to join us in conversation on the topic.
Correction: The journals’ policy was updated on 9 June, and this article was edited to reflect the changes.
Authors’ contact information: [email protected]
