All 4 entries tagged AI
March 01, 2023
Follow-up to AI & Authorship from Exchanges Reflections: Interdisciplinary Editor Insights
A new policy of interest to authors using AI tools to support their research or writing has been introduced.
As discussed last month, the Exchanges Editorial Board have been considering the introduction of a new policy relating to authors and their use of artificial intelligence (AI) tools. As of the end of February, this new policy has been introduced and is in line with best-practice ethical guidance and current publishing practices.
Briefly speaking, AI tools cannot be cited as authors within any Exchanges papers, and authors are solely responsible for any and all contents of their manuscripts. Additionally, where AI tools are used to prepare any portion of the manuscript, the usage of these tools needs to be cited, explained and made transparent. In this way, authors are not denied the usage of AI tools within their work, but need to demonstrably show how such tools have contributed to their research, writing and related endeavours.
For more information see:
Announcement: New AI Policy Introduced
Journal Policies: Authorship & AI Tools Policy
Or, of course, contact the journal to discuss this further.
February 14, 2023
Writing about web page https://publicationethics.org/cope-position-statements/ai-author
Like many of you I've been following the discussions around authorship and AI, especially as it relates to ChatGPT and scholarly communications in recent weeks (for example: Haggart, 2023; Lucey & Dowling, 2023). You probably saw the splash in the news recently too when the major research journal Science took the position ‘banning the use of text from ChatGPT and clarifying that the program could not be listed as an author.’ (Sample, 2023). Naturally, as a journal editor thoughts on originality are rarely far from my mind, and while there have been tools around for some time which can be deployed by authors in the creation of text – I can recall playing with them as far back as two decades ago – ChatGPT does rather seem to have shifted the practice from a niche to a mainstream activity.
Given Exchanges regularly looks to COPE (the Committee on Publication Ethics) for best practice guidance in matters of publishing ethics, I’ve been keeping our powder dry as far as any related policy for the journal is concerned. Certainly, during the last few weeks we’ve received our first – and I doubt last – article submission relating to the issue. Note: about, not by, being the important distinction in this respect. Nevertheless, I suspect in the fullness of time we will almost certainly have contributions from authors who will be making use of AI tools in the creation of their papers.
Hence, this morning I noted with particular interest how COPE have now produced a position statement on the issue of authorship and the use of AI in the creation of research publications. I confess I’ve been waiting on this with anticipation, and now it’s here I am glad to report it is fairly elegant in its simplicity. The key elements of COPE’s position being:
- AI Tools cannot be listed as paper authors given they cannot take any legal responsibility.
- Authors utilising AI tools in a manuscript’s creation must disclose where/how they were used.
- Authors retain responsibility and ethical liability for all of their paper’s contents.
To my thinking this seems a rational, fair and workable approach. It doesn’t entirely exclude contributions from authors who may well wish to make use of AI tools in the creation of their research outputs – which is good, because I wouldn’t want to preclude these from our considerations. However, it does clarify and demarcate the expectations of professional ethics, original contributions and the boundaries of authorship within any journal contributions. While I suspect questions around the use and misuse of AI tools within scholarship will not be evaporating any time soon, to my mind this position statement at least provides editors like myself with a framework upon which to consider and build our own related submission policies.
As such, with the hopeful agreement of Exchanges’ Editorial Board, we will be adapting and adopting a policy on AI tools and authorship, very much based on the COPE guidance. Given we as a journal typically look towards COPE for best ethical practice, this is in line with the development of our extant policy frameworks. Authors seeking to explore these topics further, and how they may impact the production of their submissions, are naturally encouraged to contact me for further discussion.
COPE, 2023. Authorship and AI Tools: COPE Position Statement. Committee on Publication Ethics. https://publicationethics.org/cope-position-statements/ai-author
Haggart, B., 2023. ChatGPT Strikes at the Heart of the Scientific World View. Centre for International Governance Innovation. 23 January 2023. https://www.cigionline.org/articles/chatgpt-strikes-at-the-heart-of-the-scientific-world-view/
Lucey, B., & Dowling, M., 2023. ChatGPT: our study shows AI can produce academic papers good enough for journals – just as some ban it. The Conversation, 26 January. https://theconversation.com/chatgpt-our-study-shows-ai-can-produce-academic-papers-good-enough-for-journals-just-as-some-ban-it-197762
Sample, I., 2023. Science journals ban listing of ChatGPT as co-author on papers. The Guardian, 26 January 2023. https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers
June 01, 2021
Writing about web page https://anchor.fm/exchangesias/episodes/A-Conversation-withM-Onat-Topal-e11vufe
The latest* episode of The Exchanges Discourse is now live. This time I'm in conversation with another of our recent authors on the journal about their publication, research and thoughts on academic publication. The episode touches on the challenges of 'trash' journals and conferences, alongside some of the other pitfalls for new authors.
In this episode we discuss the article, ‘Use of Artificial Intelligence in Legal Technologies: A critical reflection’ and some of its implications with its lead author. As usual we delve into the guest’s current research and publishing activities, before closing with some advice for first time and new academic authors.
*18th if you're counting
November 03, 2020
Writing about web page https://exchanges.warwick.ac.uk/index.php/exchanges/announcement/view/28
The issue of intelligence lies at the heart of the scholarly lifeworld, although for much of history it was a topic focussed around a singular, human construct. Today though, algorithms, deep learning and artificial intelligence have emerged into the everyday world. From the seemingly trivial, to battling the pandemic or even fighting our future wars, applications of algorithmic intelligence are increasingly shaping critical decisions and policy, helping meet emerging challenges. Should we be celebrating the transition to a more ‘automated’ workplace, freeing humankind from the exploitative drudgery of waged labour, or does it represent an existential threat to the livelihood of millions?
Some would argue humanity has cause to fear the unchecked rise of the machines in our society. For example, the recent examination debacle in the UK undoubtedly still lies sharp in the minds of many British students and their parents as an example of a misapplied technological aid. Other cautionary tales of unfettered algorithm use abound in fields as diverse as space imaging and earth observation, through to the evaluation of immigration applicants or ‘future crime’ prediction. Is the age of the 'Minority Report' a new era of safety to be trumpeted, or a greater force for oppression and fear?
Conversely, many assert artificial intelligence, machine learning and algorithms offer humanity a brave new world of opportunity, advancement and potential achievement. Deployed in the service of humanity, algorithmic intelligence could help us better plan for future building and habitation needs, predict cataclysmic acts of nature or even more efficiently discover curative treatments. Thus, the artificially intelligent enabled future may be a far brighter one than some currently anticipate. Where, if anywhere, does ‘the truth’ lie?
Hence, for the issue of Exchanges due for publication in Autumn 2021, we invite authors to submit original, exciting and insightful manuscripts for peer-reviewed publication consideration inspired by any aspect of this theme. We welcome papers written for a general academic audience exploring or reviewing the science, application and implementation of machine learning, artificial intelligence or algorithms within a broader societal setting. We also welcome submissions from the humanities, arts and social sciences dealing with the ethics, perceptions, interpretations and representations of these issues too.
First-time or early career authors may alternatively wish to consider submitting either a critical reflection or conversational (interview) piece inspired or informed by these themes. Such pieces would serve to provide much needed background to the topic for a general academic audience. Critical reflections and conversations only undergo editorial review ahead of publication and hence are especially suitable for first-time or early career authors.
All submitted manuscripts will undergo editorial review, with those seeking publication as a research article additionally undergoing formal peer-review. The online form should be used to make manuscript submissions.
> Submission deadlines – Peer-reviewed articles: 1st May 2021. | Conversations or critical reflections: 31st August 2021.
For more information on Exchanges and our activities, visit the journal’s website. For questions relating to this call, future submissions or other matters relating to the title, please contact Editor-in-Chief, Dr Gareth J Johnson.