Artificial Intelligence Policy

All manuscripts submitted to Musiques : Recherches interdisciplinaires must be based on the original thinking and voice of the human authors. Any use of AI tools must be acknowledged in the article: AI-generated text should be marked as such, and the AI tools used should be listed in the references. This requirement covers, but is not limited to, data analyses, generated text, and images. We encourage authors to contact us if they have concerns or need support in this regard.

AI Policy [1]

AI-generated manuscripts are, by nature, superficial. Musiques : Recherches interdisciplinaires strives to build relationships of trust and respect with its authors, reviewers, and readers. The superficiality of AI prevents this trust from being established, because it calls into question the author's ability to master their subject and to demonstrate expertise built on the ideas developed before them. It also renders useless the constructive criticism that the Journal's reviewers could offer to help authors strengthen their work. Finally, the objective of research is to advance knowledge and to understand the phenomena under study; yet AI, while it can identify correlations, is incapable of "understanding" causation. Humans have capacities (intuition, sensitivity, morality, creativity, critical thinking, and an understanding of cultural, social, or disciplinary nuances) that remain difficult to automate.

That said, the assistance tools integrated into commonly used desktop or cloud software are a valuable aid, because they can improve the linguistic or intellectual work already done by the author; examples include grammar suggestions in MS Word, formula suggestions in Google Sheets, and software such as Antidote. We are open to articles that discuss AI tools in libraries, especially those that approach the subject critically, in line with our editorial scope. We welcome contributions that have benefited from AI support during the preparatory phase of writing (for example, for brainstorming, creating outlines, or transcribing interviews), but we advise against using AI to generate text or synthesize ideas. By contrast, generative tools that produce new text or images, develop an argument, draw conclusions, synthesize concepts, or analyze information on the author's behalf (for example, ChatGPT) are not accepted. All manuscripts submitted to Musiques : Recherches interdisciplinaires must therefore be based, first and foremost, on the original thought and authentic voice of their human authors [2].

AI Disclosure Checklist:

  • Have you used a Generative AI (GAI) tool in your writing, data analysis, or image creation process?
  • Have you indicated the GAI tool used (e.g., ChatGPT, Copilot, or Meta AI)?
  • Have you provided a brief description of how the tool was used in your writing process?
  • Have you justified the use of AI?
  • If you included AI-generated text in your submission, have you fact-checked the text, reviewed it for plagiarism, and substantially revised it?

GAI is a rapidly evolving field. Accordingly, we will regularly review this policy (and our other editorial policies and procedures) to ensure that the Journal continues to publish high-quality scientific articles that align with the vision and values of the scientific community, for the benefit of our readers, authors, and reviewers.


[1] Policy inspired by the In the Library with the Lead Pipe AI policy: https://www.inthelibrarywiththeleadpipe.org/ai-policy/

[2] Our experience with content generated by these tools is that it is often riddled with errors, sometimes presented with an appearance of intellectual rigor. On this, see the article "ChatGPT is bullshit" (Hicks, Humphries, and Slater 2024): https://link.springer.com/article/10.1007/s10676-024-09775-5