Generative AI and AI-Assisted Technologies
This policy outlines the guidelines and expectations for the use of generative Artificial Intelligence (AI) and AI-assisted technologies in the preparation, submission, review, and publication of manuscripts within the International Research Journal of Scientific Studies. Our aim is to promote transparency, uphold research integrity, and provide clear guidance to authors, reviewers, and editors in an evolving technological landscape.
1. Introduction and Purpose
The rapid advancements in generative AI and AI-assisted technologies offer powerful tools that can enhance various aspects of research and scholarly publishing. However, their use also raises important ethical considerations regarding originality, authorship, transparency, and potential biases. This policy is established to ensure that the integration of these technologies aligns with the highest standards of academic integrity, ethical conduct, and responsible research practices.
2. Scope
This policy applies to all individuals involved in the publication process of the International Research Journal of Scientific Studies, including:
- Authors: Those who submit manuscripts for consideration.
- Reviewers: Individuals who evaluate submitted manuscripts.
- Editors: Members of the editorial board responsible for overseeing the publication process.
- Readers: Those who access and utilize published content.
3. General Principles
- Human Oversight and Responsibility: The ultimate responsibility for the content of a manuscript, including any parts generated or assisted by AI, lies solely with the human authors. AI tools are aids, not authors.
- Transparency: Any use of generative AI or AI-assisted technologies must be transparently disclosed.
- Integrity and Accuracy: AI tools should be used in a manner that upholds the integrity, accuracy, and originality of the research.
- Confidentiality and Data Security: Users must ensure that no confidential or sensitive information is shared with AI tools, especially those that may use input data for training purposes.
4. Guidelines for Authors
Authors are expected to adhere to the following guidelines when using generative AI and AI-assisted technologies:
4.1. Disclosure Requirements
- Mandatory Disclosure: Authors must disclose the use of any generative AI or AI-assisted technologies in the preparation of their manuscript. This includes, but is not limited to, tools used for:
  - Generating text (e.g., introductions, literature reviews, discussions, conclusions).
  - Summarizing existing literature.
  - Translating text.
  - Editing or refining language, grammar, and style.
  - Generating code.
  - Analyzing data (where the AI tool itself performs the analysis, rather than merely assisting traditional statistical software).
  - Creating figures, tables, or images.
- Location of Disclosure:
  - A dedicated section titled "Declaration of Generative AI and AI-Assisted Technologies in the Writing Process" should be included at the end of the manuscript, immediately before the "References" section.
  - This section should clearly state which AI tool(s) were used, for what purpose(s), and how they were applied.
  - Example: "During the preparation of this manuscript, [Name of AI tool, version number] was used to [briefly describe purpose, e.g., assist with language polishing and grammar checking in Section 3.2]. The authors have reviewed and edited the content and take full responsibility for the accuracy and originality of the work."
- No AI as an Author: AI tools cannot be listed as authors or co-authors. Authorship criteria (significant contribution and accountability) apply only to human individuals.
4.2. Responsibility for Content
- Accuracy and Verification: Authors are fully responsible for the accuracy, validity, and originality of all content, including any text, code, or data generated or refined by AI tools. They must meticulously review and verify all AI-generated output.
- Plagiarism and Originality: The use of AI tools does not absolve authors of the responsibility to avoid plagiarism. Content generated by AI must be treated with the same scrutiny as any other source, ensuring it is original, properly attributed (if it draws heavily on existing works), and does not infringe on copyright.
- Fabrication and Falsification: Using AI tools to fabricate data, manipulate results, or misrepresent research findings is strictly prohibited and constitutes scientific misconduct.
4.3. Prohibited Uses
- Generating Fake Data: AI tools must not be used to generate or synthesize data that is presented as real experimental or observational data.
- Impersonation: AI tools must not be used to impersonate authors, reviewers, or editors, or to create fake identities.
- Circumventing Peer Review: AI tools must not be used to generate fake peer review reports or to otherwise manipulate the peer review process.
4.4. Citation and Acknowledgement
- If an AI tool contributed significantly to the conceptualization, methodology, or analysis, or if its output is directly presented and forms a critical part of the research, authors should cite the tool in the methodology section or acknowledge its role in the acknowledgements section, in addition to providing the mandatory disclosure.
- For general writing assistance (e.g., grammar checks), the mandatory disclosure alone is sufficient.
5. Guidelines for Reviewers
Reviewers play a crucial role in maintaining the integrity of the peer review process.
- Confidentiality: Reviewers must not upload any part of a manuscript under review to generative AI or AI-assisted tools unless the tool is explicitly designed for secure, confidential peer review and guarantees that uploaded content will not be stored or used for training. This is critical to protecting the confidentiality of unpublished research.
- Ethical Use: Reviewers may use AI tools to assist with language refinement or summarization of their own review comments, provided that no confidential manuscript content is exposed to the AI tool.
- Bias Awareness: Reviewers should be aware of potential biases in AI-generated content or summaries and ensure their review remains objective and based on the scientific merit of the manuscript.
6. Guidelines for Editors
Editors are responsible for upholding this policy and ensuring its consistent application.
- Policy Enforcement: Editors will ensure that authors and reviewers are aware of and adhere to this policy.
- Transparency Verification: Editors may request further details from authors regarding their use of AI tools if the initial disclosure is unclear or raises concerns.
- Handling Violations: Suspected violations of this policy will be handled in accordance with the journal's existing policies on research integrity and misconduct. This may include rejection of the manuscript, retraction of published articles, and notification of relevant institutions.
7. Disclaimer and Future Updates
The field of generative AI and AI-assisted technologies is rapidly evolving. This policy will be reviewed periodically and updated as necessary to reflect new developments, best practices, and ethical considerations. We encourage open dialogue and feedback from our community to ensure this policy remains relevant and effective.