Artificial Intelligence (AI) Usage Policy
Sustainable Education and Digital Transformation (SEDT) recognizes the increasing presence of artificial intelligence (AI)–assisted tools in academic research and scholarly writing. The journal supports the responsible, transparent, and ethical use of AI technologies, while firmly maintaining principles of academic integrity, human authorship, and accountability.
SEDT’s approach to AI use is informed by internationally recognized best practices in scholarly publishing and aligns with guidance reflected in widely accepted ethical frameworks, including those associated with the core practices of the Committee on Publication Ethics (COPE) and authorship principles commonly referenced across academic disciplines.
The journal maintains a dedicated AI policy to ensure that the use of AI tools does not compromise the originality, reliability, or integrity of scholarly work.
1. Use of AI by Authors
Authors may use AI-assisted tools for limited and supportive purposes, such as language editing, grammar checking, or improving clarity and readability, provided that such use does not replace the authors’ own intellectual contribution.
AI tools must not be used to:
- Generate original research ideas, research questions, hypotheses, or theoretical interpretations
- Produce substantial portions of manuscript content
- Fabricate, falsify, or manipulate data, results, references, or citations
- Replace critical analysis, methodological decisions, or scholarly judgment
AI tools cannot be listed as authors, as authorship entails responsibility, accountability, and intellectual contribution that can only be assumed by human researchers.
2. Disclosure of AI Use
Authors are required to clearly and transparently disclose any use of AI-assisted tools in the preparation of their manuscript. Such disclosure should specify the tool used and the purpose for which it was applied.
Disclosure may be included in the acknowledgements section or in a separate AI use statement.
Failure to disclose AI use where required may be considered an ethical concern and will be handled in accordance with the journal’s Publication Ethics and Malpractice Statement.
3. Responsibility and Accountability
The use of AI-assisted technologies does not alter the ethical responsibilities of authors, reviewers, or editors. Responsibility for the content, accuracy, originality, and integrity of a manuscript always rests with the human contributors.
Manuscripts involving undisclosed, inappropriate, or unethical use of AI tools may be rejected during editorial assessment or retracted after publication, in line with the journal’s ethical procedures.
4. Use of AI by Reviewers
Reviewers must not use AI tools to generate peer-review reports, evaluations, or recommendations. Reviewers are also prohibited from uploading manuscripts, reviewer reports, or any confidential content into AI systems, as doing so may compromise confidentiality and data protection.
Peer review relies on human expertise, scholarly judgment, and ethical responsibility.
5. Use of AI by Editors
Editors may use AI-assisted tools to support editorial workflows, such as plagiarism detection or administrative screening, where appropriate. However, AI tools must not be used to make editorial judgments or publication decisions.
All decisions regarding manuscript acceptance, revision, or rejection are made exclusively by human editors, based on scholarly merit and ethical considerations.
6. Ethical Concerns and Misuse
Suspected misuse of AI tools, including undisclosed use or use that compromises academic integrity, will be investigated in accordance with the journal’s Publication Ethics and Malpractice Statement.
Depending on the severity of the issue, editorial actions may include rejection of the manuscript, publication of a correction, retraction, or other appropriate measures.
7. Alignment with International Standards
This policy is informed by widely recognized discussions and guidance on the responsible use of artificial intelligence in scholarly publishing, including recommendations and position statements developed by international publishing and editorial organizations:
- STM: Recommendations for a Classification of AI Use in Academic Manuscript Preparation
- Elsevier: The use of generative AI and AI-assisted technologies in the review process
- WAME: Chatbots, Generative AI, and Scholarly Manuscripts
SEDT acknowledges that policies related to generative AI are evolving and commits to periodically reviewing and updating this policy to remain aligned with emerging standards, ethical considerations, and best practices in academic publishing.