AI Usage Policy
Policy for AI Usage in Writing, Reviewing, and Publishing for the Journal of Communication, Language and Culture (JCLC)
Generative artificial intelligence (AI) and AI-assisted technologies (“AI Tools”) are rapidly reshaping academic writing and publishing. When used responsibly, these tools can help researchers and reviewers work more efficiently, gain insights, and enhance readability. However, their use must always adhere to the principles of ethics, transparency, and academic integrity.
This policy outlines the standards for responsible use of AI by authors, reviewers, and editors of JCLC.
1. For Authors
The Use of Generative AI and AI-Assisted Technologies in Manuscript Preparation
Authors preparing manuscripts for JCLC may use AI Tools to support their work. These tools, when applied responsibly, can:
- Assist with synthesizing literature, providing overviews of a research area, identifying gaps, and generating ideas.
- Support language editing, content organization, and readability.
However, AI Tools must never substitute for human expertise, judgment, or accountability. Authors are fully responsible for their submissions and must ensure the following:
- Accuracy and Verification: All AI outputs are carefully reviewed, verified, and checked against reliable sources. AI-generated references must be validated, as they can be fabricated or incorrect.
- Authenticity: Manuscripts reflect the author’s own intellectual contribution, analysis, and interpretation.
- Transparency: The use of AI Tools is clearly disclosed to readers.
- Ethical Compliance: Data privacy, intellectual property rights, and third-party rights are safeguarded when using AI Tools.
Responsible Use of AI Tools
- Authors must review the terms and conditions of AI Tools to ensure data privacy and confidentiality of their inputs (including unpublished manuscripts).
- AI Tools must not be used to generate personally identifiable data, copyrighted images, likenesses of real individuals, or branded content.
- Authors must check AI outputs for bias, factual errors, or content that could compromise scientific integrity.
- Authors must not grant AI Tools rights to reuse or train on their manuscript content.
Disclosure
- Use of AI Tools in manuscript preparation must be declared in a dedicated AI Declaration Statement upon submission.
- The declaration must specify the name of the tool, the purpose of its use, and the extent of human oversight.
- Example disclosure:
“The author(s) used [TOOL NAME] for [SPECIFIC PURPOSE]. The author(s) reviewed and edited the content as necessary and take full responsibility for the integrity of the work.”
- Grammar, spelling, and reference management tools do not require disclosure.
- AI use in research methods (e.g., AI-assisted data analysis) should be reported in the Methods section.
Authorship
- AI Tools cannot be listed as authors or co-authors.
- Authorship is restricted to humans who make substantial intellectual contributions.
- Authors remain accountable for the originality, accuracy, and integrity of their work.
Use of Generative AI in Figures, Images, and Artwork
- Generative AI tools may not be used to create or alter scientific figures, images, or artwork.
- Adjustments to brightness, contrast, or color balance are acceptable if they do not obscure information.
- Exceptions apply if AI use is part of research design or methodology (e.g., biomedical imaging). Such use must be transparently reported in the Methods section, including tool name, version, and process description.
- AI-generated graphical abstracts or cover art are not permitted unless prior permission is obtained from the editor and publisher.
2. For Reviewers
Confidentiality
- Manuscripts are confidential. Reviewers must not upload submissions, in whole or in part, into AI Tools, as doing so may violate confidentiality and authors’ rights.
- Review reports are also confidential and must not be processed through AI Tools.
Responsible Use
- Generative AI must not be used to evaluate manuscripts or generate review content. Peer review requires critical thinking, expertise, and impartial judgment that cannot be delegated to AI.
- Reviewers remain fully accountable for the integrity and accuracy of their evaluations.
Supportive Use
- Reviewers may use AI-based language tools to polish their own review text but must ensure that this does not compromise confidentiality or accuracy.
3. For Editors
Confidentiality
- Submitted manuscripts and correspondence must not be uploaded into AI Tools, to safeguard author confidentiality and intellectual property.
Editorial Responsibility
- Editorial decisions require human expertise, fairness, and accountability. Generative AI must not be used to evaluate manuscripts or draft editorial decisions.
- Editors are responsible for ensuring that all final decisions and communications are based on human assessment.
Permissible Uses
- Editors may use AI Tools to support administrative tasks (e.g., managing submissions, monitoring deadlines, or checking for plagiarism/ethical issues).
- Editors may use licensed, publisher-approved AI tools for similarity checks or reviewer selection, provided they comply with ethical and privacy standards.
Oversight
- If an editor suspects inappropriate use of AI by authors or reviewers, they must notify the publisher.
Definition
Generative AI refers to tools that create text, images, audio, or data in response to prompts; examples include ChatGPT, Bard, DALL-E, Jasper, NovelAI, and similar platforms.
Conclusion
JCLC supports the responsible use of AI technologies to improve academic efficiency and readability, but human oversight, accountability, and transparency remain essential. AI must never replace critical thinking, originality, or scholarly integrity.
This policy will be reviewed periodically and revised as necessary to reflect developments in AI technologies and publishing practices.
Last Reviewed: 1 September 2025