Purpose and alignment

Description. JIWE safeguards research integrity while enabling responsible innovation. This policy governs the use of artificial intelligence (AI) and AI-assisted tools by authors, editors, and reviewers. It aligns with the Principles of Transparency and Best Practice in Scholarly Publishing (v4) and operationalizes COPE guidance, including the position that AI tools cannot be authors. It also reflects Scopus's expectations that journals publicly document robust ethics and malpractice controls across authorship, peer review, data/reproducibility, complaints/misconduct handling, and post-publication corrections.

Detailed policy. Humans remain fully accountable for all scholarly content. We require transparent disclosure of any AI assistance, prohibit AI in confidential editorial and peer-review decision-making, and will not rely solely on AI-detection tools to determine misconduct.
Technicalities. We publish and enforce this policy across submission forms, review workflows, and production; we apply COPE flowcharts when concerns arise and we document all actions.

Scope and definitions

Description. “AI or AI-assisted tools” include large language models, code assistants, image/audio generators, and other generative or predictive systems used to produce or transform text, data, code, images, figures, or reviews.
Detailed policy. We distinguish (a) writing assistance that improves language/clarity from (b) content generation that produces novel text, data, analyses, or images. We also distinguish research use of AI (e.g., as a method) from manuscript-preparation use (e.g., editing prose).
Technicalities. Authors must describe AI used as a research method in Methods; manuscript-preparation use must be disclosed in a dedicated “AI Use and Provenance” statement (see below).

Authors: permitted, restricted, and prohibited uses

Description. Authors may use AI for limited language support with transparency; they must not outsource originality, interpretation, or accountability.
Detailed policy.

  • Permitted with disclosure. Grammar/clarity editing; formatting assistance; code commenting or refactoring that does not change scientific meaning; methodological planning assistance that authors independently verify.

  • Methodological use. If AI is part of the research workflow (e.g., model training, inference, data labeling), fully describe models, versions, prompts, hyperparameters, datasets, evaluation, and safeguards in Methods; share code/data where feasible.

  • Restricted. We discourage AI-generated citations and require authors to verify every reference; we may request prompt/response logs for verification.

  • Prohibited. Listing an AI system as an author or co-author; citing an AI system as the author of cited material; using AI to fabricate, falsify, or manipulate data, images, or results; using AI to generate or alter scientific figures/images beyond acceptable clarity adjustments; submitting undisclosed AI-generated text as original scholarship.
Technicalities. Each submission must include an “AI Use and Provenance” statement placed before References (sample text below). When AI was used in research, include repository links/DOIs in the Data/Code Availability statement.

Reviewers: confidentiality and accountability

Description. Peer review requires human critical judgment and strict confidentiality.
Detailed policy. Reviewers must not upload any part of a manuscript or its data to public AI tools. Reviewers must not use AI to generate review reports or recommendations. Limited, offline language help that does not expose confidential content may be used, but reviewers remain fully responsible for their reports.
Technicalities. Breaches of confidentiality or undisclosed AI-generated reviews may lead to reviewer removal and notification to institutions, following COPE guidance.

Editors: decision-making and tool use

Description. Editorial decisions must be human-made; confidentiality is paramount.
Detailed policy. Editors must not upload confidential submissions to public AI tools and must not delegate editorial judgments to generative AI. The journal may use identity-protected, in-house or licensed AI systems for screening (e.g., metadata completeness, plagiarism similarity checks, reviewer discovery) provided confidentiality, bias evaluation, and data-privacy controls are in place. Editorial decisions remain the responsibility of human editors.
Technicalities. We document any editorial AI-assisted screening and regularly assess tools for accuracy, bias, and privacy compliance.

Image, figure, and multimedia integrity

Description. Scientific visuals must reflect underlying data faithfully.
Detailed policy. We do not permit generative AI to create or alter images/figures in research articles (including graphical abstracts) beyond standard, disclosed adjustments (changes to brightness, contrast, or color balance that do not obscure or remove information present in the original). Any AI-generated illustrative artwork for non-scientific use (e.g., cover art) requires prior editorial permission, rights clearance, and attribution.
Technicalities. We may use image-integrity checks and request original, unprocessed files.

Provenance, disclosure, and acknowledgments

Description. Transparency enables trust, reproducibility, and compliance.
Detailed policy.

  • Add an AI Use and Provenance statement structured as: tool/provider, version/date, purpose (language editing vs research method), scope of use, and author verification steps.

  • Do not list AI tools as authors or co-authors; authors accept full responsibility for content and for securing permissions for all material.
Technicalities. Place the statement before References. Example: “The authors used [Tool, Version, Provider] on [date] for language editing only. No content, data, images, or references were generated by AI. The authors reviewed and verified all text and accept responsibility for the content.”

Detection, verification, and investigations

Description. We respond proportionately to suspected undisclosed or inappropriate AI use.
Detailed policy. We may screen submissions with similarity-detection and other tools; request raw data, code, and AI interaction logs; and contact authors’ institutions. We never rely solely on AI-detection scores to judge misconduct; humans assess evidence using COPE flowcharts. Remedies can include correction with disclosure, manuscript rejection/withdrawal, expressions of concern, retraction, and submission bans for defined periods.
Technicalities. We keep auditable records and notify indexers of any post-publication changes.

Malpractice examples related to AI

Description. The following constitute research and publication malpractice:
Detailed policy. Malpractice includes:

  • Undisclosed AI-generated text.

  • Fabricated citations or references hallucinated by AI.

  • AI-generated or AI-altered data/figures without full methodological disclosure.

  • Uploading confidential manuscripts to public AI tools.

  • Using chatbots to draft peer reviews.

  • Paper-mill activity that leverages AI.

  • AI-assisted plagiarism or translation plagiarism.

  • Coercive or self-serving AI-suggested citation practices.

We address these under our Publication Ethics and Malpractice Statement using COPE guidance.
Technicalities. Sanctions include rejection, retraction, and institutional notification; see our Complaints, Appeals, and Misconduct procedures.

Data protection and confidentiality

Description. Confidentiality and privacy obligations extend to AI use.
Detailed policy. Do not upload personal data, unpublished data, or confidential manuscripts to public AI systems. When AI is used as part of research, ensure legal and ethical compliance (e.g., consent, privacy, data protection) and disclose safeguards.
Technicalities. We may request Data Protection Impact Assessments or equivalent documentation.

Post-publication updates

Description. We maintain the integrity of the scholarly record.
Detailed policy. If undisclosed or inappropriate AI use is found after publication, we act per COPE Retraction Guidelines—issuing corrections, expressions of concern, or retractions as warranted—and update metadata and indexers accordingly.
Technicalities. Notices remain linked and citable.