AI & Research Integrity Policy

Editorial Policy

 

Research Integrity · AI Disclosure · Editorial Ethics · Double-Anonymous Review

Policy Principle. The journal permits the responsible use of artificial intelligence and AI-assisted technologies in scholarly work only where such use is transparent, appropriately disclosed, ethically justified, and fully supervised by human authors. Responsibility for the integrity, accuracy, originality, legality, and scholarly validity of all submitted content remains entirely with the human authors.

The Journal of Social Cognition and Communication is committed to maintaining high standards of research integrity, publication ethics, and editorial transparency. As artificial intelligence and AI-assisted technologies become more widely used in academic research and scholarly communication, the journal recognizes the need to distinguish between legitimate assistance, improper reliance, and unacceptable misconduct.

This policy applies to authors, reviewers, editors, guest editors, and editorial staff. It governs the use of generative AI, large language models, machine learning tools, automated writing systems, AI image generators, AI-assisted data processing tools, and other related technologies where such use may affect the creation, evaluation, interpretation, or publication of scholarly content.

1. General Principles

Artificial intelligence tools may assist certain aspects of scholarly work, but they do not replace human authorship, scholarly judgment, or ethical responsibility. AI systems may generate inaccurate, fabricated, biased, misleading, plagiarized, or improperly sourced content. Accordingly, the journal requires meaningful human oversight at every stage where AI-assisted technologies are used.

The journal distinguishes between acceptable assistance and unacceptable substitution. Responsible use may include limited support for language refinement, coding assistance, data organization, or analytical exploration where transparently disclosed and carefully reviewed by the authors. Unacceptable use includes undisclosed AI-generated substantive content, fabricated citations, synthetic evidence presented as real research, manipulated images, falsified data outputs, or the delegation of core scholarly accountability to an AI system.

2. Authorship and Accountability

Only human beings may be listed as authors. AI tools, chatbots, language models, automated systems, and similar technologies may not be named as authors, co-authors, or corresponding authors.

All listed authors must take full responsibility for the submitted work, including the accuracy of facts, the reliability of arguments, the legitimacy of references, the originality of language and ideas, the lawfulness of reproduced materials, and the ethical acceptability of any AI-assisted processes used in preparing the manuscript.

Authors must be able to explain and defend all parts of the manuscript. No section of a manuscript may be treated as exempt from author responsibility on the ground that it was generated or suggested by an AI tool.

3. Disclosure of AI Use by Authors

Authors must disclose any meaningful use of AI or AI-assisted technologies in the preparation of the manuscript, research materials, figures, images, tables, code, data analysis, transcription, translation, or supplementary materials.

Disclosure should be made at the time of submission and, where appropriate, included in the manuscript in a clearly labeled section such as “AI Use Disclosure,” “Methods,” or “Acknowledgements,” or in another suitable location depending on the nature of the use.

Disclosure should specify, where relevant:

  • the name of the tool or system used;
  • the purpose for which it was used;
  • the stage of the research or writing process at which it was used;
  • the extent of human review, correction, and supervision applied to the output;
  • whether any AI-assisted output appears directly in the final manuscript, figures, data presentation, or supplementary files.

4. Uses Generally Permitted with Disclosure and Human Oversight

The following uses are generally permitted, provided they are disclosed and remain subject to meaningful human oversight:

  • language polishing, grammar checking, and stylistic refinement;
  • translation support, where the authors verify the scholarly accuracy of the result;
  • coding assistance, script debugging, or computational workflow support;
  • data organization, transcription assistance, or formatting support;
  • exploratory idea generation that does not replace literature review, original reasoning, or source verification;
  • clearly labeled methodological uses of AI as an object of research or as a research instrument within the study design.

5. Uses Not Permitted

The following uses are not permitted:

  • listing AI systems as authors or co-authors;
  • submitting AI-generated text, analysis, images, or data without disclosure;
  • using AI to fabricate citations, quotations, empirical findings, interview material, archives, observations, or references to works not actually consulted;
  • using AI to generate or manipulate figures, images, or data in a misleading way;
  • outsourcing core scholarly interpretation, argumentation, or conclusions to AI without human authorship and verification;
  • using AI systems in ways that infringe confidentiality, copyright, privacy, research participant protection, or data protection obligations;
  • using AI to produce peer review reports or editorial decisions in place of human academic judgment.

6. Citations, Sources, and Verifiability

Authors must independently verify every citation, quotation, factual statement, legal claim, statistical result, and bibliographic reference included in the manuscript, regardless of whether AI-assisted tools were used in drafting or organization.

AI-generated outputs must never be treated as authoritative sources in themselves unless the AI system is explicitly the object of analysis within the research. References must point to recoverable, citable, and academically appropriate sources.

7. Data, Images, and Methodological Integrity

Where AI is used in data processing, coding, classification, transcription, annotation, pattern recognition, or image-related procedures, authors must describe the role of the technology with sufficient clarity to allow editorial and scholarly evaluation.

Authors must not present synthetic, transformed, inferred, or reconstructed material as if it were original unassisted evidence unless the method is explicitly disclosed, methodologically justified, and ethically permissible.

If AI-assisted image generation, image enhancement, or data visualization materially affects interpretation, the manuscript must disclose this use and, where appropriate, provide sufficient explanation of the workflow.

8. Reviewers and Confidentiality

Reviewers must treat all submitted manuscripts as confidential documents. Reviewers may not upload manuscripts, reviewer forms, tables, figures, or any manuscript-derived content into public or non-confidential generative AI systems if doing so would expose unpublished work, author identity, proprietary content, or sensitive information.

Reviewers are expected to produce their own scholarly assessments. AI tools must not replace the reviewer’s original academic judgment, critical reasoning, or ethical responsibility. Where any permitted tool is used in a limited manner, confidentiality and integrity obligations remain fully binding.

9. Editors, Guest Editors, and Editorial Staff

Editors remain responsible for all editorial judgments and decisions. Generative AI must not be used as a substitute for editorial evaluation, acceptance decisions, rejection decisions, or ethical determinations.

Editors and editorial staff must protect manuscript confidentiality and should not upload submissions or associated confidential documents into AI systems that may store, learn from, expose, or reuse that content in ways inconsistent with editorial ethics, privacy obligations, or intellectual property protections.

The journal may use limited internal or privacy-protective technical tools for screening, workflow support, plagiarism checking, metadata handling, or administrative assistance, provided that such use does not displace human editorial accountability.

10. Detection, Investigation, and Editorial Action

The journal reserves the right to investigate suspected undisclosed AI use, fabricated references, synthetic text passed off as original scholarship, manipulated images, falsified data outputs, or other integrity concerns associated with AI-assisted technologies.

Where concerns arise, the journal may request clarification, prompt disclosure, source files, drafts, prompts, data documentation, image histories, analytical workflows, or other supporting materials reasonably necessary for editorial assessment.

If misconduct or serious non-disclosure is established, the journal may reject the manuscript, suspend review, publish a correction, issue an editorial expression of concern, retract the article, notify relevant institutions, or take other measures needed to protect the scholarly record.

11. Suggested Disclosure Statement

Authors may adapt the following template:

AI Use Disclosure: The authors used [name of tool/system] for [specific purpose, such as language refinement, coding assistance, transcription support, or data organization]. All outputs were reviewed, verified, and revised by the authors, who take full responsibility for the accuracy, integrity, and originality of the final manuscript.

12. Final Responsibility

Submission to the journal confirms that the authors have complied with this policy, have disclosed all meaningful AI-assisted uses relevant to the manuscript, and accept full responsibility for the submitted work. The journal’s commitment to research integrity extends equally to traditional scholarly practices and to emerging forms of technologically assisted academic production.

Compliance Note. This policy should be read together with the journal’s Publication Ethics, Peer Review Process, Author Guidelines, Data Policy, and Copyright & Licensing policies.