AI Tools and Disclosure Statement

The growing availability of artificial intelligence (AI) technologies has reshaped the landscape of academic research and writing. Tools such as large language models (LLMs), automated grammar checkers, code assistants, and image generators are increasingly integrated into the daily practices of scholars. While these technologies offer powerful support for drafting, data analysis, and visualization, they also pose ethical, legal, and scholarly challenges that cannot be ignored. To safeguard the integrity of the academic record, SEM requires all authors to use AI responsibly, to disclose such use transparently, and to take full responsibility for any outputs generated with its assistance.

  1. Authorship and Accountability

AI systems, regardless of their sophistication, cannot be credited as authors. Authorship requires accountability for intellectual content, the ability to respond to questions about the work, and responsibility for ethical compliance. Because AI tools lack agency and cannot assume responsibility, they cannot meet authorship criteria established by international publishing bodies.

  • Human authors remain fully responsible for the accuracy, originality, and validity of their work.
  • Any text, analysis, or image produced with AI support must be reviewed, verified, and, if necessary, corrected by the human authors before submission.
  • Accountability for errors, plagiarism, or ethical breaches rests with the named authors, not with the AI tool.

  2. Acceptable Uses of AI

SEM recognizes that AI can support researchers in legitimate and constructive ways. Acceptable uses include, but are not limited to:

  • Language support: grammar correction, spelling, and syntax refinement to improve readability.
  • Editorial clarity: rephrasing for conciseness or stylistic consistency without altering substantive meaning.
  • Coding assistance: generating, debugging, or formatting syntax for SEM software (e.g., R, Mplus, LISREL, AMOS), provided the results are independently verified.
  • Data visualization aids: preliminary figures, diagrams, or plots, as long as authors check accuracy against raw data.

These applications can save time and enhance clarity, but they are permissible only when used transparently and responsibly.

  3. Unacceptable Uses of AI

Certain uses of AI are considered violations of publication ethics because they compromise originality, accuracy, and transparency. Examples include:

  • Generating large portions of manuscript text and presenting them as the authors’ original writing.
  • Fabricating references, datasets, or statistical results.
  • Creating false or misleading visualizations, model diagrams, or simulation outcomes.
  • Using AI to disguise or “rewrite” plagiarized material from other sources.
  • Submitting an AI-generated literature review or theoretical framework without proper scholarly synthesis.

These practices constitute misconduct because they erode trust in academic publishing and distort the scientific record.

  4. Disclosure Requirements

To ensure transparency, all uses of AI must be disclosed clearly in the manuscript. Authors should provide:

  1. A methods disclosure within the manuscript itself, describing the tool used, its purpose, and the extent of its contribution.
    • Example: “The authors used Grammarly (version X) to assist with grammar and language editing. No content or analysis was generated by AI. All interpretations and conclusions are the sole responsibility of the authors.”
  2. A cover letter statement at submission, confirming whether AI tools were used and specifying how they were employed.

This dual-disclosure system ensures that editors, reviewers, and readers can evaluate the role of AI in the research and writing process.

  5. Verification and Responsibility

Even when AI is used appropriately, authors must take active steps to verify its output. LLMs and other tools are prone to generating “hallucinations,” including fabricated references or inaccurate explanations. SEM therefore expects authors to:

  • Check every AI-assisted output against primary data, established methods, or authoritative sources.
  • Ensure that AI-generated figures or diagrams accurately represent the statistical model being tested.
  • Confirm that all citations exist, are correctly formatted, and are contextually accurate.

Failure to conduct such verification will be treated as negligence and may result in rejection of the manuscript or retraction after publication.

  6. Ethical Rationale

SEM’s approach to AI balances innovation with responsibility. We recognize that Decision Sciences, Economics, Mathematics, Modeling and Simulation, and the Social Sciences are all rapidly adopting AI tools in research and practice. However, the legitimacy of scholarship depends on the transparency of methods and the accountability of authors. By requiring disclosure, the journal protects the credibility of published work while allowing researchers to benefit from technological advances.

  7. Consequences of Non-Disclosure

Failure to disclose the use of AI tools or misuse of AI in manuscript preparation will be treated as research misconduct. Consequences may include:

  • Rejection of the manuscript at the editorial stage.
  • Retraction of a published article, with a clear statement of the reason.
  • Notification of the authors’ institutions, funding bodies, or professional associations.
  • Prohibition of future submissions for a specified period.

These measures ensure that SEM remains a trusted source of reliable scholarship.