How to Disclose AI Properly



Artificial intelligence is no longer a background tool. It drafts reports, refines academic writing, generates code, summarizes literature, and produces synthetic images and video. As AI becomes embedded in professional and research workflows, one question is becoming unavoidable:

When and how should we disclose AI use?

Here are some insights into the AI disclosure policies taking shape in 2026 across three countries: India, the United States, and the United Kingdom.

India: Disclosure as a Compliance Expectation

India has signaled a serious regulatory intent around digital accountability through frameworks overseen by the Ministry of Electronics and Information Technology (MeitY). Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, platforms are expected to address synthetic and manipulated content responsibly, including through labeling and traceability mechanisms.

While these rules primarily target intermediaries and online platforms, they reflect a broader shift: AI-generated or AI-manipulated content is no longer treated as neutral. It carries regulatory implications.

For businesses and media organizations, this means:

  • Clear labeling of synthetic or AI-generated content.
  • Internal processes for review and accountability.
  • Documentation mechanisms to support compliance.

How does this impact me?

For researchers and PhD scholars in India, the signal is cultural as much as legal. Even where academic writing is not directly regulated under IT Rules, expectations around transparency are rising. As regulatory scrutiny of digital content intensifies, undisclosed AI assistance may be viewed less as a minor omission and more as a credibility risk.

United States: Governance, Risk, and Institutional Policy

The US does not yet have a single comprehensive AI law. Instead, disclosure norms are developing through agency oversight and institutional policy.

The Federal Trade Commission (FTC) has cautioned organizations against deceptive AI claims and misleading automated practices. AI governance is increasingly framed as a risk management issue: if AI materially influences output, organizations must ensure accountability.

Think tanks have also emphasized transparency as a core principle of responsible AI deployment.

In academia, US universities have adopted varied but converging approaches:

  • Many require disclosure of generative AI use in theses or dissertations.
  • Most prohibit listing AI systems as authors.
  • Several treat AI assistance similarly to editorial support – permitted, but requiring acknowledgment.

How does this impact me?

Because policies differ across institutions and journals, researchers must check local guidelines carefully. However, the practical takeaway is consistent: if AI meaningfully shaped your research output, disclosure is increasingly expected.

United Kingdom: Academic Integrity Leading the Conversation

In the UK, universities and publishers have taken proactive steps to clarify responsible AI use in research and education.

Institutions such as the University of Oxford and the University of Cambridge have issued guidance that emphasizes three principles:

  1. Transparency: Be explicit about AI use.
  2. Accountability: Researchers remain responsible for accuracy and originality.
  3. Proportionality: The extent of disclosure should reflect the extent of AI involvement.

How does this impact me?

Publishers including SAGE Publishing and Elsevier have clarified that generative AI tools cannot be credited as authors and that AI assistance must be disclosed in submissions. However, early evidence suggests that actual disclosure rates remain low compared to policy adoption, indicating that many researchers are still navigating uncertainty around what “proper” disclosure entails.

What Proper AI Disclosure Typically Includes

Across these regions, a shared structure for disclosure is emerging.

1. Name the Tool

Identify the AI system used (and version, where relevant).

2. State the Purpose

Clarify how it was used:

  • Language editing
  • Literature summarization
  • Code generation
  • Data analysis support
  • Outline structuring

3. Describe Human Oversight

Explain what you verified, modified, or validated. Disclosure without accountability is incomplete.

4. Place It Transparently

Depending on context, disclosures may appear in:

  • Acknowledgments
  • Methods sections
  • Transparency statements
  • Institutional documentation

How does this impact me?

For PhD students, supervisor approval and alignment with graduate school policy are essential. Check whether your institution has a set format or internal guidelines you need to follow, and revisit those expectations regularly, since policies may have changed since you last checked. Always consult the latest version of any policy before submitting your manuscript or thesis. If you are uncertain, consider making an appointment with your librarian, office of sponsored research, or writing center to ask whether they have current guidance. You can also ask them for recent theses or publications from your institution to see how AI use was disclosed.

Sample AI Disclosure Statements for Researchers and PhD Students

Below are short, adaptable examples.

For Language Editing

“ChatGPT (OpenAI, GPT-4) was used to improve language clarity and grammar in earlier drafts. All content was reviewed and revised by the author, who takes full responsibility for the final manuscript.”

For Literature Summarization or Structuring

“AI tools assisted in summarizing background literature and organizing initial outlines. All interpretations and conclusions were independently verified by the author.”

For Code or Data Support

“Generative AI tools were used to draft portions of analytical code. All code was reviewed, tested, and validated by the authors prior to implementation.”

For Theses or Dissertations

“AI-assisted tools were used for language editing and structural organization during preparation of this dissertation. No AI tools were used to generate original findings or interpretations. The author remains fully responsible for the work.”

Common Mistakes in AI Disclosure

As disclosure norms evolve, several risks are emerging.

1. Being Too Vague

Statements such as “AI tools were used,” without explaining how, provide little transparency.

2. Listing AI as an Author

Most universities and publishers prohibit this. AI cannot assume intellectual responsibility.

3. Disclosing Only After Publication

Transparency should occur at submission, not retroactively.

4. Assuming Minor Use Requires No Disclosure

Even limited editing support may require acknowledgment under some policies.

5. Ignoring Institutional Differences

A journal’s AI policy may differ from a university’s thesis guidelines. Researchers must check both.

From “Can I Use AI?” to “How Should I Disclose It?”

The debate has matured. The question is no longer simply whether AI tools are allowed. It is how their use should be reported responsibly.

To summarize:

  • AI use should not be hidden.
  • Disclosure should be meaningful.
  • Human accountability remains central.

Remember to check and confirm what is expected in your field. Being transparent up front is better than discovering later that you should have disclosed differently.

 

Disclosure for this article: AI (ChatGPT) was used to outline this blog post, refine the language and structural organization, and generate images.

Author

Radhika Vaishnav

A strong advocate of curiosity, creativity and cross-disciplinary conversations
