If you are a master's or PhD student in 2026 and you are unsure where the line is on AI writing assistance, you are in good company. Your university is probably unsure too. The policies are being rewritten mid-semester. The guidelines your program sent in September may have been superseded in January. Advisors are making it up as they go.
This is the honest landscape, and the honest advice.
I write this as someone who works on AI writing tools and who also has an academic background. I have watched dozens of graduate students panic about whether they've done something wrong, or worse, get accused of something they didn't do. I've also seen students submit frankly AI-generated text, get away with it in the short term, and then fail their first serious viva because they couldn't defend ideas they didn't actually hold.
There is a principled path here. This guide lays it out.
The State of University AI Policy in 2026
The 2026 snapshot, based on my tracking of policies across roughly 40 institutions in the US, UK, EU, and East Asia:
- Most major research universities now have at least a first-draft AI policy. Quality varies dramatically. The best policies distinguish between drafting, editing, and ideation assistance. The worst policies say "no AI" without defining what counts as AI.
- Policies differ by discipline within the same institution. Your humanities department may allow broader AI use than the STEM department next door. Your law school almost certainly has stricter rules than your English department. Check your specific program's policy, not the university-wide statement.
- Disclosure is increasingly the default expectation, even when AI use is permitted. Methodology sections, acknowledgments, and explicit AI-use statements are becoming standard.
- Dissertation and thesis policies are the strictest. Undergrad essay policies might tolerate substantial AI assistance. Master's theses and PhD dissertations almost universally expect the substantive intellectual work to be yours, with AI limited to narrowly defined support roles.
- Some institutions have begun requiring draft history (Google Docs version logs, timestamped drafts) as a defense against wrongful accusations and as a check on unacknowledged AI use.
- AI detection scores are being used but increasingly not as sole evidence. After several well-publicized wrongful-accusation cases, most institutions now treat detection as a signal for further inquiry rather than as proof.
Your action item: find your program's current written policy, read it yourself, and ask your advisor about ambiguities. Don't rely on what a classmate said or what the policy said last year.
What's Clearly OK: The Bright-Line Permitted Uses
Across almost all 2026 policies I've reviewed, the following uses of AI are explicitly permitted or uncontroversial:
Brainstorming and Ideation
Using Claude, GPT, or Gemini to explore a research question, list possible angles, challenge your working thesis, or surface counterarguments you hadn't considered. The ideas you end up using are still yours — AI served the same function as a conversation with a friend or a whiteboard session.
No disclosure typically required, though some acknowledgments sections are starting to include AI-assisted brainstorming as a courtesy.
Outlining and Structure Planning
Asking AI to suggest an outline structure given your argument, identify logical gaps, or propose a chapter sequence. You are delegating the scaffolding decision, not the substance. You will revise the outline extensively as you write, so the AI's draft becomes one input among many.
Grammar, Spelling, and Mechanics Checking
Using Coda One's Grammar Checker, Grammarly, or similar to catch subject-verb agreement, punctuation, and spelling. This is the closest AI equivalent to a human proofreader, which has always been permitted. No disclosure needed.
Rephrasing for Clarity (Sentence-Level, Your Own Text)
Taking a sentence you wrote, feeling it is awkward, and asking AI for three alternative phrasings — then picking the one closest to your intent, or rewriting again in your own words. You retain editorial control. The thought is yours. The phrasing decision is yours.
This becomes borderline when you start accepting AI rephrasings without review. See the gray zone below.
Translation for Reading
Using AI to translate non-English sources so you can read them. You still need to cite the original source correctly, and for direct quotations in another language you should verify the translation with a fluent speaker or a standard published translation. But AI-assisted reading of foreign-language literature is uncontroversial.
Coding Assistance for Methodology
If your dissertation involves code (statistical analysis, data processing, simulations), using GitHub Copilot or similar to write the code. This is standard practice across STEM disciplines now and is generally not considered problematic as long as you understand the code and can defend its correctness.
Literature Search Assistance
Using AI to summarize papers you've already identified, help you understand an unfamiliar subfield, or suggest related work you might have missed. You still read the papers themselves and cite them from the original sources (not from AI-generated summaries that may hallucinate details).
What's Gray: Use With Caution and Usually With Disclosure
This is where most controversies happen. These uses are not universally forbidden, but they are also not universally permitted, and the institutional response varies.
AI Drafting Followed by Heavy Editing
You wrote your outline, fed it to Claude, got back a 2,000-word draft, and then spent four hours revising it substantially — restructuring arguments, adding specific citations, rewriting in your voice, adding your analysis.
Some programs accept this with disclosure. Others treat the AI as the substantial author and your revision as window-dressing. The question most advisors will ask: could you reproduce this argument in a conversation without notes? If yes, you probably wrote it. If no, you probably didn't.
My recommendation: disclose in your methodology or acknowledgments. Keep draft history. Be prepared to defend the substantive thinking in a viva or meeting. If you can't, don't submit.
Paraphrasing Sources With AI
You found a relevant passage in a source. You want to integrate the idea into your literature review. You ask AI to paraphrase the passage for you.
This is where plagiarism risk is highest. Paraphrasing is not a mechanical process — it is an act of understanding and re-expression. Delegating that to AI produces text that may or may not accurately capture the source, may or may not avoid tracking the source's specific wording too closely, and almost certainly doesn't reflect your genuine engagement with the material.
Better: read the source yourself. Write the paraphrase yourself, in your voice. Use AI at most for sentence-level polish after you've done the intellectual work of paraphrase. See /glossary/plagiarism-detection for how this is detected.
Generating Text in a Voice You've Prompted
Providing AI with examples of your own writing and asking it to produce new text in that style. This is technically "your voice" but not your writing. Some institutions accept this for certain contexts (administrative writing, abstract drafting); most do not accept it for core dissertation content.
Using AI to Interpret Your Own Data
Giving AI your statistical results, interview transcripts, or observations and asking for interpretation or themes. The risk is that AI will produce interpretations that sound plausible but don't match what a careful human analysis would find. Methodological integrity requires that you do the interpretation yourself, using AI at most as a sanity check after you've arrived at your own conclusions.
What's Not OK: The Bright-Line Prohibited Uses
The following are almost universally prohibited and will likely be treated as academic misconduct regardless of specific institutional policy:
Direct Submission of AI-Generated Text as Original Work
You ask ChatGPT to write a chapter. You run it through a humanizer to reduce detection scores. You submit it with your name on it. This is academic dishonesty. The humanizer does not change the ethical calculus. The low detection score does not change the ethical calculus.
Undisclosed AI Authorship of Substantive Content
Even if your institution allows AI-drafted content with disclosure, using AI for substantive content without disclosing it violates the disclosure requirement, which is itself a form of misconduct.
Fabricating Citations
AI-generated citations are notoriously unreliable. Models hallucinate plausible-looking but non-existent papers, misattribute real ideas to wrong authors, and invent journal issues that don't exist. Submitting a dissertation with AI-fabricated citations is a serious integrity violation even if unintentional. Verify every citation from the original source.
AI-Generated Data or Results
Using AI to produce interview responses, simulate data, generate participant quotes, or otherwise fabricate primary evidence. This is research fraud, full stop. It happens. People are caught. Careers end.
Using AI to Bypass Language Requirements
If your program has an English-language requirement and you are not yet proficient, using AI to produce text that makes it appear you are more proficient than you are can constitute misrepresentation — especially for admissions writing or language assessments.
Substituting AI Output for Required Intellectual Engagement
You were supposed to read 20 papers for your literature review. You read 2 and had AI summarize the other 18 based on abstracts. The resulting literature review is built on AI's possibly-inaccurate summaries. You did not do the work the degree was meant to measure.
How to Document AI Assistance Properly
If you used AI in ways that warrant disclosure, documentation is both an ethical requirement and a protection against misunderstanding. Here is the current best practice.
In Your Methodology Section (Dissertations)
Most social sciences and humanities dissertations now include a short subsection within methodology describing AI tool use. A template:
> "AI-assisted writing tools were used at the following stages: (1) brainstorming of potential research angles during the proposal phase, using Claude (Anthropic, model version X) in [month/year]; (2) sentence-level rephrasing assistance during chapter drafting, using Claude for alternative phrasings which were then reviewed and selected by the author; (3) grammar and mechanics checking during final revision, using [tool]. All substantive intellectual content, argumentation, analysis, and interpretation is the author's own. AI tools were not used for literature summarization, data interpretation, or citation generation."
Adapt to your actual use and your program's expectations. Err toward specificity.
In Your Acknowledgments
A shorter disclosure, often as a separate paragraph or footnote:
> "This work benefited from the use of AI-assisted writing tools (Claude, Coda One Grammar Checker) for drafting support and mechanics review. All arguments and conclusions are the author's own."
In Citations When Quoting AI Output
If you directly quote AI output (for example, as an example in a methodology discussion of AI tools), cite it.
APA 7th edition (current guidance as of 2026):
> Anthropic. (2026). Claude (Opus 4.7, Apr 17 version) [Large language model]. https://claude.ai
In-text: (Anthropic, 2026)
MLA 9th edition:
> "Response to 'your prompt text.'" Claude, Apr 17 version, Anthropic, 17 Apr. 2026, claude.ai.
Include the exact prompt and the date in your notes. Some journals now require submitting the full prompt-response transcript as a supplementary file.
Chicago 17th:
> Anthropic, Claude, Opus 4.7, Apr 17 version. Response to author's prompt, Apr 17, 2026. https://claude.ai.
Note: AI outputs are not retrievable by other readers the way published sources are. Each conversation is unique. This means citing AI is more like citing personal communication than citing a paper. Treat it accordingly.
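If you want those prompt-and-date records to be systematic rather than scattered through your notes, a few lines of scripting can help. The sketch below is one possible approach, not a required format: the `log_ai_interaction` helper and the `ai_use_log.jsonl` filename are my own illustrative choices, not anything a citation style or program policy mandates.

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(prompt: str, response: str, model: str,
                       log_path: str = "ai_use_log.jsonl") -> None:
    """Append one prompt/response pair, with a timestamp, to a JSON Lines log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "model": model,        # the model name and version you would cite
        "prompt": prompt,      # the exact prompt, verbatim
        "response": response,  # the full response, verbatim
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```

Each line of the resulting file is one self-contained JSON record, so the log doubles as a transcript you could hand over if a journal or committee asks for one.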
In Your Draft History
Use Google Docs, Overleaf with version control, or Word with dated saves. If challenged, you should be able to show the document's evolution through your own sessions. This is not paranoia — it is defensive practice that saves careers when false positives or wrongful accusations occur.
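If you work outside Google Docs or Overleaf, even a plain folder of timestamped copies serves the same purpose. Here is a minimal sketch in Python; the `snapshot_draft` helper and the `draft_history/` folder name are illustrative assumptions, not a standard:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_draft(draft_path: str, archive_dir: str = "draft_history") -> Path:
    """Copy the current draft into a timestamped archive folder; return the copy's path."""
    src = Path(draft_path)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the file's modification time
    return dest
```

Run it at the end of each writing session; the accumulating dated copies are exactly the evolution record described above.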
Using Detectors to Verify Your Own Work (This Is Not Cheating)
One question I hear constantly: is it ethical to run my own dissertation through an AI detector?
Yes. It is a verification step, not an evasion tactic.
Reasons you might want to:
- Check for false positives on your honest writing. Especially if you're a non-native English speaker (see /blog/esl-writers-ai-humanizer-guide-2026 for the data — false positive rates of 15-40% on non-native English prose).
- Identify sections where AI-assisted drafting may have left too much AI signature. If you used AI for a first draft and then revised, a detector can show you which sections you edited lightly versus heavily.
- Know what your supervisor or committee will see if they run detection themselves.
- Catch sections where a collaborator (co-author, translator) may have used AI without disclosure.
Running Coda One's AI Detector, Originality.ai, or GPTZero on your own work is not cheating. It's the same kind of self-check as running spell-check or reading aloud. The ethical concern would be using the detector to optimize AI-generated text for bypass, not to verify your own honest work. See our glossary entry at /glossary/ai-detection-score for how to interpret scores.
My recommendation for dissertation students: run detection on each completed chapter. If scores are under 30% AI, you're in normal territory for honestly written academic prose. If scores are above 50%, investigate — either your editing of AI-assisted drafts wasn't deep enough, or you have a false-positive-prone writing style and should consider structural revision (varying sentence length, reducing formulaic transitions) before submission.
Case Study: AI-Assisted Literature Review With Proper Documentation
Here is a realistic, honest workflow for a PhD student conducting a 20,000-word literature review chapter with appropriate AI assistance.
Student: Second-year PhD candidate in health policy. Native English speaker. Program policy: AI tools permitted with disclosure; all substantive content must be the student's.
Phase 1 (4 weeks): Source identification and reading.
Student uses Google Scholar, PubMed, and citation tracking to identify ~120 relevant papers. Reads all of them — not AI summaries — taking notes in Zotero. This is non-delegable work.
Phase 2 (3 weeks): Outline development.
Student drafts an outline on paper, iterates with advisor, then uses Claude to stress-test the outline. Prompt: "Here is my proposed literature review structure. Are there logical gaps? Does the organization serve my argument about [thesis]?" Claude suggests two structural changes. Student adopts one, rejects the other.
Documentation note: brainstorming with Claude on outline structure.
Phase 3 (8 weeks): Writing.
Student writes each subsection by hand from their notes. When stuck on transitions between paragraphs, uses Claude to suggest three options, picks the closest to intent, often rewrites further. For sentence-level awkwardness, uses Claude similarly.
Draft history in Google Docs shows continuous session-by-session work with Claude interactions noted in comments.
Phase 4 (2 weeks): Revision.
Student reads the full chapter aloud, revises for voice and argument integrity. Checks every citation against original source (catches 3 minor misattributions, 1 major one where they'd confused two similar studies — exactly the kind of error AI summaries cause if you rely on them).
Phase 5 (1 week): Mechanics and verification.
Student runs chapter through Coda One Grammar Checker. Accepts genuine fixes, rejects clarity-flattening suggestions. Runs through Coda One AI Detector. Chapter scores 18% AI — normal for academic prose. Cross-checks with Originality.ai (22%) and GPTZero (16%). Satisfied.
Documentation:
In methodology section, student writes a 150-word AI-use disclosure matching the template above. In acknowledgments, a sentence-level mention. All draft history preserved. Ready for submission.
This is what responsible AI-assisted academic work looks like. Speed gain: maybe 20-30% on writing phase. Integrity: preserved. Defensibility: high.
Closing
Universities will eventually settle on AI policies that work. We are not there yet. For the next two to three years, expect inconsistent rules, inconsistent enforcement, and occasional injustice in both directions — students accused of AI use they didn't commit, and students getting away with AI use they shouldn't have.
The stable ground underneath the chaos: if the substantive thinking is yours, if you can defend it, and if you've documented your process honestly, you have done right by your degree. That doesn't change regardless of policy updates.
The Coda One AI Detector and Humanizer are both free to use without a credit card for students running sanity checks on their own work. The Grammar Checker is equally free. Use them as self-verification tools, not as shortcuts around the actual work of scholarship.
Good luck with the dissertation.
Frequently Asked Questions
Is it OK to use AI to write parts of my dissertation?
It depends on your program's policy, which varies significantly in 2026. Broadly: using AI for brainstorming, outlining, grammar, and sentence-level rephrasing is widely accepted. Using AI to draft substantial content is gray territory that usually requires disclosure. Submitting AI-generated text as your own original work is almost universally prohibited. Check your specific program's current written policy.
Do I have to disclose AI use in my thesis?
Increasingly, yes. Even where AI use is permitted, disclosure is becoming the default expectation. Include a short statement in your methodology section or acknowledgments specifying which AI tools you used and for what purposes. Being specific is better than being vague. Undisclosed AI use can itself constitute misconduct even if the underlying use would have been permitted with disclosure.
How do I cite ChatGPT or Claude in APA format?
APA 7th edition (2026 guidance): Anthropic. (2026). Claude (Opus 4.7, Apr 17 version) [Large language model]. https://claude.ai. In-text: (Anthropic, 2026). Include the model version and the date, keep a copy of the exact prompt and response in your notes, and use the citation when you directly quote AI output or describe AI-generated content.
Can I run my own dissertation through an AI detector without it being cheating?
Yes. Running detection on your own work is a verification step, not an evasion tactic. It helps you catch false positives on honestly written prose (especially important for non-native English speakers) and shows you what your committee will see. The ethical concern would be using the detector to optimize AI-generated text for bypass, not verifying your own writing. See /glossary/ai-detection-score.
What should I do if a detector flags my honestly-written thesis as AI?
Don't panic. First, verify with two or three different detectors — single detector scores are unreliable. If multiple detectors agree on a high score, consider structural revision: vary sentence length, reduce formulaic transitions, add specific examples. Preserve your draft history as evidence of your process. If formally accused, request a conversation where you can discuss the work substantively — a student who wrote their own thesis can engage deeply with it.
Can I use AI to paraphrase sources for my literature review?
Strongly discouraged. Paraphrasing is an act of understanding and re-expression, which is the point of a literature review — demonstrating your engagement with the sources. AI paraphrasing is also a plagiarism risk because AI may produce text that tracks the original too closely. Read sources yourself, paraphrase yourself, use AI at most for sentence-level polish after you've done the intellectual work.
Is using AI for grammar and spelling checking considered AI assistance that needs disclosure?
Most programs treat grammar and spelling check (Grammarly, Coda One Grammar Checker, Word's built-in) as uncontroversial, same category as traditional proofreading. Disclosure typically not required. The line is crossed when the "grammar tool" is actually rewriting sentences for style — at that point you're doing substantive AI editing, which is different.
What if my advisor uses AI to edit my dissertation — am I allowed to?
Your advisor's personal workflow doesn't set policy. Advisors sometimes use AI for drafting emails or literature summaries for their own use, which is separate from whether students may use AI for submitted dissertation content. Your program's written policy governs what you may do. If in doubt, ask your advisor directly: "What's the expectation for AI use in my dissertation chapters?"
How do I prove I wrote my own dissertation if I'm accused of AI use?
Preserved draft history is the strongest evidence: Google Docs version history, Overleaf commit log, dated Word documents showing incremental work. Substantive defense in conversation is the second-strongest: being able to discuss arguments, explain specific choices, expand on claims extemporaneously. Counter-detection scores are weakest because detectors are known to be unreliable. Save your process artifacts from day one — you cannot create them retroactively.
My university doesn't have a specific AI policy yet. What should I do?
Ask your advisor and your program director in writing for guidance specific to your work. Get the answer in email or written form so you have a record. In the absence of policy, default to conservative use: brainstorming and mechanics only, substantive intellectual work by hand, disclosure in acknowledgments anyway. If your program later adopts a stricter policy, you're already compliant. If it adopts a permissive policy, you haven't over-restricted yourself in a way that affects substance.
Is it plagiarism if AI-generated text happens to match a published source?
Yes. Plagiarism is about the final product representing someone else's work as your own. AI models are trained on published material, and their outputs sometimes track sources closely enough to constitute plagiarism even when you did not personally copy the source. This is one of the reasons AI-drafted content requires substantial human revision and careful source verification before submission. See /glossary/plagiarism-detection.
Are there AI tools made specifically for academic work that are safer to use?
Some tools market themselves as academic-safe, but the ethical framework is not about which tool you use — it's about how you use it. Any tool that drafts substantive content is in the gray/not-OK zone regardless of marketing. Tools limited to grammar check, reference management, and outline brainstorming are uncontroversial. The Coda One Grammar Checker, Zotero, and similar narrow-purpose tools are fine. Broad drafting tools require judgment about use.