Master Academic‑Integrity Review Prompt (copy‑paste this into ChatGPT)
Purpose: Generate a careful, evidence‑based integrity report. Avoid “AI detector” scores. Use textual forensics, citation checks, web matching (if available), and comparison to prior work. Treat signals as leads, not verdicts.
**Paste and fill in the brackets before submitting the prompt**:
Context
Course/level: [ ]
Assignment prompt (verbatim): [ ]
Learning goals/skills assessed: [ ]
Allowed supports (e.g., grammar checker, peer feedback): [ ]
Disallowed supports (e.g., generative drafting): [ ]
Due date/time: [ ]
Rubric summary or paste full text: [ ]
Student info
Student name: [ ]
Baseline writing samples (2–5 short first drafts): [ ] [ ] [ ]
Notes on typical style/ability: [ ]
Submission under review
Full text (paste): [ ]
Claimed sources/bibliography: [ ]
Document metadata or version history (if available): [ ]
Figures/data/code/raw files (if any): [ ]
Analyze in sections:
A) Quick triage (5‑minute sniff test)
Summarize the thesis/claim.
List 6–10 voice features (sentence length, idioms, hedging, connectors).
Mark abrupt shifts by paragraph with brief explanations.
B) Stylometry vs. baseline
Compare sentence‑length spread, function‑word patterns, discourse markers, error signatures.
Quote 6–10 distinctive n‑grams from the baseline and note whether they appear here.
Conclude “Voice alignment: low / medium / high,” with evidence.
C) Citation & source audit
Extract every quotation/citation. Build a short list for each: the claim/quote; the source as cited; whether it exists; page/URL/DOI; topical relevance; red flags (wrong year/pages, nonexistent journal).
Spot‑check 3–5 key claims for page‑accurate quotes.
Conclude “Citation integrity: sound / mixed / compromised,” with examples.
D) Web match scan (verbatim & near‑verbatim)
Pull 20 distinctive 6–12‑word phrases and check for direct or near‑direct matches online.
Report any overlaps with URLs/titles and approximate overlap.
E) Intra‑class / cross‑document similarity (optional)
If you have peer submissions: cluster by rare‑phrase overlap, outline shape, and idiosyncratic errors; list suspicious pairs/groups.
F) Content plausibility & context fit
Check whether claims need specific page/scene, apparatus, datasets, or local class events—and whether the text actually shows those anchors.
Flag anachronisms, spelling/register flips, unexplained terminology jumps, and mismatched figure captions.
G) Oral verification plan (micro‑viva)
Draft 6–10 targeted questions that force the author to explain choices, sources, and revisions. Include 1–2 seeded errors from the submission and ask the student to find/fix them. Provide the expected short answers an authentic author would know.
H) Evidence summary & next steps
Roll up signals into “Low / Moderate / High concern,” citing quotes, URLs, and page numbers.
Recommend fair next steps (request drafts/source PDFs, short viva, compare to baseline).
Produce a neutral, copy‑ready note for the LMS documenting the process.
Add a closing summary list organized by assignment type with: Assignment Type | What Students Do | Why It’s Problematic | Detection Tips | Counter‑Cheating Plan.
Targeted Strategies & Ready Prompts for 20 Common Patterns
Below, each item includes quick detection tips, counter‑cheating moves, and a prompt to drop into your analysis (use alongside the master prompt).
1) Full essay generated with AI
Detect: Generic scaffolds, polished but shallow analysis, voice far from baseline.
Counter: Required process artifacts (outline → annotated sources → drafts with changes), micro‑viva, in‑class sample writing.
Prompt: “List all generic, stock phrasings and over‑smooth sentences (quote them). Contrast with the student’s baseline idioms and error habits. Identify 8–12 sentences likely beyond their voice and rewrite them in the baseline style to test plausibility.”
2) Paragraph‑by‑paragraph AI help
Detect: Voice/formatting swings between paragraphs.
Counter: Color‑coded revision passes, short paragraph rationales (“What did you change and why?”).
Prompt: “Rate each paragraph’s ‘voice distance’ from baseline on a 0–5 scale with a 1‑line explanation. Output a brief heatmap list with the biggest shifts and examples.”
3) AI summaries of readings
Detect: No distinctive lines or scene anchors; fuzzy on minor details.
Counter: Quote‑anchored prompts; 2–3 minute in‑class line analysis spot checks.
Prompt: “Extract 8–12 claims that require specific textual support. For each, propose the exact passage (book/chapter/page) that would back it. Flag claims with no plausible anchor.”
4) Math homework solved by AI
Detect: Perfect steps; weak transfer to novel problems.
Counter: Require scratch‑work photos; isomorphic quiz items with surface changes.
Prompt: “Create 3 isomorphic problems and explain whether the presented method still works. List 5 micro‑viva questions to confirm authorship of the shown method.”
5) Foreign‑language assignments via AI
Detect: Register/grammar noticeably above level; unnatural idiom use.
Counter: In‑class quickwrites and audio responses; restricted word banks.
Prompt: “Assess level features (CEFR). List 10 constructions that exceed the student’s baseline level and provide simpler paraphrases they would likely produce.”
6) Lab report from templates
Detect: Methods/results generic or mismatched to apparatus; no raw data.
Counter: Require setup photos, timestamps, raw files, and error analysis tied to equipment.
Prompt: “Check unit ranges, noise/variance plausibility, and apparatus alignment. Identify 5–10 data plausibility checks and flag any impossible precision or templated patterns.”
7) Fabricated/misaligned citations
Detect: Nonexistent journals, bad page ranges/DOIs, orphan quotes.
Counter: Annotated bibliography with quote and page; upload PDFs.
Prompt: “For each citation, confirm existence, correct form, and relevance to the specific claim it supports. List fabrications or mismatches and how to fix them.”
8) Using AI for multiple‑choice “hints”
Detect: Homework vs. in‑class score gap; identical answer‑change patterns across peers.
Counter: Require 1‑sentence justifications; two‑stage exams.
Prompt: “Evaluate answer choices with their justifications. Flag items where the answer is correct but the justification shows superficial or incorrect reasoning.”
9) Creative writing generated by AI
Detect: Clichés, emotional flatness, sudden metrical sophistication.
Counter: Personalized prompts; process diary; brief read‑aloud plus craft Q&A.
Prompt: “Identify 8 craft choices (POV, tense, imagery, rhythm). For each, write a ‘why this, not that’ question and a plausible author rationale; flag spots where rationales are unlikely.”
10) AI‑written discussion posts
Detect: Polished yet impersonal; mirrors instructor phrasing; same structure across students.
Counter: Require references to classmates’ points; time‑windowed posting; occasional audio replies.
Prompt: “Check for concrete references to peers or specific class moments. Score specificity 0–5 and produce 5 follow‑ups that test actual engagement.”
11) AI personal statements/college essays
Detect: Template arcs, generic adversity tropes; voice misaligned with school work.
Counter: Interview‑based outlines; timeline artifacts (drafts/emails).
Prompt: “Extract 12 memory‑specific details (names, dates, settings). Create viva questions to test recall and note what corroborating artifacts would verify each.”
12) Book reviews without reading the book
Detect: No scene/page anchors; vague theme talk.
Counter: Quote‑anchored claims; random in‑class text checks.
Prompt: “List 10 claims that must be anchored to pages/scenes. Request the exact quotation for each and explain why paraphrase alone is insufficient.”
13) AI‑generated speeches
Detect: Over‑structured rhetoric and cadence; delivery doesn’t match text.
Counter: Submit outline and speaker notes; live Q&A.
Prompt: “Analyze ethos/pathos/logos and cadence. Mark lines likely beyond the student’s voice. Draft 6 live Q&A questions keyed to those lines.”
14) Group projects with AI “contributions”
Detect: Missing voice in drafts; slides misaligned with the speaker’s talk.
Counter: Contribution logs; commit histories; rotating stand‑ups.
Prompt: “Infer sub‑components and likely authorship signals from the final artifact and any version history. List discrepancies and targeted viva questions per member.”
15) Paraphrasing tools to evade plagiarism
Detect: Awkward synonym swaps, tense/voice drift, meaning distortion.
Counter: Side‑by‑side paraphrase with original and rationale; direct citation training.
Prompt: “Compare original vs. paraphrase for semantic fidelity, preserved technical terms, presence of citation, suspicious synonym swaps, and any ≥5‑word near‑verbatim strings. Conclude with a pass/fail recommendation and rationale.”
16) Fake interview transcripts
Detect: Uniform, too‑clean answers; no interruptions or fillers; no provenance.
Counter: Consent/contact info; audio snippet; timestamped notes.
Prompt: “Assess conversational features (interruptions, repairs, off‑topic drift). Propose 5 provenance checks (contact/email/audio) and list inconsistencies to probe.”
17) AI‑generated cheat sheets for closed‑book tests
Detect: Topic‑specific score spikes; similar handwritten formats among peers.
Counter: Open‑note but conceptual exams; item pools and versioning.
Prompt: “Analyze topic‑wise performance vs. prior history to find improbable jumps. Propose 5 concept‑variant items to re‑test understanding.”
18) Auto‑completed worksheets
Detect: Overlong, perfectly structured answers; identical phrasing across a class.
Counter: Randomized versions; explain‑your‑step fields; brief oral checks.
Prompt: “Identify repeated answer templates (phrasing/order/sentence frames). Cluster students with identical templates and list likely common sources to check.”
19) AI‑streamlined reflections/journals
Detect: Generic emotion; no sensory detail; identical structure across entries.
Counter: Prompts with concrete anchors (date/place/names); occasional in‑class timed entries.
Prompt: “Score each entry for concrete detail density (people, places, times, senses). Flag entries below threshold and generate 6 authenticity probes per entry.”
20) Scripted debates using AI talking points
Detect: Over‑rehearsed delivery; brittle under cross‑examination; identical rebuttal shells.
Counter: Surprise cross‑questions; evidence cards with citations; require prep notes.
Prompt: “Create 10 adversarial, evidence‑bound questions tailored to this speech and a rubric distinguishing responsive reasoning from pre‑scripted recitation.”
Batch Utilities (drop‑in when needed)
Cross‑student similarity clustering (paste 5–50 submissions):
“Normalize texts (remove headings/citations). Extract rare 6–12‑word phrases, outline shapes, and idiosyncratic errors. Build a similarity matrix and list the top 10 overlaps with quoted strings, percent overlap, and a hypothesis about shared sources or collaboration.”
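Optional local pre‑check (a minimal sketch, not part of the paste‑in prompt): the Python below shingles each submission into 8‑word phrases and prints the highest pairwise overlaps along with a few shared strings. The `submissions/` folder, the shingle length, and the 0.05 threshold are illustrative assumptions to adjust for your class.

```python
# Minimal sketch: pairwise rare-phrase overlap between submissions.
# Assumes each submission is a plain-text file in submissions/;
# the 8-word shingle size and threshold are illustrative choices.
import re
from itertools import combinations
from pathlib import Path

def shingles(text, n=8):
    """Lowercased n-word phrases ('shingles') from a submission."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

submissions = {p.name: shingles(p.read_text(encoding="utf-8"))
               for p in Path("submissions").glob("*.txt")}

pairs = []
for (name_a, sh_a), (name_b, sh_b) in combinations(submissions.items(), 2):
    score = jaccard(sh_a, sh_b)
    if score > 0.05:  # illustrative threshold; tune for your class
        sample = sorted(sh_a & sh_b, key=len, reverse=True)[:3]
        pairs.append((score, name_a, name_b, sample))

for score, a, b, sample in sorted(pairs, reverse=True)[:10]:
    print(f"{a} <-> {b}: {score:.0%} shingle overlap")
    for phrase in sample:
        print(f"    shared: \"{phrase}\"")
```

High overlap is a lead to read the pair side by side, not proof of collusion.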
Internet match scan (direct‑copy leads):
“Extract 25 distinctive phrases. Search exact and fuzzy matches online. Report overlaps with links/titles and the exact matched strings; note approximate overlap.”
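Optional local pre‑check (a minimal sketch): the web search itself is best done by pasting quoted phrases into a search engine, so the Python below only extracts and ranks 25 candidate phrases by word rarity within the document. The file name `essay.txt` and the scoring are illustrative assumptions.

```python
# Minimal sketch: pull distinctive 6-12-word phrases to paste (in quotes)
# into a search engine. "essay.txt" and the rarity scoring are illustrative.
import re
from collections import Counter

text = open("essay.txt", encoding="utf-8").read()
words = re.findall(r"[A-Za-z']+", text)
freq = Counter(w.lower() for w in words)

def rarity(phrase_words):
    # Higher score = more low-frequency words = more distinctive phrase.
    return sum(1.0 / freq[w.lower()] for w in phrase_words)

candidates = []
for n in (6, 8, 10, 12):
    for i in range(0, len(words) - n + 1, n):      # non-overlapping windows
        chunk = words[i:i + n]
        candidates.append((rarity(chunk), " ".join(chunk)))

seen, picks = set(), []
for score, phrase in sorted(candidates, reverse=True):
    if phrase.lower() not in seen:
        seen.add(phrase.lower())
        picks.append(phrase)
    if len(picks) == 25:
        break

for phrase in picks:
    print(f'"{phrase}"')
```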
Citation audit (entire bibliography):
“Resolve each reference (DOI/URL/journal/book). Confirm year/volume/pages and topic relevance to the specific claim it supports. List issues as: reference → problem → why it matters → how to fix, with verification links where possible.”
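Optional local pre‑check (a minimal sketch): the Python below tests whether each cited DOI resolves at doi.org. It cannot judge relevance, only existence, and it needs the third‑party `requests` package; the listed DOIs are placeholders to replace with the student's bibliography.

```python
# Minimal sketch: check whether cited DOIs resolve at all.
# A resolving DOI is not proof of relevance; a non-resolving one is a lead.
# Requires the third-party "requests" package; the list is a placeholder.
import requests

dois = [
    "10.1234/replace-with-a-cited-doi",   # placeholder entries; replace with
    "10.5678/another-cited-doi",          # the DOIs from the bibliography
]

for doi in dois:
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=10)
    status = ("resolves" if resp.status_code in (301, 302, 303)
              else f"NOT FOUND ({resp.status_code})")
    print(f"{doi}: {status}")
```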
Voice‑drift vs. baseline (single student):
“Compare sentence‑length variance, function‑word ratio, idiom density, hedging markers, connectors, and punctuation habits. Highlight the 8 sentences most unlike the baseline and rewrite them in the student’s typical style to test plausibility.”
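Optional local pre‑check (a minimal sketch): the Python below computes a few of the named features, mean sentence length and spread, function‑word ratio, and hedging density, for a baseline file and the submission side by side. File names and the word lists are illustrative and worth tailoring to the course.

```python
# Minimal sketch: compare a few coarse voice features between the
# student's baseline and the submission. File names and word lists are
# illustrative; the numbers are leads, not verdicts.
import re
from statistics import mean, pstdev

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "but", "that",
                  "which", "with", "for", "on", "as", "by", "it", "this"}
HEDGES = {"perhaps", "maybe", "arguably", "seems", "suggests", "likely",
          "somewhat", "probably", "might", "could"}

def features(path):
    text = open(path, encoding="utf-8").read()
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "mean sentence length": round(mean(lengths), 1),
        "sentence length spread": round(pstdev(lengths), 1),
        "function-word ratio": round(sum(w in FUNCTION_WORDS for w in words) / len(words), 3),
        "hedging per 1000 words": round(1000 * sum(w in HEDGES for w in words) / len(words), 1),
    }

for label, path in [("baseline", "baseline.txt"), ("submission", "submission.txt")]:
    print(label, features(path))
```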
Lab/data forensics:
“Check units, ranges, replication consistency, and expected noise. Flag templated data or impossible precision. Suggest five follow‑up artifacts to request (photos, raw files with timestamps, notebook exports).”
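Optional local pre‑check (a minimal sketch): the Python below runs two cheap checks on a CSV of raw readings, exact duplicate rows and the distribution of decimal places (precision beyond what the apparatus can deliver is a lead). The file name and column name are illustrative assumptions.

```python
# Minimal sketch: two cheap plausibility checks on a CSV of raw readings,
# using only the standard library. "data.csv" and the "reading" column
# are illustrative; thresholds should reflect the actual instrument.
import csv
from collections import Counter

with open("data.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# 1) Exactly duplicated rows are unusual in genuinely noisy measurements.
row_counts = Counter(tuple(sorted(r.items())) for r in rows)
for row, count in row_counts.items():
    if count > 1:
        print(f"duplicated {count}x: {dict(row)}")

# 2) Decimal-place distribution: far more digits than the apparatus can
#    deliver suggests copied or fabricated precision.
values = [r["reading"] for r in rows]          # illustrative column name
decimals = Counter(len(v.split(".")[1]) if "." in v else 0 for v in values)
print("decimal-place distribution:", dict(decimals))
```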
Code/CS assignments:
“Analyze structure, naming patterns, comment style, and dependency choices. Identify likely borrowed segments. Provide 5 viva questions and a tiny live modification task that tests true authorship.”
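Optional local pre‑check (a minimal sketch, Python submissions only): the code below normalizes identifiers and literals, then measures overlap of token 5‑grams between two files, so simple renaming does not hide reuse. File names and k=5 are illustrative; strip any shared starter code before comparing.

```python
# Minimal sketch: token-level overlap between two Python submissions.
# Identifiers and literals are normalized so renaming alone does not
# hide reuse. File names and k=5 are illustrative assumptions.
import io
import tokenize

def fingerprint(path, k=5):
    with open(path, encoding="utf-8") as f:
        src = f.read()
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.NAME:
            tokens.append("ID")        # identifiers and keywords collapse to ID
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            tokens.append("LIT")       # literals collapse to LIT
        elif tok.type == tokenize.OP:
            tokens.append(tok.string)
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

a, b = fingerprint("student_a.py"), fingerprint("student_b.py")
overlap = len(a & b) / min(len(a), len(b))
print(f"normalized 5-gram overlap: {overlap:.0%}")
```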