Targeted Strategies & Ready Prompts for 20 Common Patterns

Below, each item includes quick detection tips, counter‑cheating moves, and a prompt to drop into your analysis (use alongside the master prompt).

1) Full essay generated with AI

  • Detect: Generic scaffolds, polished but shallow analysis, voice far from baseline.

  • Counter: Require process artifacts (outline → annotated sources → drafts with changes), micro‑viva, in‑class sample writing.

  • Prompt: “List all generic, stock phrasings and over‑smooth sentences (quote them). Contrast with the student’s baseline idioms and error habits. Identify 8–12 sentences likely beyond their voice and rewrite them in the baseline style to test plausibility.”

2) Paragraph‑by‑paragraph AI help

  • Detect: Voice/formatting swings between paragraphs.

  • Counter: Color‑coded revision passes, short paragraph rationales (“What did you change and why?”).

  • Prompt: “Rate each paragraph’s ‘voice distance’ from baseline on a 0–5 scale with a 1‑line explanation. Output a brief heatmap list with the biggest shifts and examples.”

3) AI summaries of readings

  • Detect: No distinctive lines or scene anchors; fuzzy on minor details.

  • Counter: Quote‑anchored prompts; 2–3 minute in‑class line analysis spot checks.

  • Prompt: “Extract 8–12 claims that require specific textual support. For each, propose the exact passage (book/chapter/page) that would back it. Flag claims with no plausible anchor.”

4) Math homework solved by AI

  • Detect: Perfect steps; weak transfer to novel problems.

  • Counter: Require scratch‑work photos; isomorphic quiz items with surface changes.

  • Prompt: “Create 3 isomorphic problems and explain whether the presented method still works. List 5 micro‑viva questions to confirm authorship of the shown method.”

5) Foreign‑language assignments via AI

  • Detect: Register/grammar noticeably above level; unnatural idiom use.

  • Counter: In‑class quickwrites and audio responses; restricted word banks.

  • Prompt: “Assess level features (CEFR). List 10 constructions that exceed the student’s baseline level and provide simpler paraphrases they would likely produce.”

6) Lab report from templates

  • Detect: Methods/results generic or mismatched to apparatus; no raw data.

  • Counter: Require setup photos, timestamps, raw files, and error analysis tied to equipment.

  • Prompt: “Check unit ranges, noise/variance plausibility, and apparatus alignment. Identify 5–10 data plausibility checks and flag any impossible precision or templated patterns.”

7) Fabricated/misaligned citations

  • Detect: Nonexistent journals, bad page ranges/DOIs, orphan quotes.

  • Counter: Annotated bibliography with quote and page; upload PDFs.

  • Prompt: “For each citation, confirm existence, correct form, and relevance to the specific claim it supports. List fabrications or mismatches and how to fix them.”

8) Using AI for multiple‑choice “hints”

  • Detect: Homework vs. in‑class score gap; identical answer‑switching patterns across peers.

  • Counter: Require 1‑sentence justifications; two‑stage exams.

  • Prompt: “Evaluate answer choices with their justifications. Flag items where the answer is correct but the justification shows superficial or incorrect reasoning.”

9) Creative writing generated by AI

  • Detect: Clichés, emotional flatness, sudden metrical sophistication.

  • Counter: Personalized prompts; process diary; brief read‑aloud plus craft Q&A.

  • Prompt: “Identify 8 craft choices (POV, tense, imagery, rhythm). For each, write a ‘why this, not that’ question and a plausible author rationale; flag spots where rationales are unlikely.”

10) AI‑written discussion posts

  • Detect: Polished yet impersonal; mirrors instructor phrasing; same structure across students.

  • Counter: Require references to classmates’ points; time‑windowed posting; occasional audio replies.

  • Prompt: “Check for concrete references to peers or specific class moments. Score specificity 0–5 and produce 5 follow‑ups that test actual engagement.”

11) AI personal statements/college essays

  • Detect: Template arcs, generic adversity tropes; voice misaligned with the student’s schoolwork.

  • Counter: Interview‑based outlines; timeline artifacts (drafts/emails).

  • Prompt: “Extract 12 memory‑specific details (names, dates, settings). Create viva questions to test recall and note what corroborating artifacts would verify each.”

12) Book reviews without reading the book

  • Detect: No scene/page anchors; vague theme talk.

  • Counter: Quote‑anchored claims; random in‑class text checks.

  • Prompt: “List 10 claims that must be anchored to pages/scenes. Request the exact quotation for each and explain why paraphrase alone is insufficient.”

13) AI‑generated speeches

  • Detect: Over‑structured rhetoric and cadence; delivery doesn’t match text.

  • Counter: Submit outline and speaker notes; live Q&A.

  • Prompt: “Analyze ethos/pathos/logos and cadence. Mark lines likely beyond the student’s voice. Draft 6 live Q&A questions keyed to those lines.”

14) Group projects with AI “contributions”

  • Detect: Missing voice in drafts; slides misaligned with the speaker’s talk.

  • Counter: Contribution logs; commit histories; rotating stand‑ups.

  • Prompt: “Infer sub‑components and likely authorship signals from the final artifact and any version history. List discrepancies and targeted viva questions per member.”

15) Paraphrasing tools to evade plagiarism

  • Detect: Awkward synonym swaps, tense/voice drift, meaning distortion.

  • Counter: Side‑by‑side paraphrase with original and rationale; direct citation training.

  • Prompt: “Compare original vs. paraphrase for semantic fidelity, preserved technical terms, presence of citation, suspicious synonym swaps, and any ≥5‑word near‑verbatim strings. Conclude with a pass/fail recommendation and rationale.”
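
A quick script can pre‑flag the ≥5‑word near‑verbatim strings before the LLM pass. This is a minimal sketch assuming plain‑text inputs; the function names are illustrative.

```python
# Minimal sketch: flag shared word 5-grams between an original passage and a
# student paraphrase. Casing and punctuation are stripped so trivial edits
# don't hide a near-verbatim string.
import re

def word_ngrams(text: str, n: int = 5) -> set:
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_strings(original: str, paraphrase: str, n: int = 5) -> list:
    # Intersect the two n-gram sets and restore them to readable strings.
    overlap = word_ngrams(original, n) & word_ngrams(paraphrase, n)
    return [" ".join(gram) for gram in sorted(overlap)]

# Example: shared_strings(source_text, student_text) lists every 5-word run
# that survives the paraphrase verbatim (modulo case and punctuation).
```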

16) Fake interview transcripts

  • Detect: Uniform, too‑clean answers; no interruptions or fillers; no provenance.

  • Counter: Consent/contact info; audio snippet; timestamped notes.

  • Prompt: “Assess conversational features (interruptions, repairs, off‑topic drift). Propose 5 provenance checks (contact/email/audio) and list inconsistencies to probe.”

17) AI‑generated cheat sheets for closed‑book tests

  • Detect: Topic‑specific score spikes; similar handwritten formats among peers.

  • Counter: Open‑note but conceptual exams; item pools and versioning.

  • Prompt: “Analyze topic‑wise performance vs. prior history to find improbable jumps. Propose 5 concept‑variant items to re‑test understanding.”

18) Auto‑completed worksheets

  • Detect: Overlong, perfectly structured answers; identical phrasing across a class.

  • Counter: Randomized versions; explain‑your‑step fields; brief oral checks.

  • Prompt: “Identify repeated answer templates (phrasing/order/sentence frames). Cluster students with identical templates and list likely common sources to check.”

19) AI‑streamlined reflections/journals

  • Detect: Generic emotion; no sensory detail; identical structure across entries.

  • Counter: Prompts with concrete anchors (date/place/names); occasional in‑class timed entries.

  • Prompt: “Score each entry for concrete detail density (people, places, times, senses). Flag entries below threshold and generate 6 authenticity probes per entry.”

20) Scripted debates using AI talking points

  • Detect: Over‑rehearsed delivery; brittle under cross‑examination; identical rebuttal shells.

  • Counter: Surprise cross‑questions; evidence cards with citations; require prep notes.

  • Prompt: “Create 10 adversarial, evidence‑bound questions tailored to this speech and a rubric distinguishing responsive reasoning from pre‑scripted recitation.”

Batch Utilities (drop‑in when needed)

Cross‑student similarity clustering (paste 5–50 submissions):
“Normalize texts (remove headings/citations). Extract rare 6–12‑word phrases, outline shapes, and idiosyncratic errors. Build a similarity matrix and list the top 10 overlaps with quoted strings, percent overlap, and a hypothesis about shared sources or collaboration.”
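
If you want a deterministic first pass before the LLM prompt, the core of this utility is pairwise overlap of rare phrases. A minimal sketch, assuming plain‑text submissions; the rarity filtering and threshold are illustrative simplifications of what the prompt asks for.

```python
# Minimal sketch of the clustering idea: represent each submission by its
# word 6-grams and score pairwise Jaccard similarity. Real use would add
# normalization (strip headings/citations) and rarity filtering against a
# reference corpus.
import re
from itertools import combinations

def ngrams(text: str, n: int = 6) -> set:
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def top_overlaps(submissions: dict, threshold: float = 0.05) -> list:
    grams = {name: ngrams(text) for name, text in submissions.items()}
    pairs = []
    for (na, ga), (nb, gb) in combinations(grams.items(), 2):
        score = jaccard(ga, gb)
        if score >= threshold:
            shared = [" ".join(g) for g in list(ga & gb)[:3]]  # sample quotes
            pairs.append((score, na, nb, shared))
    return sorted(pairs, reverse=True)[:10]

# Example: top_overlaps({"ann": text_a, "ben": text_b, "cai": text_c})
```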

Internet match scan (direct‑copy leads):
“Extract 25 distinctive phrases. Search exact and fuzzy matches online. Report overlaps with links/titles and the exact matched strings; note the approximate percentage of overlap.”
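
The phrase‑extraction step can be scripted so you only paste quoted queries into a search engine. A minimal sketch; the rarity heuristic and stop‑word list are stand‑ins for real corpus frequencies, and all names are illustrative.

```python
# Minimal sketch: rank fixed-length word spans by how few common words they
# contain, then emit quoted strings ready for exact-match searching.
import re

COMMON = {"the", "a", "an", "of", "to", "in", "and", "is", "are", "was",
          "it", "that", "this", "for", "on", "with", "as", "be", "by"}

def distinctive_phrases(text: str, length: int = 8, k: int = 25) -> list:
    words = re.findall(r"[A-Za-z']+", text)
    spans = [words[i:i + length] for i in range(len(words) - length + 1)]
    def rarity(span):  # fraction of non-common words in the span
        return sum(w.lower() not in COMMON for w in span) / length
    seen, out = set(), []
    for span in sorted(spans, key=rarity, reverse=True):
        if not any(w in seen for w in span):   # avoid overlapping picks
            out.append('"' + " ".join(span) + '"')
            seen.update(span)
        if len(out) == k:
            break
    return out
```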

Citation audit (entire bibliography):
“Resolve each reference (DOI/URL/journal/book). Confirm year/volume/pages and topic relevance to the specific claim it supports. List issues as: reference → problem → why it matters → how to fix, with verification links where possible.”
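
DOI existence and metadata checks can be automated against the public Crossref REST API before you hand the list to the LLM. A minimal sketch; the matching fields and helper names are illustrative, and non‑DOI references still need manual lookup.

```python
# Minimal sketch: verify that a DOI resolves via Crossref and that its
# metadata matches the citation as written.
import requests

def check_doi(doi: str, cited_year: int, cited_title_fragment: str) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return {"doi": doi, "exists": False}
    msg = resp.json()["message"]
    title = (msg.get("title") or [""])[0]
    year = (msg.get("issued", {}).get("date-parts") or [[None]])[0][0]
    return {
        "doi": doi,
        "exists": True,
        "title_matches": cited_title_fragment.lower() in title.lower(),
        "year_matches": year == cited_year,
        "resolved_title": title,
    }

# Example: check_doi("10.1038/nature14539", 2015, "deep learning")
```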

Voice‑drift vs. baseline (single student):
“Compare sentence‑length variance, function‑word ratio, idiom density, hedging markers, connectors, and punctuation habits. Highlight the 8 sentences most unlike the baseline and rewrite them in the student’s typical style to test plausibility.”
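
The feature comparison itself is easy to script for a reproducible baseline. A minimal sketch covering a few of the listed features; the function‑word list and feature set are illustrative simplifications.

```python
# Minimal sketch: compute simple stylometric features for a baseline sample
# and a questioned text, then report the deltas.
import re
import statistics

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "but", "or",
                  "that", "which", "if", "because", "however", "very"}

def features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    return {
        "mean_sentence_len": statistics.mean(lengths),
        "sentence_len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
        "function_word_ratio": sum(w in FUNCTION_WORDS for w in words) / len(words),
        "commas_per_sentence": text.count(",") / len(sentences),
    }

def drift(baseline: str, questioned: str) -> dict:
    fb, fq = features(baseline), features(questioned)
    return {k: round(fq[k] - fb[k], 3) for k in fb}
```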

Lab/data forensics:
“Check units, ranges, replication consistency, and expected noise. Flag templated data or impossible precision. Suggest five follow‑up artifacts to request (photos, raw files with timestamps, notebook exports).”
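
Two of these checks are mechanical enough to script: uniform decimal precision and variance below what the instrument could produce. A minimal sketch; the thresholds are illustrative assumptions, not calibrated values. Keeping readings as strings preserves trailing zeros that float parsing would discard.

```python
# Minimal sketch of two data plausibility checks for submitted readings.
import statistics

def decimal_places(x: str) -> int:
    return len(x.split(".")[1]) if "." in x else 0

def plausibility_flags(readings: list, instrument_resolution: float) -> list:
    flags = []
    places = {decimal_places(r) for r in readings}
    values = [float(r) for r in readings]
    # Every reading carrying the same precision can suggest templated data.
    if len(places) == 1 and places != {0} and len(readings) >= 8:
        flags.append("identical decimal precision on every reading (templated?)")
    # Spread far below the instrument's resolution is implausibly clean.
    if len(values) > 1 and statistics.stdev(values) < instrument_resolution / 2:
        flags.append("variance below instrument resolution")
    return flags

# Example: plausibility_flags(["9.81", "9.81", "9.82", "9.81", "9.81",
#                              "9.82", "9.81", "9.81"], 0.05)
```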

Code/CS assignments:
“Analyze structure, naming patterns, comment style, and dependency choices. Identify likely borrowed segments. Provide 5 viva questions and a tiny live modification task that tests true authorship.”
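
A style‑fingerprint comparison between the submission and the student’s earlier files gives the viva concrete targets. A minimal sketch assuming Python‑style sources with `#` comments; the metrics and regexes are illustrative.

```python
# Minimal sketch: summarize naming-convention mix, comment density, and
# identifier length for one source file; compare fingerprints across files.
import re

def style_fingerprint(source: str) -> dict:
    lines = source.splitlines()
    idents = re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", source)
    snake = sum("_" in i for i in idents)
    camel = sum(re.fullmatch(r"[a-z]+(?:[A-Z][a-z0-9]*)+", i) is not None
                for i in idents)
    comments = sum(1 for ln in lines if ln.lstrip().startswith("#"))
    return {
        "snake_ratio": snake / len(idents) if idents else 0.0,
        "camel_ratio": camel / len(idents) if idents else 0.0,
        "comment_density": comments / len(lines) if lines else 0.0,
        "avg_ident_len": sum(map(len, idents)) / len(idents) if idents else 0.0,
    }
```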

Counter‑Cheating (Prevention) Playbook

  • Require process artifacts (proposal → outline → annotated quotes with page numbers → tracked‑changes drafts → final + reflection).

  • Establish a baseline writing sample in class early in the term (10–12 minutes) for voice comparison.

  • Use short micro‑vivas (3–6 minutes) for major assignments.

  • Personalize prompts to class material, local references, and prior discussions.

  • Require version history/metadata or photos of handwritten work.

  • Make quote‑anchoring a grading requirement for text‑dependent claims.

  • Add justification fields to MC/short answer.

  • Use two‑stage assessments (individual, then brief group).

  • Prefer open‑note, concept‑focused tests over closed‑book recall.

  • Bake into your rubric: specificity, verifiable evidence, and process transparency—cap scores if artifacts are missing.

  • Include a clear integrity clause in your syllabus (allowed vs. disallowed help, possible evidence you may request, and fair review steps).

  • Run frequent, low‑stakes checks to continuously sample authentic voice.

Personal Conference Question Bank (mix and match):

  • “Walk me through the choice behind paragraph 3’s structure.”

  • “Which specific pages support your claim about ___? Read the exact line.”

  • “Show me step 4—why this method over alternative X?”

  • “Open your source: where did this quote come from, and what’s on the page before/after?”

  • “Why this variable/algorithm? What happens if we change X?”

  • “What changed between Draft 1 and the final, and why?”

  • “If you had 15 more minutes, what would you revise and why?”