Module 7: How to Create a Complete Personalized Assessment Package

Master Prompt:
"I’ve uploaded student snapshots, accommodation spreadsheets, and unit or lesson plans. For each student, generate the following:

A Primary (accessible) assessment based on their strengths and accommodations.
A Reach (stretch goal) assessment that targets a specific area of growth.
A step-by-step student-facing workflow or instructional guide for each task, including scaffolds and accommodations.
A customized rubric aligned to each student’s assessment goals and learning profile.
Clear, individualized directions and any needed materials or worksheets.
A sample feedback message that can be used after students complete the assessment, highlighting strengths and offering actionable improvement steps."

Module Slides 5 and 6.

Analyzing Standardized Assessment Data to Pinpoint Skill Gaps

I am uploading anonymized student performance data, assessment results, and/or curriculum standards for my subject and grade level. Analyze this in four stages. At the end of each stage, stop and ask me to type ‘Continue’ before moving to the next stage so we can go deeper without losing detail. Use plain, concise language, and format with clear headings, tables, or bullet points where it improves clarity. Focus only on the most important patterns and needs, not every single datapoint.

Stage 1 – Class Summary of Strengths and Needs: Summarize overall performance patterns, highlighting major strengths, common areas of struggle, and key trends.

Stage 2 – Detailed Skill or Standard Breakdown: For each key skill or standard, summarize class performance, noting where mastery is high, partial, or low, and flagging priority areas for improvement.

Stage 3 – Individual Student Skill Playbooks: For each student, list their top 1–3 priority learning needs in practical, teachable terms, ready for immediate instruction.

Stage 4 – Targeted Instructional Recommendations: Ask me clarifying questions about my subject, grade level, and teaching style. Then give specific, concrete ways to address the identified needs, including suggested units, lesson topics, skill-building activities, and differentiation strategies for varying ability levels.

Begin with Stage 1 only, then wait for my confirmation before continuing to Stage 2, Stage 3, and Stage 4.

Teacher Prompt Menu: Streamlining Text Adaptation with ChatGPT

Works for any subject, any grade level. Replace placeholders to fit your lesson.
Format:
[paste your text here] — the text you want to adapt
[subject/topic] — e.g., Biology, US History, Algebra, Literature
[intended grade/skill level] — e.g., 5th grade, AP, beginner ELL

A. Simplifying & Summarizing

  1. Abridged Reading

“Simplify [paste your text here] for [intended grade/skill level] students studying [subject/topic], while keeping tone and essential details intact.”

  2. Outline & Summary

“Create a concise summary and clear outline of [paste your text here], highlighting key concepts, events, or processes.”

  3. Checklists

“Make a checklist of the most important ideas, terms, or processes from [paste your text here].”

  4. Reference Sheets

“Develop a quick-reference sheet summarizing the main points, terms, or processes from [paste your text here].”

  5. Study Guide

“Create an accessible study guide for [paste your text here] with main ideas and vocabulary.”

  6. Background Readings

“Suggest background materials that provide context for [paste your text here] in [subject/topic].”

B. Building Comprehension & Critical Thinking

  1. Worksheet Creation

“Create comprehension and analysis questions for [paste your text here] using Bloom’s Taxonomy, suited for [intended grade/skill level].”

  2. Sentence Starters

“Provide scaffolded sentence starters for discussion or writing about [paste your text here].”

  3. Guided Notes

“Produce guided notes that help students follow and capture key points from [paste your text here].”

  4. Annotation Strategies

“List effective annotation strategies tailored to [paste your text here].”

  5. Step-by-Step Annotation

“Write explicit instructions for annotating [paste your text here] to analyze structure and meaning.”

  6. Rhetorical Strategy Explanation

“Explain any rhetorical or persuasive strategies used in [paste your text here] with subject-relevant examples.”

  7. Understanding Systems / Processes

“Clearly explain any systems, sequences, or processes described in [paste your text here].”

C. Vocabulary Development

  1. Pre-Teaching Instructions

“Prepare notes introducing key vocabulary and concepts from [paste your text here] before students read it.”

  2. Vocabulary List

“List challenging words from [paste your text here] with student-friendly definitions and examples.”

D. Differentiation & Accessibility

  1. Tiered Worksheets

“Produce three levels of worksheets for [paste your text here]—advanced, intermediate, and developing.”

  2. Visual Materials

“Create visual aids (charts, diagrams, concept maps) to explain key ideas from [paste your text here].”

  3. Hands-On Activities

“Suggest physical or interactive activities to reinforce concepts from [paste your text here].”

  4. Auditory Processing Supports

“Develop auditory cues or listening activities to support comprehension of [paste your text here].”

  5. Multi-modal Lesson Design

“Design a lesson using [paste your text here] that integrates visual, auditory, and kinesthetic learning modes.”

E. Assignments & Assessments

  1. Assignment Instructions

“Write detailed, step-by-step instructions for an assignment based on [paste your text here].”

  2. Sample Problems / Examples

“Create model responses, worked examples, or problem solutions based on [paste your text here].”

  3. Rubric Development

“Build a grading rubric for an assignment tied to [paste your text here].”

  4. Student Work Feedback

“Give constructive feedback on a sample student response to [paste your text here].”

  5. Assessment Suggestions

“Suggest both informal and formal assessment methods for [paste your text here].”

F. Collaboration & Enrichment

  1. Peer Support / Group Activities

“Propose cooperative learning or peer support activities linked to [paste your text here].”

  2. Revision Process

“Create a revision checklist for student work based on [paste your text here].”

  3. High-Interest Reading Recommendations

“Suggest engaging, age-appropriate texts connected to the themes or concepts of [paste your text here].”

  4. Social-Emotional Learning Guidance

“Integrate SEL strategies when teaching [paste your text here].”

G. Graphic & Creative Supports

  1. Graphic Organizers (Two Types)

“Create two graphic organizers for analyzing key ideas and relationships in [paste your text here].”

  2. DALL·E Image Creation

“Generate illustrations or visuals that represent the main ideas in [paste your text here] for [subject/topic] lessons.”

Module 3: Teach Me This Skill Prompt

Provide a detailed definition and explanation of the skill: [being a better behavior manager as a teacher in high school]. Break the skill down into clear, teachable components. Include guiding questions that learners can ask themselves to monitor and strengthen their use of the skill. Provide specific, illustrative examples demonstrating the skill in action in different contexts. Write for advanced high school or early college students who are developing higher-order thinking, analysis, and application skills.

Module 3: Classroom Data Diagnostic Prompt: This is the same four-stage classroom data prompt given above under “Analyzing Standardized Assessment Data to Pinpoint Skill Gaps”; reuse it verbatim for classroom data.

Module 5

Master Prompt: Quick Accommodations Spreadsheet

You are a special education expert trained in IEP compliance and accommodation planning. Please complete the following tasks using the uploaded IEP and the accompanying reference list of 134 classroom accommodations:

PART 1: Extract Preexisting Accommodations
• Scan the IEP thoroughly.
• Identify and list all accommodations, supports, or modifications that already appear in the IEP.
• Cross-reference these with the provided reference list of 134 accommodations.
• For each match, include:
  • Matched Accommodation (from the 134-item list)
  • IEP Source Text (quote or paraphrased section from the IEP)
  • Justification/Notes (explain why it matches)

PART 2: Suggest Additional Accommodations
• Based on the IEP’s documented student needs, goals, present levels, evaluations, and challenges, suggest any additional accommodations that are appropriate but not already listed in the IEP.
• Only suggest accommodations from the provided 134-item list.
• Do not suggest duplicates.
• For each suggested accommodation, include:
  • Suggested Accommodation
  • IEP-Based Rationale (specific issue or goal it supports)
  • Category (e.g., Environmental, Instructional, Testing, Behavioral)

Final Output Format
• Provide two separate tables:
  • Table 1: Existing IEP Accommodations (Cross-Referenced)
  • Table 2: Additional Suggested Accommodations (Based on Analysis)
• Both tables should be formatted for easy copy-paste into an Excel spreadsheet.
• Use clear headers and keep language professional and concise.

(Non-Standardized Test) Classroom Data Analysis Prompt: You are my ELA writing coach. Follow these instructions exactly and answer IN THIS CHAT ONLY as Markdown tables. Do NOT create files, JSON, links, or downloads.

WHAT YOU’LL DO
Sort problems into THREE LEVELS (use these exact category names, written in full words):
• Structural: thesis/claim clarity; organization & logical flow; paragraph unity/topic sentences; evidence selection & integration; analysis vs. summary balance; coherence/transitions; intro/conclusion effectiveness; prompt alignment.
• Sentence-Level: run-ons/comma splices; fragments; agreement (subject–verb/pronoun); punctuation basics; wordiness/redundancy; awkward/unclear phrasing; passive/nominalization overuse; sentence variety.
• Conceptual: misinterpretation of text; unsupported claims; missing counterargument/refutation; misused evidence; flawed reasoning/fallacy; theme/author’s purpose confusion; tone/register mismatch; audience awareness.

For every issue, include:
• A short quote from the draft (≤12 words).
• The location (paragraph number or line number if available).
• Confidence (High/Med/Low).
• In the Notes column, the category’s overall percentage across all drafts (so each entry has both local evidence and global context).

For each draft, count how many times each category appears. For ALL drafts together, compute:
• Drafts Affected = number of drafts with ≥1 issue in that category.
• % Drafts Affected = (Drafts Affected ÷ Total Drafts) × 100, rounded to 1 decimal.
• Total Instances = sum of all occurrences across drafts.
• % of All Issues = (Total Instances ÷ All Issues) × 100, rounded to 1 decimal.

Also compute OVERALL PERCENTAGES BY LEVEL:
• Level Total = sum of all category instances within that level.
• % of All Issues = (Level Total ÷ All Issues) × 100, rounded to 1 decimal.

When to suggest “What to Teach Next”: if any Level Share ≥40.0% OR % Drafts Affected ≥80.0%. Focus only on the top 2–3 categories (by Total Instances; tie-breaker = higher % Drafts Affected). Suggestions format: one-sentence mini-lesson focus; one-sentence quick practice idea.

HOW TO FORMAT YOUR ANSWER
• Per-Draft Reports (one table per draft). Columns: Level (Structural / Sentence-Level / Conceptual) | Category | Count | Evidence | Confidence | Notes (include global % of All Issues). If a level has no issues, write “None found.”
• Group Summary by Category (one table for all drafts). Columns: Level | Category | Drafts Affected (n/N) | % Drafts Affected | Total Instances | % of All Issues | Typical Pattern
• Overall Level Summary (Structural vs. Sentence-Level vs. Conceptual). Columns: Level | Drafts Affected (n/N) | % Drafts Affected | Level Total | % of All Issues
• Top 3 Priorities (highest % Drafts Affected → tie-breaker = Total Instances). For each: mini-lesson + practice idea.
• What to Teach Next (only if thresholds triggered). List triggered level(s) + top categories. For each: mini-lesson + practice idea.
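The rollup math in the prompt above is ordinary counting and percentages, so you can sanity-check what the model returns. A minimal Python sketch of the same calculations, using small hypothetical issue data (not from any real class set):

```python
from collections import Counter

# Hypothetical tagged issues: (draft_id, category). Category names are
# illustrative stand-ins for the full lists in the prompt.
LEVELS = {
    "Structural": {"thesis clarity", "organization"},
    "Sentence-Level": {"run-ons", "fragments"},
    "Conceptual": {"unsupported claims"},
}
issues = [
    (1, "run-ons"), (1, "thesis clarity"), (2, "run-ons"),
    (2, "fragments"), (3, "run-ons"), (3, "unsupported claims"),
]
total_drafts = 3
all_issues = len(issues)

# Per-category rollup, exactly as the prompt defines it.
by_cat = Counter(cat for _, cat in issues)
drafts_affected = {cat: len({d for d, c in issues if c == cat}) for cat in by_cat}
for cat, n in by_cat.items():
    pct_drafts = round(drafts_affected[cat] / total_drafts * 100, 1)
    pct_issues = round(n / all_issues * 100, 1)
    print(cat, drafts_affected[cat], pct_drafts, n, pct_issues)

# Level totals and the "What to Teach Next" trigger thresholds.
for level, cats in LEVELS.items():
    level_total = sum(by_cat[c] for c in cats)
    level_share = round(level_total / all_issues * 100, 1)
    level_drafts = len({d for d, c in issues if c in cats})
    pct_level_drafts = round(level_drafts / total_drafts * 100, 1)
    triggered = level_share >= 40.0 or pct_level_drafts >= 80.0
    print(level, level_total, level_share, pct_level_drafts, triggered)
```

With this toy data, Sentence-Level carries 4 of 6 issues (66.7%) across all three drafts, so it trips both thresholds; that is the kind of check worth running by hand when the model's percentages look off.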

Module 7: How to Conduct a Self-Assessment of Your Teaching Using a Transcript

Prompt:
"You are an expert instructional evaluator certified in the 2013 Danielson Framework (NYC DOE adaptation) and trained in data-driven coaching. Score the teacher’s practice solely against the rubric’s four Domains and 22 Components and against the following custom lenses: Instructional Clarity, Questioning Techniques, Differentiation & Accommodations, Student Engagement, and Activity Appropriateness. Use only transcript evidence—quoted verbatim and timestamped—to justify every rating. Remain objective, jargon-free, and relentlessly improvement-oriented.

0 | Ingest & Setup
Read the full transcript and any artifacts supplied.
Skim the Danielson rubric (embedded in system context).
Create an empty evidence grid with all 22 Components down the left and the five custom focus areas across the top (you will fill this in later).

1 | Evidence Collection
For every Component & focus area:
Extract all relevant quotations or observed actions.
Timestamp each entry.
If no evidence, mark '🔍 Not observed.'

Output format example:

2b – Culture for Learning

  • 'Keep experimenting—mistakes mean we’re learning!' (09:22)

  • Student: 'I rewrote mine because I wanted it clearer.' (17:03)

2 | Rubric Ratings & Analysis
For each Domain, build a table:
Component
Rating (I/D/E/HE)
Rationale (2–3 sentences citing evidence)
Student-Learning Impact
Then write a domain-level paragraph synthesizing key trends.

3 | Custom Focus-Area Deep Dive
Produce a narrative section for each focus area:
What went well (cite evidence).
What limited effectiveness (cite evidence).
Concrete, high-leverage improvement moves (research-based, classroom-ready).

4 | Action Plan & Resources
Top 3 Strengths (Component code + why it matters).
Top 3 Growth Priorities (Component code + specific strategy).
For each priority, link one vetted article/video/tool (< 150 chars URL).
Suggested timeline & observable success indicators.

5 | Self-Reflection Prompts
Pose three coach-style questions that press the teacher to reflect on beliefs, evidence, and next steps.

6 | Snapshot Scorecard
Provide a one-page visual (ASCII is fine) summarizing Domain ratings and focus-area grades so progress can be tracked over multiple observations.

Notes for ChatGPT:
Adhere strictly to Danielson language when assigning levels.
Never guess—if evidence is missing, mark it as such and advise how to capture it next time.
Keep the tone supportive but candid; frame all critique as pathways to higher impact."

Module 3: Student Snapshot Chart Creator Prompt

I am uploading a PII-scrubbed, anonymized Individualized Education Program (IEP). Review the document thoroughly and distill the information into a structured, concise reference table suitable for teachers and instructional planning. Organize the extracted information clearly into the following sections, each including targeted, actionable recommendations:
• Strengths (highlight areas where the student excels)
• Areas of Need (specify particular skills or areas requiring support)
• Accommodations and Modifications (clearly match each with relevant source text from the IEP and brief justifications)
• Assessment and Evaluation Modifications (include specifics such as testing environments and time allowances)
• Behavioral or Social-Emotional Considerations (identify any behaviors or emotional factors impacting learning)
• Learning Goals (extract and summarize measurable objectives)
• Communication and Collaboration (highlight communication strategies, particularly involving parents or caregivers)
• Examples of Successful Strategies (list effective practices noted from prior experience or documented successes)
• Sensory and Environmental Considerations (mention any physical needs or sensitivities, like allergies or sensory tools)
The final table should serve as a practical, quick-reference guide aligned to each student's unique profile, directly informing lesson planning, classroom management, and instructional differentiation.

Master Prompt: IEP Drafter

This is a sequence of prompts to be entered separately, step by step:

First, use this prompt once you upload the documents: “Please confirm you have reviewed the uploaded documents labeled ‘Teacher Reports,’ ‘Parent Questionnaires,’ ‘Test Evaluations,’ and ‘IEP Meeting Notes.’”

Then use the Best Prompts for Each IEP Section one by one for maximum output by ChatGPT:

SECTION 1: Present Levels of Performance (PLOP)

Evaluation Results (No quotes needed)

Best Prompt: “Summarize the uploaded test assessments, evaluations, and transcripts, clearly highlighting key educational implications and specific areas needing targeted support.”

Academic and Functional Performance

Best Prompt: “Using the uploaded teacher reports and IEP meeting notes, draft a detailed narrative describing the student's academic and functional performance. Begin each academic area with a direct quote from stakeholders, expanding clearly and cohesively.”

SECTION 2: Student Strengths, Preferences, Interests

Best Prompt: “Review the uploaded teacher and parent questionnaires along with meeting notes. Identify and narrate the student's key strengths, preferences, and interests, clearly incorporating direct quotes from stakeholders.”

SECTION 3: Social Development

Best Prompt: “Draft a comprehensive narrative regarding the student's social development. Integrate relevant quotes from stakeholder feedback in the meeting notes and questionnaires, highlighting both strengths and specific areas needing support, including recommended strategies or interventions.”

SECTION 4: Physical Development

Best Prompt: “Using physical education reports, evaluations, and meeting notes, compose a clear narrative detailing the student's physical development. Include stakeholder quotes to illustrate progress, challenges, and educational impacts.”

SECTION 5: Proposing Management Needs

Best Prompt: “Based on all uploaded documentation, recommend detailed management strategies and accommodations tailored specifically to the student's academic, social, and physical needs. Support each recommendation explicitly with stakeholder quotes and relevant documentation. Additionally, suggest innovative interventions by referencing the uploaded accommodations list.”

 

Master Academic‑Integrity Review Prompt (copy‑paste this into ChatGPT)

Purpose: Generate a careful, evidence‑based integrity report. Avoid “AI detector” scores. Use textual forensics, citation checks, web matching (if available), and comparison to prior work. Treat signals as leads, not verdicts.

Paste and fill the brackets:

Context

Course/level: [ ]

Assignment prompt (verbatim): [ ]

Learning goals/skills assessed: [ ]

Allowed supports (e.g., grammar checker, peer feedback): [ ]

Disallowed supports (e.g., generative drafting): [ ]

Due date/time: [ ]

Rubric summary or paste full text: [ ]

Student info

Student name: [ ]

Baseline writing samples (2–5 short first drafts): [ ] [ ] [ ]

Notes on typical style/ability: [ ]

Submission under review

Full text (paste): [ ]

Claimed sources/bibliography: [ ]

Document metadata or version history (if available): [ ]

Figures/data/code/raw files (if any): [ ]

Analyze in sections:

A) Quick triage (5‑minute sniff test)

Summarize the thesis/claim.

List 6–10 voice features (sentence length, idioms, hedging, connectors).

Mark abrupt shifts by paragraph with brief explanations.

B) Stylometry vs. baseline

Compare sentence‑length spread, function‑word patterns, discourse markers, error signatures.

Quote 6–10 distinctive n‑grams from the baseline and note whether they appear here.

Conclude “Voice alignment: low / medium / high,” with evidence.

C) Citation & source audit

Extract every quotation/citation. Build a short list for each: the claim/quote; the source as cited; whether it exists; page/URL/DOI; topical relevance; red flags (wrong year/pages, nonexistent journal).

Spot‑check 3–5 key claims for page‑accurate quotes.

Conclude “Citation integrity: sound / mixed / compromised,” with examples.

D) Web match scan (verbatim & near‑verbatim)

Pull 20 distinctive 6–12‑word phrases and check for direct or near‑direct matches online.

Report any overlaps with URLs/titles and approximate overlap.

E) Intra‑class / cross‑document similarity (optional)

If you have peer submissions: cluster by rare‑phrase overlap, outline shape, and idiosyncratic errors; list suspicious pairs/groups.

F) Content plausibility & context fit

Check whether claims need specific page/scene, apparatus, datasets, or local class events—and whether the text actually shows those anchors.

Flag anachronisms, spelling/register flips, unexplained terminology jumps, and mismatched figure captions.

G) Oral verification plan (micro‑viva)

Draft 6–10 targeted questions that force the author to explain choices, sources, and revisions. Include 1–2 seeded errors from the submission and ask the student to find/fix them. Provide the expected short answers an authentic author would know.

H) Evidence summary & next steps

Roll up signals into “Low / Moderate / High concern,” citing quotes, URLs, and page numbers.

Recommend fair next steps (request drafts/source PDFs, short viva, compare to baseline).

Produce a neutral, copy‑ready note for the LMS documenting the process.

Add a closing summary list organized by assignment type with: Assignment Type | What Students Do | Why It’s Problematic | Detection Tips | Counter‑Cheating Plan.

Targeted Strategies & Ready Prompts for 20 Common Patterns

Below, each item includes quick detection tips, counter‑cheating moves, and a prompt to drop into your analysis (use alongside the master prompt).

1) Full essay generated with AI

Detect: Generic scaffolds, polished but shallow analysis, voice far from baseline.

Counter: Required process artifacts (outline → annotated sources → drafts with changes), micro‑viva, in‑class sample writing.

Prompt: “List all generic, stock phrasings and over‑smooth sentences (quote them). Contrast with the student’s baseline idioms and error habits. Identify 8–12 sentences likely beyond their voice and rewrite them in the baseline style to test plausibility.”

2) Paragraph‑by‑paragraph AI help

Detect: Voice/formatting swings between paragraphs.

Counter: Color‑coded revision passes, short paragraph rationales (“What did you change and why?”).

Prompt: “Rate each paragraph’s ‘voice distance’ from baseline on a 0–5 scale with a 1‑line explanation. Output a brief heatmap list with the biggest shifts and examples.”

3) AI summaries of readings

Detect: No distinctive lines or scene anchors; fuzzy on minor details.

Counter: Quote‑anchored prompts; 2–3 minute in‑class line analysis spot checks.

Prompt: “Extract 8–12 claims that require specific textual support. For each, propose the exact passage (book/chapter/page) that would back it. Flag claims with no plausible anchor.”

4) Math homework solved by AI

Detect: Perfect steps; weak transfer to novel problems.

Counter: Require scratch‑work photos; isomorphic quiz items with surface changes.

Prompt: “Create 3 isomorphic problems and explain whether the presented method still works. List 5 micro‑viva questions to confirm authorship of the shown method.”

5) Foreign‑language assignments via AI

Detect: Register/grammar noticeably above level; unnatural idiom use.

Counter: In‑class quickwrites and audio responses; restricted word banks.

Prompt: “Assess level features (CEFR). List 10 constructions that exceed the student’s baseline level and provide simpler paraphrases they would likely produce.”

6) Lab report from templates

Detect: Methods/results generic or mismatched to apparatus; no raw data.

Counter: Require setup photos, timestamps, raw files, and error analysis tied to equipment.

Prompt: “Check unit ranges, noise/variance plausibility, and apparatus alignment. Identify 5–10 data plausibility checks and flag any impossible precision or templated patterns.”

7) Fabricated/misaligned citations

Detect: Nonexistent journals, bad page ranges/DOIs, orphan quotes.

Counter: Annotated bibliography with quote and page; upload PDFs.

Prompt: “For each citation, confirm existence, correct form, and relevance to the specific claim it supports. List fabrications or mismatches and how to fix them.”

8) Using AI for multiple‑choice “hints”

Detect: Homework vs. in‑class score gap; identical switch patterns across peers.

Counter: Require 1‑sentence justifications; two‑stage exams.

Prompt: “Evaluate answer choices with their justifications. Flag items where the answer is correct but the justification shows superficial or incorrect reasoning.”

9) Creative writing generated by AI

Detect: Clichés, emotional flatness, sudden metrical sophistication.

Counter: Personalized prompts; process diary; brief read‑aloud plus craft Q&A.

Prompt: “Identify 8 craft choices (POV, tense, imagery, rhythm). For each, write a ‘why this, not that’ question and a plausible author rationale; flag spots where rationales are unlikely.”

10) AI‑written discussion posts

Detect: Polished yet impersonal; mirrors instructor phrasing; same structure across students.

Counter: Require references to classmates’ points; time‑windowed posting; occasional audio replies.

Prompt: “Check for concrete references to peers or specific class moments. Score specificity 0–5 and produce 5 follow‑ups that test actual engagement.”

11) AI personal statements/college essays

Detect: Template arcs, generic adversity tropes; voice misaligned with school work.

Counter: Interview‑based outlines; timeline artifacts (drafts/emails).

Prompt: “Extract 12 memory‑specific details (names, dates, settings). Create viva questions to test recall and note what corroborating artifacts would verify each.”

12) Book reviews without reading the book

Detect: No scene/page anchors; vague theme talk.

Counter: Quote‑anchored claims; random in‑class text checks.

Prompt: “List 10 claims that must be anchored to pages/scenes. Request the exact quotation for each and explain why paraphrase alone is insufficient.”

13) AI‑generated speeches

Detect: Over‑structured rhetoric and cadence; delivery doesn’t match text.

Counter: Submit outline and speaker notes; live Q&A.

Prompt: “Analyze ethos/pathos/logos and cadence. Mark lines likely beyond the student’s voice. Draft 6 live Q&A questions keyed to those lines.”

14) Group projects with AI “contributions”

Detect: Missing voice in drafts; slides misaligned with the speaker’s talk.

Counter: Contribution logs; commit histories; rotating stand‑ups.

Prompt: “Infer sub‑components and likely authorship signals from the final artifact and any version history. List discrepancies and targeted viva questions per member.”

15) Paraphrasing tools to evade plagiarism

Detect: Awkward synonym swaps, tense/voice drift, meaning distortion.

Counter: Side‑by‑side paraphrase with original and rationale; direct citation training.

Prompt: “Compare original vs. paraphrase for semantic fidelity, preserved technical terms, presence of citation, suspicious synonym swaps, and any ≥5‑word near‑verbatim strings. Conclude with a pass/fail recommendation and rationale.”

16) Fake interview transcripts

Detect: Uniform, too‑clean answers; no interruptions or fillers; no provenance.

Counter: Consent/contact info; audio snippet; timestamped notes.

Prompt: “Assess conversational features (interruptions, repairs, off‑topic drift). Propose 5 provenance checks (contact/email/audio) and list inconsistencies to probe.”

17) AI‑generated cheat sheets for closed‑book tests

Detect: Topic‑specific score spikes; similar handwritten formats among peers.

Counter: Open‑note but conceptual exams; item pools and versioning.

Prompt: “Analyze topic‑wise performance vs. prior history to find improbable jumps. Propose 5 concept‑variant items to re‑test understanding.”

18) Auto‑completed worksheets

Detect: Overlong, perfectly structured answers; identical phrasing across a class.

Counter: Randomized versions; explain‑your‑step fields; brief oral checks.

Prompt: “Identify repeated answer templates (phrasing/order/sentence frames). Cluster students with identical templates and list likely common sources to check.”

19) AI‑streamlined reflections/journals

Detect: Generic emotion; no sensory detail; identical structure across entries.

Counter: Prompts with concrete anchors (date/place/names); occasional in‑class timed entries.

Prompt: “Score each entry for concrete detail density (people, places, times, senses). Flag entries below threshold and generate 6 authenticity probes per entry.”

20) Scripted debates using AI talking points

Detect: Over‑rehearsed delivery; brittle under cross‑examination; identical rebuttal shells.

Counter: Surprise cross‑questions; evidence cards with citations; require prep notes.

Prompt: “Create 10 adversarial, evidence‑bound questions tailored to this speech and a rubric distinguishing responsive reasoning from pre‑scripted recitation.”

Batch Utilities (drop‑in when needed)

Cross‑student similarity clustering (paste 5–50 submissions):
“Normalize texts (remove headings/citations). Extract rare 6–12‑word phrases, outline shapes, and idiosyncratic errors. Build a similarity matrix and list the top 10 overlaps with quoted strings, percent overlap, and a hypothesis about shared sources or collaboration.”
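The clustering utility above asks the model to compare submissions by shared rare phrases. If you prefer a deterministic first pass before involving ChatGPT, the core idea can be sketched in a few lines of Python: extract word n-grams (longer n-grams are rarer, so overlaps are stronger leads) and rank student pairs by Jaccard similarity. This is a minimal sketch, not a plagiarism verdict; as the prompt says, treat hits as leads.

```python
from itertools import combinations

def ngrams(text: str, n: int = 8) -> set[str]:
    """Word n-grams of the text; 8-word runs rarely repeat by chance."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str, n: int = 8) -> float:
    """Jaccard similarity of the two submissions' n-gram sets (0.0–1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

def top_pairs(submissions: dict[str, str], n: int = 8, k: int = 10):
    """Rank student pairs by shared rare phrases; highest overlap first."""
    scores = {
        (s1, s2): overlap(submissions[s1], submissions[s2], n)
        for s1, s2 in combinations(submissions, 2)
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```

High-scoring pairs from `top_pairs` are exactly the "top overlaps with quoted strings" the prompt asks the model to report, and you can paste just those pairs into the chat for closer analysis.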

Internet match scan (direct‑copy leads):
“Extract 25 distinctive phrases. Search exact and fuzzy matches online. Report overlaps with links/titles and the exact matched strings; note approximate overlap.”

Citation audit (entire bibliography):
“Resolve each reference (DOI/URL/journal/book). Confirm year/volume/pages and topic relevance to the specific claim it supports. List issues as: reference → problem → why it matters → how to fix, with verification links where possible.”

Voice‑drift vs. baseline (single student):
“Compare sentence‑length variance, function‑word ratio, idiom density, hedging markers, connectors, and punctuation habits. Highlight the 8 sentences most unlike the baseline and rewrite them in the student’s typical style to test plausibility.”
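Two of the baseline features named in the voice-drift prompt (sentence-length variance and function-word ratio) are easy to compute yourself as a rough cross-check on the model's stylometry claims. A minimal sketch, using a tiny illustrative function-word set rather than a full stop-word list:

```python
import re
from statistics import pvariance

# Illustrative subset; a real comparison would use a fuller function-word list.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "is", "it"}

def voice_features(text: str) -> dict[str, float]:
    """Crude stylometric fingerprint of one writing sample."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    return {
        "sentence_length_variance": pvariance(lengths) if len(lengths) > 1 else 0.0,
        "function_word_ratio": (
            sum(w.strip(".,;:!?") in FUNCTION_WORDS for w in words) / max(len(words), 1)
        ),
    }
```

Comparing these numbers for a baseline sample and the submission under review gives you independent evidence to weigh against the model's "voice alignment: low / medium / high" conclusion.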

Lab/data forensics:
“Check units, ranges, replication consistency, and expected noise. Flag templated data or impossible precision. Suggest five follow‑up artifacts to request (photos, raw files with timestamps, notebook exports).”

Code/CS assignments:
“Analyze structure, naming patterns, comment style, and dependency choices. Identify likely borrowed segments. Provide 5 viva questions and a tiny live modification task that tests true authorship.”

Master Academic‑Integrity Review Prompt (copy‑paste this into ChatGPT)

Purpose: Generate a careful, evidence‑based integrity report. Avoid “AI detector” scores. Use textual forensics, citation checks, web matching (if available), and comparison to prior work. Treat signals as leads, not verdicts.

Paste and fill the brackets:

Context

Course/level: [ ]

Assignment prompt (verbatim): [ ]

Learning goals/skills assessed: [ ]

Allowed supports (e.g., grammar checker, peer feedback): [ ]

Disallowed supports (e.g., generative drafting): [ ]

Due date/time: [ ]

Rubric summary (or paste the full text): [ ]

Student info

Student name: [ ]

Baseline writing samples (2–5 short first drafts): [ ] [ ] [ ]

Notes on typical style/ability: [ ]

Submission under review

Full text (paste): [ ]

Claimed sources/bibliography: [ ]

Document metadata or version history (if available): [ ]

Figures/data/code/raw files (if any): [ ]

Analyze in sections:

A) Quick triage (5‑minute sniff test)

Summarize the thesis/claim.

List 6–10 voice features (sentence length, idioms, hedging, connectors).

Mark abrupt shifts by paragraph with brief explanations.

B) Stylometry vs. baseline

Compare sentence‑length spread, function‑word patterns, discourse markers, error signatures.

Quote 6–10 distinctive n‑grams from the baseline and note whether they appear here.

Conclude “Voice alignment: low / medium / high,” with evidence.

C) Citation & source audit

Extract every quotation/citation. Build a short list for each: the claim/quote; the source as cited; whether it exists; page/URL/DOI; topical relevance; red flags (wrong year/pages, nonexistent journal).

Spot‑check 3–5 key claims for page‑accurate quotes.

Conclude “Citation integrity: sound / mixed / compromised,” with examples.

D) Web match scan (verbatim & near‑verbatim)

Pull 20 distinctive 6–12‑word phrases and check for direct or near‑direct matches online.

Report any overlaps with URLs/titles and approximate overlap.

E) Intra‑class / cross‑document similarity (optional)

If you have peer submissions: cluster by rare‑phrase overlap, outline shape, and idiosyncratic errors; list suspicious pairs/groups.

F) Content plausibility & context fit

Check whether claims need specific page/scene, apparatus, datasets, or local class events—and whether the text actually shows those anchors.

Flag anachronisms, spelling/register flips, unexplained terminology jumps, and mismatched figure captions.

G) Oral verification plan (micro‑viva)

Draft 6–10 targeted questions that force the author to explain choices, sources, and revisions. Include 1–2 seeded errors from the submission and ask the student to find/fix them. Provide the expected short answers an authentic author would know.

H) Evidence summary & next steps

Roll up signals into “Low / Moderate / High concern,” citing quotes, URLs, and page numbers.

Recommend fair next steps (request drafts/source PDFs, short viva, compare to baseline).

Produce a neutral, copy‑ready note for the LMS documenting the process.

Add a closing summary table organized by assignment type, with columns: Assignment Type | What Students Do | Why It’s Problematic | Detection Tips | Counter‑Cheating Plan.


Module 3: Applying the student snapshots to one of your lesson plans:

I've uploaded a detailed student chart outlining specific management needs, accommodations, and strengths. Using this uploaded student information, create a fully annotated and differentiated version of the provided lesson plan. Precisely follow the structure and format of the attached lesson plan, clearly integrating differentiated student supports at each lesson phase (such as Introduction, Direct Instruction, Group/Partner Activities, Independent Practice, Discussion, Homework, Closure, Assessment, etc.). For each lesson phase, explicitly indicate:

• Student Name: Clearly state which student requires accommodations.

• Adjustment Needed: Identify the specific accommodations (e.g., extended time, graphic organizer, read-aloud support, movement break, simplified instructions, visual aid).

• Implementation: Briefly describe concrete teacher actions or strategies to effectively deliver or manage these adjustments in real time during class.

Also include:

• Differentiated, leveled questions during the Discussion or questioning phase, assigning questions to students based on their documented strengths and needs.

• A concise quick-reference checklist at the end summarizing all accommodations by student for easy reference during instruction.

Ensure adherence to the provided lesson plan format, seamlessly embedding these supports to produce a clear, instruction-ready annotated lesson plan.

Module 6 IEP Drafter Master Prompt

Best Prompts:

Step 1 Prompt: "Please confirm you've reviewed the uploaded anonymized IEP document thoroughly."

Step 2 Prompt: “I am uploading a student’s anonymized Individualized Education Program (IEP). Please thoroughly analyze this document. Begin by summarizing key information including the student’s strengths, areas of need, accommodations, measurable annual goals, specially designed instruction, transition planning, and parent/student statements. Include direct quotations from the uploaded document as explicit evidence for each area identified.”

Step 3 Prompt:

Task:
Using the detailed compliance rubric provided below, explicitly evaluate the uploaded Individualized Education Program (IEP).

Instructions for Evaluation:

• Address four categories at a time to ensure a detailed, accurate, and comprehensive analysis without errors or hallucinations.

• For each rubric category, clearly state 'Present' or 'Not Present'.

• Provide at least two explicit quotations as evidence for each category, directly citing:

  – The name and role of the person (teacher, family member, student, therapist, etc.) quoted within the IEP, or

  – The exact section title of the IEP (e.g., "Present Levels of Performance," "Social Development," "Management Needs," "Parent Statements," etc.).

• Briefly explain your reasoning based directly on the rubric criteria.

Compliance Rubric Categories (Analyze in groups of four):

Group 1:

• Language: Check for current, objective, observable, jargon-free, student-specific language.

• Student Voice: Evaluate presence and clarity of student quotes/statements about their experiences, vision, and priorities.

• Parent/Family Voice: Review presence and clarity of family quotes/statements about the student’s vision and priorities.

• Access to Curriculum: Confirm identification of clear, observable skill gaps affecting curriculum access.

Group 2:

• Strengths and Learning Style: Confirm strengths/learning style are explicitly stated, observable, and connected directly to instructional goals.

• Specially Designed Instruction: Evaluate clearly documented instruction addressing identified skill gaps.

• Effect of Disability: Confirm clarity in explaining how the disability specifically impacts curriculum access.

• Accommodations: Verify accommodations explicitly support curriculum access.

Group 3:

• Recommended Services: Ensure service recommendations align explicitly with documented student needs.

• Collective Responsibility: Assess evidence of shared responsibility and services aligned with general curriculum progress.

• Supports for School Personnel: Identify clearly documented supports or training for school personnel.

• Social Development and Behavior: Check for clear, measurable descriptions of social/emotional needs and strengths.

Group 4:

• Behavioral Concerns: Evaluate clarity in documenting behavioral concerns, impact, strategies, and goals.

• Transition Activities (If Applicable): Confirm clear, relevant transition activities aligned with the student’s post-secondary goals.

• Transition Services: Ensure services are clearly linked to transition activities and aligned with future goals.

• Alignment of Measurable Annual Goals: Confirm goals explicitly reflect needs identified in the Present Levels of Performance (PLOP).

Group 5:

• Alignment Across PLOP, Goals, and Services: Assess consistency and alignment across all sections.

• Annual Goals Checklist: Verify goals are specific, measurable, observable, and clearly reflect stated student/family priorities.