0) Purpose and scope
Wharton School uses AI to personalize learning, strengthen mastery, and expand support for K–12 learners studying online and often independently. This policy applies to:
· all students (Grades 1–12), parents/guardians, faculty, staff, contractors
· the LMS (including Moodle-based courseware), school devices/accounts, and any school-approved AI tools
This AI policy suite does not replace existing student/parent handbook rules on participation, academic dishonesty, or student records; it extends them to AI-enabled learning and assessment.
1) Wharton’s AI Operating Model for Remote K–12
1.1 AI supports learning; teachers remain accountable for education
Wharton’s program is designed for online learning with strong teacher connectivity and college-preparatory expectations.
In that context:
· AI may serve as a “learning companion” (practice, explanation, feedback, study planning).
· Teachers remain responsible for instruction, grading decisions, academic support, and verifying mastery.
· Parents/guardians are partners, especially for younger students and home learning routines.
AI in Personalized, Competency-Based Learning
Wharton’s AI use aligns with Nevada’s Personalized Competency-Based Learning (PCBL) framework.
· Students advance based on demonstrated mastery, not time-on-task alone.
· AI may support practice, feedback, and reflection, but mastery validation requires independent evidence.
· Validation checks, oral explanations, and applied tasks confirm that learning is authentic and transferable.
1.2 Self-paced acceleration with guardrails
Wharton allows flexible pacing, but learners must meet participation expectations and minimum engagement requirements for semester courses.
AI tools may help students accelerate responsibly, but they may not lower integrity or evidence-of-learning standards.
1.3 “Focused mastery blocks” in the online day
To mirror an efficient, mastery-based model for remote learners:
· Students complete daily “mastery blocks” (AI-guided practice + teacher-checked work)
· Then shift to projects, writing, labs/simulations, discussion work, and enrichment
This supports sustained learning while maintaining required participation and submission expectations.
2) Approved AI Uses in Teaching and Learning (K–12)
2.1 Default-allowed uses (learning support)
Students may use school-approved AI to:
· get step-by-step explanations (age-appropriate, scaffolded)
· generate extra practice questions, flashcards, review quizzes
· receive formative feedback on drafts (clarity, grammar, organization)
· translate/clarify directions (especially for multilingual learners)
· plan study time and break tasks into steps
2.2 Allowed with conditions (higher risk or higher impact)
These uses require teacher direction and/or explicit course permission:
· AI-assisted research summaries (must verify sources)
· AI feedback aligned to a rubric (teacher reviews samples)
· AI-generated study guides for unit exams (teacher-provided scope)
2.3 Prohibited uses (not allowed)
· Using AI to complete graded work that is meant to be the student’s independent performance
· Asking AI for answers during closed-tool tests/exams
· Using AI to fabricate citations, quotes, lab results, data, or sources
· Using unapproved external AI tools with student information or course materials
3) Wharton AI Learning Companion Policy (Chatbot inside the LMS)
3.1 What the AI Learning Companion is
A school-approved chatbot embedded in the learning environment that can:
· tutor 1:1 with explanations and practice
· guide students through “next steps”
· help students review missed skills and improve focus
· point students back to Wharton course materials and teacher instructions
3.2 Age-based access tiers (K–12 safety)
Tier A: Grades 1–5
· restricted prompts (curriculum-only)
· no open web browsing
· no private messaging
· parent/guardian visibility into usage summaries
Tier B: Grades 6–8
· broader tutoring + study support
· stricter integrity guardrails for graded work
· source-checking prompts enabled by default
Tier C: Grades 9–12
· full tutoring + writing support + study planning
· explicit disclosure rules for assignments
· stronger assessment restrictions to protect credit integrity
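The three tiers above can be sketched as a simple configuration table. This is an illustrative sketch only; the names (`TIER_RULES`, the feature flags, `tier_for_grade`) are assumptions for this example, not part of any actual Wharton LMS configuration.

```python
# Illustrative sketch of the grade-band access tiers described above.
# Tier names, feature flags, and the lookup function are hypothetical
# examples, not a prescribed Wharton system.

TIER_RULES = {
    "A": {  # Grades 1-5
        "grades": range(1, 6),
        "curriculum_only_prompts": True,
        "web_browsing": False,
        "private_messaging": False,
        "parent_usage_summaries": True,
    },
    "B": {  # Grades 6-8
        "grades": range(6, 9),
        "curriculum_only_prompts": False,
        "web_browsing": False,
        "private_messaging": False,
        "source_checking_default": True,
    },
    "C": {  # Grades 9-12
        "grades": range(9, 13),
        "curriculum_only_prompts": False,
        "web_browsing": True,
        "private_messaging": False,
        "disclosure_required": True,
    },
}

def tier_for_grade(grade: int) -> str:
    """Return the access tier ('A', 'B', or 'C') for a grade level."""
    for tier, rules in TIER_RULES.items():
        if grade in rules["grades"]:
            return tier
    raise ValueError(f"No tier defined for grade {grade}")
```

Keeping the tier rules in one table like this makes it easy for the governance team to audit and update guardrails in a single place.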
3.3 The companion must NOT
· claim to be a teacher, counselor, or human
· provide instructions for cheating or bypassing school requirements
· request sensitive personal info
· give unsafe advice (self-harm, violence, illegal acts); must escalate to human support when safety is involved
3.4 “Ask a Human” escalation
The chatbot must include a one-click pathway to:
· teacher/TA support
· counseling or student services (where applicable)
· technical support
This aligns with the school’s expectation of student connection and support in an online environment.
4) Academic Integrity and Assessment Rules for AI (Remote + Independent Study)
Wharton’s handbook defines academic dishonesty to include cheating, plagiarism, and fraudulent or deceptive attempts to obtain credit.
AI-related integrity rules apply those definitions to modern tools.
4.1 Assessment types and AI permissions
A. Practice (AI encouraged)
· ungraded practice, self-check quizzes, draft feedback
B. Coursework (AI limited by teacher directions)
· students may use AI for help only within assignment rules
· students must still submit work progressively for teacher review (no “one-shot” final dumps)
C. Validation (AI prohibited unless explicitly allowed)
· unit tests, midterms, finals, proctored exams, mastery checks intended to verify independent competence
4.2 Required “AI Use Disclosure” (simple, K–12-friendly)
For any assignment where AI use is allowed and plays a meaningful role in the work:
· Grades 6–12: students include an “AI Use Note”
· Grades 1–5: parent/guardian or teacher may help record the note if needed
AI Use Note template
· Tool used:
· What I used it for:
· What I changed/made myself:
· How I checked it was correct:
4.3 Identity and authorship checks (designed for remote learning)
Because students work online and independently, Wharton will use multiple evidence sources:
· short oral check-ins (video/phone), discussion posts, drafts/version history
· randomized “explain your thinking” prompts
· teacher review of progress over time
These are consistent with Wharton’s expectations for regular participation and individual work.
4.4 Violations and outcomes
Suspected violations (including AI misuse) may result in failure of the assignment/course and further discipline, consistent with Wharton policy.
Decisions must be based on human review with an opportunity for student/parent response.
5) Student Data Privacy Policy for AI (Minors + Families)
Alignment with Nevada STELLAR Security Principles
In accordance with Nevada’s STELLAR Pathway, Wharton prioritizes data security, privacy, and cybersecurity when deploying AI tools for minors. AI may enhance learning, but humans remain responsible for protecting students from data misuse, bias, or harm.
Wharton maintains student records consistent with FERPA and provides rights to inspect/review records.
AI tools must operate within those expectations.
5.1 Data minimization (collect only what we need)
AI systems may process:
· learning progress (mastery signals, quiz outcomes)
· course engagement needed for academic support
· student chat prompts to the AI companion (within limits)
AI systems must not collect:
· biometric data (face, voiceprints), emotion detection, covert surveillance
· unnecessary sensitive personal details
5.2 Parent/guardian rights and transparency
Parents/guardians (and eligible students, as applicable) will receive clear notice:
· what AI features exist, what data they use, why they’re used
· whether AI interactions are logged and how long they are retained
· how to request access or corrections consistent with student record rights
5.3 Directory information controls (and “lockfile” option)
Wharton identifies directory information categories and allows families to request nondisclosure (“Privacy Act Lockfile”).
AI tools must honor directory-information restrictions and must not expose student information through prompts, outputs, or logs.
5.4 Retention (recommended defaults)
· AI chat logs: retain short-term for support and safety (e.g., 30–90 days), unless needed for academic integrity review
· De-identified analytics: may be retained longer to improve instruction quality (without identifying students)
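The retention defaults above could be enforced with a small purge routine. This is a minimal sketch under stated assumptions: the 90-day window, the record fields (`timestamp`, `integrity_hold`), and the function name are illustrative, not an actual Wharton implementation.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the retention defaults above: drop identifiable
# chat logs after a configurable window unless a record is flagged for an
# academic-integrity review. Field names and the 90-day default are
# illustrative assumptions.

CHAT_LOG_RETENTION_DAYS = 90  # within the suggested 30-90 day range

def purge_expired_logs(logs: list[dict], now: datetime) -> list[dict]:
    """Return only the chat-log records that should be kept."""
    cutoff = now - timedelta(days=CHAT_LOG_RETENTION_DAYS)
    return [
        log for log in logs
        if log["timestamp"] >= cutoff or log.get("integrity_hold", False)
    ]
```

An explicit `integrity_hold` flag keeps the exception (records needed for an integrity review) visible and auditable rather than buried in ad hoc deletion logic.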
5.5 Vendor/third-party AI requirements (contractual)
Any AI provider must agree:
· no training on Wharton student data by default
· no sale or sharing of student data
· encryption in transit/at rest, access controls, breach notification
· subprocessor disclosure and approval
6) Responsible AI and Student Well-Being (Ethics)
AI use at Wharton is evaluated not by efficiency alone, but by its impact on student understanding, equity, engagement, and long-term achievement.
6.1 Fairness and bias checks
Wharton will monitor whether AI:
· provides unequal support across learner groups
· gives systematically different feedback tone or difficulty
If issues appear: adjust prompts, rules, content sequencing, and add human review.
AI Literacy and Student Empowerment
Consistent with Nevada guidance, Wharton teaches students not only how to use AI, but how to:
· evaluate accuracy and bias
· understand AI limitations and hallucinations
· decide when AI use is appropriate or inappropriate
· retain ownership of their learning and thinking
AI is framed as a support for curiosity, problem-solving, and reflection, not a shortcut to completed work.
6.2 Explainability and contestability
Students and families may ask:
· “Why did AI recommend this lesson/practice?”
They may also contest:
· incorrect mastery placement, unfair flags, or harmful outputs
Humans make final decisions.
6.3 Avoiding “surveillance school”
Wharton does not use AI for biometric monitoring, emotion detection, automated behavior scoring, or surveillance-based discipline. Engagement analytics are used solely to support learning and student well-being, never as the sole basis for academic or disciplinary decisions.
6.4 Safety escalation
If AI detects self-harm content, threats, or abuse indicators:
· it must stop normal tutoring and direct the student to immediate help
· it must alert designated school safety personnel following established safety procedures (with minimal disclosure)
6.5 Transparency in AI Use
Wharton commits to transparency in how AI tools are used in instruction, assessment, and student support.
· Students and families will be informed when AI is used to recommend lessons, provide feedback, or support learning decisions.
· AI-generated recommendations are advisory and never determinative of grades, placement, or discipline.
· Students and parents may request explanations of AI-supported decisions and may contest errors or harmful outputs.
Final academic and disciplinary decisions are always made by qualified Wharton educators.
7) Governance and Procedures (How Wharton Approves AI)
The AI Governance Team ensures all AI tools align with Nevada Department of Education guidance, federal student privacy laws, and Wharton’s mission to preserve human-centered teaching.
7.1 AI Governance Team
A standing group including:
· school leadership, lead teachers, IT/security, student services, accessibility, privacy/compliance
7.2 Tool approval workflow
1. Use-case request (what problem, which grades, what data)
2. Risk tiering (low/medium/high risk)
3. Pilot with a small cohort + parent notice
4. Decision (approve with guardrails / revise / reject)
5. Ongoing monitoring (incidents, learning outcomes, equity signals)
7.3 “AI Feature Card” required for each AI tool
· what it does + what it must not do
· grades allowed (1–5 / 6–8 / 9–12)
· data used + retention
· known failure modes (hallucinations, bias)
· human oversight and escalation contacts
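The Feature Card fields listed above can be captured as a structured record so cards stay complete and comparable across tools. A minimal sketch, assuming hypothetical names: the `AIFeatureCard` class and its field names are illustrative, not a prescribed Wharton schema.

```python
from dataclasses import dataclass

# Illustrative sketch of the "AI Feature Card" as a structured record.
# Class and field names are hypothetical examples.

@dataclass
class AIFeatureCard:
    tool_name: str
    does: str                       # what the tool does
    must_not_do: str                # explicit prohibitions
    grade_bands: list[str]          # e.g. ["1-5", "6-8", "9-12"]
    data_used: list[str]            # data categories processed
    retention: str                  # retention period for that data
    known_failure_modes: list[str]  # e.g. hallucinations, bias
    oversight_contact: str          # human escalation contact

    def allows_grade_band(self, band: str) -> bool:
        """Check whether the tool is approved for a grade band."""
        return band in self.grade_bands
```

Using a dataclass means a card cannot be created with a required field missing, which mirrors the policy intent that every approved tool documents all of these items.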
8) Staff and Teacher Operating Procedures (Remote Reality)
8.1 Course setup requirements
Every course must publish:
· AI rules by assignment (Allowed / Limited / Not allowed)
· how students should disclose AI use
· what counts as unauthorized assistance
8.2 Teacher review norms (protecting credit integrity)
For key assignments:
· require drafts/checkpoints
· include short “explain it” reflections or oral checks
· validate that students are working individually, consistent with Wharton expectations
8.3 Support expectations
Wharton students are expected to log in regularly, participate, and submit unit work for teacher review; staff procedures must support that (feedback cadence, office hours, outreach when students stall).
9) Student and Parent/Guardian AI Use Guidelines (Plain Language)
Students:
· Use the AI companion to learn, practice, and improve.
· Do your own work on graded tasks unless your teacher says AI is allowed.
· If you use AI, say how you used it.
Parents/Guardians:
· Help set routines for independent study.
· Encourage “show your work” habits and honesty.
· Contact teachers if AI guidance seems confusing, incorrect, or inappropriate.
Student & Parent AI Guide (Wharton School – K–12 Online)
What is AI at Wharton?
AI is a computer helper (like a tutor chatbot). It can explain lessons, give practice, and help you study.
What AI is for (OK to use)
Students may use AI to:
· Ask: “Explain this in a simpler way.”
· Get extra practice questions.
· Check understanding: “Give me a short quiz.”
· Get feedback on a draft: “Is my paragraph clear?”
· Make a study plan: “Help me plan this week.”
· Get help with directions and vocabulary.
What AI is NOT for (Not allowed)
Do not use AI to:
· Do your graded work for you
· Give answers during tests or quizzes (unless your teacher says it’s allowed)
· Write your whole assignment and submit it as yours
· Make up sources, quotes, citations, data, or lab results
· Share private information (yours or someone else’s)
Keep your information safe
Never type into AI:
· your full name + address, phone number, passwords
· your school login details
· other students’ private info
· grades or teacher feedback (unless inside the school’s approved AI tool)
“AI Use Note” (when required)
Sometimes your teacher will ask you to add a short note if you used AI.
Copy/paste template (Grades 6–12):
· Tool used:
· What I used it for:
· What I wrote/changed myself:
· How I checked it was correct:
(Grades 1–5: your parent/guardian or teacher can help you write this note.)
How to use AI the right way (simple rules)
· Use AI to learn, not to skip learning.
· Check facts. AI can be wrong.
· Show your thinking. Keep notes, drafts, steps, and work.
· Ask a human when stuck. If AI is confusing, contact your teacher.
Parent/Guardian tips
· Ask your student: “Show me your steps, not just the final answer.”
· Encourage short daily study time + breaks.
· If AI gives unsafe, strange, or inappropriate responses: stop and tell the teacher.
Teacher AI Playbook (Wharton K–12 Online, Self-Paced)
1) Quick setup: “Assignment AI Labels” (post on every assignment)
Use one of these labels at the top of each task:
Label A — AI ENCOURAGED (Practice Only)
· AI allowed for: tutoring, extra practice, hints, checking understanding
· Student submission: final answers + brief reflection (“What I learned / what I fixed”)
Label B — AI LIMITED (With Disclosure)
· AI allowed for: brainstorming, outline help, grammar/clarity edits, practice questions
· AI not allowed for: writing final paragraphs/solutions verbatim, producing final code/analysis with no student work shown
· Student must include an AI Use Note (template below)
Label C — AI NOT ALLOWED (Independent Work)
· No AI tools while completing the task
· Students may use class notes/materials only (define what’s open/closed)
· Include an integrity reminder + a short authenticity check (see Section 4)
Label D — AI ALLOWED ONLY FOR ACCESSIBILITY
· AI permitted only for: read-aloud, translation, simplified directions, spelling support
· Not permitted for: generating content/answers
Label E — PROCTORED / VALIDATION CHECK (Closed Tool)
· No AI; timed or supervised
· Used to confirm independent mastery for credit
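The five labels above can be summarized in one permissions table for course-setup tooling. A sketch under stated assumptions: the dictionary keys, permission fields, and helper function are hypothetical, not an official Wharton schema.

```python
# Illustrative mapping of the five assignment labels above to their AI
# permissions. Keys and fields are hypothetical examples.

ASSIGNMENT_LABELS = {
    "A": {"name": "AI ENCOURAGED (Practice Only)",
          "ai_allowed": True, "disclosure_required": False, "proctored": False},
    "B": {"name": "AI LIMITED (With Disclosure)",
          "ai_allowed": True, "disclosure_required": True, "proctored": False},
    "C": {"name": "AI NOT ALLOWED (Independent Work)",
          "ai_allowed": False, "disclosure_required": False, "proctored": False},
    "D": {"name": "AI ALLOWED ONLY FOR ACCESSIBILITY",
          "ai_allowed": True, "disclosure_required": False, "proctored": False},
    "E": {"name": "PROCTORED / VALIDATION CHECK (Closed Tool)",
          "ai_allowed": False, "disclosure_required": False, "proctored": True},
}

def requires_ai_use_note(label: str) -> bool:
    """An AI Use Note is required only where disclosure is mandated (Label B)."""
    return ASSIGNMENT_LABELS[label]["disclosure_required"]
```

Encoding the labels once lets the LMS render the correct banner and disclosure prompt automatically from the teacher's chosen label.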
2) Required “AI Use Note” (copy/paste)
For Label B assignments (recommended Grades 6–12):
· Tool used:
· What I used it for:
· What I changed/made myself:
· How I verified accuracy (sources, steps, checking work):
For Grades 1–5 (simpler):
· I used AI to help me:
· I still did this work by:
· My parent/teacher helped me by:
3) Disclosure examples (show students what “good” looks like)
Example 1 (Writing):
“Used AI for an outline and to suggest stronger topic sentences. I rewrote all paragraphs, added evidence from Lesson 3, and removed one unsupported claim.”
Example 2 (Math):
“Used AI to explain the first step. I solved the rest on my own, showed work, and checked using a second method.”
Example 3 (Science):
“Used AI to quiz me on vocabulary. I did not use AI for the lab conclusion. I used my data table and class notes.”
4) Integrity-friendly assessment patterns (built for self-paced online)
Because self-paced remote learning requires strong evidence of learning, use layered proof rather than "gotcha" detection.
Pattern A: Draft → Feedback → Final (best for writing/projects)
· Require: outline + first draft + final
· Add: 3–5 sentence reflection: “What changed and why?”
· Optional: short audio/video “walkthrough” (30–60 seconds)
Pattern B: “Show Your Work” Checkpoints (math/science)
· Require: photos/scans of steps or typed reasoning
· Include: “Explain step 2 in your own words”
· Add: one “transfer” problem that’s not identical to practice
Pattern C: Micro-Oral Defense (fast, scalable)
· After submission, randomly select students (or all students for major tasks) for a:
o 2–4 minute call/voice note/video reply
o Ask 2 prompts: “Why did you choose this approach?” + “What would you change if…?”
· Works well for: essays, coding, projects, lab reports
Pattern D: Validation Check (credit protection)
· Short, timed, closed-tool quiz every unit/module
· Covers: core skills from the unit
· Used to confirm mastery even if AI helped during practice
Pattern E: Process Artifacts (project-based)
Require any 2–3 of:
· planning doc, data table, screenshots of work in progress
· version history (LMS submissions)
· peer feedback notes
· final reflection: what you tried, what failed, what improved
5) Practical rules for “AI in discussions” (forums, seminars)
· Allow AI to help students prepare, but require:
o one original claim + one piece of course evidence
o a personal connection (“I agree/disagree because…”)
· Prohibit posting AI-generated discussion responses verbatim.
6) Teacher workflow for AI incidents (simple steps)
If you suspect AI misuse:
1. Pause grading (don’t accuse in public)
2. Ask for process evidence: drafts, notes, steps, version history
3. Use a quick oral check: “Explain your approach” (2–4 minutes)
4. Decide based on human judgment + documented evidence
5. Apply consistent consequences per handbook/course policy
7) Recommended defaults (easy to run)
· Weekly: AI-encouraged practice
· Per unit: one AI-limited performance task + one closed-tool validation check
· For major assignments: draft + reflection + micro-oral defense
Regulatory Alignment Statement
This AI Policy is informed by Nevada’s STELLAR Pathway to AI Teaching and Learning: Ethics, Principles, and Guidance and is reviewed periodically to reflect evolving state and federal expectations for ethical, secure, and human-centered AI in K–12 education.