Manchester Schools Revise AI Policy for Ethics, Transparency

(TNS) — Members of the Manchester school board’s Committee on Teaching and Learning are backing an updated policy unveiled Wednesday governing use of artificial intelligence (AI) in city schools.

Leslie Whitney, executive director of teaching and learning for the Manchester School District, said the revised policy is “forward-thinking, embracing AI as something that not only exists in our community, but something that will shape learning.”

“Emphasis on ethical use of AI has been written into this policy with considerations for citation, disclosure, reporting, and data privacy,” Whitney said. “No digital program replaces great teaching. Digital tools are meant to strengthen it. A focus on critical thinking is paramount. We have kept academic honesty as top of mind when crafting the policy and procedures — student and staff data privacy has also been an important piece in creating these guidelines.”

The policy says there are three AI platforms approved for use in the Manchester School District — SchoolAI, Khan Academy’s Khanmigo, and Canva.

The policy stresses AI use must be “transparent.”

“Students and educators are expected to disclose AI assistance in their work in accordance with this policy,” the policy says. “Submitting AI-generated work as one’s own, without disclosure, is a violation of the district’s academic integrity standards. AI can generate inaccurate, biased, or misleading content. Students and educators must approach AI outputs with curiosity and skepticism, verifying information and applying their own judgment before acting on AI-generated content.”

Committee chair Sean Parr said parts of the policy are “really smart,” before raising the issue of enforcement.

“It’s a little bit less specific as to how staff, teachers, will be able to tell if there’s an AI offense, because of course we don’t really have tools that can tell us if something’s been created by AI,” Parr said. “Who makes the determination that AI was misused and what kind of evidence they’ll be able to have or be able to look at.”

Manchester School District attorney Matt Upton said the problem is that while there are AI detection tools, “some of them are really unreliable.”

“I share your concerns about the ability to properly enforce it — it’ll write a book report on any particular novel that you’re reading, you can take that material, make slight modifications to it — is that really student work?” said Upton. “There are some pretty gray lines here, and I think we’re going to have to rely on the teachers to really decide, you know, the integrity of the product, how much actual effort was put in by the student, and to develop patterns within the class of when they suspect that AI policy has been violated.

“It’s going to be a challenge and it’s going to be a work in progress.”

Upton said one of the dangers of AI use is losing critical thinking and research skills.

“I had a law school professor many years ago that said, if you want to memorize the answers, you don’t need me and you don’t need a school — you just learn how to memorize the answers,” Upton said. “What you go to school for is to learn how and why the answers are what they are. We have the potential of backing our students into this trap of just finding the answers and not knowing the whys.”

Manchester School District IT Executive Director Stephen Cross admitted there’s nothing to stop a student “from going home and using a personal device on their own personal network and using AI.”

“There’s nothing that we can do to stop that or prevent it,” Cross said. “It would be really at the teacher’s level to understand that there’s no way this child did this work. There’s some common sense involved.

“At some point, I’m sure we’ll have tools in place that will be integrated with instructions so that it automatically flags these things, but there’s no way for us to stop a student from doing that.”

Results of a survey by the Pew Research Center released earlier this year show that a majority of teenagers in the United States believe their peers use artificial intelligence to cheat in school.

The survey of 1,458 American students ages 13 to 17 reports nearly six in 10 teens said students at their school use AI chatbots to cheat on work at least “somewhat often.”

The Committee on Teaching and Learning voted unanimously to recommend the policy be approved by the full school board next month.

© 2026 The New Hampshire Union Leader (Manchester, N.H.). Distributed by Tribune Content Agency, LLC.