The Badger Herald’s guidance and policies on using AI in our work.
Last updated: January 18, 2026.
Generative artificial intelligence is the use of large language models to create something new, such as text, images, graphics and interactive media. Although generative AI has the potential to improve newsgathering, it also has the potential to harm journalists’ credibility and our unique relationship with our audience.
As we proceed, the following core values will guide our work. These principles apply to the newsroom and to other non-news departments, including advertising, events, marketing and development.
Transparency — both internal and external.
Externally, as we use AI in our journalism, we will document and describe the tools with specificity. When AI tools influence audience-facing content, we will tell the audience in ways that both disclose and educate news consumers. We will work with editors and designers to create disclosures that are precise in language without being onerous to our audience. This may be a short tagline, a caption or credit, or for something more substantial, an editor’s note. When appropriate, we will include the prompts that are fed into the model to generate the material.
Our transparency works on multiple levels. Internally, it facilitates conversation and creativity. It will be clear to our peers whenever we are using generative AI. This will facilitate collective learning and help us create applicable, adaptable policies as the technologies evolve.
Externally, communication and disclosure ideally create opportunities to get feedback from the audience, as well as educate consumers. As journalists, part of our job is to empower the audience with news literacy skills. AI literacy — understanding how generative AI works, what benefits it brings to the information ecosystem and how to avoid AI-generated misinformation — is a subset of news literacy.
Accuracy and human verification — All information generated by AI requires human verification. Everything we publish will live up to our standards of verification. Increasingly in all of our work, it is important to be explicit about how we know facts are facts. This is particularly important when using AI. For example, an editor should review prompts and any other inputs used to generate a story or other material, and every result should be replicable.
Audience service — Our work in AI should be guided by what will be useful to our audience as we serve them. We have made a promise to our audience to provide them with information that accurately informs University of Wisconsin students.
Exploration — With the three previous principles as our foundation, we will embrace exploration and experimentation. We should strive to invest in newsroom training — internal or external — so every staff member is knowledgeable in generative AI tools.
Logistics
The point person/team on generative AI in our newsroom is Anna Smith who is supported by the managing editors Anna Kristoff and Zoe Klein. This team will also be the source of frequent interim guidance distributed throughout our organization.
The team will seek input from a variety of roles, particularly those who are directly reporting the news.
In addition, members of this team will:
- Write clear guidance about how The Badger Herald will or will not use AI in content generation.
- Edit and finalize our AI policy and ensure that it is both internally available and where appropriate, publicly available.
- Seek input from our audience, through surveys, focus groups and other feedback mechanisms.
- Manage all disclosures about partnerships, grant funding or licensing from AI companies.
- Understand our policies and explain how they apply to AI and other product development. This includes regularly consulting with reporters, editors, lawyers or other experts that influence newsroom policies.
- Innovate ways to communicate with the audience to both educate them and gather data about their needs and concerns.
Here are the generative AI tools we encourage use of:
- Otter.ai
Editorial use:
Generative AI is generally permitted for the following purposes:
Research — It’s fine to ask a publicly available large language model to research a topic. But, The Badger Herald journalists are required to independently verify every fact. So be wary. It is fairly common for AI to “hallucinate” information, including facts, biographical information and even newspaper citations.
Interviewing — Tools such as Otter.ai may be used to save time in the transcription process. AI-created summaries of interviews may also be referenced to increase understanding of content and to identify potential quotes. All information taken from these summaries, whether in the form of direct quotes or paraphrased material, must be verified against the original audio recording and text transcription.
Headline experimentation — Asking AI to generate headlines is a form of research. The same caveats apply. Also, be sure to put enough facts into the prompt that the headline is based on our journalism and not other reporting.
Summary paragraphs — Do not use AI to generate article summaries that appear at the top of our work.
Searching and assembling data — You are permitted to use AI to search for information, mine public databases or assemble and calculate statistics that would be useful to our audience. Any data analysis should be checked by an editor.
Visuals — Do not use AI services to create illustrations for publication.
Do not use AI to manipulate photos unless they are for illustration purposes and clearly labeled as such. Visual journalists need to stay aware of software updates to photo processing tools to ensure AI enhancement is used, or disabled, in accordance with our policies. Do not publish any reader-submitted content without first verifying its authenticity.
Fact-checking — AI alone is not sufficient for independent fact-checking. Facts should be checked against multiple authoritative sources that have been created, edited or curated by human beings; a single source is generally not sufficient.
Social media use
Verbatim AI-generated content is not permitted on our social channels.
Privacy and security
- No personal information about our staff, sources or readers should be entered into AI programs.
- None of our intellectual property should be entered into AI programs.
- Staff working with AI tools should have a clear understanding of our organization’s privacy policy.
Ongoing training
Training on AI tools and experiments will be available and at times even mandatory. This training will be delivered by a combination of members from the internal committee and outside experts.
If our content is found to have violated this policy, an investigation will be conducted to determine in what capacity AI was used, whether "hallucinations" were present and which parties are responsible for the violation. Interviews will be held with the individual or individuals involved. Any content found to have violated this policy will be removed from our website, social media pages and other platforms as soon as possible.
After a first violation, the responsible party will be asked about their motivations for the prohibited AI usage and reminded of our policy. If the responsible party continues to violate the policy, they may face suspension or dismissal.
Guidelines for product teams and technologists
Our product/technology team is committed to understanding and staying up to date with all tools, software or companies we use or partner with. We will:
- Vet third-party vendors and their usage policies before testing any AI product.
- Make sure any product we use adheres to our own data and privacy policies.
- Perform comprehensive testing on all software and tools for reliability and accuracy before using them for any consumer-facing content.
- Ensure all software settings are correct, and in accordance with our policies, before using any LLM.
- Keep up-to-date on the latest software updates for products we use.
- Provide best practices, documentation and training for new tools to internal users.
Adapted from a template by the Poynter Institute.