4 min read | Pune | Updated: Apr 28, 2026 02:48 PM IST
The Indian Institute of Science Education and Research (IISER) Pune, one of India’s top science institutes, has permitted the use of generative artificial intelligence by default. Under guidelines adopted in March, if AI use is to be disallowed for any assignment or research paper, this must be specified beforehand.
Universities and colleges across the world are grappling with the challenge of generative artificial intelligence. Assignments like essays or take-home problems, which formed the backbone of classroom assessment in many degrees, are no longer feasible options as AI tools can generate solutions in minutes.
To tackle this, IISER Pune constituted an AI committee in July 2025, under the chairmanship of Professor Sutirth Dey, to develop guidelines for generative AI usage. Formed after consultations with students and staff, the guidelines apply to all assignments, theses, scientific articles submitted to peer-reviewed journals, and any other documented output at the institute.
These guidelines are significant because they can form a basis for other higher education institutions that lack the expertise to form their own guidelines.
‘No strong method to detect AI use’
The most important point laid out in the guidelines is that, unless explicitly forbidden by the competent authority, the use of generative AI is permitted by default in all documented outputs and activities by students and staff. This means that course assignments, research papers, and other documents can be prepared with the help of AI tools.
Prof Dey told The Indian Express, “The implementation of a ban on generative AI is going to be very, very difficult. Theoretically, we do not have any strong method to figure out if some text has or has not been generated using generative AI. It’s not even that the method theoretically exists and nobody has been able to put it into software. The method itself is not there.”
According to the guidelines, if generative AI is used, the user bears full responsibility and accountability for the output. The user will be solely responsible for errors of omission or commission related to the content, accuracy, originality, or attribution of the output.
If there are multiple authors associated with a study, a consensus on AI use must be reached before the work commences.
Viva-voce verification
If a competent authority wants to prevent the use of generative AI for any output, they must state this beforehand in writing. The authority must also state how such a ban will be implemented; examples could include using a viva-voce or an honour-based system, where students are expected to act honestly and uphold codes of conduct.
Giving an example of a verification system, Prof Dey explained, “In maths or in physics, after the assignments have been submitted, you can make the students sit in a room and then essentially give them some small subset of the assignment questions. Then, if they can solve the question in class, it means they have not used AI and will get the marks. If they cannot solve it, it means that they have used AI, and even though their submission is correct, they won’t get the marks.”
AI attribution
If generative AI is used in a way that its output does not form part of the final output, no attribution to AI is required. This includes literature research, understanding concepts, brainstorming, grammar checks, and copy editing using AI.
For uses that go beyond this category, such as generating substantial computer code, complete sentences or paragraphs, tables or figures, or proofs or derivations, the user must acknowledge the use of AI in the output.
If generative AI is used in the preparation of any thesis, the “contributions” page must contain an AI attribution statement.
For academic papers submitted to peer-reviewed journals, IISER’s guidelines recommend following the journal’s policy.
Data privacy
Any data uploaded to generative AI platforms can potentially be used by the company to train its model. Anything uploaded to such platforms must therefore be treated as if it has been posted on the internet, the guidelines say.
Uploading proprietary research data, unpublished results, personally identifiable information of research participants or any other confidential information to AI platforms is prohibited. Lapses in this regard will be subject to institutional disciplinary and data-security policies.
© The Indian Express Pvt Ltd