
Artificial intelligence concept illustration. Photo by Mike MacKenzie, via Wikimedia Commons, licensed under Creative Commons Attribution 2.0 Generic (CC BY 2.0).
Imagine opening a peer-reviewed journal article, reading the research results, and discovering that the illustrations are completely fake: not intentionally misleading, but generated by a computer program without proper disclosure. This scenario recently stopped being hypothetical, when researchers discovered a published paper featuring AI-generated cell diagrams depicting impossible anatomy.
The article was retracted within days, but the incident highlighted a growing problem in academic science: artificial intelligence enables ethical violations at unprecedented scale.
Researchers now face powerful temptations to use AI for shortcuts that compromise scientific integrity. The tools themselves remain neutral; human choices determine whether AI becomes a research assistant or a mechanism for deception. Universities and research institutions are grappling with how to develop policies governing AI use in scientific contexts.
Society relies on published research to guide decisions about health, technology, and policy. When AI enables corners to be cut, everyone suffers from diminished knowledge reliability. The stakes couldn’t be higher for establishing clear ethical guidelines around artificial intelligence in academic environments.
When machines write science, they pose ethical challenges
The retracted article, “Cellular Functions of Spermatogonial Stem Cells in Relation to JAK/STAT Signaling Pathway,” published in Frontiers in Cell and Developmental Biology, exemplifies how the ethical challenges AI poses can escalate into serious problems.
The researchers used AI to generate cellular illustrations, but the images contained obvious anatomical anomalies impossible in actual biology. Rats displayed wildly oversized internal organs; cellular diagrams showed nonsensical structures.
The journal issued an official statement within three days of publication, confirming that the authors had generated the images using artificial intelligence without disclosing that fact. The researchers never responded to the editors’ requests for clarification or justification of the figures.
This case wasn’t isolated; similar incidents emerged in other prestigious publications. Neurosurgical Review had to restrict editor comments and letters because contributors were submitting text written with AI tools without attribution. The Retraction Watch database documents multiple cases where authors hid AI usage while peer reviewers remained unaware.
These documented violations demonstrate how easily AI enables corner-cutting. When researchers fail to disclose AI use transparently, they fundamentally compromise the peer review process.
Another common violation involves automatically generating scholarly text without disclosure or supervision. A recent article in AI and Society analyzed this growing trend across academic publishing. Researchers Steven Watson, Erik Brezovec, and Jonathan Romic documented how tools such as ChatGPT can draft complete articles with neither adequate human oversight nor acknowledgment.
They warned that unsupervised AI writing threatens scientific integrity principles, specifically intellectual honesty, social responsibility, and technical quality. The problem lies not with the technology itself but with inadequate oversight mechanisms.
When algorithms generate text without human critical thinking in the loop, research loses the careful judgment that distinguishes legitimate science from fabricated conclusions. Authors abdicate responsibility for content that AI produces, yet still claim authorship. This is a fundamental betrayal of scientific ethics.
The critical importance of transparency standards
International organizations and national governments have responded to these ethical crises by developing clear transparency standards. One common recommendation requires researchers to declare explicitly how and when artificial intelligence was used in their work.
Transparency means specifying which tasks benefited from AI assistance and which relied entirely on human effort. It also means being honest about when decisions proved difficult or uncertain. The transparency requirement goes beyond simple disclosure; it requires opening AI systems to external scrutiny.
Researchers must explain the training mechanisms, coding structures, and algorithms underlying AI systems they deploy. Such openness enables others to understand how the system functions and what assumptions shaped its outputs. Only through such transparency can scientific communities properly evaluate whether AI use was appropriate and ethical.
Colombia’s CONPES 4144, the national artificial intelligence policy document released in February 2025, emphasizes transparency as a cornerstone principle. The policy declares that humans must maintain central decision-making authority. AI functions as an assistive tool, supporting human judgment rather than replacing it.
The transparency requirement addresses a fundamental problem with AI in research. Machine learning systems cannot be held accountable for content they generate; responsibility remains entirely with human users who deploy these systems. When researchers use AI without transparent disclosure, they obscure this accountability.
Readers cannot evaluate whether human judgment shaped results or whether algorithms operated without proper oversight. Transparency also enables research communities to identify emerging ethical problems and develop appropriate responses.
Colombian institutions, including the National University, have begun hosting educational workshops on ethical AI use in scientific research. These sessions bring together researchers across disciplines to discuss implementation strategies for transparency guidelines.
Environmental and equity dimensions of AI ethics
Beyond honesty and disclosure, using AI in research raises serious environmental concerns that are often overlooked. Training sophisticated AI models demands enormous quantities of water and energy.
A University of California, Riverside study found that training GPT-3 required approximately 700,000 liters of water simply to cool computing servers. The figure seems abstract until set against national consumption: by 2027, projected global AI demand could require between 4.2 billion and 6.6 billion cubic meters of water annually.
This exceeds Denmark’s entire yearly water consumption. The carbon emissions from AI training prove equally troubling. Creating a single AI model can emit more than 626,000 pounds of carbon dioxide equivalent, roughly the lifetime emissions of five automobiles. These environmental costs arise purely from developing the systems; deploying AI in research generates additional impacts.
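To make these magnitudes concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures cited above; the per-automobile value is simply derived from the five-car comparison rather than taken from an independent source.

```python
# Back-of-the-envelope conversions for the figures cited in the text.

GPT3_TRAINING_WATER_LITERS = 700_000   # UC Riverside cooling estimate
AI_WATER_2027_LOW_M3 = 4.2e9           # projected annual demand, low end
MODEL_TRAINING_CO2_POUNDS = 626_000    # CO2-equivalent for one model
CARS_COMPARED = 5                      # "lifetime emissions of five automobiles"

# 1 cubic meter = 1,000 liters, so one training run used roughly 700 m^3.
training_water_m3 = GPT3_TRAINING_WATER_LITERS / 1_000
print(f"GPT-3 cooling water: ~{training_water_m3:,.0f} cubic meters")

# The 2027 projection dwarfs a single training run by a factor of millions.
ratio = AI_WATER_2027_LOW_M3 / training_water_m3
print(f"2027 low-end projection: ~{ratio:,.0f}x one GPT-3 training run")

# Lifetime emissions per car implied by the five-car comparison.
per_car = MODEL_TRAINING_CO2_POUNDS / CARS_COMPARED
print(f"Implied per-car lifetime emissions: ~{per_car:,.0f} lb CO2e")
```

Run as written, the sketch shows roughly 700 cubic meters of water per training run, a 2027 projection about six million times larger, and an implied 125,200 pounds of CO2-equivalent per car.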
The article’s authors recommend strictly limiting AI use to genuinely necessary cases. Using AI for routine tasks generates environmental harm without proportional scientific benefit. Researchers should evaluate whether AI actually improves their work or simply speeds up processes that humans could accomplish adequately.
This evaluation requires serious, honest reflection rather than automatically reaching for the newest technological tool. Environmental ethics demands considering planetary consequences alongside research convenience. Equally important, AI access creates new forms of inequality within the research community.
Well-funded institutions can afford subscriptions to advanced AI tools with superior capabilities. Poorly resourced universities and researchers access only limited, free versions. This disparity compounds existing educational inequalities, concentrating technological advantages among already-privileged populations.
Colombian national policy specifically addresses this concern, requiring that AI use in research respects human dignity, guarantees fundamental freedoms, and avoids discriminatory access practices. Technology should serve equity rather than widen existing gaps.
Understanding AI as augmentation, not replacement
The most constructive perspective views AI as a tool for enhancing human capabilities rather than replacing human researchers. Humans have always used technology to expand their abilities.
Archery improved hunting capacity; the printing press allowed knowledge to accumulate and spread; telescopes extended vision beyond natural limits. AI represents a continuation of this tradition rather than a fundamental break.
Professor Juan Mendoza-Collazos from the National University of Colombia proposes the concept of “augmented agency” to describe this relationship. Augmented agency involves using artifacts to improve capabilities while maintaining human control over decisions.
The key distinction separates augmenting human abilities from replacing human judgment. AI functions ethically when it helps researchers accomplish tasks more efficiently while keeping final decisions within human hands. It becomes problematic when it replaces human thinking, decision-making, or quality control.
Mental sedentarism presents an overlooked danger of excessive AI reliance. Just as physical muscles atrophy without exercise, cognitive capacities diminish if humans delegate all thinking to machines. Researchers need to maintain critical thinking, careful reasoning, and creative problem-solving abilities.
Relying exclusively on AI for complex intellectual tasks causes these faculties to weaken. Researchers lose capacity for original thought, memory formation, and nuanced decision-making if they stop exercising these abilities. This concern motivates recommendations that researchers maintain active intellectual engagement with their work.
They should not simply accept AI outputs as correct or reliable but should critically evaluate and verify all AI-generated content through independent human judgment. Ethical AI use requires maintaining human researchers as active participants rather than passive observers accepting machine outputs.
Building ethical frameworks for the future
Developing ethical criteria requires ongoing education because technology evolves rapidly and new dilemmas constantly emerge. Ethics itself, understood etymologically and practically, requires constant reflection and active engagement.
It cannot become a set of rules memorized once; instead, it demands continuous practice and renewal. Research teams should incorporate ethical consideration from the earliest planning stages through final publication. Researchers should ask whether AI use actually improves their work; whether human critical judgment remains central; whether they disclose AI use completely; and whether their approach respects environmental and equity considerations.
Maintaining detailed records of AI tool usage throughout the research process proves valuable. Such records document which versions were used, when they were deployed, and for which specific tasks. This documentation enables researchers to understand their work’s development and demonstrate that appropriate oversight occurred.
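What might such a record look like in practice? Below is a minimal sketch in Python; the structure, field names, and example entry are hypothetical illustrations, not an official disclosure standard.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical structure for logging AI use during a research project.
# Field names are illustrative; adapt them to your institution's policy.
@dataclass
class AIUsageRecord:
    tool_name: str     # which AI tool was used
    tool_version: str  # exact version, since behavior changes across releases
    date_used: date    # when the tool was deployed
    task: str          # the specific research task it assisted
    oversight: str     # how the researchers verified the output

usage_log = [
    AIUsageRecord(
        tool_name="ChatGPT",
        tool_version="(record the exact model version here)",
        date_used=date(2025, 6, 2),  # illustrative date
        task="Drafting a first-pass summary of related work",
        oversight="Every claim checked against the cited papers",
    ),
]

for entry in usage_log:
    print(f"{entry.date_used}: {entry.tool_name} - {entry.task}")
```

Even a log this simple answers the questions a journal editor is likely to ask: what was used, when, for which task, and under whose supervision.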
The National University of Colombia organized an educational workshop on June 16 addressing ethical AI use in scientific research. Seventy participants attended from multiple universities and countries.
The workshop identified both strengths and limitations of AI in research contexts and discussed international recommendations from UNESCO, OECD, the European Commission, and Colombian governmental bodies. Presenters explored three AI applications, demonstrating both ethical and unethical usage patterns.
The workshop recording remains available for researchers seeking guidance on implementing ethical standards. These educational initiatives prove essential because ethical judgment cannot be outsourced to policies or procedures; instead, it requires cultivating thoughtfulness and critical consciousness among individual researchers.
Science requires researchers, not just machines
The ethical challenges posed by artificial intelligence in research demand serious, sustained attention from academic communities, institutional leaders, and individual researchers.
Technology that enables unprecedented analytical capabilities simultaneously enables unprecedented ethical violations. Neither rejecting AI outright nor embracing it uncritically is an appropriate response. Instead, researchers must develop a sophisticated understanding of when AI genuinely improves their work and when it simply creates risks.
Using AI ethically requires maintaining researchers as central actors making key decisions. It demands complete transparency about tool usage so scientific communities can evaluate appropriateness. It necessitates recognizing environmental and equity implications of AI deployment.
Most importantly, it requires researchers to cultivate and maintain their own critical thinking, creativity, and judgment. Science advances through human curiosity, creativity, and careful reasoning. AI can assist these processes, but it cannot replace them. When researchers relegate themselves to passive observers accepting machine outputs, science itself becomes diminished.
The challenge ahead involves learning to work alongside artificial intelligence while remaining fully human in our thinking and our ethical accountability.