Refik Anadol, Thomas P. Campbell and the Serpentine’s Future Art Ecosystems team on the art world’s AI dilemma

The recent publication of Stanford University's authoritative annual Artificial Intelligence Index Report confirmed big tech's strengthening grip on innovation in the artificial intelligence sector and brought two questions into focus for the art world. For museums: how to work with industry giants without being distanced from their cultural offering by the summarising power of AI. For artists: how to thrive creatively when access to the means of production is monetised in Silicon Valley.

We spoke to the Future Art Ecosystems team at Serpentine, London, the museum director Thomas P. Campbell, and the artist Refik Anadol about how they engage with the challenges and opportunities of working with artificial intelligence in 2024.

‘Lines of power are still being drawn’

Case study: Future Art Ecosystems team, Serpentine, London

The Future Art Ecosystems (FAE) team published Future Art Ecosystems 4: Art x Public AI (FAE 4), the fourth in its annual series of reports encouraging new thinking around art and technology, in March. The team discussed some of the Stanford AI Index Report's findings with The Art Newspaper in May.

Hans Ulrich Obrist (second left), artistic director of Serpentine, at the launch of the FAE 4 report with (from left) Kay Watson, head of arts technologies at Serpentine; Victoria Ivanova, arts technologies R&D strategic lead; Eva Jäger, arts technologies curator and creative AI lead; and Alasdair Milne, co-author of the report. Photo: Sam Nightingale

The FAE team at Serpentine says that there may be rationalism and pragmatism in the way Stanford’s Artificial Intelligence Index Report reads “the market dynamics in terms of who within the current tech ecosystem has the capacity to lead on frontier models [programs trained on vast amounts of data for the latest AI breakthroughs]”. But, as they write in FAE 4, “the legal challenges around training data build and the drumming up of… discourse around the need to negotiate the [public reception] of these models at all levels (cultural, regulatory, and ownership interests) indicate that lines of power distribution are still being drawn”.

In FAE 4, the team proposes lobbying, for public benefit, for “state management of computational quotas and taxes” and for a cultural sector that understands its critical role in addressing the AI landscape. “The mandate of cultural institutions is to make informed decisions that serve the public interest,” the team says. “This does not mean there should be an absolute embargo on partnering with large corporate actors, but the terms of that partnership should benefit the public above and beyond what we call ‘thin publicness’ [meaning, in this context, access to AI models].”

FAE 4 proposes the prototyping of “data trusts” to provide independent oversight of data usage—something they are doing with the artists Holly Herndon and Mat Dryhurst for a forthcoming project—to offer the cultural sector “a pathway for negotiations” with the AI tech companies. “It is unclear,” the group says, “whether [AI] models that are advanced by industry standards are necessarily fit for purpose for cultural organisations or artists, which might want to encourage a narrower and more experimental use of AI systems.” LJ

‘Museums will have to engage with AI industry giants’

Case study: Thomas Campbell, director of the Fine Arts Museums of San Francisco

Thomas Campbell spoke to The Art Newspaper after addressing the “Promise of Digital” session at the Connecting Cultures, Bridging Times summit in Hong Kong, organised by the West Kowloon Cultural District Authority on 24-25 March.

Thomas Campbell says AI can help museums agglomerate and edit the vast amounts of data they hold. Photo: © Gary Sexton, courtesy of the Fine Arts Museums of San Francisco

The museum sector is at a technological tipping point and will soon have to engage with industry giants such as Google to disseminate information and data, Campbell told The Art Newspaper in March.

“I think the positive thing is that museums have now spent about 25 years investing heavily in getting their collections online, along with bibliography and provenance information. And yet, although most of us are using two or three standard information management systems, we’re still very siloed. And I think that AI has the power to help us break down those silos and to agglomerate and edit vast amounts of data in unprecedented ways, drawing on a huge data set for art history and education.”

Companies like OpenAI, Google and Microsoft are already ingesting large volumes of information in the public domain into their AI systems, he adds. “A lot of companies are now developing interfaces for AI systems that would allow you to have a curatorial filter. It’s just a matter of months before these systems are going to be telling you about Monet, Medieval tapestries or Damien Hirst; they’re going to be doing that whether we are participating or not.”

Museums could, for instance, upload data about their collections that would serve as a “primary point of reference” before it goes on to platforms such as Wikipedia, Campbell says. “You could choose to privilege certain sources. It’s a curated filter and we’re looking at whether we could develop an audioguide 2.0 that would draw on a curated AI interface like that. This obviously would require a lot of investment but would increase the support we provide museum visitors by leaps and bounds.”

Campbell describes his experiences in developing digital platforms in San Francisco and during his time as the director of the Metropolitan Museum of Art in New York. “One of the things I found was that the Fine Arts Museums had made a considerable investment in digital back in the 1990s. But it had not really been kept up to date. So, somewhat surprisingly in the Bay Area, where there is such a focus on technology, the museums had fallen behind. I fast-tracked some fairly straightforward decisions like adopting an off-the-shelf collections management system, TMS, which is used by many museums,” he says.

He also describes a pivotal moment outlined by Eric Schmidt, the former chief executive of Google. “He cited Moore’s Law, which predicted that, because of the rate of progress in technology, devices would get twice as powerful, twice as small and twice as cheap very rapidly. And I was worried at the time that it seemed such an elitist play, but he was dead right. By the time I left [the Met] in 2017, something like 95% of our visitors were carrying smartphones.” GH

‘AI has to be for anyone and everyone’

Case study: Refik Anadol, digital artist

Refik Anadol is one of the leading digital artists working today and a teacher at the University of California, Los Angeles (UCLA). He took part in an artist residency with Google Artists and Machine Intelligence (AMI) in 2016, when he started making AI data paintings and AI data sculptures. Anadol has shown at art fairs and museums around the world and will present a new immersive commission at the Guggenheim Bilbao in 2025. He spoke to The Art Newspaper in London in March, and by video in May while attending the Google I/O conference in San Jose, California.

Refik Anadol: a champion of radical clarity in demonstrating how AI art is made, and of using open-source AI models. Photo: Efsun Erkilic

This year Refik Anadol has been showing work made using a Large Nature Model, a program “trained” by analysing millions of images and audio recordings from leading zoological and botanical institutions, including the Smithsonian Institution in Washington, DC, the Natural History Museum in London and Cornell University. The model was unveiled at the World Economic Forum in Davos before featuring in Echoes of the Earth: Living Archive, Anadol’s first solo show at Serpentine, London, earlier this year.

In 2025 Anadol plans to release Dataland, an open-source AI model and the fruit of the studio’s testing of the Large Nature Model. With Echoes of the Earth: Living Archive, Anadol made a mark on two emerging trends in digital art: showing the artist’s process, in order to demystify acronym-rich technologies such as AI and NFTs; and using AI to democratise information. Both trends aim to reduce fear of the technological “unknown” and to demonstrate to the traditional art world the serious intent of digital artists.

Asked about the Stanford report’s analysis of the recent comparative performance of advanced language models (programs trained on vast amounts of text), in which “closed AI” models outperformed “open AI” models, Anadol remains a determined champion of open-source AI. He sees hope in the fact that more of the tech giants, most notably Google, are adding “open” models to their output.

“The future is open source,” he says. “The future needs to be inclusive AI research. AI is a technology of anything and everything. And if something is anything and everything, it has to be for anyone and everyone. And that’s where that future can be—for demystification, understanding, equally distributed knowledge. There is so much dignity there.” For artists, he says, open source is the only approach.

Anadol holds out hope that “open” AI will prevail across the industry: “Meta does this, Google does this. I think Apple should. Microsoft doesn’t—I don’t know why… OpenAI in the beginning was all about ‘open’. I hope [Apple and Microsoft] share too. And then it’s a very, very fair environment that people can see and understand different models, their behaviours, actions.”

As an artist working in AI with big tech, Anadol is realistic about the resources, above all the expensive computing power, required to make something as ambitious as his Large Nature Model and the forthcoming Dataland. It is impossible, he says, to create work on that scale without support from a supplier of cloud computing; in his case, that support has come from Google and the chipmaker Nvidia. “It’s very challenging research… and so you just can’t do it without the support of a tech pioneer, because you need that resource.” LJ
