AI is already reshaping newsrooms, AP study finds

A new study from The Associated Press reveals that generative artificial intelligence is already reshaping newsroom roles and workflows.

Nearly 70% of newsroom staffers from a variety of backgrounds and organizations surveyed in December say they're using the technology for crafting social media posts, newsletters and headlines; translating and transcribing interviews; and drafting stories, among other uses. One-fifth said they'd used generative AI for multimedia, including social graphics and videos.

“News people have stayed on top of this conversation, which is good because this technology is already presenting significant disruptions to how journalists and newsrooms approach their work and we need everyone to help us figure this technology out for the industry,” said Aimee Rinehart, co-author and senior product manager of AI strategy at the AP.

Representatives from legacy media, public broadcasters and magazines were among the 292 surveyed, mostly based in the U.S. or Europe; more than 30% of those who responded were from newsrooms with more than 100 editorial employees.

The AP, which has been dabbling in AI for a decade, recently helped five local newsrooms develop generative AI tools.

“We reached deep into our AI in journalism networks to recruit participants for the survey, and not surprisingly, most of the people who took the survey had familiarity with generative AI in some form,” said Ernest Kung, co-author and AI product manager at the AP. “I was intrigued by the wide variety of current uses of generative AI journalists described.”

So while the numbers show some comfort with using generative AI, those surveyed have likely had more time than most to experiment with the technology in their newsrooms. Still, the research shows that for any news organization looking to stay relevant, familiarity with AI is a must.

“Experiment, experiment, experiment,” said Hannes Cools, assistant professor at the University of Amsterdam and co-author of the study. “Responsible experimentation could spark discussion, and that could lead to more responsible use. I do believe that generative AI is here to stay, and it will (if it hasn’t already) be present in many aspects of our daily lives.”

The tension between ethics and innovation drove Poynter's creation of an AI ethics starter kit for newsrooms last month. The AP — which released its own guidelines last August — found that fewer than half of respondents had guidelines in their newsrooms, while about 60% were aware of some guidelines about the use of generative AI.

Cools was surprised to learn that generative AI guidelines were not the top answer to a question about responsible use of the technology. Instead, respondents said they'd simply verify what they got from an AI program — or not use AI at all.

“… (T)here was also mention of using their gut feeling to evaluate responsible use,” Cools said in an email. “This was quite surprising to me still, as we might not want to trust our gut feeling for judging what is ethical use.”

Some other highlights from the study:

  • 54% said they’d “maybe” let AI companies train their models using their content.
  • 49% said their workflows have already changed because of generative AI.
  • 56% said the AI generation of entire pieces of content should be banned.
  • Only 7% of those who responded were worried about AI displacing jobs.
  • 18% said lack of training was a big challenge for ethical use of AI. “Training is lovely, but time spent on training is time not spent on journalism — and a small organization can’t afford to do that,” said one respondent.

“It’s an exciting moment for journalism and technology, maybe a little too exciting, which makes it difficult to plan for the next year let alone what may transpire in the next 10 years,” said Rinehart. “One thing is clear from this research: more research is needed on AI and newsrooms, especially on workflow efficiency claims.”


Author: Rayne Chancer