TikTok to start labeling AI-generated content

TikTok will begin labeling content created using artificial intelligence when it’s uploaded from outside its own platform.

TikTok says the effort is an attempt to combat the spread of misinformation on its social media platform.

“AI enables incredible creative opportunities, but can confuse or mislead viewers if they don’t know content was AI-generated,” the company said in a prepared statement Thursday. “Labeling helps make that context clear — which is why we label AIGC made with TikTok AI effects, and have required creators to label realistic AIGC for over a year.”

The move is part of a broader effort across the technology industry to provide more safeguards around the use of AI.

In February, Meta announced that it was working with industry partners on technical standards to make it easier to identify images, and eventually video and audio, generated by artificial intelligence tools. Under that effort, Facebook and Instagram users will see labels on AI-generated images that appear in their social media feeds.

Google said last year that AI labels are coming to YouTube and its other platforms.

A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.

TikTok said it’s teaming up with the Coalition for Content Provenance and Authenticity and will use the group’s Content Credentials technology.

The company said the technology can attach metadata to content, which TikTok can then read to instantly recognize and label AI-generated content. TikTok said it began using the capability Thursday on images and videos, with support for audio-only content coming soon.

In the coming months, Content Credentials will also be attached to content made on TikTok, and the metadata will stay with the content when it is downloaded. That will help identify AI-generated content made on TikTok and help people learn when, where and how the content was made or edited. Other platforms that adopt Content Credentials will be able to label it automatically.
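
At a technical level, Content Credentials work by embedding a provenance manifest in a file’s metadata, which a receiving platform can inspect on upload. The following is a rough, hypothetical Python sketch only; the manifest fields and helper function are invented for illustration, the real C2PA manifest is a signed, embedded structure, and TikTok’s internal pipeline is not described in this article.

# Hypothetical sketch: decide whether an upload should get an "AI-generated" label.
# The manifest layout below is invented for illustration and is not the real
# C2PA Content Credentials format.

def should_label_as_aigc(manifest):
    """Return True if embedded provenance metadata indicates AI generation."""
    if manifest is None:
        # No Content Credentials attached; fall back to creator self-labeling rules.
        return False
    actions = manifest.get("actions", [])
    return any(a.get("digital_source_type") == "trainedAlgorithmicMedia" for a in actions)

# Example: an uploaded image carrying a (hypothetical) manifest written by an AI image tool.
uploaded_manifest = {
    "claim_generator": "ExampleImageGenerator/1.0",
    "actions": [{"action": "created", "digital_source_type": "trainedAlgorithmicMedia"}],
}

if should_label_as_aigc(uploaded_manifest):
    print("Apply 'AI-generated' label to this upload")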

TikTok said it’s the first video-sharing platform to put the credentials into practice and will join the Adobe-led Content Authenticity Initiative to help push the adoption of the credentials within the industry.

“TikTok is the first social media platform to support Content Credentials, and with over 170 million users in the United States alone, their platform and their vast community of creators and users are an essential piece of that chain of trust needed to increase transparency online,” Dana Rao, Adobe’s executive vice president, general counsel and chief trust officer, said in a blog post.

TikTok has in the past encouraged users to label content that has been generated or significantly edited by AI. It also requires users to label all AI-generated content that contains realistic images, audio or video.

“Our users and our creators are so excited about AI and what it can do for their creativity and their ability to connect with audiences,” Adam Presser, TikTok’s Head of Operations & Trust and Safety, told ABC News. “And at the same time, we want to make sure that people have that ability to understand what fact is and what is fiction.”

The announcement was first made Thursday on ABC’s “Good Morning America.”

TikTok’s AI announcement comes just two days after the company said that it and its Chinese parent, ByteDance, had filed a lawsuit challenging a new American law that would ban the video-sharing app in the U.S. unless it’s sold to an approved buyer. The companies argue the law unfairly singles out the platform and amounts to an unprecedented attack on free speech.

The lawsuit is the latest turn in what’s shaping up to be a protracted legal fight over TikTok’s future in the United States — and one that could end up before the Supreme Court. If TikTok loses, it says it would be forced to shut down next year.

Millennials are the largest adopters of AI tools—for fun, not work

Artificial intelligence is seeing traction among the largest generational contingent in the U.S. workforce: millennials. And they may not be using it just to boost productivity at work.

Verbit analyzed Morning Consult survey data published in 2024 to illustrate how millennials interact with emerging AI tools. The polling suggests millennials are more active users of AI tools than even Gen Z: millennial respondents were about 20 percentage points more likely than users overall to say they use AI for work tasks. But they also indicated they’re slightly more likely to use it for leisure or creative pursuits than for work.

AI today is found not just in generative AI chatbots like ChatGPT that have invaded workplaces and popular culture. It’s also embedded in the algorithms that power recommendations in music and television streaming apps, financial investment services, productivity software like Microsoft Excel, conversation transcription tools, and tools that assist in writing code.

With more than a year of experimentation under their belts since the release of ChatGPT and the wave of AI software that followed, the appeal to millennials of using new AI-infused software for work is nearly on par with using it for entertainment recommendations.

Millennials make up the bulk of the workforce and are AI’s early power users

True artificial intelligence, called artificial general intelligence or AGI, refers to software that displays a level of intelligence indistinguishable from that of a human. Experts agree that true AI is a ways off. However, today’s AI-infused software is still powerful enough to significantly augment workers in many white-collar jobs—and even replace them in some limited cases like customer service functions.

Millennials’ roughly equal propensity to use AI for entertainment or work may speak to the entertainment consumption habits of those aged 28-43. But it could also reflect the anxiety that the proliferation of new AI tools has caused in the larger workforce.

While today’s AI falls short of AGI, its current capabilities have been strong enough to generate significant apprehension about job displacement. White-collar work was long thought to be immune from early forms of automation, which mainly displaced manual labor jobs with the help of robotics in warehouses and on factory floors.

And even though polling indicates nearly half of AI’s current users are leveraging the tools at work, other indicators show that adoption hasn’t come without added stress.

About two-thirds of workers say they are concerned about AI replacing their jobs, and an equal share say they fear falling behind if they don’t use it at work, according to a 2023 survey developed by Ernst & Young to gauge worker anxiety around businesses’ adoption of AI.

While the future of AI, and how big a threat it poses for workers, remains to be seen, a new report from advisory firm Gartner offers an optimistic view that runs counter to worker anxiety. The 2024 report forecasts that generative AI adoption within corporate workplaces could slow in the coming years as organizations confront the real costs of fully training AI models, as well as the looming intellectual property challenges making their way through the courts.

Story editing by Nicole Caldwell. Copy editing by Kristen Wegrzyn.

This story originally appeared on Verbit and was produced and distributed in partnership with Stacker Studio.
