Buzzword or real risks? Advocating for greater transparency in AI-generated content

Content Forum chairman Rafiq Razali supports implementing regulatory measures for AI in content creation and distribution. 

As technology rapidly advances, AI has become a central topic in discussions about both its immense potential and its significant risks. Rafiq Razali states, “AI poses risks through the manipulation and exploitation of consumer data which can lead to privacy breaches and unauthorised use of personal information.”   

He continues, “Moreover, AI bias can limit exposure to diverse viewpoints, reducing meaningful dialogue and understanding, which can hinder social cohesion and personal growth.” 

Razali also notes that another concern is the lack of transparency in AI decision-making. 

“Consider a financial institution using AI to determine creditworthiness without revealing the criteria used. This lack of transparency can erode trust and create a sense of injustice among consumers who feel they are being unfairly judged by an opaque system.” 

Additionally, the rise of deepfakes and AI-generated scams poses an insidious threat.

“The Content Forum supports regulatory measures for AI in content creation and distribution,” Razali adds. “Deepfakes and AI-generated scams are particularly concerning because they can mislead people, causing confusion and potentially inciting unrest.”

By encouraging the disclosure of AI-generated content, the Content Forum believes transparency can help prevent such scenarios. “When people know when they are interacting with AI-generated content, it helps protect them from being misled or defrauded.” 

Such disclosures would inform users that certain content is AI-generated, enabling them to better assess its credibility and make informed decisions. For the Content Forum, maintaining public trust and safeguarding against misinformation is a top priority. 

Rafiq Razali hopes that including provisions for AI-generated content in the Content Code 2022 will be a significant step towards maintaining public trust and protecting against misinformation.

“When users know that content is AI-generated, they can approach it with the appropriate level of scepticism and critical thinking. This practice can significantly reduce the spread of false information and protect individuals from being deceived.” 

In essence, with the impressive power of AI comes great responsibility. The Content Forum advocates for content creators and platforms to be transparent about their use of AI, fostering a more trustworthy and reliable online environment.

However, this responsibility extends beyond just creators and platforms. It requires a collective effort. 

Extending transparency requirements to AI content requires industry stakeholders, regulators, and AI service providers to work together on comprehensive guidelines that address both innovation and safety. 

“This collaborative effort ensures that all perspectives are considered and that the resulting regulations are practical and effective, creating the transparency that is key to a safe and trustworthy digital environment.” 

The Content Forum will continue to advocate for self-regulation in Malaysia’s digital space, calling for the extension of transparency requirements to AI-generated content to enhance consumer trust and protect against misinformation.

This content is provided by G.O Communication.

The views expressed here are those of the author/contributor and do not necessarily represent the views of Malaysiakini.
