Canada-based telecom Telus Corp. has reportedly pledged not to use artificial intelligence to create Indigenous art, responding to cultural misappropriation complaints from several communities.
The telecom behemoth, which offers internet, mobile, and cable TV services, is a prime example of how companies try to preserve public confidence while capitalizing on artificial intelligence's efficiency. Its international division sells AI and content-moderation services to fintech, media, and technology companies.
According to local news sites, AI-generated content that imitates Indigenous art has stirred controversy in Australia. Some artists have complained that their work is being used without consent to create pieces sold online, and others have withdrawn from a portrait prize competition over AI concerns.
The Canadian minister of foreign affairs issued an apology in December for releasing an AI-generated image of an Indigenous woman.
Telus uses generative AI for customer service and has used image classification to build a recommendation engine. Employees have also used third-party tools such as OpenAI's DALL-E to generate images for internal uses like corporate slide decks.
However, Telus officials clarified that they cannot guarantee that outside AI models have not been trained on Indigenous art. The company's pledge is therefore limited to image generation under its own control.
AI Indistinguishable from Human Art
Telus’ pledge comes as AI art continues to be a topic of concern in the art community worldwide. A study from last year even revealed that AI art is becoming increasingly difficult to distinguish from human-made art.
The study focused on participants' ability to discriminate between images produced by AI and by human artists. It was led by distinguished research professor Dr. Scott Highhouse and Andrew Samo, a doctoral candidate in industrial and organizational psychology.
Despite breakthroughs in AI art, participants were unable to reliably tell the two apart, typically identifying the source correctly only slightly more than half the time.
To avoid biasing their study, Samo and Highhouse did not tell participants that some of the art was AI-generated. Instead, participants were simply instructed to view the images and evaluate them using aesthetic judgment criteria, and were not informed of the AI's involvement until afterward.
According to the findings, participants' confidence in their guesses was notably low: on average, they identified the source of the artwork correctly between 50 and 60 percent of the time. Yet despite the difficulty of telling the two apart, participants consistently reported a stronger positive emotional response to human-made art.
Real-Life Photographs vs. AI Art
People also continue to try to prove that AI art is an inferior form of art. Recently, a photographer won first place in a prominent contest's AI-generated photo category with a real-life photograph. The photo was disqualified shortly after it was discovered to be authentic.
Photographer Miles Astray entered his real image in the AI category of the 1839 Awards, a prestigious competition known for emphasizing innovation and cutting-edge technology in photography.
The jury initially commended Astray's picture for its originality and striking composition. But once the organizers realized the image was not AI-generated, they felt compelled to award the honors to other entrants who met the category's requirements.
Astray entered a real shot in an AI competition to draw attention to the moral ramifications of artificial intelligence in photography. His goal was to show that real-life scenes can be just as imaginative as those created by artificial intelligence. Tech Times