Deepfakes Pose Serious Threats and Risks to Brands and Creators

Industries have embraced artificial intelligence with open arms for innovation and business ease. But AI is also an area of concern, as leaders and businesses fail to grasp the devastating impact the technology could have if used unethically.

This is where deepfakes come in. AI-generated deepfake media is a growing threat to businesses, brands, industries, politicians, celebrities, and international security. A report stated that AI provides ever-more sophisticated means of convincing people of the veracity of false information, which has the potential to lead to greater political tension, violence, or even war.

Deepfakes are media content created by AI technologies that are generally meant to be deceptive. They are compilations of doctored images and audio designed to deceive people and trick audiences into believing made-up stories. The technique leverages deep learning algorithms to superimpose hyper-realistic face images of a target person onto another person’s body.

What Exactly Are Deepfakes and How Are They Made?

FGS Global highlighted that the potential dangers are obvious – “imagine a deepfake video showing a CEO of a listed company accepting a bribe or confessing financial fraud” – such deepfake attacks could destroy the reputations of executives and companies and irrevocably pulverize shareholder trust within minutes.

However, deepfake technology has existed for years and has only gained prominence recently with the growing popularity and use of AI. Deepfakes are often deployed in the film and television industry. For example, in Fast and Furious 7, artificial intelligence was used to recreate Paul Walker, the late actor who played the character Brian O’Conner. A report revealed that Computer-Generated Imagery (CGI) was combined with deepfake techniques to create an ultra-realistic imitation of Paul Walker.

Besides recreating deceased celebrities, there’s a growing trend of individuals and businesses around the world creating AI-recorded tracks using artists’ voices, and deceptive ads in which it appears that a performer or celebrity is endorsing a product or service. Sir Lucian Grainge, chief executive of Universal Music Group, called upon the government to take action in a statement. “While we have an industry-leading track record of enabling AI in the service of artists and creativity, AI that uses their voice or identity without authorization is unacceptable and immoral. We call upon Congress to help put an end to nefarious deepfakes by enacting this federal right of publicity and ensuring that all Americans are protected from such harm.”

Britt Paris, an assistant professor at Rutgers University who studies AI-generated content, said the people who make these technologies available are the ones really profiting from deepfake technology. “They don’t really care about everyday people. They care about getting scale and getting profit as soon as they can. We’re at a new crossroads here, a new nexus of what types of things are possible in terms of using someone’s likeness.”

Deepfakes: When Seeing Isn’t Believing

The expert said that anyone who cannot pay an actor or celebrity to appear in their advertisements will probably turn to these tools. “These smaller scammer companies will definitely use the tools at their disposal to eke out whatever money they can from people.”

Siwei Lyu, a digital media forensics expert at the University at Buffalo, believes creators of these fake endorsements follow a straightforward process. “They start with a text-to-speech program that generates audio from a written script. Other programs can use a small sample of authentic audio from a given celebrity to recreate the voice, sometimes with as little as a minute of real audio. Other programs create lip movements to match spoken words in the audio track. That footage is then overlaid onto the person’s mouth. All the software’s pretty easy to use.”

Colin Campbell, an associate professor of marketing at the University of San Diego, said videos produced in bulk don’t need broad distribution. “You can just target certain groups of consumers, and only those people will see them. So it becomes harder to detect these, especially if they’re targeting people who are less educated or just less aware of what might actually be happening.”
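The multi-step pipeline Lyu describes – scripted text-to-speech in a cloned voice, generated lip movements, and a final overlay onto real footage – can be sketched schematically in Python. Every function below is a hypothetical stub standing in for a separate off-the-shelf tool; none of these names refer to a real library or API.

```python
# Schematic sketch of the fake-endorsement pipeline described above.
# Each function is a placeholder for a distinct tool in the real workflow.

def clone_voice_tts(script: str, voice_sample_seconds: float) -> str:
    """Steps 1-2: text-to-speech in a cloned voice. Per Lyu, real tools
    can work from as little as about a minute of authentic audio."""
    assert voice_sample_seconds >= 60, "needs roughly a minute of real audio"
    return f"audio({script})"  # stub: a real system would emit a waveform

def lip_sync(audio_track: str) -> str:
    """Step 3: generate mouth movements matching the spoken words."""
    return f"lips({audio_track})"  # stub: real output is a frame sequence

def overlay(base_video: str, lip_track: str) -> str:
    """Step 4: composite the synthetic mouth region onto source footage."""
    return f"{base_video} + {lip_track}"

def fake_endorsement(script: str, base_video: str) -> str:
    """Chain the three stages into one finished clip."""
    audio = clone_voice_tts(script, voice_sample_seconds=60)
    return overlay(base_video, lip_sync(audio))

print(fake_endorsement("Buy this product!", "celebrity_interview.mp4"))
```

The point of the sketch is the low barrier to entry: each stage is a separate, widely available tool, and chaining them requires no expertise – which is exactly why, as Lyu notes, “all the software’s pretty easy to use.”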

Experts highlighted that a fake video can sometimes be too convincing to be spotted or called out as a deepfake. Social media platforms are key arenas for consumer exposure to deepfakes.

Nandika Chand