Voice cloning is the next tactic used by online scammers, following AI-generated faces. These days, fake commercials and news anchor interviews are widely shared on social media to promote apps and games that promise rapid riches. Artificial intelligence deepfakes are distorting the faces of a number of celebrities, including Ed Sheeran, Kylian Mbappé, Taylor Swift, and international news presenters. In this piece, Cliche Magazine delves deeper into these fictitious interviews and promotions.
Deepfake technology is used in fraudulent advertising to mimic the voices of celebrities, such as the French football star Kylian Mbappé, in order to promote online gambling. These fraudulent activities, which also use other well-known people as props, raise questions about the use of AI algorithms to produce false material.
Recent events have brought to light the hazards of the inappropriate use of AI to produce deceptive material, given the rapid advancement of deepfakes. Legislators in the US have proposed legislation to protect individual privacy in response to cases such as the “Taylor Swift case”, in which inappropriate images created by artificial intelligence went viral (we’ll say more about the singer later). This pattern emphasizes the necessity of stricter oversight and the right kind of legislation to stop abuse.
In this regard, platforms like YouTube are implementing measures to stop the spread of dangerous deepfakes, starting with explicit guidelines for flagging AI-generated content. To protect her privacy, X temporarily disabled searches for Taylor Swift. The use of Mbappé’s voice in deceptive advertisements on Facebook is further evidence of the need to act against deepfakes, and these policies reflect growing attempts to build a safer digital environment.
Celebs’ Cloned Voices Used in Gambling Advertisements: How Are Deepfakes Created?
By producing content that seems to be approved by celebrities, advertisements take advantage of media appearances and interviews. Tech experts exposed these frauds on social media by demonstrating how artificial intelligence techniques can turn real video into a deceptive commercial.
Initially, an Internet-accessible clip featuring well-known public figures, including athletes, actors, politicians, and businessmen, is selected. The video is then run through an AI program, which replaces only the subject’s voice and mouth movements, with realistic results. The final video looks just like the original, but the speaker now says whatever the person who altered the video wanted them to say.
These kinds of clips are flooding social media right now. Videos of well-known people are being repurposed by fraudsters to give the impression that they are endorsing or promoting dubious betting applications and investment programs. Though technologically sophisticated, this dishonest activity has one main goal: to lure people to fraudulent gambling platforms, such as casino apps, and encourage spending through a bogus guarantee of quick profits, whether from earning money, trading cryptos, or some other scheme.
The use of celebrities’ cloned voices, such as Mbappé’s, in malicious commercials is a concerning trend in Internet scam tactics. Despite their coarse content, these clips end up on platforms like Google Play, where they promote shady gambling apps. Deepfake technologies are becoming more and more sophisticated, making them harder to detect. Even crude examples manage to infiltrate well-known social media platforms like Facebook, leaving authorities and users with the difficult task of recognizing and thwarting these sophisticated frauds.
Deepfake Clips of Famous Indian and French TV Presenters Promoting Casino Gaming Apps Go Viral
A video featuring the managing editor and anchor of News18 Hindi, Amish Devgan, reporting on the advantages of the “11Winner” casino app, is going viral online. As it turned out, Devgan never endorsed the app, and the claims in the video were untrue – the video is a deepfake.
In the footage, which plays like a news broadcast, Devgan first discusses the app, then tells the tale of a man who made a fortune with it. According to the video, the man (his name is irrelevant at this point, since it is invented just like the rest of the clip) changed his life by using his winnings to buy a sports car and a two-story home. The on-screen text reads: “300,000,000 INR is the jackpot won by a Mumbai native using a mobile app”. Facebook users are sharing the video with no caption.
The well-known TF1 television news anchor Gilles Bouleau is also a victim of an ad campaign that uses cloned voices to encourage gambling online.
Taylor Swift’s Case Started It All
As 2023 was coming to an end, Taylor Swift was going through hell as she became a victim of non-consensual deepfake pornography.
The popular American singer-songwriter’s deepfake porn images quickly spread across X for nearly a full day in the last week of December. The social networking site, formerly known as Twitter, reacted so slowly that one picture received 47 million views before being removed. Many of Swift’s followers mobilized to report the images and drown them out, and the episode sparked widespread indignation that even the White House described as “alarming”. The case raises serious concerns about the moral limits of technology and its propensity to mistreat and abuse individuals. On the night of December 28, X finally took down the pictures and blocked searches for the pop singer.
Do you want to look up Taylor Swift on X? For several days, that wasn’t possible. Searches for Taylor Swift on Elon Musk’s social media site were disabled in response to this troubling controversy involving explicit, AI-generated photos of the singer. X users attempting to search for her name received an error notification instead.
How Can Gambling Companies Combat These False Ads?
Online gambling companies that use deepfake advertisements featuring celebrity photos and voices for their own promotional purposes risk being shut down. Ironically, artificial intelligence is also a powerful way to stop deepfake advertisements and Internet crime.
Deepfake advertisements using celebrity photos and footage have recently surfaced across the media, demonstrating how widespread they have become. AI will be a crucial instrument in the fight against this growing fraud danger, which threatens not only the advertising business but also financial organizations looking to confirm identity through Know-Your-Customer (KYC) procedures. The worry is that deepfakes might also fool clients and staff into paying money they shouldn’t, for example through account takeovers or other illegal activities.
Reputable suppliers of payment and compliance solutions for the financial sector believe that verifying customers’ identities using multiple data points is the only way to counter the threat. This involves comparing transaction histories, social media interactions, and online behavioral trends.
Artificial intelligence will make all of this much faster and less labor-intensive, because it can sort through massive data sets and reach clear conclusions. It would also allow continuous real-time monitoring rather than one-time inspections. Luckily for avid gamblers, many of the online operators reviewed at TopCasinoExpert, such as the best Flatdog online casinos in 2024, have already started implementing these guidelines.
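To make the multi-data-point idea concrete, here is a minimal sketch of how several verification signals could be combined into one risk decision. Everything in it – the signal names, weights, and threshold – is an illustrative assumption for this article, not any real vendor’s KYC API.

```python
# Hypothetical sketch: combining several identity signals into a single
# risk score, roughly as a KYC system might. All names, weights, and
# thresholds are illustrative assumptions, not a real vendor's API.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    document_match: float     # 0.0-1.0, ID document vs. live selfie similarity
    behavior_match: float     # 0.0-1.0, fit with past online behavioral trends
    transaction_match: float  # 0.0-1.0, fit with past transaction history


# Illustrative weights: no single signal can approve a customer on its own.
WEIGHTS = {"document": 0.5, "behavior": 0.25, "transaction": 0.25}
APPROVE_THRESHOLD = 0.8  # illustrative cut-off


def risk_decision(signals: VerificationSignals) -> str:
    """Weighted combination of independent signals into one decision."""
    score = (
        WEIGHTS["document"] * signals.document_match
        + WEIGHTS["behavior"] * signals.behavior_match
        + WEIGHTS["transaction"] * signals.transaction_match
    )
    # A convincing deepfake may fool the document check alone, but it is
    # far less likely to also reproduce behavioral and transaction patterns.
    return "approve" if score >= APPROVE_THRESHOLD else "manual_review"


# Genuine customer: strong on every signal.
print(risk_decision(VerificationSignals(0.95, 0.90, 0.85)))  # approve
# Deepfake-like attempt: the document check passes, the rest do not.
print(risk_decision(VerificationSignals(0.95, 0.20, 0.30)))  # manual_review
```

The design point is the one the providers describe: because the decision depends on several independent data points, faking the face or voice alone is not enough to clear the threshold.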
Scams Created by AI Will Raise Cyber Dangers by 2024
Leading computer security software companies anticipate that by 2024 people, including children, may be more vulnerable to phishing scams, cyberbullying, and identity theft due to artificial intelligence-generated frauds like deepfake media.
By 2024, AI will play a major role in helping cybercriminals use deepfakes—realistic fake videos, photos, and audio clips of actual people or places produced using deep learning technology—and other schemes to manipulate social media and sway public opinion like never before.
Deepfakes can have a disastrous effect on their victims’ lives, as is already evident. Worse, with a few images or audio samples, certain tools and solutions let even inexperienced users generate deepfake speech, video, and photo scams.
Furthermore, it’s anticipated that cybercriminals will take advantage of victims’ empathy, fear, and grief to increase the frequency of charity fraud in 2024. Charity frauds often occur when a criminal creates a phony website or fabricated page to deceive well-intentioned donors into believing they are contributing to worthy causes.
Using generative artificial intelligence techniques, scammers may also write code more quickly and effectively to produce malware and harmful websites.