In the information age, fake news and unverified data have become commonplace. In 2020 alone, 72% of Mexico's population used the internet (INEGI), so it is reasonable to assume these users are repeatedly exposed to false information. And while images and videos might seem more trustworthy because of how hard they once were to manipulate, the truth is that Artificial Intelligence (AI) is making it ever easier to alter video and audio.
AI uses software to create compelling images, audio, and videos so believable that it is tough to tell at first sight that they are fake; this is called a Deepfake. Deepfakes differ from fake news in that fake news does not necessarily involve multimedia elements; it is often limited to false information in text form. One of the best-known uses of deepfakes is to discredit people or distort opinions and facts in the political arena.
“AI uses software to create convincing images, audio, and videos so believable that it is tough to tell at first sight that they are fake; this is called Deepfake.”
The technology became notorious in 2017, when a Reddit user posted fake explicit videos of actresses such as Daisy Ridley, Gal Gadot, and Scarlett Johansson. Soon after, an app appeared that made creating this type of content easier, and while the results weren't perfect, modifying moving images became accessible to almost anyone. Such was the reach that by 2019, an estimated 96% of deepfake videos online were pornographic material.
Machine Learning gives rise to deepfakes through two competing algorithms: a generator and a discriminator. The generator, as its name suggests, creates fake content from the images it is given, while the discriminator scores the generator's output with 0 (if it is fake) or 1 (if it is real). This process repeats over and over; with enough repetitions, the generator starts producing more realistic content. In the end, both algorithms feed off each other. This arrangement is known as a Generative Adversarial Network, or GAN for short.
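The generator-versus-discriminator loop described above can be sketched in a few lines. The toy below is purely illustrative, not a real deepfake model: instead of images, the generator learns to imitate a one-dimensional Gaussian distribution, and both networks are reduced to single linear units with hand-derived gradients. The parameter names (`a`, `b`, `w`, `c`), the target distribution, and the learning rates are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data the generator should learn to imitate (stand-in for real images).
real_mean, real_std = 4.0, 1.25

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0           # generator parameters (start far from the target)
w, c = rng.normal(), 0.0  # discriminator parameters
lr, batch = 0.05, 32

for step in range(2000):
    # --- discriminator update: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(real_mean, real_std, batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * -np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c -= lr * -np.mean((1 - d_real) - d_fake)

    # --- generator update: push D(fake) toward 1 (fool the discriminator) ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    dloss_dx = -(1 - d_fake) * w  # gradient of -log D(fake) w.r.t. x_fake
    a -= lr * np.mean(dloss_dx * z)
    b -= lr * np.mean(dloss_dx)

print(f"generator now samples around {b:.2f} (real mean is {real_mean})")
```

Each round, the discriminator gets slightly better at spotting fakes, which in turn gives the generator a sharper signal about what to imitate; real GANs run the same loop with deep convolutional networks on images.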
Another way this content is created is with an algorithm called an encoder, which requires thousands of facial shots of two people. The encoder finds and learns the similarities between the two faces, reducing both to the characteristics they share and thereby compressing the images. A decoder is then in charge of recovering faces from the compressed images: one decoder is trained to reconstruct the first person's face, another to reconstruct the second's. Finally, the first face's compressed features are fed into the second decoder, producing a swapped face that looks legitimate.
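The shared-encoder / two-decoder idea can be illustrated with a minimal sketch. Everything here is an assumption for the demo: the "faces" are synthetic random vectors rather than photos, the shared encoder is a fixed random projection instead of a trained network, and each decoder is fit with ordinary least squares rather than backpropagation. The point is only the wiring: one common compression step, one decoder per identity, and a swap at the end.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for aligned face crops: each row is one flattened 8x8 "image"
# of person A or person B (synthetic data, purely illustrative).
n_shots, img_dim, latent_dim = 200, 64, 8
basis = rng.normal(size=(latent_dim, img_dim))           # shared facial structure
faces_a = rng.normal(size=(n_shots, latent_dim)) @ basis + 0.5
faces_b = rng.normal(size=(n_shots, latent_dim)) @ basis - 0.5

# Shared encoder: a single linear map that compresses BOTH people's faces
# down to the features they have in common.
enc = rng.normal(size=(img_dim, latent_dim)) / np.sqrt(img_dim)

def encode(x):
    return x @ enc

# One decoder per identity, fit to reconstruct that person's images
# from the shared compressed code.
dec_a, *_ = np.linalg.lstsq(encode(faces_a), faces_a, rcond=None)
dec_b, *_ = np.linalg.lstsq(encode(faces_b), faces_b, rcond=None)

# The swap: encode person A's face, but decode it with B's decoder,
# so A's pose and expression are rendered with B's appearance.
swapped = encode(faces_a[:1]) @ dec_b
print(swapped.shape)  # an image-sized output, (1, 64)
```

In production face-swap tools the encoder and decoders are deep neural networks trained jointly, but the inference-time trick is exactly this cross-wiring of encoder and decoder.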
“Machine Learning gives rise to deepfakes through two competing algorithms: a generator and a discriminator.”
Although, as mentioned, this technology is often used for political purposes, the business sector is not exempt from its consequences, which range from manipulating a stock's price to damaging the reputation of someone in an organization, and with it the reputation of the company itself. Cybersecurity is also at risk, since identity theft can harm both the organization and its customers.
Naturally, the damage from a sociological point of view is also worrying: if this practice continues, societies will grow increasingly distrustful of what they see, even when it comes in the form of audio or video.
At XalDigital, we have the solutions your company needs to grow through data and technology. Contact us to find out what we can offer you.