Imagine you are scrolling through your phone like usual after a tiring day at work, school or college. All of a sudden, your phone starts to go off. Apparently, pictures showing you performing explicit acts have been leaked. You are stunned; you never took these pictures. Hell, they don't even look like you. A wave of emotions crashes over you: shock, disgust, guilt. How and why would someone do this?
While the above may read as fiction to many of you, it is the stark reality for countless women in the world today. As artificial intelligence image processors and generators have exploded in popularity, a wildly disgusting tool has grown alongside them: AI deepfake generators. These tools take any ordinary clothed photo and essentially ‘undress’ you, producing explicit photos from what are otherwise normal, decent and innocent pictures. And the kicker? They are widely available, and almost anyone can find them with the right search on Google.
The rise in popularity of these deepfake tools should concern everyone. If you have ever posted a photo on a public social media platform, anyone could run it through a deepfake generator and have explicit content of you within minutes. This ease of access has seen many people, primarily women, become victims of blackmail or public shaming over photos that are not even real. While there has been widespread outrage over the availability of these tools, there is no permanent solution in sight. Regional bans can only temporarily deter those who are desperate to use them, and no global ban appears to be coming.
What this invariably means is that, once again, a technology with enormous potential has been turned against women. Anyone with access to a deepfake generator can churn out hundreds, if not thousands, of illegally and non-consensually generated explicit images of anyone on the planet. Not only does this threaten the autonomy and well-being of adults, but minors aren't spared either. There have been multiple instances of high school children being targeted, and even of American lawmakers being targeted after they announced plans to ban these tools. And these are just the cases we know of.
The large majority of these generated explicit images target everyday people who lack the resources to get them taken down from pornographic websites. The danger is heightened in societies like India's, where most of the population does not even know such a threat exists.
In fact, a Google search for “AI Deepfake India” does not surface a single article or blog discussing the insidious dangers of these tools. The results that do appear are concerned only with their use for political gain, and while that remains an important issue, it cannot compare to the harm that explicit imagery made with these tools can inflict on a person’s life.
This becomes a real cause for concern because these tools are certainly available in India. While a large chunk of the population remains unaware, those intent on misusing them to target individuals will no doubt get their hands on them.
In a society such as India, where ‘leaked’ images have the potential to ruin the lives of individuals and families, this is a huge red flag that must be addressed as soon as possible. In the hands of those who intend harm, AI deepfake generators can produce explicit images that are highly realistic and very difficult to scrub from the internet. And they pose the biggest threat to women, who often have no way of fighting back against pictures that seem to have appeared out of thin air.
Not only does this further exacerbate gender-based exploitation, but it also makes public social media platforms inherently unsafe and non-private. Any photo posted online can be downloaded and misused, opening the door to widespread online harassment in which the perpetrator does not even need to know the victim.
It becomes clear as day that most major technological advancements have dark clouds looming above them, and more often than not, it is women over whom the toxic rain pours. In a world that grows more dystopian by the day, global regulations on the ethics and use of AI must be put in place so that the mental trauma and hardship faced by victims of AI deepfakes do not become commonplace. Without decisive action, the cost will be borne by innumerable people who become unwilling victims of this digital menace. And their only mistake? Believing that they were safe on the internet.
Let us know your thoughts in the comments below. If you have burning thoughts or opinions to express, please feel free to reach out to us at larra@globalindiannetwork.com.