Amelie Patel discusses how the rise of AI has amplified misogyny, enabling new forms of exploitation and harm towards women online that current laws are failing to address.
Recently, Grok, the AI chatbot on X (formerly Twitter), sparked outrage over a feature that let users generate images of women and children undressed without their consent. The widespread indignation is a response to a preventable situation, and it stems from a deep-rooted awareness that AI has never been a woman’s friend.
Although ‘deepfake’ is now a term invariably associated with AI, it was coined on Reddit in 2017. By then, a new class of neural networks, the generative adversarial network (GAN), introduced in 2014, had made it possible for machines to create realistic synthetic images and videos. Bad actors quickly exploited the lack of countermeasures, using face-swapping technology to superimpose celebrity faces onto pornographic images.
In 2022, generative AI tools were released to the public without appropriate safeguards to restrict their misuse. Built predominantly by white, cis men and trained on historically skewed data, these systems work optimally for that group and to the disadvantage of women and minority groups. In her seminal book Invisible Women, Caroline Criado Perez exposes how, in a pre-AI world, systematic data bias affected women in many aspects of life. Now, in an AI world, those inequalities are being exacerbated. Various chatbots offer women discriminatory economic, psychological and health-related advice that negatively shapes their choices. As AI embeds itself in external systems, women face tangible risks: rejection by biased hiring algorithms, misdiagnosis of health issues leading to degraded care plans, and unfair assessments of creditworthiness that prevent them from securing loans.
Explicit deepfake images constitute an unacceptable violation of their victims. While in 2017 face-swapping apps targeted celebrities, women from all backgrounds now find themselves prey to deepfakes: of the 95,820 deepfakes identified online in 2023, 98% were pornographic and 99% of those videos targeted women, both inside and outside the public eye. The casualness with which a user feels justified in violating a woman’s body shows how easily AI technology can erase an individual’s bodily autonomy. Those who dare to speak out risk being exploited by the very tools they condemn: Dr Daisy Dixon told the BBC that she saw more Grok-generated images of herself online after she spoke about their danger. This underscores the need for a collective response to attacks designed to silence and divide us.
Legislation to protect women from deepfakes has so far been woefully inadequate. In the UK, sharing non-consensual deepfake pornography became illegal in 2023, yet a crucial follow-up law, which will require platforms to remove non-consensual images, has faced unexpected delays since it was passed in June, leaving victims without recourse. In July, Denmark began expanding its copyright law to give people rights over their own body, facial features and voice, providing individuals with invaluable autonomy over their likeness online. I believe such laws are essential for protecting personhood in an increasingly tech-based world, and governments elsewhere should follow suit. As AI continually regenerates and finds new ways of targeting women, legislation cannot afford to lag behind in its protection of women’s rights. These laws provide a wholesale defence rather than a reactive deflection of scandal.
When chatbots make biased suggestions about what roles women should and should not pursue, or a deepfake deprives a woman of her autonomy, patriarchal attitudes are subliminally reinforced. Similarly, AI girlfriend websites that limit the personality options for women to tropes such as ‘submissive’, ‘innocent’ or ‘caregiver’ normalise conventional gender roles. Importantly, these tools do not exist in a vacuum: they operate within an increasingly far-right online climate that promotes misogynistic influencers like Clavicular and the ‘trad-wife’ trend. These forces reaffirm one another, and slowly but surely such attitudes seep into real life, resulting in increased rates of violence against women and girls.

Deepfakes foreshadow a future in which women’s bodies may be weaponised against them by AI. Recent fears of expanding patriarchal surveillance are borne out by data harvesting, particularly in the US: last August, Meta was found to have illegally collected reproductive health information through widely used period-tracking apps. This is a massive privacy violation, carried out for the sake of targeted consumer advertising and to identify those seeking abortions in states where it is illegal. AI has clear potential to be repurposed as a tool of misogynistic warfare.
In Carmen Maria Machado’s speculative fiction story Real Women Have Bodies (2017), women are subject to a phenomenon of physical ‘fading’. In a darkly ironic response to society’s plundering of women’s minds and bodies, they are converted to the ultimate realm of fantasy. Rereading the story, I saw disturbing parallels between Machado’s fading women and today’s AI girlfriends, who similarly exist in an incorporeal domain. A reality in which AI girlfriends become the new normal feels imminent, though where that leaves real women is unclear.
Overall, large-scale retraining of AI models is essential, but it is unlikely without significant protest. Public pressure is a powerful tool for securing the critical involvement of regulators and ethicists in monitoring bias during model development. We must also hold our government accountable and fight for legislation that fully protects women’s rights from AI misappropriation.
For advice for victims of deepfake pornography: