Navigating the Intersection: Disinformation, Deepfakes and Gender-Based Violence

In recent discussions about the potential threats posed by Artificial Intelligence (AI), the spotlight has shifted away from Hollywood's exaggerated narratives of self-aware machines controlling nuclear weapons toward a more nuanced understanding. Attention is slowly turning to the pressing issues of disinformation and discrimination, particularly in the context of gender.

During this year's global "16 Days of Activism against Gender-Based Violence" (GBV) campaign, for example, we are already observing civil society actors focusing increasingly on technology-facilitated violence. Yet key players in tech and political spaces have not yet picked up the issue.

In this article, we argue that the threats at the intersection of AI and GBV are still not treated with sufficient urgency. We share our preliminary findings to inspire a global discussion on adopting intersectional perspectives to combat the threats of disinformation and deepfakes.

The Rise of Deepfakes: A Pervasive Challenge

At the intersection of AI and GBV, so-called deepfakes emerge as a core issue. Deepfakes are images or videos that have been manipulated or synthesized by AI to show people in situations they did not actually take part in; they are commonly defined as a form of online disinformation. Surprisingly (or not?), the majority of deepfakes are pornographic, challenging the predominant political discourse surrounding this issue.

According to Deeptrace, the number of deepfake videos online grew by a staggering 84%, reaching 14,678 by October 2019. Disturbingly, 96% of this content was pornographic, and all of it featured female subjects. A more recent independent analysis found that 244,625 deepfake porn videos had been uploaded to the top 35 websites dedicated to hosting such content over the past seven years, with uploads in the first nine months of this year already 54% higher than in all of 2022. Is this addressed sufficiently by private and public organisations?

Policy Gaps: The Challenge of Regulating Deepfakes

The regulation of deepfakes (and deepfake porn in particular) remains insufficiently addressed by policymakers; to our knowledge, the EU AI Act avoids concrete regulation. That the majority of deepfakes are pornographic content is not reflected in the Act's architecture. In response to this lack of concrete and effective solutions, FemAI has conducted initial research on deepfakes to challenge this policy gap.

Calling for Responsibility in the Private Sector

Recent headlines, such as "Google's and Microsoft's search engines have a problem with deepfake porn videos" (WIRED, 16.10.2023), underscore the need for companies to take responsibility. The WIRED article highlights the critical rise of deepfake porn on the internet. The call for companies such as Microsoft and Google to take responsibility in fighting the spread of deepfake (porn) is grounded in the following reasoning:

1. Prevalence of Deepfake Porn on Search Engines: Deepfake porn videos are rampant on search engines, particularly those of Google and Microsoft. The scale of the issue is substantial, with a large number of deepfake pornographic videos easily accessible through these platforms.

2. Lack of Effective Measures: Despite the increasing prevalence of deepfake (porn), the current measures in place by major search engines are inadequate in curbing the spread of such content. This lack of effectiveness underscores the need for more proactive and robust actions to address the issue.

3. Impact on Users and Society: Deepfake porn can have severe consequences for the individuals targeted, harming their reputation, mental well-being, and personal relationships. The societal impact of unchecked deepfake (porn) and the need for responsible action to protect users should be taken seriously when designing governance and policy structures.

4. Feminist Imperative: The call for responsibility is grounded in the ethical and moral imperative for technology companies to contribute to a safer online environment. As platforms with considerable influence, companies such as Google and Microsoft are urged to prioritize user safety and well-being and to address the harms associated with unregulated deepfake (porn). A feminist imperative calls for prioritising marginalised groups when designing solutions.

Our Research Method and Findings

This article is based on desk research, qualitative interviews, and a dedicated roundtable discussion we facilitated on the issue.

Through our Feminist AI and Digital Policy Roundtable, FemAI hosted a 75-minute discussion that enriched our research with the perspectives of AI experts and marginalized voices.

A Feminist Lens on Deepfake Regulation: Issues to Discuss

To address the multifaceted challenges posed by deepfakes, FemAI has identified a first set of action steps and points for discussion:

1. Raising awareness through public and private sector cooperation.

2. Initiating campaigns against deepfake porn.

3. Developing educational and technical solutions by private organizations.

4. Pushing for effective and efficient regulation at both the national and EU level.

5. Creating an urgency plan for upcoming political campaigns.

6. Fostering collaboration between stakeholders (big tech, policymakers, marginalized voices, SMEs, and NGOs).

Outlook

FemAI commits to deepening this dialogue through policy briefs and upcoming roundtables, and is actively seeking funding to further this critical work. Additionally, we emphasize the necessity of specific regulation. We highlight the upcoming EU directive on combating violence against women and domestic violence as an opportunity for such specific regulation of deepfake porn.