
Deepfakes and Public Figures: How the NO FAKES Act Aims to Combat AI-Driven Deception

INTRODUCTION

Public figures are often subject to open-ended criticism. Over time, the likeness of such figures has become an asset in itself, one that now needs protection against the rapidly expanding reach of Artificial Intelligence. Deepfake technologies use the voice and likeness of a renowned figure for commercial or other purposes without that figure's consent. In India, the controversy was sparked by the landmark win of Anil Kapoor (2) (of Slumdog Millionaire fame) against such technologies, in which the Delhi High Court recognized celebrities' personality rights. Politicians, on the other hand, have turned the technology into a campaigning tool. The United States has taken a leap forward in this direction by introducing the NO FAKES (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act bill. Several questions arise on being introduced to this issue: What is deepfake technology? Why is it dangerous? What does the act seek to address? This blog examines the act and the provisions in it that seek to protect public figures.

WHAT ARE DEEP FAKES?

We have come a long way from guarding our bank account numbers to sharing them freely with any website that asks for them. Artificial Intelligence is a power, a power that seeks to preserve itself, perpetuate itself, and propagate itself just as any human would. Deepfakes are a small part of a much larger project, one aimed at every individual who clicks "I agree" to terms and conditions without reading them. The term refers to the use of AI to create content that is not authentic (5); it gained currency in 2017, when a Reddit user created a subreddit where he posted face-swapped videos under the name "deepfakes". The technology replicates not only a person's image but also their voice, making it possible to misuse the likeness of figures with public appeal. Its operation requires two pieces of software: one creates replicas that resemble the original as closely as possible, while the other works to identify whether the replicas are authentic (3). Working in tandem, as in the sketch below, the two systems produce output that is nearly impossible for a human to trace or distinguish from the authentic original.
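To make the two-software mechanism above concrete, here is a minimal, purely illustrative sketch in Python using PyTorch: a tiny "replica-maker" (generator) is trained against a tiny "authenticity checker" (discriminator) on a toy one-dimensional distribution. Real deepfake systems apply the same adversarial idea to images and audio at vastly larger scale; every network size, learning rate, and data distribution below is an assumption made only for illustration.

```python
# Illustrative generative-adversarial sketch (assumed setup, not any real
# deepfake system): a generator learns to mimic a toy "authentic" data
# distribution while a discriminator learns to flag its output as fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Software #1: the generator turns random noise into imitation samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

# Software #2: the discriminator scores authenticity (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = 0.5 * torch.randn(64, 1) + 3.0   # toy "authentic" data: N(3, 0.5)
    fake = generator(torch.randn(64, 8))    # generator's current imitations

    # Discriminator update: learn to separate real from fake.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to make the discriminator call fakes "real".
    opt_g.zero_grad()
    g_loss = bce(discriminator(generator(torch.randn(64, 8))),
                 torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples cluster near the real data's mean (~3.0),
# showing how the two systems' tug-of-war yields convincing imitations.
print(generator(torch.randn(1000, 8)).mean().item())
```

The `detach()` call keeps the discriminator's update from leaking gradients into the generator, so each network improves only at its own job; it is precisely this tug-of-war that, at the scale of image and audio models, yields the replicas described above as nearly impossible to tell from the original.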


WHY ARE THEY DANGEROUS?

In a society where anything said on social media becomes the word of "God" itself, people have developed a tendency to believe everything they see on the internet; in a way, "fact checks have left the chat". Deepfakes have escalated into non-consensual pornography, and Donald Trump, Narendra Modi, Queen Elizabeth II, and Pope Francis are among the public figures who have been targeted (1). One of the major concerns about this practice is the widespread misinformation it sows among users. Other implications include infringement of privacy, potential defamation, conflict with intellectual property rights, and curtailment of freedom of speech and expression. The problem does not end there: because the technology is built from two pieces of software trained through machine learning, the resulting videos are extremely difficult to trace back to their origins. The accuracy with which an individual's privacy can be torn to pieces is an unsettling prospect that shakes victims to their core.

KEY PROVISIONS OF THE NO FAKES BILL:

Licensing: The bill introduces a provision for licensing; once a license is granted, the person becomes a right holder protected against any content circulated without his or her consent. The provision is inclusive, covering individuals both living and deceased.

Property Right: The act proposes to treat the right as property that is transferable, may be bequeathed by will, or may pass to heirs. The right terminates at the end of a period ranging from 10 to 70 years; for minors, a license expires at the age of majority.

Consent of the right holder: The right holder's consent is a prerequisite for the creation, publication, or distribution of any replica. Exceptions are made where the use is for bona fide public affairs, for representation in historical documentaries, or where the use of the replica is fleeting or negligible. Even then, such uses must not create a false impression of authenticity or exploit the replica for commercial gain.

Right to sue for damages: Public figures harmed by unauthorized deepfakes have civil recourse: they may sue for damages and recover compensation for reputational harm and lost income.

Freedom of speech and expression: The act safeguards freedom of speech and expression by making bona fide commentary, criticism, or satire an exception.

CONCLUSION:

Artificial Intelligence has no doubt assisted human civilization; however, it has far-reaching implications for innocent users who are drawn into the world of digitization without essential awareness of the subject matter. CTRL, a Netflix film starring Ananya Panday (4) and released on 4 October 2024, is an urgent critique of our reliance on the dark side of technology; in essence, it asks how much control we have over technology, and how much control it has over us. While wake-up calls keep arriving, it is about time we learned how these systems actually work. The proposed NO FAKES Act is a welcome step, beginning with public figures and their likenesses, since their misuse tends to have the widest impact on the public at large. It is not, however, the first such initiative: precedents such as India's advisories on the deepfakes of Katrina Kaif and Rashmika Mandanna, and the COPIED Act, are other steps taken in this direction. As we step into the universe of artificial algorithms, it is essential that we do so with the right mindset, understanding our rights, our liabilities, and the consequences that flow from them. With AI cast as a harbinger of doom, it is about time we asked: "Are we reading this of our own free will, or are we controlled by the algorithm?"

Author: Rimjim Kheda, in case of any queries please contact/write back to us via email to [email protected] or at IIPRD.
