
Algorithmic Amplification and Defamation: Legal and Ethical Implications for Digital Platforms

Introduction

Today, algorithms dictate what information users see on social media and other Internet platforms. These optimizations, aimed at increasing user engagement, produce negative externalities such as disinformation and defamatory content that harms the reputation of individuals and organizations. As this becomes the norm, questions of platform accountability, negligence, and liability take on both legal and ethical significance.

For years, internet companies have relied on ‘safe harbor’ laws such as Section 230 of the US Communications Decency Act. Yet where algorithms actively promote content and thereby participate in purveying defamation, the boundary between merely facilitating content and actively publishing it becomes increasingly blurred. This blog examines the dual role of algorithms with respect to defamation and asks whether existing laws adequately address algorithmic amplification of harmful content. It also highlights the ethical principles at stake in platform design and the algorithms built into it, with the central tension lying between maximizing revenue and the negative impacts the platform fuels in society.

Algorithmic Amplification and Legal Liability:

The legal concern with algorithmic amplification hinges on whether platforms can still be seen as neutral carriers when their embedded algorithms amplify dangerous content. Platforms hosting content, such as Facebook, YouTube, and Twitter, are often shielded by provisions such as Section 230 of the U.S. Communications Decency Act and India’s Information Technology Act, 2000, which protect them from liability for user-generated content, even unlawful content, so long as they act on it once informed. However, as Goodman and Whittington discuss in their analysis of Section 230, the presence of algorithms complicates this legal protection. Critics insist that when platforms, in pursuit of profit and engagement, promote or display content that is harmful or defamatory, they are no longer neutral intermediaries. This was evident in the Myanmar crisis, where Facebook’s algorithm surfaced hate speech that aided ethnic violence against the Rohingya Muslims, undermining the notion that algorithmic amplification is harmless and calling into question the adequacy of existing legal protections.


Several prominent cases illustrate the main trends associated with algorithmic amplification. For example, in Gonzalez v. Google LLC, the family of Nohemi Gonzalez brought an action against Google, alleging that YouTube’s recommendation algorithm had promoted ISIS content and thereby contributed to the terrorist attack in which she was killed. The case raised an important legal question: whether the active promotion of content, as distinct from its mere distribution, strips a platform of ‘safe harbor’ protection under Section 230. On the question of platform liability for algorithmic amplification, the case was not a decisive ruling: the US Supreme Court ultimately vacated the lower court’s decision and remanded the matter without disturbing Section 230(c) of Title 47, which had shielded Google from liability for third-party content posted and amplified through its algorithms.

Christian Fuchs, in his critique of Herman and Chomsky’s propaganda model, argues that algorithms help construct social reality by deciding what content reaches users based on what performs well with them. This poses an ethical problem for platforms, which must ensure that their algorithms do not amplify harmful and defamatory content simply because it performs.

Facebook’s involvement in the Myanmar genocide is another example of algorithmic amplification with real-world consequences. Facebook’s algorithm, designed to promote provocative content, amplified hate speech that contributed to the persecution of Rohingya Muslims in Myanmar. This episode put the spotlight on the tension between platform growth and societal harm; as Susan Etlinger highlights in her report on platform governance, platforms need to revisit their governance frameworks to factor in the social cost of their algorithms. The Facebook case in Myanmar showed that existing legal regimes could not prevent the dangerous spread of such content, and that the platform did not assume ethical responsibility for users’ safety even while profiting from their interactions. Taken together, these cases show that it is high time for platforms to reconsider their legal status under safe harbor laws and to accept ethical responsibility for moderating the content their algorithms promote.

Ethical Dilemmas in Algorithmic Design:

The amplification of defamatory content by algorithms raises not only legal considerations but also ethical dilemmas. In general, recommendation algorithms reward content that generates high engagement, and engagement is often achieved at the price of responsibility. This leads to a crucial ethical question: should platforms design their algorithms around truthfulness and user safety rather than engagement alone?

The issue rests between the right to speak and write as one desires on the one hand, and the need to protect individuals and society at large from harm on the other. Even though platforms proclaim themselves to be agents of free speech, the consequences of their algorithm-driven decisions are real. As Christian Fuchs explains in his discussion of the propaganda model in the age of social media, algorithms can be weaponized to manipulate public opinion, raising the prospect of destabilizing defamation and misinformation.

Platforms therefore owe a duty of care in how they implement their algorithms: greater effort must go into avoiding harm while continuing to attract user interest. Because content amplification magnifies a platform’s societal influence, ethical algorithm design becomes all the more important. A simplified sketch of what such a design constraint might look like appears below.
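Purely as an illustration (not any platform’s actual system), the following hypothetical Python sketch shows how a ranking score that depends only on predicted engagement could instead incorporate an explicit penalty for content a classifier flags as potentially defamatory or otherwise harmful. Every name, weight, and threshold here is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    predicted_engagement: float  # e.g. expected clicks/shares, 0.0-1.0
    harm_score: float            # classifier estimate that the item is defamatory/harmful, 0.0-1.0

def rank_score(item: ContentItem, harm_weight: float = 2.0) -> float:
    """Engagement-only ranking would return predicted_engagement alone.
    A duty-of-care design subtracts a weighted harm penalty, so highly
    engaging but likely-defamatory items are demoted rather than amplified."""
    return item.predicted_engagement - harm_weight * item.harm_score

def rank_feed(items: list[ContentItem], harm_threshold: float = 0.8) -> list[ContentItem]:
    # Items above the harm threshold are excluded from amplification and routed to human review.
    eligible = [i for i in items if i.harm_score < harm_threshold]
    return sorted(eligible, key=rank_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        ContentItem("a", predicted_engagement=0.9, harm_score=0.7),   # viral but risky: demoted
        ContentItem("b", predicted_engagement=0.5, harm_score=0.05),  # moderate and safe: promoted
        ContentItem("c", predicted_engagement=0.95, harm_score=0.9),  # excluded, sent to review
    ])
    print([i.item_id for i in feed])  # ['b', 'a'] under these example weights
```

The point of the sketch is only that a duty of care can be expressed as a concrete design decision in the ranking logic, rather than left as an abstract aspiration.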

The Path Forward: Legal and Regulatory Reforms:

Legal regulation of digital platforms will always lag behind the technology it governs. The threats posed by algorithmic amplification call for a rethinking of safe harbor protections, which were conceived at a time when platforms merely provided venues for sharing content rather than actively amplifying it. Potential reforms include the following:

  1. Algorithmic Impact Statements (AIS): This proposal would require platforms to evaluate the social, ethical, and legal ramifications of an algorithm before it is deployed or updated. Similar to environmental impact assessments, an AIS would examine how an algorithm might accentuate the spread of harmful material such as fake news, hate speech, or defamation. By considering possible risks before an algorithm is launched on the platform, the major harms arising from content amplification can be addressed pre-emptively and the technology kept within legal and ethical bounds, laying down a watchful and transparent approach to the development of algorithms that substantially influence the public (an illustrative sketch of what such a statement might record follows after this list).
  2. Graduated Liability Model: Under this model, a platform’s liability would increase in proportion to its scale and influence. Platforms could be differentiated by the size of their user base: larger platforms, for instance, would be required to make their algorithms more transparent and adopt stronger moderation policies, bearing greater societal responsibility, while smaller platforms would not face disproportionate regulatory burdens. To enhance accountability, safe harbor should be reassessed so that platforms that actively promote or knowingly amplify defamatory or otherwise vile content do not receive its protection. This would prevent the ‘neutral platform’ doctrine from being abused to exonerate Internet companies that act as conduits for the distribution of malicious content.
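To make the AIS idea concrete, the hypothetical Python structure below sketches the kind of information such a statement might record before an algorithm change goes live. No statutory template exists; every field name and value here is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmicImpactStatement:
    """Hypothetical record a platform might file before deploying or updating a ranking algorithm."""
    algorithm_name: str
    change_description: str
    affected_user_base: int                                        # estimated number of users exposed to the change
    amplification_risks: list[str] = field(default_factory=list)  # e.g. defamation, hate speech, disinformation
    mitigations: list[str] = field(default_factory=list)          # demotion rules, human review, appeal channels
    reviewed_by: str = ""                                          # internal or external reviewer
    approved: bool = False

# Example filing for a hypothetical feed-ranking update.
example_ais = AlgorithmicImpactStatement(
    algorithm_name="feed-ranker-v2",
    change_description="Increase the weight of re-shares in the engagement score",
    affected_user_base=50_000_000,
    amplification_risks=["defamatory posts may spread faster via re-shares"],
    mitigations=[
        "cap re-share weight for items flagged by the defamation classifier",
        "route borderline items to human review before amplification",
    ],
    reviewed_by="Independent oversight panel",
)
```

In practice such a statement would more likely take the form of a regulatory filing than code, but expressing it as a data structure shows that the required disclosures can be made specific and auditable.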

Conclusion

The algorithmic spread of defamatory content is a key problem for both legal and ethical disciplines in today’s social media space. Because algorithms are now the main means of content distribution, platforms can no longer be regarded as mere or neutral intermediaries. Legal reform must distinguish between passive hosting and active algorithmic involvement in the dissemination of harmful content, while ethical reform should push platforms toward less destructive designs.

The future of digital content moderation will therefore be debated in terms of freedom of speech, platform responsibility, and the prevention of harm. As legal cases and ethical debates continue to unfold, one thing is certain: platforms must bear greater accountability for the material their recommendation algorithms push and for the harm users ultimately suffer as a result.

Author: Mimuksha Darak. In case of any queries, please contact/write back to us via email at [email protected] or at IIPRD.

References:

  1. Goodman, Ellen P., and Ryan Whittington, “Section 230 of the Communications Decency Act and the Future of Online Speech”, German Marshall Fund of the United States, 2019. http://www.jstor.com/stable/resrep21228
  2. Fuchs, Christian, “Propaganda 2.0: Herman and Chomsky’s Propaganda Model in the Age of the Internet, Big Data and Social Media”, in The Propaganda Model Today, edited by Joan Pedro-Carañana, Daniel Broudy, and Jeffery Klaehn, University of Westminster Press, 2018. https://www.jstor.org/stable/j.ctv7h0ts6.8
  3. Etlinger, Susan, “What’s So Difficult about Social Media Platform Governance?”, in Models for Platform Governance, Centre for International Governance Innovation, 2019. https://www.jstor.org/stable/resrep26127.6
  4. Reynaldo Gonzalez et al. v. Google LLC, 2023 SCC OnLine US SC 12
  5. Section 230(c) of Title 47 of the U.S. Communications Decency Act (47 U.S.C. § 230(c) (1996)):
  6. Section 230(c)(1) – Protects online service providers from being treated as the publisher or speaker of content provided by others.
  7. Section 230(c)(2) – Protects online service providers from civil liability if they remove or restrict content that they deem objectionable, as long as they act in good faith.