Today, algorithms dictate what information users see on social media and other Internet outlets. These optimizations, aimed at maximizing user engagement, produce negative externalities such as disinformation and defamation, content that harms the reputation of a person or an organization. As this becomes the norm, questions of platform accountability, negligence, and liability become legally and ethically pressing.
For years, internet companies availed themselves of "safe harbor" laws such as Section 230 of the US Communications Decency Act. However, where algorithms actively promote content and thereby become participants in purveying defamation, the boundary between merely facilitating content on the one hand and actively publishing defamation on the other becomes ambiguous. This blog examines both sides of the algorithm question as it relates to defamation and asks whether existing law adequately addresses the algorithmic repetition of abusive content. It also draws attention to the ethics of platform design and of embedding algorithms at a platform's core, where the central conflict lies between maximizing revenue and the harms the platform fuels in society.
The legal concern with algorithmic amplification hinges on whether platforms can still be treated as merely neutral carriers once the algorithms embedded within them amplify dangerous content. Platforms such as Facebook, YouTube, and Twitter are often shielded by provisions such as Section 230 of the U.S. Communications Decency Act and the intermediary safe harbor under India's Information Technology Act, 2000, so long as they do not create the content themselves and act against unlawful user-generated content once informed of it. However, as Goodman and Whittington discuss in their article on Section 230, the presence of algorithms muddies this legal protection. Critics insist that when platforms promote or display harmful or defamatory content for the sake of profit and engagement, they are no longer neutral intermediaries. This is what happened in the Myanmar crisis, where Facebook's algorithm surfaced hate speech that fed ethnic violence against the Rohingya Muslim people, an episode that rebuts the notion that algorithmic amplification is harmless and calls the adequacy of existing legal protections into question.
The main trends associated with algorithmic amplification can be observed in several prominent cases. In Gonzalez v. Google LLC, the family of Nohemi Gonzalez, a victim of the 2015 Paris terrorist attacks, brought an action against Google, claiming that YouTube's recommendation algorithms had promoted ISIS content to users and thereby aided the organization responsible for the attack. The case raised a key legal question: does the active promotion of content, as opposed to its mere distribution, strip a platform of "safe harbor" protection under Section 230? The lower courts had held that Section 230(c) of Title 47 shielded Google from liability for user content surfaced and amplified by its recommendation algorithms, but the U.S. Supreme Court ultimately vacated that decision and remanded the case without resolving the Section 230 question, concluding in light of Twitter v. Taamneh that the underlying claims were unlikely to succeed. The broader question of platform liability for algorithmic amplification therefore remains unsettled.
Christian Fuchs, in his critique of Herman and Chomsky's propaganda model, argues that algorithms help construct social reality by privileging whatever content performs best with users. This poses an ethical problem for platforms: they must ensure their algorithms do not amplify harmful and defamatory content simply because it performs well.
Facebook's involvement in the Myanmar genocide is another example of amplification with real-world consequences. Facebook's algorithm, designed to promote engaging and often shocking content, amplified hate speech that contributed to the persecution of Rohingya Muslims in Myanmar. The episode put the tension between platform growth and societal harm in the spotlight; as Susan Etlinger highlights in her report on platform governance, platforms need to revisit their governance frameworks to factor in the social costs of their algorithms. The Myanmar case showed that existing regimes could not prevent the dangerous spread of such content, and that the platform did not assume ethical obligations for users' safety even while profiting from their interactions. Taken together, these cases show that it is high time platforms reconsidered both their legal position under safe harbor laws and their ethical responsibilities for moderating the content their algorithms promote.
The algorithmic repetition of defamatory content raises not only legal considerations but also ethical dilemmas. In general, recommendation algorithms reward content that generates high engagement, and that engagement is often achieved at the price of responsibility. This leads to a crucial ethical question: should platform owners design their algorithms around truthfulness and user safety rather than engagement alone?
The issue sits between the right to speak and write as one desires on one hand and the need to protect individuals and society at large from harm on the other. Even though the platforms present themselves as agents of free speech, the consequences of their algorithm-driven decisions are real. As Christian Fuchs also explains in discussing propaganda models in the age of social media, algorithms can be weaponized to manipulate public opinion, raising concerns about the destabilizing potential of defamation and misinformation.
In running these platforms, there needs to be a duty of care in how algorithms are implemented, with greater effort taken to avoid causing harm while continuing to attract user interest. Because amplified content carries amplified societal influence, ethical algorithm design is essential for these platforms, a point made concrete in the sketch below.
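To illustrate the trade-off, here is a minimal, purely illustrative Python sketch, not any platform's actual ranking code: the Post fields, scores, penalty weight, and threshold are invented assumptions. It contrasts ranking purely by predicted engagement with a "duty of care" variant that discounts, or refuses to amplify, content a hypothetical classifier flags as likely defamatory or harmful.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_engagement: float  # expected engagement (clicks/shares), scaled 0..1 (assumed)
    predicted_harm_risk: float   # hypothetical classifier score for defamatory/harmful content, 0..1

def engagement_only_score(post: Post) -> float:
    """The status quo the blog criticises: rank purely by expected engagement."""
    return post.predicted_engagement

def duty_of_care_score(post: Post, harm_penalty: float = 2.0, suppress_above: float = 0.8) -> float:
    """A 'duty of care' variant: penalise predicted harm, and refuse to amplify
    content whose harm risk exceeds a threshold, even if it would perform well."""
    if post.predicted_harm_risk >= suppress_above:
        return float("-inf")  # do not amplify likely-defamatory content at all
    return post.predicted_engagement - harm_penalty * post.predicted_harm_risk

posts = [
    Post("benign_news", predicted_engagement=0.40, predicted_harm_risk=0.05),
    Post("outrage_bait", predicted_engagement=0.90, predicted_harm_risk=0.70),
    Post("likely_defamatory", predicted_engagement=0.95, predicted_harm_risk=0.90),
]

# Engagement-only ranking puts the riskiest content first.
print([p.id for p in sorted(posts, key=engagement_only_score, reverse=True)])
# -> ['likely_defamatory', 'outrage_bait', 'benign_news']

# The duty-of-care ranking demotes it, accepting lower engagement.
print([p.id for p in sorted(posts, key=duty_of_care_score, reverse=True)])
# -> ['benign_news', 'outrage_bait', 'likely_defamatory']
```

The point of the sketch is design intent, not implementation detail: once a harm-risk signal exists, down-weighting it in ranking is a deliberate product choice, and declining to make that choice is equally deliberate.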
The legal regulation of digital platforms will always lag behind their technological development. The threats sparked by algorithmic amplification require a rethink of safe harbor protections, rules conceived at a time when platforms merely provided venues for sharing rather than actively amplifying it. Potential reforms include distinguishing passive hosting from active algorithmic promotion and imposing a duty of care on platforms for the content their algorithms amplify.
Conclusion
The algorithmic spread of such content is a key problem for both law and ethics in today's social media landscape. When algorithms are the primary means of content distribution, platforms can no longer be treated as passive or neutral intermediaries. Legal reform should distinguish active algorithmic involvement in disseminating harmful content from mere hosting, while ethical reform should push for platform designs that are less destructive.
The future of digital content moderation will therefore be negotiated among freedom of speech, the responsibilities of platforms, and the prevention of harm. As legal cases and ethical debates continue to unfold, one thing is certain: platforms must bear greater accountability for the material their recommendation algorithms push and for the harm users ultimately suffer.
Author: Mimuksha Darak. In case of any queries, please contact/write back to us via email at chhavi@khuranaandkhurana.com or at IIPRD.
References: