AI Governance in Carbon Capture Technology: Bridging Indian Legal Frontiers for Climate Resilience – Part III
The Need for Lawful AI Regulation in CCS Technology
Concerns about Accountability and Transparency
In critically examining the necessity of legal regulation of AI in this sector, the first issue to address is the complexity of AI applications in CCS technology and the range of legal issues they raise. These applications span a wide variety of activities, each raising specific legal and ethical questions: from the design of machine learning algorithms and the handling of massive amounts of environmental data to the strategic planning of CCS initiatives and the optimisation of CCS system operations.
Transparency is essential to fostering acceptance of and confidence in AI systems, especially when those systems are used in contexts as important and consequential as CCS. Transparency is severely hampered by the “black box” character of AI algorithms, whose inner workings are often opaque even to their creators. These difficulties lead to the following questions:
Reliance on AI: Is it Reliable?
The answer is likely to be complicated and dependent on several variables. The reliability of an AI system’s decisions will depend heavily on the robustness of its design and implementation, the calibre and relevance of the data it is trained on, and the degree of human oversight in its deployment and use. However, there remains a substantial deficit of confidence in AI, especially in critical fields such as CCS, which makes widespread acceptance of the technology difficult. AI regulation must therefore incorporate measures to reduce this deficit, such as mandating explainability in AI design or promoting hybrid decision-making models that combine AI and human judgement, as the sketch below illustrates.
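To make the idea of a hybrid decision-making model concrete, here is a minimal sketch of a human-in-the-loop gate in Python. The confidence threshold, data fields, and reviewer function are hypothetical illustrations under assumed names, not a prescribed regulatory standard.

```python
from dataclasses import dataclass

# Hypothetical confidence threshold below which a human expert must review
# the AI recommendation; the value is illustrative, not a regulatory standard.
REVIEW_THRESHOLD = 0.90

@dataclass
class Recommendation:
    action: str        # e.g. "increase injection rate at well W-7"
    confidence: float  # model's self-reported confidence in [0, 1]

def decide(rec: Recommendation, human_review) -> str:
    """Hybrid decision gate: act autonomously only on high-confidence
    recommendations; escalate everything else to a human operator."""
    if rec.confidence >= REVIEW_THRESHOLD:
        return f"AUTO-APPROVED: {rec.action}"
    # Low confidence: the human reviewer has the final say.
    verdict = human_review(rec)
    return f"HUMAN-{'APPROVED' if verdict else 'REJECTED'}: {rec.action}"

# Example: a cautious reviewer who rejects anything the model is unsure about.
cautious_reviewer = lambda rec: False
print(decide(Recommendation("increase injection rate at well W-7", 0.97),
             cautious_reviewer))
print(decide(Recommendation("reduce monitoring at site S-2", 0.55),
             cautious_reviewer))
```

The design choice here is that the machine never has the final word on low-confidence decisions; a regulation mandating hybrid decision-making would, in effect, require a gate of this kind somewhere in the pipeline.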
Drawbacks of AI Decisions
The answer here is not simple either. Because AI is a “black box”, it can be difficult to fully understand and anticipate the possible effects of decisions made by AI, especially in situations as complex and dynamic as CCS. This uncertainty can produce unanticipated and potentially detrimental effects, endangering the trustworthiness and dependability of AI-integrated systems. AI regulation should therefore include rules for the thorough testing, monitoring, and validation of AI systems, especially when those systems are used in critical industries. In addition, legal and regulatory frameworks must be put in place to ensure responsibility and provide recourse in situations where AI systems produce unanticipated adverse outcomes.
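As one concrete form such validation could take, the sketch below checks every AI recommendation against engineering bounds before it can be acted on. The parameter names and safe ranges are hypothetical illustrations, not values drawn from any actual CCS standard.

```python
# A minimal sketch of post-deployment validation for an AI-driven CCS
# controller: every recommendation is checked against engineering bounds
# before it can be acted on. Parameter names and bounds are hypothetical.

SAFE_BOUNDS = {
    "injection_rate_t_per_day": (0.0, 5000.0),
    "reservoir_pressure_mpa": (0.0, 30.0),
}

def validate(recommendation: dict) -> list[str]:
    """Return a list of violations; an empty list means the recommendation
    passes the sanity check and may proceed to the next review stage."""
    violations = []
    for key, value in recommendation.items():
        low, high = SAFE_BOUNDS.get(key, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            violations.append(f"{key}={value} outside [{low}, {high}]")
    return violations

rec = {"injection_rate_t_per_day": 6200.0, "reservoir_pressure_mpa": 18.5}
problems = validate(rec)
print("BLOCKED:" if problems else "OK", problems)
```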
Therefore, these concerns must be properly accounted for when drafting AI rules and regulations, so that transparency is preserved.
Next is the question of responsibility. The following questions arise when an AI system deployed in CCS technology makes a decision that has unforeseen repercussions.
AI’s Accountability Factor
It is crucial to acknowledge that the answer is not simple. It depends substantially on the particulars of the case, such as the decision-making environment, the nature of the unintended repercussions, and the involvement of various stakeholders in the design, operation, and supervision of the AI system. In general, one may contend that accountability ought to rest with the organisation that has the greatest influence on the creation, application, and functioning of the AI system. However, because AI systems are so interconnected and heavily integrated, this becomes more difficult in practice.

The AI system’s creators may contend that they only supplied the tools and that the operators were the ones who misused or improperly employed them. The operators, in turn, may contend that they used the system for what it was designed to do and that the designers should be held accountable if the system did not function as planned. Policymakers, for their part, may counter that they permitted the deployment of AI technology on the basis of the knowledge available at the time. Because of this blurring of the boundaries of responsibility, comprehensive laws that specify duties and accountability in specific situations are required. This would involve creating legal frameworks, possibly built on pre-existing legal notions such as “product liability” and “professional negligence”, that expressly address responsibility in AI systems.
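One technical mechanism that could support such an allocation of responsibility is a decision audit trail that records, for every AI decision, the model version (traceable to the designer), the training data (traceable to the data governance process), and the operator. A minimal sketch follows; all field names are hypothetical assumptions, not an established standard.

```python
import json
from datetime import datetime, timezone

def audit_record(decision: str, model_version: str,
                 training_data_id: str, operator: str) -> str:
    """Serialise one AI decision with the provenance needed to trace
    responsibility back to designer, data steward, and operator."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "model_version": model_version,        # points to the designer's artefact
        "training_data_id": training_data_id,  # points to the data governance trail
        "operator": operator,                  # points to the deploying entity
    }
    return json.dumps(record)

# In practice this line would be appended to tamper-evident storage.
print(audit_record("approve injection plan P-14", "ccs-opt-2.3",
                   "env-data-2024-Q2", "site-operator-A"))
```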
Who Would Be Liable for the Acts of AI?
One may argue that the AI system’s designers or trainers should bear the blame: they are usually in charge of selecting the training data and should ensure that it is impartial and representative. This underlines the importance of strong data governance procedures in the design of AI systems, procedures that should be underpinned by ethical and legal standards. In practice, though, the situation can be more complicated. Biases may have been unknown or unavoidable, and the designers may have used the best data available at the time. Moreover, it may be argued that policymakers and operators share the blame for failing to sufficiently evaluate the AI system and its potential biases prior to its implementation.
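As an illustration of what such a data governance check might look like in practice, the sketch below compares the regional composition of a hypothetical training dataset against a reference distribution before training is allowed to proceed. The regions, shares, and tolerance are all illustrative assumptions.

```python
from collections import Counter

def representativeness_gaps(training_regions, reference_shares, tolerance=0.05):
    """Flag regions whose share of the training data deviates from the
    reference population by more than the tolerance."""
    counts = Counter(training_regions)
    total = sum(counts.values())
    gaps = {}
    for region, expected in reference_shares.items():
        observed = counts.get(region, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[region] = (round(observed, 3), expected)
    return gaps  # empty dict -> no region is badly under- or over-represented

# Hypothetical dataset skewed towards one region.
training_regions = ["north"] * 70 + ["south"] * 20 + ["east"] * 10
reference_shares = {"north": 0.4, "south": 0.3, "east": 0.3}
print(representativeness_gaps(training_regions, reference_shares))
```

A check of this kind would not settle the legal question of who is liable, but it would give designers, operators, and policymakers a documented, auditable step on which responsibility could be assessed.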
These instances are only a few examples, but they highlight the crucial need for legislative regulation to establish responsibility in the use of AI in CCS technology.
The future of CCS is becoming increasingly AI-integrated, and the legal framework must therefore address the problem of bias in AI decisions. As previously shown for CCS technology, unconscious biases in training data can lead AI systems to reinforce or escalate those biases in their conclusions. This could result in an unfair distribution of resources or responsibilities, with substantial consequences for CCS initiatives and society as a whole. Clear laws are required to guarantee that AI systems are designed and trained to reduce bias and promote equitable outcomes in CCS technology.
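Regulators would also need some way to measure whether outcomes are in fact equitable. One simple, hypothetical measure is the gap in approval rates across regions in an AI-driven allocation of CCS resources, sketched below; the data and the idea that a large gap warrants scrutiny are illustrative assumptions, not a proposed legal test.

```python
def approval_rates(decisions):
    """Compute the share of approved applications per region from
    (region, approved) pairs, where approved is 1 or 0."""
    rates = {}
    for region, approved in decisions:
        granted, total = rates.get(region, (0, 0))
        rates[region] = (granted + approved, total + 1)
    return {r: g / t for r, (g, t) in rates.items()}

# Hypothetical allocation decisions made by an AI system.
decisions = [("north", 1), ("north", 1), ("north", 1), ("north", 0),
             ("south", 1), ("south", 0), ("south", 0), ("south", 0)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", round(gap, 2))  # a large gap warrants scrutiny
```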
Author: Kaustubh Kumar. In case of any queries, please contact or write back to us via email at [email protected] or at IIPRD.