AI and Indian Law: Addressing Privacy, Ethics, and Copyright Challenges in the Digital Age

Background:

The rapid evolution of Artificial Intelligence (AI) has transformed the digital landscape, offering unprecedented advancements in productivity, time efficiency, and cost optimization across various sectors in India, including healthcare, education, agriculture, smart cities, and transportation. However, as AI technology continues to evolve, it also presents growing concerns, particularly regarding ethical standards, data privacy, and the potential for copyright infringement. These issues are especially pronounced given AI's reliance on vast amounts of data, including literary works and creative content owned by original authors or publishers. In the absence of specific regulations for AI, it is imperative to evaluate the adequacy of India's current legal framework in protecting individual rights and ensuring that the benefits of AI do not come at the cost of privacy or intellectual property. This article examines the current gaps in the legal framework for AI regulation and governance in India and suggests potential solutions.

AI Functioning:

AI companies gather, analyze and process data from various sources to configure and train their AI systems, thereby setting initial parameters. These systems often rely on machine learning (ML) algorithms, which allow machines to autonomously learn from data, recognize patterns (such as consumer behavior and preferences), and make predictions. They can also perform tasks automatically without needing to be explicitly programmed for each task every time. A key characteristic of AI systems, especially those built on machine learning, is their ability to adapt to new data over time, improving performance or adjusting to shifts in patterns, thus replicating human intelligence and problem-solving abilities without requiring human intervention[1] each time.
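To illustrate the learning-from-data behavior described above, the following is a minimal, self-contained sketch of an online machine-learning rule (a simple perceptron). The "consumer preference" data, feature names, and thresholds are entirely hypothetical and chosen only for illustration; real AI systems use far larger datasets and more sophisticated models.

```python
# Minimal sketch of machine learning: a perceptron that learns a decision
# rule from labelled examples and keeps adapting as new data arrives,
# rather than being explicitly programmed with the rule itself.

def train_step(weights, bias, features, label, lr=0.1):
    """Nudge weights toward the correct label (perceptron update rule)."""
    prediction = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
    error = label - prediction
    weights = [w + lr * error * x for w, x in zip(weights, features)]
    bias = bias + lr * error
    return weights, bias

# Hypothetical "consumer preference" data: features = [hours browsing,
# items viewed]; label = 1 if the user made a purchase.
data = [([2.0, 5.0], 1), ([0.5, 1.0], 0), ([3.0, 4.0], 1), ([0.2, 0.5], 0)]

weights, bias = [0.0, 0.0], 0.0
for _ in range(20):                      # several passes over the data
    for features, label in data:
        weights, bias = train_step(weights, bias, features, label)

# The learned rule now classifies an unseen example without any
# hand-written "if browsing hours > X" logic.
new_user = [2.5, 4.5]
score = sum(w * x for w, x in zip(weights, new_user)) + bias
print("likely purchaser" if score > 0 else "unlikely purchaser")
```

The same update rule keeps running as fresh examples arrive, which is the "adaptation over time" property noted above, and also why biased or flawed training data (discussed in the next section) propagates directly into the system's automated decisions.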

Ethical Concerns:

If the data used to initially configure AI systems is biased, incomplete, or inadequately protected, it can lead to unethical practices or discriminatory, flawed, or harmful recommendations as an outcome of the AI system's automated decision-making. A recent incident in Texas has brought global attention to the potential dangers of AI, emphasizing the significant risks involved in its use. A 17-year-old seeking help from the Character.ai chatbot service, which generates human-like responses, was advised by the AI chatbot to kill his parents due to restrictions on screen time[2]. Character.ai, allegedly developed by former Google AI developers, has faced criticism after the teen's family filed a lawsuit against both the platform and Google. This incident is not an isolated case; there have been other reports of manipulative behavioral recommendations or harmful interactions with Character.ai and similar AI platforms, including incidents related to suicide and self-harm.

As AI platforms become more integrated into daily life, their inability to address harmful interactions in an ethical manner raises serious concerns. This situation brings up crucial questions about the responsibility and accountability of AI developers and platform operators in establishing and enforcing ethical guidelines and governance frameworks. A key aspect of this is ensuring the safety, dignity, and well-being of users, particularly vulnerable individuals. Additionally, there is a need to determine accountability for the dissemination of incorrect information or harmful recommendations made by these platforms. However, these issues remain largely unresolved due to the lack of specific regulations governing AI applications and systems. Without clear and comprehensive regulatory frameworks, the potential for harm remains, and the ethical responsibilities of AI developers and operators continue to be ambiguous.

AI and DPDPA Interplay:

AI companies act as data aggregators/data fiduciaries and collect various types of data, including public data, personal data and Sensitive Personal Data or Information ("SPDI") of individuals/data principals, to train and configure their AI systems for purposes such as targeted advertising, healthcare diagnostics, financial services, etc. While public data can be used without legal concerns, the use of personal data, including SPDI, would trigger India's data protection law, i.e. the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 ("SPDI Rules"), which aim to safeguard sensitive personal data, including passwords, financial details, medical history, sexual orientation, and biometric data, provided to a data fiduciary for services or data processing under a lawful agreement. Recently, India introduced the Digital Personal Data Protection Act, 2023 ("DPDPA"), which will come into effect through an official Gazette notification. The DPDPA provides uniform regulation for all personally identifiable data (both personal data and SPDI), whether collected in digital form or digitized subsequently, and has extraterritorial applicability if data is processed in relation to offering goods or services to Data Principals within India. As per the data protection framework in India, data fiduciaries (including entities/platforms which control the AI systems/platforms) need to comply with the following relevant provisions[3]:

a. Consent Mandate: Personal data must be collected with explicit prior consent (through privacy notice specifying the use and purpose of collection) from individuals, who should have the ability to understand, review, modify, or withdraw their consent at any time. Upon withdrawal, the data must be deleted.

b. Data Protection Measures: The collected data must be secured against breach by using adequate practices, such as adherence to the IS/ISO/IEC 27001 standard for information security management. This level of protection must also apply to any data transferred to third-party processors.

c. Grievance Redressal: Data principals must be notified about the grievance redressal mechanism and the officer appointed thereunder, so that they can reach out and complain about breach or inaccuracy of their data.

Even though the data protection law in India comprehensively deals with privacy aspects, the use of personal data by AI systems in and outside India raises serious privacy, security, transparency and accountability issues. The DPDPA and the recent draft Digital Personal Data Protection Rules, 2025 do not specifically deal with crucial aspects such as profiling, surveillance and the regulation of AI systems, which are increasingly becoming prevalent across sectors. This poses potential risks for data principals and makes the protection of such data from security breaches, hacking, misuse and unauthorized access challenging. Further, the regulation of future AI systems and applications still needs to be considered, given changing trends, technological advancements and the possibility of misuse of data, which may lead to serious consequences for users and a broader societal impact. Hence, the protection of personal data in AI systems requires more robust and adaptive regulations that keep pace with evolving technological trends and the potential for misuse.

AI and Copyright Interplay:

Content-generating AI tools have become increasingly popular, especially in academic and professional spheres. These applications gather online information or existing literature on a given topic and instantly generate content. However, this functionality can raise concerns about copyright infringement if the content is copied, used, or summarized without the copyright holder's consent, as well as issues related to ownership, transparency, and accountability regarding the accuracy of the provided information. The Copyright Act, 1957 does not explicitly cover AI-generated content, leaving a significant gap in determining the ownership of works created by AI systems/machines. This has already given rise to legal disputes from content owners. A recent example is a lawsuit filed in November 2024 before the Hon'ble Delhi High Court by the news agency Asian News International (ANI) against OpenAI for using ANI's copyrighted news data to train its AI model/system/chatbot, ChatGPT, without obtaining any license or permission[4]. It will be intriguing to see how the court interprets whether such usage constitutes copyright infringement or qualifies as "fair use" under Indian copyright law. The outcome of this case is expected to serve as a foundational ruling and precedent on the intersection of AI and copyright law. While this ruling may provide some clarity, it highlights the urgent need for specific provisions or legislation to address copyright issues arising from the use of AI systems and applications. Such regulation is essential to mitigate potential copyright violations and ensure that AI technologies are developed and utilized in a way that does not result in infringement of intellectual property rights.

Conclusion & Way Forward:

In light of the above, it can be concluded that the current legal framework in India is inadequate for regulating AI systems and applications. In the absence of specific legislation for AI regulation, the guidelines and principles for responsible AI released by NITI Aayog, India's policy think tank, may provide a guiding framework. However, these guidelines are non-binding and insufficient, leaving the regulatory landscape for AI in India unclear, especially in areas related to privacy, ethics, and copyright.

Given the gaps in existing regulations, there is a pressing need for specific legislation to govern AI technologies, ensuring legal protection for citizens, especially in areas such as privacy, ethics, and copyright. In this regard, India can draw insights from the European Union's (EU) new AI Act, the first comprehensive regulation on AI. The EU's AI Act categorizes AI systems based on their risk levels and establishes specific compliance requirements. It bans AI systems/applications that create unacceptable risk, such as government-run social scoring of the type used in China; sets stricter requirements/obligations for high-risk applications (such as CV-scanning tools)[5]; and does not subject AI systems presenting only minimal risk to additional obligations. It also addresses issues such as transparency, accountability, data governance standards, AI-driven copyright infringement and the ethical implications of AI with more robust provisions. Adopting a similar approach in India would help address existing and potential issues related to ethics, privacy, security, and copyright violations, while also striking a balance between promoting technological innovation and ensuring legal protections.
