A STATUTE BEHIND THE CURVE: A CRITICAL EVALUATION OF THE DPDP ACT’S INADEQUACY IN REGULATING GENERATIVE AI SYSTEMS

Understanding Artificial Intelligence: Mechanisms of Data-Driven Decision Making
By Harjas Singh Gulati (NMIMS School of Law, Bengaluru) & Priyanshi Bainwala (Associate at Choudhury’s Law Offices)
AI is an advanced tool that analyses and learns from data patterns in order to make decisions on behalf of the user. It provides contextually aware answers to user prompts and supports automated document review, proofreading, information retrieval and storage, as well as a bird’s-eye view of personalised solutions to queries. AI functions by processing large amounts of data through its algorithms, which enable the machine to recognise patterns, make predictions, and apply self-learning and diagnostic measures over time.
Generative AI and the DPDP Act
The DPDP Act primarily addresses data protection principles concerning data processing, consent management, and data localisation. It focuses on the protection of personal data without taking into account recent developments in AI. Although recent, the DPDP Act, 2023 does not explicitly address the emerging concerns raised by open-source generative AI. The Act not only fails to address the impact of such systems but also fails to define AI, machine learning, or automated decision-making.
The DeepSeek R1 open-source reasoning model is considered a more efficient data generator than closed-source competitors like ChatGPT. Consequently, its growing popularity and demand lead many users to choose it over other GenAI models. However, the absence of a clear definition of ‘AI’ in the Act makes it difficult to categorise models such as DeepSeek, which leverage extensive datasets for training and inference, raising concerns about data minimisation and proportionality, principles that remain unaddressed in the DPDP Act. In the absence of regulatory guidelines or provisions for GenAI in India, such systems are deployed without adequate oversight, potentially violating the right to privacy and undermining individual autonomy. The landmark judgment in K.S. Puttaswamy v. Union of India affirmed the right to privacy as a fundamental right under Article 21 of the Constitution of India, providing the foundational legal basis for how personal data should be processed and thereby safeguarding our ‘Right to Privacy’.
Data Retention and Pseudonymisation: The Conflict Between AI Practice and Data Protection Principles
Pseudonymisation is a technique of processing personal data so that it can no longer be attributed to a specific individual without the use of additional information, thereby reinforcing privacy. Section 8(7) of the DPDP Act specifies that personal data may be stored only until the purpose for which it was collected is fulfilled. However, DeepSeek often retains data for extended periods, even after that purpose is fulfilled, to improve its algorithms’ performance, without any specific timelines for pseudonymisation or erasure. This conflicts with the data minimisation principle, which mandates that data not be kept once its purpose has been fulfilled.
The Act fails to define ‘reasonable time’, creating ambiguity around data storage and giving AI-developing companies an upper hand to retain data indefinitely for exploitative purposes. The individual is thereby deprived of the right to erasure of personal data after its specified use, as articulated in the DPDP Act. Such practices also have wider implications for the use and cross-border transfer of data by AI systems.
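To make the two ideas above concrete, here is a minimal, purely illustrative sketch in Python of what pseudonymisation and a purpose-bound retention timeline could look like in practice. All names, the 90-day window, and the salted-hash approach are assumptions for illustration; none are prescribed by the DPDP Act or used by any particular AI system.

```python
import hashlib
import secrets
from datetime import datetime, timedelta, timezone

def pseudonymise(identifier: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted hash. The salt is the
    'additional information' kept separately; without it the record
    cannot be linked back to the individual."""
    return hashlib.sha256(salt + identifier.encode()).hexdigest()

def retention_expired(collected_at: datetime, retention: timedelta,
                      now: datetime) -> bool:
    """True once the declared purpose window has lapsed, i.e. the point
    at which Section 8(7) contemplates erasure rather than continued
    retention for model improvement."""
    return now >= collected_at + retention

# Hypothetical record and policy:
salt = secrets.token_bytes(16)              # stored separately from the data
token = pseudonymise("user@example.com", salt)

collected = datetime(2025, 1, 1, tzinfo=timezone.utc)
policy = timedelta(days=90)                 # an explicit timeline the Act lacks
print(token[:12], "…")
print(retention_expired(collected, policy,
                        datetime(2025, 6, 1, tzinfo=timezone.utc)))  # True
```

The point of the sketch is the contrast: an explicit retention deadline is trivial to implement once a statute defines one, which underlines that the gap here is legal, not technical.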
Cross-Border Data Transfers and the Absence of Transfer Impact Assessment in Indian Law
The growing number of cross-border AI users has allowed GenAI companies to store and access sensitive data of individuals from various countries. Data consequently moves from one country to another, creating the need for uniform privacy standards across all jurisdictions.
Section 16 of the DPDP Act, read with Rule 14 of the draft DPDP Rules, restricts the transfer of personal data by a data fiduciary outside India. AI technology often relies on data from multiple jurisdictions, raising questions about compliance with both domestic and international privacy standards. However, the Act does not adequately address the specific challenges of uniformly implementing privacy standards when data is transferred to other countries, and it fails to align with international data protection standards such as the Transfer Impact Assessment under the GDPR.
A Transfer Impact Assessment is a process in which the risks of transferring personal data outside the EU are subjected to a stringent privacy evaluation. It checks whether proper safeguard mechanisms and standards exist in the recipient country, ensuring compliance with international standards. The lack of such an assessment in the DPDP Act gives open-source AIs like DeepSeek unchecked freedom to collect data and transfer it back to the data fiduciary’s origin servers without regard for the privacy standards followed in India. This runs contrary to the core principles of personal data protection in the DPDP Act. DeepSeek raises serious data protection concerns that can culminate in privacy violations, as reflected in the Italian government’s prompt ban on the application. Recently, a demand to ban DeepSeek reached the Indian judiciary but was met with an unsatisfactory ‘wait and watch’ response rather than any proactive measures to address the privacy issues.
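The gatekeeping logic of such an assessment can be sketched very simply. The Python below is a hypothetical illustration only: the criteria listed are illustrative examples of the kinds of safeguards evaluated, not the GDPR’s actual assessment template, and all names are invented.

```python
# Illustrative minimum safeguards a recipient jurisdiction might be
# scored against before a cross-border transfer is permitted.
ADEQUACY_CRITERIA = (
    "enforceable_data_subject_rights",
    "independent_supervisory_authority",
    "limits_on_government_access",
    "effective_judicial_redress",
)

def transfer_permitted(recipient_safeguards: dict) -> bool:
    """Allow a transfer only if every minimum safeguard is present
    in the recipient country; any missing safeguard blocks it."""
    return all(recipient_safeguards.get(c, False) for c in ADEQUACY_CRITERIA)

# A jurisdiction lacking judicial redress fails the assessment:
print(transfer_permitted({
    "enforceable_data_subject_rights": True,
    "independent_supervisory_authority": True,
    "limits_on_government_access": True,
    "effective_judicial_redress": False,
}))  # False
```

The design choice worth noting is the default-deny posture: absent evidence of a safeguard, the transfer is blocked, which is precisely the check the DPDP Act currently omits.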
Limitations of Traditional Consent in DPDP’s AI Context
Section 6 of the DPDP Act enshrines the core principles of the Act, mandating that personal data processing rest on clear, informed and explicit consent. AI systems, especially those used for profiling and automated decision-making, operate in ways that make it difficult for individuals to fully understand the extent of their consent. The traditional consent model (opt-in or opt-out) does not capture the complexities of AI systems, where data can be reused, repurposed, and processed in limitless ways that extend beyond the scope of the original consent. The Puttaswamy judgment emphasised that consent must be “informed” and “specific”, reflecting an understanding of how the individual’s data will be used and processed.
AI models like DeepSeek, termed a dangerous tool by the Delhi High Court, rely on the traditional consent mechanism to obtain user consent, creating a façade of security and privacy. DeepSeek specifically collects users’ behavioural biometric data through their keystroke patterns or rhythms: the speed, rhythm, and key-press duration of typing. This seemingly harmless collection builds a digital profile of users without their realising that they are handing an identifying digital footprint to the AI model, which can in turn expose them to identity theft and algorithmic profiling. The precedent of Facebook v. Power Ventures underscores the importance of explicit consent in obtaining data through verifiable and authorised means. It follows that unauthorised data collection by AI systems should not be allowed to bypass explicit consent through misuse of traditional consent mechanisms. Such exploitation of traditional consent necessitates the evolution of legal frameworks like the DPDP Act to mandate AI-specific, granular consent protocols, alongside stringent technical safeguards and greater transparency in AI data practices.
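To see why raw keystroke timings amount to a biometric, consider the standard features derived from them: dwell time (how long each key is held) and flight time (the gap between releasing one key and pressing the next). The Python sketch below is illustrative only; the event format, function names, and millisecond values are invented, and no claim is made about any specific system’s implementation.

```python
def keystroke_features(events):
    """events: list of (key, press_ms, release_ms) in chronological order.
    Returns (dwell_times, flight_times):
      dwell  = how long each key was held down,
      flight = gap between releasing one key and pressing the next.
    These per-user timing patterns are stable enough to serve as a
    behavioural biometric identifier."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2]
              for i in range(len(events) - 1)]
    return dwell, flight

# Typing "hi!" with invented millisecond timestamps:
events = [("h", 0, 95), ("i", 160, 240), ("!", 310, 420)]
dwell, flight = keystroke_features(events)
print(dwell)   # [95, 80, 110]
print(flight)  # [65, 70]
```

Nothing in this computation requires the text being typed, which is the point: a profile emerges from timing alone, entirely outside what a user imagines they consented to when accepting a generic privacy notice.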