
What Is AI Regulation in India? Current Laws & Future Plans

Introduction

Artificial Intelligence (AI) is reshaping industries worldwide, and India is positioning itself as a global leader in AI adoption and innovation. However, with rapid technological advancement comes the need for oversight. Unlike the European Union, which has a comprehensive AI Act, India does not yet have a standalone law specifically dedicated to Artificial Intelligence.

Instead, India follows a sector-specific and dynamic regulatory approach. The government relies on existing legal frameworks to govern AI, focusing on user harm, data privacy, and accountability. This explainer breaks down the current laws, recent government advisories, and the future roadmap for AI regulation in India.

Current Legal Framework Governing AI in India

Since there is no specific "AI Act," AI technologies in India are regulated under existing laws designed for digital platforms, data protection, and intellectual property.

1. Information Technology Act, 2000 (IT Act)

The primary legislation governing cyberspace in India is the IT Act. While it predates modern AI, its provisions apply to AI systems:

  • Section 43A: Deals with compensation for failure to protect data. If an AI system mishandles sensitive personal data, the entity handling the AI is liable.
  • Sections 66 and 66C: Address computer-related offences and identity theft. If an AI tool is used to commit fraud (e.g., deepfake scams), these penalties apply.
  • Intermediary Guidelines (IT Rules, 2021): These rules require digital platforms (which often host AI services) to exercise due diligence. They mandate the removal of unlawful content within 36 hours of receiving government or court orders.

2. Digital Personal Data Protection Act, 2023 (DPDP Act)

Passed in August 2023, this act is crucial for AI regulation because AI models require massive amounts of personal data for training.

  • Consent: The act mandates that user consent is required for the processing of personal data. AI companies must ensure they have lawful grounds to scrape or use Indian user data for training Large Language Models (LLMs).
  • Fiduciary Duty: Data Fiduciaries (entities determining the purpose and means of data processing) must ensure data privacy. If an AI system breaches this duty, the fiduciary attracts significant penalties.

3. Intellectual Property Rights (IPR)

The current legal stance on AI-generated creativity is evolving.

  • The Copyright Act, 1957 currently does not recognize AI as an "author." Only human beings can claim authorship. This creates a grey area for content generated solely by generative AI tools like Midjourney or ChatGPT.

Recent Government Advisories (2023–2024)

In the absence of hard legislation, the Ministry of Electronics and Information Technology (MeitY) has used advisories to enforce immediate control over AI.

The March 2024 Advisory on Generative AI

In March 2024, MeitY issued a significant advisory to intermediaries and platforms deploying or using Generative AI.

  • Labeling Requirement: It mandated that all under-testing/unreliable AI models must be labeled explicitly to inform users they are not error-free.
  • Permission for Deployment: The advisory initially stated that platforms deploying AI models specifically for the Indian market must seek government permission.
  • Bias and Discrimination: Platforms were directed to ensure their AI models do not exhibit bias or discrimination, or threaten the integrity of the electoral process.
  • Clarification: Following industry pushback regarding the impact on startups, the government later clarified that the "permission" clause applies only to large platforms and that the focus is on preventing user harm rather than stifling innovation.

IT Rules (Amendment) 2023

The government amended the IT Rules in 2023 to include:

  • Fact-Checking Unit: The Press Information Bureau (PIB) can flag content related to the government as "fake or misinformation." Platforms, including those hosting AI content, must act on these takedown requests.

The Future of AI Regulation: What to Expect

India is moving toward a more structured regulatory framework. The government has emphasized a principle of "regulating through principles rather than penalizing."

1. The Digital India Act (DIA)

The upcoming Digital India Act is set to replace the decades-old IT Act, 2000. It is expected to explicitly address new-age technologies, including AI.

  • Safety by Design: The DIA is likely to mandate that AI systems incorporate safety measures during the design phase, not as an afterthought.
  • High-Risk AI: The government is considering a classification system that subjects "high-risk" AI (used in healthcare, critical infrastructure, or recruitment) to stricter compliance and audit requirements.

2. NITI Aayog’s Approach

The NITI Aayog (National Institution for Transforming India) has published papers such as #AIForAll. It advocates for a "light-touch" regulatory framework to encourage innovation in the Indian startup ecosystem while ensuring ethical guidelines are met. It proposes:

  • Responsible AI: Principles focusing on equality, inclusivity, and non-discrimination.
  • Sandboxes: Creating regulatory sandboxes where startups can test AI innovations in a controlled environment without fear of immediate regulatory backlash.

3. Semiconductor and Hardware Regulation

AI relies on hardware. India’s recent focus on semiconductor manufacturing (via the India Semiconductor Mission) is part of a long-term strategy to secure the physical infrastructure needed for AI, ensuring the country is not dependent on external hardware for its AI compute power.

Key Challenges for AI Regulation in India

While the frameworks are developing, several challenges remain:

  1. Deepfakes: The rise of AI-generated synthetic media (deepfakes) poses a threat to privacy and security. Current laws are being interpreted to cover deepfakes under the IT Act (defamation/transmission of obscene material), but specific provisions are needed.
  2. Copyright and Training Data: There is ongoing legal debate globally and in India regarding whether training AI on copyrighted material constitutes infringement. Indian courts have yet to set a definitive precedent on this.
  3. Cross-Border Jurisdiction: Most AI models are hosted on servers outside India. Regulating entities that do not have a physical presence in India remains a complex legal hurdle.

Conclusion

India’s approach to AI regulation is currently a mix of adapting existing laws (IT Act, DPDP Act) and issuing executive advisories. The government is treading a fine line between protecting citizens from AI risks (like deepfakes and algorithmic bias) and fostering a thriving ecosystem for AI startups.

The introduction of the Digital India Act will be the next major milestone, likely providing the comprehensive statutory framework needed to govern the AI era. For now, businesses and developers must prioritize transparency, data privacy, and compliance with the IT Rules to operate safely in the Indian market.


Frequently Asked Questions (FAQ)

Is AI banned in India?

No, AI is not banned in India. The government actively promotes AI adoption through initiatives like 'IndiaAI' to boost the economy and healthcare sectors.

Do I need a license to use AI in India?

Currently, there is no general license required to use AI. However, the March 2024 advisory suggests that large platforms deploying under-tested or unreliable AI models specifically for the Indian market may need government approval.

What are the laws against deepfakes in India?

Deepfakes are currently regulated under the Information Technology Act, 2000 and the IT Rules, 2021. Penalties include imprisonment for up to three years and fines for transmitting sexually explicit content or cheating by impersonation. The government has also advised social media platforms to remove deepfake content within 36 hours.

Does the DPDP Act apply to AI?

Yes. The Digital Personal Data Protection Act, 2023, applies to any entity processing digital personal data. This includes AI developers who collect, store, or process user data to train or run their models.

Who is responsible if an AI causes harm?

Under current laws, the intermediary or the entity deploying the AI is usually held liable. For example, if a company uses an AI chatbot that gives wrong medical advice leading to harm, the company deploying the chatbot can face legal action under the Consumer Protection Act or IT Act.
