Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, clarified the Centre’s directive regarding artificial intelligence (AI) platforms.
What Happened: Posting on the social media platform X, Chandrasekhar said that the advisory, which had caused considerable confusion among tech companies and startups about whether they needed approval before launching AI platforms, is aimed specifically at significant, large platforms and does not apply to startups.
The minister emphasized that the advisory seeks to prevent untested AI platforms from being deployed on the Indian internet. He explained that the process of seeking permission, along with labelling and consent-based disclosure to users about untested platforms, serves as an “insurance policy” for platforms, protecting them from potential consumer lawsuits.
Chandrasekhar reaffirmed the government’s commitment to ensuring the safety and trust of India’s internet as a shared goal among the government, users, and platforms.
What does the advisory say: Issued on March 1 by the Ministry of Electronics and Information Technology (MeitY), the advisory mandates that all AI models, large language models (LLMs), software using generative AI, and any algorithms that are still in the beta stage or deemed unreliable must obtain explicit government permission before being deployed to Indian users. A first of its kind globally, the advisory aims to prevent bias, discrimination, and threats to electoral integrity arising from AI and related technologies.
While the advisory is not legally binding, Chandrasekhar signalled the direction of future regulation, suggesting that non-compliance could eventually carry legal and legislative consequences. The advisory follows reported incidents of bias on AI platforms, including a notable case involving Google’s AI model Gemini, which drew a response from Union ministers Ashwini Vaishnaw and Chandrasekhar, who emphasized that Indian users should not be subjected to experimentation with unreliable platforms.
Furthermore, the advisory calls for all platforms deploying generative AI to label the potential fallibility or unreliability of their output and recommends a “consent popup” mechanism to inform users explicitly about these issues.
It also outlines requirements for labelling content that could potentially be used for misinformation or deepfakes, and for ensuring that the origin of such synthetic content remains traceable.