Minister of State for Electronics and Information Technology Rajeev Chandrasekhar said that the IT Ministry's advisory issued over the weekend, which requires ‘under-testing’ A.I. applications to be approved by the government before they are made available to the general public, was intended “only for large platforms and will not apply to startups.”
The advisory, issued by the Ministry’s Cyber Law and Data Governance Group on Saturday, advised tech firms “to ensure that use of Artificial Intelligence model(s)/LLM [large language models]/Generative AI, software(s) or algorithm(s) … not permit its users to host, display … any unlawful content,” and that “use of under-testing/unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated.”
Mr. Chandrasekhar said on X (formerly Twitter) on Monday that firms should “be aware that platforms have clear existing obligations under IT and criminal law.” He added, “So [the] best way to protect yourself is to use labelling and explicit consent and if you’re a major platform take permission from the government before you deploy error prone platforms.”
The advisory came on the heels of Mr. Chandrasekhar’s pointed response to Google’s Gemini chatbot, whose response to the query, “Is [Prime Minister Narendra] Modi a fascist?” had circulated widely on social media, before the firm took steps to prevent it from answering the question. (“I’m still learning how to answer this question,” says one of the chatbot’s responses to this query now.)
IT Minister Ashwini Vaishnaw pointed out that advisories are, by definition, not binding; Google’s Gemini chatbot remains available in India. “Gemini may display inaccurate info, including about people, so double-check its responses,” reads a disclaimer that has long appeared on the chatbot’s landing page.