India Requires Approval for AI Model Launches
India has introduced an advisory that requires significant tech firms to obtain government permission before launching new AI models. The advisory, issued by India’s Ministry of Electronics and IT, also mandates that tech firms ensure their products do not permit bias or discrimination, and do not threaten the integrity of the electoral process.
While the advisory is not legally binding, it signals a shift in India’s approach to AI regulation. Previously, the country had declined to regulate AI growth, considering the sector vital to its strategic interests. The new advisory, however, takes effect immediately and asks tech firms to submit an “Action Taken-cum-Status Report” to the ministry within 15 days.
The move has surprised many industry executives, who worry it could hinder India’s ability to compete globally in the tech space. The advisory applies to under-tested or lab-level AI platforms deployed on the public internet and aims to make firms aware of their obligations, and the potential consequences of non-compliance, under Indian law.
The advisory follows recent criticism of Google’s Gemini AI tool by India’s Deputy IT Minister Rajeev Chandrasekhar. In response to a query about Indian Prime Minister Narendra Modi, Gemini referred to him as a fascist, prompting Chandrasekhar to warn Google that such responses violate India’s IT rules and criminal provisions.
India’s actions reflect a broader global trend toward regulating AI, with countries such as the U.S. also exploring AI legislation. As AI policies evolve, experts emphasize the need for global cooperation and coordination to establish effective oversight.