Watchdogs flag BJP's AI 'weapon' on Muslims, warn of surveillance risks ahead of summit
Published: 14 Feb 2026
Telegraph India

Days before the India AI Impact Summit opens in the capital, digital rights group Internet Freedom Foundation and US-based think tank The Centre for the Study of Organised Hate have released a report raising concerns over the political and social use of artificial intelligence in India.
Titled "India AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding", the report highlights how generative AI has become a tool of the BJP to "demonise" minorities and the use of AI for "indiscriminate mass surveillance" in India.
The report says: "Generative AI has also emerged as a convenient tool for the BJP to demonise, dehumanise, and incite violence against minorities. The ruling party’s weaponisation of social media to spread Hindu nationalist propaganda and silence dissenters has been well-documented.
"Just a week before the India AI Impact Summit, BJP’s Assam unit uploaded an AI-generated video on its official X account, depicting the chief minister of Assam, Himanta Biswa Sarma, shooting at two visibly Muslim men with the title 'No Mercy'. One of the individuals in the framed picture appeared to be a morphed photo of the (Assam) Opposition leader, Gaurav Gogoi, wearing a skullcap," it says.
The report also cites several other examples of similar use of AI by the BJP in Delhi, Chhattisgarh and Karnataka.
The report says that the unchecked dissemination of harmful content must also be seen as a failure of social media and generative AI platforms in enforcing their terms of service and community guidelines. "Generative AI tools lack adequate safety guardrails, especially in local languages and social contexts. An investigation revealed the lack of safety guardrails in popular text-to-image tools, with Meta AI, Microsoft Copilot, ChatGPT, and Adobe Firefly responding to harmful prompts and generating imagery reinforcing stereotypes and demonising the Muslim community."
On surveillance, the report explains: "Recently, Devendra Fadnavis, the chief minister of Maharashtra, the second most populous state in the country, announced the development of an AI tool in collaboration with the Indian Institute of Technology Bombay to detect alleged Bangladeshi immigrants and Rohingya refugees across the state. The said tool is reported to use language-based verification to analyse 'speech patterns, tone and linguistic usage' to assist law enforcement in the initial screening of suspected illegal immigrants....
"But linguistic experts doubt the possibility of building an AI tool to distinguish nationalities, given the shared culture and history of Bengal and the resultant overlap of Bengali dialects spoken in India and Bangladesh. It is thus extremely likely that this tool could become another instrument to discriminate against the highly persecuted Bengali-speaking Muslim community and low-income migrant workers from Assam and Bengal," the report says.
Referring to the widespread use of facial recognition technology (FRT) by police, the report points out: "The lack of transparency in the procurement and use of FRT systems further means that there is little public information about their accuracy; available limited data shows the prevalence of high error rates can have a significant impact on the lives of those wrongfully identified in a country where criminal cases take years, if not decades, and undertrials languish in prisons.
"Across the world, civil society and policymakers have recognised the need to regulate and limit the use of FRT," the report says, explaining regulatory laws in Europe and the US on AI that India doesn't have.
AI errors, the report warns, have also led to the unwarranted exclusion of welfare scheme beneficiaries in several states, a problem compounded by the broad exemption of these databases from India's privacy law.
The report says: "The state’s deployment of opaque algorithmic systems without public consultation in the absence of effective grievance redressal mechanisms unfairly places the burden of proving their right to access public goods on citizens."
Citing the deletion of valid electors and other AI malfunctions in elections, the report says: "The opacity on the deployment of the software and the underlying logic used to flag suspected voters can exacerbate the risks of disenfranchisement in an already controversial revision exercise, which places the burden of proving the right to vote on citizens."
The report ends with long lists of recommendations for states, industry and civil society, as well as on generative AI content.
The prescription for states includes framing policies and regulations through consultation and adhering to global human rights norms. Industry should be transparent, provide for human oversight and institute third-party audits of content. Civil society should raise awareness, document cases of harm and build Global South coalitions for AI governance.