Artificial intelligence systems are not neutral, and without structural safeguards they risk deepening existing inequalities — particularly for women and marginalised communities. That was the central message from a high-level panel at the ongoing AI Impact Summit 2026 in Delhi.
The five-day global gathering of technology leaders, policymakers and researchers began on February 16, bringing together some of the most influential voices in AI. Among the most pointed discussions was a session titled 'Women, work and the future of AI'.
Kalika Bali, principal researcher at Microsoft, warned that the way AI systems are currently designed and trained poses “a great danger” to minority and marginalised groups — especially women.
She argued that rapid model development, coupled with large-scale data scraping, often overlooks ethnographic safeguards and context-sensitive design.
“There is a great danger because of how AI is built. It is very easy to overlook minority or marginalised communities — especially women, who are the largest minority community in the world,” she said during the panel.
According to Bali, the global power balance could shift further if AI systems replicate and institutionalise existing biases embedded in training data.
The discussion gained urgency following the recent controversy surrounding Grok, the AI chatbot developed by xAI. Users had allegedly used the tool to generate and circulate sexualised deepfake images of female public figures.
Panellists said such incidents demonstrate how AI tools, when released without strong ethical guardrails, can amplify gender-based harm at scale.
Dr Urvashi Aneja, founder and CEO of Digital Futures Lab, highlighted the dangers of accelerated AI deployment, particularly when combined with large-scale biometric and surveillance datasets.
She warned that databases built for immigration control or security could, when integrated with facial recognition systems, be used to target ethnic minorities and vulnerable populations.
“Surveillance systems that build datasets of marginalised populations are risky. They create the basis for technologies that can undermine human rights to function very efficiently,” she cautioned.
According to her, AI systems are increasingly centralising power and creating new vulnerabilities — with those already disadvantaged facing disproportionate impact.
Key concerns raised during the session included:
AI systems reproducing social and economic inequality
Centralisation of technological and economic power
Gender bias embedded in training data
Deepfake abuse and online harassment
Surveillance tools targeting marginalised communities
Limited representation of women and minorities in AI leadership
The panel suggested structural reforms in AI development rather than reactive fixes.
Key recommendations included:
Conducting ethnographic research before deploying AI systems
Consulting grassroots women workers and minority communities during model design
Embedding human rights safeguards in training datasets
Creating leadership pathways for women data annotators and labellers
Diversifying AI development teams to reduce systemic bias
Speakers emphasised that who builds AI is as important as how it is built. Without inclusive design and governance frameworks, they warned, AI risks becoming a tool that reinforces — rather than reduces — inequality.
As global AI investment accelerates, the Delhi summit has placed gender equity and human rights firmly at the centre of the technology debate.