Will AI close or widen the development gap?


AI is often presented as the next peak of human innovation, owing to its potential to revolutionise industries, transform economies, and improve lives. But will AI truly benefit everyone, or will it deepen existing divides? The answer depends on how the technology is developed, deployed, and governed. Without purposeful interventions, AI’s potential will be harnessed for narrow gains by those who prioritise profits over people.

Encouragingly, the cost of AI development is beginning to decline. While OpenAI’s GPT-4 cost $100mn to train, the Chinese startup DeepSeek’s comparable model apparently cost a fraction of that. This trend has promising implications for developing countries, which generally lack the massive financial resources that earlier AI innovations required but could soon be able to access and leverage these technologies more affordably.



The choices we make today will determine whether AI becomes an instrument of inclusion or exclusion. To ensure that AI serves humanity, we need to focus on incentives. AI development today is largely dictated by market forces, with an excessive focus on automation and monetising personal data.

The few countries spearheading AI technologies are investing billions of dollars in labour-replacing applications that will exacerbate inequality. Making matters worse, government subsidies are frequently awarded on technical merit, typically efficiency gains, without sufficient consideration of their direct and indirect societal impact. Where jobs disappear, economic, social, and political instability tend to follow.

Yet public funding continues to flow toward automation. Governments must realign incentives to encourage AI that serves social needs, such as enhancing education, improving health outcomes, and tackling climate challenges. AI should empower, not replace, human workers.

Population ageing is a major challenge in some countries. Although household robots could help address some of its problems, the frontier of current development focuses on priorities such as dynamic performance (running, jumping, and obstacle avoidance) in outdoor environments, rather than on safety and practicality, daily living assistance, or chronic disease management. The task cannot be left to venture capital alone, which funnelled $131.5bn into startups in 2024, largely chasing overhyped and speculative technologies such as artificial general intelligence.

Narrower-purpose models can advance medical diagnostics, assist radiologists, predict natural disasters, and much more. Redirecting investment toward solutions that directly benefit society is essential to keeping AI development aligned with collective progress rather than shareholder value.

It is also necessary to bridge the divide between developed and developing economies. AI’s transformative potential remains largely untapped in low- and middle-income countries, where inadequate infrastructure, limited skills, and resource constraints hinder adoption. Left unaddressed, this technological divide will only widen global inequalities.

Consider what AI could do just for health care. It could broaden access to personalised medicine, giving patients in resource-limited settings tailored treatments with greater efficacy and fewer adverse effects. It could assist in diagnosis, by helping doctors detect diseases earlier and more accurately.

And it could improve medical education, using adaptive learning and real-time feedback to train medical professionals in underserved areas. More broadly, AI-powered adaptive learning systems are already customising educational content to meet individual needs and bridge knowledge gaps. AI tutoring systems offer personalised instruction that increases engagement and improves outcomes.

By making it far easier to learn a new language and acquire new skills, technology could drive a massive expansion of economic opportunities, particularly for marginalised communities. Nor are the uses confined to health care and education. The University of Oxford’s Inclusive Digital Model (IDMODEL) demonstrates that equipping marginalised groups – especially women and young people – with digital skills allows them to participate in the global digital economy, reducing income disparities.

But global co-operation is crucial to unlock these benefits. AI must be approached collectively, such as through South-South initiatives to create solutions tailored to developing countries’ circumstances and needs. By fostering partnerships and knowledge-sharing, lower- and middle-income countries can bridge the technological divide and ensure that AI serves a broad range of constituencies beyond the dominant players.

Then there is the question of safety and ethical use, which must also be addressed globally. Without robust ethical frameworks, AI can be – and already has been – used for harmful purposes, from mass surveillance to the spread of misinformation.

The international community will need to agree on shared principles to ensure that AI is used consistently and responsibly. The United Nations – through inclusive platforms like the Commission on Science and Technology for Development – can help shape global regulations. The top priorities should be transparency (ensuring that AI decision-making is discernible and explainable); data sovereignty (protecting individuals' and countries' control over their own data); harm prevention (prohibiting applications that undermine human rights); and equitable access.

Multilateral initiatives to develop digital infrastructure and skills can help to ensure that no country is left behind. This is not only an issue for policymakers and the private sector. Throughout history, transformative change has often started from below.

Women’s suffrage, the civil-rights movement, and climate activism all began with grassroots efforts that grew into powerful forces for change. A similar movement is needed to steer AI in the right direction. Activists can highlight the risks of unregulated AI and apply pressure on governments and corporations to put human-centred innovation first.

AI’s social, economic, and political effects will not naturally bend toward inclusion or equity. Governments must steer incentives toward innovation that augments human potential. Global organisations must establish ethical frameworks to safeguard human rights and data sovereignty.

And civil society must hold political and business leaders accountable. The decisions made today will determine whether AI becomes a bridge or a chasm between the world’s haves and have-nots. International collaboration, ethical governance, and public pressure can ensure that we make the right ones.

• Shamika Sirimanne is senior adviser to the secretary-general of UN Trade and Development. Xiaolan Fu is professor of technology and international development at the University of Oxford.