You Must Build Workplace AI Behind A Veil Of Ignorance

In the era of artificial intelligence, leaders are making high-stakes decisions faster than ever: decisions like who gets hired, promoted, or flagged for performance issues. But as we embrace AI to optimize operations, the question of fairness is receiving insufficient attention: How do we guarantee that the tools we increasingly trust to inform our actions have been built with proper attention to fairness? In his groundbreaking 1971 book A Theory of Justice, John Rawls introduced a thought experiment that we should make central to decisions about the design and use of AI today: the veil of ignorance. This idea offers a simple tool for understanding fairness and a blueprint for how leaders might leverage that understanding to design and implement AI fairly.
Imagine you are creating rules for a new society. But there’s a catch: you don’t know in advance who you will be in that society. You might end up rich or poor, healthy or disabled, part of the majority or a marginalized minority.
Operating behind this "veil of ignorance" would prevent rule makers from crafting rules that benefit them personally. Instead, people would be motivated to create fair rules that protect fundamental rights, keep opportunities open to all, and provide extra support to the disadvantaged. Rawls proposed that just systems are those people would agree to without knowing whether those systems would advantage or disadvantage them.
The idea is that fairness is about building structures that everyone, the powerful and the powerless alike, would deem acceptable. It is unreasonable to expect people with personal benefits at stake to set that self-interest aside; hence the need for a veil of ignorance.
In traditional policymaking, Rawls’ thought experiment has provided an important perspective in debates on issues such as healthcare design. Now, as AI begins to influence hiring, promotions, and countless other workplace personnel decisions, the veil of ignorance offers a critical guide: build AI systems as if you don’t know whether you will be the one judged by them. Despite all its promise, AI does not naturally operate under Rawlsian principles.
Instead, most AI systems today amplify historical inequalities rather than correct them. Why? Because AI learns from historical data, and that data reflects the biases, blind spots, and injustices of the past. An AI trained on decades of corporate hiring decisions, for example, might “learn” to favor resumes with elite university credentials, Western-sounding names, or traditional career paths, not because those qualities should be preferred today, but because historically they were favored.
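To see this mechanism in miniature, consider the hedged sketch below: it trains a simple classifier on synthetic "historical" hiring data in which an irrelevant credential influenced past decisions. The feature names (skill, elite_school), the data, and the model choice are illustrative assumptions, not a description of any real hiring system.

```python
# Toy illustration (assumed, synthetic data): how a model absorbs historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)                   # the job-relevant signal
elite_school = rng.binomial(1, 0.3, size=n)  # an irrelevant credential

# Historical decisions rewarded the credential as much as skill itself.
hired = (skill + 1.5 * elite_school + rng.normal(size=n)) > 1.0

X = np.column_stack([skill, elite_school])
model = LogisticRegression().fit(X, hired)
print(dict(zip(["skill", "elite_school"], model.coef_[0].round(2))))
# The learned weight on elite_school is large and positive: equally
# skilled candidates without the credential are systematically penalized.
```

Nothing in the code "wants" to discriminate; the bias arrives entirely through the historical labels it is trained to imitate.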
It doesn’t help that most AI tools function as black boxes: their decision-making processes are opaque, and their biases are hard to detect until harm has already been done. Leaders must acknowledge that AI will not operate behind a veil of ignorance unless it is deliberately designed to. Left to its defaults, it will reflect and reinforce existing social inequalities, not imagine a world where fairness matters.
The good news is that businesses can design AI systems that approximate Rawls' fairness principles. However, this requires conscious choices at every stage: data curation, algorithm design, decision auditing, and human oversight. As mentioned, hiring is a common area where companies are eagerly looking to apply AI.
Plodding through applications can be mind-numbing; fatigue and boredom invite errors, and much of the process is routine. It is easy to see why resume screening tools, video interview analyzers, and skill assessment algorithms that promise to make hiring faster and smarter are so attractive. Yet, built without the veil of ignorance in mind, they can easily produce less fair outcomes.
Consider a traditional AI hiring system. A company trains its model on historical hiring data, feeding it thousands of resumes and outcomes. If that data reflects a bias toward hiring white men from a few elite universities, the AI will "learn" those patterns — and perpetuate them.
It will favor applicants who match the historical profile and penalize those who don't, regardless of their skills or potential. This is precisely the kind of unfairness Rawls would warn against, and it is easy to see who might and might not be troubled by this design.
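One of the safeguards named earlier, decision auditing, can catch exactly this pattern. Below is a minimal, hypothetical audit that compares selection rates across groups and flags disparate impact; the group labels, the outcome data, and the 0.8 threshold (the "four-fifths rule" commonly used in U.S. employment contexts) are illustrative assumptions, not a prescription.

```python
# A minimal, hypothetical decision audit: compare selection rates across
# groups and flag disparate impact against a four-fifths-rule threshold.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, was_selected) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose selection rate falls below 80% of the best rate.
    return {g: (r / best, r / best < threshold) for g, r in rates.items()}

# Example: audit a batch of (group, hired?) outcomes from an AI screener.
outcomes = [("A", True)] * 30 + [("A", False)] * 70 \
         + [("B", True)] * 15 + [("B", False)] * 85
print(disparate_impact(outcomes))
# {'A': (1.0, False), 'B': (0.5, True)}  -> group B is flagged
```

In practice, such audits would run continuously and feed the human-oversight step rather than replace it.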
Embracing Rawlsian principles in AI is not just a matter of ethics; over the long run, it will become a matter of competitive advantage. Companies that design fair AI systems will tap into wider talent pools, build more interesting and innovative teams, strengthen their reputations in a world increasingly skeptical of corporate bias, and reduce legal and regulatory risks should governments scrutinize AI discrimination.

John Rawls challenged us to imagine building societies from behind a veil of ignorance, crafting rules we could live with no matter our fate. As leaders deploy AI across critical areas of human life, the same challenge stands before them.
AI cannot naturally embody fairness. It must be taught and trained to do so. Only if those entrusted with building the AI systems that shape so many destinies embrace Rawls' insights can we avoid repeating the mistakes of the past and create a more just, dynamic, and prosperous future.
The veil of ignorance is not just philosophy. It is an essential perspective for building and deploying AI worthy of human trust.