MIT Media Lab To Put Human Flourishing At The Heart Of AI R&D

How can AI support, rather than replace, human flourishing? That is the central question of MIT Media Lab’s newly launched Advancing Humans with AI (AHA) program.

[Image: A human hand and a robotic hand meeting in the middle, touching knuckles.]

Artificial intelligence is advancing at speed. Both the momentum and the money are focused on performance: faster models, more integrations, ever more accurate predictions.
But as the industry sprints toward artificial general intelligence (AGI), one question lingers in the background: what happens to humans? A recent study by Elon University’s Imagining The Digital Future Center surveyed nearly 300 global technology experts. The resulting report, ‘Being Human in 2035’, concluded that most are concerned that the deepening adoption of AI systems over the next decade will negatively alter how humans think, feel, act and relate to one another.

MIT Media Lab is grappling with a similarly pressing concern: how can AI support, rather than replace, human flourishing? It is the central question of the Lab’s newly launched Advancing Humans with AI (AHA) program.
Heralded as a bold, multi-year initiative not just to improve AI but to elevate human flourishing in an AI-saturated world, the program was launched with a star-studded symposium that introduced the concept and the research domains it will tackle. Speakers included Arianna Huffington, who spoke of AI as a ‘GPS for the soul’, and Tristan Harris, who warned about systems exploiting human vulnerabilities under the guise of assistance. Both agreed that AI shouldn’t just be optimized for efficiency; rather, it should be designed to cultivate wisdom, resilience, and reflection.
This echoed AHA’s deeper vision: to reorient AI development around designing for the human interior, the parts of us that make life worth living but often get left out of technical design conversations. Pat Pataranutaporn, co-lead of the AHA program, summed this up for the assembled audience, asking, “What is the point of advancing artificial intelligence if we simultaneously devalue human intelligence and undermine human dignity? Instead, we should strive to design AI systems that amplify and enhance our most deeply human qualities.”

The Missing Research Layer in AI

While safety and alignment dominate AI ethics debates, AHA concerns itself with longer-term human outcomes, a theme woven through the sections of the event, which covered Interior Life, Social Life, Vocational Life, Cerebral Life and Creative Life. From over-reliance and skill atrophy to growing emotional attachment and isolation, people are already reshaping their lives around AI.
But few research efforts are dedicated to systematically understanding these changes, let alone designing AI to mitigate them. AHA aims to do just that. The initiative is grounded in six research domains:

Comprehension & Agency: AI systems that support, rather than replace, critical thinking and reasoning.

Mental & Physical Wellbeing: Tools that enhance long-term health and deliver personalized support.

Curiosity & Learning: Adaptive learning that sparks exploration across each person’s learning journey.

Creativity & Expression: Systems that amplify human agency and expression, rather than homogenize it.

Sense of Purpose: AI that helps people reflect, plan, and align their values and goals while contributing to society.

Healthy Social Lives: AI that strengthens authentic human connection rather than diminishing it.

A Moonshot Mindset

The ambition of AHA is matched by its moonshot projects. These include:

The Atlas of Human-AI Interaction: A comprehensive database mapping how humans relate to AI across contexts. A prototype has already been mapping papers on human-AI interaction to identify positive outcomes.

Benchmarks for Human Flourishing: A series of benchmarks providing rigorous metrics and assessment frameworks to measure how AI systems contribute to human flourishing across cognitive, emotional, and social dimensions. A workshop on September 17 and 18 at the Media Lab will invite experts to begin designing the benchmarks.
The Global Observatory for AI Impact: An observatory tracking the emotional, cognitive, and social effects of AI adoption around the world, allowing for large-scale randomized experiments into how people are responding to AI across different countries and locations.

The message is clear: it’s time to measure the wellbeing of humans, not just the performance of machines.

Why Now?

As AI becomes increasingly embedded in health, education, work, and social life, the choices made by engineers and designers today will shape cognitive habits, emotional norms, and social structures for decades.
Yet, as AHA’s contributors pointed out throughout the symposium, AI is still mostly optimized for business metrics and safety concerns rather than for psychological nuance, emotional growth, or long-term wellbeing. MIT’s AHA initiative is not a critique of AI. It’s a call to design better: not just smarter machines, but systems that reflect us at our best.
As Professor Pattie Maes, co-lead of the AHA program and director of the Fluid Interfaces group, told me after the event, ‘We are creating AI and AI in turn will shape us. We don’t want to make the same mistakes we made with social media. It is critical that we think of AI as not just a technical problem for engineers and entrepreneurs to solve, but also as a human design problem, requiring the expertise from human-computer interaction designers, psychologists, and social scientists for AI to lead to beneficial impact on the human experience.’