The ordinary bicycle, or penny-farthing, was popular from the early 1870s to the late 1880s but was unstable and difficult to mount and dismount. The front wheel became increasingly large as makers realised that the larger the wheel, the further the cycle could travel with one rotation of the pedals. The lower and more stable safety bicycle of 1885, with two similarly sized wheels, superseded the ordinary and was much the same in design as modern bicycles.
The bicycle has revolutionized the way in which we move. Artificial intelligence might do the same to the way in which we communicate. It is all around us, working behind the scenes in apps we use daily.
It has begun to change how we work, live, and connect. Think about movie recommendations on streaming services, navigation apps guiding you through traffic, or even tools that help doctors diagnose illnesses – AI is involved in all of them. Its influence is growing fast, bringing amazing possibilities and real challenges.
Released this week, the new AI Index Report 2025 from Stanford University’s Institute for Human-Centered Artificial Intelligence offers a good overview of the ongoing AI revolution. Let's break it down using four simple ideas: Awareness (knowing what's going on), Appreciation (seeing the good), Acceptance (facing the challenges), and Accountability (figuring out our responsibilities). We need to be acutely aware of just how quickly AI is developing and spreading.
AI systems are improving ever more rapidly, acing tests in math and complex reasoning that they failed miserably just a year ago. Imagine AI tackling complex coding problems or graduate-level science questions – performance on benchmarks testing these skills has jumped dramatically since 2023. On one coding test (SWE-bench), AI went from solving only 4% of problems in 2023 to nearly 72% in 2024! This isn’t just happening in labs.
AI is showing up in everyday life. In 2023, the FDA approved 223 medical devices using AI, compared to only six in 2015. In more and more cities, you can now hail a self-driving taxi from companies like Waymo (in the US) or Baidu (in China).
Global investment in AI hit a massive $252.3 billion in 2024. To put that into perspective – this single year of investment is nearly as large as the entire annual economic output of a country like Portugal, which was around $289 billion in 2023.
It also dwarfs the budget of major government agencies; for instance, it's roughly ten times NASA's entire budget for fiscal year 2023 ($25.4 billion). Furthermore, this annual AI investment significantly exceeds the estimated total historical cost of constructing the International Space Station over decades, which is often cited between $100 billion and $150 billion.
More companies than ever (78% in 2024, up from 55% in 2023) are using AI. While the US still creates the most cutting-edge AI models, China is catching up fast, with their best models now performing nearly as well as America's top ones on essential tests. Good news for users: AI is getting cheaper and easier to access.
Powerful AI systems are becoming dramatically less expensive – the cost of running certain tasks dropped more than 280-fold between late 2022 and late 2024. Governments are also paying attention, investing billions in AI development and starting to create rules for its use. It is easy to focus on the futuristic hype, but AI already offers real benefits. This is where appreciation comes in – seeing how AI can help us, especially when combined with human skills (hybrid intelligence) and used for social good (prosocial AI).
One major plus is productivity. Studies show AI tools can help people get more done at work, often by assisting less experienced employees, effectively leveling the playing field. Think of AI as an intelligent assistant that handles routine tasks, freeing up humans for more complex problem-solving or creative work.
Beyond individual benefits, AI holds promise for tackling big societal problems. Prosocial AI aims to use this technology to bring out the best in and for people and planet: think of using AI to monitor climate change, manage resources like water more efficiently, improve healthcare access for remote communities, or create personalized learning tools for students. Science and medicine are seeing gigantic gains.
AI has played a part in discoveries worthy of Nobel Prizes. Tools like AlphaFold 3 are helping scientists understand diseases and develop new drugs faster than ever. AI systems are getting better than doctors at specific tasks, like spotting cancer on scans.
And tools that listen in during doctor visits (AI scribes) are helping reduce paperwork burnout for physicians. Public opinion on AI presents a mixed picture, highlighting a potential gap between current perceptions and future outlooks. According to the AI Index Report 2025, drawing on 2024 Ipsos data, a slight majority (55%) of people globally feel that current AI products and services offer more benefits than drawbacks, suggesting a cautious global optimism.
However, a contrasting view emerges when looking at future expectations in specific regions; a recent Pew Research Center report (April 2025) found that only 17% of U.S. adults believe AI will have a positive impact on the country over the next 20 years, with over half expressing more concern than excitement about AI's growing role.
This difference underscores how views on AI can vary based on geography and whether people are considering present benefits or future implications. (For more on the potential of prosocial AI, see the whitepaper released following the United Nations Science Summit 2024.) Appreciating the good means we also have to accept the downsides and risks.
Building trust in AI is a significant challenge. Many people worry about how companies use their personal data, and concerns about AI being unfair or biased are real. Even AI systems designed to be neutral can pick up and repeat harmful stereotypes about race or gender found in their training data.
Misinformation is another big worry. AI makes it easier to create fake images, videos, or audio (deepfakes) that can be used to spread lies, especially during elections, or to harass people. We're also seeing more reports of AI causing problems, such as facial recognition making wrong accusations or chatbots giving harmful advice.
Even the data AI learns from is becoming an issue, as more websites block access, which could limit how well future AI works. And despite the progress, AI isn't perfect. It still struggles with complex reasoning that requires common sense or planning multiple steps ahead.
Training the most potent AI models uses enormous amounts of energy, contributing to carbon emissions. While AI helps many workers, concerns about jobs being automated away are understandable. Accepting AI means being honest about these complex issues.
Finally, accountability is about who takes responsibility for guiding AI development and use. Efforts to ensure AI is developed responsibly are growing, but there's still work to do. We're starting to see better ways to test AI for safety and fairness, but these aren't yet standard practice for all significant AI developers.
Governments are becoming more active, investing in AI and creating rules. Countries are starting to work together more on AI safety, setting up international groups and agreements focused on making AI trustworthy and aligned with human rights. However, companies often discuss responsible AI more than they implement strong safeguards.
(You might remember the open letter calling for a moratorium on AI development that more than 1,000 technology leaders, researchers and others signed in March 2023 – and that was never honored.) We need more transparency about how AI systems are built and trained. Everyone has a role in that regard – developers, businesses, governments, and citizens. Education is key.
We need to ensure people understand AI, but there's a big gap. For instance, in the US, while 81% of computer science teachers think AI should be taught in schools, less than half (including only 46% of high school CS teachers) feel prepared to teach it. Closing this gap is crucial for building a future where everyone can navigate and shape the world of AI.
And amid all this, it is essential to remember that billions of people worldwide have never touched ChatGPT – not for lack of volition but because they struggle to access clean water, nutritious food, or the internet. So, what can you do? This isn’t just something for tech experts or politicians. We all have a part to play – and it requires natural intelligence:

Awareness: Get curious about AI. Learn how it works and where it appears in your life. Follow reliable news sources (not just hype) to understand its impact.

Appreciation: Think about how AI could help you or your community. Could AI tools make your job easier or more creative? Look for examples where AI is being used for positive social change.

Acceptance: Be a critical thinker. Don't just accept what AI tells you. Question its outputs, be aware of potential biases, and understand the risks, from privacy to fake information.

Accountability: Use your voice. Support companies and policies that prioritize ethical and responsible AI. Think about your own use of AI – are you using it responsibly? Talk to others about the importance of guiding AI for everyone's benefit.

The AI Index 2025 is a solid point of departure for getting a (temporary) grip on the constantly shifting landscape of AI. It may be one of the most powerful technologies ever created.
By staying aware, appreciating the good, accepting the challenges, and holding ourselves and others accountable, we can help shape an AI future that works for all of us.
Reading The AI Index 2025: Is AI Your Friend, Foe, Or Co-Pilot?
