
According to a panel of hundreds of artificial intelligence researchers, the field is currently pursuing artificial general intelligence the wrong way. That is the central insight of the Association for the Advancement of Artificial Intelligence (AAAI)’s 2025 Presidential Panel on the Future of AI Research, a lengthy report put together by 24 AI researchers whose expertise ranges from the state of AI infrastructure to the social aspects of artificial intelligence.
The report included a main takeaway for each section, along with a community opinion section where survey respondents shared their own views on that topic. The section on “AI Perception vs. Reality,” chaired by MIT computer scientist Rodney Brooks, referenced the Gartner Hype Cycle, a five-stage model commonly used to characterize technology hype.
In November 2024, Gartner “estimated that hype for Generative AI had just passed its peak and was on the downswing,” the report noted. In the community opinion section, 79% of respondents said that current public perceptions of AI’s capabilities do not match the reality of AI research and development, and 90% said that the mismatch is hindering AI research, with 74% of that group saying that “the directions of AI research are driven by the hype.”

Artificial general intelligence (AGI) refers to human-level intelligence: the hypothetical intelligence of a machine that interprets information and learns from it as a human being would.
AGI is a holy grail of the field, with implications for automation and efficiency across countless industries and disciplines. Consider any menial task you don’t want to spend much time on, from planning a trip to filing your taxes. AGI could be deployed to ease the burden of such rote work, but it could also catalyze progress in other areas, from transportation to education and technology.
A surprising majority—76% of 475 respondents—said that simply scaling up current approaches to AI will not be sufficient to yield AGI. “Overall, the responses indicate a cautious yet forward-moving approach: AI researchers prioritize safety, ethical governance, benefit-sharing, and gradual innovation, advocating for collaborative and responsible development rather than a race toward AGI,” the report states. Despite hype distorting the state of research, and despite current approaches not putting researchers on the optimal path toward AGI, the technology has made leaps and bounds.
“Five years ago, we could hardly have been having this conversation – AI was limited to applications where a high percentage of errors could be tolerated, such as product recommendation, or where the domain of knowledge was strictly circumscribed, such as classifying scientific images,” explained Henry Kautz, a computer scientist at the University of Virginia and chair of the report’s section on Factuality & Trustworthiness, in an email to Gizmodo. “Then, quite suddenly in historic terms, general AI started to work and come to public attention through chatbots such as ChatGPT.” AI factuality is “far from solved,” the report read, and the best LLMs answered only about half of a set of questions correctly in a 2024 benchmark test.
But new training methods can improve the robustness of those models, and new ways of organizing AI systems can further improve their performance. “I believe the next stage in improving trustworthiness will be the replacement of individual AI agents with cooperating teams of agents that continually fact-check each other and try to keep each other honest,” Kautz added. “Most of the general public as well as the scientific community—including the community of AI researchers—underestimates the quality of the best AI systems today; the perception of AI lags about a year or two behind the technology.”

AI is not going anywhere; after all, the Gartner Hype Cycle doesn’t end with “fade into oblivion” but with the “plateau of productivity.” Different AI use cases carry different levels of hype, but with all the clamor about AI, from the private sector, from government officials, heck, from our own families, the report is a refreshing reminder that AI researchers are thinking very critically about the state of their field. From the way AI systems are built to the ways they are deployed in the world, there is room for innovation and improvement.
Since we aren’t going back to a time without AI, the only direction is forward.