I just tested Manus vs ChatGPT with 5 prompts — here's the winner

Here’s what happened when I put Manus head-to-head with ChatGPT across five prompts.


Since its launch last week, the AI agent Manus has rapidly gained traction online. Developed by Wuhan-based startup Butterfly Effect, Manus has caught the AI community's attention, with more than 2 million people on the waitlist. Often likened to DeepSeek, Manus claims to be the world's first general AI agent, setting it apart from traditional AI chatbots.

Instead of relying on a single large language model, as ChatGPT, Grok, DeepSeek and other conversational AI systems do, Manus operates with multiple AI models — including Anthropic's Claude 3.5 Sonnet and fine-tuned versions of Alibaba's open-source Qwen. Additionally, Manus uses a network of independently functioning agents, allowing it to autonomously perform a broad range of tasks beyond simple conversation.
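To make that distinction concrete, here is a minimal sketch of the general multi-agent pattern in Python. This is purely my own illustration, not Manus's actual implementation; the class names, routing logic, and model labels are assumptions for the example.

```python
# Illustrative sketch of a multi-agent pattern (hypothetical,
# NOT Manus's actual architecture). Instead of one model answering
# directly, a coordinator farms sub-tasks out to agents that each
# wrap a model suited to that kind of work.

class Agent:
    def __init__(self, name: str, model: str):
        self.name = name    # role, e.g. "planner" or "executor"
        self.model = model  # underlying LLM this agent would call

    def run(self, task: str) -> str:
        # A real agent would call its model's API here and could
        # browse, run code, or write files on its own.
        return f"[{self.name} via {self.model}] completed: {task}"


class Coordinator:
    """Breaks a request into sub-tasks and routes each to an agent."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def handle(self, request: str) -> list[str]:
        # Naive routing for illustration; real agent systems plan,
        # delegate, and merge results autonomously.
        return [agent.run(request) for agent in self.agents]


team = Coordinator([
    Agent("planner", "claude-3.5-sonnet"),  # model named in this article
    Agent("executor", "qwen-finetune"),     # hypothetical fine-tune label
])
print(team.handle("Research a topic and draft a summary"))
```

The point of the design is that each sub-task goes to whichever worker is best suited for it, rather than one model doing everything in a single pass.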



"7 days. 2 million people on the waitlist. We're so excited and humbled by the incredible demand, and we're working around the clock to bring Manus to more of you as soon as possible." (Manus on X, March 12, 2025)

Despite the buzz surrounding Manus, though, very few people have actually had the chance to try it.

So far, fewer than 1% of users on the waitlist have received an invite code. I couldn't wait to take Manus for a test drive and compare it to ChatGPT. Here's what happened when I put the two head-to-head with five prompts.

1. Complex problem solving

(Image credit: Future)

Prompt: "Analyze the potential economic impacts of implementing a universal basic income in a developed country."

ChatGPT provided a structured breakdown of key UBI impacts and included a balanced discussion of positives and negatives.

Practical examples and real-world references supported its arguments in an easy-to-follow format.

Manus offered theoretical depth by exploring mainstream and heterodox economic perspectives, which made for a well-rounded analysis. However, the response was extremely dense and very academic.

The highly technical detail made the response too hard to digest for a general audience, and it took nearly an hour to generate, making Manus the less practical choice here.

Winner: ChatGPT wins for a faster, high-level overview best suited to a general audience.

Manus went far too deep in this case.

2. Creative content generation

(Image credit: Future)

Prompt: "Compose a poem that captures the essence of autumn in a metropolitan city."

ChatGPT crafted a poem with a rhythmic and musical quality that mirrors the bustling yet harmonious urban environment. The imagery is vivid, with a creative mix of natural elements and urban architecture. The poem encapsulates the transient beauty of autumn.

Manus presented a longer, free-verse poem rich in sensory details and metaphoric language. The image here shows just one of its five stanzas. The agent went deeper into the sensory experiences of the city in autumn, reflecting on the interplay between human activity and the natural environment.

Winner: Manus wins for a more introspective and sensory-rich exploration of the season's essence within a metropolitan setting.

3. Technical explanation

(Image credit: Future)

Prompt: "Explain the concept of blockchain technology to a non-technical audience."

ChatGPT gave a solid, clear explanation that makes blockchain technology less intimidating. The response is concise and gets straight to the point, making it well suited to a quick overview. However, while ChatGPT explains the core concepts, it does not explore blockchain's applications and misses some important details about transactions and downsides.

Manus provided a much more thorough breakdown of blockchain, with a comprehensive look at its history and real-world applications. Engaging metaphors and real-world use cases helped break down the complexity of the subject, though the response was dense and less conversational than ChatGPT's.

Winner: Manus wins for a superior explanation covering blockchain from multiple angles while keeping it simple enough to understand.

4. Ethical dilemma

(Image credit: Future)

Prompt: "Discuss the ethical considerations of using AI in surveillance systems."

ChatGPT used real-world examples and covered multiple ethical concerns. The chatbot also provided actionable solutions for each concern, which made the response practical for a general audience. However, it lacked theoretical depth and offered only a limited discussion of regulation.

Manus discusses a variety of perspectives in extreme detail and depth, analyzing everything from multi-stakeholder views to global regulatory strategies. The response is very academic yet lacks actionable, succinct takeaways because it doesn't summarize the key points clearly enough.

Winner: Manus wins for its depth, historical perspective, and regulatory analysis, despite providing too much information for a general audience.

5. Advanced reasoning

(Image credit: Future)

Prompt: “Two cyclists start riding toward each other from two towns that are 300 miles apart. Cyclist A departs Town X at a speed of 60 mph, while Cyclist B leaves Town Y at 40 mph. Assuming they maintain their speeds and travel in a straight path, at what time and at what point between the two towns will they meet? Provide a detailed step-by-step explanation of your reasoning.”

ChatGPT answered the question correctly and used clear language, with a recap of the key findings at the end to reinforce clarity.

Manus also got the correct answer and double-checked it to ensure complete accuracy. The agent offered more structured labeling, making it easier to follow along.

Winner: Manus wins for thoroughness and verification, giving users more reason to trust its answer.
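For reference, both chatbots had to land on the same closed-form answer: the riders close the 300-mile gap at a combined 100 mph, so they meet after 3 hours, 180 miles from Town X (120 miles from Town Y). A few lines of Python confirm the arithmetic; this is my own sanity check, not output from either chatbot:

```python
# Sanity check for prompt 5: two cyclists closing a 300-mile gap.
distance = 300              # miles between Town X and Town Y
speed_a, speed_b = 60, 40   # mph for Cyclist A and Cyclist B

hours = distance / (speed_a + speed_b)  # combined closing speed: 100 mph
from_x = speed_a * hours                # Cyclist A's distance from Town X

print(f"They meet after {hours:g} hours, {from_x:g} miles from Town X "
      f"({distance - from_x:g} miles from Town Y).")
```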

Overall winner: Manus

Throughout this experiment, Manus consistently delivered the most thorough responses, with explicit detail that showed how the AI arrived at each answer. On average, though, the responses from Manus took more than twice as long to generate as ChatGPT's, and Manus often responded with a far more extensive answer than was needed, making it less user-friendly.

ChatGPT, meanwhile, always got straight to the point.

Manus is the winner here, but is it practical to use regularly? In my experience, no. Because the agent is so thorough and detailed for every prompt — whether deep research is needed or not — it isn't worth the time for me.

ChatGPT naturally knows when to go a little deeper and when to give an overview. I can confidently say that I will not be integrating Manus into my AI toolbox as often as its rival DeepSeek and other popular chatbots.

Have you tried Manus yet? Tell me about your experience in the comments.

I'd love to know how it compares to mine.