New open source AI company Deep Cogito releases first models and they’re already topping the charts

The initial model lineup includes five base sizes: 3 billion, 8 billion, 14 billion, 32 billion, and 70 billion parameters.

Deep Cogito, a new AI research startup based in San Francisco, officially emerged from stealth today with Cogito v1, a new line of open source large language models (LLMs) fine-tuned from Meta’s Llama 3.2 and equipped with hybrid reasoning capabilities: the ability to answer quickly and directly, or to “self-reflect” first, like OpenAI’s “o” series and DeepSeek R1.
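The two response modes are selected per request. As a rough illustration, here is a minimal sketch of how a client might build chat payloads for each mode. The Ollama-style payload shape, the model tag "cogito:8b", and the exact toggle string are assumptions for illustration; Deep Cogito’s released models document a system instruction as the reasoning switch, but verify the precise wording against the model cards.

```python
# Illustrative sketch only, not official Deep Cogito client code.
# Assumptions: an Ollama-style chat payload, the model tag "cogito:8b",
# and a system-prompt string that switches on reasoning mode (check the
# model cards on Hugging Face / Ollama for the documented toggle).

REASONING_TOGGLE = "Enable deep thinking subroutine."  # assumed toggle string

def build_request(question: str, reasoning: bool = False) -> dict:
    """Build a chat payload; reasoning mode prepends the toggle system prompt."""
    messages = []
    if reasoning:
        messages.append({"role": "system", "content": REASONING_TOGGLE})
    messages.append({"role": "user", "content": question})
    return {"model": "cogito:8b", "messages": messages}

standard = build_request("Summarize the history of the telescope.")
reflective = build_request("Summarize the history of the telescope.", reasoning=True)
```

In standard mode the payload carries only the user turn; in reasoning mode the extra system instruction tells the model to deliberate before answering.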

The company aims to push AI beyond current human-overseer limitations by enabling models to iteratively refine and internalize their own improved reasoning strategies. It is ultimately on a quest toward superintelligence, AI smarter than all humans in all domains, yet the company says that “All models we create will be open sourced.” Deep Cogito’s CEO and co-founder Drishan Arora, a former senior software engineer at Google who says he led LLM modeling for Google’s generative search product, also said in a post on X that the new models are “the strongest open models at their scale – including those from LLaMA, DeepSeek, and Qwen.”

The initial model lineup includes five base sizes: 3 billion, 8 billion, 14 billion, 32 billion, and 70 billion parameters, available now on the AI code-sharing community Hugging Face, on Ollama, and through application programming interfaces (APIs) on Fireworks and Together AI. They are released under the Llama license terms, which allow commercial usage — so third-party enterprises could put them to work in paid products — up to 700 million monthly users, at which point a paid license from Meta is required. The company plans to release even larger models, up to 671 billion parameters, in the coming months.

Arora describes the company’s training approach, iterated distillation and amplification (IDA), as a novel alternative to traditional reinforcement learning from human feedback (RLHF) or teacher-model distillation. The core idea behind IDA is to allocate more compute so a model can generate improved solutions, then distill that improved reasoning process back into the model’s own parameters, effectively creating a feedback loop for capability growth. Arora likens the approach to the self-play strategy of Google DeepMind’s AlphaGo, applied to natural language.
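As a toy numerical illustration of that loop, not Deep Cogito’s actual training code, the sketch below "amplifies" a scalar skill level by spending extra compute on candidate answers, then "distills" the best candidate back into the policy. All numbers, function names, and update rules here are invented for illustration.

```python
import random

random.seed(0)  # deterministic toy run

def amplify(policy: float, extra_compute: int) -> float:
    """Amplification: spend more compute (e.g., search or longer reasoning)
    to find an answer better than the current policy alone would produce."""
    candidates = [policy + random.uniform(-0.1, 0.3) for _ in range(extra_compute)]
    return max(candidates)

def distill(policy: float, target: float, lr: float = 0.5) -> float:
    """Distillation: pull the policy's parameters toward the amplified result."""
    return policy + lr * (target - policy)

skill = 0.0
for step in range(10):
    improved = amplify(skill, extra_compute=8)  # better answer via extra compute
    skill = distill(skill, improved)            # bake the improvement back in
```

The feedback loop is the key point: each round's distilled policy becomes the starting point for the next round's amplification, so capability can ratchet upward without a fixed human or teacher ceiling.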

Each model supports both a standard mode for direct answers and a reasoning mode, in which the model reflects internally before responding.

Benchmarks and evaluations

The company shared a broad set of evaluation results comparing Cogito models to open-source peers across general knowledge, mathematical reasoning, and multilingual tasks.

Highlights include: Cogito models generally show their highest performance in reasoning mode, though some trade-offs emerge, particularly in mathematics. For instance, while Cogito 70B (Standard) matches or slightly exceeds peers in MATH and GSM8K, Cogito 70B (Reasoning) trails DeepSeek R1 in MATH by more than five percentage points (83.3% vs. 89.0%).

Tool calling built-in

In addition to general benchmarks, Deep Cogito evaluated its models on native tool-calling performance, a growing priority for agents and API-integrated systems.
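The article does not specify Cogito’s tool-calling format. The snippet below shows the generic JSON function-calling shape common to OpenAI-compatible APIs, which is the kind of structured exchange such evaluations measure; the tool schema and model output here are invented examples, not documented Cogito behavior.

```python
import json

# Invented example of a function-calling exchange (OpenAI-style shape),
# shown only to illustrate what "native tool calling" evaluates.

weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# A model with native tool calling emits a structured call instead of prose:
model_output = '{"name": "get_weather", "arguments": {"city": "San Francisco"}}'

call = json.loads(model_output)  # the agent framework parses and dispatches this
```

A model that lacks tool-call post-training tends to answer in free text instead of emitting the parseable structure above, which is why task-specific post-training matters for agent use cases.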

These improvements are attributed not only to model architecture and training data, but also to task-specific post-training, which many baseline models currently lack.

Looking Ahead

Deep Cogito plans to release larger-scale models in the coming months, including mixture-of-experts variants at 109B, 400B, and 671B parameter scales. The company will also continue updating its current model checkpoints with extended training.

The company positions its IDA methodology as a long-term path toward scalable self-improvement, removing dependence on human or static teacher models. Arora emphasizes that while performance benchmarks are important, real-world utility and adaptability are the true tests for these models — and that the company is just at the beginning of what it believes is a steep scaling curve. Deep Cogito’s research and infrastructure partnerships include teams from Hugging Face, RunPod, Fireworks AI, Together AI, and Ollama.

All released models are open source and available now.
