
New AI robotics gear has hit the market! Will ordinary users ever get their hands on it? Probably not for a decade, despite what the AI pundits say. Amid a flurry of AI robotics launches, Google has unveiled a Gemini-powered robotics model that handles household chores with ease. It takes AI out into the real world to interact with space, matter, and time, beyond the digital construct of tokens it usually inhabits.
Google DeepMind launched two models. The first, Gemini Robotics, is an advanced vision-language-action (VLA) model built on Gemini 2.0. The second, Gemini Robotics-ER, adds advanced spatial understanding and lets roboticists run their own programs using Gemini's embodied reasoning (ER). The new models from Google DeepMind can perform, with remarkable ease, general tasks that have so far been too complex for robots.
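For readers wondering what "vision-language-action" means in practice, here is a minimal, purely illustrative Python sketch of the control loop such a model runs. This is not Google's API; every class and function name below is hypothetical, and the "policy" is a toy stand-in for the large multimodal transformer a real system would use.

```python
# Illustrative sketch of a vision-language-action (VLA) control loop.
# All names are hypothetical; this is not Gemini Robotics' actual API.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    image: bytes        # raw camera frame
    instruction: str    # e.g. "pack the lunch box"


@dataclass
class Action:
    joint_deltas: List[float]   # per-joint position changes, in radians


class ToyVLAPolicy:
    """Stand-in for a VLA model: a real system tokenizes the image and
    instruction and decodes action tokens; this toy emits a fixed small
    motion so only the loop structure is visible."""

    def predict(self, obs: Observation) -> Action:
        return Action(joint_deltas=[0.01] * 7)  # 7-DoF arm


def control_loop(policy: ToyVLAPolicy, obs: Observation,
                 steps: int = 3) -> List[Action]:
    """Closed loop: observe -> predict action -> (execute) -> repeat."""
    executed = []
    for _ in range(steps):
        action = policy.predict(obs)
        # On real hardware, the action would be handed to a low-level
        # controller here; we just record it.
        executed.append(action)
    return executed


actions = control_loop(ToyVLAPolicy(),
                       Observation(image=b"", instruction="fold the origami"))
print(len(actions))  # one Action per control step
```

The point of the sketch is the shape of the loop: perception and language go in together, and low-level motor commands come out, step after step.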
These tasks generally involve multiple steps. In videos shared on social media, the Gemini Robotics models can be seen packing lunch boxes, folding origami, playing tic-tac-toe, arranging objects, and more. The company, which recently made a buzz by deleting its charter's clause on AI weaponization, stated in the blog that “the physical safety of robots and the people around them is a longstanding, foundational concern in the science of robotics. That's why roboticists have classic safety measures such as avoiding collisions, limiting the magnitude of contact forces, and ensuring the dynamic stability of mobile robots.” The blog further states, “Gemini Robotics-ER can be interfaced with these ‘low-level’ safety-critical controllers, specific to each particular embodiment. Building on Gemini’s core safety features, we enable Gemini Robotics-ER models to understand whether or not a potential action is safe to perform in a given context and to generate appropriate responses.
” Google also remarks that its new ASIMOV dataset “will help researchers to rigorously measure the safety implications of robotic actions in real-world scenarios.”

Meet Gemini Robotics: our latest AI models designed for a new generation of helpful robots. 🤖 Based on Gemini 2.0, they bring capabilities such as better reasoning, interactivity, dexterity and generalization into the physical world. 🧵 https://t.co/EXRJrmxGxl pic.twitter.com/MeEkRLomXm — Google DeepMind (@GoogleDeepMind) March 12, 2025

Gemini Robotics can solve multi-step tasks that require significant dexterity, such as folding origami 📄 packing a lunch box 🥗 and more. See it in action ↓ pic.twitter.com/WgHfQz8n9N — Google DeepMind (@GoogleDeepMind) March 12, 2025

⚙️ It goes head to head with our team to wrap a timing belt around gears - a feat that’s harder than you think ↓ pic.twitter.com/Q9D5s7Md7d — Google DeepMind (@GoogleDeepMind) March 12, 2025

Robots must be able to interact seamlessly with humans. 🤝 When it’s interrupted or situations change, Gemini Robotics can adjust its actions on the fly. This level of steerability will empower us to better work with future robot assistants in the home, at work and beyond. pic.twitter.com/3JuPAifFCX — Google DeepMind (@GoogleDeepMind) March 12, 2025

They also accomplished tasks not seen in training, showing the ability to generalize to new scenarios. 💡 We show that on average, Gemini Robotics more than doubles performance on a comprehensive generalization benchmark - compared to other state-of-the-art... pic.twitter.com/rqgRiM2hRs — Google DeepMind (@GoogleDeepMind) March 12, 2025

Our model Gemini Robotics-ER allows roboticists to tap into the embodied reasoning of Gemini. 🌐 For example, if a robot came across a coffee mug, it could detect it, use ‘pointing’ to recognize parts it could interact with - like the handle - and recognize objects to avoid when... pic.twitter.com/HQMXvWLoJ5 — Google DeepMind (@GoogleDeepMind) March 12, 2025

Our ultimate goal is to develop AI that could work for any robot - no matter its shape or size. This includes bi-arm platforms like ALOHA 2 and Franka 🦾 but also more complex embodiments such as the Apollo developed by @Apptronik. pic.twitter.com/dPd5JtyWpo — Google DeepMind (@GoogleDeepMind) March 12, 2025

We're partnering with @Apptronik to build the next generation of humanoid robots with Gemini 2.0 - and opening our Gemini Robotics-ER model to trusted testers such as Agile Robots, @AgilityRobotics, @BostonDynamics and @EnchantedTools. Find out more → https://t.co/EXRJrmxGxl pic.twitter.com/dtEV6DX0A8 — Google DeepMind (@GoogleDeepMind) March 12, 2025

Cover: Patrick Gawande / Mashable India.