Google Gemini Live Gets Smarter: Camera & Screen Sharing Arrives On Pixel 9, Samsung Galaxy S25

Google is expanding the reach of its AI assistant with a major new update: Gemini Live now supports both screen and camera sharing on Pixel 9 and Samsung Galaxy S25 devices. This rollout marks the next step in Google's Project Astra initiative, the company's vision for building a truly helpful AI agent that can see and understand the world around you.

Gemini Can Now See Your Screen

Until now, Gemini's interactions were limited to inputs like voice, images, PDFs, and YouTube links. That changes with the new screen-sharing feature. By tapping the "Share screen with Live" chip in the Gemini overlay, users can invite the assistant to view their display in real time. Once granted permission, Gemini can follow your navigation, interpret what's on screen, and respond to questions accordingly.

Android will prompt users to confirm full-screen sharing with the Google app, which powers Gemini. Unlike previous implementations, there's no option to share just a single app; it's all or nothing. A new call-style notification in the status bar shows a live session timer, and tapping it brings up the full Gemini Live interface. A short vibration signals that the AI is ready to interact.

A New Way To See The World With Gemini

In addition to screen sharing, users can now engage Gemini Live via their phone's camera. Launching the rear camera lets you get real-time feedback on objects or scenes in your surroundings. A live preview dominates the screen, with a simple toggle in the corner to switch to the front-facing camera if needed.

"For better results, capture objects with steady movements," Google advises. Users are also reminded: "To interrupt Gemini, tap or start talking." These visual interactions require the screen to remain active, ensuring the assistant keeps receiving video input throughout your query.

Astra-Powered Intelligence Expands Access

Project Astra, which debuted at Google I/O 2024, serves as the backbone for these capabilities. Built by DeepMind, it aims to create a "universal AI agent that is helpful in everyday life." Gemini 2.0's enhancements, announced in December, laid the groundwork for advanced multimodal features like these.

This update is part of a broader Pixel Drop for Pixel 9 owners and is also rolling out to Galaxy S25 users. According to Google, the new features are "starting with all Gemini app users" on those devices. To enable the functionality immediately, users are encouraged to force-stop and restart the Gemini or Google app.

Soon, the update will also reach all Gemini Advanced subscribers as part of the $19.99-per-month Google One AI Premium plan, bringing next-gen AI interaction to even more Android users.