Google Gemini Can Now Analyze Videos: New Feature Launched

Google officially announced the global rollout of the long-awaited video analysis feature in its AI assistant, Gemini, on July 19, 2025. After several months of limited testing, previously reported by 9to5Google, the capability is now available to all users, marking Gemini's transition into the ranks of truly multimodal AI systems. Users can now interact with Gemini not only through text and images but also by uploading video files (up to 5 minutes long) or providing links to YouTube videos for in-depth analysis.

The assistant, powered by Google's most capable models, can understand both the context and the content of a video, which opens up numerous new use cases. For example, a user can upload a long lecture and ask Gemini for a brief summary, or show it a repair video and receive step-by-step instructions in text form. The assistant can also locate specific moments in a video from a description ("find the moment where inflation is discussed") or identify objects and people.

This launch is a direct competitive response to the multimodal capabilities of OpenAI's GPT-4o and confirms that the future of AI assistants lies in their ability to understand and process all types of information, not just text. The feature significantly expands Gemini's utility for education, creativity, and everyday tasks.
