
Google Releases Veo 3.1 Video Model With Improved Controls and Longer Video Durations
Google released the first major update to its artificial intelligence (AI) video generation model, Veo 3, on Wednesday. Dubbed Veo 3.1, the updated model arrives less than five months after Veo 3’s release and brings significant improvements in prompt adherence and granular control over the final output. Users can now add reference images to guide a video, and even supply its first and final frames and let the AI connect the dots. The model is not currently available in the Gemini app.
Veo 3.1 Comes With Big Improvements in Prompt Adherence
In a post on X (formerly known as Twitter), the official Google DeepMind handle announced the release of the Veo 3.1 AI video model. The company said that in the five months since Veo 3’s launch in May, users have generated more than 275 million videos, highlighting its popularity. The new update, the tech giant claims, was shaped by user feedback and brings more artistic controls. Currently, Veo 3.1 is available via the Flow app and through the Gemini API for developers.
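For developers, a basic text-to-video request through the Gemini API follows Google’s documented long-running-operation pattern in the google-genai Python SDK. The sketch below illustrates that flow; the model identifier veo-3.1-generate-preview is an assumption, as the article does not name the exact model string.

```python
# Minimal sketch of a Veo text-to-video request via the Gemini API
# (google-genai Python SDK). The model name "veo-3.1-generate-preview" is an
# assumption; check Google's model list for the identifier that is live for
# your account.
import time

from google import genai
from google.genai import types

client = genai.Client()  # reads the Gemini API key from the environment

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed Veo 3.1 model ID
    prompt="A slow dolly shot through a rain-soaked neon market at night",
    config=types.GenerateVideosConfig(aspect_ratio="16:9"),
)

# Video generation is a long-running operation; poll until it completes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("veo_clip.mp4")
print("Saved veo_clip.mp4")
```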
Broadly, there are three new features. The first is dubbed “Ingredients to Video,” which lets users upload multiple reference images while generating a video. The AI analyses the images and integrates them into the output. Google says this will allow users to generate videos that are closer to the creator’s vision.
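If “Ingredients to Video” is exposed in the SDK in the same style as its other image inputs, a request might look roughly like the sketch below. The reference_images field and the VideoGenerationReferenceImage type are assumed names for the new control, not confirmed by the article, and may differ from the shipped API.

```python
# Hypothetical "Ingredients to Video" request: guide generation with reference
# images. The `reference_images` field and `VideoGenerationReferenceImage`
# type are ASSUMED names for the new control and may differ in the real SDK.
from google import genai
from google.genai import types

client = genai.Client()

def load_image(path: str) -> types.Image:
    """Read a local PNG into the SDK's Image type."""
    with open(path, "rb") as f:
        return types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed Veo 3.1 model ID
    prompt="The character from the first reference walks through the set "
           "shown in the second reference",
    config=types.GenerateVideosConfig(
        reference_images=[  # assumed field name for Ingredients to Video
            types.VideoGenerationReferenceImage(
                image=load_image("character.png"), reference_type="asset"),
            types.VideoGenerationReferenceImage(
                image=load_image("set_design.png"), reference_type="asset"),
        ],
    ),
)
# Poll `operation` to completion as in the first sketch above.
```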
The second feature is “Frames to Video”: users supply a starting and an ending image, and the model generates a video that connects the two points. This is intended to help users shape the narrative of AI-generated videos with creative transitions.
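A “Frames to Video” request would plausibly reuse the SDK’s existing image-to-video input for the starting frame. In the sketch below, the image argument follows that documented pattern, while last_frame is an assumed name for the new ending-frame control and may not match the real API.

```python
# Hypothetical "Frames to Video" request: supply a starting and an ending
# frame and let the model generate the connecting shot. The `image` argument
# follows the SDK's image-to-video pattern; `last_frame` is an ASSUMED field
# name for the new ending-frame control.
from google import genai
from google.genai import types

client = genai.Client()

with open("frame_first.png", "rb") as f:
    first = types.Image(image_bytes=f.read(), mime_type="image/png")
with open("frame_last.png", "rb") as f:
    last = types.Image(image_bytes=f.read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed Veo 3.1 model ID
    prompt="The camera glides from the doorway to the window as dusk falls",
    image=first,  # starting frame, using the documented image-to-video input
    config=types.GenerateVideosConfig(
        last_frame=last,  # assumed name for the ending-frame control
    ),
)
# Poll `operation` to completion as in the first sketch above.
```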
Finally, the third feature is called Extend. Users can upload a clip, and Veo 3.1 continues the shot, building on top of it. Google says this mode will let users generate longer videos that run past a minute. The new footage is generated from the final second of the uploaded clip to preserve continuity, which is useful when a longer establishing shot is needed.
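An “Extend” request presumably takes the source clip as an input alongside the prompt. The sketch below passes it through a video argument, which is purely an assumption about how the control is surfaced in the SDK.

```python
# Hypothetical "Extend" request: continue an existing clip from its final
# second. Passing the source clip through a `video` argument is an ASSUMPTION
# about how the control is exposed; the real parameter may differ.
from google import genai
from google.genai import types

client = genai.Client()

with open("establishing_shot.mp4", "rb") as f:
    source_clip = types.Video(video_bytes=f.read(), mime_type="video/mp4")

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",  # assumed Veo 3.1 model ID
    prompt="Hold the wide shot as the storm clouds keep rolling in",
    video=source_clip,  # assumed way to pass the clip being extended
)
# Poll `operation` to completion as in the first sketch above.
```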
Notably, while the Flow app is available to users with the Google AI Pro and Google AI Ultra subscriptions, developers can access Veo 3.1 via the Gemini API. Pricing has been kept the same as Veo 3: each second of generated video is charged at $0.40 (roughly Rs. 35). Additionally, the Veo 3.1 Fast model costs developers $0.15 (roughly Rs. 13) per second of generation.
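At those per-second rates, the cost of a clip scales linearly with its length. The quick calculation below illustrates the arithmetic using the quoted prices, ignoring taxes and any free-tier allowances.

```python
# Back-of-the-envelope cost of Veo 3.1 generation at the quoted per-second
# rates (USD). Indian-rupee figures above are the article's rough conversions.
VEO_31_RATE = 0.40       # USD per generated second (Veo 3.1)
VEO_31_FAST_RATE = 0.15  # USD per generated second (Veo 3.1 Fast)

def generation_cost(seconds: float, rate: float) -> float:
    """Return the USD cost of generating `seconds` of video at `rate` USD/sec."""
    return seconds * rate

# An 8-second clip: $3.20 on Veo 3.1, $1.20 on Veo 3.1 Fast.
print(generation_cost(8, VEO_31_RATE))        # 3.2
print(generation_cost(8, VEO_31_FAST_RATE))   # 1.2

# A one-minute shot assembled with Extend: $24.00 vs. $9.00.
print(generation_cost(60, VEO_31_RATE))       # 24.0
print(generation_cost(60, VEO_31_FAST_RATE))  # 9.0
```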
