Google's new AI video upgrade

PLUS: Turn photos into personal branding videos with Veo 3.1

Good morning, AI enthusiasts. Google just released Veo 3.1, the company's upgraded AI video model — but after OpenAI's viral Sora 2 explosion just weeks ago, the hype doesn't feel the same.

With new editing features and general upgrades, Google is targeting filmmakers and creatives over viral feeds. The problem? In today's attention economy, being useful might matter less than being memorable.

In today’s AI rundown:

  • Google’s upgraded Veo 3.1 video model

  • Anthropic’s fast, low-cost Claude 4.5 Haiku

  • Turn photos into personal branding videos with Veo 3.1

  • Google’s Gemma-based AI finds new cancer treatment

  • 4 new AI tools, community workflows, and more

LATEST DEVELOPMENTS

GOOGLE

Image source: Google

The Rundown: Google just rolled out Veo 3.1, a new video generation model that promises quality improvements, better realism, upgraded image-to-video capabilities, and a series of new editing features aimed at filmmakers and creative control.

The details:

  • Veo 3.1 now accepts up to three reference images to maintain character consistency across scenes.

  • Users can also provide start and end frames, with 3.1 generating smooth transitions between them and matching audio.

  • New scene extension capabilities allow users to create videos up to one minute long by continuously adding segments that match the previous clip.

  • Both standard and fast versions of 3.1 are rolling out across Google’s ecosystem, including its Flow filmmaking tool, Vertex AI, and Gemini.

Why it matters: After Sora 2 raised the AI video bar in a massively viral way just weeks ago, Veo 3.1 doesn’t hit with the same hype — despite what the benchmarks may say. The bigger upgrade may be in editing, with capabilities like scene extension and start/end frames giving creators the extra control needed to take outputs to the next level.

TOGETHER WITH VANTA

The Rundown: Tune in on Nov. 19 to VantaCon, where leaders from Vanta, Anthropic, 1Password, Sublime Security, and more tackle the biggest challenges security professionals are facing today, and the opportunities new technologies and trends are uncovering for the future.

By joining VantaCon, you’ll:

  • Build connections with 400+ peers in the GRC security space

  • Learn best practices and insights from GRC professionals across the industry

  • Get hands-on with labs and learning opportunities to sharpen your skillset

Register today to watch the VantaCon livestream and participate in the virtual Q&A.

ANTHROPIC

Image source: Anthropic

The Rundown: Anthropic released Claude Haiku 4.5, the smallest variant of its latest model family, delivering performance comparable to the company’s flagship model from just months ago at a significantly lower price and faster speed.

The details:

  • The new model matches Claude Sonnet 4's coding abilities from May while charging just $1 per million input tokens versus Sonnet's $3 pricing.

  • Despite its size, Haiku beats out Sonnet 4 on benchmarks like computer use, math, and agentic tool use — also nearing GPT-5 on certain tests.

  • Enterprises can orchestrate multiple Haiku agents working in parallel, with the recently released Sonnet 4.5 acting as a coordinator for complex tasks.

  • Haiku 4.5 is available to all Claude tiers (including free users), within the company’s Claude Code agentic development tool and via API.

Why it matters: With Haiku, the promise of ‘intelligence too cheap to meter’ still seems to be on track. Anthropic’s latest release shows how quickly the AI industry’s economics are shifting, with a small, low-cost model now delivering performance that commanded premium pricing just a few months ago.

AI TRAINING

The Rundown: In this tutorial, you will learn how to create professional personal branding videos using Google’s new Veo 3.1 model in Flow, transforming AI-generated photos into polished video content without ever being on camera.

Step-by-step:

  1. Generate your headshot and workspace in Google Gemini using prompts like "photo of this person [upload reference], casual denim shirt, looking slightly right" and "modern office, city view, minimalist desk"

  2. Open Google Flow, create a new project, switch from "Text to Video" to "Ingredients to Video," and upload both generated images

  3. Prompt your first scene: "Using the uploaded photo as me, sitting at a desk, smiling at the camera while sipping coffee, then typing on a laptop. Warm morning light, add soft acoustic music"

  4. Click "Add to Scene", then "+" to Extend with: "I finish typing, look at the camera saying 'Ready to collaborate? DM me!' Fade to text overlay with title"

  5. Review timeline, ensure smooth transitions between clips, then click the download icon to export your complete branding video

Pro tip: Use the editing tool to select areas and insert objects within your video — it’s perfect for creating product demo videos or training content without recording.

PRESENTED BY DELVE

The Rundown: Delve's brand-new AI security questionnaire tool uses state-of-the-art GraphRAG to help you fly through security reviews with unparalleled accuracy — built by AI experts from Stanford and MIT.

Delve's AI agents:

  • Pull evidence, resolve conflicts, and reason across your entire policy graph

  • Interrogate your infrastructure and draft bulletproof responses automatically

  • Navigate reviews with F50 companies while saving dozens of hours

Book a demo with Delve for $1,000 off compliance certifications and unlimited free AI questionnaire access.

AI RESEARCH

Image source: Reve / The Rundown

The Rundown: Google and Yale University researchers introduced C2S-Scale 27B, a foundation model — based on Google’s open Gemma family — that discovered a previously unknown cancer treatment pathway, later validated in living cells.

The details:

  • The C2S AI system reads cellular data like a language, capturing how individual cells will behave and respond to treatments.

  • Researchers tasked the system with finding compounds that can make tumors more visible to the immune system, but only when certain signals were present.

  • It identified silmitasertib, an existing drug never before linked to helping the immune system spot cancer cells.

  • Laboratory tests confirmed the AI's prediction, with the drug combination making tumor cells about 50% more visible to immune defenses.

Why it matters: The ‘novel’ discoveries from AI systems are starting to trickle in — something many skeptics thought impossible. With Google also finding that “biological models follow clear scaling laws”, we could be in for an absolutely wild period of scientific progress as models continue to get larger and more capable.

QUICK HITS

  • 💨 Claude 4.5 Haiku - Anthropic’s new small, cost-efficient model

  • 🎥 Veo 3.1 - Google’s upgraded video generation model

  • 📽️ Flow - Google’s filmmaking tool, with new artistic control

  • 🎨 MAI-Image-1 - Microsoft’s first in-house image generation model

  • MIT introduced Recursive Language Models, a technique allowing models to process long contexts by recursively calling themselves, with an RLM-powered GPT-5 mini outperforming GPT-5 by 114% on long-context benchmarks.

  • Apple announced its M5 chip, featuring AI-focused upgrades including specialized processors to make AI tasks 4x faster across its product lines.

  • Runway introduced Apps, a new collection of streamlined video editing tools with features like element removal, product reshoots, dialogue adding, and more.

  • The International AI Safety Report provided a ‘First Key Update’ to its 2025 report, saying that performance, adoption, and oversight concerns are increasing safety risks.

  • Meta announced plans for a new $1.5B, 1GW AI-optimized data center in El Paso, the company’s 29th data center and third in Texas.

  • OpenAI rolled out its low-cost ChatGPT Go tier to new regions, now available in 89 countries.

COMMUNITY

Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.

Today’s workflow comes from reader Jason B. in Morgantown, WV:

"As an educator and department director, the most useful workflow we have created is generating accessible and usable transcripts for videos. Our instructors often rely on YouTube for videos, but the auto-transcription there does not meet accessibility standards by a country mile. We throw the YouTube "word salad" into a couple of prompts we've developed and generate high-quality transcripts for lengthy videos."

How do you use AI? Tell us here.

That's it for today!

Before you go, we’d love to know what you thought of today's newsletter to help us improve The Rundown experience for you.


See you soon,

Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown
