AI creates infinite gaming worlds

PLUS: Runway brings 3D cinematic control to AI video generation

Welcome, AI enthusiasts.

The line between imagination and gameplay just got much thinner, thanks to an AI architect taking gaming into new territory.

A new model called Oasis generates playable, open-world environments in real-time — and traditional game development may never be the same. Let’s get into it…

In today’s AI rundown:

  • Oasis AI model generates open-world games

  • Runway brings 3D control to video generation

  • Turn product photos into creative logo mockups

  • Claude gets new PDF vision capabilities

  • 6 new AI tools & 4 new AI jobs

  • More AI & tech news

Read time: 4 minutes

LATEST DEVELOPMENTS

OASIS

Image source: Etched

The Rundown: AI labs Decart and Etched just launched Oasis, an AI model that generates playable video game environments in real-time — alongside a playable Minecraft-style demo.

The details:

  • Oasis responds to keyboard and mouse inputs to generate game environments frame-by-frame, including physics, item interactions, and dynamic lighting.

  • Running at 20 FPS on current hardware, Oasis operates 100x faster than traditional AI video generation models.

  • The companies are releasing the code, a 500M parameter model for local testing, and a playable demo of a larger version.

  • Future versions will run in 4K resolution on Etched's upcoming Sohu chip, with the ability to scale to handle 10x users and massive 100B+ parameter models.

Why it matters: While text-to-video has grabbed headlines, Oasis represents something deeper — real-time interactive worlds generated entirely by AI. This could revolutionize how we think about game development and virtual environments, even potentially eliminating the need for traditional game engines altogether.

TOGETHER WITH ADA

The Rundown: Ada is launching a new multi-part webinar series featuring insights from industry leaders at NTT and ZoomInfo, plus predictions from Microsoft, designed to help you take customer service to the next level using AI.

In the first episode, you'll discover how to:

  • Evaluate your current AI maturity

  • Understand the five key dimensions to build a high-performing AI customer service program

  • Accelerate your company’s AI adoption and establish a competitive advantage

The webinar series kicks off on Nov. 7 — secure your spot today.

RUNWAY

Image source: Runway

The Rundown: Runway just unveiled Advanced Camera Control for its Gen-3 Alpha Turbo model, bringing new precision to AI-generated video outputs with features that mirror traditional filmmaking techniques and capabilities.

The details:

  • Users can now precisely control camera movements, including panning, zooming, and tracking shots with adjustable intensity.

  • The system maintains 3D consistency as users navigate through generated scenes, preserving depth and spatial relationships.

  • The update hints at Runway's progress in developing ‘world models’ — AI systems that can simulate realistic physical environments.

  • The release also follows Runway's recent partnership with Lionsgate, suggesting potential applications in major film production could be on the way.

Why it matters: While AI video quality has taken mind-blowing leaps, the tooling to reliably and accurately shape outputs hasn’t scaled with it—until now. This upgrade signals the start of AI video generation transitioning from luck-based ‘slot machine’ outputs into a real tool that creators can confidently control.

AI TRAINING

The Rundown: Ideogram Canvas helps you transform any product photo into a professional logo mockup.

Step-by-step:

  1. Access Ideogram and select Canvas in the left sidebar.

  2. Upload your product photo using the "Upload" button.

  3. Use Magic Fill to select where you want your logo to appear.

  4. Describe your logo vision and generate multiple design options.

Pro tip: Save multiple versions to compare different logo placements and gather feedback from your team. You can also type simple prompts to generate consistent AI characters in any setting.

PRESENTED BY ASSEMBLY AI

The Rundown: AssemblyAI just released Universal-2, its most advanced speech-to-text model yet — delivering even greater accuracy and precision for audio transcription.

With Universal-2, you can:

  • Achieve 21% better accuracy in transcribing alphanumerics

  • Improve proper noun recognition by 24%, capturing names and places precisely

  • Enhance transcript formatting by 15% for clearer, more readable text

Explore Universal-2 today and start experiencing flawless transcriptions.

ANTHROPIC

Image source: Anthropic

The Rundown: Anthropic just released PDF support for its Claude 3.5 Sonnet model in public beta, unlocking the ability to analyze both text and visual elements like charts and images within large documents.

The details:

  • The system processes PDFs in three stages — extracting text, converting pages to images, and performing a combined visual-textual analysis.

  • The model supports documents up to 32MB and 100 pages, handling everything from financial reports to legal documents.

  • The feature can also be integrated with other Claude features like prompt caching and batch processing.

  • The vision capabilities are available both through Anthropic’s Claude platform and via direct API access in applications.
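The three-stage process above maps to a single Messages API request: the PDF travels as a base64-encoded `document` content block alongside the text prompt. Here's a minimal sketch of building that request body with only the standard library — note the model ID and field names reflect the API at the time of writing and may change, so check Anthropic's current docs before relying on them:

```python
import base64
import json

def build_pdf_request(pdf_bytes: bytes, question: str) -> dict:
    """Build a Claude Messages API request body pairing a PDF with a text prompt."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # assumed model ID; verify against current docs
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {   # the PDF itself, sent inline as base64
                    "type": "document",
                    "source": {
                        "type": "base64",
                        "media_type": "application/pdf",
                        "data": base64.b64encode(pdf_bytes).decode("ascii"),
                    },
                },
                # the question about the document
                {"type": "text", "text": question},
            ],
        }],
    }

# Example with placeholder bytes — real use would read an actual PDF file
body = build_pdf_request(b"%PDF-1.4 ...", "Summarize the charts in this report.")
print(json.dumps(body)[:80])
```

You'd POST this body to the `/v1/messages` endpoint with your API key and the PDF beta header enabled; because the document is just another content block, features like prompt caching apply to it the same way they do to text.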

Why it matters: Claude’s ability to handle large documents was already a game-changer — but viewing and understanding imagery within them takes it to a whole new level. This upgrade transforms Claude into a more comprehensive analyst for industries like healthcare or finance, where critical info is often visual.

NEW TOOLS & JOBS

  • 🎥 Kling AI - Next-gen AI creative studio for image and video generation

  • 🎁 GyftPro - AI-powered gift recommendations to find the perfect present for any occasion

  • 📈 Truva - Supercharge your sales team with AI-powered CRM updates, follow-up emails, action items, coaching, and more

  • 📝 NoteThisDown - Transform handwritten notes into digital text, with seamless integration into Notion

  • 🥝 Kiwi Fitness - AI-powered personalized fitness training

QUICK HITS

Chinese military researchers reportedly used Meta's open-source Llama model to develop ChatBIT, an AI tool designed for military intelligence analysis and strategic planning.

Microsoft teased that its ‘Copilot Vision’ feature is coming ‘very soon,’ enabling the AI assistant to see and understand a user’s browser content and behavior.

Google released ‘Grounding with Google Search’ for its Gemini API and AI Studio, letting developers integrate real-time search results into model responses for reduced hallucinations and improved accuracy.

Disney launched a new ‘Office of Technology Enablement’ group responsible for managing AI and mixed reality adoption within the company, with the goal of ensuring the tech is deployed responsibly across the media giant’s divisions.

Amazon has reportedly delayed the rollout of its AI-infused Alexa to 2025, as testing has faced technical challenges, including hallucinations and deteriorating performance on basic tasks.

Nvidia researchers introduced DexMimicGen, a system that can automatically generate thousands of robotic training demonstrations from as few as 5 examples and has a 90% success rate on real-world humanoid tasks.

THAT’S A WRAP

That's it for today!

Before you go, we’d love to know what you thought of today's newsletter to help us improve The Rundown experience for you.


See you soon,

Rowan, Joey, Zach, and Alvaro—aka The Rundown Team
