- The Rundown AI
China's open-source AI surge continues
PLUS: Microsoft launches ‘Copilot Mode’ for agentic browsing
Good morning, AI enthusiasts. While the AI world awaits the imminent launch of OpenAI’s open-source model and GPT-5, Chinese labs continue to dominate the headlines.
New launches from Z.ai and Alibaba just raised the open-source bar again in both language and video models — continuing a relentless pace of development out East that is shifting the AI landscape faster than ever.
Note: We just opened multi-seat plans for our AI University. If you're looking to build your team's AI upskilling learning path, reach out here.
In today’s AI rundown:
Z.ai’s new open-source powerhouse
Microsoft’s ‘Copilot Mode’ for agentic browsing
Replace any character voice in your videos
Alibaba’s Wan2.2 pushes open-source video forward
4 new AI tools & 4 job opportunities
LATEST DEVELOPMENTS
Z AI

Image source: Z.ai
The Rundown: Chinese startup Z.ai (formerly Zhipu) just released GLM-4.5, an open-source agentic AI model family that undercuts DeepSeek's pricing while nearing the performance of leading models across reasoning, coding, and autonomous tasks.
The details:
GLM-4.5 combines reasoning, coding, and agentic abilities into a single 355B-parameter model, with hybrid thinking that balances speed against task difficulty.
Z.ai claims 4.5 is now the top open-source model worldwide, and ranks just behind industry leaders o3 and Grok 4 in overall performance.
The model excels in agentic tasks, beating out top models like o3, Gemini 2.5 Pro, and Grok 4 on benchmarks while hitting a 90% success rate in tool use.
In addition to releasing GLM-4.5 and GLM-4.5-Air with open weights, Z.ai also open-sourced its ‘slime’ training framework for others to build on.
Why it matters: Qwen, Kimi, DeepSeek, MiniMax, Z.ai… the list goes on. Chinese labs are putting out better and better open models at a blistering pace, both closing the gap with frontier systems and pressuring labs like OpenAI to keep their upcoming releases a step ahead of the field.
TOGETHER WITH GUIDDE
The Rundown: Stop wasting time on repetitive explanations. Guidde’s AI helps you create stunning video guides in seconds, 11x faster.
Use Guidde to:
Auto-generate step-by-step video guides with visuals, voiceovers, and a CTA
Turn boring docs into visual masterpieces
Save hours with AI-powered automation
Share or embed your guide anywhere
MICROSOFT

Image source: Microsoft
The Rundown: Microsoft just released ‘Copilot Mode’ in Edge, bringing the AI assistant directly into the browser to search across open tabs, handle tasks, and proactively suggest and take actions.
The details:
Copilot Mode integrates AI directly into Edge's new tab page, bringing features like voice input and multi-tab analysis into the browsing experience.
The feature launches free for a limited time on Windows and Mac with opt-in activation, though Microsoft hinted at eventual subscription pricing.
Copilot will eventually be able to access users’ browser history and credentials (with permission), allowing for actions like completing bookings or errands.
Why it matters: Microsoft Edge now enters the agentic browser wars, with competitors like Perplexity’s Comet and The Browser Company’s Dia also launching within the last few months. While agentic tasks are still rough around the edges across the industry, the incorporation of active AI involvement in the browsing experience is clearly here to stay.
AI TRAINING

The Rundown: In this tutorial, you will learn how to transform AI-generated videos by replacing their default voices with custom voices using Google Veo, audio conversion tools, and ElevenLabs’ voice cloning.
Step-by-step:
Create your AI video using Google Veo and download the MP4 file.
Convert the video to MP3 using any video-to-audio extraction tool.
Go to ElevenLabs’ Voice Changer, upload your MP3, and generate speech with your chosen voice.
Import both the original video and the new audio into CapCut, mute the original track, and export your video with the custom voice.
Pro tip: Create voice clones in ElevenLabs to maintain consistent character voices across all your video projects.
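If you prefer the command line to a web-based converter, the MP3 extraction step can also be done with ffmpeg. The sketch below assumes ffmpeg is installed locally; the filenames and the `extract_audio_cmd` helper are placeholders for illustration:

```python
# Sketch: extract the audio track from a Veo-generated MP4 as an MP3
# via ffmpeg (assumes ffmpeg is installed; filenames are placeholders).
import subprocess

def extract_audio_cmd(video_path: str, audio_path: str) -> list[str]:
    """Build an ffmpeg command that drops the video stream (-vn)
    and encodes the audio track as MP3 (libmp3lame, VBR quality 2)."""
    return [
        "ffmpeg",
        "-i", video_path,        # input video from Veo
        "-vn",                   # no video in the output
        "-acodec", "libmp3lame", # MP3 encoder
        "-q:a", "2",             # high-quality variable bitrate
        audio_path,
    ]

# Uncomment to run the conversion locally:
# subprocess.run(extract_audio_cmd("veo_clip.mp4", "veo_clip.mp3"), check=True)
```

The resulting MP3 is what you upload to ElevenLabs’ Voice Changer in the next step.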
PRESENTED BY PREZI
The Rundown: Prezi AI doesn’t just make slides. It builds persuasive narratives with a dynamic format designed to hold attention and help your message land. Whether you're pitching or presenting to your team, Prezi transforms your ideas into presentations that actually perform.
With Prezi, you can:
Go from rough ideas or PDFs to standout presentations in seconds
Engage your audience with a format proven to be more effective than slides
Get AI-powered suggestions for content, structure, and design
Try Prezi AI for free and beat boring slides.
ALIBABA

Image source: Alibaba
The Rundown: Alibaba's Tongyi Lab just launched Wan2.2, a new open-source video model that brings advanced cinematic capabilities and high-quality motion for both text-to-video and image-to-video generations.
The details:
Wan2.2 uses two specialized "experts" — one creates the overall scene while the other adds fine details, keeping the system efficient.
The model surpassed top rivals, including Seedance, Hailuo, Kling, and Sora, in aesthetics, text rendering, camera control, and more.
It was trained on 66% more images and 83% more videos than Wan2.1, enabling it to better handle complex motion, scenes, and aesthetics.
Users can also fine-tune video aspects like lighting, color, and camera angles, unlocking more cinematic control over the final output.
Why it matters: China’s open-source flurry doesn’t just apply to language models like GLM-4.5 above — it’s across the entire AI toolbox. While Western labs are debating closed versus open models, Chinese labs are building a parallel open AI ecosystem, with network effects that could determine which path developers worldwide adopt.
QUICK HITS
🎬 Runway Aleph - Edit, transform, and generate video content
🧠 Qwen3-Thinking - Alibaba’s AI with enhanced reasoning and knowledge
🌎 Hunyuan3D World Model 1.0 - Tencent’s open world generation model
📜 Aeneas - Google’s open-source AI for restoring ancient texts
📱 Databricks - Senior Digital Media Manager
🗂️ Parloa - Executive Assistant to the CRO
🤝 UiPath - Partner Sales Executive
🧑‍💻 xAI - Software Engineer, Developer Experience
Alibaba debuted Quark AI glasses, a new line of smart glasses launching by the end of the year, powered by the company’s Qwen model.
Anthropic announced weekly rate limits for Pro and Max users due to “unprecedented demand” from Claude Code, saying the move will impact under 5% of current users.
Tesla and Samsung signed a $16.5B deal for the manufacturing of Tesla’s next-gen AI6 chips, with Elon Musk saying the “strategic importance of this is hard to overstate.”
Runway signed a new partnership agreement with IMAX, bringing AI-generated shorts from the company’s 2025 AI Film Festival to big screens at ten U.S. locations in August.
Google DeepMind CEO Demis Hassabis revealed that Google processed 980 trillion (!) tokens across its AI products in June, an over 2x increase from May.
Anthropic published research on automated agents that audit models for alignment issues, using them to spot subtle risks and misbehaviors that humans might miss.
COMMUNITY
Join our next workshop this Friday, August 1st, at 4 PM ET with Dr. Alvaro Cintas, The Rundown’s AI professor. By the end of this workshop, you’ll have practical strategies for getting AI to do exactly what you want.
RSVP here. Not a member? Join The Rundown University on a 14-day free trial.
That's it for today! Before you go, we’d love to know what you thought of today's newsletter to help us improve The Rundown experience for you.
See you soon,
Rowan, Joey, Zach, Alvaro, and Jason—The Rundown’s editorial team