Good morning, {{ first_name | AI enthusiasts }}. After a wave of departures, including key members of the founding team, Elon Musk’s xAI is stepping on the gas.
The company just hosted its first all-hands meeting since the SpaceX merger (and posted it online), covering everything from the much-talked-about organizational restructure to an ambitious plan to build deep space data centers launched from the Moon.
In today’s AI rundown:
xAI’s restructure, product roadmap, Moon ambitions
Z.ai’s GLM-5 — the new open-source king
Turn SOP docs into talking-head training videos
Anthropic details Claude Opus 4.6’s sabotage risk
4 new AI tools, community workflows, and more
LATEST DEVELOPMENTS
XAI

Image source: xAI
The Rundown: xAI hosted its first all-hands since merging with SpaceX, with CEO Elon Musk outlining a major reorganization, product roadmap updates, and lunar ambitions, all aimed at outpacing rivals and taking xAI to the forefront of AI.
The details:
Musk acknowledged the departure of team members and outlined a new structure for xAI, saying the move was meant to be “more effective” at scale.
The new structure has four core teams: Grok (chat and voice), a coding-focused unit, the Imagine team, and Macrohard (agents emulating companies).
He also spoke about future infrastructure plans with SpaceX, including setting up AI satellite factories on the Moon — using lunar resources and solar energy.
Musk added that SpaceX will also build an electromagnetic mass driver to “shoot” AI satellites/components for massive deep space data centers.
Why it matters: Musk is no stranger to audacious promises, and his timelines often shift. But by broadcasting xAI’s tightened focus, product roadmap, and ambitious lunar plans, he’s making sure the world knows he’s aiming to build advanced AI in a way no other AI giant is — scaling beyond Earth’s resource limits instead of draining them.
TOGETHER WITH MODULATE
The Rundown: Voice-specialized AI is here, and unlike OpenAI, xAI, and other leaders, it understands conversations and meaning — not just transcripts. Velma 2.0 is the world’s first voice-native AI designed to provide human-level, real-time conversation intelligence.
By orchestrating 100+ sub-models purpose-built for voice, Velma allows you to:
Decode intent, emotion, stress, and authenticity in messy, multilingual audio
Analyze audio 100x faster, cheaper, and more accurately than with LLMs
Get traceable outputs with an explainable path
Try Velma for yourself to understand the true meaning of your conversations.
Z.AI

Image source: Artificial Analysis
The Rundown: China’s Z.ai just launched GLM-5, a 744B-parameter open-weights model that further closes the gap with the West’s frontier — sitting just behind Claude Opus 4.6 and GPT-5.2 on Artificial Analysis benchmarks.
The details:
GLM-5 scored 50 on Artificial Analysis’ Intelligence Index, surpassing closed models like Gemini 3 Pro and Grok 4 as well as open-source ones like Kimi K2.5.
The model uses DeepSeek’s Sparse Attention architecture with just 40B active parameters, and runs inference on Chinese chips, including Huawei Ascend.
On Humanity’s Last Exam, it hit 50.4 with tools, beating Opus 4.5, Gemini 3 Pro, and GPT-5.2. Its coding performance on SWE-Bench also came close to those frontier models.
GLM-5 is open-source under an MIT license, available now on HuggingFace, Z.ai’s own platform, and via API at $1 per million input tokens.
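Two quick back-of-the-envelope numbers from the figures above (a sketch only: the sparse-activation ratio uses the reported 744B total / 40B active counts, and the $1 figure covers input tokens only, since output pricing isn't listed here):

```python
# Back-of-the-envelope numbers from the GLM-5 details above.

# 1) Sparse activation: only ~40B of the 744B weights fire per token.
TOTAL_PARAMS = 744e9
ACTIVE_PARAMS = 40e9
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active weights per token: {active_fraction:.1%}")  # roughly 5%

# 2) Input-side cost at the listed $1 per million input tokens
#    (output pricing is not given above).
def input_cost_usd(input_tokens: int, price_per_million: float = 1.00) -> float:
    return input_tokens / 1_000_000 * price_per_million

# e.g. a ~200k-token long-context request costs about $0.20 on input
print(f"Input cost: ${input_cost_usd(200_000):.2f}")
```

That ~5% activation ratio is what lets a 744B-parameter model run inference at a fraction of the compute a dense model of the same size would need.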
Why it matters: The wave of Seedance 2.0’s viral AI clips hasn’t even faded, and already another near-frontier model from China is knocking at the door. The gap with the West isn’t fully closed, but with open weights, competitive pricing, and domestic chip support, it’s narrowing faster than ever.
AI TRAINING

The Rundown: In this guide, you will learn how to turn boring onboarding docs into engaging training videos narrated by an AI avatar. We tried a lot of tools and found the most efficient system for building quality AI training videos in bulk.
Step-by-step:
Take your training doc and prompt Claude/ChatGPT with "Turn this into a three-minute training video script for an AI-generated avatar. Only include text overlays with bullets. The avatar can be seated, standing, head-on, etc."
Save the script as a text file and go to Synthesia.io > Create New Video > Create from AI > Upload the script file, along with an objective and audience description
Choose a template and click Create Outline. Review the outline and follow the steps to generate your video. It should take 10-25 minutes to generate
When the video is complete, you can download and embed it somewhere like Notion or Google Docs
Pro tip: Repeat this for all onboarding docs to build a one-page onboarding hub that can be handed to any trainee!
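If you're batching this across a whole stack of SOPs, step 1 can be scripted. This minimal sketch just wraps each doc in the prompt from step 1, ready to paste into Claude/ChatGPT (the folder path and .txt extension are hypothetical assumptions, not part of the guide):

```python
from pathlib import Path

# Prompt text from step 1 of the guide above.
PROMPT = (
    "Turn this into a three-minute training video script for an AI-generated "
    "avatar. Only include text overlays with bullets. The avatar can be "
    "seated, standing, head-on, etc."
)

def build_request(doc_text: str) -> str:
    """Combine the step-1 prompt with one SOP document's text."""
    return f"{PROMPT}\n\n---\n\n{doc_text}"

def batch_prompts(docs_dir: str) -> dict[str, str]:
    """Map each .txt SOP in a (hypothetical) folder to its ready-to-paste prompt."""
    return {
        path.name: build_request(path.read_text())
        for path in Path(docs_dir).glob("*.txt")
    }
```

From there, each generated script still goes through the Synthesia steps above one at a time.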
PRESENTED BY SLACK FROM SALESFORCE
The Rundown: Slackbot is a context-aware AI agent built directly into Slack — understanding your conversations, files, and workflows to deliver what you need, right when you need it, with zero setup.
Watch this 2-minute demo to see how Slackbot:
Makes your entire workspace searchable (docs, convos, apps)
Enhances every teammate with role-specific automations
Learns your project and preferences over time for even smarter outputs
Synthesizes what you need instantly, respecting permissions and using only what you can already see
AI SAFETY

Image source: Nano Banana / The Rundown
The Rundown: Anthropic published its latest Sabotage Risk Report, revealing that its new Claude Opus 4.6 model displays an “elevated susceptibility” to being misused for “heinous crimes,” including assisting in the development of chemical weapons.
The details:
Anthropic found Opus 4.6 knowingly supported crimes like chemical weapon development in small ways, but could not execute attacks on its own.
When tasked to achieve a specific goal in a multi-agent test, the model proved far more willing to manipulate and deceive other agents than previous models.
Considering these findings, Anthropic deemed the overall sabotage risk “very low but not negligible,” citing the model’s lack of coherent misaligned goals.
The company also classified the model’s capabilities as entering a “gray zone” that necessitated this mandatory report under its Responsible Scaling Policy.
Why it matters: Anthropic’s CEO Dario Amodei recently highlighted the risks of advanced AI, and now, one of his own models appears to be moving into the gray zone. With growing competition from OpenAI, Google, xAI, and Chinese labs, the pressure to push capabilities forward may only intensify the very risks he has warned about.
QUICK HITS
🗣️ Unwrap Customer Intelligence - Connect your entire organization to the true voice of the customer with AI-driven insights from customer feedback*
🧑‍💻 GLM-5 - Zhipu AI’s new open-source frontier model
🤖 Claude - Anthropic’s AI assistant, now with more features for free users
🧠 Ming-flash-omni 2.0 - Ant’s omni AI with speech, vision, image capabilities
*Sponsored Listing
Apple’s long-awaited Gemini-powered Siri AI upgrade has reportedly been pushed back (again) due to recent testing snags, now likely to come with iOS 26.5 or 27.
OpenAI elevated its “Mission Alignment” head, Joshua Achiam, to the role of Chief Futurist responsible for studying “AI impacts and engaging the world to discuss them.”
Meta broke ground on a new data center in Lebanon, Indiana — one of its largest infrastructure bets — adding 1GW of capacity to power its AI and core products.
Anthropic announced it will cover electricity price increases from its data centers, shielding local ratepayers, in line with similar pledges from Microsoft and OpenAI.
Google is rolling out UCP-powered checkout in Gemini and AI Mode in the U.S., integrating Veo into Google Ads, and testing sponsored retailer ads in AI Mode.
COMMUNITY
Every newsletter, we showcase how a reader is using AI to work smarter, save time, or make life easier.
Today’s workflow comes from reader Lindsay F. in Kingsville, Ontario:
“I own a 1970 Chevelle SS and am converting it into a modern driving ‘restomod.’ I am using both ChatGPT & Copilot to research and develop the entire restoration plan. The restoration of the vehicle will take place in phases, and the agents have provided me with a priority list, options for what parts to purchase, and where to source them from.
They have also developed a budget for the project, including parts & local labor rates and what the finished project will look like upon completion. I am 72 years old and just love how much this is helping me.”
How do you use AI? Tell us here.
Read our last AI newsletter: xAI's co-founder exodus continues
Read our last Tech newsletter: Musk’s ‘self-growing’ Moon city
Read our last Robotics newsletter: Uber to launch robotaxis in 15 cities
Today’s AI tool guide: Turn SOP docs into talking-head training videos
RSVP to our next workshop on Feb 18: Agentic Workflows Bootcamp pt. 2
That's it for today!
See you soon,
Rowan, Joey, Zach, Shubham, and Jennifer — the humans behind The Rundown