Microsoft launches a mini AI model

PLUS: Ray-Bans get AI vision

Welcome, AI enthusiasts.

Microsoft just proved that when it comes to high-performing AI, bigger doesn’t always mean better.

The tech giant just dropped Phi-3-mini, a model that could change the game for on-device AI by putting powerful tech right in our pockets. Let’s explore…

In today’s AI rundown:

  • Microsoft releases Phi-3 powerhouse

  • Meta’s Ray-Ban glasses go multimodal

  • Supercharge Google Search with AI

  • AI-generated gene editing breakthrough

  • 5 new AI tools & 4 new AI jobs

  • More AI & tech news

Read time: 4 minutes



Image source: Microsoft

The Rundown: Microsoft just announced Phi-3, a new family of small language models that outperform larger rivals, setting new benchmark milestones for models of their size.

The details:

  • The Phi-3 model family comes in three sizes: Phi-3-mini at 3.8B parameters, Phi-3-small at 7B parameters, and Phi-3-medium at 14B parameters.

  • Phi-3-mini’s benchmarks rival Mixtral 8x7B and GPT-3.5 despite being significantly smaller, while also offering a staggering 128k-token context window.

  • Mini’s 3.8B parameter size also enables the model to be deployed on-device while still maintaining quality and cost-effectiveness.

  • The 7B Phi-3-small and 14B Phi-3-medium models are still in training and will be available in the coming weeks.

Why it matters: Microsoft is showing that with the right training data and techniques, small language models can punch well above their weight class. Phi-3-mini’s capabilities, in particular, are a massive breakthrough, opening the door for high-performing models to run efficiently on our phones.
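A quick back-of-envelope sketch of why a 3.8B-parameter model is phone-friendly: weight storage scales with parameter count times bytes per parameter. The numbers below are rough estimates (they ignore activations, KV cache, and runtime overhead), not official hardware requirements.

```python
# Rough weight-storage estimates for the Phi-3 family at two precisions.
# Back-of-envelope only: ignores activations, KV cache, and runtime overhead.

def model_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

PHI3 = {"mini": 3.8, "small": 7.0, "medium": 14.0}  # parameters in billions

for name, size_b in PHI3.items():
    fp16 = model_size_gb(size_b, 2.0)   # 16-bit weights
    int4 = model_size_gb(size_b, 0.5)   # 4-bit quantized weights
    print(f"Phi-3-{name}: ~{fp16:.1f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

At 4-bit quantization, Phi-3-mini’s weights come to roughly 2 GB — small enough to fit in a modern smartphone’s memory, which is exactly what makes on-device deployment plausible.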


The Rundown: AE Studio tackles the toughest challenges in tech, like AI alignment. While their R&D team explores the frontiers of AI safety and consciousness, their consulting team delivers cutting-edge AI solutions.

Partner with AE Studio for:

  • A proven track record in AI, blockchain, BCI, and more

  • End-to-end expertise, from ideation to deployment

  • A team that treats each project like their own startup

AE Studio is ready to listen to any big idea and make it a reality. Share your business challenge today.


Image source: Meta

The Rundown: Meta just announced that multimodal capabilities are now rolling out to all Ray-Ban Meta smart glasses, integrating AI features that can process and understand a user's surroundings.

The details:

  • Meta’s AI assistant, previously limited to audio interactions, can now process visual data from the glasses' built-in camera and offer relevant insights.

  • Users can ask the glasses to translate text, identify objects, or provide other context-specific information, all hands-free.

  • Wearers can also share views during video calls on WhatsApp and Messenger, enabling hands-free, real-time sharing of experiences.

  • The multimodal AI upgrade will be available as a beta feature to all users in the US and Canada.

Why it matters: Meta’s multimodal integration marks a major step forward for smart glasses, transforming them from stylish wearables into powerful, context-aware assistants. Just as important are Ray-Ban’s iconic frames, which feature tech so subtle it’s hardly noticeable, a critical factor for mainstream adoption.


The Rundown: In this tutorial, you’ll learn how to enable AI-powered Google Search, unlocking more in-depth responses to your search queries.


  1. Go to Search Labs and sign in with your Gmail account.

  2. Scroll down to where it says “SGE, generative AI in Search” and click the “Turn on” button.

  3. Go to Google, and you should now see ‘AI overviews’ when you search.

  4. Once Google provides you with an AI-powered answer, select “Show more” to ask follow-up questions and add images for better results.

Note: Search Labs is not available worldwide yet. If it’s unavailable in your country, the only workaround is to use a US-based VPN.


The Rundown: Ready to dive into the AI revolution? Learn how to easily design, deploy, and scale back-end, front-end, and storage network fabrics with Siemon’s new ‘Generative AI Solutions’ guide.

You’ll learn:

  • How to support the unique demands of data-intensive ML and GenAI apps

  • The impact of GenAI on data center infrastructure and how to adapt your network

  • Real-world examples of ML and GenAI models

Download Siemon’s guide today and unlock the full potential of your AI deployments.


Image source: Profluent

The Rundown: Profluent just developed OpenCRISPR-1, the world's first open-source AI-developed gene editor capable of editing the human genome.

The details:

  • Profluent trained LLMs on a vast dataset of diverse CRISPR systems to generate millions of new CRISPR-like proteins not found in nature.

  • OpenCRISPR-1 worked as well as or better than a naturally occurring editor.

  • CEO Ali Madani said the success points to a future where AI “precisely designs what is needed to create a range of bespoke cures for disease.”

  • Profluent open-sourced the model to make the gene-editing tech more accessible to researchers working on disease treatments.

Why it matters: Gene editing already had the potential to revolutionize the medical field, and adding AI to the mix could take it to a whole new level. Tools like OpenCRISPR could eventually speed up innovation, reduce costs, and improve access to life-altering treatments.


  • 🎨 Adobe Firefly Image 3 in Photoshop - AI-image generation features in Photoshop beta

  • 🚀 Cascadeur - Speed up keyframe animation and model rigging

  • 🧠 Grimo AI - Collective knowledge engine powered by sources like GitHub, YouTube, and more

  • 📝 Sonnet AI - Automate meeting notes and CRM

  • 👕 IDM VTON - Most authentic virtual try-on

  • 🎨 AE Studio - Product Designer & Manager

  • 💼 Cohere - Director, Sales

  • 🧪 OpenAI - Engineering Manager - Supercomputing

  • 🏢 Anthropic - Facilities Manager


Perplexity AI announced a $63M funding round at a valuation of over $1B to grow its AI-driven search engine.

Walmart is deploying 19 autonomous FoxBot electric forklifts across four distribution centers, aiming to enhance operational efficiency after a successful 16-month pilot program.

Llama 3’s capabilities are already being extended through open-source training initiatives, with Matt Shumer posting a new version that achieves 2x the context window of the standard release.

Throwflame unveiled the ‘Thermonator’, a $9,420 robotic flame-throwing dog capable of shooting 30-foot jets of fire.

Elon Musk said on X that the integration of xAI’s Grok chatbot with Tesla vehicles is ‘coming’.

Microsoft AI CEO Mustafa Suleyman called AI a ‘new kind of digital species’ during a recent TED Talk, predicting the next decade will be ‘the most productive’ in human history.



Get your product in front of over 550k AI enthusiasts

Our newsletter is read by tech professionals, investors, engineers, managers, and business owners around the world. Get in touch today.


How would you rate today's newsletter?

Vote below to help us improve the newsletter for you.


If you have specific feedback or anything interesting you’d like to share, please let us know by replying to this email.
