Last Week in AI - Week 11
Hey there,
This is a letter about the last week in AI.
Featuring news, updates & insights in AI for Week 11, 2025.
—
Updates & News - Week 11
DeepMind CEO: Human-Level AI is Closer Than You Think

Context
DeepMind CEO Demis Hassabis just dropped a bold prediction: AGI (Artificial General Intelligence) could be here within 5 to 10 years.
That’s a lot sooner than most AI experts have been saying.
Key Information
AGI refers to AI that can perform any intellectual task a human can, at the same level or higher.
Hassabis believes rapid progress in scaling, data, and computing power will make it possible.
DeepMind has been working on AI models like Gemini, which keep pushing the boundaries of reasoning and multimodal capabilities.
What to Take Away from This
If AGI arrives in the next decade, we’re talking about a world where AI doesn’t just assist humans but actually rivals them in intelligence.
The implications?
Massive.
Regulation, employment, and entire industries will need to adapt fast.
OpenAI’s New AI Agents: Building Your Own GPT-Powered Assistant

Context
OpenAI is rolling out a suite of tools to help businesses create AI agents. Think ChatGPT, but fully customizable for specific workflows.
Key Information
The tools let companies train AI on internal data and fine-tune behavior.
These AI agents can handle customer service, research, and automation tasks.
OpenAI is targeting enterprise users who want tailored AI without building from scratch.
What to Take Away from This
This is OpenAI doubling down on AI as a service. Instead of just providing a chatbot, they’re letting companies tailor their AI system into whatever they need.
It shows how businesses increasingly want to hand their repetitive tasks over to tailored AI tools that get the job done faster and cheaper than before.
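To make that concrete, here's a rough sketch of what building one of these agents could look like with OpenAI's new Agents SDK. The agent name, instructions, and the lookup_order tool below are made-up illustrations, not something from OpenAI's announcement:

```python
# Minimal sketch using OpenAI's Agents SDK (pip install openai-agents).
# Assumes OPENAI_API_KEY is set in the environment.
# The agent and the lookup_order tool are hypothetical examples.
from agents import Agent, Runner, function_tool

@function_tool
def lookup_order(order_id: str) -> str:
    """Hypothetical internal tool: look up an order's shipping status."""
    return f"Order {order_id}: shipped"  # stand-in for a real database call

support_agent = Agent(
    name="Support Assistant",
    instructions="Answer customer questions. Use tools for order lookups.",
    tools=[lookup_order],
)

result = Runner.run_sync(support_agent, "Where is order 12345?")
print(result.final_output)
```

Swap the stub tool for a call into a real internal system and the agent starts answering from your own data, which is the "tailored AI without building from scratch" pitch in a nutshell.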
Google’s Gemini 2.0 Flash Can Generate Images Natively

Context
Google’s AI model just got a major upgrade: Gemini 2.0 Flash can now generate images without needing an external tool like DALL·E.
Key Information
Unlike previous versions, this update allows for native image generation.
Google aims to compete more directly with OpenAI’s DALL·E and Midjourney.
The model is optimized for speed, meaning faster outputs at scale.
What to Take Away from This
AI-generated images are already a game-changer, and having generation built directly into Gemini makes the process even smoother.
Google wants to keep users within its ecosystem instead of relying on third-party tools.
Expect AI-generated visuals to become an even bigger part of search, content creation, and marketing.
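For the curious, here's a minimal sketch of what calling this looks like through Google's genai SDK. The prompt and output filename are my own placeholders, and the model ID reflects the experimental release, so it may have changed since:

```python
# Minimal sketch using Google's genai SDK (pip install google-genai).
# Model ID and prompt are assumptions based on the experimental launch.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",
    contents="A lighthouse on a cliff at sunset",
    # Asking for both modalities is what enables native image output.
    config=types.GenerateContentConfig(response_modalities=["TEXT", "IMAGE"]),
)

# The response interleaves text and image parts; save any image bytes.
for part in response.candidates[0].content.parts:
    if part.text is not None:
        print(part.text)
    elif part.inline_data is not None:
        with open("lighthouse.png", "wb") as f:
            f.write(part.inline_data.data)
```

The notable bit is that text and images come back interleaved in a single response, rather than the model handing the prompt off to a separate image tool.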
Meta is Testing In-House AI Training Chips

Context
Meta is reportedly developing its own AI training chips, signalling a move away from reliance on Nvidia and other third-party suppliers.
Key Information
The chips are designed to train AI models more efficiently in-house.
Meta has been struggling with AI compute costs, making this a strategic shift.
If successful, it could reduce their dependence on Nvidia’s GPUs.
What to Take Away from This
AI training is expensive, and Big Tech doesn’t like being dependent on outside vendors. If Meta successfully builds its own AI chips, expect other companies to follow suit. This could reshape the AI hardware landscape and potentially shake up Nvidia’s dominance in the space.
—
That was everything for Week 11. I hope you found it valuable!
If you want more updates in AI then make sure to follow me on my socials 👇
LinkedIn: https://linkedin.com/in/axelflerén
YouTube: https://www.youtube.com/@Axel_Fleren
Back soon,
Axel
