by Luis Rodrigues
What I’m building with AI + one hot take + four links worth your time.
The Take
A friend sent me Microsoft's AI Diffusion Q1 report this week. Normally, I wouldn't write about it, but the numbers tell a timely story. Git pushes are up 78% year-over-year globally. Despite the layoffs, software developer jobs reached a record high of 2.2 million in 2025 (an 8.5% increase over 2024), and Q1 2026 is already showing 4% growth.
We're all debating whether AI will replace developers. For now, the data says it's helping create more of them. When building software becomes cheap and easy, companies build more of it.
I keep hearing stories about people building internal tools, new automations, or even whole products that previously didn't have a positive ROI. The demand for software is elastic: when the price drops, the volume explodes.
Those 78% more git pushes are from companies approving projects that would not have been possible last year.
The lesson here is the same as in this week's Build Log. If you're building with AI, or getting ready to build, what matters now isn't producing more lines of code, more slides, or more pivot tables. It's knowing what to ask and evaluating what the agent gives you.
Together with

Every meeting I have about Thoughtled generates action items, and I always forget some. Now Granola runs in the background (no bot joins the call) and transcribes everything. New users get the first month free → https://www.granola.ai
The Build Log
This week, I wrote a LinkedIn post that had hundreds of thousands of views.
The hook was simple: vibe coding without product judgment is just bolting parts onto a car.

The inspiration was last year's talk by Erik Schluntz at Anthropic on using coding agents. He explains how we should build software in the Agentic Engineering age, and it pretty much matches my experience with thoughtled.ai.
His argument: stop reviewing the code line by line. That doesn't scale, and it won't work once the models get even better.
You need to focus on three things: planning the architecture, letting AI manage the leaf nodes (the final features with no dependencies), and ensuring everything works through tests.
When I started with Agentic Engineering, I asked the agent to convert a full web page to React. It produced 1,000+ lines of code that worked but were impossible to maintain or reason about.
So I changed the process. I started explaining to the agent how to build the feature. The component structure, state management, layout hierarchy, edge cases, and which parts are leaf nodes vs. core architecture.
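A plan like that can be sketched in plain markdown before you ever prompt the agent. Everything below is a hypothetical example (component names, breakpoints, and hooks are invented for illustration, not taken from the post):

```markdown
## Feature plan: meeting notes panel (hypothetical example)

- Component structure: <NotesPanel> wraps <NoteList> and <NoteEditor>
- State: notes live in a useNotes() hook; editor draft state stays local
- Layout: panel docks to the right; collapses below 768px
- Edge cases: empty list, failed save (show retry banner), offline mode
- Leaf nodes (agent owns): <NoteList> rendering, date formatting
- Core architecture (I own): data fetching, state shape, routing
```

The point isn't the specific format; it's that the split between "leaf nodes the agent owns" and "architecture I own" is written down before any code exists.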
Also, never forget the "simple stuff": if you're using a library like shadcn, put that in CLAUDE.md. If you don't, Claude might decide to build a new component from scratch, and that's how you end up with 1,000 lines instead of 400.
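In practice that can be a few lines of conventions in CLAUDE.md. This is a hedged sketch (the exact CLI invocation and directory names depend on your setup):

```markdown
# CLAUDE.md

## UI conventions
- Use shadcn/ui components (added via the shadcn CLI); never hand-roll
  buttons, dialogs, or form controls
- Styling: Tailwind utility classes only, no inline styles
- State: check src/hooks for an existing hook before writing a new one
```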
After the agent generates the code, I check three things: does the architecture match the plan, did it do anything stupid, and do the automated tests pass.
If there's something wrong, I don't rewrite the code. I give it a new prompt to update the plan and run again.
The real unlock is refactoring. Upgrades that used to take weeks or months now take a few hours of planning, plus the time the model needs to execute.
You need high test coverage to trust that the code does what you asked.
I still check the code, but don't go deep into all the functions. I don't write a single line of code. Schluntz doesn't either.
Erik Schluntz's talk is here.
If you're still not building, try this: take a feature of a product you use and just write the plan for how it should be built, what happens in the failure cases, and how you'd know the build succeeded. That's the muscle you need to train. Using the models is the easy part.
On My Radar
Claude Managed Agents launched features like dreaming, outcomes, and multi-agent orchestration.
Are we sure AI agents are the future of finance?
Vision agents feel like a shortcut until you count the tokens.
Forget the geopolitics. The real story is scientists doing unglamorous work instead of chasing credit, and shipping faster because of it.
What are you building?
Reader Build: My friend Will from the US packaged his AI implementation framework (the same one he uses with clients) into a 16-chapter playbook covering tool stacking, personal AI assistants, and which tool fits which job. It's built from real client work, not slideware. → empower-core.com/ai-playbook
Got something you built? Hit reply; the best ones get featured next week.
Know someone still wondering about the future with AI? Forward them this.
New here? Someone forwarded this to you? → Subscribe to Build What Matters: https://luisrodrigues.ai/
How was this issue?
P.S. Subscribers get $5K+ in software credits through our Secret partnership (AWS, Loom, Notion, Lovable, and more). Haven't claimed yours yet? → https://build-what-matters.joinsecret.com/