Hello and welcome to this What’s Up Monday! I’m Lauro Müller, and I’m super happy to have you here with me 😄 This is a bit different from our Thursday exchanges: on Mondays, I’d like to share what caught my attention in the last week, something cool I came across or figured out while coding, and, when time allows, a personal take on relevant topics for our careers 🙂 Ready to get started? Let’s go!
Over the last couple of weeks I've been running several things in parallel: courses, labs, launching my own platform, publishing every week, and preparing AI-focused corporate trainings. AI assistance is present at many of these touchpoints, so of course I thought I was moving faster.
But, as it turns out, when you run five or ten agents simultaneously, those pesky little pauses and permission requests pile up. Every time an agent stopped to think, do some research, or wait for my review, there was a temptation to switch to the next immediately available task. After a while (and with a very tired brain), I realized I was doing a fraction of what I would have done if I had picked one thing and stuck with it.
Although AI enables faster work and iteration, it hasn't changed the biology of our brains. We're still wired the same way we were back when ChatGPT needed "Think step by step" to produce useful output (remember the good old times? I'm not even gonna mention the StackOverflow days).

And the fact is: our brains are not wired for true multitasking. Using these brief pauses to context-switch to other tasks only increases cognitive load and energy expenditure, and while we might labor under the illusion that we're doing more, we'll end up exhausted after a single work session.
So I prefer to use these small pauses as opportunities: to review what they're building, to steer the next step, to do real pair programming (one cool thing you get with agents is at least some visibility into their reasoning process, which is not always true for human pair-programming 😅). But that only works if you're actually paying attention to it.
So... How did this impact how I work? My approach now is to set a time window, pick one task, and focus primarily on it. With the anecdotal evidence I collected over the last few weeks, I noticed at least one of three things happens:
I finish earlier than expected (the time window is still running) and I can switch to and focus on the next task.
I make far more progress by the end of the time window than if I spread myself thin across five workstreams.
I produce something I'm genuinely happy with on the first pass.
Don't feel guilty for choosing to focus on a single task. You're not wasting time. You’re working in harmony with your brain.
This Week at LM Academy
Over the last couple of weeks I’ve been working hard on launching my own e-learning platform, LM Academy, which includes:
All the courses I currently have (and future ones, of course 🙂)
Exclusive hands-on labs and content available only at LM Academy
And I’m excited to share that two exclusive Kubernetes autoscaling labs are now live! Horizontal Pod Autoscaling in Kubernetes and Kubernetes Queue-Based Autoscaling with Custom Metrics. The first covers CPU-based HPA end-to-end, including stabilization windows and scale-down tuning. The second picks up where the first leaves off: we build a full custom metrics pipeline that exposes Redis queue depth as a native Kubernetes metric and scales workers directly in response to queue load. Both labs are included in the Pro subscription, so make sure to check that out if you haven’t yet!
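To give you a flavor of what the first lab covers, here is a minimal sketch of a CPU-based HPA with scale-down tuning. The Deployment name and all the thresholds below are illustrative placeholders, not taken from the lab itself:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker            # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 min of low load before scaling down
      policies:
      - type: Percent
        value: 50              # remove at most 50% of replicas per minute
        periodSeconds: 60
```

The `behavior.scaleDown` block is exactly the kind of knob that prevents replica flapping when load is spiky, which is why stabilization windows get their own attention in the lab.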
👉 And here’s a code for 20% off for 3 months in any subscription: PRO20OFF. Valid only for one week!
Weird AI moment of the week...
This week I caught myself thinking about giving feedback to my microwave: "the food is not warm enough in the middle, please warm it more evenly."

Weellll... Anyway, let's proceed 😅
Quick Tip of the Week: Ask Claude Code for References
When you ask Claude Code (or any AI assistant, for that matter) to explain something about your codebase, it will probably read your files before answering, but by default it won't include references to the files and code it used in the response. I've found that adding this single line to my exploratory questions and commands makes it much easier to see exactly which code Claude's answer is based on:
“Reference specific files and line numbers from the codebase.”
Here is the before:
Prompt: Explain the data flow in the application.

And here is the after:
Prompt: Explain the data flow in the application. Reference specific files and line numbers from the codebase.

See how much smoother that second explanation is? I know where things are coming from and which files I should start looking at. It might seem like a small change, but when you're navigating hundreds of files, it makes a huge difference to know exactly where to go. The same principle that makes RAG effective in production applies here: just as citations validate RAG output, file and line references make code navigation and exploration much more actionable, particularly in a new (or large) codebase.
What Caught My Attention
You’ve probably heard about Claude Code’s leak? (No shame if you didn’t, btw.) This article looks at specific features you can use to take your interactions to the next level. Among the most useful ones:
Claude Code is explicitly wired to run tool calls in parallel, but only fires them simultaneously if your prompt signals the tasks are independent;
The CLAUDE.md file is not a convention but a first-class architecture decision baked into the system prompt - put your project constraints, preferred libraries, and build commands there and Claude Code picks them up automatically at the start of every session;
File edits use exact string matching rather than full rewrites, so if an edit keeps failing, tell Claude Code to read the file first before attempting the change.
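To make the CLAUDE.md point concrete, here is a purely hypothetical example of the kind of constraints you might put there (the commands, versions, and conventions below are placeholders, not from the article):

```markdown
# CLAUDE.md

## Build & test
- Build: `make build`
- Test: `make test` (run before marking any task complete)

## Constraints
- Go 1.22; prefer the standard library, avoid adding new dependencies
- All exported functions need doc comments
- Never edit files under `vendor/`
```

Because this file is injected at the start of every session, you state these rules once instead of repeating them in every prompt.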
There’s a lot of conversation around which skills will remain valuable as the field of software engineering shifts due to AI. The article’s core argument is that the bottleneck in software development has shifted from coding to judgment. What that means in practice is that the roles with the longest shelf life are those built around understanding systems deeply enough to catch what AI gets wrong:
Platform engineers who design the guardrails;
SREs who catch production failures that passed code review;
Senior engineers who can tell the difference between code that works and code that holds up at scale.
The consulting angle is also worth noting. The old model (eighteen months, a team of bodies, implementation-heavy) is giving way to fractional senior expertise: a few days a week of someone who can review architecture decisions and flag the things your team and your AI tools are both likely to miss. If you are building toward senior or staff-level work, that is the direction the market is moving.
Anthropic published a clear breakdown of why long-running agents fail, with a table of four failure modes and what to do about each. The most recognizable one: the agent declares the project done before it actually is. That may just as well be the AI version of a developer closing a ticket without testing the edge cases (no one has ever done that, right? 😅). How did they tackle the challenge? With a structured feature list file that an initializer agent writes at the start. Every subsequent coding agent reads it before marking anything complete. Simple concept, but the details on formatting it and prompting for incremental progress are worth the read if you're building anything that spans multiple context windows.
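I won't reproduce Anthropic's exact format here, but a hypothetical feature list file along those lines might look like this (feature names and statuses are made up for illustration):

```markdown
# features.md - written by the initializer agent, read by every coding agent

- [x] Users can upload a document and receive a job ID
- [ ] Workers pull jobs from the queue and store results
- [ ] Job status endpoint reports progress per job ID

Rule: mark a feature [x] only after its tests pass end-to-end.
```

The point is that "done" becomes a checklist the agent must read and satisfy, not a judgment call it can make prematurely.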
Thanks for reading this issue through! If you’re curious, the Quick Tip of the Week used the asynchronous translation application from my OpenTelemetry course, which I’d highly recommend to anyone aiming to take their SRE and DevOps skills to the next level 🙂
I hope you enjoyed this issue, and let me know if you have any ideas or would like to see a specific topic covered here!
