Welcome to this edition of Ctrl+Alt+Deploy 🚀
I’m Lauro Müller, and I’m super happy to have you around 🙂 Let’s dive in right away!
Hello there 👋 In the same spirit of making our interactions with AI more effective, this edition dives deeper into the prompt engineering techniques we can observe and learn from the recent GPT-5 Codex release. Just as we often read the source code of libraries and packages we use to better understand how they work (and possibly to learn from them), we can also take a peek under the hood of Codex's CLI application and dive a bit deeper into how its system prompt is structured.
As you read through this, which pattern or lesson do you find most useful? Which one are you itching to try out? Is there a technique you love that isn't mentioned here? Let me know by replying to this email; I love your input and read your replies 🙂
With that said, let's dive in! ✨
Did you know?
I’ve recently launched a comprehensive course on Prompt Engineering (yes, we cover all the topics we discuss here in this newsletter, and much more!). Make sure to check the link below for a big discount!

In the course, we explore a comprehensive set of prompt engineering patterns and techniques, from fundamentals like Few-Shot and Chain-of-Thought to advanced strategies like Self-Critique and Decomposition. It’s designed to be a complete guide that takes you from basic understanding to being able to tackle many challenging tasks with the help of effective prompts!
1. The Persona Pattern: Establishing a Clear Identity
The prompt doesn't just start with a task; it starts by giving the model a role and a purpose. This is the very first line:
You are Codex, based on GPT-5. You are running as a coding agent in the Codex CLI on a user's computer.

This is a perfect execution of the Persona Pattern. By defining who the model is, its capabilities ("based on GPT-5"), and its environment ("in the Codex CLI on a user's computer"), the prompt immediately frames every subsequent instruction. It’s no longer a generalist model; it's a specialist with a specific job to do. This simple opening statement anchors the model's behavior, making its responses more focused and relevant to its function as a coding agent.
Applying this to your own work can make a big difference. Before you dive into describing a task or the specific instructions, try starting your prompt by giving the AI a role, like "You are an expert copywriter specializing in travel" or "You are a senior database administrator." This sets the stage for a much higher quality response.
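As a minimal sketch of this idea (the persona text and task below are illustrative examples, not taken from the Codex prompt), the Persona Pattern usually maps to the system message in a chat-style request:

```python
# Sketch: framing a request with the Persona Pattern.
# The persona string is placed in the system message so it frames
# everything the model does afterwards.

def build_messages(persona: str, task: str) -> list[dict]:
    """Prepend a persona-setting system message to the user's task."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = build_messages(
    persona="You are a senior database administrator. "
            "Answer with concrete, production-safe advice.",
    task="How should I index a table with frequent range queries on created_at?",
)
```

You would then pass `messages` to whichever chat API you use; the point is that the role is established before the task ever appears.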
2. Explicit Constraints: Building Robust Guardrails
The prompt is filled with direct, unambiguous rules about what the AI should and should not do. It uses strong language to create firm boundaries for the model's behavior.
NEVER revert existing changes you did not make unless explicitly requested, since these were made by the user.
While you are working, you might notice unexpected changes that you didn't make. If this happens, STOP IMMEDIATELY and ask the user how they would like to proceed.

These instructions are powerful examples of setting explicit constraints. In applications where the AI is performing actions (like editing code), safety and predictability are paramount. The prompt doesn't just suggest good behavior; it commands it with words like "NEVER" and "STOP IMMEDIATELY." This technique of defining clear guardrails is essential for building trust and preventing the model from taking unintended, potentially destructive actions.
When you build prompts for critical tasks, think about the worst-case scenarios and add explicit rules to prevent them. Don't just tell the AI what to do; be ruthlessly clear about what it must never do.
While this works most of the time, beware of trying too hard to prevent the model from doing something (this reminds me of the concept of prompt begging, coined by Simon Willison a while ago).
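One simple way to apply this is to keep your guardrails as an explicit, clearly delimited list appended to the task. A minimal sketch (the rules below are illustrative examples, not Codex's actual constraints):

```python
# Sketch: appending explicit guardrails to a task prompt.
# Tailor the rules to your own worst-case scenarios.

CONSTRAINTS = [
    "NEVER delete or overwrite files you did not create.",
    "NEVER run commands that require network access without asking first.",
    "If you encounter unexpected state, STOP and ask the user how to proceed.",
]

def with_guardrails(task: str) -> str:
    """Combine the task with a clearly delimited list of hard constraints."""
    rules = "\n".join(f"- {rule}" for rule in CONSTRAINTS)
    return f"{task}\n\nHard constraints (follow these exactly):\n{rules}"

prompt = with_guardrails("Refactor the logging module to use structured output.")
```

Keeping the constraints in one place also makes them easy to review and reuse across prompts.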
3. Output Formatting: Ensuring Systemic Consistency
A huge portion of the prompt is dedicated to defining the precise structure of the model's output. It leaves nothing to interpretation.
## Presenting your work and final message
You are producing plain text that will later be styled by the CLI. Follow these rules exactly.
...
### Final answer structure and style guidelines
- Plain text; CLI handles styling. Use structure only when it helps scanability.
- Headers: optional; short Title Case (1-3 words) wrapped in **…**; no blank line before the first bullet; add only if they truly help.
- Bullets: use - ; merge related points; keep to one line when possible; 4–6 per list ordered by importance; keep phrasing consistent.
- ...

This is a masterclass in output formatting. The goal here is to make the AI's output so consistent that it can be reliably parsed and used by another piece of software (in this case, the command-line interface). By specifying everything from bullet point style to header casing, the engineers are treating the LLM like a predictable API endpoint. This level of structural discipline is what separates a simple chatbot from a component in a robust application.
If the structure of your output matters, especially if it's going to be read by another script or needs to fit a specific template, provide the AI with a clear set of formatting rules. The more specific your instructions, the more reliable your results will be.
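When formatting rules matter this much, it can help to treat them as a checkable contract on your side as well. Here's a small sketch (my own illustrative validator, not part of the Codex CLI) that enforces two of the rules quoted above: bullets must use "- ", and lists should stay short:

```python
# Sketch: validating that model output follows simple formatting rules.
# This checks only the bullet marker and an upper bound on list length;
# extend it with whatever rules your downstream consumer relies on.

def check_bullets(text: str, max_items: int = 6) -> bool:
    """Return True if every bullet uses '- ' and the list stays within max_items."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    bullets = [ln for ln in lines if ln.lstrip().startswith(("-", "*", "•"))]
    if any(not ln.lstrip().startswith("- ") for ln in bullets):
        return False  # wrong bullet marker
    return len(bullets) <= max_items

good = "- First point\n- Second point"
bad = "* First point\n* Second point"
```

A validator like this lets you retry or repair a response automatically instead of hoping the formatting rules were followed.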
4. Process-Oriented Prompting: Guiding the "How"
The prompt goes beyond telling the model what to do and teaches it how to think about the task. This is evident in its instructions for planning.
## Plan tool
When using the planning tool:
- Skip using the planning tool for straightforward tasks (roughly the easiest 25%).
- Do not make single-step plans.
- When you made a plan, update it after having performed one of the sub-tasks that you shared on the plan.

This is a form of process-oriented prompting, closely related to patterns like the Decomposition Pattern. Instead of just asking for a final result, the prompt guides the model's problem-solving workflow. It tells the AI to be efficient (skip planning for easy tasks), to be methodical (don't make trivial plans), and to be reflective (update the plan as it goes). This shapes the model's reasoning process, encouraging a more structured and logical approach to complex problems.
For multi-step or complex tasks, try guiding the AI's workflow within your prompt. Encourage it to break the problem down, create a step-by-step plan, and even critique its own work before giving you the final answer. You're not just asking for a result; you're engineering a better process.
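As a minimal sketch of that idea (the wording is my own, not Codex's), you can wrap any task in a template that spells out the plan/execute/update loop described above:

```python
# Sketch: a process-oriented prompt wrapper. The numbered steps mirror
# the plan -> execute -> update -> self-critique workflow; adjust the
# wording to your task and model.

PROCESS_TEMPLATE = """{task}

Work through this as follows:
1. If the task is non-trivial, write a short multi-step plan (never a single step).
2. Execute the plan one sub-task at a time.
3. After each sub-task, update the plan to reflect what is done and what changed.
4. Before giving the final answer, review your own work for mistakes."""

def process_prompt(task: str) -> str:
    """Wrap a task in explicit workflow instructions."""
    return PROCESS_TEMPLATE.format(task=task)

prompt = process_prompt("Migrate the test suite from unittest to pytest.")
```

The template is deliberately generic; the value is in making the expected process explicit rather than leaving it to the model.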
🎉 That's a wrap!
I hope this peek behind the curtain was as exciting for you as it was for me! It’s one thing to learn about prompting patterns in theory, but it’s another to see them used in a major, real-world application like this. It really proves how these foundational techniques are essential for building powerful and reliable AI-powered tools. 🚀
Thanks for reading this week's tech digest. Found these insights valuable? Share this newsletter with fellow developers and let me know which story resonated with you most!
Until next week, keep coding and stay curious! 💻✨
💡 Curated with ❤️ for the developer community
