Welcome to this edition of Ctrl+Alt+Deploy 🚀

I’m Lauro Müller and super happy to have you around 🙂 Let’s dive in right away!

In the previous article of this series, we focused on the foundations of reliability when writing Python code. You learned to handle errors gracefully, log with purpose, externalize configuration, and think about testing. Your scripts are no longer fragile, one-shot “magic”: they are robust; they are dependable.

While this is already a huge step in your growth, we now need to start thinking about the next level. And when we work in teams, the next level doesn’t necessarily mean more complex scripts; but rather, scripts that others feel comfortable around and can reliably work with. If you’re the only person on your team who understands how to run your code, debug it, or modify it, your brilliant script has just become a new piece of technical debt. You’ve successfully climbed out of the "it works on my machine" hole only to fall into the "it only works on my machine" trap.

The next stage of your growth is about moving from a solo contributor to a team player. It's about writing code that doesn’t just solve a problem but empowers your colleagues. This is how you start building a reputation not just as someone who writes good scripts, but as an engineer who builds valuable, lasting tools.

The Mindset Shift: From Solving Problems to Solving Classes of Problems

A junior engineer is often tasked with a very specific problem: "Write a script that cleans up the log files in the /var/log directory on server X." They go off, write a 50-line script with a hardcoded path, and the problem is solved. At least for now.

A more experienced engineer hears the same request but thinks differently. They ask, "Are there other servers where we need to clean logs? Are there other directories? What happens when the retention policy changes from 30 days to 60? How can I build something that solves the log-cleanup problem, not just this one instance of it?" And perhaps even more importantly, “How can I avoid overengineering my script and find a balance between solving today’s problem while ensuring the script can be easily extended to tackle tomorrow’s challenges?”

This is the leap from writing a script to engineering a tool. It's the difference between a disposable solution and a reusable asset. Your goal is no longer just to make your own life easier, but to build something that makes the entire team more effective. Every skill that follows is a building block for this mindset.

Want to learn real-world Python skills?

In my Python for DevOps course, we focus on going beyond Python fundamentals and learning critical features and skills for implementing robust real-world Python projects and scripts. From decorators and generators all the way to implementing a fully-fledged CI/CD pipeline to publish Python projects, it covers many important aspects of working with Python. Want to bring your skills to the next level? Then make sure to check it out!

Skill 1: Making Your Code Usable as a Library

Your teammate comes to you and says, "Hey 👋 that function you wrote to parse the server status from the API is awesome! I want to use it in my monitoring script." Great moment, isn’t it?

Until they add from your_script import parse_server_status to their code, run it, and all of a sudden your entire script starts running: connecting to the database, cleaning up files, everything. They just wanted one function, but they got the whole script… Why? Because the code that executes your script's logic is just sitting there in the global scope of the file. Importing the file runs every single line in it. This is what the sin looks like:

import os

def list_files(dir):
    print(f"Listing files in {dir}:")
    return os.listdir(dir)

target_dir = "/var/log"
files = list_files(target_dir)
print(files)

How can we prevent this? This is where the famous if __name__ == "__main__": construct comes in. It’s one of the most important idioms in Python for writing reusable code. This block of code will only run when the file is executed directly from the command line (e.g., python your_script.py). It will not run when the file is imported as a module by another script.

Now, let's fix it so it's both a runnable script and an importable library.

import os
import sys

def list_files(directory):
    return os.listdir(directory)

if __name__ == "__main__":
    if len(sys.argv) > 1:
        target_dir = sys.argv[1]
    else:
        target_dir = "/var/log"
    
    files = list_files(target_dir)
    print(files)

We could even go one step further and split this into two files: one that contains only the reusable functions and another that contains only the script.

# utils.py
import os

def list_files(directory):
    return os.listdir(directory)

# main.py
import sys
from utils import list_files

if __name__ == "__main__":
    if len(sys.argv) > 1:
        target_dir = sys.argv[1]
    else:
        target_dir = "/var/log"
    
    files = list_files(target_dir)
    print(files)

You don’t really get any better separation than that!

Now your teammate can safely import list_files without any unexpected side effects. You’ve successfully separated your tool’s logic (the function) from its execution (either a separate file, or the if block).

This is a fundamental building block for teamwork. It indicates that you see your .py files not as isolated scripts, but as potential modules in a larger system. Engineers who structure their code this way from the start produce work that can be easily integrated, reused, and built upon by others, dramatically increasing its value.
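To make the payoff concrete, here’s a self-contained sketch (the module name file_tools is made up for the demo): it writes the refactored module to disk and imports it, showing that nothing executes until we explicitly call the function.

```python
import pathlib
import sys

# Hypothetical module source: the refactored, importable version of the script.
module_src = '''import os

def list_files(directory):
    return os.listdir(directory)

if __name__ == "__main__":
    print(list_files("/var/log"))
'''

pathlib.Path("file_tools.py").write_text(module_src)

sys.path.insert(0, ".")  # ensure the current directory is importable
import file_tools        # importing runs nothing: no listing, no printing

files = file_tools.list_files(".")  # the logic only runs when we call it
print(files)
```

The if __name__ guard in file_tools.py is what makes the import side-effect-free.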

Skill 2: Designing Configurable Systems

In the previous article, we moved hardcoded values to environment variables. This is a great first step. But what happens when the configuration gets more complex? You might have different settings for your dev, staging, and prod environments, and you would like (you definitely would like) to keep track of these configurations in version control (except for sensitive values, of course 🙂). Your logic starts to look like this:

if ENV == "prod":
    retries = 5
    timeout = 30
elif ENV == "staging":
    retries = 3
    timeout = 10

This is not sustainable. Your code's logic is now tangled up with its configuration. A truly reusable tool doesn't change its code to work in a different environment; it's given a different configuration.

Relying solely on dozens of environment variables becomes clumsy. There's no structure, no validation, and no easy way to see the entire configuration in one place. It scatters the operational knowledge of your system across pipeline definitions and shell scripts. You can easily forget to change one value, or change it in the incorrect if block, and then… You know… 💣💥

Mature systems read their configuration from a dedicated, structured source, like a YAML or TOML file (external systems can also be used here). The script becomes a general-purpose engine, and the configuration file provides the specific instructions for a given run.

import yaml
import sys
from dataclasses import dataclass

@dataclass
class BackupConfig:
    source_directories: list[str]
    destination_path: str
    compression: int = 5

def load_config(source: str) -> BackupConfig:
    if source == "yaml":
        config_path = sys.argv[1]
        with open(config_path, 'r') as f:
            config_data = yaml.safe_load(f)
        
        return BackupConfig(
            # Build the config object from the YAML file's keys
            **config_data
        )
    
    elif source == "api":
        return BackupConfig(
            source_directories=["/default/path"],
            destination_path="/default/backup",
            compression=5
        )
    
    else:
        raise ValueError(f"Bad source: {source}")

def run_backup(config: BackupConfig):
    # ... actual backup logic here
    ...

if __name__ == "__main__":
    config = load_config("yaml")
    run_backup(config)

Your prod_config.yaml can specify one set of directories and a high compression level, while dev_config.yaml can point to a local directory with no compression. The Python code doesn't change. It has become a flexible tool guided by its configuration.
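For illustration, those two files might look like this (all paths and values are made up):

```yaml
# prod_config.yaml
source_directories:
  - /var/www
  - /etc/nginx
destination_path: /mnt/backups/prod
compression: 9
```

```yaml
# dev_config.yaml
source_directories:
  - ./sample_data
destination_path: /tmp/backups
compression: 0
```

Running python backup.py prod_config.yaml versus python backup.py dev_config.yaml (assuming the script above lives in backup.py) changes the behavior without touching a line of Python.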

Designing systems this way proves you're thinking about the full operational lifecycle. You're not just writing code; you're creating a tool that is easy for other operations engineers to use and manage. It makes your automation far easier to deploy, test, and adapt. Separating the stable logic from the variable configuration is a core principle of robust system design.

Skill 3: Creating Reusable Logic with Decorators

You'll quickly notice patterns emerging in your code. Before calling an API, you need to check for a valid auth token. For every function that performs a critical action, you need to log its start and end time.

The temptation is to copy-paste this boilerplate logic into every function that needs it. Does the following look familiar?

import time

def upload_files():
    start = time.time()
    print("Starting file upload...")
    # ... core upload logic ...
    end = time.time()

    print(f"Duration: {end - start:.2f}s")

def sync_database():
    start = time.time()
    print("Starting database sync...")
    # ... core sync logic ...
    end = time.time()
    print(f"Duration: {end - start:.2f}s")

This works, but it’s a maintenance disaster. Lots of duplication! What if you want to change the log format? You have to find and edit it in ten different places. What if you want to use perf_counter() instead of time()? You guessed it: you have to find and edit it in ten different places. This violates the "Don't Repeat Yourself" (DRY) principle, a cornerstone of sustainable software engineering.

Decorators are a powerful Python feature that lets you wrap a function with another function. Think of it as adding a layer of reusable behavior before and after your main logic runs. They let you abstract away that boilerplate.

import time
from functools import wraps

def log_execution_time(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"Duration: {end - start:.2f}s")
        return result
    return wrapper

@log_execution_time
def upload_files():
    # Core upload logic is clean now
    time.sleep(1) 

@log_execution_time
def sync_database():
    # Core sync logic is clean now
    time.sleep(0.5)

upload_files()
sync_database()

By simply adding the @log_execution_time line above each function, you’ve attached that timing logic. Now, if you want to change the logging, you only have to edit the decorator in one place. Your core business logic is clean and easy to read. One point I must mention, though, is that of trying too hard: if your decorator is becoming increasingly complex and full of conditional logic to handle different cases, either:

  1. The logic is not really that reusable, and you should reconsider the decorator.

  2. The decorator design is flawed, and you could consider splitting it up into multiple decorators that do one thing at a time (remember, you can stack decorators on top of each other!).

  3. Some third option that is impossible for me to foresee because every code base is different 🥲
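On point 2, stacking is worth seeing once. Here’s a minimal sketch that reuses log_execution_time from above and adds a second, made-up decorator, log_call, on top:

```python
import time
from functools import wraps

def log_execution_time(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f"Duration: {time.time() - start:.2f}s")
        return result
    return wrapper

def log_call(func):
    # A second, single-purpose decorator: announces the call before running it.
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}...")
        return func(*args, **kwargs)
    return wrapper

@log_call
@log_execution_time  # applied first, since it sits closest to the function
def sync_database():
    time.sleep(0.1)
    return "synced"

result = sync_database()
```

Each decorator does exactly one thing; composing them keeps both reusable on their own.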

Using decorators effectively shows a deeper understanding of Python and a commitment to writing clean, maintainable code. It demonstrates that you can identify cross-cutting concerns (like logging, caching, or authentication) and create elegant, reusable solutions for them. This is a significant step towards writing professional-grade tools.

Skill 4: Documentation and Type Hints

Let's be honest. You've stumbled upon a function in your codebase written by someone six months ago (maybe even yourself) that looks like this:

def process_items(items, flag, config):

A chill runs down your spine. What are items? A list of strings? A dictionary? What kind of flag is it? A boolean? A string that can be 'on' or 'off'? What’s its impact on the function behavior and result? And what on earth is config? To answer these questions, you have to embark on an archaeological dig through 100 lines of code, and you're not even sure you'll find the treasure.

This is the opposite of code your teammates want to work with. It's a puzzle box with no instructions. It creates friction, wastes time, and is a breeding ground for bugs.

Mature engineers understand that code is read far more often than it is written. They prioritize clarity not as a "nice-to-have," but as a core feature of their work. Two of the most powerful tools in Python for achieving this are type hints and docstrings.

Type hints tell you the "what": what kind of data a function expects and what it returns. Docstrings tell you the "why": why this function exists and how to use it. Together, they form a contract that makes your code predictable and easy to use.

Let's refactor that scary function into something a teammate would be happy to see.

def process_items(
    items: list[str],
    is_active_filter: bool,
    api_config: dict[str, str]
) -> list[str]:
    """Processes a list of items based on a filter and configuration.

    This function takes a list of item IDs, filters them based on the
    active flag, and then performs an operation using the provided API
    configuration.

    Args:
        items: A list of item IDs.
        is_active_filter: If True, only processes active items.
        api_config: A dictionary containing 'url' and 'token' for an
            API call.

    Returns:
        A list of processed item strings that succeeded.
    """
    # ... the logic is the same,
    # but now it's understandable from the outside!

    return results

Look at that difference! Without reading a single line of the implementation, you know exactly what to pass in and what you'll get back. Your IDE (like VS Code or PyCharm) will also understand this, giving you better autocompletion and flagging errors before you even run the code. You've turned a mysterious black box into a clear, self-documenting tool.
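Those hints aren’t mere comments, either: Python exposes them at runtime, which is exactly what your IDE and type checkers build on. A quick sketch:

```python
from typing import get_type_hints

def process_items(items: list[str], is_active_filter: bool) -> list[str]:
    # Trivial stand-in body; only the signature matters for this demo.
    return [item for item in items if is_active_filter]

hints = get_type_hints(process_items)
print(hints)  # the annotations are real objects your tooling can inspect
```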

This isn't about being verbose for the sake of it. It's an act of professional courtesy. It shows you respect your teammates' time and are committed to building a codebase that is maintainable for the long term. This practice alone can dramatically elevate the quality and clarity of your contributions.

Your Mid-Level Action Plan

  1. Adopt Context Managers: Find a script you’ve written that interacts with files. If it’s not using a with statement, refactor it. This is a quick and high-impact change.

  2. Make a Script Safely Importable: Take one of your most useful scripts and wrap its main execution logic in an if __name__ == "__main__": block. Bonus points: try importing one of its functions from another Python script just to prove it works. Extra bonus points: refactor reusable logic and script logic into separate files with intentional naming conventions and easy-to-understand usage logic 🏅

  3. Refactor to a Config File: Pick a script that uses several environment variables or has hardcoded settings. Create a simple config.yaml file for it. Modify the script to read that file to get its settings.

  4. Write Your First Decorator: Find two functions in your codebase that share a piece of boilerplate logic at the beginning or end. It could be logging, it could be a simple check. Try to extract that logic into a decorator and apply it to both functions.
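For action item 1, here’s the shape of that refactor in a self-contained sketch (app.log is a throwaway file created just for the demo):

```python
import pathlib

# Create a throwaway file so the example is runnable end to end.
pathlib.Path("app.log").write_text("line 1\nline 2\n")

# Without a with statement, the file stays open if read() raises.
# The context manager guarantees it is closed either way.
with open("app.log") as f:
    data = f.read()

print(f.closed)  # the file is closed automatically when the block exits
```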

As always, here are a couple of resources to get you started on many of the topics we discussed in this article (for Real Python links, if they ask you to sign up, just open it in an Incognito or Private tab 🙂):

What's Next?

First of all, let’s take a moment to congratulate ourselves 🥳 Your progress is amazing, and your code is now not only reliable but also reusable, configurable, and clean. But your journey isn’t over!

The next stage of your career is about moving from building great tools to designing the very platform your team operates on. How do you take a complex, risky task, like creating and tearing down temporary cloud resources, and make it so simple and safe that it becomes a trusted building block for everyone? What happens when your automation, built for today's scale, suddenly has to handle 100 times the load without catching fire? And how do you shift from fixing security holes to designing systems where entire classes of vulnerabilities are impossible by default?

This is the shift from writing code to architecting systems. It's about thinking in terms of leverage, resilience, and security from the very start. In the next article, we will explore the skills that transition you from a strong individual contributor to a technical leader.

🎉 That's a wrap!

Thanks for reading this edition of Ctrl+Alt+Deploy! Found these insights valuable? Share this newsletter with fellow developers and let me know which story resonated with you most!

Until next time, keep coding and stay curious! 💻

💡 Curated with ❤️ for the developer community

Keep Reading