Welcome to this edition of Ctrl+Alt+Deploy 🚀

I’m Lauro Müller and super happy to have you around 🙂 Let’s dive in right away!

Hey there! 👋 As you’re probably aware, I've recently launched a course on Prompt Engineering, and I'm super excited to share an incredible resource that can help you practice interacting with LLMs and bring your generative AI skills to the next level 🚀

OpenAI has a great repo where you can find hundreds (if not thousands!) of examples of applying generative AI models to many common tasks: The OpenAI Cookbook. Maybe you'll find something there related to a repetitive task you'd actually benefit from automating? Or a really cool example you can adapt and add to your portfolio?

Here's the repo link once again: OpenAI Cookbook

And I get it, you open it and see hundreds of examples and... Where do I start, really? 🤔 So here is a curated selection of 10 examples that caught my attention, ordered (more or less) by degree of complexity. Feel free to pick one of these, or navigate the amazing codebase to find one that suits you best!

Did you find something interesting? Or maybe you implemented something that you would like to share? Perhaps you also have a favorite coding/example repo? Hit the reply button and send that back to me, I'd love to hear back from you and I do read everything you share 🙂

Now over to the list!

Formatting Inputs to Chat Models

This is probably the best place to start. It explains the fundamental structure of an API call using the OpenAI interface: the messages array with system, user, and assistant roles. It's the "Hello, World!" for the Chat Completions API and bridges the gap between writing a prompt in the ChatGPT interface and structuring it correctly for an API call.
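To make the structure concrete, here's a minimal sketch of that messages array. The prompt text is my own made-up example; the actual API call is shown as a comment since it needs the `openai` package and an API key:

```python
# The messages array the Chat Completions API expects. Each entry has a
# role ("system", "user", or "assistant") and the message content.
messages = [
    # "system" sets the assistant's behavior for the whole conversation
    {"role": "system", "content": "You are a concise technical assistant."},
    # "user" carries the prompt you'd normally type into the ChatGPT UI
    {"role": "user", "content": "Explain what a context window is in one sentence."},
    # "assistant" entries hold prior model replies when you continue a chat
]

# To actually send it (requires `pip install openai` and OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(response.choices[0].message.content)
```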

Achieving Reproducible Outputs with the seed Parameter

Non-determinism is a big thing in LLMs. This guide shows you how to get more consistent, deterministic outputs from the model using the seed parameter. It demonstrates how repeated requests with the same seed and other parameters will return the same result, which is incredibly useful for testing, debugging, and creating predictable user experiences.
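A quick sketch of the reproducibility knobs, with a made-up prompt and seed value: the same `seed` plus otherwise identical parameters should return (mostly) the same completion across runs, and the response's `system_fingerprint` field lets you detect backend changes that can still alter results.

```python
# Request parameters for a (mostly) reproducible completion.
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Name three prime numbers."}],
    "seed": 12345,     # same seed + same params -> same (or near-same) output
    "temperature": 0,  # low temperature further reduces variation
}

# With a real client, two calls should match:
# first = client.chat.completions.create(**request)
# second = client.chat.completions.create(**request)
# assert first.choices[0].message.content == second.choices[0].message.content
```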

How to Stream Completions

This example introduces a core technique for building responsive applications. It shows how to get the model's response back token-by-token, creating the real-time "typing" effect users expect. This is fundamental for building any chatbot with a good user experience.
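The consumption loop looks roughly like this. I'm using a stub generator in place of the real streamed response (which you'd get by passing `stream=True` to the API) so the loop shape is visible without an API key:

```python
def fake_stream():
    # Stands in for: client.chat.completions.create(..., stream=True),
    # which yields chunk objects as tokens are generated.
    for token in ["Hel", "lo", ", ", "world", "!"]:
        yield token

pieces = []
for delta in fake_stream():
    # With the real API, the text lives in chunk.choices[0].delta.content
    pieces.append(delta)
    print(delta, end="", flush=True)  # renders the "typing" effect live

full_text = "".join(pieces)  # keep the assembled reply for later use
```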

Basic Function Calling

This is your introduction to making models "do things." The notebook explains how to define a function (like a hypothetical weather API) and have the model generate the right arguments to call it based on a user's prompt. This is the first step toward building agents that can interact with external tools, and it’s a fun exercise!
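Here's the general shape of it: you describe your function as a tool schema, the model replies with the arguments as a JSON string, and you parse them and dispatch. The `get_weather` function below is hypothetical, and the model's reply is hard-coded as a stand-in:

```python
import json

# A hypothetical weather tool in the Chat Completions "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # made-up function, not a real API
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The model returns arguments as a JSON string; with the real API this is
# tool_call.function.arguments. Here it's hard-coded for illustration:
model_arguments = '{"city": "Berlin"}'
args = json.loads(model_arguments)
# ...then you call your actual function: get_weather(**args)
```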

Handling Long Documents for Entity Extraction

A common challenge you'll face is that many real-world documents (or collections of documents) are too long to fit into a model's context window. This notebook provides a practical and essential strategy to overcome this. It shows you how to break down a large PDF into smaller text chunks, process each chunk individually to extract key information, and then combine the results into a single, structured output.
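The chunking step can be sketched like this. I'm using a rough word-based splitter as a stand-in for the token-based chunking (with tiktoken) that the notebook uses; the overlap keeps entities that straddle a chunk boundary from being lost:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping word-based chunks."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# A made-up 500-word "document":
doc = " ".join(f"word{i}" for i in range(500))
chunks = chunk_text(doc)
# Each chunk would then go through one extraction call, and the per-chunk
# results get merged, e.g.: entities = set().union(*(extract(c) for c in chunks))
```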

Semantic Text Search with Embeddings

Ready to build a search engine that understands meaning, not just keywords? This example introduces you to the power of embeddings. You'll learn how to convert text into numerical vectors and then use vector similarity to find the most conceptually related information in a dataset of product reviews. It’s a fantastic introduction to the core technology behind modern search and recommendation systems.
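The core idea fits in a few lines. These toy 3-dimensional vectors and review snippets are made up for illustration; real embeddings come from the embeddings endpoint (e.g. `text-embedding-3-small`) and have many more dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of product reviews (made up for illustration):
reviews = {
    "Delicious coffee, great aroma":  [0.9, 0.1, 0.0],
    "Broken packaging, arrived late": [0.1, 0.9, 0.1],
    "Tasty espresso beans":           [0.8, 0.2, 0.1],
}
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "good tasting coffee"

# Rank reviews by semantic closeness to the query:
ranked = sorted(reviews, key=lambda r: cosine_similarity(query_vec, reviews[r]),
                reverse=True)
```

Even with these fake vectors, the coffee reviews come out on top and the shipping complaint sinks to the bottom, despite sharing no keywords with the query.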

Question Answering using a Search API and Re-ranking

Semantic search is cool, isn’t it? Let’s take it to the next level. Instead of just finding similar documents, this example builds a complete question-answering system. It shows you how to generate multiple search queries from a single user question, fetch results from an API, and then use a clever "re-ranking" technique with a hypothetical ideal answer to dramatically improve the relevance of the results.
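The pipeline shape looks something like this. All three functions below are hypothetical stubs standing in for the model and search API calls, and the word-overlap scorer is a crude simplification of the notebook's embedding-based re-ranking against a hypothetical ideal answer:

```python
def generate_queries(question: str) -> list[str]:
    # Real version: ask the model to propose several diverse search queries.
    return [question, f"{question} explained", f"latest {question} news"]

def search(query: str) -> list[str]:
    # Real version: call a search/news API and return article snippets.
    return [f"{query} article {i}" for i in range(2)]

def rerank(results: list[str], hypothetical_answer: str) -> list[str]:
    # Real version: embed the hypothetical ideal answer and every result,
    # then sort by cosine similarity. Here: crude shared-word count.
    answer_words = set(hypothetical_answer.lower().split())
    return sorted(results,
                  key=lambda r: len(set(r.lower().split()) & answer_words),
                  reverse=True)

question = "LLM context windows"
results = [r for q in generate_queries(question) for r in search(q)]
top = rerank(results, "An ideal explanation of LLM context windows")[:3]
```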

Advanced Data Extraction and Transformation from PDFs

This notebook showcases a truly powerful use of multimodal models like GPT-4o. You'll see how to go beyond traditional OCR to extract and structure data from complex, multilingual PDF invoices. The model doesn't just read the text; it understands the layout, groups related fields, and transforms the extracted data to fit a specific JSON schema.
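The target structure is typically expressed as a JSON Schema. The invoice fields below are my own hypothetical example of the kind of schema you'd ask the model to fill (for instance via structured outputs) after sending it the rendered PDF pages:

```python
# Hypothetical invoice schema for structured extraction.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "issue_date": {"type": "string"},
        "total": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
                "required": ["description", "amount"],
            },
        },
    },
    "required": ["invoice_number", "total"],
}
# The model's job: read the invoice image, understand the layout, and emit
# JSON that validates against this schema, regardless of source language.
```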

Integrating with Real-World APIs using OpenAPI Specs

This is where you start making your AI an active participant in digital workflows. This example demonstrates how to give the model the ability to interact with any RESTful API by providing it with an OpenAPI specification. You'll see how the model can plan and execute a series of API calls from a single natural language command. Empowering? Dangerous? What’s your take?
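One way to picture the mechanism: each operation in the spec becomes a tool the model can choose between. The tiny calendar spec below is made up, but the translation step is representative:

```python
# A minimal, made-up OpenAPI spec fragment:
openapi_spec = {
    "paths": {
        "/events": {
            "get": {"operationId": "listEvents",
                    "summary": "List calendar events."},
            "post": {"operationId": "createEvent",
                     "summary": "Create a calendar event."},
        }
    }
}

def spec_to_tools(spec: dict) -> list[dict]:
    """Turn each OpenAPI operation into a Chat Completions tool definition."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "type": "function",
                "function": {
                    "name": op["operationId"],
                    "description": f"{method.upper()} {path}: {op['summary']}",
                    # Real version: derive this from the operation's
                    # parameters/requestBody in the spec.
                    "parameters": {"type": "object", "properties": {}},
                },
            })
    return tools

tools = spec_to_tools(openapi_spec)
```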

Evaluating Model Confidence with Logprobs

How do you know if you can trust your model's output? This more advanced notebook shows you how to look "under the hood" by using the logprobs parameter. You'll learn how to analyze the model's confidence in its own token predictions. This is a powerful technique for building more reliable applications, as it allows you to create self-evaluating systems that can flag when they are uncertain.

Where do I find it? Using_logprobs.ipynb
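The core trick is small enough to show here. The token logprobs below are made-up stand-ins for what the API returns when you set `logprobs=True` on a request; converting them back to linear probabilities lets you flag low-confidence tokens:

```python
import math

# Made-up (token, logprob) pairs; with the real API these come from
# the logprobs data on the response's first choice.
token_logprobs = [("The", -0.01), ("answer", -0.05), ("is", -0.02), ("maybe", -2.3)]

flagged = []
for token, lp in token_logprobs:
    prob = math.exp(lp)   # logprob -> linear probability in [0, 1]
    if prob < 0.5:        # confidence threshold (tunable per application)
        flagged.append(token)
# A downstream system could now re-ask, escalate, or warn the user
# whenever `flagged` is non-empty.
```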

🎉 That's a wrap!

Thanks for reading this edition of Ctrl+Alt+Deploy. Found these insights valuable? Share this newsletter with fellow developers and let me know what resonated with you most!

Until next time, keep coding and stay curious! 💻

💡 Curated with ❤️ for the developer community
