Welcome to this edition of Ctrl+Alt+Deploy 🚀
I’m Lauro Müller, and I’m super happy to have you around 🙂 Let’s dive in right away!
Hey there! 👋
If you work with Kubernetes, you've probably heard the big news: the widely used, community-managed Ingress-NGINX project is being retired.
For years, it's been the default, go-to tool for getting traffic into our clusters. But don't panic! This isn't a crisis. It's an upgrade. The old Ingress API, for all its utility, had some fundamental limitations that many of us have felt for a while. This change is our collective chance to move to something much more powerful, flexible, and officially part of Kubernetes: the Gateway API.
Let's dive into what it is and why it's a huge step forward for everyone.
Bring Your Docker and Kubernetes Skills to the Next Level

In my Docker and Kubernetes complete course, we focus on developing a strong foundation for working with containers, Docker, and Kubernetes. From building container images and running them locally all the way to deploying a fully managed Kubernetes cluster with GKE, the course covers many important aspects of working with both Docker and Kubernetes. Want to bring your skills to the next level? Then make sure to check it out!
Why We Needed Something Better Than Ingress
To understand why the Gateway API is such a big deal, we have to remember the growing pains of the old Ingress system.
Annotation Hell: If you wanted to do anything beyond the most basic HTTP routing, you had to use annotations. Need a URL rewrite? Add a vendor-specific annotation. Need to configure CORS policies, connection timeouts, or canary releases? More and more annotations. Soon, your simple Ingress resource was buried under a mountain of nginx.ingress.kubernetes.io/* annotations that were completely non-portable. If you ever wanted to switch to a different Ingress controller like Traefik or HAProxy, you had to throw everything away and start over.

Role Confusion: A single Ingress object tried to be everything to everyone, which created a messy ownership model. It mixed infrastructure concerns (like TLS certificates, domains, and load balancer configs) with simple application routing rules (like mapping /api to the api-service). This created a constant friction point between cluster operators (who care about stability and security) and application developers (who just want to ship their features). Who owns this YAML file? Who is allowed to change what? The lines were dangerously blurry.
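To see how bad this got, here’s an illustrative (made-up) Ingress using real ingress-nginx annotation keys; the resource names are hypothetical, and none of this configuration would survive a move to another controller:

```yaml
# Illustrative only: a hypothetical Ingress buried under
# controller-specific annotations (the keys are real
# ingress-nginx annotations; the names are made up).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```

Every behavior that matters here lives in the annotations, not in the typed spec, which is exactly why switching controllers meant starting over.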
The Gateway API was designed from the ground up to solve these problems by creating a clear, role-based separation of concerns.
Meet the Gateway API: A Quick Tour
The Gateway API introduces a new, more structured model built on a few core components. The real magic is how they work together, assigning clear responsibilities to different roles, just like in a real organization.
Here’s the hierarchy and who owns what:

Alright, we’ve introduced quite a few new terms and CRDs here. Let’s now spend a few minutes to understand them:
For the Platform Admin or Ops Teams (Infrastructure Team):
GatewayClass: This is a cluster-wide template that defines a type of load balancer you can create (e.g., "the Kong load balancer," "the Istio load balancer"). You typically just install it once and rarely touch it again.

Gateway: This is an actual instance of a load balancer. The admin creates a Gateway (e.g., prod-web-gateway) listening on a specific port and domain. This is the official "doorway" that application teams can use.
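To make the admin’s side concrete, here’s a hedged sketch of a Gateway terminating TLS on port 443. The names, namespace, and certificate Secret are hypothetical:

```yaml
# Hedged sketch of what a platform admin might own.
# prod-web-gateway, infra, example-gc, and prod-tls-cert
# are all hypothetical names.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-web-gateway
  namespace: infra
spec:
  gatewayClassName: example-gc
  listeners:
    - name: https
      port: 443
      protocol: HTTPS
      hostname: "*.my-app.com"
      tls:
        mode: Terminate
        certificateRefs:
          - name: prod-tls-cert   # TLS Secret managed by the admin
      allowedRoutes:
        namespaces:
          from: All               # which namespaces may attach routes
```

Note how the TLS certificate and the attachment policy live here, with the admin, and never leak into application manifests.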
For the Application Developer:
HTTPRoute: This is where you define your routing rules without touching infrastructure. You create an HTTPRoute that says, "I want traffic for my-app.com/foo to go to my foo-service." Crucially, you simply attach your HTTPRoute to an existing Gateway. You don't have to worry about IPs, ports, or TLS certs, since the admin already handled that.
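The rule described above could look roughly like this; the Gateway name and namespace are hypothetical, while the hostname and service name come from the example:

```yaml
# Hedged sketch of the developer's side of the contract.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo-route
spec:
  parentRefs:
    - name: prod-web-gateway   # the Gateway the admin created (hypothetical)
      namespace: infra
  hostnames:
    - my-app.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /foo
      backendRefs:
        - name: foo-service
          port: 80
```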
This model is a considerable improvement on the existing Ingress object. The Ops team provides a stable, secure entrypoint (Gateway), and Dev teams can safely map their own paths (HTTPRoute) without needing to ask for permission or risk breaking someone else's routes.
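Features that once required vendor annotations also become typed, portable fields. Here’s a hedged sketch of a URL rewrite expressed directly on an HTTPRoute; URLRewrite is an "extended" feature of the spec, so support varies by implementation, and the names here are hypothetical:

```yaml
# Hedged sketch: a URL rewrite as a typed filter instead of
# an nginx.ingress.kubernetes.io/rewrite-target annotation.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: prod-web-gateway   # hypothetical Gateway
      namespace: infra
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /   # strip the /api prefix
      backendRefs:
        - name: api-service
          port: 80
```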
Hands-On: Your First Gateway API Deployment
Talk is cheap, so let's build something! This guide is based on a clean Minikube setup, but the principles apply to any Kubernetes cluster.
Step 1: Start Your Cluster
First things first, let's get a local cluster running.
minikube start

Step 2: Install the Gateway API CRDs
Before we do anything else, we need to teach our cluster the vocabulary of the Gateway API. This is not shipped by default with Kubernetes, so we need to install it ourselves. We do this by applying the official Custom Resource Definitions (CRDs), which add resources like Gateway and HTTPRoute to the Kubernetes API.
# The --server-side flag is a good practice for applying CRDs
kubectl apply --server-side -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

Step 3: Install the Kong Gateway Controller via Helm
We now have the new resource types, but we are still missing a controller that understands them. There are various implementations available, and we’ll focus on Kong for this guide. We’ll install it using Helm, the standard package manager for Kubernetes. Below, I’m pinning a specific chart version for reproducibility, but any recent version of Kong supports the Gateway API.
helm repo add kong https://charts.konghq.com
helm repo update
helm upgrade --install \
  kong kong/ingress \
  --namespace kong \
  --create-namespace \
  --version 0.21.0

This command creates a dedicated kong namespace and installs the controller that will watch for Gateway API resources and configure the underlying proxy.
Step 4: Create the GatewayClass and Gateway
This is the crucial "platform admin" step. The Helm chart installed the controller, but now we need to create the specific Gateway resources that our applications will use. Let’s create a new gateway.yaml file containing the definitions for both the GatewayClass and a simple Gateway. Notice how we link the GatewayClass to the Kong controller via its spec.controllerName field, and how the Gateway references the kong GatewayClass via gatewayClassName, so the controller provisions a Gateway of that class.
# gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: kong
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: kong
  namespace: kong
spec:
  gatewayClassName: kong
  listeners:
    - name: proxy
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All

Let’s now apply this file with:
kubectl apply -f gateway.yaml

Step 5: Deploy an App and an HTTPRoute (The Developer's Job)
Now for the fun part! As an application developer, all we need to do is deploy our app and an HTTPRoute that points to the Gateway the operator created for us.
Let’s create a file named app.yaml:
# app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-test-app
  template:
    metadata:
      labels:
        app: my-test-app
    spec:
      containers:
        - name: web
          image: nginxdemos/hello
---
apiVersion: v1
kind: Service
metadata:
  name: my-test-service
spec:
  selector:
    app: my-test-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-test-route
spec:
  # This points our route to the Gateway
  # named 'kong' in the 'kong' namespace.
  parentRefs:
    - name: kong
      namespace: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /hello
      backendRefs:
        - name: my-test-service
          port: 80

Notice how clean this new model is? We just say "attach me to the kong gateway" and define our rules. No annotations, no infrastructure details. We can then apply this file with:
kubectl apply -f app.yaml

Step 6: Test It!
Finally, let's get the URL for our Kong proxy from Minikube and test that our route is working.
# This command asks minikube for the URL of the proxy service
minikube service -n kong kong-gateway-proxy --url

This will output a URL like http://127.0.0.1:54321. Now, take that URL and add your path:
# Use the URL from the previous command
curl http://<YOUR_MINIKUBE_URL>/hello

You should see the welcome message from the NGINX demo app. Success! You've just configured your first application using the modern Kubernetes Gateway API.
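From here, patterns that used to require annotation stacks, like canary releases, are plain fields too. Here’s a hedged sketch that splits the /hello traffic 90/10 by weight; the canary Service is hypothetical, while the other names come from the files above:

```yaml
# Hedged sketch: a weighted 90/10 canary on the route from Step 5.
# my-test-service-canary is a hypothetical second Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-test-route
spec:
  parentRefs:
    - name: kong
      namespace: kong
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /hello
      backendRefs:
        - name: my-test-service
          port: 80
          weight: 90               # ~90% of requests
        - name: my-test-service-canary
          port: 80
          weight: 10               # ~10% of requests
```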
Welcome to the Future
The retirement of Ingress-NGINX might feel like a big shift, but it's a necessary step toward a more robust, standardized, and secure way of managing traffic in Kubernetes. The Gateway API, with its role-based design and expressive power, is a fantastic successor.
My advice? Don't wait. The future is here. Start spinning up a Gateway in your test clusters and attach a non-critical service to it. Get a feel for the new workflow.
I'd love to know what you think. Have you started experimenting with the Gateway API yet? What are your first impressions? Just hit reply and share your thoughts!
🎉 That's a wrap!
Thanks for reading this edition of Ctrl+Alt+Deploy. Found these insights valuable? Share this newsletter with fellow developers and let me know which story resonated with you most!
Until next time, keep coding and stay curious! 💻✨
💡 Curated with ❤️ for the developer community
