API Gateway vs ALB vs Ingress on EKS in 2026 – and Where Gateway API Fits


Confused about ALB vs Ingress vs API Gateway on EKS? Learn when to use each with real production architectures, Gateway API, service mesh, and scaling patterns used in 2026.

The other day I was teaching microservices architecture on Kubernetes in my bootcamp.

The class was building an e-commerce platform on EKS. Six microservices. We had just added an API Gateway.

That is when the questions started.

“What is the difference between API Gateway and ALB?”

“How is Ingress different from API Gateway when both use NGINX?”

“Where does service mesh fit?”

“And what about Network Policies?”

“Wait, isn’t Ingress NGINX being retired? Should we even be learning this?”

That last question made me stop. Because it changed everything.

In March 2026, Ingress NGINX moved into formal retirement. Kubernetes 1.36, released April 22, 2026, marks the shift to Gateway API as the official successor to Ingress.

So this post is the explanation I gave my class. Updated for 2026. Covering ALB, Ingress, Gateway API, API Gateway, Service Mesh, and Network Policies — and exactly when to use which.

If you have ever stared at a Kubernetes architecture diagram and wondered why there are six different boxes that all seem to route traffic, this post is for you.

Why are these tools so confusing in the first place?

All of them can route HTTP. Most of them can do TLS termination. Several use NGINX or Envoy as the engine.

The features overlap. The names don’t help.

The clarity comes from a different question. Not “what does it do” but “why does it exist.”

Each layer was born to solve a production problem the previous setup could not handle.

So instead of comparing features, let me build the architecture with you. Layer by layer.

What is the difference between ALB and a LoadBalancer Service?

Your service is in a pod. It runs. It works.

You can hit it from inside the cluster. Nobody outside can reach it.

So you create a LoadBalancer Service. Kubernetes provisions a real cloud load balancer. Your app gets a public URL.
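As a minimal sketch (the service name, labels, and ports here are hypothetical), that looks like this:

```yaml
# One Service of type LoadBalancer = one cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: product-service       # hypothetical service name
spec:
  type: LoadBalancer          # asks the cloud provider for a load balancer
  selector:
    app: product              # pods this Service fronts
  ports:
    - port: 80                # port the load balancer exposes
      targetPort: 8080        # port the pod actually listens on
```

Apply ten of these and Kubernetes happily provisions ten load balancers.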

It works perfectly. Until you have 10 services.

Now you have 10 cloud load balancers. Each one shows up on your AWS bill every single month.

LoadBalancer Service solved external access. But one per service does not scale.

So you put one AWS Application Load Balancer (ALB) in front of everything.

ALB is an AWS-managed load balancer. It runs outside your cluster. It can route traffic to many services based on path or host.

Request for /api/products goes to the product service. Request for /api/orders goes to the order service.

One AWS load balancer on your bill instead of ten.

But here is the catch. You configure the ALB through AWS Console or Terraform.

Your developers cannot ship a new microservice without filing an infrastructure ticket. The routing rules live outside the cluster. The cluster lives in YAML. The two are constantly out of sync.

ALB cut your costs. But it took routing control away from your team.

What is Kubernetes Ingress, and why was it created?

Ingress is a Kubernetes resource. It defines routing rules in YAML.

Your developers commit an Ingress file alongside their service code. Routing lives where the code lives.

But Ingress is just a config file. Something has to read it and execute it.

That something is the Ingress Controller. Ingress NGINX. Traefik. AWS Load Balancer Controller.

On EKS, the AWS Load Balancer Controller is the one most teams use. You write Ingress YAML. The controller talks to AWS and provisions an ALB with the right rules automatically.
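A sketch of the path routing from earlier, expressed as Ingress YAML (service names are illustrative; the annotations follow the AWS Load Balancer Controller's conventions, and the exact set you need depends on your setup):

```yaml
# One Ingress, one ALB, many services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-ingress                                    # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public ALB
    alb.ingress.kubernetes.io/target-type: ip           # route straight to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /api/products
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 80
          - path: /api/orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
```

The controller watches this resource and reconciles the ALB's listener rules to match it.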

You get the cost benefits of one ALB. And the YAML-first control of Kubernetes.

ALB without Ingress was unmanageable from the cluster side. Ingress fixed that.

But Ingress had problems. That is why Gateway API exists.

Ingress worked. For ten years it was the default way to get traffic into a Kubernetes cluster.

But Ingress had limits.

It only handled HTTP and HTTPS. Want to route TCP or UDP traffic? You need vendor-specific extensions.

Advanced features like canary deployments, traffic splitting, and header-based routing required annotations. Lots of annotations. Each Ingress Controller had its own syntax. Migrating from one to another meant rewriting all of them.

The platform team and application team shared the same Ingress resource. There was no clean separation of who owned what.

And as of March 2026, Ingress NGINX itself is no longer maintained.

So the Kubernetes community built Gateway API.

What is Kubernetes Gateway API, and how does it differ from Ingress?

Gateway API is the official successor to Ingress. It went GA with v1.0 in late 2023, and adoption accelerated through 2025 and 2026.

It splits the old Ingress resource into three:

GatewayClass defines the type of underlying infrastructure. The platform team owns this.

Gateway represents the actual entry point. It listens on ports and handles TLS. Cluster operators manage these.

HTTPRoute (and TCPRoute, GRPCRoute) defines the actual routing rules. Application developers own these.

This is role-oriented design. Each team manages what they should manage. No more arguing about who owns the Ingress.

Gateway API also supports L4 protocols natively. TCP. UDP. gRPC. It has built-in support for traffic splitting and header-based routing without annotations.

On EKS, the AWS Load Balancer Controller now supports Gateway API as of 2026. You write Gateway and HTTPRoute resources. The controller provisions an ALB. Same cost benefits as Ingress, with a cleaner model.
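Here is a sketch of the split in practice. The resource names are hypothetical, and the GatewayClass name depends on which controller you installed; note that the traffic split needs no annotations:

```yaml
# Platform/cluster side: the entry point.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shop-gateway            # hypothetical name
spec:
  gatewayClassName: alb         # assumes a controller-provided GatewayClass
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# Application side: routing rules, owned by the service team.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: product-route
spec:
  parentRefs:
    - name: shop-gateway        # attach to the Gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api/products
      backendRefs:              # built-in traffic splitting
        - name: product-service
          port: 80
          weight: 90
        - name: product-service-canary
          port: 80
          weight: 10
```

The platform team owns the Gateway; developers ship HTTPRoutes alongside their services.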

If you are starting a new EKS project in 2026, use Gateway API. If you are running Ingress in production today, you have time. Ingress is not deprecated. But the future investment is in Gateway API.

Now your microservices need to talk to each other.

Service A calls Service B. Service B calls Service C.

Inside the cluster. Using ClusterIP Services.

It works. Until something fails.

You have no idea which call broke. You have no idea if traffic between pods was even encrypted. Some pods retry forever. Some give up immediately. Every team writes their own retry logic differently.

You want mTLS between every service. You want consistent retries. You want distributed tracing across requests.

You can ask every developer to add this to every service. Or you can add a layer.

So you install a service mesh. Istio. Linkerd. Consul.

A service mesh injects a sidecar proxy into every pod. All traffic between pods goes through the sidecar.

The sidecar handles the network problems. mTLS. Retries. Timeouts. Tracing. Traffic splitting.

Your application code stays clean. The mesh handles the plumbing.
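If your mesh is Istio, for example, enforcing mTLS across a namespace is one small resource. A sketch, assuming a hypothetical "shop" namespace (the exact apiVersion varies by Istio release):

```yaml
# Require mTLS for every workload in the namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: shop       # hypothetical namespace
spec:
  mtls:
    mode: STRICT        # sidecars reject plaintext traffic
```

No application code changes. The sidecars negotiate the certificates.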

Service-to-service traffic was a black box. Service mesh fixed that.

But your services are still wide open inside the cluster.

By default, any pod in your cluster can talk to any other pod.

Your frontend pod can talk to your payments database. Your build pod can talk to your auth service.

That is not what you want. If one pod gets compromised, the attacker can move laterally to anything.

So you use Network Policies.

A Network Policy is a Kubernetes resource that defines which pods can talk to which other pods.

You write a policy that says “only the order service can reach the payments database.” You write another that says “the frontend can only reach the API Gateway.”
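The first of those policies, sketched as YAML (all labels and names here are hypothetical):

```yaml
# Only pods labeled app=order-service may reach the payments database.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-db-ingress
spec:
  podSelector:
    matchLabels:
      app: payments-db        # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: order-service   # the only allowed caller
```

Once a pod is selected by any policy, all traffic not explicitly allowed is denied.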

Compromised pods cannot reach what they should not reach.

A flat cluster network was a security risk. Network Policies fixed that.

Then the business launches a mobile app.

Your platform was working. Internal traffic is meshed. Network policies lock things down. External traffic comes in through Gateway API.

Then the business launches a mobile app. Then a partner wants API access. Then a third-party developer wants to integrate.

Now you need to issue API keys. You need rate limits per customer. You need JWT validation centralized.

You start adding auth code to every service. Rate limiting code to every service.

Six microservices. Six different implementations of the same thing.

Customer A should get 1000 req/sec. Customer B should get 100. The free tier should get 10.

Where does that logic live? You start to lose track.

So you add an API Gateway.

Kong. APISIX. AWS API Gateway. Tyk.

An API Gateway sits between your Gateway/Ingress and your microservices. It handles everything that is not business logic.

API key validation. JWT validation. Rate limiting per customer. Request transformation. Response caching. Usage analytics.

A request comes in. The gateway checks the API key. Sees the customer’s plan allows 1000 req/sec. Forwards the request to the right service.

Your microservices stay focused on business logic. The gateway handles the API contract.
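With Kong's Ingress Controller, for instance, a per-tier rate limit is declarative. A sketch using Kong's rate-limiting plugin; the consumer and plugin names are hypothetical:

```yaml
# Limit free-tier consumers to 10 requests per second.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: free-tier-limit
plugin: rate-limiting
config:
  second: 10
  policy: local        # count requests locally per Kong node
---
# Attach the limit to a specific consumer.
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: customer-b               # hypothetical customer
  annotations:
    konghq.com/plugins: free-tier-limit
username: customer-b
```

A paid tier gets its own KongPlugin with a higher limit. The services themselves never see any of it.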

API-level concerns scattered across services made the platform fragile. API Gateway fixed that.

Wait. If Gateway API exists, do I still need an API Gateway?

This is the most confusing part of all of this. And the question my class kept coming back to.

The names are almost identical. They sound like the same thing.

They are not.

Gateway API is a Kubernetes specification for routing traffic. It replaces Ingress. It handles north-south routing into the cluster.

API Gateway is an architectural pattern that handles API management. Auth, rate limiting, API keys, transformations, analytics.

You can implement an API Gateway using Gateway API resources. Tools like Kong and Envoy Gateway support both.

But Gateway API on its own does not give you API key management or per-customer rate limits. That is API Gateway territory.

The simple rule: Gateway API gets traffic into the cluster. API Gateway manages what your APIs do once the traffic is inside.

In 2026, the line between them is blurring. Some implementations like Kong, Envoy Gateway, and APISIX are doing both. But conceptually, they solve different problems.

Production patterns: which setup do you actually need on EKS?

There are three real patterns. Each fits a different stage of your platform.

Pattern 1: Internal app or simple frontend

You have a React frontend and a few microservices behind it. The frontend is the only client.

You do not have third-party API consumers. You do not need to issue API keys.

The AWS Load Balancer Controller provisions the ALB from your Gateway and HTTPRoute resources. One YAML. One AWS bill. Done.

This is what 80% of EKS workloads actually look like. No API Gateway. No service mesh. No drama.

The trap: engineers add an NGINX “API Gateway” Deployment here because tutorials told them to. It is a reverse proxy with extra steps and a monthly cost.

Pattern 2: Public APIs for mobile or third-party clients

Now your e-commerce platform has a mobile app. Partners want to integrate.

You need API keys. Rate limits per customer. Centralized JWT validation.

The Gateway API still gets traffic into the cluster. That has not changed.

What is new is the API Gateway sitting between Gateway API and your services. Kong or APISIX or Envoy Gateway, deployed in-cluster as a Deployment with 2+ replicas.

Customer A gets 1000 req/sec. Customer B gets 100. Free tier gets 10. None of that logic touches your microservices.

This is the setup most teams skip until they have already polluted every service with auth code. Then they spend a quarter ripping it out.

Pattern 3: Scale, with internal traffic too

Your platform is bigger now. Mobile clients hit public APIs. The frontend hits internal APIs. Services talk to each other constantly.

You need different policies for different traffic. You need observability across all of it.

Public traffic goes through the API Gateway. Auth, rate limits, transformations.

Internal frontend traffic skips the gateway. It is trusted, latency-sensitive, and does not need API key validation.

Service-to-service traffic goes through the service mesh. mTLS between every pod. Distributed tracing.

Network Policies enforce who can talk to whom across the entire cluster.

This is what production at scale actually looks like. Each layer does one job well.

How to know which pattern you need

Ask one question. Who is calling your APIs?

If it is only your own frontend, you are at Pattern 1.

If you have a mobile app or third-party clients, you are at Pattern 2.

If you are running 50+ services and care about mTLS, distributed tracing, and zero-trust networking, you are at Pattern 3.

Most teams skip Pattern 1 because they read a microservices blog. Then they over-engineer toward Pattern 3 because they read a Netflix blog.

The right answer is almost always one step simpler than what you think you need.

The mistake most teams make

They jump to Pattern 3 because they read a Netflix blog.

They build an API Gateway when they have one frontend client.

They install Istio before they understand a single pod.

An NGINX Deployment routing traffic is not an API Gateway. It is a reverse proxy.

An Ingress Controller is not a load balancer. It is a router that sits behind one.

A service mesh is not an API Gateway. It handles east-west traffic. API Gateway handles north-south traffic.

Network Policies are not a firewall. They are pod-level traffic rules enforced by your CNI.

Get the names right and the architecture decisions get easier.

Should I migrate from Ingress to Gateway API in 2026?

If you are starting a new EKS project, yes. Use Gateway API from day one.

If you are running Ingress in production with the AWS Load Balancer Controller, you have time. Ingress is stable. It is not deprecated. AWS supports both.

If you are using Ingress NGINX specifically, plan your migration. The project is in retirement as of March 2026. No more security patches.

The migration path is straightforward. Both Ingress and Gateway API can run side by side. Move new services to Gateway API. Migrate old ones one at a time.

Frequently Asked Questions

Is API Gateway the same as Kubernetes Ingress?

No. Ingress (and its successor Gateway API) is a Kubernetes resource for routing external traffic to services. API Gateway is a pattern for managing API-level concerns like authentication, rate limiting, and API keys. Both can route HTTP, but they solve different problems.

Is API Gateway the same as Gateway API?

No, even though the names sound identical. Gateway API is a Kubernetes specification that replaces Ingress. API Gateway is an architectural pattern. You can implement an API Gateway using Gateway API resources, but they are not the same thing.

Do I need both ALB and Ingress on EKS?

Yes. ALB is the AWS-managed load balancer. Ingress (or Gateway API) is the Kubernetes resource that tells AWS Load Balancer Controller how to configure the ALB. They work together.

Is Ingress NGINX deprecated?

Ingress NGINX entered formal retirement in March 2026. It still works, but no new security patches will be released. Plan migration to Gateway API or another supported Ingress Controller.

Can I use AWS API Gateway with EKS?

Yes. AWS API Gateway can route to EKS-hosted services through a Network Load Balancer or VPC Link. But most teams running on EKS prefer in-cluster API Gateways like Kong, APISIX, or Envoy Gateway because they are easier to configure with Kubernetes-native tools.

Do I need a service mesh if I have an API Gateway?

They solve different problems. API Gateway handles north-south traffic (internet to your services). Service mesh handles east-west traffic (service to service inside the cluster). Most teams need both at scale.

What is the difference between Gateway API and API Gateway in Kubernetes?

Gateway API is a Kubernetes specification for routing external traffic into the cluster. It replaces Ingress. API Gateway is a pattern that handles authentication, rate limiting, and API management. Some tools like Kong and Envoy Gateway implement both.

The takeaway

Each layer between your user and your pod exists because the previous setup was not enough.

ALB gets traffic to your cluster. Gateway API (or Ingress) gets traffic into your services. Service Mesh secures and observes pod-to-pod traffic. Network Policies enforce who can talk to whom. API Gateway manages what your public APIs do.

Five tools. Five different problems. Easy to confuse on paper. Clear once you see the story.

Pick the simplest pattern that solves your real problem. Add the next layer when you hit a wall. Not when you read about one.

That is what I told my students. That is what I want you to remember the next time someone asks you to draw the architecture on a whiteboard.


If you are serious about production-grade DevOps — not tutorial DevOps — I run a 25-week live bootcamp covering AWS, Kubernetes, MLOps, and AIOps. Real EKS architecture. Real war stories. The kind of skills that show up in interviews and on the job.

[25-Week AWS DevOps + MLOps + AIOps Bootcamp →]

Akhilesh Mishra

I am Akhilesh Mishra, a self-taught DevOps engineer with 11+ years of experience working on private and public cloud (GCP & AWS) technologies.

I also mentor DevOps aspirants on their journey by providing guided learning and mentorship.

Topmate: https://topmate.io/akhilesh_mishra/