Setting the Scene for Ingress
Get introduced to Ingress.
You’ll need a working knowledge of Kubernetes Services before reading this chapter. If you don’t already have it, consider going back and reading about Services first.
We’ll capitalize Ingress as it’s a resource in the Kubernetes API. We’ll also use the terms LoadBalancer and load balancer as follows:

- LoadBalancer refers to a Kubernetes Service object of `type=LoadBalancer`.
- load balancer refers to one of the cloud’s internet-facing load balancers.

For example, when we create a Kubernetes LoadBalancer Service, Kubernetes talks to our cloud platform and provisions a cloud load balancer.
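As a sketch, a LoadBalancer Service manifest looks something like this (the Service name, app label, and port numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb        # hypothetical name
spec:
  type: LoadBalancer  # tells Kubernetes to provision a cloud load balancer
  selector:
    app: web          # illustrative label; selects the Pods to expose
  ports:
  - port: 80          # port the cloud load balancer listens on
    targetPort: 8080  # port the app container listens on
```

Applying this on a supported cloud platform causes Kubernetes to provision a cloud load balancer and write its public address into the Service’s status.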
Ingress was promoted to generally available (GA) in Kubernetes version 1.19 after spending over three years, and more than 15 releases, in alpha and beta. During that time, service meshes grew in popularity, and there’s now some overlap in functionality. As a result, if we’re planning to deploy a service mesh, we may not need Ingress.
Background
The previous chapter showed us how to use NodePort and LoadBalancer Services to expose applications to external clients. However, both have limitations.

NodePort Services only work on high port numbers, and clients must keep track of node IP addresses. LoadBalancer Services fix these issues but require a one-to-one mapping between internal Services and cloud load balancers. This means a cluster with 25 internet-facing apps will need 25 cloud load balancers, which can cost money! Our cloud may also limit the number of load balancers we can create.
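To make the NodePort limitation concrete, here’s a minimal sketch (names and ports are illustrative). The Service is exposed on every node at a port from the default 30000–32767 range, and clients must connect to `<node-ip>:<node-port>`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: web           # illustrative label
  ports:
  - port: 80           # cluster-internal port
    targetPort: 8080   # container port
    nodePort: 30080    # high port from the default 30000-32767 range
```

Clients would reach the app at something like `http://<node-ip>:30080`, which is why they need to track node IPs and non-standard ports.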
Ingress fixes this by letting us expose multiple Services through a single cloud load balancer. It creates a single cloud load balancer listening on port 80 or 443 and uses host-based and path-based routing to map connections to different Services in the cluster.
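As a sketch, an Ingress combining host-based and path-based routing might look like this (the hostnames, Service names, and ports are illustrative; `networking.k8s.io/v1` is the GA API version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-app                   # hypothetical name
spec:
  rules:
  - host: shop.example.com          # host-based routing
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-svc          # illustrative Service
            port:
              number: 80
  - host: example.com
    http:
      paths:
      - path: /blog                 # path-based routing
        pathType: Prefix
        backend:
          service:
            name: blog-svc          # illustrative Service
            port:
              number: 80
```

Both rules are served through the same cloud load balancer: traffic for `shop.example.com` goes to one Service, while traffic for `example.com/blog` goes to another.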