I called my last post ‘basic’ external access into the cluster because I didn’t get a chance to talk about the ingress object. Ingress resources are interesting in that they allow you to use one object to load balance to different back-end objects. This can be handy for several reasons and gives you a more fine-grained way to load balance traffic. Let’s take a look at an example of using the Nginx ingress controller in our Kubernetes cluster.
To demonstrate this we’re going to continue using the same lab that we used in previous posts, but for the sake of level setting we’re going to start by clearing the slate. Let’s delete all of the objects in the cluster and then build them from scratch so you can see every step of the way how we set up and use the ingress.
kubectl delete deployments --all
kubectl delete pods --all
kubectl delete services --all
Since this will kill our net-test pod, let’s start that again…
kubectl run net-test --image=jonlangemak/net_tools
Recall that we use this pod as a testing endpoint to simulate traffic originating from inside the cluster, so it’s worth keeping around.
Alright – now that we have an empty cluster the first thing we need to do is build some things we want to load balance to. In this case, we’ll define two deployments. Save this definition as a file called back-end-deployment.yaml on your Kubernetes master…
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deploy-test-1
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-front-end
        version: v1
    spec:
      containers:
      - name: tiny-web-server-1
        image: jonlangemak/web1_8080
        ports:
        - containerPort: 8080
          name: web-port
          protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deploy-test-2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: web-front-end
        version: v2
    spec:
      containers:
      - name: tiny-web-server-2
        image: jonlangemak/web1_9090
        ports:
        - containerPort: 9090
          name: web-port
          protocol: TCP
Notice how we defined two deployments in the same file and separated the definitions with a ---. Next we want to define a service that can be used to reach each of these deployments. Save this service definition as back-end-service.yaml…
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: web-port
  selector:
    app: web-front-end
    version: v1
---
apiVersion: v1
kind: Service
metadata:
  name: backend-svc-2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: web-port
  selector:
    app: web-front-end
    version: v2
Notice how the service selectors are looking for the specific labels that match each pod. In this case, we’re matching on app and version, with the version differing between the two deployments. We can now deploy these definitions and ensure everything is created as expected…
I created a new folder called ingress to store all these definitions in.
user@ubuntu-1:~/ingress$ kubectl create -f back-end-deployment.yaml
deployment "deploy-test-1" created
deployment "deploy-test-2" created
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get deployments
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy-test-1   2         2         2            2           11s
deploy-test-2   2         2         2            2           11s
net-test        1         1         1            0           50s
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get pods -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
deploy-test-1-1702481282-568d6   1/1       Running   0          20s       10.100.1.20   ubuntu-3
deploy-test-1-1702481282-z6nx7   1/1       Running   0          20s       10.100.3.31   ubuntu-5
deploy-test-2-2194066824-3d47q   1/1       Running   0          20s       10.100.0.25   ubuntu-2
deploy-test-2-2194066824-8dx3w   1/1       Running   0          20s       10.100.2.28   ubuntu-4
net-test-645963977-6216t         1/1       Running   0          2m        10.100.3.33   ubuntu-5
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl create -f back-end-service.yaml
service "backend-svc-1" created
service "backend-svc-2" created
user@ubuntu-1:~/ingress$ kubectl get services -o wide
NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE       SELECTOR
backend-svc-1   10.11.12.117   <none>        80/TCP    8s        app=web-front-end,version=v1
backend-svc-2   10.11.12.164   <none>        80/TCP    8s        app=web-front-end,version=v2
kubernetes      10.11.12.1     <none>        443/TCP   23m       <none>
user@ubuntu-1:~/ingress$
These deployments and services represent the pods and services that we’ll be doing the actual load balancing to. They appear to have deployed successfully, so let’s move on to the next step.
Next we’ll start building the actual ingress. To start with we need to define what’s referred to as a default back-end. A default back-end serves as the endpoint that the ingress sends traffic to when a request doesn’t match any other rule. In our case the default back-end will consist of a deployment and a service that matches the deployed pods to make them easily reachable. First define the default back-end deployment. Save this as a file called default-back-end-deployment.yaml…
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
Next let’s define the service that will match the default back-end. Save this file as default-back-end-service.yaml…
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-http-backend
Now let’s deploy both the default back-end deployment and its service…
user@ubuntu-1:~/ingress$ kubectl create -f default-back-end-deployment.yaml
deployment "default-http-backend" created
user@ubuntu-1:~/ingress$ kubectl create -f default-back-end-service.yaml
service "default-http-backend" created
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get deployments
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default-http-backend   2         2         2            2           14s
deploy-test-1          2         2         2            2           14m
deploy-test-2          2         2         2            2           14m
net-test               1         1         1            0           2m
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get pods -o wide
NAME                                    READY     STATUS    RESTARTS   AGE       IP            NODE
default-http-backend-4080621718-fmn8q   1/1       Running   0          20s       10.100.3.32   ubuntu-5
default-http-backend-4080621718-psm04   1/1       Running   0          20s       10.100.0.26   ubuntu-2
deploy-test-1-1702481282-568d6          1/1       Running   0          15m       10.100.1.20   ubuntu-3
deploy-test-1-1702481282-z6nx7          1/1       Running   0          15m       10.100.3.31   ubuntu-5
deploy-test-2-2194066824-3d47q          1/1       Running   0          15m       10.100.0.25   ubuntu-2
deploy-test-2-2194066824-8dx3w          1/1       Running   0          15m       10.100.2.28   ubuntu-4
net-test-645963977-6216t                1/1       Running   0          4m        10.100.3.33   ubuntu-5
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get services
NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
backend-svc-1          10.11.12.117   <none>        80/TCP    4m
backend-svc-2          10.11.12.164   <none>        80/TCP    4m
default-http-backend   10.11.12.88    <none>        80/TCP    21s
kubernetes             10.11.12.1     <none>        443/TCP   28m
user@ubuntu-1:~/ingress$
Great! This looks just like we’d expect, but let’s do some extra validation from our net-test pod to make sure the pods and services are working as expected before we get too far into the ingress configuration…
If you aren’t comfortable with services see my post on them here.
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t curl http://backend-svc-1
This is Web Server 1 running on 8080!
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t curl http://backend-svc-2
This is Web Server 1 running on 9090!
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t curl http://default-http-backend
default backend - 404
user@ubuntu-1:~/ingress$
As expected, pods can resolve the services by DNS name and we can successfully reach each service. In the case of the default back-end we get a 404. Since all of the back-end pods are reachable, we can move on to defining the ingress itself. The Nginx ingress controller comes in the form of a deployment. The deployment definition looks like this…
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 18080
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 18080
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 18080
Go ahead and save this file as nginx-ingress-controller-deployment.yaml on your server. However – before we can deploy this definition we need to deploy a config-map. Config-maps are a Kubernetes construct used to handle non-private configuration information. Since the Nginx ingress controller above expects a config-map, we need to deploy that before we can deploy the ingress controller…
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  labels:
    app: nginx-ingress-lb
    group: lb
data:
  enable-vts-status: 'true'
Here we’re using a config-map to pass service-level parameters to the pod. In this case, we’re passing the enable-vts-status: 'true' parameter, which is required for us to see the VTS page of the Nginx load balancer. Save this as nginx-ingress-controller-config-map.yaml on your server and then deploy both the config-map and the Nginx ingress controller deployment…
user@ubuntu-1:~/ingress$ kubectl create -f nginx-ingress-controller-config-map.yaml
configmap "nginx-ingress-controller-conf" created
user@ubuntu-1:~/ingress$ kubectl create -f nginx-ingress-controller-deployment.yaml
deployment "nginx-ingress-controller" created
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get deployments nginx-ingress-controller
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-ingress-controller   1         1         1            1           26s
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get pods -o wide
NAME                                        READY     STATUS    RESTARTS   AGE       IP            NODE
default-http-backend-4080621718-fmn8q       1/1       Running   0          33m       10.100.3.32   ubuntu-5
default-http-backend-4080621718-psm04       1/1       Running   0          33m       10.100.0.26   ubuntu-2
deploy-test-1-1702481282-568d6              1/1       Running   0          48m       10.100.1.20   ubuntu-3
deploy-test-1-1702481282-z6nx7              1/1       Running   0          48m       10.100.3.31   ubuntu-5
deploy-test-2-2194066824-3d47q              1/1       Running   0          48m       10.100.0.25   ubuntu-2
deploy-test-2-2194066824-8dx3w              1/1       Running   0          48m       10.100.2.28   ubuntu-4
net-test-645963977-6216t                    1/1       Running   0          30m       10.100.3.33   ubuntu-5
nginx-ingress-controller-2003317751-r0807   1/1       Running   0          43s       10.100.3.34   ubuntu-5
user@ubuntu-1:~/ingress$
Alright – so we’re still looking good here. The pod generated from the deployment is running. If you want to perform another quick smoke test at this point you can try connecting to the Nginx controller pod directly from the net-test pod. Doing so should result in landing at the default back-end since we have not yet told the ingress what it should do…
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t curl http://10.100.3.34
default backend - 404
user@ubuntu-1:~/ingress$
Excellent! Next we need to define the ingress policy, or ingress object. Doing so is just like defining any other object in Kubernetes; we use a YAML definition like the one below…
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: website8080.com
    http:
      paths:
      - backend:
          serviceName: backend-svc-1
          servicePort: 80
  - host: website9090.com
    http:
      paths:
      - backend:
          serviceName: backend-svc-2
          servicePort: 80
  - host: website.com
    http:
      paths:
      - path: /eightyeighty
        backend:
          serviceName: backend-svc-1
          servicePort: 80
      - path: /ninetyninety
        backend:
          serviceName: backend-svc-2
          servicePort: 80
      - path: /nginx_status
        backend:
          serviceName: nginx-ingress
          servicePort: 18080
The rules defined in the ingress will be read by the Nginx ingress controller and turned into Nginx load balancing rules. Pretty slick right? In this case, we define three rules. Let’s walk through them one at a time from top to bottom (a rough sketch of the Nginx configuration they translate into follows the list).
- The first rule looks for HTTP traffic headed to the host website8080.com. If it receives traffic matching this host it will load balance it to the pods that match the selector for the service backend-svc-1.
- The second rule looks for HTTP traffic headed to the host website9090.com. If it receives traffic matching this host it will load balance it to the pods that match the selector for the service backend-svc-2.
- The third rule looks for traffic destined to the host website.com on several different paths…
  - Traffic matching the path /eightyeighty is load balanced to the pods that match the selector for the service backend-svc-1.
  - Traffic matching the path /ninetyninety is load balanced to the pods that match the selector for the service backend-svc-2.
  - Traffic matching the path /nginx_status is load balanced to the pods that match the selector for the service nginx-ingress (not yet defined).
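To make that translation a little more concrete, here is a rough, hand-written sketch of the kind of Nginx configuration these rules correspond to. This is not the literal config the controller renders (the real file contains generated upstream blocks for every service plus a lot of boilerplate); the upstream contents below are just the pod IPs we saw earlier, shown for illustration…

upstream backend-svc-1 {
    # populated by the controller with the pod IPs behind backend-svc-1
    server 10.100.1.20:8080;
    server 10.100.3.31:8080;
}

# (a matching upstream for backend-svc-2 is omitted here for brevity)

server {
    listen 80;
    server_name website8080.com;

    location / {
        proxy_pass http://backend-svc-1;
    }
}

server {
    listen 80;
    server_name website.com;

    location /eightyeighty {
        proxy_pass http://backend-svc-1;
    }

    location /ninetyninety {
        proxy_pass http://backend-svc-2;
    }
}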
Those rules are pretty straightforward and the sort of thing we’re used to dealing with on traditional load balancing platforms. Let’s go ahead and save this definition as nginx-ingress.yaml and deploy it to the cluster…
user@ubuntu-1:~/ingress$ kubectl create -f nginx-ingress.yaml
ingress "nginx-ingress" created
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get ingress
NAME            HOSTS                                         ADDRESS         PORTS     AGE
nginx-ingress   website8080.com,website9090.com,website.com   192.168.50.75   80        41s
We can see that the ingress has been created successfully. Had we been watching the logs of the Nginx ingress controller pod as we deployed the ingress, we would have seen log entries like these shortly after the ingress resource was defined in the cluster…
I0501 20:56:20.040242       1 event.go:216] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a085ce3e-2eb0-11e7-ac2c-000c293e4951", APIVersion:"extensions", ResourceVersion:"4355820", FieldPath:""}): type: 'Normal' reason: 'CREATE' default/nginx-ingress
I0501 20:56:20.042062       1 controller.go:491] Updating loadbalancer default/nginx-ingress with IP 192.168.50.75
I0501 20:56:20.045410       1 event.go:216] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a085ce3e-2eb0-11e7-ac2c-000c293e4951", APIVersion:"extensions", ResourceVersion:"4355822", FieldPath:""}): type: 'Normal' reason: 'UPDATE' default/nginx-ingress
I0501 20:56:20.045870       1 event.go:216] Event(api.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"a085ce3e-2eb0-11e7-ac2c-000c293e4951", APIVersion:"extensions", ResourceVersion:"4355820", FieldPath:""}): type: 'Normal' reason: 'CREATE' ip: 192.168.50.75
I0501 20:56:27.368794       1 controller.go:420] updating service backend-svc-1 with new named port mappings
I0501 20:56:27.380260       1 controller.go:420] updating service backend-svc-2 with new named port mappings
W0501 20:56:27.385636       1 controller.go:829] service default/nginx-ingress does not exists
W0501 20:56:27.386426       1 controller.go:777] upstream default-nginx-ingress-18080 does not have any active endpoints. Using default backend
The ingress controller is constantly watching the API server for ingress configuration. Directly after we defined the ingress policy, the controller started building the configuration in the Nginx load balancer. Now that it’s defined, we should be able to do some preliminary testing within the cluster. Once again, from our net-test pod we can run the following tests…
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -q http://10.100.3.34
default backend - 404
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website8080.com http://10.100.3.34
This is Web Server 1 running on 8080!
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website9090.com http://10.100.3.34
This is Web Server 1 running on 9090!
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website.com http://10.100.3.34
default backend - 404
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website.com http://10.100.3.34/eightyeighty
This is Web Server 1 running on 8080!
user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website.com http://10.100.3.34/ninetyninety
This is Web Server 1 running on 9090!
user@ubuntu-1:~/ingress$
We can run tests from within the cluster by connecting directly to the pod IP address of the Nginx ingress controller, which in this case is 10.100.3.34. You’ll notice the first test fails and we end up at the default back-end. This is because we didn’t pass a host header. In the second example we pass the website8080.com host header and get the correct response. In the third example we pass the website9090.com host header and also receive the response we’re expecting. In the fourth example we attempt to connect to website.com and once again receive the default back-end 404 response, since the request didn’t match any of the paths defined for that host. If we then try the appropriate paths we once again get the correct responses.
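If you want to double-check what the controller actually picked up, two quick ways to inspect it from the master are shown below. kubectl describe ingress is standard kubectl; the second command peeks at the rendered configuration inside the controller pod and assumes the stock image writes it to /etc/nginx/nginx.conf (substitute your own pod name)…

# Show the rules the API server has for this ingress (hosts, paths, and back ends)
kubectl describe ingress nginx-ingress

# Peek at the configuration the controller rendered from those rules
# (path assumes the stock nginx-ingress-controller image layout)
kubectl exec -it nginx-ingress-controller-2003317751-r0807 -- cat /etc/nginx/nginx.conf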
The last piece that’s missing is external access. In our case, we need to expose the ingress to the upstream network. Since we’re not running in a cloud environment, the best option for that would be with a nodePort type service like this…
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
    name: http
  - port: 18080
    nodePort: 32767
    name: http-mgmt
  selector:
    app: nginx-ingress-lb
I used a nodePort service here, but you could also have used the externalIP construct, which would allow you to access the URLs on their normal port.
Notice that this service is looking for pods that match the selector app: nginx-ingress-lb and provides service functionality for two different ports. The first is port 80, which we’re asking it to provide on the host’s interface on port (nodePort) 30000. This will service the actual inbound requests to the websites. The second is port 18080, which we’re asking it to provide on nodePort 32767. This will let us view the Nginx VTS monitoring page of the load balancer. Let’s save this definition as nginx-ingress-controller-service.yaml and deploy it to our cluster…
user@ubuntu-1:~/ingress$ kubectl create -f nginx-ingress-controller-service.yaml
service "nginx-ingress" created
user@ubuntu-1:~/ingress$
user@ubuntu-1:~/ingress$ kubectl get services nginx-ingress -o wide
NAME            CLUSTER-IP     EXTERNAL-IP   PORT(S)                        AGE       SELECTOR
nginx-ingress   10.11.12.214   <nodes>       80:30000/TCP,18080:32767/TCP   15s       app=nginx-ingress-lb
user@ubuntu-1:~/ingress$
Now we should be able to reach all of our URLs from outside of the cluster either by passing the host header manually as we did above, or by creating DNS records to resolve the names to a Kubernetes node. If you want to access the Nginx VTS monitoring page through a web browser you’ll need to go the DNS route. I created local DNS zones for each domain to test and was then successful in reaching the website from my workstation’s browser…
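If you don’t have DNS handy and just want to test from a single workstation, host-file entries pointing the three names at one of the nodes accomplish the same thing. The IP below is the ubuntu-2 address used in the diagram later in this post; substitute one of your own nodes…

# /etc/hosts on the test workstation
10.20.30.72   website.com website8080.com website9090.com

With those entries in place, http://website8080.com:30000 should land on the ingress controller through the nodePort service and return the web server response.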
If you added the DNS records you should also be able to reach the VTS monitoring page of the Nginx ingress controller as well…
When I first saw this, I was immediately surprised by something. Does anything look strange to you? I was surprised to see that the upstream pools listed the actual pod IP addresses. Recall that when we defined our ingress policy we listed the destinations as Kubernetes services. My initial assumption was that the Nginx ingress controller would simply resolve the service name to an IP address and use the single service IP as its pool. That is, the ingress controller would just be load balancing to a normal Kubernetes service. Turns out that’s not the case. The ingress controller relies on the services to keep track of the pods but doesn’t actually use the service construct to get traffic to the pods. Since Kubernetes is keeping track of which pods are in a given service, the ingress controller can just query the API server for a list of pods that are currently alive and match the selector for the service. In this manner, traffic is load balanced directly to a pod rather than through a service construct. If we mapped out a request to one of our test websites through the Nginx ingress controller it would look something like this…
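You can see the same pod-level view the controller works from by asking the API server for the service’s endpoints. The controller effectively watches this list rather than forwarding to the ClusterIP; the output below is what you’d expect given the pods deployed earlier (the age will obviously differ)…

user@ubuntu-1:~/ingress$ kubectl get endpoints backend-svc-2
NAME            ENDPOINTS                           AGE
backend-svc-2   10.100.0.25:9090,10.100.2.28:9090   1h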
If you aren’t comfortable with how nodePort services work check out my last post.
In this case, I’ve pointed the DNS zones for website.com, website8080.com, and website9090.com to the host ubuntu-2. The diagram above shows a client session headed to website9090.com. Note that the client still believes that its TCP session (orange line) is with the host ubuntu-2 (10.20.30.72). The nodePort service is doing its job and sending the traffic over to the Nginx ingress controller. In doing so, the host hides the traffic behind its own source IP address to ensure the traffic returns successfully (blue line). This is entirely nodePort service functionality. What’s new is that when the Nginx pod talks to the back-end pool, in this case 10.100.2.28, it does so directly pod to pod (green line).
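A quick way to exercise that same path from outside the cluster without setting up DNS is to hit the nodePort on ubuntu-2 directly and supply the host header yourself. Assuming the lab addressing above, the response should come from one of the backend-svc-2 pods…

user@ubuntu-1:~$ curl -H "Host: website9090.com" http://10.20.30.72:30000
This is Web Server 1 running on 9090!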
As you can see – the ingress now allows us to handle traffic to multiple different back ends. The ingress policy can be changed by editing the object with kubectl edit ingress nginx-ingress. So, for instance, let’s say that we wanted website8080.com to point to the pods that are selected by backend-svc-2 rather than backend-svc-1. Simply edit the ingress to look like this…
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: 2017-05-01T20:56:20Z
  generation: 1
  name: nginx-ingress
  namespace: default
  resourceVersion: "4355822"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress
  uid: a085ce3e-2eb0-11e7-ac2c-000c293e4951
spec:
  rules:
  - host: website8080.com
    http:
      paths:
      - backend:
          serviceName: backend-svc-2
          servicePort: 80
  - host: website9090.com
    http:
      paths:
      - backend:
          serviceName: backend-svc-2
          servicePort: 80
  - host: website.com
    http:
      paths:
      - backend:
          serviceName: backend-svc-1
          servicePort: 80
        path: /eightyeighty
      - backend:
          serviceName: backend-svc-2
          servicePort: 80
        path: /ninetyninety
      - backend:
          serviceName: nginx-ingress
          servicePort: 18080
        path: /nginx_status
status:
  loadBalancer:
    ingress:
    - ip: 192.168.50.75
Then save the configuration and try to reconnect to website8080.com once again…
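For example, repeating the earlier in-cluster test against the controller pod should now return the web server 2 response for the website8080.com host header (same pod IP as before; yours will differ if the controller pod has moved)…

user@ubuntu-1:~/ingress$ kubectl exec -it net-test-645963977-6216t -- curl -H Host:website8080.com http://10.100.3.34
This is Web Server 1 running on 9090!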
The Nginx ingress controller watches for configuration changes on the API server and then implements those changes. Had we been watching the logs on the Nginx ingress controller container, we would have seen something like this after we made our change…
I0502 23:29:27.406603 1 command.go:76] change in configuration detected. Reloading...
The ingress controller is also watching for changes to the service. For instance, if we now scaled our deploy-test-2 deployment we could see the Nginx pool size increase to account for the new pods. Here’s what VTS looks like before the change…
Then we can scale the deployment up with this command…
user@ubuntu-1:~$ kubectl scale --replicas=4 deployment/deploy-test-2
deployment "deploy-test-2" scaled
user@ubuntu-1:~$
And after a couple of seconds VTS will show the newly deployed pods as part of the pool…
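You can confirm the pool really did grow by checking what the backend-svc-2 selector now matches. These are ordinary kubectl queries; the label values come from the deployment definition earlier in the post…

# list only the v2 web pods that backend-svc-2 selects; there should now be four
kubectl get pods -l app=web-front-end,version=v2 -o wide

# the service's endpoint list should show four pod IP:port pairs as well
kubectl get endpoints backend-svc-2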
We can also modify properties of the controller itself by modifying the configMap that the controller reads for its configuration. One of the more interesting options we can enable on the Nginx ingress controller is sticky sessions. Since the ingress is load balancing directly to the pods rather than through a service, it’s possible for it to maintain session affinity to individual back-end pool members. We can enable this by editing the configMap with kubectl edit configmap nginx-ingress-controller-conf and adding the enable-sticky-sessions parameter with a value of true…
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  enable-vts-status: "true"
  enable-sticky-sessions: 'true'
kind: ConfigMap
metadata:
  creationTimestamp: 2017-05-01T20:38:45Z
  labels:
    app: nginx-ingress-lb
    group: lb
  name: nginx-ingress-controller-conf
  namespace: default
  resourceVersion: "4354268"
  selfLink: /api/v1/namespaces/default/configmaps/nginx-ingress-controller-conf
  uid: 2c21c076-2eae-11e7-ac2c-000c293e4951
Once again, the Nginx ingress controller will detect the change and reload its configuration. Now if we access website8080.com repeatedly from the same host we should see the load sent to the same pod, in this case the pod 10.100.0.25…
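A rough way to exercise this from the command line is below. I’m assuming the affinity is cookie-based (which is how this option is typically implemented), so the loop preserves cookies between requests; since every backend-svc-2 pod returns the same string, watch the VTS page while it runs and you should see only one pod’s request counter climb…

# send a handful of requests through the nodePort, re-using the same cookie jar
for i in 1 2 3 4 5; do
  curl -s -b /tmp/ingress-cookies.txt -c /tmp/ingress-cookies.txt \
       -H "Host: website8080.com" http://10.20.30.72:30000
done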
The point of this post was just to show you the basics of how the ingress works from a network perspective. There are many other use cases for ingresses and many other ingress controllers to choose from besides Nginx. Take a look at the official documentation as well as these other posts that I found helpful during my research…
Hi,
This post was extremely helpful. Thanks a lot for it. I was successful in implementing ingress, but I’m implementing it in AWS. The above configurations worked and I exposed the ingress controller using an ELB. I was able to access my various services, but I’m not able to access the nginx_status page publicly. I am able to access that page from the net-test pod. Any thoughts?
Great post!!!! Thanks a lot.
After going through the setup, I am able to access the endpoints this way
curl -H Host:jcia.federated.fds http://11.168.84.xx:30000
This is Web Server 1 running on 8080!
where jcia.federated.fds is the DNS-resolvable name of the master.
This is the ingress spec part…

- host: jcia.federated.fds
  http:
    paths:
    - backend:
        serviceName: backend-svc-1
        servicePort: 80
I want to get the node IP out of the way completely and access the web service through a single URL, like the DNS name of the master, a load balancer, or a VIP, so that even if one node goes down it doesn’t affect the application.
Could you please let me know how this can be achieved?
Thanks once again
Forgot to mention that 11.168.84.xx is my node IP and that is what I want to avoid in my access URL.
I keep getting “ImagePullBackOff” when creating the net_tools pod, even after I deleted the pod and created it again and again. Any ideas?
$ kubectl get po
NAME READY STATUS RESTARTS AGE
deploy-test-1-3200654403-1fbbg 1/1 Running 0 24m
deploy-test-1-3200654403-n3v4d 1/1 Running 0 20m
deploy-test-2-3792706633-3038g 1/1 Running 0 24m
deploy-test-2-3792706633-rws85 0/1 ImagePullBackOff 0 7m
net-test-353103886-43gm4 0/1 ImagePullBackOff 0 2m
ImagePullBackOff typically happens if the image reference is invalid or the node can’t pull the image from the registry.