GKE - Global ingress in practice on Google Container Engine - Part 2: Demo
This article is a follow-up to a couple of previous posts: Global Kubernetes in 3 Steps on GCP, which walks through setting up a global cluster, and Global ingress in practice on Google Container Engine — Part 1: Discussion, which covers how you would use a global cluster with a Google load balancer for ingress.
Source Code
The source code for this walkthrough is on GitHub: cgrant/global-k8s-ingress-with-gce-controller
In there you’ll find:
- /app - the code for the apps used in the demo
- /cluster - a single script to provision a federated cluster
- /deploy - the k8s yaml files we’ll be using in the demo
Set up the cluster
I’ve covered this in detail in the Global Kubernetes in 3 Steps post, and the steps are provided again in the repo’s README. This article is more about how to use the cluster, so I won’t go through the setup process here.
Deploy the Applications and Ingress
First off, grab the repo and clone it locally:
git clone https://github.com/cgrant/global-k8s-ingress-with-gce-controller
cd global-k8s-ingress-with-gce-controller
For this example I’ve provided sample Python applications in the /app directory. I’ve prebuilt the images and pushed them to Docker Hub as well, so you can use them as-is, customize the code, or swap in your own images.
Deploy the apps
kubectl apply -f deploy/app-with-context.yaml
kubectl apply -f deploy/simple-echo.yaml
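If you’d like to confirm everything came up before moving on, a quick check with kubectl works; the exact resource names come from the yaml files above:

# Deployments should show their pods ready, and the Services should list the fixed NodePorts
kubectl get deployments
kubectl get services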
As mentioned in Part 1, you’ll need to explicitly set the NodePort values on the Services. I’ve already set these for you to 30048 and 30050.
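For reference, pinning a NodePort looks roughly like this in a Service spec. This is just an illustrative sketch: the name, selector, and targetPort below are assumptions, so check the yaml files in /deploy for the real values.

apiVersion: v1
kind: Service
metadata:
  name: simple-echo          # illustrative name
spec:
  type: NodePort
  selector:
    app: simple-echo         # assumed label on the app pods
  ports:
  - port: 80                 # port the Service exposes inside the cluster
    targetPort: 8080         # port the container listens on (assumed)
    nodePort: 30050          # fixed so every cluster exposes the same port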
Review the Workloads page in the Google Cloud Console.
Deploy the Ingress
Once the application pods are deployed, we can look at tying them together with an Ingress.
First we need to create a global IP for the global load balancer. This is critical for ingress to work across multiple clusters: the default ephemeral IPs are only regional and won’t allow proper federation.
Create a global IP named ingress-ip:
gcloud compute addresses create ingress-ip --global
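You can confirm the reservation and see the address that was allocated:

# shows the reserved address the load balancer will use
gcloud compute addresses describe ingress-ip --global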
Now deploy the Ingress itself:
kubectl apply -f deploy/ingress.yaml
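If you’re curious what that file contains, an Ingress for this setup is roughly shaped like the sketch below. The exact name, paths, and ports are assumptions based on the apps above, so treat deploy/ingress.yaml in the repo as the source of truth; the key piece is the annotation binding the load balancer to the reserved global IP.

apiVersion: extensions/v1beta1            # Ingress API group in use at the time of writing
kind: Ingress
metadata:
  name: global-ingress                    # illustrative name
  annotations:
    # bind the load balancer to the reserved global IP created above
    kubernetes.io/ingress.global-static-ip-name: ingress-ip
spec:
  rules:
  - http:
      paths:
      - path: /context/*                  # assumed path for the app-with-context service
        backend:
          serviceName: app-with-context
          servicePort: 80
      - path: /*
        backend:
          serviceName: simple-echo
          servicePort: 80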
This will take a while, five minutes or so, for the load balancers to be created worldwide and for all the health checks to pass.
You can review the progress on the Discovery & load balancing page in the Cloud Console.
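You can also watch from the command line; once the ADDRESS column shows the reserved global IP and the backends report healthy, you’re in business:

# ADDRESS should match the ingress-ip reserved earlier
kubectl get ingress

# backend health appears in the output once the checks pass
kubectl describe ingress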
Once the load balancers are up, you can click on the name of either one to see the details. Scroll down to find the backends listed.
There will be a backend for each service in your ingress yaml. Go ahead and click on one of the backends.
As traffic flows to your new service, you can see the breakdown by location and other metrics on this page.
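A quick way to send some traffic its way is to curl the reserved IP directly; the root path here is just an example, so use whichever paths your ingress routes:

# look up the global IP reserved earlier and send a request to it
IP=$(gcloud compute addresses describe ingress-ip --global --format='value(address)')
curl http://$IP/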
That’s it, you’re good to go!
On Your Own
That’s the short and sweet of the demo. Be sure to explore the code to understand the mechanics, and try some updates on your own.
Go ahead and deploy your own service, modify the ingress.yaml, and run:
kubectl apply -f deploy/ingress.yaml
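Adding your own service usually just means another entry under the paths list in ingress.yaml. The names below are placeholders for whatever you deploy; remember the backing Service must be type NodePort with an explicitly set nodePort, as discussed above.

spec:
  rules:
  - http:
      paths:
      # existing paths stay as they are; add an entry for your new service
      - path: /my-service/*              # placeholder path
        backend:
          serviceName: my-service        # placeholder Service name
          servicePort: 80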
New services on the load balancer take a few minutes to come up, but changes to existing services are pretty quick.
Try it by updating the image in the app Deployment to something new. For example, I might update simple-echo.yaml from:
spec:
  containers:
  - name: simple-echo
    image: cgrant/simple-echo-server
to
spec:
  containers:
  - name: simple-echo
    image: cgrant/simple-echo-server:version1
then re-apply the file:

kubectl apply -f deploy/simple-echo.yaml
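If you want to watch the update roll out, kubectl can report on it; the Deployment name here is assumed to be simple-echo, matching the yaml file:

kubectl rollout status deployment/simple-echo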
The containers will update promptly and you’re all set!
I really like being able to use native Google Cloud products with Kubernetes. It takes much of the hassle out of the infrastructure while providing robust managed tools for my apps.
Hope you enjoyed it! Please leave any comments or questions.