Get hands-on experience by deploying a typical back-end/front-end sample application
This guide will show you how to set up a basic web application with database connectivity. In this case we will set up a PHP/Redis application that is stateless: delete it and it vanishes completely, including the saved data.
Prerequisites
- Running Kubernetes cluster
- Internet connection
- kubectl configured to connect to the cluster
The Kubernetes cluster in this guide will be based on All in One Kubernetes Cluster with kubeadm.
Step 1 – Prepare Deployment Resources
We will prepare all files needed for later deployment of the different components.
- frontend-deployment.yaml
- frontend-service.yaml
- redis-master-deployment.yaml
- redis-master-service.yaml
- redis-slave-deployment.yaml
- redis-slave-service.yaml
Create a directory where we will store all Kubernetes resources.
user@computer$ mkdir guestbook-example
user@computer$ cd guestbook-example
Download all the example resource files for the guestbook example.
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/frontend-deployment.yaml
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/frontend-service.yaml
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-deployment.yaml
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-master-service.yaml
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-slave-deployment.yaml
user@computer$ wget https://raw.githubusercontent.com/kubernetes/examples/master/guestbook/redis-slave-service.yaml
And that’s it. We are now ready to start deploying the different Kubernetes resources.
Step 2 – Deploy Backend Resources
Let’s set up a data store where our guestbook application can read and write its data.
Create a deployment with one pod for the redis-master database by using the redis-master-deployment.yaml file.
user@computer$ kubectl create -f redis-master-deployment.yaml
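For reference, the downloaded redis-master-deployment.yaml looks roughly like this (an abridged sketch; the upstream repository may have changed the apiVersion or pinned a different image tag since this was written):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: redis            # the upstream file may pin a specific registry/tag
        ports:
        - containerPort: 6379   # default Redis port

The labels in the pod template are important: the service we create in a moment will find this pod by matching them.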
Check if the redis-master pod exists by using the pods (or po for short) resource type.
user@computer$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-57cc594f67-zxk7b   1/1     Running   0          57s
Let’s check that the application in the pod is running without errors by looking at the pod’s logs.
user@computer$ kubectl logs redis-master-57cc594f67-zxk7b
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.19 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 1
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[1] 11 Oct 10:31:26.852 # Server started, Redis version 2.8.19
[1] 11 Oct 10:31:26.852 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
[1] 11 Oct 10:31:26.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[1] 11 Oct 10:31:26.852 * The server is now ready to accept connections on port 6379
Let’s make the pod reachable inside our Kubernetes cluster by defining a service for it. The service knows which pods to direct connections to by matching the selector section in the service resource against the labels section in the pod resource.
user@computer$ kubectl create -f redis-master-service.yaml
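A sketch of redis-master-service.yaml (abridged; check the downloaded file for the authoritative content). Note how the selector repeats the labels from the pod template above — this is exactly the matching just described:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379        # port exposed by the service
    targetPort: 6379  # port the container listens on
  selector:           # must match the pod labels
    app: redis
    role: master
    tier: backend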
Check if the redis service has been created by using the service (or svc for short) resource type.
user@computer$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    1d
redis-master   ClusterIP   10.97.242.250   <none>        6379/TCP   8m
You can check if the service registered a pod by using describe on the service and looking at the Endpoints section.
user@computer$ kubectl describe svc redis-master
…
Endpoints: 10.244.0.5:6379
…
If you compare this IP to the IP of the pod, you will see that they are the same.
user@computer$ kubectl describe pod redis-master-57cc594f67-zxk7b
…
IP: 10.244.0.5
…
Let’s add 2 redis slave replicas to make our data store highly available. The slave pods will be used for read operations.
user@computer$ kubectl create -f redis-slave-deployment.yaml
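For reference, redis-slave-deployment.yaml looks roughly like this (abridged sketch; the upstream image and tag may differ). The slave image locates the master by resolving the redis-master service name through cluster DNS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3   # image used by the upstream example
        env:
        - name: GET_HOSTS_FROM
          value: dns        # resolve redis-master via cluster DNS
        ports:
        - containerPort: 6379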
When we list our running pods, we see 2 new redis-slave pods.
user@computer$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
redis-master-57cc594f67-zxk7b   1/1     Running   0          28m
redis-slave-84845b8fd8-c4jhd    1/1     Running   0          1m
redis-slave-84845b8fd8-dbmhn    1/1     Running   0          1m
If we check the logs of both pods, we can see that each slave connected successfully to the redis-master pod by going through the redis-master service.
user@computer$ kubectl logs redis-slave-84845b8fd8-dbmhn
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 3.0.3 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 6
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

6:S 11 Oct 10:57:44.302 # Server started, Redis version 3.0.3
6:S 11 Oct 10:57:44.303 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
6:S 11 Oct 10:57:44.303 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
6:S 11 Oct 10:57:44.303 * The server is now ready to accept connections on port 6379
6:S 11 Oct 10:57:44.303 * Connecting to MASTER redis-master:6379
6:S 11 Oct 10:57:44.394 * MASTER <-> SLAVE sync started
6:S 11 Oct 10:57:44.394 * Non blocking connect for SYNC fired the event.
6:S 11 Oct 10:57:44.394 * Master replied to PING, replication can continue...
6:S 11 Oct 10:57:44.394 * Partial resynchronization not possible (no cached master)
6:S 11 Oct 10:57:44.395 * Full resync from master: 0917f55d5e1341ae828086cbdd9672f82657f98d:1
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: receiving 18 bytes from master
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Flushing old data
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Loading DB in memory
6:S 11 Oct 10:57:44.472 * MASTER <-> SLAVE sync: Finished with success
Now let’s make the slave pods reachable inside our Kubernetes cluster by attaching them to a service of their own.
user@computer$ kubectl create -f redis-slave-service.yaml
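A sketch of redis-slave-service.yaml (abridged); again, the selector matches the labels of the slave pods:

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:        # role: slave picks up only the slave pods
    app: redis
    role: slave
    tier: backend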
Let’s check again if the service has been created successfully.
user@computer$ kubectl get svc
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    2d
redis-master   ClusterIP   10.97.242.250   <none>        6379/TCP   30m
redis-slave    ClusterIP   10.104.58.25    <none>        6379/TCP   2m
If we now look at which pods have connected to our redis-slave service, we can see the IPs of our two slave replicas.
user@computer$ kubectl describe svc/redis-slave
…
Endpoints: 10.244.0.6:6379,10.244.0.7:6379
…
user@computer$ kubectl describe pod redis-slave-84845b8fd8-dbmhn
…
IP: 10.244.0.6
…
user@computer$ kubectl describe pod redis-slave-84845b8fd8-c4jhd
…
IP: 10.244.0.7
…
Great, we now have a highly available data store with a master and slave redis database. Let’s see how to deploy a front end application that makes use of the back end we just set up.
Step 3 – Deploy Front End Resources
Now we get to the colorful part. We will deploy a PHP guestbook front end that is already configured to connect to our redis-master service for write operations and to our redis-slave service for read operations. It does this via cluster-internal DNS name resolution.
Spawn 3 replicas of the PHP guestbook front end by applying the prepared resource file.
user@computer$ kubectl create -f frontend-deployment.yaml
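For reference, frontend-deployment.yaml looks roughly like this (abridged sketch; the upstream image version may differ). The GET_HOSTS_FROM variable tells the PHP app to look up redis-master and redis-slave via cluster DNS:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4   # image used by the upstream example
        env:
        - name: GET_HOSTS_FROM
          value: dns        # resolve the redis services via cluster DNS
        ports:
        - containerPort: 80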
Now let’s check for the spawned front end pods. Instead of trying to visually identify our pods in a long list, let’s display only the pods matching our labels.
user@computer$ kubectl get pods -l app=guestbook,tier=frontend
NAME                        READY   STATUS    RESTARTS   AGE
frontend-685d7ff496-2267v   1/1     Running   0          3m
frontend-685d7ff496-6lb2w   1/1     Running   0          3m
frontend-685d7ff496-899vl   1/1     Running   0          3m
At the moment, the front end pods are only reachable inside the cluster via their dynamic pod IPs. To make them reachable from outside the cluster, we will again create a service linked to the pods, but this time, because of our setup, we use the NodePort type, which exposes the service on the node’s IP address at a static port.
user@computer$ kubectl create -f frontend-service.yaml
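A sketch of frontend-service.yaml (abridged). The type: NodePort line is what exposes the service on every node’s IP at a cluster-allocated port:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort   # expose on each node's IP at an allocated port
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend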
Let’s check for the service by using the label selector.
user@computer$ kubectl get svc -l app=guestbook,tier=frontend
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
frontend   NodePort   10.108.26.128   <none>        80:32145/TCP   3m
As this example is based on my guide All in One Kubernetes Cluster with kubeadm, we will have to get the IP address of the machine running the cluster and the externally exposed node port of the front end service (32145 in the output above).
Type the IP address and port into your browser and you will reach a simple interactive guestbook. When you submit text, the guestbook saves your entry by contacting redis-master, and when displaying what you saved, it contacts redis-slave inside your Kubernetes cluster.
Step 4 – Testing Data Store Resilience
We set up a highly available data store; now we want to see how it holds up when we take down its pods.
Go to your guestbook application in your browser and type some entries to save to the database.
Now let’s delete the redis master pod. When you reload your browser, you will still see your guestbook entries. The same happens when deleting a slave pod.
user@computer$ kubectl delete pod redis-master-57cc594f67-68bcr
user@computer$ kubectl delete pod redis-slave-84845b8fd8-8bwrl
Scale down the redis slave deployment to 0 replicas so no redis slaves can be reached.
user@computer$ kubectl scale --replicas=0 deploy redis-slave
Your data still exists in the master, but your guestbook app reads data through the redis-slave service, which now has no pods to direct requests to.
Scale the redis slave deployment back up to its original 2 replicas. After reloading your page you will see your data again.
user@computer$ kubectl scale --replicas=2 deploy redis-slave
This time, do the same for the redis master deployment. While it is scaled down, you won’t be able to add entries, as the app directs write operations through the redis-master service.
user@computer$ kubectl scale --replicas=0 deploy redis-master
user@computer$ kubectl scale --replicas=1 deploy redis-master
Let’s delete all redis pods by using the label selector.
user@computer$ kubectl delete pods -l app=redis
When we reload the page, all our data has vanished.
Conclusion
You have now learned the basics of deploying a back-end/front-end application by setting up a basic master/slave database and a PHP front end. All pods can communicate with each other through the services you set up, and the front end is even reachable from outside the cluster, in our case via the NodePort service type.
You also got to test the resilience of the database by scaling and deleting pods.