Notes after Katacoda Training on Kubernetes Container Orchestration
By David WORMS
Dec 14, 2017
- Categories
- Containers Orchestration
- Learning
- Tags
- Helm
- Ingress
- Kubeadm
- CNI
- Micro Services
- Minikube
- Kubernetes
A few weeks ago, I dedicated two days to following the tutorials available on Katacoda, the interactive learning platform for Kubernetes and other container orchestration technologies. I'm sharing my notes, which I happen to use regularly as a cheat sheet.
If you haven't tried Katacoda yet and have an interest in Kubernetes, Docker or any of the courses covered, you will be amazed by how easily, quickly and efficiently they popularize technologies. Complementary to the courses, they provide sandbox accesses, called playgrounds, to CoreOS, DC/OS and Kubernetes. In less than a minute, you'll be logged in and ready to test any of those platforms.
Launch A Single Node Cluster
Learn how to launch a Single Node Minikube cluster including DNS and Kube UI
Installation involves:
- a hypervisor, e.g. VirtualBox
- kubectl
- minikube
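A minimal sketch of fetching the two binaries on Linux, assuming the official download URLs of the time (the hypervisor is installed separately):
# Download the latest stable kubectl release and make it executable
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# Download the latest Minikube release and make it executable
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/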
Minikube runs a single-node Kubernetes cluster inside a VM on your laptop:
minikube version
minikube start
From now on, Kubernetes is available:
kubectl cluster-info
kubectl get nodes
Starting a container is similar to Docker:
kubectl run first-deployment --image=katacoda/docker-http-server --port=80
Kubernetes natively handles TCP/HTTP routing:
kubectl expose deployment first-deployment --port=80 --type=NodePort
As with Docker, to get container information such as the assigned port, use Go templates:
service=first-deployment
export PORT=$(kubectl get svc ${service} -o go-template='{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}')
curl host01:$PORT
The Kubernetes dashboard is available on port 8080.
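With Minikube, the dashboard can also be reached directly from the CLI, either opening it in a browser or just printing its URL:
minikube dashboard
minikube dashboard --url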
Launch a multi-node cluster using Kubeadm
Bootstrap a Kubernetes cluster using Kubeadm
Kubeadm handles TLS encryption configuration, deploys the core Kubernetes components and ensures that additional nodes can easily join the cluster.
Here is a nice presentation of the Kubernetes architecture.
To initialize a cluster:
# From master
kubeadm init
kubeadm token list
# From the worker node (previously called minion), run the join command printed on stdout
kubeadm join --token=102952.1a7dd4cc8d1f4cc5 172.17.0.9:6443
To configure and connect the client:
# Copy and export the configuration
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
# Contact the server
kubectl get nodes
An alternative is to set the Kubernetes master address as an environment variable:
export KUBERNETES_MASTER=http://${k8s_host}:${k8s_port}
The Container Network Interface (CNI) defines how the different nodes and their workloads should communicate.
Network providers are available here.
To install Weave Net:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Show Weave Net pods
kubectl get pod -n kube-system
# Or
kubectl get pod --all-namespaces
To deploy a pod:
kubectl run http --image=katacoda/docker-http-server:latest --replicas=1
# Show the pod
kubectl get pods
docker ps | grep docker-http-server
To install Kubernetes web-based dashboard UI:
kubectl apply -f dashboard.yaml
kubectl get pods -n kube-system
kubectl get svc -n kube-system kubernetes-dashboard
curl {host}:{port}
Deploy Guestbook Web App Example
How to deploy the Guestbook example using Kubernetes
Use Pods, Replication Controllers, Services and NodePorts by installing Redis with one master for storage and a replicated set of Redis slaves.
The launch script installs the following components:
- etcd with Docker image "gcr.io/google_containers/etcd"
- API service with Docker image "gcr.io/google_containers/hyperkube" and command
/hyperkube apiserver
- Kubelet agent with Docker image "gcr.io/google_containers/hyperkube" and command
/hyperkube kubelet
- Kubernetes server with kubectl cluster-info
- Proxy service with Docker image "gcr.io/google_containers/hyperkube" and command
/hyperkube proxy
- DNS discovery
The Kubelet is the primary “node agent” that runs on each node. The Kubernetes binary is downloaded directly from the Internet (curl + chmod u+x).
The Kubernetes network proxy runs on each node and is used to reach services. It does TCP, UDP stream forwarding or round robin TCP, UDP forwarding across a set of backends.
DNS is a built-in service. Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service’s IP to resolve DNS names.
To enable DNS discovery:
kubectl -s http://host01:8080 create -f ~/kube-system.json
kubectl -s http://host01:8080 create -f ~/skydns-rc.json
kubectl -s http://host01:8080 create -f ~/skydns-svc.json
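With those pods running, name resolution can be checked from inside the cluster; a quick sketch where the busybox image and the queried name are only for illustration:
kubectl -s http://host01:8080 run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default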
Once installed, the client environment is available after:
export KUBERNETES_MASTER=http://host01:8080
kubectl cluster-info
A Kubernetes service deployment has at least two definitions:
- replication controller: ensures that a pod or a homogeneous set of pods is always up and available. It defines how many instances should be running, the Docker image to use, and a name to identify the service.
- service: defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service.
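For illustration, a minimal sketch of these two definitions for a Redis master, trimmed to the essential fields (names and labels are mine, not the exact Guestbook ones):
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-master
    spec:
      containers:
      - name: redis-master
        image: redis
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis-master
  ports:
  - port: 6379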
The RC definition connects the Redis slaves to the master using GET_HOSTS_FROM with the value dns, to find service host information from DNS at runtime.
A Service defined as NodePort sets well-known ports shared across the entire cluster. This is like -p 80:80 in Docker.
To find the assigned NodePort using kubectl:
kubectl describe service frontend | grep NodePort
Deploy Containers Using Kubectl
Use Kubectl to launch containers and make them accessible
Use Kubectl to create and launch Deployments, Replication Controllers and expose them via Services without writing yaml definitions.
A deployment controller is a Kubernetes object which provides declarative updates for Pods and ReplicaSets.
The definition describes a desired state in a Deployment object and the controller changes the actual state to the desired state at a controlled rate. Deployments are used to create new ReplicaSets, or to remove existing deployments and adopt all their resources with new deployments.
Kubectl run is similar to docker run but at a cluster level and it creates a deployment.
View the status of the deployments:
kubectl get deployments
Describe the deployment process (optionally with the pod name at the end):
kubectl describe deployment
Expose a port to the host external IP:
kubectl expose deployment http --external-ip="172.17.0.9" --port=8000 --target-port=80
This creates a service exposing port 8000:
kubectl get svc
When using kubectl run with the --hostport option, the Pod is not exposed as a service but via Docker port mapping. With docker ps, we see that it is not the container which exposes the ports but the pod. Other containers in the pod share the same network namespace. This improves network performance and allows multiple containers to communicate over the same network interface.
kubectl run ${name} --image=${image} --replicas=1 --port=80 --hostport=8001
# Pod is not a service
kubectl get svc
# Show the container and its associated pod
docker ps | grep ${name}
To scale the number of Pods running for a particular deployment or replication controller:
kubectl scale --replicas=3 deployment http
Deploy Containers Using YAML
Learn how to use YAML definitions to deploy containers
YAML definitions define the Kubernetes Objects that are scheduled for deployment. The objects can be updated and redeployed to the cluster to change the configuration.
A service definition matches applications using labels:
# Note, only the most relevant lines are displayed
sed -n '2,2p;4,10p' deployment.yaml
# kind: Deployment
# spec:
# replicas: 1
# template:
# metadata:
# labels:
# app: webapp1
sed -n '2,6p;7p;12,13p' service.yaml
# kind: Service
# metadata:
# name: webapp1-svc
# labels:
# app: webapp1
# spec:
# selector:
# app: webapp1
Networking capabilities are controlled via the Service definition with nodePort:
# Note, only the most relevant lines are displayed
sed -n '2,2p;7,11p' service.yaml
# kind: Service
# spec:
# type: NodePort
# ports:
# - port: 80
# nodePort: 30080
Use kubectl apply to reflect changes to a definition file:
# eg, update number of replica
kubectl apply -f deployment.yaml
# eg, update the exposed port with nodePort
kubectl apply -f service.yaml
Create Ingress Routing
Define host and path based Ingress routing
Ingress allows inbound connections to the cluster, allowing external traffic to reach the correct Pod. Its functionalities include externally-reachable URLs, traffic load balancing, SSL termination and name-based virtual hosting…
Ingress stands for incoming connections while egress stands for outgoing connections. Kubernetes, in its latest version, supports policies for both types.
Ingress rules can be based on a request host (domain), or the path of the request, or a combination of both.
To deploy ingress object types:
kubectl create -f ingress-rules.yaml
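The content of ingress-rules.yaml is not reproduced here; host and path based rules could look like the following sketch, where hostnames and service names are only illustrative:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  # Route by hostname
  - host: my.company.com
    http:
      paths:
      # Route by path within that host
      - path: /
        backend:
          serviceName: webapp1-svc
          servicePort: 80
      - path: /v2
        backend:
          serviceName: webapp2-svc
          servicePort: 80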
To view all the Ingress rules:
kubectl get ing
I just learned a new trick with HTTP and curl useful for testing. Instead of creating a new entry inside “/etc/hosts” to fake an HTTP hostname, pass the “Host” header:
# Instead of
echo '127.0.0.1 adaltas.com' >> /etc/hosts
curl adaltas.com/en/home/
# Do
curl -H 'Host: adaltas.com' 127.0.0.1/en/home/
Use Kubernetes To Manage Secrets And Passwords
Keep secrets secure
Kubernetes allows you to create secrets that are mounted to a pod via environment variables or as a volume. This allows secrets, such as SSL certificates or passwords, to be managed only by an infrastructure team in a secure way, instead of having the passwords stored within the application's deployment artifacts.
Secrets are created as Kubernetes objects. Here is what they look like:
apiVersion: v1
kind: Secret
metadata:
name: test-secret
type: Opaque
data:
username: $username
password: $password
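The values under data must be base64 encoded; one way to produce them before substituting the variables (the clear-text values are of course only placeholders):
username=$(echo -n "admin" | base64)
password=$(echo -n "a-strong-password" | base64)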
To create and view secrets:
kubectl create -f secret.yaml
kubectl get secrets
If, when running docker ps, you are wondering what the pause containers are, here is how Eric Paris describes them:
The pause container is a container which holds the network namespace for the pod. It does nothing ‘useful’. (It’s actually just a little bit of assembly that goes to sleep and never wakes up)
This means that your ‘apache’ container can die, and come back to life, and all of the network setup will still be there. Normally if the last process in a network namespace dies the namespace would be destroyed and creating a new apache container would require creating all new network setup. With pause, you’ll always have that one last thing in the namespace.
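On a node where Docker is the container runtime, they are easy to spot next to the application containers:
docker ps | grep pause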
A Pod which has environment variables populated includes something like:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: test-secret
key: username
kubectl exec is designed after docker exec.
To view the populated environmental variables:
kubectl exec -it ${pod_name} env | grep SECRET_
To mount the secret in a file, create the pod with a volume and mount it:
# Note, only the most relevant lines are displayed
sed -n "2p;5,9p;10p;14,17p" pod.yaml
# kind: Pod
# spec:
# volumes:
# - name: secret-volume
# secret:
# secretName: test-secret
# container:
# volumeMounts:
# - name: secret-volume
# mountPath: /etc/secret-volume
Be careful: permissions must be enforced, the default being '444'.
Liveness and Readiness Healthchecks
Ensure containers health using Liveness and Readiness probes
Readiness Probes check if an application is ready to start processing traffic. This probe solves the problem of a container that has started but whose process is still warming up and configuring itself, meaning it is not yet ready to receive traffic.
Liveness Probes ensure that the application is healthy and capable of processing requests.
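My notes do not keep the course's probe definitions; as a sketch, HTTP probes in a container spec could look like this, the paths and delays being assumptions:
containers:
- name: webapp
  image: katacoda/docker-http-server:latest
  # Restart the container if this check fails
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  # Only send traffic once this check succeeds
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 3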
Deploying from source onto Kubernetes
Get from source to running service in Kubernetes
The .spec.revisionHistoryLimit property specifies the number of old ReplicaSets to retain to allow rollback.
The imagePullPolicy property accepts one of Always (the default when the image tag is latest), Never or IfNotPresent.
The dnsPolicy property accepts one of ClusterFirst (default), ClusterFirstWithHostNet or Default.
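Put together, these properties fit into a deployment definition roughly as follows; the values are only examples:
kind: Deployment
spec:
  revisionHistoryLimit: 10
  template:
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: hello-webapp
        image: hello-webapp:v1
        imagePullPolicy: IfNotPresent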
A container registry is a central service that hosts images.
To push a local container into a custom container registry:
# Build an image
cat Dockerfile
docker build -t hello-webapp:v1 .
# Create a tag for the Docker image that contains the Docker repository name
docker tag hello-webapp:v1 $REGISTRY/hello-webapp:v1
# Push the Docker image to the registry
docker push $REGISTRY/hello-webapp:v1
The registry can be referenced as part of the Docker image in the deployment definition:
- image: my_registry_server/my_image:my_tag
Kubernetes kubectl automatically reads “~/.kube/config”.
Forge automates service deployment into Kubernetes and does the following:
- build the Dockerfile
- push the image to a registry
- build the deployment definition
- deploy the container into Kubernetes
Helm Package Manager
Use Helm Package Manager for Kubernetes to deploy Redis
Helm is the package manager for Kubernetes. Packages are called charts and consist of pre-configured Kubernetes resources.
Helm has two parts: a client (helm) and a server (tiller); Tiller runs inside of your Kubernetes cluster, and manages releases (installations) of your charts.
To install Helm:
# Manually
curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.6.1-linux-amd64.tar.gz
tar -xvf helm-v2.6.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
# On Arch
yaourt -S kubernetes-helm
# Init
helm init
helm repo update
To retrieve package information:
# Find charts
helm search {package}
# Get chart information
helm inspect {chart}
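To actually deploy Redis from a chart, something like the following works against the stable repository of that period:
# Install the Redis chart and list the resulting release
helm install stable/redis
helm ls
# The pods created by the release appear as usual
kubectl get pods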
Monocular is a web UI for managing Kubernetes applications packaged as Helm Charts; it looks promising.