If you have ever worked with containers, such as Docker, you know how efficient and fast they are.
Kubernetes is a container orchestrator that automates deploying and shutting down instances, keeping applications highly available and elastic in response to fluctuations in demand.
Everything starts with a Cluster. It is the “pool” that contains N “resources”, called Nodes.
The first node is the “master” and manages the workload on the other nodes, called “workers”.
The Pods (that contain the containers with the application) are distributed among the nodes for better performance.
Kubernetes can scale the number of nodes up and down based on parameters such as CPU or RAM usage, which indicate whether the physical resources are insufficient or sitting idle.
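Node-level scaling is typically done by a cluster autoscaler, while the number of Pods can be scaled automatically with a HorizontalPodAutoscaler. A minimal sketch of the latter (the deployment name nginx-deployment, the replica bounds, and the 50% CPU target are illustrative values, not required ones):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Applied with kubectl apply -f, this keeps between 2 and 10 replicas running, adding Pods when average CPU utilization goes above 50%.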
INSTALLING KUBERNETES
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Download the configuration file from your Kubernetes server and point the KUBECONFIG environment variable at it (store the file in a safe place with appropriate permissions):
export KUBECONFIG=kube1-kubeconfig.yaml
MANAGING KUBERNETES
Commands that can be used to manage the cluster:
kubectl --help
kubectl get nodes
kubectl cluster-info
Example of a simple manual deployment of a container:
kubectl run nginx-api --image=nginx --port=80
kubectl get pods
kubectl describe pods
kubectl delete pods nginx-api
Getting a Shell into the Container
kubectl exec --stdin --tty nginx-api-***********-**** -- bash

Or, using the equivalent short flags:

kubectl exec -it nginx-api-***********-**** -- bash
Automating Deployments
Create nginx-deployment.yaml using the template below (indentation must be respected):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-instance
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx-instance
  template:
    metadata:
      labels:
        app: nginx-instance
    spec:
      containers:
      - name: nginx-instance
        image: nginx
        imagePullPolicy: Always
        ports:
        - containerPort: 80
kubectl apply -f nginx-deployment.yaml
Editing the deployment configuration:
kubectl edit deployment nginx-deployment
kubectl get pods -o wide
Load Balancer
Create a file called loadbalancer.yaml like the example:
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  annotations:
    service.beta.kubernetes.io/linode-loadbalancer-throttle: "5"
  labels:
    app: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx-instance
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: None
kubectl apply -f loadbalancer.yaml
kubectl get services
kubectl describe service nginx-loadbalancer
Port-Forwarding
kubectl port-forward app-********-***** 8080:80
kubectl port-forward pods/app-********-***** 8080:80
kubectl port-forward deployment/app 8080:80
kubectl port-forward replicaset/app-******** 8080:80
kubectl port-forward service/app 8080:80
Now NGINX is accessible externally through the load balancer service!
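To verify, you can look up the address the load balancer was assigned and request the NGINX welcome page from it (the jsonpath below reads the service status; depending on the provider, the address may appear under .hostname instead of .ip):

kubectl get service nginx-loadbalancer -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl http://EXTERNAL-IP

Replace EXTERNAL-IP with the address printed by the first command.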
Other useful commands:
kubectl get pods -w
kubectl get all
kubectl scale deploy/nginx --replicas=3
kubectl rollout status deploy/nginx
kubectl rollout undo deploy/nginx
kubectl create deployment nginx --image=nginx:1.17-alpine -o yaml --dry-run=client
kubectl explain services
SEE ALSO
Minikube on Ubuntu 22.04 [Link].
MicroK8s on Ubuntu 22.04 [Link].
K3s on Ubuntu 22.04 [Link].
Kubernetes Persistent Volumes [Link].
Kubernetes Dashboard [Link].