Deploying And Scaling A Simple Application Using Minikube

Welcome To Kubernetes

Introduction

Although deploying and scaling a basic Kubernetes application can appear difficult, Minikube and similar tools make the process easier by letting you run a Kubernetes cluster locally. Managed Kubernetes services are also available in the cloud from providers such as Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). Minikube is a lightweight Kubernetes implementation that creates a virtual machine (or container, depending on the driver) on your local machine and deploys a simple cluster containing a single node. It's a great resource for local development and for learning Kubernetes. In this tutorial, we'll cover the following steps:

  1. Setting up Minikube.

  2. Deploying an Nginx application.

  3. Scaling the Nginx application.

What Is Kubernetes?

Kubernetes, or K8s, is a system for automating the deployment, scaling, and management of containerized applications. Benefits:

  • Deploying containers is made simple with K8s.

  • K8s facilitates easy scaling up and down (run more replicas of an application to scale up, fewer replicas to scale down).

  • K8s manages networking, security, configuration management, and more.

  • You can deploy many replicas of your application across multiple servers.

Learn more on the official K8s website, kubernetes.io.

What Is a Kubernetes Cluster? A group of worker machines running containers is called a Kubernetes cluster.

Control Plane? The control plane is the group of services that manages the cluster. Users communicate with the cluster through the control plane, which also tracks the cluster's condition.
Kubernetes Control Plane? The Kubernetes control plane is also known as the master node. It is made up of multiple individual components; these can run anywhere, and for high availability you can run multiple instances of each component.

Nodes? Within the cluster, nodes are the machines that run containers. A node runs and maintains containers, monitors their condition, and reports that information to the control plane.

Kubernetes Worker Nodes? To run containers, Kubernetes worker nodes need a container runtime. They also run a kubelet component to manage Kubernetes activities on the node.

Kubernetes Objects? The persistent data entities that Kubernetes stores are called objects. They represent the state of your cluster. By adding, removing, and altering objects, you can run containers, deploy and configure apps, and set up cluster behavior. All of this is done through the Kubernetes API.

Pods? Kubernetes objects come in a variety of forms, and perhaps the most significant is the Pod. Pods are used to run and manage containers.

What Is Kubectl? Kubectl is the command-line tool for issuing commands against Kubernetes clusters. You can use it to view logs, inspect and manage cluster resources, and deploy applications.

Prerequisites

Before we begin, ensure you have the following installed on your system:

  • Docker: Minikube uses Docker as its driver to create and manage the local cluster node.

  • Minikube: The tool for running Kubernetes locally.

  • Kubectl: The command-line tool for interacting with the Kubernetes cluster.
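
One way to confirm these prerequisites are installed is to check each tool's version from the terminal:

    # Verify that each prerequisite is available on the PATH
    docker --version
    minikube version
    kubectl version --client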

Step-by-Step Guide:

Step 1: Setting Up Minikube - Minikube provides a local testing and development environment - First, start Minikube using Docker as the driver to prepare the Kubernetes environment, as shown below
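
A minimal sketch of this step, assuming Docker is already installed and running:

    # Start a single-node local cluster using the Docker driver
    minikube start --driver=docker

    # Confirm the cluster is up
    minikube status
    kubectl get nodes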

Step 2: Deploying an Nginx Application - Create a deployment using the command kubectl create deployment <Name your deployment> --image=<Name of your Image>
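
For example, using the deployment name ladyintech and the public nginx image, as in the rest of this tutorial:

    # Create a deployment named "ladyintech" running the nginx image from Docker Hub
    kubectl create deployment ladyintech --image=nginx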

Step 3: The following commands give a deeper understanding of the state and events occurring in your Kubernetes environment, helping you to manage and debug your applications more effectively. - Verify the deployment using the command kubectl get deployments
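
For example:

    # List deployments and their readiness in the current namespace
    kubectl get deployments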

Step 3a: The kubectl get events command retrieves a list of all the events that have occurred in the Kubernetes cluster. These events provide a chronological record of actions taken by the Kubernetes system. - I have successfully created a pod - From the kubectl get events output, it appears that my pod ladyintech-54cb566469-dqf52 faced multiple issues while trying to pull the nginx image from Docker Hub but eventually succeeded.
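
For example:

    # Show a chronological record of recent cluster events (scheduling, image pulls, etc.)
    kubectl get events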

Step 3b: The kubectl describe deployment command provides detailed information about a specific deployment in a Kubernetes cluster as shown below
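
For example:

    # Show detailed information about the ladyintech deployment, including its events
    kubectl describe deployment ladyintech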

Step 3c: The kubectl get deployment ladyintech -o yaml command retrieves detailed information about the ladyintech deployment in YAML format. This format provides a comprehensive view of the deployment's configuration and current state in a human-readable and structured manner.
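
For example:

    # Dump the full configuration and current state of the deployment as YAML
    kubectl get deployment ladyintech -o yaml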

Step 4: Scaling the Nginx Application - Scaling means increasing or decreasing the number of pod replicas for the deployment. Open a YAML file in a text editor - YAML: YAML ("YAML Ain't Markup Language") is a human-readable data serialization format frequently used for configuration files and for data interchange between programming languages with differing data structures.

Step 4a: An Empty File to Edit - NOTE: Press the "i" key on your keyboard to enter insert mode so that you can work and make changes in your text editor

Step 4b: Template From the Kubernetes.io Site - Follow the link to the Kubernetes website to copy the template: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ - Click on Documentation - In the left pane, search for deployment; you will land on the "Creating a Deployment" page - Copy the controllers/nginx-deployment.yaml file

Step 4c: Edit the deployment.yml File - Edit the file to match your specification; in this instance we scaled our replicas up to 3 - Save & Exit: as shown in the screenshot below, press the "Esc" key and type ":wq" to save and exit
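
A sketch of what the edited deployment.yml might look like, written here as a shell here-document in case you prefer to create the file without an editor. It is based on the upstream controllers/nginx-deployment.yaml template, with the replica count set to 3 and the name and labels adjusted (an assumption) to match the ladyintech deployment created in Step 2, since kubectl create deployment labels pods with app: <deployment-name> and a Deployment's selector cannot be changed by kubectl replace:

    cat <<'EOF' > deployment.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ladyintech            # must match the existing deployment for kubectl replace
      labels:
        app: ladyintech
    spec:
      replicas: 3                 # scaled up from 1
      selector:
        matchLabels:
          app: ladyintech         # selector is immutable, so keep the original labels
      template:
        metadata:
          labels:
            app: ladyintech
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80
    EOF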

Step 5: Using cat deployment.yml allows you to quickly view the file's contents in the terminal without opening it in an editor. This is useful for verifying the configuration or for sharing the contents in a readable format.

Step 6: The command kubectl replace -f deployment.yml is used to update an existing Kubernetes resource with the configuration specified in the deployment.yml file. In this case, deployment.yml contains the edited deployment with the replica count set to 3.
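
For example:

    # Print the edited file to verify the configuration
    cat deployment.yml

    # Replace the live deployment with the configuration from the file
    kubectl replace -f deployment.yml

    # Confirm the deployment now reports 3 replicas
    kubectl get deployments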

Step 7: The command kubectl expose deployment ladyintech is used to create a service that exposes the pods managed by the ladyintech deployment. This command makes the deployment accessible from within the cluster (or from outside it, depending on the service type) using a network service. The service forwards traffic to port 80 on the pods managed by the deployment and provides a stable endpoint for accessing them.
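
A minimal sketch; --port=80 matches the pod port described above, and the NodePort variant (an extra not shown in the original steps) is one way to reach the service from outside a Minikube cluster:

    # Create a service that forwards traffic to port 80 on the pods
    kubectl expose deployment ladyintech --port=80

    # Alternatively, expose it as a NodePort and let Minikube open the URL for you
    # kubectl expose deployment ladyintech --port=80 --type=NodePort
    # minikube service ladyintech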

Step 8: The purpose of the kubectl get service command is to provide information about the services currently running in the cluster. This includes details such as the service name, type, cluster IP, external IP, and the ports used to route traffic to the appropriate pods.
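
For example:

    # List services, including name, type, cluster IP, external IP, and ports
    kubectl get service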

Step 8a: The command kubectl get ep ladyintech is used to display the endpoints associated with the service named ladyintech. Endpoints in Kubernetes are the IP addresses and ports of the pods that are targeted by a service.
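
For example:

    # Show the pod IP addresses and ports behind the ladyintech service
    kubectl get ep ladyintech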

Step 9: Scale Up Replicas - The command kubectl scale deployment ladyintech --replicas=6 is used to scale a Kubernetes deployment to a specified number of replicas. In this case, it scales the ladyintech deployment to run 6 replicas of the pod. - This command adjusts the number of pod instances running in the ladyintech deployment. Scaling up increases the number of replicas, providing higher availability and potentially better performance. Scaling down decreases the number of replicas, which can save resources.
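
For example:

    # Scale the ladyintech deployment from 3 to 6 replicas
    kubectl scale deployment ladyintech --replicas=6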

Step 10: View What Is in Your Deployment & Pods

Step 11: After scaling, run kubectl get ep ladyintech again to display the endpoints associated with the ladyintech service and confirm that it now targets the additional pods. Endpoints in Kubernetes are the IP addresses and ports of the pods that are targeted by a service.

- List All Pods: The command kubectl get pod -o wide is used to list all pods in the Kubernetes cluster with additional information. The -o wide flag extends the output to include more details about each pod, such as the node it runs on and its IP address.

Step 11a: kubectl get pod <Name of pod> - This command is used to view a specific pod
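
For example:

    # View the deployment and all of its pods, with node and IP details
    kubectl get deployments
    kubectl get pod -o wide

    # View a single pod (replace <pod-name> with a name from the listing above)
    kubectl get pod <pod-name>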

Step 12: The command kubectl delete deployment is used to delete a specific deployment in a Kubernetes cluster and its associated resources. Deleting a deployment removes the deployment object itself and all the pods managed by that deployment.
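
For example (the service cleanup and minikube stop lines are optional extras, not part of the original steps):

    # Delete the ladyintech deployment and the pods it manages
    kubectl delete deployment ladyintech

    # Optionally remove the service and stop the local cluster as well
    # kubectl delete service ladyintech
    # minikube stop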

Step 12a: Successfully Deleted Deployment

Conclusion

Congratulations! You've successfully deployed and scaled an Nginx application using Minikube. This setup provides a great foundation for experimenting with Kubernetes features on your local machine.

By following these steps, you can easily manage deployments and scale applications, gaining hands-on experience with Kubernetes concepts and operations.