Commit a199b521 (parent 7c5bbcdc), authored by marcoemi.poleggi: Update README.md
# lab-k8s
Practice Kubernetes in a single-host environment via KinD, which uses Docker as a container runtime.
## Objectives
This exercise will guide you through the process of provisioning and managing a Kubernetes cluster using KinD (Kubernetes in Docker) on an OpenStack / Switch Engines (SE) instance. From an IaaS perspective, your _infrastructure_ is the host where KinD is installed.
Tasks:
1. [Install KinD](https://kind.sigs.k8s.io/docs/user/quick-start#installation) on your instance.
2. Provision a [KinD cluster](https://kind.sigs.k8s.io/docs/user/quick-start#creating-a-cluster) with the base image.
3. Interact with the cluster to understand its components.
4. Modify the cluster [configuration](https://kind.sigs.k8s.io/docs/user/configuration/) to add worker nodes.
5. Reprovision the cluster and verify the new setup.
6. Deploy a microservice with a load balancer and test it.
7. Tear down the cluster and snapshot the instance.
## Prerequisites
- Ensure you have access to a beefy Switch Engines Linux instance. A `c1.large` instance (at least 4 CPUs, 4 GB RAM) should be OK.
- KinD uses quite a few resources; after all, we are simulating a full-fledged cluster. For the purposes of this exercise, choose Ubuntu 22.04 as the image.
## Part 1: Installing KinD on your VM
Log in to your VM via SSH.
### 1. Installing Docker
If Docker is not already installed on your instance, install it using the following commands:
1. Set up Docker's apt repository:
```bash
# Add Docker's official GPG key:
sudo apt-get update
...
docker run hello-world
```
### 2. Installing KinD
Assuming an AMD64 / x86_64 system, run:
```bash
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64 [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
...
```

Good, you are ready to create a cluster.
## Part 2: Provision a KinD cluster with the base image
The command `kind create cluster` provisions a [single-node cluster](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster) by default. To specify more nodes and the roles assigned to them:

1. Write a `kind-config.yaml` configuration file following the [advanced method](https://kind.sigs.k8s.io/docs/user/quick-start/#creating-a-cluster), specifying a `control-plane` node and a couple of `worker` nodes.
2. Run `kind create cluster --config kind-config.yaml`.
To confirm that the cluster was correctly provisioned, run:
```bash
kind get clusters
kind get nodes
```
## Part 3: Interacting with the Cluster
### 1. Installing Kubectl
To interact with your cluster, let's first install the official K8s CLI tool, `kubectl`:
```bash
# Download the binary
...
kubectl version --client
```
### 2. Check the Nodes
Run:
```bash
kubectl get nodes -o wide
```
- How many nodes are deployed?
- Are they all working? Try to ping them.
- What's the cluster's overlay IP network?
- Compare with the output of the command `ip addr`: what kind of host-level network is the overlay?
- Are there any pods running?
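For the last question, remember that even an "empty" cluster runs system pods; one way to see them (user namespaces contain nothing at this point):

```shell
# System pods (CNI, kube-proxy, etc.) live in the kube-system namespace
kubectl get pods --all-namespaces
```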
### 3. Check the Cluster
A K8s cluster is much more than its nodes. Check all the moving parts of your cluster. It should be empty for now, but use this command later to verify if your deployment is successful:
...

Good, now we can create the deployment and service file, which in this case, we'll h...
### 2. Deployment and Service File
Finally! Here is the little app we are going to deploy. Create a YAML deployment file `lb-deployment.yaml` in your VM with the following content:
```yaml
# lb-deployment.yaml
...
---
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  selector:
    ...
```

The service is of type `LoadBalancer`, and looks for pods with the `app: http-ec`...
To deploy:
```bash
kubectl apply -f lb-deployment.yaml
```
### 3. I deployed, now what?
You deployed, now what? Well, now you are going to write a bash program that constantly `curl`s the load balancer.
First, check the **External IP** of the load balancer:
```bash
kubectl get service http-echo-service
```
Then, write a shell script that sends some (at least 10) HTTP requests in a loop via `curl`.
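Such a script might look like the following sketch; the IP below is a placeholder — substitute the `EXTERNAL-IP` reported by `kubectl get service http-echo-service`:

```shell
#!/usr/bin/env bash
# Placeholder: replace 172.18.255.200 with your service's EXTERNAL-IP
LB_IP="${LB_IP:-172.18.255.200}"

for i in $(seq 1 10); do
  # -s: no progress output; --max-time 2: give up if the LB does not answer
  curl -s --max-time 2 "http://${LB_IP}" || echo "request $i: no answer"
done
```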
Remember to change the permissions of your script before running it (`chmod +x ...`).
Run your script: it should show HTTP responses from two different IP addresses. It might take some time to show output from both instances, as MetalLB is not a round-robin-style load balancer.
Now, compare the source IPs of the responses with the load balancer's public IP. Why do the responses come from a different network than the load balancer's?
Then, run the following command:
```bash
kubectl get pods -o wide
```
## Part 6: Destroying the Cluster
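For reference, tearing down is a single command (this assumes the default cluster name `kind`); the instance snapshot itself is then taken from the OpenStack / SE dashboard:

```shell
# Deletes the KinD cluster named "kind"; add --name <NAME> for another cluster
kind delete cluster
```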