Kubernetes for Beginners

Introduction

Hello, and welcome to Kubernetes for Beginners! The objective of this course is to demystify what containers and Kubernetes are, describe how they can be useful to you, and walk you through the steps of working with containers, Docker, and Kubernetes, so that by the end of this course Kubernetes will be another tool available in your toolbox.

Who Am I?

My name is Rana Vivek Singh. I'm presently (as of writing) a co-founder of a company called BigBuddy and a Developer Advocate at Civo.

Prior to that, I was a cloud advocate and developer team lead at Getboarded.

I participated in GSoC (Google Summer of Code) 2021. I also worked for a company called Video.wiki, which is backed by Wikipedia and located in Portugal, and I was a full-stack developer at Talspo and SubhMuhurat Solutions.

I'm also stoked to be the President of an amazing initiative by BMSCE, the Centre for Innovation, Incubation and Entrepreneurship. We have an awesome team of developers here; if you are interested in topics like Web3 in particular, you can refer to Aayushi's course on it.

My biggest passion in life is sharing whatever I know in tech with others. I hope that going through this course improves your life in some meaningful way, and that you in turn can improve someone else's life.

Please catch up with me on social media; I would love to chat.

Who Are You?

This course is aimed at a developer demographic. While all the examples will be dealing with JavaScript applications, you don't necessarily need to be a JavaScript developer to grasp this course; the code will be incidental to the concepts being taught.

In this lesson, we are going to cover the various topics and make sure you have a good understanding of each one, so that when you go back to studying on your own, you won't come across anything that leaves you scratching your head. If you are a Windows user, please be on Windows 10. You'll need to use either WSL 2 or VirtualBox. In the coming section, I'm going to go through the steps needed to get your environment set up properly.

Do note that containers can take a lot of CPU and memory. If you have a modern-ish processor and 8GB of RAM, you will be fine. This could probably be done with some slowdown on 4GB, but anything lower would be pretty tough.

This can also take a lot of bandwidth because we'll be downloading a lot of things. Be aware of that.

Okay, let's start now.

Before moving ahead, I want you to ask yourself one question: Whyyyyyyy?

Why Kubernetes? Why Docker? ..... Okay fine, an easier one: why are you here?

Of course, because you paid 300 bucks!!

Well, the answer to "Why Kubernetes?" or "Why Docker?" or "Why containers?" or "Why cloud?" is...

Picture a young boy. He just finished a few months' worth of work. He's proud of what he accomplished, yet fearful whether it will work. He has not yet tried it out on a "real" server. This will be the first time he delivers the fruits of his work. He takes a floppy disk out from a drawer, inserts it into his computer, and copies the files he compiled previously. He feels fortunate that perforated cards are a thing of the past.

He gets up from his desk, exits the office, and walks towards his car. It will take him over two hours to get to the building with the servers. He's not happy with the prospect of having to drive for two hours, but there is no better alternative. He could have sent the floppy with a messenger, but that would do no good since he wants to install the software himself. He needs to be there. There is no remote option.

A while later, he enters the room with the servers, inserts the floppy disk, and copies and installs the software. Fifteen minutes later, his face shows signs of stress. Something is not working as expected. There is an unforeseen problem. He's collecting outputs and writing notes. He's doing his best to stay calm and gather as much info as he can. He's dreading the long ride back to his computer and the days, maybe even weeks, until he figures out what caused the problem and fixes it. He'll come back and install the fix. Perhaps it will work the second time. More likely it won't.

So, this was a glimpse from the past. We can imagine the uncertainty and the effort one needed to put in to get a simple deployment task done.

Okay, long story short: we want jobs, and that's why we need to learn K8s. And by the way, Cloud Architect is among the highest-paid roles in the software industry.

By the way, let me tell you, Kubernetes is tough, damn tough, and that's why we can't dive directly into it. So let's take a bumpy road first.

The intro to K8s, in the shortest terms, is:

Kubernetes is the most widely used container scheduler that has a massive community behind it.

What Are Containers?

Containers are probably simpler than you think they are. Before I took a deep dive into what they are, I was very intimidated by the concept of containers. I thought they were for people super-versed in Linux and sysadmin-type activities. In reality, the core of what containers are is just a few features of the Linux kernel duct-taped together. Honestly, there's no single concept of a "container": it's just using a few features of Linux together to achieve isolation. That's it.

So how comfortable are you with the command line? This course doesn't assume wizardry with bash or zsh, but this probably shouldn't be your first adventure with it. If it is, check out Vaibhav's course on it. That explanation will give you more than you'll need to keep up with this course.

Why Containers

Let's start with the why first: why do we need containers?

Bare Metal

Historically, if you wanted to run a web server, you either set up your own or you rented a literal server somewhere. We often call this "bare metal" because, well, your code is literally executing on the processor with no abstraction. This is great if you're extremely performance sensitive and you have ample and competent staffing to take care of these servers.

The problem with running your servers on bare metal is you become extremely inflexible. Need to spin up another server? Call up Dell or IBM and ask them to ship you another one, then get your tech to go install the physical server, set it up, and bring it into the server farm. That only takes a month or two, right? Pretty much instant. 😄

Okay, so now at least you have a pool of servers responding to web traffic. Now you just have to worry about keeping the operating system up to date.

  • Oh, and all the drivers connecting to the hardware. And all the software running on the server.
  • And replacing the components of your server as new ones come out.
  • Or maybe the whole server.
  • And fixing failed components.
  • And network issues.
  • And running cables.
  • And your power bill.
  • And who has physical access to your server room.
  • And the actual temperature of the data center.
  • And paying a ridiculous Internet bill.

You get the point. Managing your own servers is hard and requires a whole team to do it.

Virtual Machines

Virtual machines are the next step. This is adding a layer of abstraction between you and the metal. Now instead of having one instance of Linux running on your computer, you'll have multiple guest instances of Linux running inside of a host instance of Linux (it doesn't have to be Linux, but I'm using it to be illustrative). Why is this helpful? For one, I can have one beefy server and have it spin up and down VMs at will. So now if I'm adding a new service, I can just spin up a new VM on one of my servers (provided I have space to do so). This allows a lot more flexibility.

Another thing is I can totally separate two VMs running on the same machine from each other. This affords a few nice things.

Imagine both Coca-Cola and Pepsi lease a server from Microsoft Azure to power their soda-making machines, and hence have their recipes on those servers. If Microsoft put both of them on the same physical server with no separation, one soda-maker could just SSH into the server, browse the competitor's files, and find the secret recipe. So this is a massive security problem. Imagine one of the soda-makers discovers that they're on the same server as their competitor. They could drop a fork bomb and devour all the resources their competitor's website was using. Much less nefariously, any person on a shared-tenant server could unintentionally crash the server and thus ruin everyone's day.

So enter VMs. These are individual operating systems that, as far as they know, are running on bare metal themselves. The host operating system offers the VM a certain amount of resources, and if that VM runs out, it runs out, and it doesn't affect the other guest operating systems running on the server. If they crash their server, they crash their guest OS, and yours hums along unaffected. And since they're in a guest OS, they can't peek into your files because their VM has no concept of any sibling VMs on the machine, so it's much more secure.

All of the above features come at the cost of a bit of performance. Running an operating system within an operating system isn't free. But in general we have enough computing power and memory that this isn't the primary concern. And of course, with abstraction comes ease at the cost of additional complexity. In this case, the advantages very much outweigh the cost most of the time.

Public Cloud

So, as alluded to above, you can nab a VM from a public cloud provider like Microsoft Azure or Amazon Web Services. It will come with a pre-allocated amount of memory and computing power (often called virtual cores or vCores because they're cores dedicated to your virtual machine). Now you no longer have to manage the expensive and difficult business of maintaining a data center, but you do still have to manage all of its software yourself: Microsoft won't update Ubuntu for you, but they will make sure the hardware is up to date.

But now you have the great ability to spin up and spin down virtual machines in the cloud, giving you access to resources with the only upper bound being how much you're willing to pay. And we've been doing this for a while. But the hard part is that they're still just giving you machines; you have to manage all the software, networking, provisioning, updating, etc. for all these servers. And lots of companies still do! Tools like Terraform, Chef, Puppet, Salt, etc. help a lot here because they make spinning up new VMs easy by handling the software needed to get them going.
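To make this a bit more concrete, here's a rough sketch of what "spinning up and spinning down a VM in the cloud" looks like with the Azure CLI. This assumes you have an Azure account and the az CLI installed; the resource group name, VM name, and image alias below are made up for illustration and may differ on your subscription.

# Create a resource group to hold the VM (names here are just examples)
az group create --name demo-rg --location eastus

# Spin up a small Ubuntu VM -- the image alias may vary by subscription
az vm create --resource-group demo-rg --name demo-vm --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys

# And tear it down again when you no longer want to pay for it
az vm delete --resource-group demo-rg --name demo-vm --yes

Notice that the cloud provider hands you a machine almost instantly, but everything that runs on it afterwards is still your problem.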

We're still paying the cost of running a whole operating system in the cloud inside of a host operating system. It'd be nice if we could just run the code inside the host OS without the additional expenditure of guest OSs.

Containers

And here we are: containers. As you may have divined, containers give us many of the security and resource-management features of VMs, but without the cost of having to run a whole other operating system. Instead, they use chroot, namespaces, and cgroups to separate a group of processes from each other. If this sounds a little flimsy to you and you're still worried about security and resource management, you're not alone. But I assure you a lot of very smart people have worked out the kinks, and containers are the future of deploying code.
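If you want a tiny taste of those primitives on a Linux box (this assumes the unshare tool from util-linux is available, which it is on most distros), you can drop a shell into its own PID namespace and see the isolation for yourself:

# Start a shell in a new PID namespace, remounting /proc so `ps` agrees with it
sudo unshare --fork --pid --mount-proc bash

# Inside that shell, only bash and ps are visible -- the host's other processes are hidden
ps aux

# Every process also belongs to cgroups, which the kernel uses to cap its CPU and memory
cat /proc/self/cgroup

Container runtimes like Docker wrap exactly these kinds of kernel features in a much friendlier interface.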

So now that we've been through why we need containers, let's go through the three things that make containers a reality.

But before that, let's do some hands-on work. Let's set up your environment.

Bear with me on it 🙂

1) Install Ubuntu 18.04 LTS from the Microsoft Store.

2) After downloading, if you open Ubuntu 18.04, you may notice that it closes automatically in some cases (depending on your WSL settings). This indicates you don't have WSL on your PC, so you have to install WSL.

Open PowerShell in administrator mode and run the command below:

wsl --install

3) Restart your PC.

4) Open Ubuntu 18.04 again; this time it should open. If it still won't, you'll need to scratch your own head, not mine.

Okay, so you now have a kind of Ubuntu server. But no cheers yet 😁

5) Update your server (a quick sanity check follows these steps):

sudo apt update && sudo apt upgrade
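Before moving on, it's worth a quick sanity check that WSL and Ubuntu are actually in place. These are standard commands, though the exact output will differ from machine to machine:

# In PowerShell: list installed distros and confirm they are running under WSL
wsl -l -v

# Inside Ubuntu: confirm which release you are actually on
cat /etc/os-release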

Let’s install kubectl.

Feel free to skip the installation steps if you already have kubectl. Just make sure that it is version 1.8 or above.

Linux

If, you’re a Linux user, the commands that will install kubectl are as follows

# Download the latest stable kubectl binary
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
# Make it executable
chmod +x ./kubectl
# Move it onto your PATH
sudo mv ./kubectl /usr/local/bin/kubectl

To verify your installation

Let’s check kubectl version and, at the same time, validate that it is working correctly. No matter which OS you’re using, the command is as follows.

kubectl version

The output is as follows.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

That is a very ugly and unreadable output. Fortunately, kubectl supports a few different output formats. For example, we can tell it to output the result in YAML format:

kubectl version --output=yaml

The output is as follows.

clientVersion:
  buildDate: "2019-04-08T17:11:31Z"
  compiler: gc
  gitCommit: b7394102d6ef778017f2ca4046abbaa23b88c290
  gitTreeState: clean
  gitVersion: v1.9.0
  goVersion: go1.9.2
  major: "1"
  minor: "9"
  platform: darwin/amd64
The connection to the server localhost:8080 was refused - did you specify the right host or port?

That was a much better (more readable) output.

We can see that the client version is 1.9.0. At the bottom is the error message stating that kubectl could not connect to the server. That is expected since we did not yet create a cluster. That's our next step.

Installing Minikube

Minikube supports several virtualization technologies. We'll use Docker throughout the course since it is supported on all major operating systems.

Finally, we can install Minikube.

Linux

If you prefer Linux, the command is as follows.

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Validation

We’ll test whether Minikube works or not by checking its version.

minikube version

The output is as follows.

minikube version: v1.0.0

Now we’re ready to give the cluster a spin.

Let's install Docker Desktop so Minikube has a virtualization backend to run on (this applies to Windows).

You don't need Docker Desktop if you are on macOS or any OS apart from Windows.

After installing, enable all the ticks in Docker Desktop's settings and restart your PC.

Woooh! Now your PC is ready for virtualization and local deployment.

To run your local environment, open Ubuntu and Docker Desktop simultaneously.
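Once both are open, here's a quick check that Ubuntu can actually talk to Docker Desktop. These are standard Docker commands; if they fail, revisit the Docker Desktop settings above:

# Confirm the Docker client inside Ubuntu can reach the Docker Desktop engine
docker version

# Run a throwaway container as a smoke test
docker run hello-world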

The folks behind Minikube made creating a cluster as easy as it can get. All we need to do is to execute a single command. Minikube will start a virtual machine locally and deploy the necessary Kubernetes components into it. The VM will get configured with Docker and Kubernetes via a single binary called localkube.

minikube start --vm-driver=docker
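When you do run the start command and it finishes without errors, a quick way to confirm the cluster is actually up is shown below. These are standard Minikube/kubectl commands; the node name and versions will differ on your machine.

# Check that Minikube's components are running
minikube status

# Ask the cluster for its nodes -- a single node should report Ready
kubectl get nodes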

In the next lesson, we will finally create a local Kubernetes cluster.

If you liked it, please give a thumbs up. If you didn't, let me know where I went wrong.