Hi all, I started playing with Kubernetes some time back, and I wanted to write a series of articles about it, maybe starting with the basics and syntax and finishing with the internals (if I get time to learn them myself).

First of all, we're gonna use minikube for this series; when we get to parts where minikube isn't enough, we'll plan something else.

You can get minikube at https://github.com/kubernetes/minikube, but as you might expect it's also in brew, apt, etc.

minikube will create a VM (VirtualBox in my case), install Kubernetes in it, create a cluster, set up etcd, and so on and so forth.
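To give a rough idea, getting a local cluster up looks something like this (assuming brew as the package manager; the install step and the default VM driver may differ on your machine):

# install minikube (package name/manager may vary)
brew install minikube
# create the VM, bootstrap Kubernetes inside it, and point kubectl at the new cluster
minikube start
# check that everything came up
minikube status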

Overview:

So this is roughly how it looks:

[image: overview diagram of the cluster components]

Kubectl: a command-line tool that connects to the API to perform actions (such as creating deployments, services, etc.).
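A few everyday kubectl commands, just to give a feel for it (replace <pod-name> with an actual pod name):

kubectl get nodes                  # list the nodes in the cluster
kubectl get pods                   # list pods in the current namespace
kubectl describe pod <pod-name>    # detailed information about one pod
kubectl logs <pod-name>            # container logs from a pod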

Master: the scheduler; it not only holds the API but also takes care of writing changes to etcd, managing replication, and listening for dead containers that need re-spawning.
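You can check where that API is listening with a single command:

kubectl cluster-info    # prints the address of the master / API server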

Etcd: a key/value store used to hold configuration items and labels from Kubernetes; it is distributed and fault tolerant. Foreign applications can also take advantage of this component and use it too.
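For example, talking to etcd directly with etcdctl (v3 API) looks roughly like this; the /myapp/config key is just a made-up example:

ETCDCTL_API=3 etcdctl put /myapp/config '{"debug": true}'   # store a value
ETCDCTL_API=3 etcdctl get /myapp/config                     # read it back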

Node: a “physical” box or a VM where the pods actually run.

Pod: a higher-level “container”; it can contain one or many containers. In simple terms, a Pod runs one or more Docker containers (for example). Pods run in a shared context, so certain things (such as the network namespace and volumes) are shared amongst the containers running in a pod.
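To illustrate that shared context, here is a minimal (made-up) pod where two containers share a volume; they also share the same network namespace, so they could reach each other on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo
spec:
  volumes:
  - name: shared-data        # an ephemeral volume both containers mount
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data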

In Action:

Before finishing, I wanted to show a little of how this looks: we will create a deployment with 3 replicas of nginx.

(replicas is the number of pod copies that have to be running at any given time; it could be any number)

So I’m gonna create a deployment:

apiVersion: apps/v1beta1  
kind: Deployment  
metadata:  
  name: nginx-deployment  
spec:  
  replicas: 3  
  template:  
    metadata:  
      labels:  
        app: nginx  
        env: prod  
        role: web  
    spec:  
      containers:  
      - name: nginx  
        image: nginx:1.7.9  
        ports:  
        - containerPort: 80  
        resources:  
          requests:  
            cpu: 250m  
          limits:  
            cpu: 500m
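Assuming that manifest is saved as nginx-deployment.yaml (the filename is just my choice), creating it and checking on the result looks something like:

kubectl create -f nginx-deployment.yaml   # submit the deployment to the API
kubectl get deployments                   # desired vs current replica counts
kubectl get pods -l app=nginx             # the three nginx pods, selected by label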

[image: kubectl output after creating the deployment]

And there it is, as simple as that.

These containers are not accessible from outside yet; we will need to run some sort of proxy or “service”, which Kubernetes thankfully provides.
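As a tiny preview of next time, exposing the deployment could look something like this (NodePort is just one of the available service types):

kubectl expose deployment nginx-deployment --port=80 --type=NodePort
minikube service nginx-deployment --url    # prints a URL reachable from the host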

So next time we shall talk about:

Services, Addresses and Scaling.

Thank you!