Kubernetes
What:
Problem it solves
Features
Basic concepts
Pod
IP
External service (NodePort)
Internal service (ClusterIP)
Ingress
ConfigMap
Secrets
Volumes
Deployments
StatefulSet
What do we have up till now:
Architecture
Worker Node’s Main component/processes
1. Container runtime
2. Kubelet
3. KubeProxy
Issue: interaction with the cluster
Master process
API Server
Scheduler
Controller manager
ETCD
Callouts
Virtual network:
MiniKube & Kubectl
Minikube
Kubectl
Minikube setup
Commands
General commands
Debugging pods
Debugging deployment
Command callouts
Layers of abstractions
YAML configuration file
3 parts of the configuration file
Connecting deployments to service to pods
Demo
Simple setup of nginx deployment and service
Complete application setup with Kubernetes components
Steps
Namespace
What
Create namespaces
Why do we need it
Use cases when to use Namespace
Characteristics of namespaces
Ingress
External service vs Ingress
YAML configuration file External service vs Ingress
Ingress controller
Ingress controller in Minikube
Multiple paths with the same domain/host
Configuring SSL certificate
Helm
Helm chart
Features
Helm chart structure
Kubernetes Volume
3 Components of Kubernetes volume
Persistent volume
Persistent volume claim - PVC
Storage class
Local volume types
Local vs Remote volume types
Callouts
StatefulSet
Kubernetes Service
1. ClusterIP service
2. Headless service
3. NodePort service
4. LoadBalancer service
5. Ingress
6. ExternalName
Additional Concepts and Terminologies
1. Cluster
2. Master Node (Control Plane)
3. Worker Node
4. Pods
5. Container
6. Service
7. Deployment
8. ReplicaSet
9. Namespaces
10. Ingress
11. ConfigMaps and Secrets
12. Volume
13. Request and Limit
Additional concepts
Resource Overcommitment:
Resource Quotas:
Quality of Service (QoS) Classes:
14. Labels and Selectors:
15. Concepts to ensure pods are scheduled and managed in a Kubernetes cluster
NodeSelectors:
Node Affinity and Anti-Affinity
Pod Affinity/ Anti-Affinity:
Taints and Tolerations:
DaemonSets:
Pod Priority and Preemption:
Pod Spread Constraints:
Pod Topology Spread Constraints:
Pod Disruption Budgets (PDBs):
16. Probes (Liveness, Readiness, Startup)
17. StatefulSets:
18. Jobs/CronJobs:
19. Horizontal Pod Autoscaling (HPA) and Vertical Pod Autoscaler (VPA)
20. Custom Resource Definitions (CRD)
21. RBAC (Role-Based Access Control)
Flow
Nginx
Nginx Ingress Controller
Ingress Controller
Ingress Resources
Ingress Rules
Backend
Service
Pod
Namespace
Daemon
Node
Endpoints
Annotations
Helm
ReplicaCount
LoadBalancer
SSL/TLS Termination
NodeSelector:
Scenario for NodeSelector
Taints and Tolerations:
Scenario for Taints and Tolerations
Callouts



Problem it solves

  1. A trend from monolithic to microservices
  2. Increased usage of containers
  3. Demand for a proper way of managing those hundreds of containers


Features

  1. High availability or no downtime
  2. Scalability or high performance
  3. Disaster recovery, backup and restore

Basic concepts


Pod: the smallest unit of K8s, an abstraction over a container. Kubernetes manages the containers inside it without our intervention

External service (NodePort)
Internal service (ClusterIP)


Application replica Issue: let’s say you have one application on a pod and it stops for some reason → your users won’t be able to access your site

Now what: replicate the application on multiple servers



Database replica Issue: when one pod containing your database dies

Need: replicate your database as well

Issue: you can’t replicate a database using a deployment, because a database has state, which is its data // if we have clones or replicas of the database, they would all need to access the same shared data storage, and you would need some kind of mechanism that manages which pods are currently writing to that storage and which pods are reading from it, in order to avoid data inconsistencies

StatefulSet: the Kubernetes component that offers this database-replication feature

When: a stateful application such as a database should be created using a StatefulSet, not a deployment

What: it takes care of replicating the pods and scaling them up and down, making sure the database reads and writes are synchronized so that no database inconsistencies occur

IMP: it is common practice to host database applications outside of the Kubernetes cluster, keep only deployments (stateless applications), which replicate and scale with no problems, inside the cluster, and communicate with an external database

What do we have up till now:


Worker Node’s Main component/processes

Worker = server = node

3 processes of the Kubernetes cluster must run on every worker node:

  1. Container runtime
     1. Needs to run/be installed on every node. Ex: Docker or another technology
     2. Now you can run containers inside pods
     3. But who schedules containers into pods? → Kubelet
  2. Kubelet
     1. Schedules containers into pods
     2. Has an interface with both the container runtime and the node (VM)
     3. Responsible for taking the configuration and actually running/starting the pod with its container, assigning resources from that node (CPU, RAM, storage) to the container // running pods = running application
     4. Communication between nodes happens using services // a sort of load balancer that catches a request directed at a pod (e.g. database) and forwards it to another pod
  3. KubeProxy
     1. Responsible for forwarding requests from services to pods
     2. Must be installed on every node (VM)
     3. Makes sure communication works in a performant way with low overhead (ex: forward a request from an application to a database replica on the same node rather than to a replica running on another node = saves network overhead)
Issue: interaction with the cluster

Issue: how do you interact with the cluster

How to: 

  1. Schedule pod
  2. Monitor which pod dies
  3. re-schedule/re-start pod
  4. Join a new node

Answer: master nodes

Master process

4 processes run on every master node:

API Server
Scheduler
Controller manager
ETCD

Virtual network: 

Enables worker and master nodes to talk to each other

MiniKube & Kubectl








How it works

Minikube setup


You’ll need a hypervisor (e.g. hyperkit).


  1. brew update
  2. brew install hyperkit  // install hypervisor
  3. brew install minikube  // install minikube
     1. It will also install kubectl, so you don’t need to install it separately
  4. Commands
     1. minikube start --vm-driver=hyperkit  // to start minikube, OR: minikube start --driver=docker
     2. kubectl get nodes  // get status of nodes
     3. minikube status  // status of minikube
     4. kubectl version
        1. To know client/server version

For Mac users, follow this guide:


Link: https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/basic-kubectl-commands/cli-commands.md 

General commands


What it does

kubectl get nodes

Get status of nodes // master

kubectl get pods

Get status of pods

kubectl get services

Get status of services

kubectl get replicaset

To get the status of replicaset

kubectl get deployment

To see what you have for deployment

kubectl create deployment nginx-deploy --image=nginx

Create a pod (in the form of a deployment)

  • Check with the get pod command
  • The pod name gets the deployment name as prefix, plus the ReplicaSet ID and its own ID

kubectl edit deployment [NAME]

Edit deployment

kubectl delete deployment [NAME]

Delete deployment

  • Pod will be terminated
  • Replicaset will be gone

kubectl apply -f FILENAME

To create/update component included in config file

Debugging pods


What it does

kubectl logs [POD NAME]

To see the logs

kubectl describe pod  [POD NAME]

kubectl describe service  [SERVICE NAME]

Get info about pods // State changes happening inside the pod

kubectl exec -it [POD NAME] -- /bin/bash

Get a terminal in the pod’s container as root user // useful in debugging when testing something

kubectl get pod -o wide

More details on pods, to check which IPs are assigned to them

Debugging deployment


What it does

kubectl get deployment [DEPLOY NAME] -o yaml > nginx-deployment-result.yml

Get the status K8s automatically generates: this is the updated configuration of your deployment (resides in ETCD)

Command callouts


  1. You create pods in the form of a deployment
  2. When you create a pod through a deployment, the pod name gets → deployment name + ReplicaSet ID + its own ID
  3. The ReplicaSet manages the replicas of a pod; in practice you’ll never create/delete/update a replica of your pod yourself
  4. Every CRUD operation is done on the deployment

Layers of abstractions

  1. Deployment manages the ReplicaSet
  2. ReplicaSet manages all the replicas of the pod
  3. Pod is an abstraction of the container
  4. Everything below the pod is managed by Kubernetes

YAML configuration file

3 parts of the configuration file

The first 2 lines declare what you want to create

  1. Line 1 (apiVersion): each component has a different API version
  2. Line 2 (kind): the type of the component, e.g. Deployment or Service

Part 1: Metadata

What: metadata of component that you’re creating

Part 2: Specification

What: each component’s config file has a specification section where you put every kind of configuration you want to apply to that component

Part 3: Status

What: automatically generated and added by Kubernetes


Format of configuration file 
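As a sketch, a complete minimal Service file showing the three parts (names and ports are illustrative):

```yaml
apiVersion: v1            # Line 1: each component type has its own API version
kind: Service             # Line 2: kind of the component
metadata:                 # Part 1: metadata of the component you’re creating
  name: my-service
spec:                     # Part 2: specification - the config you want to apply
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
# Part 3: status is generated and added by Kubernetes itself;
# you never write it in the file you apply
```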

Connecting deployments to service to pods

The template has its own metadata and spec section

Connecting Components (labels & selectors & ports)

Connection is established using labels and selectors

The metadata part contains labels

Connecting service to deployment using metadata → labels

The specification part contains the selector

The service has its own port configuration (ports:)

In the deployment, the container inside the pods runs (or needs to run) on some port
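The label/selector/port wiring can be sketched like this (names, image and ports are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app            # label on the deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app          # deployment finds its pods by this label
  template:                # template has its own metadata and spec
    metadata:
      labels:
        app: my-app        # label on the pods
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app            # service forwards to pods with this label
  ports:
    - port: 80             # port the service listens on
      targetPort: 8080     # containerPort of the pods
```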

Simple setup of nginx deployment and service

Create 2 files

  1. Nginx deployment
     1. Specify deployment name, container port (8080), replicas (2)
  2. Nginx service
     1. Point the service to the container port (8080) - to connect to the deployment
     2. Service listening port (80) - to take requests from other services


  1. kubectl apply -f filename


  1. kubectl get pod // you’ll see 2 pods running
  2. kubectl get service  // you’ll see one service created for nginx
  3. kubectl describe service serviceName // validate the service has the right pods that it forwards requests to
     1. In the Endpoints field you’ll see the IP address endpoints of the pods that the service forwards requests to
  4. kubectl get pod -o wide // more details on pods, to check which IPs are assigned to them
  5. kubectl get deployment filename -o yaml > storefile.yml // see the status file that Kubernetes automatically generates → get the updated configuration from ETCD → save the output to a file
     1. You can see additional details here
     2. If you want to copy a deployment that you already have using an automated script → you’ll have to clean the deployment configuration from this file and deploy it from this blueprint
  6. kubectl delete -f filename

Complete application setup with Kubernetes components


  1. Deploy 2 applications
     1. MongoDB and Mongo Express


  1. MongoDB pod
     1. Internal service // to talk to the mongo db pod
        1. Internal service: no external requests allowed to the pod; only components in the same cluster are allowed to talk to this service
  2. Mongo Express deployment
     1. Needs the mongoDB database url (to connect to the DB) and username and password (to authenticate)
     2. Pass this info to mongo express via the deployment config file using environment variables (ConfigMap and Secret)
  3. Mongo Express external service that allows access through the browser // allows external requests to talk to the pod (mongo express)
     1. Url: IP address of the node and port of the external service


  1. Request comes from the web browser
  2. Request goes through the Mongo Express external service
  3. Forwarded to the mongo express pod
  4. Pod forwards it to the internal service of mongo db
     1. That is the database url
  5. Service then forwards it to the mongo db database pod
     1. It will authenticate the request through credentials

  1. Create the mongo db config file
  2. Create the secret file
     1. kubectl apply -f filename
     2. kubectl get secret // to see the secret
     3. Now reference it in our deployment

  1. Apply the mongo db config file
  2. Create the internal service // so other components in the cluster can talk to the mongo db database
     1. Write the service component in the same file as the deployment file
     2. Apply it using kubectl
  3. Create the config map // contains the mongo db database server address
     1. The config map must be ready when referencing it // must be deployed before referencing it
     2. Apply it using kubectl
  4. Create the mongo express deployment config file
  5. Create the mongo express service file
     1. type: LoadBalancer // assigns the external service an IP address so it can accept external requests
     2. nodePort: 30000-32767 // port at which the user can access the application
  6. Assign a public IP address to the external service
     1. minikube service AppServiceName
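A sketch of the Secret and ConfigMap from the steps above (names, keys and values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:                                    # values must be base64-encoded
  mongo-root-username: dXNlcm5hbWU=      # echo -n 'username' | base64
  mongo-root-password: cGFzc3dvcmQ=      # echo -n 'password' | base64
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service          # internal service name of mongo db
```

Referenced in the mongo express deployment as environment variables (the env var names are assumptions based on mongo express conventions):

```yaml
env:
  - name: ME_CONFIG_MONGODB_ADMINUSERNAME
    valueFrom:
      secretKeyRef:
        name: mongodb-secret
        key: mongo-root-username
  - name: ME_CONFIG_MONGODB_SERVER
    valueFrom:
      configMapKeyRef:
        name: mongodb-configmap
        key: database_url
```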


  1. Internal service or ClusterIP is the default
  2. For an external service, you have to specify type: LoadBalancer
     1. You also need a nodePort in the range 30000-32767



  1. kube-system
     1. Not meant for your use // you shouldn’t create/modify anything under this namespace
     2. System processes are deployed here
     3. Master and kubectl processes are deployed here
  2. kube-public
     1. Contains publicly accessible data
     2. Has a ConfigMap which contains cluster info
  3. kube-node-lease
     1. Holds info about nodes
     2. Each node has an associated lease object in this namespace
     3. Determines the availability of a node
  4. default
     1. Resources you create are located here
     2. You can add or create new namespaces
Create namespaces
  1. Create with a command
     1. kubectl create namespace [NAME]
  2. Create with a configuration file
     1. BEST WAY: because that way you’ll have a history of what resources you have in the cluster
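Such a namespace config file is a short sketch (the name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
```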
Why do we need it
  1. No overview
     1. If you have multiple deployments, replicasets, services and configmaps within the one default namespace, it gets difficult to have an overview of your components (what’s in there)
     2. You need namespaces to divide your components/resources into groups // logically grouping your resources inside the cluster
     3. Examples
        1. database namespace where you deploy the database and all required resources
        2. monitoring namespace where you deploy prometheus
        3. nginx ingress
        4. elastic stack
  2. Conflicts: many teams, same application
     1. When you have multiple teams working on the same cluster: what if both teams accidentally have a deployment with the same name but different configuration? The team that deploys later will overwrite the deployment of the first
  3. Resource sharing: staging and dev
     1. When you want to host your dev and staging deployments on the same cluster
     2. Re-usable components like the nginx-ingress-controller and elastic stack (for logging) are used by both the dev and staging environments; that way you don’t need to create them separately
     3. Blue-green deployment: you want to deploy 2 different versions on the same cluster (one active and one inactive); in this case common/re-usable components can be utilized by both versions
  4. Access and resource limits on namespaces
     1. When you’re working with multiple teams and want to limit access per namespace
        1. Say 2 teams work on the same cluster with different namespaces; you can then restrict access so each team can only do CRUD operations in its own namespace // increases security
     2. Limit the resources each namespace consumes
        1. Because a cluster’s resources are limited, you want to give each team a share of resources for their application; if one team consumes too much, the cluster runs out of resources and other teams can’t deploy their applications
        2. Define a resource quota on the namespace: how much CPU, RAM and storage one namespace can use

Use cases when to use Namespace
  1. Structure your components
  2. Avoid conflicts between teams
  3. Share services between different environments
  4. Access and resource limits on namespace level
Characteristics of namespaces
  1. You can’t access most resources from another namespace
     1. If you have secrets or configmaps of a shared service, you will have to create the configmap and secret in each namespace where you are going to need them
     2. Services can be shared across namespaces
  2. Some components cannot be created within a namespace
     1. They live globally in the cluster
     2. You cannot isolate them
     3. Such resources are volumes/persistent volumes and nodes (when you create a volume, it is accessible through the whole cluster because it’s not in a namespace)
     4. You can list the resources that are not bound to a namespace with K8S commands:
        1. kubectl api-resources --namespaced=false
        2. kubectl api-resources --namespaced=true

Create component in a Namespace

  1. kubectl apply -f FILENAME --namespace=NSNAME
  2. Have the namespace inside the configuration file


You can also change active namespace


External service vs Ingress


How to achieve this

External service: 


YAML configuration file External service vs Ingress

Ingress controller

If you create the above ingress component alone, it won’t work; for the ingress routing rules to work, you also need an ingress controller



What:

Different Entrypoints

  1. If using cloud → load balancer - this is your entry point and you need to configure it to ingress controller
  2. Bare metal → you need to configure some kind of entrypoint
  1. External proxy server: software or hardware solution

Flow: request → proxy server/Load balancer → ingress controller → ingress component(to check ingress rule) → service → app  

Ingress controller in Minikube


  1. Install the ingress controller in Minikube // automatically starts the K8s Nginx implementation of the ingress controller

minikube addons enable ingress # install ingress controller

kubectl get pod -n kube-system  # to check the controller

  2. Create an ingress rule // so the controller can evaluate it

kubectl apply -f FILE_NAME # Create ingress rule

kubectl get ingress -n NS  # to see the created ingress

kubectl describe ingress NAME -n NS  # to see details of the ingress

  3. Configure the default backend
     1. Why: if a request matches none of your ingress rules, it is automatically routed to the default backend → use this for a custom error page/response
Multiple paths with the same domain/host

What: one domain (google) but many services (analytics, shopping, calendar, gmail)

Use case - 1: same domain with multiple applications/service


  1. This will have one host and multiple paths

Use case - 2: sub domain for multiple applications/service


  1. This will have multiple hosts
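Both use cases can be sketched as ingress rules (hosts, paths, service names and ports are illustrative):

```yaml
# Use case 1: one host, multiple paths
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /analytics
            pathType: Prefix
            backend:
              service:
                name: analytics-service
                port:
                  number: 3000
          - path: /shopping
            pathType: Prefix
            backend:
              service:
                name: shopping-service
                port:
                  number: 8080
---
# Use case 2: multiple hosts (sub-domains)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: subdomain-ingress
spec:
  rules:
    - host: analytics.myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: analytics-service
                port:
                  number: 3000
```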

Configuring SSL certificate


  1. Data keys need to be “tls.crt” and “tls.key”
  2. Values are the file contents, NOT file paths/locations
  3. The Secret component must be in the same namespace as the ingress component
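A sketch of the TLS secret and the ingress that uses it (names and host are illustrative; the base64 values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret-tls
  namespace: default                 # must be the same namespace as the ingress
type: kubernetes.io/tls
data:
  tls.crt: base64-encoded-cert       # file content, not a file path
  tls.key: base64-encoded-key        # file content, not a file path
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
    - hosts:
        - myapp.com
      secretName: myapp-secret-tls
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80
```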


What: package manager for Kubernetes

Why: so multiple teams can use a single set of YAML manifest files if they want to

When to use:

Helm chart


When: you have multiple microservices that use the same set of deployment and service configurations, with the only differences being the application name, version and docker image name


Helm chart structure
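The standard chart directory layout, as a rough sketch:

```
mychart/
  Chart.yaml          # metadata about the chart: name, version, dependencies
  values.yaml         # default values for the template variables
  charts/             # chart dependencies (other charts)
  templates/          # the actual YAML manifests, with template placeholders
    deployment.yaml
    service.yaml
```

Inside templates/, values are substituted via placeholders such as `image: {{ .Values.appImage }}`, and each microservice overrides the defaults with its own values file: `helm install my-release mychart/ -f my-values.yaml`.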


Kubernetes Volume

Need: to persist data in K8s using volume

What: the application pod needs to read/write data in the database pod

Why: when the database pod dies, the data it holds is also gone, so we need to persist data



3 Components of Kubernetes volume

Why: for data persistence

What for:

How: the application writes/updates/reads data in the database pod

Persistent volume



  1. Where does this storage come from?
     1. K8s needs to take storage from actual physical storage, like a local disk (on a cluster node), an external NFS server, or cloud storage, and make it available to the cluster
        1. Who makes it available to the cluster?
     2. What type of storage do you need for your application - you decide
     3. You need to create and manage the volumes yourself (back them up and make sure they don’t get corrupt)

Persistent volume YAML example

  1. NFS storage

  2. Google Cloud storage

  3. Local storage on the node
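A sketch of an NFS-backed PV (server, path and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com     # hypothetical NFS server
    path: /exports/data                # hypothetical export path
```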

Persistent volume claim - PVC

Application has to claim that persistent volume using PVC

Created using YAML configuration

Use the PVC in pod configuration 

Why: so that all the pods have access to that PV storage
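A sketch of a PVC and a pod mounting it (names, size and mount path are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      volumeMounts:
        - name: my-volume
          mountPath: /var/www/html     # where the container sees the data
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: pvc-name            # claims the PV through the PVC
```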

Issue: devs need to ask system admins to create the PVs they need for the application before deployment; the admin then manually requests the cloud storage provider and creates e.g. 100 PVs for all the applications that need storage // tedious and time-consuming - this issue is solved by Storage class

Storage class

In the below image/file, the StorageClass creates PVs dynamically in the background

The storage backend, which we previously defined in the PersistentVolume component, is now defined in the StorageClass component using the provisioner attribute

Storage class usage
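A sketch of a StorageClass and a PVC that requests it (names and parameters are illustrative; each storage backend has its own provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-name
provisioner: kubernetes.io/aws-ebs     # backend-specific provisioner
parameters:
  type: io1
  iopsPerGB: "10"
---
# A PVC then requests this class instead of a pre-created PV;
# the PV is provisioned dynamically in the background:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: storage-class-name
```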

Local volume types

What: managed by Kubernetes


  1. ConfigMap
  2. Secret

Usage in YAML:

Local vs Remote volume types

Each volume type has its own use case

Local volume types violate requirements 2 and 3 for data persistence, so:

Always use remote volumes

K8s Administrator and K8s User

Persistent volumes are resources that need to exist before the pod that depends on them is created

There are two main roles in Kubernetes:

  1. Administrator:
     1. Sets up and maintains the cluster, and makes sure the cluster has enough resources
     2. Configures the actual storage (NFS, cloud) that will be available to the cluster
     3. Creates the PV components from these storage backends
     4. These are generally system administrators or devops engineers in the company; developers tell them how much and what type of storage they need to run their application
  2. Kubernetes User:
     1. Deploys the application in the cluster, either directly or through a CI pipeline
     2. Creates a claim (PVC) to a PV

  1. A volume is a directory with some data
  2. These volumes are accessible to the containers in a pod
  3. How it is made available, and which storage medium backs it, is defined by the specific volume type (aws, azure, …)
  4. You can mount multiple volume types to your app pod





Application replication

Database replication 

Database pods are difficult to replicate compared to application pods



Kubernetes Service

6 types are:

  1. ClusterIP service
     1. What: internal to the K8s cluster
     2. Default service type in K8s, so you don’t need to mention it in your YAML files
  2. Headless service
  3. NodePort service
     1. What: access a pod’s service via the internet
  4. LoadBalancer service
     1. What: access a pod’s service via the internet
     2. How: in the cloud (ex: Azure), if you create an LB service, it creates a public IP and associates it with the LB. So, say you have 10 LB services; it creates 10 public IPs and attaches/maps them to the LBs in order to access those services via the internet
  5. Ingress
     1. What: access a pod’s service via the internet
  6. ExternalName


ExternalName: if you have a backend that uses an AWS RDS database and you want to access that AWS database from inside an AKS cluster, you need to define an ExternalName service for the DB

ClusterIP: if you have a frontend app that needs to talk to your backend app, you must use ClusterIP

LoadBalancer and Ingress: if your users need to access your frontend application, then use either of these services
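An ExternalName service can be sketched like this (the name and RDS endpoint are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  externalName: mydb.abc123.us-east-1.rds.amazonaws.com   # hypothetical RDS endpoint
# Pods in the cluster can now reach the external DB via the
# in-cluster DNS name "my-database"
```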

Additional Concepts and Terminologies

  1. Cluster

  2. Master Node (Control Plane)

        * API Server

            * entry point to the K8S cluster

            * serves the Kubernetes RESTful API

            * When you want to deploy an application to the K8S cluster, you interact with the master node’s API server using a client (UI, API (script), CLI)

        * Scheduler

            * schedules the pod on a node

            * how: first it looks at your request and determines how many resources (CPU & RAM) the application you want to schedule will need; then it looks at the worker nodes and sees which ones are available to take this application

        * Controller manager

            * keeps an overview of what’s happening in the cluster: whether something needs to be repaired, or if a container died, it needs to be restarted

            * ex: when a pod dies, the controller manager detects that and recovers the cluster state

        * ETCD

            * Brain of the cluster: holds the data about the cluster (actual status)

            * Key-value store which holds at any time the current/actual state of the K8S cluster; you can recover the whole cluster state using an ETCD snapshot

  3. Worker Node
  4. Pods
  5. Container
  6. Service
  7. Deployment
  8. ReplicaSet
  9. Namespaces
  10. Ingress

  11. ConfigMaps and Secrets
  12. Volume
  13. Request and Limit

   Resource Overcommitment:


Resource Quotas:

    Quality of Service (QoS) Classes:
  1. Guaranteed: Pods in this class have resource requests equal to their resource limits. They are not evicted for resource reasons unless they exceed their limits.

                * Example:

                    * Pod C (Guaranteed): Memory Request 2Gi, CPU Request 1 (1 CPU core), Memory Limit 2Gi, CPU Limit 1 (1 CPU core)

                    * Pod D (Guaranteed): Memory Request 1Gi, CPU Request 0.5 (0.5 CPU core), Memory Limit 1Gi, CPU Limit 0.5 (0.5 CPU core)

  2. Burstable: Pods in this class have resource requests and can consume resources up to their limit. They can be evicted if the node is under heavy resource pressure, but they're given a chance to use their requested resources first.

                * Example:

                    * Pod A (Burstable): Memory Request 1GB, CPU Request 500m (0.5 CPU core), Memory Limit 2GB, CPU Limit 1 (1 CPU core)

                    * Pod B (Burstable): Memory Request 512Mi (0.5 GiB), CPU Request 250m (0.25 CPU core), Memory Limit 1Gi, CPU Limit 0.5 (0.5 CPU core)

  3. Best-Effort: Pods in this class don't specify any resource requests or limits. They get the leftover resources on the node. They can be evicted first under resource pressure. These Pods are considered the lowest priority, and resources will be provided to them only when they're available and not being consumed by higher QoS Pods.



CPU and memory unit reference:

  250m (milli) = 0.25 CPU core | 512 MB
  500m (milli) = 0.5 CPU core  | 1 GB
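As a sketch, a pod that would land in the Guaranteed class (requests equal limits for every container; name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:              # requests == limits → Guaranteed QoS class
          memory: "1Gi"
          cpu: "500m"
```

Dropping the limits (keeping only requests) would make it Burstable; omitting both would make it Best-Effort.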

14. Labels and Selectors:
  1. Labels:
  2. Selectors:
  1.  Service Discovery: In a microservices architecture, services need to discover and communicate with each other. Selectors can be used to group pods that provide similar services. For example, all web servers could be labeled with app: web-server, and a service can be created to forward traffic to all pods with this label.
  2.  Load Balancing: When you have multiple pods providing the same service (like web servers), a service with a selector can balance incoming requests across all pods that match the labels.
  3.  Scaling and Auto-Healing: Using tools like Horizontal Pod Autoscaler (HPA) or ReplicaSets, you can automatically scale the number of pods based on metrics. For instance, you might want to maintain a certain number of pods with a specific label (e.g., environment: production).
  4.  Applying Policies: You can use selectors to apply policies or configurations to specific groups of pods. For example, you might want to apply a certain set of security policies or resource limits to all pods with a specific label.
  5.  Rolling Updates: During a rolling update, you may want to update pods incrementally. A selector can be used to identify the pods that need updating.
  6.  Troubleshooting and Debugging: When investigating issues, you might want to focus on a specific group of pods. You can use a selector to filter logs, events, or metrics related to those pods.
  7.  Canary Releases: For canary releases, you can use selectors to target a specific subset of pods for testing new versions while keeping the rest on the stable version.
  8.  Resource Management: You can use selectors in combination with ResourceQuotas or LimitRanges to apply resource constraints to specific groups of pods.

In summary, selectors allow you to define logical groupings of pods based on their labels, and this grouping enables you to apply various operations or policies to those groups. It's a fundamental concept in Kubernetes that provides a powerful way to manage and operate your applications.
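As a sketch, labels on a pod and a service selecting them (label keys/values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web-server              # labels are arbitrary key/value pairs
    environment: production
spec:
  containers:
    - name: web
      image: nginx:1.25
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web-server              # groups all pods carrying this label
  ports:
    - port: 80
```

The same grouping works on the command line, e.g. `kubectl get pods -l app=web-server`.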

15. Concepts to ensure pods are scheduled and managed in a Kubernetes cluster
NodeSelectors:
Node Affinity and Anti-Affinity
Pod Affinity/ Anti-Affinity:
Taints and Tolerations:

tolerations:
  - key: "example-key"          # hypothetical taint key
    operator: "Exists"
    effect: "NoSchedule"        # pods without a matching toleration are not scheduled on nodes tainted with this key

Pod Priority and Preemption:
Pod Spread Constraints:

This allows you to specify rules for spreading pods across nodes in a cluster to ensure high availability and fault tolerance.

Pod Topology Spread Constraints:
Pod Disruption Budgets (PDBs):

16.  Probes (Liveness, Readiness, Startup)

What: Kubernetes uses the Liveness, Readiness, and Startup Probes to determine the health of pods.

  1. Liveness Probes: A Liveness probe checks if the application within a pod is running. If the liveness probe fails, Kubernetes restarts the container.
  1. Just because of pod is running, that doesn’t mean that the application is running. Sometimes application is not responsive due to some reasons(application that gets locked or frozen after running for a certain period). So the idea is to send a request to the app and see if we get response. If not then restart the pod(even if the pod is running)
  2. Example: For instance, consider a simple application that gets locked or frozen after running for a certain period due to some bug or unexpected condition. The application is running, but it's not able to handle requests or perform its operations. Here, a Liveness probe can be beneficial.
  1.  Readiness: A Readiness probe is used to know when an application is ready to accept requests.
  1. Usage: Use Readiness Probes when your app is running but not ready to receive traffic
  2. Example: suppose you have an application that needs to load a large dataset or some configuration on startup. While the application might be running (passing a Liveness probe), it's not yet ready to serve requests. A Readiness probe tells Kubernetes when to start sending requests, avoiding possible errors or slow responses.
  3. What a long initial startup time means: By default, Kubernetes assumes that applications are ready as soon as they are running. This works well for simple applications, but complex ones that need time on startup to load large datasets or establish database connections might be running without yet being ready to serve requests.
  4. If you don't have a Readiness probe: Kubernetes starts sending traffic to your pod as soon as it starts, which can lead to errors if the application isn't ready to handle it. If that is fine for your application, you might not need a readiness probe.
  1.  Startup: The startup probe is used to know when a container application has started. It disables liveness and readiness checks until it succeeds, making sure slow-starting applications aren't killed by Kubernetes.
  1. Example: Consider an application that takes a considerable amount of time to start (say, due to database migrations), during that time, it won't pass readiness or liveness probes. This is where the Startup probe is helpful – it gives your application enough time to get up and running before the Liveness and Readiness probes start checking.
  2. The startup probe tells Kubernetes to hold off on liveness and readiness checks until the application has started, which gives it room for long-running startup work like database migrations or loading large data files. Once the startup probe succeeds, Kubernetes begins liveness and readiness checks as usual.
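The three probes above can be sketched on a single container (the image name, ports, and /healthz and /ready paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: my-app:1.0            # illustrative image
    ports:
    - containerPort: 8080
    startupProbe:                # runs first; liveness/readiness wait until it succeeds
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30       # allows up to 30 * 10s = 300s to start
      periodSeconds: 10
    livenessProbe:               # container is restarted if this fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:              # traffic is withheld while this fails
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```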

17.  StatefulSets:
18.  Jobs/CronJobs:
19.  Horizontal Pod Autoscaling (HPA) and Vertical Pod Autoscaler (VPA)
20.  Custom Resource Definitions (CRD)
21.  RBAC (Role-Based Access Control)





A few typical use-cases when one would use NGINX:


Nginx Ingress Controller


When & Why


Terminology Used in The NGINX Ingress Controller:

Ingress Controller

Ingress Resources
Ingress Rules

SSL/TLS Termination

Scenario for NodeSelector
  1. Suppose, in a company, we have two types of servers: Basic servers and High-Memory servers. We label the Basic servers as server-type=Basic and the High-Memory servers as server-type=High-Memory.
  2. We have an application (Pod) that needs a lot of memory. So, we use a NodeSelector to make sure it gets scheduled on a node labeled server-type=High-Memory.  
  3. This Pod (high-memory-app) will now explicitly be scheduled on Nodes with the label server-type=High-Memory.
  4. Note that the NodeSelector only constrains this Pod: Pods without any NodeSelector can still be scheduled on our High-Memory server, while Pods with a server-type=Basic NodeSelector can only go to Basic servers. To keep other Pods off the High-Memory server entirely, we need taints and tolerations.
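The Pod from this scenario might look like the following sketch (the pod name and image are illustrative; the server-type=High-Memory label comes from the scenario):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-memory-app
spec:
  nodeSelector:
    server-type: High-Memory   # only schedule on nodes carrying this label
  containers:
  - name: app
    image: my-app:1.0          # illustrative image
```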

Taints and Tolerations:
Scenario for Taints and Tolerations
  1. Continuing with our previous scenario, let’s say some of the High-Memory servers are very expensive and we want to reserve them only for high-memory applications.
  2. We apply a taint to the High-Memory servers like this: kubectl taint nodes high-memory-node-1 high-cost=yes:NoSchedule.
  3. Now, by default, no Pod will be scheduled on high-memory-node-1, unless it has a toleration for the high-cost=yes taint.
  4. We need to add a corresponding toleration to our high-memory application:  
  5. With the toleration, high-memory-app can now be scheduled on high-memory-node-1, despite the taint. And importantly, no other Pods (e.g., basic Pods) can use high-memory-node-1, ensuring that the high-cost, high-memory Node is only used for appropriate workloads.
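The toleration referenced in step 4 might be sketched like this, matching the taint high-cost=yes:NoSchedule applied in step 2 (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: high-memory-app
spec:
  nodeSelector:
    server-type: High-Memory       # still target the high-memory nodes
  tolerations:
  - key: "high-cost"               # matches the taint high-cost=yes:NoSchedule
    operator: "Equal"
    value: "yes"
    effect: "NoSchedule"
  containers:
  - name: app
    image: my-app:1.0              # illustrative image
```

Note the toleration only permits scheduling on the tainted node; the nodeSelector is still what steers the Pod there.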




  1. The smallest unit of K8S
  2. Abstraction over container
  3. Usually, one application per pod
  1. Database, application, and server each get one pod
  2. They need to talk to each other to send/store data
  1. Each pod gets its own IP address
  2. Ephemeral - dies frequently
  3. New IP Address on a re-creation


  1. Pods communicate with each other through a service rather than via IP addresses
  2. A service provides a permanent IP address (attached to each pod)
  3. The lifecycles of a pod and its service are not connected - even if the pod dies, its service IP address remains the same - no need to change the endpoint.
  4. External service: a service that opens communication from an external source
  1. You’ll need type: LoadBalancer, or type: NodePort with a nodePort in the range 30000-32767
  1. Internal service: use it when you don’t want your database to be open to public requests
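An external (NodePort) service can be sketched like this (the names, label, and ports are illustrative; only the nodePort range 30000-32767 is fixed by Kubernetes):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-external
spec:
  type: NodePort
  selector:
    app: my-app            # illustrative pod label
  ports:
  - port: 80               # service port inside the cluster
    targetPort: 8080       # container port on the pod
    nodePort: 30080        # externally reachable port; must fall in 30000-32767
```

An internal (ClusterIP) service is the same manifest without the type and nodePort fields, since ClusterIP is the default.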


  1. Instead of an external service, a request goes to the ingress and is then forwarded to the internal service. // practical usage
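That forwarding rule can be sketched as an Ingress resource (the domain and service name are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com       # illustrative domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service   # internal service the request is forwarded to
            port:
              number: 80
```

An ingress controller (e.g., the NGINX Ingress Controller) must be running in the cluster for these rules to take effect.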

ConfigMap & Secret

  1. ConfigMap to store database endpoint
  2. Secrets to store database username and password
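A minimal sketch of both objects (the names and values are illustrative; Secret data is base64-encoded, not encrypted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  database_url: mongodb-service      # non-confidential endpoint
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: dXNlcg==                 # base64 of "user"
  password: cGFzc3dvcmQ=             # base64 of "password"
```

Pods consume these values as environment variables (via configMapKeyRef / secretKeyRef) or as mounted files.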


  1. Physical storage attached to a pod.
  2. The storage can be on the local machine (the same node as the pod) or on remote/cloud storage


  1. You won’t be creating pods directly (because of failures and replication). You’ll be creating a deployment  
  2. where you define how many replicas you want
  3. You can scale the number of pod replicas up and down, e.g., when one of your application instances goes down
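A minimal Deployment sketch (the names, label, and nginx image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app              # must match the pod template's labels
  template:                    # pod template the Deployment creates replicas from
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
```

Scaling up or down is then just a matter of changing replicas (or running kubectl scale).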


  1.  Replicas of a database are handled by a StatefulSet
  2. Use it when you have a database application
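A StatefulSet sketch for a database (the names and mongo image are illustrative; a matching headless service named mongodb-service is assumed to exist):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-service   # headless service giving each replica a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:6.0
```

Unlike a Deployment, the replicas get stable identities (mongodb-0, mongodb-1, ...) and, typically, their own persistent volumes.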



  1. In practice, you’ll have
  1. 2 master nodes = consume fewer resources (RAM & CPU)
  2. 3 worker nodes = more resources (RAM & CPU)  
  1. As the complexity of your application and its need for resources grow, you add more master and worker nodes
  2. How to add a new Master/worker-node server
  1. Get a new bare server
  2. Install all the master/worker-node processes
  3. Add it to the cluster