Docker by LinuxAcademy

Docker
Docker version
Moby
Docker 5 Whys
How Docker works?
Why you need Docker
What is the issue
How is the issue solved?
Use case of Docker
Code concepts
How to start in Cloud
Prerequisite
Docker Architecture
Client: where you run and execute commands
Client-server architecture
What you do
Create Dockerfile
Create Image
Create container
Docker objects
Callouts
Docker Engine
Running a container flow
Docker commands
Docker general commands
Docker image commands
Docker container commands
Exec Command Explained via Scenario
Container flag commands 2
Docker Ports Command
Flow
Execute container command
Docker logging Command
Docker Networking
Network Commands
Scenarios
Network subnet/gateway commands
Callouts
Scenarios
Docker storage
How docker stores data on a local file system
Layered Architecture
Categories for data storage
Mount data
2 ways to handle persistent data
Bind mount
Volume: Preferred way to store data
Docker Volume commands
Scenarios
Dockerfile
General guidelines
Create a Dockerfile
Instructions
Commands
Environment variables
Build and Run Containers with env
Build arguments
Build and Run Containers with Build env
Build commands & flags
Multi-stage build
Tag Commands
Docker Hub
Image Push Command
Objectives
Example
Lab challenge
You want to dockerize your application
Tagging and Pushing Images to Docker Hub
Image history
Container management
Container processes commands
Start containers automatically
Example command
Container events
Managing stopped containers
Docker Portainer
Watchtower
Scenarios
Portainer
Watchtower
Docker Compose
Idea
Example
Issues resolved by Docker Compose
Docker Compose File explained
Callouts
Docker-Compose File
Compose with Volume and Network
Management Commands
Compose Commands
Example
Callouts
Container orchestration
Docker Swarm
What is Docker Swarm
Setup
How it works
What's it for?
Swarm commands
Service
Example
Service commands
Swarm networking
Ingress network
Embedded DNS
Swarm Network commands
Swarm Volumes
Volume commands
Deploy swarm stack
Example
Example 2
What's Given

Docker

Docker version

Moby

Docker 5 Whys

https://www.youtube.com/watch?v=fqMOX6JJhGo&ab_channel=freeCodeCamp.org

How Docker works?

Dockerfile → docker image → container(s)

A Dockerfile is used to create an image for an application.

The image can run on any host with Docker installed on it.

The Ops team can use this image to deploy the application.
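As a quick sketch of that flow (the image and container names here are made up for illustration):

# Build an image from the Dockerfile in the current directory
docker image build -t myapp:v1 .
# Run containers from that image; one image can back many containers
docker container run -d --name myapp1 myapp:v1
docker container run -d --name myapp2 myapp:v1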

Why you need Docker

Set up an end-to-end (E2E) application stack that includes various different services/technologies

What is the issue

Developing an app with different components

We need something that helps us with compatibility issues, something that allows us to modify or change these components without affecting the other components, and even modify the underlying operating system as required.

  1. Compatibility with the underlying OS
  2. Checking the compatibility between the services and the libraries and dependencies on the OS // the matrix from hell
  3. Difficult to set up a new environment when a new developer onboards
  4. Cannot guarantee that the application we are building will run the same way in different environments/OSes

How is the issue solved?

Run each component in a separate container, with its own dependencies and libraries, all on the same VM and OS but within separate environments (containers).

We just had to build the Docker configuration once, and all developers could get started with a simple docker run command, irrespective of which operating system they are on.

All they need to do is make sure they have Docker installed on their system.

Use case of Docker

Code concepts

How to start in Cloud

  1. Buy a server
  2. Install Docker CE

Prerequisite

# Uninstall old versions:

sudo yum remove -y docker \
                 docker-client \
                 docker-client-latest \
                 docker-common \
                 docker-latest \
                 docker-latest-logrotate \
                 docker-logrotate \
                 docker-engine
# Install Docker CE


# Add the Utilities needed for Docker:
sudo yum install -y yum-utils \
 device-mapper-persistent-data \
 lvm2

# Set up the stable repository:
sudo yum-config-manager \
   --add-repo \
   https://download.docker.com/linux/centos/docker-ce.repo


# Install Docker CE:
sudo yum -y install docker-ce

# Enable and start Docker:
sudo systemctl start docker && sudo systemctl enable docker

# Add cloud_user to the docker group:
sudo usermod -aG docker cloud_user
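A quick way to verify the installation afterwards (standard Docker commands):

# Confirm the client and daemon are talking to each other
docker version
# Pull and run a tiny test container
docker run hello-world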

Docker Architecture

Client: where you run and execute commands

Docker daemon: a persistent process that manages containers.

  • It listens for Docker API requests and then manages Docker objects. A few of these objects include images, containers, networks, and volumes.
  • Builds, runs, and distributes Docker containers

Client-server architecture

  • Docker daemon and client are separate binaries and the client can be used to communicate with different daemons.
  • This means that you use the client to go and execute commands. The daemon is responsible for doing all the heavy lifting. This includes building, running, and distributing Docker containers.
  • Both the daemon and the client communicate using a REST API via UNIX sockets or a network interface.
  • Flow: execute a command via Docker client → it communicates with the API, which receives the request → and then tells the Docker daemon to execute the action.
  • Docker registry: When we start getting into building our own images, we need a place to store them, and that is where the Docker registry comes in.
  • Docker Hub: By default, Docker is set up to use Docker Hub. It's a public registry where you can go and save your images, as well as download other images created by other people or companies.

What you do

Client: communicate with docker daemon using the API

Create Dockerfile
Create Image
Create container
Docker objects

Callouts

Docker Engine

Major components: all these components work together to create and run containers

Running a container flow

Docker commands

  1. Management commands
  2. Container and image commands

Docker general commands

docker -h | more
Docker help

sudo systemctl restart docker
Stop and restart the Docker service

Docker image commands

docker image -h
Returns a list of all commands that are associated with docker image

docker image ls (or docker images)
Shows all images with their ID, tag, size, and date

docker image pull nginx
Pulls an image or repository from a registry (Docker Hub)

  • It downloads the image but does not run a container

docker image inspect IMAGE_ID
Returns low-level info/details on Docker objects in JSON

  • Id: sha256 digest
  • RepoTags: nginx:latest
  • Env: environment variables defined
  • Cmd: command that gets executed when the container starts up

docker image push
Pushes an image or repository to a registry (Docker Hub)

docker image import
Imports the contents from a tarball to create a filesystem image

docker rmi IMAGE_ID/NAME
Removes an image

  • Make sure no container is running from that image

docker rmi -f IMAGE_ID/NAME
Force-removes an image

Docker container commands

docker container -h
Returns a list of all commands that are associated with docker containers

docker container ls (or docker ps)
List containers that are running

docker container ls -a (or docker ps -a)
List all containers, including those that have been stopped

docker container run busybox
Run a command in a new container

docker container run -d -P nginx
-P = takes all the ports that are exposed for this container and maps each to a random host port number
-d = detach; otherwise the container runs in the foreground and you can't execute any other command

docker container inspect CONTAINER_ID
Returns detailed info on one or more containers

  • Status
  • Networking

docker container top CONTAINER_ID
Display the running processes of a container

  • Master process
  • Worker process

docker container stats CONTAINER_ID/NAME
Display a live stream of container resource usage statistics

  • Memory
  • CPU

docker container logs CONTAINER_ID/NAME
Return log data coming back from the container

docker container attach CONTAINER_ID/NAME
Attach local standard input, output, and error streams to a running container

docker container stop CONTAINER_ID/NAME
Stop one or more running containers

docker container start CONTAINER_ID/NAME
Start one or more stopped containers

curl localhost:32774
Generate traffic

  • We need the host port number that was assigned to the container

docker container exec -it CONTAINER_ID/NAME /bin/bash
docker container exec -it CONTAINER_ID/NAME ls /usr/share/nginx/html
Run a command in a running container

i = interactive

t = TTY

  • The first form is for going into the container and seeing the filesystem
  • With exec you can execute any command that is present on the container
  • The second form gets the contents of that directory; you're not logging into the container to execute the command you need

docker container pause CONTAINER_ID/NAME
Pause all processes within one or more containers

docker container unpause CONTAINER_ID/NAME
Unpause all processes within one or more containers

docker container rm -f CONTAINER_ID/NAME
Remove one or more stopped/existing containers
-f = use this flag if the container is running; it bypasses the prompt

docker container prune
Delete/remove all stopped containers

Exec Command Explained via Scenario

When you run the docker run ubuntu command, it runs an instance of the ubuntu image and exits immediately // you wouldn't see the container if you were to list the running containers, because it exits right after executing its command.

Why?

Unlike a virtual machine, containers are not meant to host an operating system. Containers are meant to run a specific task or process, such as hosting an instance of a web server, application server, or database, or simply carrying out some kind of computation task. When the task is complete, the container exits. A container only lives as long as the process inside it is alive. Ex: if a web service inside the container is stopped or crashes, then the container exits.

Ex: docker run ubuntu sleep 5

That is executing a command when the container starts.

What if we want to execute a command on an already running container?

docker exec <ContainerName> cat /etc/hosts

Executes a command on a running container.
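A minimal sketch of the difference (the container name sleeper is made up):

# Keep a process alive so the container stays running
docker run -d --name sleeper ubuntu sleep 300
# Execute a second command inside the still-running container
docker exec sleeper cat /etc/hosts
# Clean up
docker rm -f sleeper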

Container flag commands 2

Example: docker container run busybox

A container only keeps running as long as its process is long-running. Ex:

  1. In the case of busybox, the command completes and the container exits
  2. In the case of an Nginx container, the web server keeps running

docker container run --help
Print usage // can't use -h because it's reserved for something else

docker container run --rm busybox
Automatically remove the container when it exits

For some instances, when we go and execute a task, we don't need this container to hang around afterwards

  • Use the --rm flag // to have the container automatically get deleted once it's completed

Docker Ports Command

Understand exposing and publishing ports on Docker: https://www.youtube.com/watch?v=6by0pCRQdsI&ab_channel=SelfTuts

The host machine contains the Docker containers

The outside world wants to talk to a Docker container // e.g. an Apache web server and its code files

Command: docker container run -p <host-port>:<container-port> IMAGE

Flow:

Exposing ports

3 ways of container networking (see the sketch after this list):

  1. Exposing ports on the container
     - docker container run -d --expose 3000 nginx // the --expose flag opens port 3000 on nginx; we haven't mapped anything to it yet
     - docker container run -d --expose 3000 -p 80:3000 nginx // host port 80 mapped to container port 3000
       - If you curl it, the connection fails, because no process in the container is listening on port 3000
     - docker container run -d --expose 3000 -p 8080:80 nginx // host port 8080 mapped to container port 80
  2. Mapping TCP & UDP ports
     - docker container run -d -p 8000:80/tcp -p 8080:80/udp nginx // can pass multiple -p flags
  3. Randomly assigning host ports with the -P flag
     - docker container run -d -P nginx
     - docker container port CONTAINER shows all port mappings for a specified container
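A minimal sketch of verifying a published port (the container name web is made up):

# Publish host port 8080 to container port 80
docker container run -d --name web -p 8080:80 nginx
# Show the mapping, e.g. "80/tcp -> 0.0.0.0:8080"
docker container port web
# Traffic reaches nginx through the mapping
curl localhost:8080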

Execute container command

3 ways we can execute a command on a container:

  1. Define a command within the Dockerfile, which will be executed when the container starts up
     - docker container run -d nginx
  2. Execute docker run and define the command we want the container to execute when it starts up, overriding the Dockerfile
     - docker container run -it nginx /bin/bash
       - You are no longer on the Docker host; you're in the container itself executing commands.
       - So any command you execute from now on, you'll be executing within the container
  3. The docker container exec command allows us to execute a command on a running container
     - This will only run while the container's primary process is running.
     - The command will be run in the default directory of the container.
       - However, there is a caveat: if the image was built using the WORKDIR directive, it will use that working directory instead.
     - Optionally, we can use the exec command to specify /bin/bash, which will take us to the prompt of the container; then we can execute the command we want from within the container itself.
     - docker container exec -it CONTAINER ls /usr/share/nginx/html
       - Lists the directory contents of /usr/share/nginx/html

Docker logging Command

Agenda:

  1. How to retrieve logs for our container
  2. How to go and send application log data to our container log

Callouts

 

Commands to retrieve log data:

docker container logs <Name>
Show information logged by a running container (that is not running as a service - docker swarm)

docker service logs <Name>
Show information logged by all containers participating in a service
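A small sketch of common log flags (standard docker logs options):

# Follow the stream and limit how much history is printed
docker container logs -f --tail 10 <Name>
# Only show entries from the last hour
docker container logs --since 1h <Name>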

Docker Networking

When you install Docker, it creates 3 networks automatically

Network Commands

docker network ls
List networks

docker network inspect <Name>
Get detailed info on a network

docker network create <Name>
Create a network

docker network create <Name> --internal
Create a network that is internal and not bound to any of the interfaces (ex: a database container)

docker network rm <Name>
Remove a network

docker network prune
Remove all unused networks

  • Risky because we're not controlling which networks get deleted

docker network connect <Network> <Container>
Connect a pre-existing container to a network

docker network disconnect <Network> <Container>
Disconnect a container from a network

Scenarios

Connect and remove a container from a network

  1. Create a container // Nginx
     - docker container run -d --name NetworkTest -p 8080:80 nginx
  2. Create a network
     - docker network create br01
  3. Connect the pre-existing container to the network
     - docker network connect br01 NetworkTest
  4. Check that the container is attached to the network we created
     - docker container inspect NetworkTest
  5. Remove the container from the network
     - docker network disconnect br01 NetworkTest

Network subnet/gateway commands

Gateway: the IP assigned to the bridge, and the bridge is attached to the interfaces on our server

docker network create --subnet 10.1.0.0/24 --gateway 10.1.0.1 br02
Create a network with a specific subnet and gateway

docker network create --subnet 10.1.0.0/16 --gateway 10.1.0.1 \
  --ip-range=10.1.4.0/24 --driver=bridge --label=host4network br04
Create a network whose containers get IPs from a subset of the subnet
--ip-range = define the subset of IPs we want to use

docker container run --name n1 -it --network br04 centos /bin/bash
docker container run --name <Name> -it --network <Network> <Image> <CMD>
Attach a container to a network automatically at creation time

docker container run -d --name n2 --ip 10.1.4.102 --network br04 nginx
docker container run -d --name <Name> --ip <IP> --network <Network> <Image>
Assign a specific IP to a container

docker container inspect n2 | grep IPAddr
Grab the container's IP address

Callouts

Scenarios

A client wants to containerize an app with 2 containers: one for the frontend, which stays publicly accessible, and a second for the database, which stays private.

Requirements

  1. App with frontend and database
  2. Two containers:
     - Frontend: public
     - Database: on a private network, not accessible via the internet

How?

  1. Create 2 networks
     - Public network for the frontend:

docker network create frontend

     - Private network for the database (--internal makes sure the network is internal and not bound to any of the interfaces):

docker network create localhost --internal

  2. Check your networks:

docker network ls

  3. Create a MySQL container attached to the private network:

docker container run -d --name database \
  --network localhost \
  -e MYSQL_ROOT_PASSWORD=GIVEPASSWORD \
  mysql:5.7

  4. Create an Nginx container connected to the public network:

docker container run -d \
  --name frontend-app \
  --network frontend \
  nginx:latest

  5. Connect nginx to the localhost network:

docker network connect localhost frontend-app

Docker storage

How docker stores data on a local file system

Layered Architecture

Image:

Container:

Categories for data storage

  1. Non-persistent // local storage
     - Data that is ephemeral (data that only exists for a short period of time). Ex: application code, because it's tied to the lifecycle of the container
     - Every container has non-persistent storage
       - This storage is created with the container and is a read/write layer
     - Non-persistent storage is tied to the lifecycle of the container // when the container is deleted, so is the data
  2. Persistent: data that is not ephemeral // data that sticks around, stored using volumes
     - Volumes
     - Volumes are decoupled from containers
       - Volume data lives outside of the lifecycle of the container
     - The application being ephemeral means that if our application talks to a database, we want to make sure that data is persistent, and therefore it lives in a volume; this allows us to upgrade the database container without losing any database data that is tied to the application

Non-persistent data

Mount data

Since we have to store the database's data (created inside the container in the /var/lib/mysql directory), we need a volume on the Docker host that takes the data from the container's directory.

When you create a volume, it creates a folder under /var/lib/docker/volumes/{VOL_NAME}, and the data is stored there.

So SOURCE is the volume created on the Docker host.

TARGET is the directory inside the container where the data is created.

docker container run -d --name <NAME> --mount type=<bind|volume>,source=<SOURCE>,target=<TARGET> <IMAGE>

2 ways to handle persistent data

  1. Bind mount: used when we want to mount a single file into our container. Ex: a configuration file
     - The reason using a bind mount to manage a configuration file is handy: it allows us to make a change to that file and then restart the container to pick it up. This way, we don't have to make a change to an image, rebuild it, and then redeploy the container.
     - Using the --mount flag:

docker container run -d --name <NAME> --mount type=bind,source=<SOURCE>,target=<TARGET> <IMAGE>

docker container run -d \
  --name nginx-bind-mount1 \
  --mount type=bind,source="$(pwd)"/target,target=/app \
  nginx

  2. Volume: the preferred way to store data. Using the -v flag:

docker container run -d \
  --name nginx-bind-mount2 \
  -v "$(pwd)"/target2:/app \
  nginx

     Steps: create the volume first, then create your container.

Docker Volume commands

docker volume -h
For help

docker volume ls
List volumes

docker volume create <NAME>
Create a volume

docker volume inspect <NAME>
Inspect a volume

docker volume rm <NAME>
Delete a volume

docker volume prune
Remove all unused volumes

Scenarios

You're working on a project that requires a database; deploy a MySQL container to your dev environment.

How?

Steps

docker volume create mysql_data

docker container run -d --name app-database \
  --mount type=volume,source=mysql_data,target=/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=GIVEPASSWORD \
  mysql:latest

Dockerfile

It's a set of instructions on how to build an image.

Instructions are the commands, defined in the Dockerfile, that are executed when building the image.

General guidelines

  1. Keep containers as ephemeral as possible.
     - Meaning you can stop and destroy your container at a moment's notice and create a new one to replace it with very little effort.
  2. Follow the sixth principle of the twelve-factor app, which is about processes.
     - Execute the app as one or more stateless processes.
     - Twelve-factor processes are stateless and share nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.
  3. Avoid including unnecessary files.
     - Don't add any bloat to the image; all that does is make the image bigger.
  4. Use the .dockerignore file to avoid having unnecessary files copied over to the image (a small example follows this list).
     - In the .dockerignore file, all you have to do is supply a list of files or directories you want excluded; when the image is built, those files won't get copied over.
  5. Use multi-stage builds to reduce the size of your Docker image.
     - In a multi-stage build, 2 Docker images are built from 1 Dockerfile.
       - The first image is used to create your build artifact. It includes all the tools necessary to build your image, as well as test it.
       - The second image, which is the image that will actually be created, is where you copy your build artifact to. This image only has the binaries and libraries necessary to run your application.
     - This greatly reduces the size of your image.
  6. Avoid installing unnecessary packages.
     - These also contribute to the size of the image.
  7. Decouple the application.
     - Ex: for WordPress we don't want one container running WordPress along with the database.
     - We decouple this by creating multiple containers: one for the WordPress application itself and another for the database.
  8. Minimize the number of layers,
     - because as we add additional layers, we add additional size to the image.
     - A good way of reducing the number of layers is by using multi-stage builds.
  9. Sort multi-line arguments alphanumerically.
     - This helps make your Dockerfile more readable.
     - Also, it's a good idea to have a space before your backslash.
  10. Leverage the build cache.
     - When Docker builds an image, it steps through each instruction in order. Since each layer is its own image, Docker looks for a cached image; if it finds one, it reuses it.
     - Optionally, when executing a docker image build, you can use the --no-cache flag and set it to true.

Now we're ready to go and build our first image.
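A small sketch of a .dockerignore for a Node project (the entries are illustrative, not from the course):

# Version control and local artifacts we don't want baked into the image
.git
node_modules
npm-debug.log
*.md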

Create a Dockerfile

  1. Set up the environment:

sudo yum install git -y
mkdir docker_images
cd docker_images
mkdir weather-app
cd weather-app
git clone https://github.com/linuxacademy/content-weather-app.git src

  2. Create the Dockerfile (note that Dockerfile comments must sit on their own lines):

# Create an image for the weather-app
FROM node
# Add additional metadata using LABEL
LABEL org.label-schema.version=v1.1
# Directory where your code gets stored
RUN mkdir -p /var/node
# ADD (like COPY) copies files/directories onto the container
ADD src/ /var/node/
# Set the working directory for RUN, CMD, COPY, ADD, ENTRYPOINT
WORKDIR /var/node
# This command is executed within the working directory
RUN npm install
# Expose the container's port
EXPOSE 3000
# Default command that executes when the container starts
CMD ./bin/www

Instructions

 

Working with Instructions

FROM: Initializes a new build stage and sets the base image

RUN: Executes any commands in a new layer

CMD: Provides a default for an executing container. There can only be one CMD instruction in a Dockerfile

LABEL: Adds metadata to an image

EXPOSE: Informs Docker that the container listens on the specified network ports at runtime

ENV: Sets the environment variable <key> to the value <value>

ADD: Copies new files, directories, or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>

COPY: Copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>

ENTRYPOINT: Allows for configuring a container that will run as an executable

VOLUME: Creates a mount point with the specified name and marks it as holding externally mounted volumes from the native host or other containers

USER: Sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile

WORKDIR: Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it in the Dockerfile

ARG: Defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag

ONBUILD: Adds a trigger instruction to the image that will be executed at a later time, when the image is used as the base for another build

HEALTHCHECK: Tells Docker how to test a container to check that it is still working

SHELL: Allows the default shell used for the shell form of commands to be overridden

Commands

docker image build -t linuxacademy/weather-app:v1 .
docker image build -t <NAME>:<TAG> -f <Dockerfile_Path> .

Build the application image

-t = specify a name and tag for the image

-f = path to the Dockerfile // not needed here, since the Dockerfile is in the weather-app directory we are building from

docker container run -d --name weather-app1 -p 8081:3000 linuxacademy/weather-app:v1

Create a Docker container

Environment variables

Use the --env flag to pass an environment variable when starting a container:

--env <KEY>=<VALUE>

Use the ENV instruction in the Dockerfile:

ENV <KEY>=<VALUE>

ENV <KEY> <VALUE>

# Create an image for the weather-app
FROM node
LABEL org.label-schema.version=v1.1
# Example 1
ENV NODE_ENV="development"
# Example 2
ENV PORT 3000

RUN mkdir -p /var/node
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install
EXPOSE $PORT
CMD ./bin/www

Build and Run Containers with env

Build the weather-app image:
docker image build -t linuxacademy/weather-app:v2 .

Inspect the image to see the environment variables:
docker image inspect linuxacademy/weather-app:v2

Deploy the weather-dev application:
docker container run -d --name weather-dev -p 8082:3001 --env PORT=3001 linuxacademy/weather-app:v2

Inspect the development container to see the environment variables:
docker container inspect weather-dev

Deploy the weather-app to production:
docker container run -d --name weather-app2 -p 8083:3001 --env PORT=3001 --env NODE_ENV=production linuxacademy/weather-app:v2

Inspect the production container to see the environment variables:
docker container inspect weather-app2

Get the logs for weather-app2:
docker container logs weather-app2

Deploy the weather-prod container:
docker container run -d --name weather-prod -p 8084:3000 --env NODE_ENV=production linuxacademy/weather-app:v2

Build arguments

Use the --build-arg flag when building an image:

--build-arg [NAME]=[VALUE]

Use the ARG instruction in the Dockerfile:

ARG [NAME]=[DEFAULT_VALUE]

# Create an image for the weather-app
FROM node
LABEL org.label-schema.version=v1.1
# Supply a build argument with a default value; if no argument is supplied during the build, the default is used
ARG SRC_DIR=/var/node

# Reference the build variable
RUN mkdir -p $SRC_DIR
ADD src/ $SRC_DIR
WORKDIR $SRC_DIR
RUN npm install
EXPOSE 3000
CMD ./bin/www

Build and Run Containers with Build env

Build the weather-app image:

docker image build -t linuxacademy/weather-app:v3 --build-arg SRC_DIR=/var/code .

Inspect the image:
docker image inspect linuxacademy/weather-app:v3 | grep WorkingDir

Create the weather-app container:
docker container run -d --name weather-app3 -p 8085:3000 linuxacademy/weather-app:v3

Verify that the container is working by executing curl:
curl localhost:8085

Build commands & flags

docker image build -t <NAME>:<TAG> .
-t lets you set the image name and tag

-f, --file string
Name of the Dockerfile // useful when you have a duplicate Dockerfile in the same directory and want to test that file first
docker image build -t linuxacademy/weather-app:path-example2 --label com.linuxacademy.version=v1.8 -f Dockerfile.test .

--force-rm
Always remove intermediate containers created during the build process

--label
Set metadata for an image

--rm
Remove intermediate containers after a successful build

--ulimit
Ulimit options

Building an image by piping the Dockerfile through STDIN:

docker image build -t <NAME>:<TAG> -<<EOF
Build instructions
EOF

docker image build -t linuxacademy/nginx:stind --rm -<<EOF
FROM nginx:latest
VOLUME ["/usr/share/nginx/html/"]
EOF

Building an image using a URL (REF can be a tag or branch; DIRECTORY is used if the Dockerfile is in a subdirectory):

docker image build -t <NAME>:<TAG> <GIT_URL>#<REF>
docker image build -t <NAME>:<TAG> <GIT_URL>#:<DIRECTORY>
docker image build -t <NAME>:<TAG> <GIT_URL>#<REF>:<DIRECTORY>

docker image build -t linuxacademy/weather-app:github https://github.com/linuxacademy/content-weather-app.git#remote-build

Building an image from a tar archive:

docker image build -t <NAME>:<TAG> - < <FILE>.tar.gz

- < = import the tarball via STDIN

Multi-stage build

Use a multi-stage build to make the image small.

It allows us to use multiple images within a single Dockerfile to build the final product.

This example has 2 stages.

By default, stages are not named; they are numbered with integers starting at 0.

We can name the stages; the name is an arbitrary value you can set.

In the last stage, where we copy the artifact, we need to reference the first stage.

# Create an image for the weather-app using a multi-stage build
# Name the build stage "build"
FROM node AS build
RUN mkdir -p /var/node/
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install

FROM node:alpine
ARG VERSION=V1.1
LABEL org.label-schema.version=$VERSION
ENV NODE_ENV="production"
# Reference the build stage with --from
COPY --from=build /var/node /var/node
WORKDIR /var/node
EXPOSE 3000
ENTRYPOINT ["./bin/www"]

Tag Commands

The problem with ordinary tagging is that we don't know what source code is tied to an image.

The solution is to tag the image with the commit hash.

Add a name and an optional tag with -t or --tag, in the name:tag format:
docker image build -t <name>:<tag> .
docker image build --tag <name>:<tag> .

Use our Git commit hash as the image tag. Run this command in the source directory of the code to get the hash:
git log -1 --pretty=%H

Build the image using the Git hash as the tag:
docker image build -t linuxacademy/weather-app:<GIT_HASH> .

Use docker tag to create a new tagged image (tag an image off of another image):
docker tag <SOURCE_IMAGE>:<TAG> <TARGET_IMAGE>:<TAG>

Tag the weather-app as latest using the image tagged with the commit hash:
docker image tag linuxacademy/weather-app:<GIT_HASH> linuxacademy/weather-app:latest

Example with a hash tag. Get the commit hash:

cd docker_images/weather-app/src
git log -1 --pretty=%H
cd ../

Docker Hub

Image Push Command

Create a Docker Hub account: https://hub.docker.com/

Create an image for Docker Hub:
docker image tag <IMAGE_NAME>:<TAG> <USERNAME>/<IMAGE_NAME>:<TAG>

Push the image to Docker Hub:
docker image push <USERNAME>/<IMAGE_NAME>:<TAG>
docker image push <USERNAME>/weather-app:<HASH>

Objectives 

Example

# Create an image for the weather-app using a multi-stage build
FROM node AS build
RUN mkdir -p /var/node/
ADD src/ /var/node/
WORKDIR /var/node
RUN npm install

FROM node:alpine
ARG VERSION=V1.1
LABEL org.label-schema.version=$VERSION
ENV NODE_ENV="production"
COPY --from=build /var/node /var/node
WORKDIR /var/node
EXPOSE 3000
ENTRYPOINT ["./bin/www"]

# Get the weather-app code:
git clone https://github.com/linuxacademy/content-weather-app.git src

# Use the Git commit hash as the image tag:
cd src
git log -1 --pretty=%H
cd ../

# Build the image:
docker image build -t <USERNAME>/weather-app:<HASH> --build-arg VERSION=1.5 .

# Tag the image before pushing it to Docker Hub:
docker image tag linuxacademy/weather-app:<HASH> <USERNAME>/weather-app:<HASH>

# Push the image to Docker Hub:
docker login
docker image push <USERNAME>/weather-app:<HASH>

# Tag the latest image:
docker image tag <USERNAME>/weather-app:<HASH> <USERNAME>/weather-app:latest

# Push the latest image to Docker Hub:
docker login <USERNAME>
docker image push <USERNAME>/weather-app:latest

Lab challenge

  1. You want to dockerize your application

How?

  2. Tagging and Pushing Images to Docker Hub

How?

Image history

We have the ability to see how an image was built.
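The command for this, run against any local image:

# Shows each layer, the instruction that created it, and its size
docker image history <IMAGE_NAME>:<TAG>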

Container management

Container processes commands

docker container top <NAME>
Display the running processes of a container

docker container exec -it <NAME> /bin/bash
top
Run the top command locally // i.e. once you are logged in to the container

docker container stats <NAME>
Display a live stream of a container's resource usage statistics

Start containers automatically

By default, a container's restart policy is set to "no", so no container we create will restart on its own.

If the Docker daemon stops or the server is rebooted, those containers are not going to start back up.

There are 4 options for container auto-start. To configure the restart policy for a container, use the --restart flag:

no
The default. Do not automatically restart the container.

on-failure
Restart the container if it exits due to an error, which manifests as a non-zero exit code.

always
Always restart the container if it stops.

unless-stopped
Similar to always, except that when the container is stopped, it is not restarted even after the Docker daemon restarts.

Example where the container will not start:
docker container stop <NAME>
sudo systemctl restart docker
If you go and stop the container yourself, it's not going to restart; this container will not restart even when the Docker daemon is restarted.

Example command

docker container run -d --name <NAME> --restart <RESTART> <IMAGE>

Automatically Restarting a container:
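For instance (the container name web is made up):

# Survives daemon restarts unless it was manually stopped
docker container run -d --name web --restart unless-stopped nginx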

Container events

docker system events
docker system events --since 'TIME_PERIOD'
Get real-time events from the Docker server

Filtering events:
docker system events --filter <FILTER_NAME>=<FILTER> --since '1h'
docker system events --filter type=container --since '1h' // ex: filter events for the container object type
docker system events --filter type=container --filter event=start --since '1h'

event=start // get an event when a container starts, including any time a new container starts
event=attach
event=die
event=stop

Managing stopped containers

docker container ls -a -q
List all container IDs, including running and stopped
-q = quiet (IDs only)

  • Pass these IDs to a script to manipulate the containers in some way (e.g., restart all stopped containers via a bash script; see the sketch below)

docker container ls -a -q -f status=exited
List the IDs of containers that have exited
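A one-line sketch of that scripted use:

# Start every container whose status is "exited"
docker container start $(docker container ls -a -q -f status=exited)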

Docker Portainer

Used to manage Docker and Docker Swarm.

It's an alternative to the Docker CLI that lets you do everything through a GUI.

Watchtower

Scenarios

  1. Portainer

You have several Docker servers on which you manage containers for clients.

Management of the Docker hosts has become a bit of a pain.

You're looking for an all-in-one tool: Portainer.

We'll install Portainer and use it to manage our Docker host.

# Create a volume for Portainer's data:
docker volume create portainer_data

# Create the Portainer container:
docker container run -d --name portainer -p 8080:9000 \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data portainer/portainer

docker container ls

# The ls output should include:
rivethead42/weather-app:latest
NODE_ENV production

  2. Watchtower

After having to manually update several containers, you have decided to look into an automated solution.

Watchtower: updates all running containers when changes are made to the image that the container is running.
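A minimal sketch of running it (assuming the containrrr/watchtower image; it needs the Docker socket to manage other containers):

docker container run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower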

Docker Compose

Idea

With Docker you'll be deploying microservices, and as more and more of these services build up, they become a little bit more difficult to manage.

Example

Let's say we have an application that is built using multiple microservices.

We have the web frontend, an ordering system, a catalog, and then the database that they all talk to.

Now deploying all these services individually can be a bit of a pain.

If you're running your application in a production environment, everything needs to work whenever you make changes. And if not, your customers will experience disruption

Issues resolved by Docker Compose

Docker Compose allows you to go and declaratively describe your entire application in a single YAML file.

Then you're able to go and take that file and deploy your application and then manage the entire lifecycle of it using Docker Compose.

Docker Compose File explained

Callouts:

Docker-Compose File

# Create a docker-compose file:
vi docker-compose.yml

# docker-compose.yml contents:
version: '3'
services:
 web:
   image: nginx
   ports:
   - "8080:80"
   volumes:
   - nginx_html:/usr/share/nginx/html/
   links:
   - redis
 redis:
   image: redis
volumes:
 nginx_html: {}
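Bringing it up and checking the published port (run from the directory holding the file):

docker-compose up -d
curl localhost:8080   # nginx answers on the published port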

Compose with Volume and Network

Objective

version: '3'
services:
 ghost:
   container_name: ghost
   image: ghost:latest
   ports:
     - "80:2368"
   environment:
     - database__client=mysql
     - database__connection__host=mysql
     - database__connection__user=root
     - database__connection__password=P4SSw0rd0!
     - database__connection__database=ghost
   volumes:
     - ghost-volume:/var/lib/ghost
   networks:
     - ghost_network
     - mysql_network
   depends_on:
     - mysql

 mysql:
   container_name: mysql
   image: mysql:5.7
   environment:
     - MYSQL_ROOT_PASSWORD=P4SSw0rd0!
   volumes:
     - mysql-volume:/var/lib/mysql
   networks:
     - mysql_network

volumes:
 ghost-volume:
 mysql-volume:

networks:
 ghost_network:
 mysql_network:

Management Commands

build: Build or rebuild services
bundle: Generate a Docker bundle from the Compose file
config: Validate and view the Compose file
create: Create services
down: Stop and remove containers, networks, images, and volumes
events: Receive real-time events from containers
exec: Execute a command in a running container
help: Get help on a command
images: List images
kill: Kill containers
logs: View output from containers
pause: Pause services
port: Print the public port for a port binding
ps: List containers
pull: Pull service images
push: Push service images
restart: Restart services
rm: Remove stopped containers
run: Run a one-off command
scale: Set number of containers for a service
start: Start services
stop: Stop services
top: Display the running processes
unpause: Unpause services
up: Create and start containers
version: Show the Docker Compose version information

Compose Commands

docker-compose up -d
Create a Compose service

  • Make sure you run this command where your docker-compose.yml file is located // if not, you can supply the path with -f
  • This deploys everything listed in the docker-compose file

docker-compose ps
List containers created by Compose

docker-compose stop
Stop a Compose service

docker-compose start
Start a Compose service

docker-compose restart
Restart a Compose service

docker-compose down
Delete a Compose service

docker-compose build
If you make changes to a Dockerfile after your Compose service is up and running, you need to make sure it gets rebuilt before executing docker-compose up

Example

You have an application that needs to be deployed to your production Docker server. You need to create 3 containers that will be load balanced by Nginx, so you need a docker-compose file containing the following:

version: '3'
services:
  weather-app1:
    build:
      context: ./weather-app
      args:
        - VERSION=v2.0
    ports:
      - "8080:3000"
    networks:
      - weather_app
    environment:
      - NODE_ENV=production
  weather-app2:
    build:
      context: ./weather-app
      args:
        - VERSION=v2.0
    ports:
      - "8081:3000"
    networks:
      - weather_app
    environment:
      - NODE_ENV=production
  weather-app3:
    build:
      context: ./weather-app
      args:
        - VERSION=v2.0
    ports:
      - "8082:3000"
    networks:
      - weather_app
    environment:
      - NODE_ENV=production
  nginx:
    build: ./nginx
    tty: true
    ports:
      - "80:80"
    networks:
      - weather_app
      - frontend

networks:
  frontend:
  weather_app:
    internal: true

Callouts

Container orchestration

You can run a single instance of your application (node.js)

Problem:

What container orchestration solves:

  1. Automatically adding additional hosts to support the user load
  2. Not just clustering and scaling: it also provides support for advanced networking between containers across the different hosts
  3. Load balancing user requests across the different hosts
  4. Sharing storage between the hosts
  5. Configuration management and security within the cluster

Container orchestration solutions

  1. Docker Swarm
     - Easy to get started with, but lacks some autoscaling features for complex production-grade applications
  2. Kubernetes
     - Difficult to get started with, but provides a lot of options to customize deployments and has support for many different vendors
     - Supported by GCP, Azure, and AWS
  3. Mesos
     - Difficult to set up and get started with, but supports many advanced features

Docker Swarm

What is Docker Swarm

You can now combine multiple Docker machines/hosts/servers into a single cluster. Docker swarm takes care of distributing your services or application instances onto separate hosts, for high availability and for load balancing across different systems and hardware.

2 components

  1. Swarm cluster (enterprise-grade and secure)
     - Manages one or more Docker nodes as a cluster
       - When you initialize a swarm, that node is always going to be a manager
     - Encrypted distributed cluster store
     - Encrypted networks
       - By default, communication between the nodes is encrypted
     - Secure join tokens
       - Swarm uses secure join tokens for both manager and worker nodes
  2. Orchestration engine
     - API for deploying and managing microservices
     - Declarative manifest files for defining apps
     - Provides the ability to scale apps and perform rolling updates and rollbacks
     - Swarm was initially a separate product layered on Docker; since Docker 1.12 it has been part of the engine

Cluster

A swarm consists of one or more Docker nodes. Nodes are either managers or workers, and the first node is always going to be a manager.

Swarm can run on anything from physical servers to virtual machines and cloud instances; it can even run on a Raspberry Pi.

Managers:

Workers:

State is held in etcd

Security

Orchestration

Security 

Orchestration

Setup 

To set up Docker swarm you must have one or more Docker hosts/servers with Docker installed on them. Then you designate one host to be the swarm manager/master and the others to be workers.

TODO:

  1. Create 2 servers for your workers and 1 for the manager
  2. Install Docker on both worker servers
  3. On your manager, initialize Docker swarm
     - --advertise-addr flag: specifies the address that will be advertised to other members of the swarm. This includes API access as well as the overlay network.
     - When we initialize a swarm manager, we want to use the private IP // the swarm manager host's private IP

# Initialize the manager:

docker swarm init \
--advertise-addr [PRIVATE_IP]
# Swarm manager server’s private IP

  4. Step 3 provides a join token, which we'll use on both of the worker nodes.

# Add the workers to the cluster:

docker swarm join --token [TOKEN] \
[PRIVATE_IP]:2377

Perform docker swarm init on the swarm manager to initialize the swarm.

Perform docker swarm join --token <TOKEN> on the workers to join the swarm.

Now you are ready to create services and deploy them on the swarm cluster.

How it works

A key component of swarm orchestration is the Docker service. Docker services are one or more instances of a single application or service that run across the nodes in the swarm cluster. For example, we could create a Docker service to run multiple instances of a web application across the worker nodes in our swarm cluster. For this to work, run the following command:

docker service create --replicas=3 my-web-server

What’s it for?

Swarm commands

docker swarm init
Initialize a swarm on the manager

docker swarm join --token <TOKEN>
Run on worker nodes, with the manager's token, to join them to the swarm

docker node ls
List nodes

docker node inspect [NODE_NAME]
Inspect a node

  • Use the hostname or ID

Info you get back:

  • Spec
    • Role: manager
    • Availability: active
  • Description
    • Hostname
    • Platform: architecture
    • OS
  • Engine
    • Docker version and plugins
  • TLS info
    • Certificate
  • Status
    • State: ready
    • Addr: private IP
  • Manager status
    • Reachability: true
    • Addr: private IP with port

docker node promote [NODE_NAME]
Promote a worker to a manager

  • NODE_NAME = the node you want to promote // this will turn its manager status to reachable
  • Why? Maybe the leader is down for maintenance

docker node demote [NODE_NAME]
Demote a manager back to a worker when we no longer need it

docker node rm -f [NODE_NAME]
Remove a node from the swarm (the node must be demoted first)

  • -f = needed because the node is active

docker swarm leave
Make a node leave the swarm

  • The node entry will still be listed

docker swarm join-token [worker|manager]
Get the join token

docker swarm join --token [TOKEN] \
<PRIVATE_IP>:2377
Make the node rejoin the swarm

  • You will then have 2 copies of the worker node: one in Down status and one active
  • Remove the Down host via the rm command // supply its ID

Service

An application that is deployed out to a Docker host running in swarm mode is deployed as a service.

When a service is created, it is accepted by the swarm manager, and the service definition represents the desired state.

Based on the number of replicas, the swarm schedules replica tasks; each task invokes a single container, and these containers run in isolation.

Looking at the diagram, we have a single service.

When the containers are live, the scheduler recognizes the tasks as being in a running state.

Example

# Creating a service:

docker service create -d --name [NAME] \
-p [HOST_PORT]:[CONTAINER_PORT] \
--replicas [REPLICAS] \
[IMAGE] [CMD]

Service commands

docker service create --replicas=3 my-web-server

docker service create -d --name [NAME] \

-p [HOST_PORT]:[CONTAINER_PORT] \

--replicas [REPLICAS] \

[IMAGE] [CMD]

To create a service

docker service ls

List services

docker service inspect [NAME]

Inspecting a service

docker service logs [NAME]

Getting a logs for service

docker service ps [NAME]

LIst all tasks of a service

docker service scale [NAME]=[REPLICAS]

Scaling a service up and down

docker service update [OPTIONS] [NAME]

Updating a service

Use docker service -h for more help

  • -d // update services in detach mode
  • --dns- // update DNS Server
  • --entrypoint // override entrypoint
  • --env-add list // update env variables
  • Update CPU & memory limits
  • adding & removing mount  

Swarm networking

3 types of networks in Docker:

  1. Bridge: the network containers attach to by default
     - A private internal network created by Docker on the host
     - All containers attach to this network by default and get an internal IP address, usually in the 172.17.x.x range
     - Containers can access each other using this internal IP if required
     - To access these containers from the outside world, map ports of these containers to ports on the Docker host
  2. None:
  3. Host:
     - You don't need to use the -p flag to map the container's port to the web server

Bridge networking concept

For example, we have multiple Docker hosts running containers. Each Docker host has its own internal private network in the 172.17.x.x range, allowing containers running on each host to communicate with each other.

However, containers across hosts have no way of communicating with each other unless you publish the ports on those containers and set up some kind of routing yourself. This is where overlay networks come into play.

With Docker swarm you can create a new network of type overlay, which creates an internal private network that spans all the nodes participating in the swarm cluster. We can then attach containers or services to this network using the --network option:

docker network create --driver overlay --subnet 10.0.9.0/24 my-overlay-network

docker service create --replicas 2 --network my-overlay-network nginx

Ingress network

Let's say you have a single-instance application listening on container port 5000, and for the outside world to connect to your web container it needs to be mapped to host port 80. Now if you were to create 2 replicas, you cannot map container port 5000 to host port 80 again; we cannot have 2 mappings on the same host port. // This issue is resolved by ingress networking.

When you create a Docker swarm, it automatically creates an ingress network. It has a built-in load balancer that redirects traffic from the published port (in this case port 80) to all the mapped ports (the ports 5000 on each container).

You don't need to do any manual configuration; you just create the service you need with the service create command and publish the port using the -p option. With this, the ingress network and internal load balancing simply work out of the box.

How the ingress network works when there are multiple nodes in the Docker swarm cluster:

What's given

How might this work without ingress networking?

How do we expect the user to access our services in a swarm cluster of multiple nodes?

With the ingress network

Embedded DNS

What’s given

How?

  1. Use the internal IP address assigned to the mysql container // not ideal
  2. Use the container name
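A minimal sketch of name resolution via the embedded DNS (the names app_net and web are made up):

docker network create app_net
docker container run -d --name web --network app_net nginx
# "web" resolves through Docker's embedded DNS on the shared network
docker container run --rm --network app_net busybox ping -c 1 web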

Swarm Network commands

docker network create -d overlay NET_NAME
Create an overlay network
-d = driver name

docker network create -d overlay --opt encrypted NET_NAME
To encrypt the network's data, use the --opt encrypted flag

docker service create -d --name nginx_overlay --network NET_NAME -p 8081:80 --replicas 2 nginx:latest
Create a service attached to an overlay network

docker service update --network-add NET_NAME SERVICE_NAME
Add a service to another network

docker network rm NET_NAME
Remove a network

Swarm Volumes

Issue:

Volume commands

docker plugin install [PLUGIN] [OPTIONS]
Add a plugin

docker plugin ls
List plugins

docker plugin disable ID
Disable a plugin

  • A plugin must be disabled before it can be removed

docker plugin remove ID
Remove a plugin

Create a volume using a driver:

docker volume create -d [DRIVER] [NAME]

Create a service with a mounted volume:

docker service create -d --name [NAME] \
--mount type=[TYPE],src=[SOURCE],dst=[DESTINATION] \
-p [HOST_PORT]:[CONTAINER_PORT] \
--replicas [REPLICAS] \
[IMAGE] [CMD]

Create a volume on the manager:

docker volume create -d local portainer_data

Create a Portainer service that uses a volume:

docker service create \
--name portainer \
--publish 8000:9000 \
--constraint 'node.role == manager' \
--mount type=volume,src=portainer_data,dst=/data \
--mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
portainer/portainer \
-H unix:///var/run/docker.sock

Deploy swarm stack

A stack is deployed using a docker-compose file.

docker stack deploy --compose-file docker-compose.yml prometheus
Deploy a stack (here named prometheus)

docker stack ls
List stacks

docker service ls
List services

docker stack rm STACK_NAME
Remove a stack

sudo chown nfsnobody:nfsnobody -R /var/lib/docker/volumes/prometheus_data
Fix volume permissions

Example

# prometheus.yml contents:

global:
 scrape_interval: 15s
 scrape_timeout: 15s
 evaluation_interval: 15s

scrape_configs:
 - job_name: prometheus
   scrape_interval: 5s
   static_configs:
   - targets:
     - prometheus_main:9090

 - job_name: nodes
   scrape_interval: 5s
   static_configs:
   - targets:
     - [MANAGER]:9100
     - [WORKER1]:9100
     - [WORKER2]:9100

 - job_name: cadvisor
   scrape_interval: 5s
   static_configs:
   - targets:
     - [MANAGER]:8081
     - [WORKER1]:8081
     - [WORKER2]:8081

# docker-compose.yml contents:

version: '3'
services:
 main:
   image: prom/prometheus:latest
   container_name: prometheus
   ports:
     - 8080:9090
   command:
     - --config.file=/etc/prometheus/prometheus.yml
     - --storage.tsdb.path=/prometheus/data
   volumes:
   - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
   - data:/prometheus/data
   depends_on:
     - cadvisor
     - node-exporter
 cadvisor:
   image: google/cadvisor:latest
   container_name: cadvisor
   deploy:
     mode: global
   restart: unless-stopped
   ports:
     - 8081:8080
   volumes:
     - /:/rootfs:ro
     - /var/run:/var/run:rw
     - /sys:/sys:ro
     - /var/lib/docker/:/var/lib/docker:ro
 node-exporter:
   image: prom/node-exporter:latest
   container_name: node-exporter
   deploy:
     mode: global
   restart: unless-stopped
   ports:
     - 9100:9100
   volumes:
     - /proc:/host/proc:ro
     - /sys:/host/sys:ro
     - /:/rootfs:ro
   command:
     - '--path.procfs=/host/proc'
     - '--path.sysfs=/host/sys'
     - --collector.filesystem.ignored-mount-points
     - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
 grafana:
   image: grafana/grafana
   container_name: grafana
   ports:
     - 8082:3000
   volumes:
   - grafana_data:/var/lib/grafana
   - grafana_plugins:/var/lib/grafana/plugins
   environment:
     - GF_SECURITY_ADMIN_PASSWORD=P4ssW0rd0!
   depends_on:
     - prometheus
     - cadvisor
     - node-exporter

volumes:
 data:
 grafana_data:
 grafana_plugins:

Example 2

After a few months of debate, we’ve decided to set up a cooking blog. After researching different platforms, we've chosen Wordpress with MySQL. We have a swarm cluster already set up where we host customer sites. To make maintaining Wordpress easier, we’ve decided to set it up as a stack. We'll need to create the Docker Compose file, deploy the stack, and finish the Wordpress setup.

Todo:

# Docker compose file contents:

version: '3.1'

services:
 db:
   image: mysql:5.7
   volumes:
     - db_data:/var/lib/mysql
   networks:
     mysql_internal:
   environment:
     MYSQL_ROOT_PASSWORD: P4ssw0rd0!
     MYSQL_DATABASE: wordpress
     MYSQL_USER: wordpress
     MYSQL_PASSWORD: P4ssw0rd0!

 blog:
   depends_on:
     - db
   image: wordpress
   networks:
     mysql_internal:
     wordpress_public:
   ports:
     - "80:80"
   environment:
     WORDPRESS_DB_HOST: db:3306
     WORDPRESS_DB_USER: wordpress
     WORDPRESS_DB_PASSWORD: P4ssw0rd0!

volumes:
 db_data:

networks:
 mysql_internal:
   internal: true
 wordpress_public:
What’s Given