Custom default backend error pages for the Kubernetes ingress

The Kubernetes NGINX ingress controller has a default backend that serves plain error pages (404, 502, etc.) with the nginx string in them. Sometimes we need to show a valid custom error page instead of the pages served by this default backend.

The process is simple. We create a ConfigMap with the custom error pages, create a Deployment using the image k8s.gcr.io/ingress-nginx/nginx-errors that mounts the ConfigMap at /www, and create a Service that will be used as the default backend service of the ingress controller.
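
As a rough sketch, the three objects look something like this (object names, namespace, image tag and the HTML snippets are assumptions for illustration; the linked manifests below are the actual files used):

[code language="yaml"]
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-error-pages
  namespace: ingress-nginx
data:
  "404": |
    <html><body><h1>Sorry, this page does not exist</h1></body></html>
  "503": |
    <html><body><h1>Service temporarily unavailable</h1></body></html>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-errors
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-errors
  template:
    metadata:
      labels:
        app: nginx-errors
    spec:
      containers:
      - name: nginx-error-server
        image: k8s.gcr.io/ingress-nginx/nginx-errors:0.48.1   # tag is an assumption
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: custom-error-pages
          mountPath: /www
      volumes:
      - name: custom-error-pages
        configMap:
          name: custom-error-pages
          items:
          - key: "404"
            path: "404.html"
          - key: "503"
            path: "503.html"
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-errors
  namespace: ingress-nginx
spec:
  selector:
    app: nginx-errors
  ports:
  - port: 80
    targetPort: 8080
[/code]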

Configmap Manifest : https://github.com/divyaimca/my-k8s-test-projects/blob/main/ingress-nginx/custom-default-backend.yaml#L72-L539

Note : update the custom error pages under data with the required HTML content for each error code

Deployment Manifest : https://github.com/divyaimca/my-k8s-test-projects/blob/main/ingress-nginx/custom-default-backend.yaml#L19-L70

Service Manifest : https://github.com/divyaimca/my-k8s-test-projects/blob/main/ingress-nginx/custom-default-backend.yaml#L2-L17

Modification in ingress controller arguments : https://github.com/divyaimca/my-k8s-test-projects/blob/main/ingress-nginx/ingress-deploy.yaml#L337

Note: here, update the service name so that it matches the custom error Service created above
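
In the controller Deployment this is just one extra argument on the nginx-ingress-controller container, roughly as below (the namespace/service value assumes the nginx-errors Service sketched earlier):

[code language="yaml"]
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --default-backend-service=ingress-nginx/nginx-errors
            # ...keep the rest of the existing args unchanged
[/code]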

Next, we need to update the Ingress definition of the application that should use the custom error pages.

We need to add two annotations for this:

  1. One pointing to the custom error Service name.
  2. One listing the HTTP error codes that should be served from the custom backend.

Ingress manifest update Example : https://github.com/divyaimca/my-k8s-test-projects/blob/main/rabbitmq_kustom/rabbitmq-ingress.yaml#L9-L10
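
The annotations look roughly like this (the Service name and the list of status codes below are assumptions; use the values from your own setup and the linked manifest, and note the Service must exist in the same namespace as the Ingress):

[code language="yaml"]
metadata:
  annotations:
    nginx.ingress.kubernetes.io/default-backend: nginx-errors
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
[/code]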

Now, if a request served through the ingress results in one of those errors, the ingress serves the customised error pages instead of the default backend error pages.

Docker Issue : devmapper: Thin Pool is less than minimum required, use dm.min_free_space option to change behavior

Sometimes, while building Docker images or doing other Docker operations, we might encounter a thin pool space issue like this:

devmapper: Thin Pool has 132480 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:

[root@rvm-c431e558 proj_odi11g]# docker info | grep 'loop file'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Data loop file: /data/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /data/lib/docker/devicemapper/devicemapper/metadata
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Follow these steps to increase the size of the thin pool.

In this example, the thin pool is 100 GB, and is increased to 200 GB.

List the sizes of the devices.

[root@rvm-c431e558 proj_odi11g]# ls -lh /data/lib/docker/devicemapper/devicemapper/
total 89G
-rw------- 1 root root 100G Mar 19 08:45 data
-rw------- 1 root root 2.0G Mar 19 08:45 metadata

Increase the size of the data file to 200 GB using the truncate command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

# truncate -s 200G /data/lib/docker/devicemapper/devicemapper/data
Verify the file size changed.

#  ls -lh /data/lib/docker/devicemapper/devicemapper/

total 1.2G
-rw------- 1 root root 200G Apr 14 08:47 data
-rw------- 1 root root 2.0G Apr 19 13:27 metadata

The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

100

# losetup -c /dev/loop0

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

200

Reload the devicemapper thin pool.

a. Get the pool name first. The pool name is the first field, delimited by ` :`. This command extracts it.

#  dmsetup status | grep ' thin-pool ' | awk -F ': ' {'print $1'}

docker-0:39-1566-pool

b. Dump the device mapper table for the thin pool.

#  dmsetup table docker-0:39-1566-pool

0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing

c. Calculate the total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100 GB file has 209715200 such sectors. If you double this number to 200 GB, you get 419430400 sectors.
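
For example, the 200 GB sector count can be computed directly in the shell (assuming 512-byte sectors):

# echo $(( 200 * 1024 * 1024 * 1024 / 512 ))

419430400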

d. Reload the thin pool with the new sector number, using the following three dmsetup commands.

# dmsetup suspend docker-0:39-1566-pool

# dmsetup reload docker-0:39-1566-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'

# dmsetup resume docker-0:39-1566-pool

Now the thin pool size is increased to 200 GB.

Accessing Host from Docker Container

Sometimes we need services running on the host machine to be accessible from a Docker container. For example, in one of my projects we needed to connect to an Oracle DB (port 1521) on the host from code running inside the container.

By default, containers cannot reach services on the host directly unless the host firewall allows packets arriving on the Docker network interface to be ACCEPTed.

The Docker container communicates with the host machine through its gateway IP, so first find the gateway IP from inside the container.

Run the commands below inside the container to get the gateway IP, and observe that I am not able to connect to port 1521 yet.

[code language="bash"]

# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) : Connection timed out

# ip route | awk '/^default via /{print $3}'

172.18.0.1

[/code]

The next task is to get the name of the Docker network interface the container is bound to. In most cases it is docker0.

But it can also be a custom bridge, so check the ifconfig output for the interface whose inet addr matches the container's gateway IP.

[code language="bash"]

# ifconfig

br-4e83b57c54cf Link encap:Ethernet  HWaddr 02:42:AF:CD:B5:DA

inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

# ip addr show br-4e83b57c54cf

10: br-4e83b57c54cf: mtu 1500 qdisc noqueue state UP

link/ether 02:42:af:cd:b5:da brd ff:ff:ff:ff:ff:ff

inet 172.18.0.1/16 scope global br-4e83b57c54cf

valid_lft forever preferred_lft forever

[/code]

Here the interface name is : br-4e83b57c54cf

Now add an iptables rule on the Linux host:

[code language="bash"]

iptables -A INPUT -i br-4e83b57c54cf -j ACCEPT

[/code]

Or, with firewalld:

[code language="bash"]
# firewall-cmd --permanent --zone=trusted --change-interface=br-4e83b57c54cf
# firewall-cmd --reload

[/code]

Now try to access the host port from the container.

[code language="bash"]

# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) open

[/code]

There are other approaches available on the internet as well, but I found none of them working in my case.


Docker Supervisord – a way to run multiple daemon processes in a container

Docker was designed with one daemon per container in mind, which keeps containers lightweight. For example, to run a web application, one container serves the database, one serves as the web server, and one serves as the caching server connecting to the DB.

So, while writing a Dockerfile, the limitation is: only one CMD instruction is used, and it runs a single foreground process; when that process exits, the container stops.
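
For example, a typical single-service Dockerfile ends with exactly one foreground command; this sketch (base image, package and key-generation steps are only illustrative) runs just sshd:

FROM oraclelinux:6.8
RUN yum install -y openssh-server && \
    ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
EXPOSE 22
# the single foreground process; when it exits, the container stops
CMD ["/usr/sbin/sshd", "-D"]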

But sometimes we face situations where we need to run more than one daemon process in a single container, that is, to set up the complete stack in one container.

For this we can take two approaches:

  1. A bash script that starts all the processes in sequence, putting all but the last in the background; the last one stays in the foreground so the container keeps running (see the sketch after this list).
  2. Using supervisor : an easy, templatized way of managing multiple processes in the container.
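
A rough sketch of the first approach (the service paths are assumptions; adjust to whatever your image actually runs):

#!/bin/bash
# start the daemons that background themselves
/usr/sbin/sshd
/usr/sbin/httpd
# the last process stays in the foreground so the container keeps running
exec /usr/bin/mysqld_safe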

Use case: I faced a situation where I had to run ssh, httpd and mysql in a single container, and here is how I approached it with supervisor.

Also, by sending program output to supervisord's stdout, we can see the service logs in the terminal.

The three config files used here:

  1. Dockerfile
  2. supervisor.conf
  3. docker-compose.yml

These files can be accessed from my git repo:

https://github.com/kumarprd/docker-supervisor
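
A minimal sketch of what the supervisor.conf can look like (program paths and flags here are assumptions; the actual files are in the repo above). Sending each program's output to supervisord's stdout is what makes the service logs show up in the terminal during docker-compose up:

[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[program:mysqld]
command=/usr/bin/mysqld_safe
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND
redirect_stderr=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0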

Next, run the commands below:

  1. docker-compose build (It will build the image by reading the files)

#docker-compose build

Building web
Step 1 : FROM oraclelinux:6.8
---> 7187d444f0ce
Step 2 : ENV container docker
---> Running in 8cff18dabcc4
---> 655b5004777a

……..

……..

Step 20 : CMD /usr/bin/supervisord -c /etc/supervisor.conf
---> Running in 4ffed54b078f
---> dfb974e07bfb
Removing intermediate container 4ffed54b078f
Successfully built dfb974e07bfb

2. docker-compose up

# docker-compose up
Creating supervisord_web_1
Attaching to supervisord_web_1
web_1 | 2016-10-01 05:57:55,357 CRIT Supervisor running as root (no user in config file)
web_1 | 2016-10-01 05:57:55,357 WARN For [program:sshd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:mysqld], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:httpd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,364 INFO supervisord started with pid 1
web_1 | 2016-10-01 05:57:56,369 INFO spawned: 'httpd' with pid 7
web_1 | 2016-10-01 05:57:56,373 INFO spawned: 'sshd' with pid 8
web_1 | 2016-10-01 05:57:56,377 INFO spawned: 'mysqld' with pid 9
web_1 | Could not load host key: /etc/ssh/ssh_host_rsa_key
web_1 | Could not load host key: /etc/ssh/ssh_host_dsa_key
web_1 | 161001 05:57:56 mysqld_safe Logging to '/var/log/mysqld.log'.
web_1 | 161001 05:57:56 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
web_1 | httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.2 for ServerName
web_1 | 2016-10-01 05:57:57,649 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: mysqld entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

3. Check the docker ps output

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
edd870f7e3ca testimg.supervisor “/usr/bin/supervisord” 19 minutes ago Up 19 minutes 0.0.0.0:5002->22/tcp, 0.0.0.0:5000->80/tcp, 0.0.0.0:5001->3306/tcp supervisord_web_1

 

4.  Connect to the container and check the services:

ssh -p 5002 root@<FQDN of the host where docker engine is running>

root@<FQDN>'s password:
Last login: Sat Oct 1 06:07:43 2016 from <FQDN>

[root@edd870f7e3ca ~]# /etc/init.d/mysqld status
mysqld (pid 101) is running…
[root@edd870f7e3ca ~]# /etc/init.d/httpd status
httpd (pid 7) is running…
[root@edd870f7e3ca ~]# /etc/init.d/sshd status
openssh-daemon (pid 8) is running…
[root@edd870f7e3ca ~]#

 

Docker Private Registry Setup

We can create our own secure private Docker registry where we store our images, and it can be accessed from remote machines.

1. Go to /var/lib/docker on the server and create a certificate using the registry's domain name:

cd /var/lib/docker && mkdir -p certs
 openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/sl09vmf0022.us.company.com.key -x509 -days 365 -out certs/sl09vmf0022.us.company.com.crt

2. Delete any old registry container or image if one exists:

docker rm <old registry container>  OR  docker rmi registry:2

3. Recreate the registry using the newly created certificates (run this from /var/lib/docker so that `pwd`/certs points at the certs dir):

docker run -d -p 5000:5000 --restart=always --name bkdevregistry -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/sl09vmf0022.us.company.com.crt -e REGISTRY_HTTP_TLS_KEY=/certs/sl09vmf0022.us.company.com.key registry:2

4. Go to the Docker certs dir, copy the crt file as ca.crt, and restart the Docker service:

mkdir -p /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/
 cd /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/
 cp /var/lib/docker/certs/sl09vmf0022.us.company.com.crt /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/ca.crt
 update-ca-trust enable
 service docker restart

5. Now push images to the private registry:

docker pull ubuntu
 docker tag ubuntu sl09vmf0022.us.company.com:5000/ubuntu1404
 docker push sl09vmf0022.us.company.com:5000/ubuntu1404

6. Client side configuration:

Copy the ca.crt file from the registry server to the local Docker certs dir and restart the Docker service:

mkdir -p /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/
 scp sl09vmf0022.us.company.com:/var/lib/docker/certs/sl09vmf0022.us.company.com.crt /etc/docker/certs.d/sl09vmf0022.us.company.com:5000/ca.crt
 service docker restart

7. Pull an image from the remote registry:

docker pull sl09vmf0022.us.company.com:5000/oel6u6

8. Check the images available in the remote registry, either using the crt file or in insecure mode:

curl -X GET https://sl09vmf0022.us.company.com:5000/v2/_catalog --cacert /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/ca.crt

OR

curl -X GET https://sl09vmf0022.us.company.com:5000/v2/_catalog --insecure

Using Docker – Part 1

In this part we will go through some simple usage of the docker command.

Use -D with docker for debug mode.
Docker images are Immutable and Containers are Ephemeral.

How to get help ??

docker help
docker <command> --help

1. Check images:

docker images

2. Run an application in the container:

( We have already downloaded oraclelinux:6.6 image from dockerhub)

-i flag starts an interactive container.

-t flag creates a pseudo-TTY that attaches stdin and stdout

docker run -i -t --name guest companylinux:6.6 /bin/bash

--name -> creates a container instance with the given name using the image companylinux:6.6
/bin/bash is executed inside the container guest

NOTE : if the image doesn't exist locally, Docker will try to pull it from Docker Hub

3. Create a container and remove it automatically once you exit

docker run -i -t --rm companylinux:6.6 /bin/bash

4. Show running containers (add -a to include stopped ones)

docker ps
docker ps -a

5. Show info about the processes running inside a container (here, guest)

docker top guest

6. Run additional processes inside a container (here, guest)

docker exec -it guest <command>

7. Create a named container that can be started at a later time

docker create -it --name guest1 companylinux:7 /bin/bash

8. Start a container instance and attach the current shell to it (here, guest1)

docker start -ai <container name> OR docker start -ai <container id>

9. Stop a container instance

docker stop <containerid>

10. Remove a container instance

docker rm guest1

11. Show the logs of a container as they happen

docker logs -f guest

-f -> follows the output in real time

12. Get full information about a container in JSON format with inspect, or extract a single field with --format

docker inspect --format '{{ .State.Running }}' guest1

13. Relaunch a container:

Look at the docker ps -a output and note down the CONTAINER_ID. If you want to relaunch in interactive mode use the -i option, else just start it.

docker start -i cfb007d616b9

OR

docker start cfb007d616b9

14. Start/attach to a container

docker start <ID of container>

15. Change the behaviour of a container when its main process exits (add the option to the run command)

--restart=always

Docker always attempts to restart the container when the container exits.

--restart=no
Docker does not attempt to restart the container when the container exits. This is the default policy.

--restart=on-failure[:max-retry]
Docker attempts to restart the container if the container returns a non-zero exit code. You can optionally specify the maximum number of times that Docker will try to restart the container (see the examples after this list).

--rm (use this with the run command, so that once you exit from the instance, it gets removed)
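
For example (container names here are just illustrative):

docker run -d --restart=always --name guest2 companylinux:6.6 /bin/bash -c "while true; do sleep 60; done"
docker run -d --restart=on-failure:3 --name guest3 companylinux:6.6 /bin/bash -c "exit 1"

The first container is restarted whenever it stops; the second exits with a non-zero code, so Docker retries it at most three times and then gives up.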

16. Local registry creation (use the registry image with tag 2, map host port 5000 to the registry container's port 5000, name it localregistry)

docker run -d -p 5000:5000 --restart=always --name localregistry registry:2

17. Add images to the local registry (pull from Docker Hub or build a local image, tag it, push it to the local registry, then pull it from the local registry to create an instance)

docker pull companylinux:6.6
docker tag companylinux:6.6 localhost:5000/oel6u6
docker push localhost:5000/oel6u6
docker pull localhost:5000/oel6u6

18. Stop and remove any instance

docker stop <container id> OR docker stop <instance-name>
docker rm <container id> OR docker rm <instance-name>

19. Remove an image from the repository (use -f to force removal)

docker rmi <imageid> OR docker rmi <imagereponame>
docker rmi -f <imageid> OR docker rmi -f <imagereponame>

20. Remove dead container entries (as listed by docker ps -a) where the instance is in a stopped state

docker rm $(docker ps -a -q)

Docker Concept & Setup

Why Containerization ?

Up to now we have mostly worked with monolithic applications, where the different components of a service are packaged into a single application that is easy to develop, test and deploy. But when it becomes large and complex, it becomes difficult for one team to work on it, and the risk of failure at deploy time is high.
To overcome this, the trend is to move to microservices, where the components of the monolithic application are split into small services. Every microservice has its own API to handle its part of the application.

  • Each smaller service can use its own technology stack.
  • Developers find it easier to understand a single service.
  • It's also quicker to build and faster to deploy.
  • The application becomes distributed; microservices scale horizontally more easily than vertically, and the system becomes more fault tolerant.

Virtual Machines are too big to transfer and often too slow.

So containerization is the better choice when adopting Microservices architecture.

Container ???

  • A container is all about running an application, not a whole VM.
  • Containers are an operating-system-level virtualization method that allows multiple isolated instances to run on the same kernel.
  • A container is created from an image that contains the apps, libraries and dependencies, while the most important kernel-space components are provided by the host operating system (a quick way to peek at them from the host is shown after this list):
    • NameSpace : global system resources like network, PIDs and mount points are presented in such a way that the container thinks they are available only to it
    • CGroup : used to reserve and allocate resources to containers
    • Union file system : merges different file systems into one virtual file system
    • Capabilities : managing privileges like root/nonroot
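
These kernel pieces can be inspected from the host for any running container, e.g. (the container name guest is just an example):

PID=$(docker inspect --format '{{ .State.Pid }}' guest)
ls -l /proc/$PID/ns        # the namespaces the container runs in (net, pid, mnt, ...)
cat /proc/$PID/cgroup      # the cgroups the container is placed in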

 

Docker ??

Docker is one of the most popular container products, originally based on LXC, and is an open platform to build, ship and run distributed applications.

 

  • Docker Engine : a portable, lightweight runtime and packaging tool
  • Docker Hub : a cloud service for sharing applications
  • Docker enables applications to be quickly assembled from components
  • It removes the friction between dev, QA and prod environments
  • The same app, unchanged, can run anywhere (laptop/PC/datacenter)

Docker images are built from a Dockerfile, and containers are created from images.
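
In command form, the flow is simply (image and container names are illustrative):

docker build -t myapp:1.0 .               # Dockerfile in the current directory -> image
docker run -d --name myapp_1 myapp:1.0    # image -> running container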

:: Setup ::

Installing Docker is easy. All the commands used here were run on OEL6 at my workplace.

1. Installation:

Update the OS to at least the OEL6 UEKR4 repo to use a kernel > 4.1 (yum update and confirm the kernel version; OS > 6.4)
[ol6_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for company Linux $releasever ($basearch)
baseurl=http://public-yum.company.com/repo/companyLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-company
gpgcheck=1
enabled=1

yum update and reboot

> use docker repo:

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/companylinux/6
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

2. Use btrfs filesystem:

yum install btrfs-progs
mkfs.btrfs /dev/sdb ( add a raw disk and format it with btrfs )
(fstab entry) /dev/sdb /var/lib/docker btrfs defaults 0 0

3. Add a proxy (if one is needed to reach Docker Hub)

/etc/sysconfig/docker ( if any ) OR add in /etc/default/docker ( to use it with curl )

export HTTP_PROXY="proxy_URL:port"
export HTTPS_PROXY="proxy_URL:port"

4. Modify docker config

In /etc/init.d/docker

Update

"$unshare" -m -- $exec -d $other_args &>> $logfile &

to

$exec -d $other_args &>> $logfile &

5. Start docker service

# service docker start
# chkconfig docker on

6. Check docker details

service docker status
docker info
docker version

7. Add local user to docker group

groupadd docker
usermod -a -G docker <local user>
chmod g+rx /var/lib/docker

8. Search for images in Docker Hub (check availability before pulling):

docker search oraclelinux
docker search centos
docker search registry

9. Pull the oraclelinux 6.6 image:

docker pull oraclelinux:6.6

here oraclelinux is the image and 6.6 is the version (tag)

10. Check images:

docker images

11. Add this env variable to enforce authenticity and integrity of images (Docker Content Trust)

export DOCKER_CONTENT_TRUST=1