Python : Read and Update helm chart

Recently I was working on a release pipeline where the Helm charts of 30+ environments needed to be updated in Git with new chart versions supplied as Jenkins input.

The Helm chart was in YAML format and it was an umbrella chart; the individual service chart versions needed to be updated from Jenkins.

The umbrella chart file looks like this (full file: https://raw.githubusercontent.com/divyaimca/my_projects/main/update_helm_chart/chart.yaml):

apiVersion: v2
description: Helm chart to deploy application NG
name: app-main
version: 0.0.1
dependencies:
- name: service-a
  version: 0.1.014bf574
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-b
  version: 0.1.014bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-c
  version: 0.1.014bf475
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-d
  version: 0.1.024bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-e
  version: 0.1.014bf559
  repository: '@helm-repo'
  tags:
  - application
  enabled: true

Here you can see there are 5 dependent services, and each service version needs to be updated from the Jenkins input.

I used the Python module PyYAML.

Here is the code used in one pipeline stage to achieve this task.

https://raw.githubusercontent.com/divyaimca/my_projects/main/update_helm_chart/helmchart_parser.py

The function takes the chart.yaml file path as input, plus the subcharts and their versions as keyword arguments. Refer to the full code at the link above.

e.g.

    update_helm_chart("./chart.yaml", **{"service-b": "0.2.574", "service-d": "0.2.585", "service-e": "0.2.576"})
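Since the subchart names contain hyphens, they cannot be written as literal keyword arguments, which is why the call above unpacks a dict with **. For reference, here is a minimal sketch of how such a function can be written with PyYAML; the real code lives in the repo linked above, so treat the details below as illustrative only.

[code language="python"]
import yaml  # PyYAML

def update_helm_chart(chart_file, **subchart_versions):
    """Update dependency versions of an umbrella chart.yaml in place.

    subchart_versions maps subchart name -> new version,
    e.g. {"service-b": "0.2.574"} passed in via ** unpacking.
    """
    with open(chart_file) as fp:
        chart = yaml.safe_load(fp)

    # Walk the dependencies list and swap in the new versions.
    for dep in chart.get("dependencies", []):
        if dep.get("name") in subchart_versions:
            dep["version"] = subchart_versions[dep["name"]]

    # Write the chart back without re-sorting the keys.
    with open(chart_file, "w") as fp:
        yaml.safe_dump(chart, fp, default_flow_style=False, sort_keys=False)
[/code]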

openssl issue : Error Loading extension section v3_ca

If you get this error while generating certificates with the openssl command, check the OpenSSL configuration file. I ran into it while running openssl on macOS.

On macOS the default OpenSSL config does not include the v3_ca section needed for CA certificate generation.

This can be fixed with the steps below:

Open the file /etc/ssl/openssl.cnf and add the content below:

[ v3_ca ]
basicConstraints = critical,CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always

Accessing Host from Docker Container

Sometimes services running on the host machine need to be reachable from a Docker container. For example, in one of my projects we needed to connect to the Oracle DB (port 1521) on the host from code running inside the container.

The default behaviour of containers is that they can't access the host network directly unless the host firewall allows the Docker network interface to ACCEPT the packets.

So the Docker container communicates with the host machine using the gateway IP. First, find the gateway IP inside the container.

Run the commands below inside the container to get the gateway IP, and observe that I am not able to connect to port 1521.

[code language="bash"]
# nc -vz dockerhost 1521
dockerhost [172.18.0.1] 1521 (?) : Connection timed out

# ip route | awk '/^default via /{print $3}'
172.18.0.1
[/code]

The next task is to get the interface name of the Docker network the container is bound to. In most cases it is docker0.

But it can also be customized, so check the ifconfig output for the interface whose inet addr matches the container's gateway IP.

[code language="bash"]
# ifconfig
br-4e83b57c54cf Link encap:Ethernet  HWaddr 02:42:AF:CD:B5:DA
inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

# ip addr show br-4e83b57c54cf
10: br-4e83b57c54cf: mtu 1500 qdisc noqueue state UP
link/ether 02:42:af:cd:b5:da brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global br-4e83b57c54cf
valid_lft forever preferred_lft forever
[/code]

Here the interface name is : br-4e83b57c54cf

Now add an iptables rule on the Linux host:

[code language="bash"]
iptables -A INPUT -i br-4e83b57c54cf -j ACCEPT
[/code]

Or with firewalld:

[code language="bash"]
# firewall-cmd --permanent --zone=trusted --change-interface=br-4e83b57c54cf
# firewall-cmd --reload
[/code]

Now try to access the host port from the container again.

[code language="bash"]
# nc -vz dockerhost 1521
dockerhost [172.18.0.1] 1521 (?) open
[/code]
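Since the original goal was to reach the host port from application code, a quick way to verify the same connectivity from Python inside the container looks like this (a minimal sketch; the gateway IP is the one found with ip route above, so adjust it to your setup):

[code language="python"]
import socket

# 172.18.0.1 is the container's default gateway, i.e. the host, as found above.
with socket.create_connection(("172.18.0.1", 1521), timeout=5):
    print("Host port 1521 is reachable from the container")
[/code]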

There are other approaches available on the internet, but none of them worked for me.


Docker Supervisord – Way to run multiple daemon processes in a container

Docker was designed with one daemon per container in mind, which keeps containers lightweight. For example, to run a web application, one container serves the database, one serves as the web server, and one serves as the caching server connecting to the DB.

So while writing a Dockerfile, the limitation is that only one CMD instruction can be used, and it runs a single foreground process whose exit stops the container.

But sometimes we face situations where we need to run more than one daemon process in a single container, that is, to set up the complete stack in one container.

For this we can have two approaches:

  1. A bash script that starts all the processes in sequence in the background and runs the last one in the foreground (without &) so the container keeps running.
  2. Using supervisord: an easy, templatized way of managing multiple processes in the container.

Use case: I faced a situation where I had to run sshd, httpd, and mysqld in a single container, and here is how I approached it with supervisord.

Also, using supervisord's stdout we can redirect the process logs to the terminal.

The three files used here:

  1. Dockerfile
  2. supervisor.conf
  3. docker-compose.yml

These files can be accessed from my Git repo:

https://github.com/kumarprd/docker-supervisor

Next, run the commands below:

  1. docker-compose build (It will build the image by reading the files)

# docker-compose build

Building web
Step 1 : FROM oraclelinux:6.8
 ---> 7187d444f0ce
Step 2 : ENV container docker
 ---> Running in 8cff18dabcc4
 ---> 655b5004777a

........

........

Step 20 : CMD /usr/bin/supervisord -c /etc/supervisor.conf
 ---> Running in 4ffed54b078f
 ---> dfb974e07bfb
Removing intermediate container 4ffed54b078f
Successfully built dfb974e07bfb

2. docker-compose up

# docker-compose up
Creating supervisord_web_1
Attaching to supervisord_web_1
web_1 | 2016-10-01 05:57:55,357 CRIT Supervisor running as root (no user in config file)
web_1 | 2016-10-01 05:57:55,357 WARN For [program:sshd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:mysqld], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:httpd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,364 INFO supervisord started with pid 1
web_1 | 2016-10-01 05:57:56,369 INFO spawned: 'httpd' with pid 7
web_1 | 2016-10-01 05:57:56,373 INFO spawned: 'sshd' with pid 8
web_1 | 2016-10-01 05:57:56,377 INFO spawned: 'mysqld' with pid 9
web_1 | Could not load host key: /etc/ssh/ssh_host_rsa_key
web_1 | Could not load host key: /etc/ssh/ssh_host_dsa_key
web_1 | 161001 05:57:56 mysqld_safe Logging to '/var/log/mysqld.log'.
web_1 | 161001 05:57:56 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
web_1 | httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.2 for ServerName
web_1 | 2016-10-01 05:57:57,649 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: mysqld entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

3. Check the process table:

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
edd870f7e3ca testimg.supervisor "/usr/bin/supervisord" 19 minutes ago Up 19 minutes 0.0.0.0:5002->22/tcp, 0.0.0.0:5000->80/tcp, 0.0.0.0:5001->3306/tcp supervisord_web_1

 

4.  Connect to the container and check the services:

ssh -p 5002 root@<FQDN of the host where docker engine is running>

root@<FQDN>’s password:
Last login: Sat Oct 1 06:07:43 2016 from <FQDN>

[root@edd870f7e3ca ~]# /etc/init.d/mysqld status
mysqld (pid 101) is running…
[root@edd870f7e3ca ~]# /etc/init.d/httpd status
httpd (pid 7) is running…
[root@edd870f7e3ca ~]# /etc/init.d/sshd status
openssh-daemon (pid 8) is running…
[root@edd870f7e3ca ~]#

 

xend issue : Xend has probably crashed! Invalid or missing HTTP status code.

I recently found that some VMs on one OVS node (out of 30+ nodes) went down and would not start, giving this error:

Xend has probably crashed!  Invalid or missing HTTP status code.

There can be many reasons behind this, and if you try to restart xend, it will not start.

The first place to look is:

/var/log/xen/xend-debug.log

This log will tell you exactly where the issue is.

In my case the / filesystem had run out of space because one log file had grown to almost 8 GB. After deleting that file, xend started successfully.

Chef – Create encrypted data bag and keep secrets

Sometimes we have to deal with global variables like user passwords, database passwords, API keys, and middleware boot properties in our Chef recipes, and these shouldn't be exposed outside.

One solution is to keep all the secrets in a data bag, encrypt them using a random secret key, and later distribute the key to the nodes where the secrets are accessed.


The other solution is using chef-vault, which we will cover in a later post.

First we have to create a random encryption key:

openssl rand -base64 512 | tr -d '\r\n' > rev_secret_key

We now use this secret key to encrypt the data bag item "revpass" in the data bag "rev_secret".

[code language="bash"]
export EDITOR=vi
knife data bag create --secret-file ./rev_secret_key rev_secret revpass
[/code]

This will open the vi editor with this JSON data:

{
  "id": "revpass"
}

Now add your secrets here in JSON format, so it becomes:

{
  "id": "revpass",
  "boot_pass": "bootpassword",
  "db_pass": "dbpassword"
}

Save and exit.

Show the encrypted contents of your databag:

knife data bag show rev_secret revpass

Show the decrypted contents of your databag:

knife data bag show --secret-file=./rev_secret_key rev_secret revpass

For your chef clients to be able to decrypt the databag when needed, just copy over the secret key (replace client-node with your IP/node name):

scp ./rev_secret_key client-node:/etc/chef/encrypted_data_bag_secret

OR

keep it in the ~/.chef directory and update the setting in the knife.rb file:

encrypted_data_bag_secret "~/.chef/encrypted_data_bag_secret"

Accessing the secret in a recipe:

Alternatively, point to the secret key directly in the recipe, as below. In your DB recipe, add these lines:

secret = Chef::EncryptedDataBagItem.load_secret("/var/chef/cache/cookbooks/revrec-chef/files/default/revrec_secret_key")

passwords = Chef::EncryptedDataBagItem.load("rev_secret", "revpass", secret)

dbpasswd = passwords["db_pass"]

Use it inside a resource:

………

oradb_password = "#{dbpasswd}"

……..

Or pass it into a template, however it suits your use case.

Note: If you are using a password in a template, turn off logging by adding this attribute to the template resource:

……

sensitive true

…..

Chef – Deleting existing attributes

Sometimes we face situations like:

  1. We may need to remove a persistent attribute, e.g. a flag we set after some work is done.
  2. We may have set some attributes wrong, so we need to remove and reset the existing attributes.

The attribute may be set in attributes/default.rb, or inside a recipe as a node normal, node.set, or node override attribute.

There is a nice Chef tool (knife exec) with which we can transform the attributes.

Usage:

Suppose a node has this attribute hierarchy:

"normal": {
  "install_oradb": {
    "is_installed": "True",
    "is_running": "True"
  }
}

knife exec -E "nodes.transform(:all) {|n| n.normal_attrs[:install_oradb].delete(:is_installed) rescue nil }"
knife exec -E "nodes.transform(:all) {|n| n.normal_attrs[:install_oradb].delete(:is_running) rescue nil }"

 

Here we can replace normal_attrs with override_attrs OR default_attrs as required.

This will remove the attributes from all the nodes.

 

Suppose we want to remove the attributes from a particular node. In that case, search for the node in the Chef Solr index and replace :all with a search on the required node name.

e.g.

knife exec -E "nodes.transform('name:node2') {|n| n.normal_attrs[:install_oradb].delete(:is_installed) rescue nil }"
knife exec -E "nodes.transform('name:node2') {|n| n.normal_attrs[:install_oradb].delete(:is_running) rescue nil }"

Writing chef Library

In many cases we have to reuse the same code again and again in our recipes. To reduce this, we can write our own library module and reuse its methods wherever required. This lets us use our own custom methods.

For example, in my previous post we had to access the environment variables in many recipes. We can use that same code as a library to create a module (let's say ProdEnvSetup).

 

  1. Create a library file under the libraries directory of the cookbook (cookbook/libraries/envsetup.rb) and add the code below.

 

[code language="ruby"]
module ProdEnvSetup
  def setupenv()
    hash1 = {}
    File.open("/u01/data/wor/app/conf/conf.prop") do |fp|
      fp.each do |line|
        key, value = line.chomp.split("=", 2)
        hash1[key] = value
      end
    end

    hash1.each do |key, value|
      skey = key.to_s.gsub(/\s|"|'/, '')
      svalue = value.to_s.gsub(/\s|"|'/, '')
      ENV[skey] = svalue
    end
  end
end
[/code]

2. Using the library inside a recipe (1st way)

So if the same environment variables are required inside a recipe, we can directly include the module inside our recipe.

Add the code below at the beginning of the recipe:

[code language="ruby"]
class Chef::Recipe
  include ProdEnvSetup
end

setupenv()
[/code]

3. Using the library (2nd way)

We can add additional encapsulation by keeping this code inside a dedicated recipe and including that recipe in every other recipe where it is required.

Create a recipe setupenv.rb, add the code from the 1st way above,

and include it in other recipes like this:

[code language="ruby"]
include_recipe "prod-multiode-cookbook::setupenv"
[/code]

 


			

Chef/ruby way – Read a file and expose as environment variable

Many times we have to read a property file in which the variables and values are separated by a delimiter, and we have to set those as environment variables to execute certain recipes.

 

e.g. property file (/u01/data/wor/app/conf/conf.prop)

ops_home = '/u01/data/work/app/ops-home-1.2.30'

node_instance = '/u01/data/work/app'

 

So we have to read the file, create a hash saving the LHS as the key and the RHS as the value, and then we are good to expose them as environment variables.

Note: This is the approach I used; there may be other solutions available.

Here the properties are = separated. It can be any separator.

This is a reusable function and can be called wherever required.

[code language="ruby"]
def setupenv()
  hash1 = {}
  File.open("/u01/data/wor/app/conf/conf.prop") do |fp|
    fp.each do |line|
      key, value = line.chomp.split("=", 2)
      hash1[key] = value
    end
  end

  hash1.each do |key, value|
    skey = key.to_s.gsub(/\s|"|'/, '')
    svalue = value.to_s.gsub(/\s|"|'/, '')
    ENV[skey] = svalue
  end
end
[/code]

 

Here setupenv() can be called anywhere the ENV variables are required.

Note: Here gsub(/\s|"|'/, '') is used to strip spaces, single quotes, and double quotes from the key and value.

 

 

Docker Private Registry Setup

We can create our own secure private Docker registry where we can store our images and access them from remote machines.

1. Go to /var/lib/docker on the server and create a certificate using the domain name:

cd /var/lib/docker && mkdir -p certs
 openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/sl09vmf0022.us.company.com.key -x509 -days 365 -out certs/sl09vmf0022.us.company.com.crt

2. Delete any old registry container or image if one exists:

docker rm <old-registry-container>  OR  docker rmi registry:2

3. Recreate the registry using the newly created certificates, running from /var/lib/docker so that `pwd`/certs points at the cert dir:

docker run -d -p 5000:5000 --restart=always --name bkdevregistry -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/sl09vmf0022.us.company.com.crt -e REGISTRY_HTTP_TLS_KEY=/certs/sl09vmf0022.us.company.com.key registry:2

4. Go to the Docker cert dir, copy the crt file there as ca.crt, and restart the Docker service:

cd /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/
 cp /var/lib/docker/certs/sl09vmf0022.us.company.com.crt /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/ca.crt
 update-ca-trust enable
 service docker restart

5. Now push images to the private registry:

docker pull ubuntu
 docker tag ubuntu sl09vmf0022.us.company.com:5000/ubuntu1404
 docker push sl09vmf0022.us.company.com:5000/ubuntu1404

6. Client side configuration:

Copy the ca.crt file from the Docker registry server to the local Docker cert dir and restart the Docker service:

mkdir -p /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/
 scp sl09vmf0022.us.company.com:/var/lib/docker/certs/sl09vmf0022.us.company.com.crt /etc/docker/certs.d/sl09vmf0022.us.company.com:5000/ca.crt
 service docker restart

7. Pull image from remote registry :

docker pull sl09vmf0022.us.company.com:5000/oel6u6

8. Check the images available in the remote registry, using the crt file or in insecure mode:

curl -X GET https://sl09vmf0022.us.company.com:5000/v2/_catalog --cacert /etc/docker/certs.d/sl09vmf0022.us.company.com\:5000/ca.crt

OR

curl -X GET https://sl09vmf0022.us.company.com:5000/v2/_catalog --insecure