Custom default backend error pages for the Kubernetes nginx ingress

The Kubernetes nginx ingress controller has a default backend that serves 404 and 502 error pages containing the nginx string. Sometimes we need to show a valid custom error page instead of the pages served by the default backend.

The process is simple: create a ConfigMap with the custom error pages, create a Deployment whose pod mounts that ConfigMap at /www, and create a Service that will be used as the default backend service of the ingress controller.

Configmap Manifest :

Note: update the custom error pages under data with the required error HTML content.
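A minimal sketch of such a ConfigMap (the name and the HTML bodies are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-error-pages        # placeholder name
data:
  404.html: |
    <html><body><h1>Sorry, the page was not found.</h1></body></html>
  502.html: |
    <html><body><h1>Sorry, the service is temporarily unavailable.</h1></body></html>
```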

Deployment Manifest :
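A sketch of the Deployment; the image shown is the ingress-nginx custom-error-pages image, which serves on port 8080 and reads pages from /www (the image tag and all names here are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-error-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-error-backend
  template:
    metadata:
      labels:
        app: custom-error-backend
    spec:
      containers:
      - name: custom-error-backend
        image: registry.k8s.io/ingress-nginx/nginx-errors:v20230505  # assumed tag
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: error-pages
          mountPath: /www          # the ConfigMap with the error pages is mounted here
      volumes:
      - name: error-pages
        configMap:
          name: custom-error-pages # placeholder ConfigMap name
```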

Service Manifest :
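A sketch of the Service fronting the error-page pods (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: custom-error-backend
spec:
  selector:
    app: custom-error-backend
  ports:
  - port: 80
    targetPort: 8080   # port the error-page container listens on
```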

Modification in ingress controller arguments :

Note: update the service name here so that it matches the custom error service name.
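For example, in the args of the ingress controller container (a sketch; the service name is a placeholder):

```yaml
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        # namespace/name of the custom error Service (placeholder name)
        - --default-backend-service=$(POD_NAMESPACE)/custom-error-backend
```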

Next we need to update the ingress definition of each ingress for which we want to use the custom error pages.

We need to add two annotations for this:

  1. One pointing to the custom error service name.
  2. One mentioning the HTTP error codes to be served as custom errors.

Ingress manifest update Example :
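A sketch of such an ingress (service name, host, and error codes are placeholders); the two annotations are nginx.ingress.kubernetes.io/default-backend and nginx.ingress.kubernetes.io/custom-http-errors:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/default-backend: custom-error-backend
    nginx.ingress.kubernetes.io/custom-http-errors: "404,502,503"
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```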

Now if you access a page served by the ingress and hit one of those errors, the ingress will serve the customised error pages instead of the default backend error pages.

Jenkins slave as a service in Windows to start automatically

In Jenkins we often have to add a Windows machine as a slave, where we need the agent to be up and running as a Windows service.

There are many ways to do it, but I struggled to find the correct configuration steps. I used the Windows Resource Kit to make it work and am adding the steps here.

Configuration Steps:

(A) Adding the windows slave in Jenkins server :

1. Add the agent in Jenkins Master with Launch Method : Launch Via Java Web Start


(B) Creating the Service in windows Server for starting the slave agent: (In Windows Server 2016)

1. Download and install Java 8.
2. Download and install the Windows Resource Kit Tools.
3. Create a blank service called "Jenkins Slave" by running the following from a command prompt:

"C:\Program Files (x86)\Windows Resource Kits\Tools\instsrv.exe" "Jenkins Slave" "C:\Program Files (x86)\Windows Resource Kits\Tools\srvany.exe"

4. Open the Registry Editor and go to the location below:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Jenkins Slave

Now follow the steps below carefully.

  1. Create a string value "Description"
  2. Populate it with "Jenkins Continuous Integration Slave"


  1. Create a new key "Parameters"
  2. Under "Parameters" create a new string value "Application"
  3. Populate it with the full path to java.exe, something like "C:\sapjvm_8\bin\java.exe"
  4. Under "Parameters" create a new string value "AppParameters"
  5. Populate it with

"-jar E:\jenkins_agent\agent.jar -jnlpUrl -secret <secret-name> -workDir E:\jenkinsWorkSpace"

  1. The agent.jar path should point to the correct location
  2. The Jenkins master machine name should be correct
  3. The new Jenkins slave machine name should be correct
  4. Make sure you use the secret for this machine that you copied from the master when adding the new node
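The registry steps above can equivalently be captured in a .reg file (a sketch; the Java and agent paths are the example values used above, and <jnlp-url>/<secret-name> are placeholders for your own values):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Jenkins Slave]
"Description"="Jenkins Continuous Integration Slave"

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Jenkins Slave\Parameters]
"Application"="C:\\sapjvm_8\\bin\\java.exe"
"AppParameters"="-jar E:\\jenkins_agent\\agent.jar -jnlpUrl <jnlp-url> -secret <secret-name> -workDir E:\\jenkinsWorkSpace"
```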


Open the Services application from Control Panel > Administrative Tools, find the "Jenkins Slave" service, right click on the service and go to "Properties".

  1. Go to the "Recovery" tab and change "First failure" and "Second failure" to "Restart the Service"; occasionally we found it wouldn't start up the first time.
  2. Go to the "Log On" tab and set an account and password; we found that an account with local admin rights on the slave machine worked best, but this is probably unnecessary.
  3. Go to the "General" tab and change the "Startup type" to "Automatic" to make sure the service starts up when you restart the slave.
  4. Click the "OK" button.
  5. Now start the service.

6. The service will now start by default when the Windows machine boots.

7. Now verify the agent is up and running in the Jenkins web page.


Python : Read and update a Helm chart

Recently I was working on a release pipeline where the Helm charts of 30+ environments needed to be updated in git with new chart versions from Jenkins input.

The Helm chart was in YAML format; it was an umbrella chart, and the individual service chart versions needed to be updated from Jenkins.

The umbrella chart file looks like this.

apiVersion: v2
description: Helm chart to deploy application NG
version: 0.0.1
dependencies:
- name: service-a
  version: 0.1.014bf574
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-b
  version: 0.1.014bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-c
  version: 0.1.014bf475
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-d
  version: 0.1.024bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-e
  version: 0.1.014bf559
  repository: '@helm-repo'
  tags:
  - application
  enabled: true

Here you can see there are 5 dependent services, and each version needs to be updated from the Jenkins input.

I used the python module pyyaml.

Here is the code used in one stage to achieve this task.

The function takes the Chart.yaml file path as input and the subchart names and versions in keyword-argument format.
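A sketch of such a function, assuming pyyaml is installed and the chart uses the dependencies: layout shown above (the function name is my own):

```python
import yaml  # pyyaml


def update_chart_versions(chart_path, **versions):
    """Update dependency versions in an umbrella Chart.yaml.

    versions maps sub-chart name -> new version, e.g.
    update_chart_versions("Chart.yaml", **{"service-a": "0.2.0"}).
    """
    with open(chart_path) as f:
        chart = yaml.safe_load(f)
    # Rewrite only the dependencies whose names were passed in
    for dep in chart.get("dependencies", []):
        if dep["name"] in versions:
            dep["version"] = versions[dep["name"]]
    with open(chart_path, "w") as f:
        # sort_keys=False keeps the original key order of each mapping
        yaml.safe_dump(chart, f, default_flow_style=False, sort_keys=False)
    return chart
```

Note that `**{"service-a": "0.2.0"}` is used at the call site because sub-chart names contain hyphens and cannot be written as bare keyword arguments.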



Docker Issue : devmapper: Thin Pool is less than minimum required, use dm.min_free_space option to change behavior

Sometimes while building Docker images or doing other Docker operations we may hit a thin pool space issue like this:

devmapper: Thin Pool has 132480 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:

[root@rvm-c431e558 proj_odi11g]# docker info | grep 'loop file'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Data loop file: /data/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /data/lib/docker/devicemapper/devicemapper/metadata
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Follow these steps to increase the size of the thin pool.

In this example, the thin pool is 100 GB, and is increased to 200 GB.

List the sizes of the devices.

[root@rvm-c431e558 proj_odi11g]# ls -lh /data/lib/docker/devicemapper/devicemapper/
total 89G
-rw------- 1 root root 200G Mar 19 08:45 data
-rw------- 1 root root 2.0G Mar 19 08:45 metadata

Increase the size of the data file to 200 G using the truncate command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

# truncate -s 200G /data/lib/docker/devicemapper/devicemapper/data

Verify the file size changed.

# ls -lh /data/lib/docker/devicemapper/devicemapper/

total 1.2G
-rw------- 1 root root 200G Apr 14 08:47 data
-rw------- 1 root root 2.0G Apr 19 13:27 metadata

The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

# echo $(( $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ))
100

# losetup -c /dev/loop0

# echo $(( $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ))
200

Reload the devicemapper thin pool.

a. Get the pool name first. The pool name is the first field, delimited by `:`. This command extracts it.

# dmsetup status | grep ' thin-pool ' | awk -F ': ' '{print $1}'
docker-0:39-1566-pool


b. Dump the device mapper table for the thin pool.

#  dmsetup table docker-0:39-1566-pool

0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing

c. Calculate the total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100G file has 209715200 512-byte sectors. If you double this number to 200G, you get 419430400 512-byte sectors.
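The sector arithmetic in step (c) can be sanity-checked with a couple of lines of Python:

```python
SECTOR_BYTES = 512  # dmsetup table sizes are expressed in 512-byte sectors


def gib_to_sectors(gib):
    """Convert a size in GiB to a count of 512-byte sectors."""
    return gib * 1024**3 // SECTOR_BYTES


print(gib_to_sectors(100))  # 209715200, the second field shown by dmsetup table
print(gib_to_sectors(200))  # 419430400, the new sector count for the 200G pool
```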

d. Reload the thin pool with the new sector number, using the following three dmsetup commands.

# dmsetup suspend docker-0:39-1566-pool

# dmsetup reload docker-0:39-1566-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'

# dmsetup resume docker-0:39-1566-pool

Now the space is increased to 200 GB.

chef attribute : avoiding "undefined method `[]' for nil:NilClass" error

In Chef, when accessing a nested attribute that might not exist or has not been created yet, you can rescue the resulting NoMethodError around the access.

For example, assuming that only some of your nodes have node['install_wls']['isManaged'] defined:

begin
  if node['install_wls']['isManaged']
    # do the stuff
  end
rescue NoMethodError
  # node['install_wls'] is nil on this node; skip
end

will avoid the chef run bailing out with the "undefined method `[]' for nil:NilClass" error.

Beware though: if any legitimate NoMethodError is raised inside your if block, you've just swallowed an exception you shouldn't have.

I use this in cookbooks that have extra functionality if some dependency of another component's recipe happens to be installed. For example, the odi cookbook depends on wls, or odi depends on oracledb.
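A safer pattern, where available, is Hash#dig (Ruby >= 2.3; the Chef node object responds to it as well), which returns nil instead of raising when an intermediate key is missing. A minimal sketch with a plain hash standing in for the node object:

```ruby
# Plain hash standing in for the Chef node object (illustration only)
node = { 'install_wls' => { 'isManaged' => true } }

# dig walks the nested keys and returns nil if any level is missing,
# so no NoMethodError is raised and no rescue is needed
if node.dig('install_wls', 'isManaged')
  puts 'WLS is managed on this node'
end

puts node.dig('no_such', 'attribute').inspect  # prints nil rather than raising
```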

Python : In-place update of JSON while maintaining proper order

Sometimes we have to read an existing JSON property file and update some values in place.

If we don't use the proper approach, the update may reorder the keys or break the JSON structure in the file.

We have to hook the JSON objects using OrderedDict from python's collections module to remember the proper order.

Here old_value in a file like the following is updated with new_value:

{
    "head": {
        "name": "old_value"
    }
}

[code language="python"]
import os
import json
from collections import OrderedDict

propJson = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'props.json')
if os.path.isfile(propJson):
    with open(propJson, 'r+') as f:
        # object_pairs_hook=OrderedDict preserves the original key order
        prop = json.load(f, object_pairs_hook=OrderedDict)
        prop['head']['name'] = str(new_value)  # new_value is defined elsewhere
        f.seek(0)
        f.write(json.dumps(prop, default=str, indent=4))
        f.truncate()
[/code]


Send mail using Python's smtplib module

Python has a built-in module to send mail to recipients as To, Cc, and Bcc. The assumption here is that SMTP is configured on localhost (where the script will run).

[code language="python"]
import smtplib
from email.mime.text import MIMEText

def SendMail(file, to, cc, bcc, status):
    # to, cc, bcc are lists of recipient addresses
    with open(file) as fp:
        msg = MIMEText(fp.read())
    msg['Subject'] = 'MULTINODE SETUP :: ' + status
    msg['From'] = 'sender@example.com'   # placeholder sender address
    msg['To'] = ', '.join(to)
    msg['Cc'] = ', '.join(cc)
    # Bcc recipients go only in the envelope, not in a visible header
    s = smtplib.SMTP('localhost')
    s.sendmail(msg['From'], to + cc + bcc, msg.as_string())
    s.quit()
[/code]





Knife remove all recipes from the run_list

There is a simple knife command that can be used to remove all recipes from the run_list of all nodes in an environment.

For this you have to create a dummy role, say dummy_role.

#knife role create dummy_role

Once you create the dummy role, assign this role to all the nodes in the environment using the knife command below.


#knife exec -E 'nodes.transform(:all) {|n| n.run_list(["role[dummy_role]"])}'


Now this command would remove all the recipes added to the run_list of the nodes in the environment and add dummy_role to the run_list.

We can remove the dummy_role from the run_list of all the nodes and make it empty.

#knife exec -E 'nodes.find("role:dummy_role") {|n| n.run_list.remove("role[dummy_role]"); n.save}'

This is helpful in scenarios where you need to remove all the recipes from every node in the environment and start adding them afresh.

Chef – Create encrypted data bag and keep secrets

Sometimes we have to deal with global secrets like user passwords, database passwords, API keys, and middleware boot properties in our Chef recipes, which shouldn't be exposed outside.

One solution is to keep all the secrets in a data bag and encrypt them using a random secret key, later distributing the key to the other nodes where the secrets are accessed.


The other solution is using chef-vault, which we will cover in a later topic.

First we have to create a random encryption key:

openssl rand -base64 512 | tr -d '\r\n' > rev_secret_key

We have to use this secret key now to encrypt the data bag item "revpass" in the data bag "rev_secret".

[code language="bash"]
export EDITOR=vi
knife data bag create --secret-file ./rev_secret_key rev_secret revpass
[/code]


This will open the vi editor with the JSON data:

{
  "id": "revpass"
}

Now add your secrets here in JSON format:

{
  "id": "revpass",
  "boot_pass": "bootpassword",
  "db_pass": "dbpassword"
}

Save and exit.

Show the encrypted contents of your databag:

knife data bag show rev_secret revpass

Show the decrypted contents of your databag:

knife data bag show --secret-file=./rev_secret_key rev_secret revpass

For your chef clients to be able to decrypt the databag when needed, just copy over the secret key (replace client-node with your IP/node name):

scp ./rev_secret_key client-node:/etc/chef/encrypted_data_bag_secret


Alternatively, keep it in the ~/.chef directory and update the setting in the knife.rb file:

encrypted_data_bag_secret "~/.chef/encrypted_data_bag_secret"

Accessing the secret in a recipe:

Alternatively, mention the secret key file in the recipe as below. In your db recipe, add these lines:

secret = Chef::EncryptedDataBagItem.load_secret('/var/chef/cache/cookbooks/revrec-chef/files/default/revrec_secret_key')

passwords = Chef::EncryptedDataBagItem.load('rev_secret', 'revpass', secret)

dbpasswd = passwords['db_pass']

Use it inside a resource:


oradb_password = "#{dbpasswd}"


Or keep it in a template, however it's suitable.

Note: if you are using the password in a template, turn off logging by adding this attribute in the template resource:


sensitive true