Docker Issue: devmapper: Thin Pool is less than minimum required, use dm.min_free_space option to change behavior

Sometimes while building Docker images or performing other Docker operations, we might encounter a thin pool space issue like this:

devmapper: Thin Pool has 132480 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:

[root@rvm-c431e558 proj_odi11g]# docker info | grep 'loop file'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Data loop file: /data/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /data/lib/docker/devicemapper/devicemapper/metadata
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Follow these steps to increase the size of the thin pool.

In this example, the thin pool is 100 GB, and is increased to 200 GB.

List the sizes of the devices.

[root@rvm-c431e558 proj_odi11g]# ls -lh /data/lib/docker/devicemapper/devicemapper/
total 89G
-rw------- 1 root root 100G Mar 19 08:45 data
-rw------- 1 root root 2.0G Mar 19 08:45 metadata

Increase the size of the data file to 200 GB using the truncate command, which is used to increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

# truncate -s 200G /data/lib/docker/devicemapper/devicemapper/data
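As a side note, `truncate -s` grows the file sparsely, so the extra space is not actually allocated on disk up front. A small Python sketch of the same idea (the scratch file here is purely illustrative):

```python
import os
import tempfile

def sparse_grow(size_bytes):
    """Create a scratch file, grow it sparsely (like `truncate -s`), and
    return (apparent_size, allocated_bytes)."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.truncate(path, size_bytes)  # only the apparent size changes
    st = os.stat(path)
    os.remove(path)
    return st.st_size, st.st_blocks * 512

apparent, allocated = sparse_grow(200 * 1024**3)
print(apparent)   # 214748364800 bytes (200 GiB) apparent size
print(allocated)  # actual allocation stays near zero (sparse file)
```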
Verify that the file size changed.

#  ls -lh /data/lib/docker/devicemapper/devicemapper/

total 1.2G
-rw------- 1 root root 200G Apr 14 08:47 data
-rw------- 1 root root 2.0G Apr 19 13:27 metadata

The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

100

# losetup -c /dev/loop0

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

200

Reload the devicemapper thin pool.

a. Get the pool name first. The pool name is the first field, delimited by `: `. This command extracts it.

#  dmsetup status | grep ' thin-pool ' | awk -F ': ' '{print $1}'

docker-0:39-1566-pool

b. Dump the device mapper table for the thin pool.

#  dmsetup table docker-0:39-1566-pool

0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing

c. Calculate the total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100G file has 209715200 512-byte sectors. If you double this number to 200G, you get 419430400 512-byte sectors.
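The arithmetic can be double-checked with a few lines of Python (a quick sanity check, not part of the procedure itself):

```python
# Device-mapper tables count 512-byte sectors.
SECTOR_SIZE = 512

def size_to_sectors(gib):
    """Convert a size in GiB to the number of 512-byte sectors."""
    return gib * 1024**3 // SECTOR_SIZE

print(size_to_sectors(100))  # 209715200, matching field 2 of the table above
print(size_to_sectors(200))  # 419430400, the value to use in the reload
```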

d. Reload the thin pool with the new sector number, using the following three dmsetup commands.

# dmsetup suspend docker-0:39-1566-pool

#  dmsetup reload docker-0:39-1566-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'

# dmsetup resume docker-0:39-1566-pool

The thin pool space is now increased to 200 GB.

ORA-12519: TNS:no appropriate service handler found

The number of database processes that can run in an Oracle DB defaults to 150, which can be viewed from v$resource_limit.

select * from v$resource_limit where resource_name = 'processes';


RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION LIMIT_VALUE
------------- ------------------- --------------- ------------------ -----------
processes     148                 149              150                150

We face this issue, ORA-12519, randomly when the number of DB processes utilized by our application is more than 150. So we need to increase the number of DB processes in order to avoid the issue. Use the steps below to increase the limit.

[code language="sql"]

sqlplus / as sysdba
SQL> alter system set processes=500 scope=spfile;
SQL> shutdown immediate;
SQL> startup

[/code]

Now query the view to see the new limits in effect.

select * from v$resource_limit where resource_name = 'processes';


RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION LIMIT_VALUE
------------- ------------------- --------------- ------------------ -----------
processes     279                 285              500                500

pexpect alternative in python for remote connection

We generally use the Python pexpect module to connect to systems remotely over ssh and execute our tasks. But sometimes the pexpect module is not installed on the systems involved, which creates problems. This problem can be solved with the poll interface of Python's built-in select module.

Here is the sample code that can be used.

https://github.com/kumarprd/pexpect-alternate
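For reference, the core of the approach is a poll loop over the subprocess's output pipe. Below is a minimal sketch of that idea (not the code from the linked repo; the ssh command mentioned in the comment is only an assumption, and a local command is used so the example is self-contained):

```python
import select
import subprocess

def run_and_poll(cmd, timeout_ms=5000):
    """Run a command and collect its output with select.poll(),
    the same loop you would wrap around an ssh subprocess."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    poller = select.poll()
    poller.register(proc.stdout, select.POLLIN | select.POLLHUP)
    output = b""
    while True:
        events = poller.poll(timeout_ms)
        if not events:          # timed out waiting for output
            break
        chunk = proc.stdout.read1(4096)
        if not chunk:           # EOF: the process closed the pipe
            break
        output += chunk
    proc.wait()
    return output.decode()

# For a remote host, cmd could be ["ssh", "user@host", "uname -a"]
# (assuming key-based auth). Local example:
print(run_and_poll(["echo", "hello"]))
```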

Accessing Host from Docker Container

Sometimes we need services running on the host machine to be accessible from a Docker container. For example, in one of my projects, we needed to connect to an Oracle DB (port 1521) from inside the container, within code.

By default, containers can't access the host network directly unless the firewall of the host machine allows the Docker network interface to ACCEPT the packets.

So the Docker container will communicate with the host machine using the gateway IP. First, find the gateway IP inside the container.

Run the commands below inside the container to get the gateway IP, and observe that I am not able to connect to port 1521.

[code language="bash"]

# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) : Connection timed out

# ip route | awk '/^default via /{print $3}'

172.18.0.1

[/code]

The next task is to get the name of the Docker network interface bound to the container. In most cases it's docker0.

But it can also be customized, so check the ifconfig output for the interface whose inet addr matches the container's gateway.

[code language="bash"]

# ifconfig

br-4e83b57c54cf Link encap:Ethernet  HWaddr 02:42:AF:CD:B5:DA

inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

# ip addr show br-4e83b57c54cf

10: br-4e83b57c54cf: mtu 1500 qdisc noqueue state UP

link/ether 02:42:af:cd:b5:da brd ff:ff:ff:ff:ff:ff

inet 172.18.0.1/16 scope global br-4e83b57c54cf

valid_lft forever preferred_lft forever

[/code]

Here the interface name is : br-4e83b57c54cf

Now add an iptables rule on the Linux host:

[code language="bash"]

iptables -A INPUT -i br-4e83b57c54cf -j ACCEPT

[/code]

Or, with firewalld:

[code language="bash"]
# firewall-cmd --permanent --zone=trusted --change-interface=br-4e83b57c54cf
# firewall-cmd --reload

[/code]

Now try to access the host port from container.

[code language="bash"]

# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) open

[/code]

There are other ways available on the internet, but I found none of them working.

SNMP Poller tool to monitor anything on the network

A few years ago, I created an SNMP poller tool, using Perl and SNMP utilities, that can poll OID information from any network device, which is a kind of passive monitoring mechanism.

I thought to make it open source under the GNU GPL.

The details of the utility, with its usage, can be found here, if anyone is interested in using it:

https://github.com/kumarprd/snmp-poller

My MongoDB cheatsheet

A few years ago, I was working on a MongoDB project. It's a NoSQL database, easy to learn, and purely JSON based.

I learned it myself and created a cheatsheet for it, so I can easily recall it anytime I want. It can help anyone who is interested in learning it quickly.

I thought to share the sheet. It can be found here:

https://github.com/kumarprd/mongo-guide/blob/master/my-mongo_guide.txt

chef attribute: avoiding "undefined method `[]' for nil:NilClass" error

In Chef, when accessing a nested attribute that might not exist or has not been created yet, you can use rescue as a modifier on the if statement.

For example, assuming that only some of your nodes have node['install_wls']['isManaged'] defined:

if node['install_wls']['isManaged']
  # do the stuff
end rescue NoMethodError

will avoid the Chef run bailing out with the "undefined method `[]' for nil:NilClass" error.

The caveat: if there are any legitimate method errors raised within your if block (for example, while handling empty/nil/blank attributes), you've just caught an exception you shouldn't have.

I use this in cookbooks that have extra functionality if some dependency of another component's recipe happens to be installed. For example, the odi cookbook depends on wls, or odi depends on oracledb.

chef knife tricks: Add a node to an environment

Sometimes during the automation of a large deployment process, we have to bootstrap a node, create an environment, and add the node to that particular environment on the fly.

1. Bootstrapping:

[code language="ruby"]

knife bootstrap myhost09vmf0209.in.ram.com -x root -P password -N node2

[/code]

2. Create the environment dynamically from inside the program (Python here):

[code language="python"]

import subprocess

## Create an env template with the required values in it

envtemplate = """
{
"name": \"""" + envname + """\",
"description": "The master development branch",
"cookbook_versions": {
},
"json_class": "Chef::Environment",
"chef_type": "environment",
"default_attributes": {
"revrec":
{
"required_version": \"""" + appVersion + """\"
}
},
"override_attributes": {
}
}
"""

## Write the env template to a file

with open("/tmp/" + envname + ".json", "w") as f:
    f.write(envtemplate)

## Create the env from the template JSON file

subprocess.call("knife environment from file /tmp/" + envname + ".json", shell=True)

[/code]

3. Add the node to the environment:

[code language="ruby"]

knife exec -E 'nodes.find("name:node2") {|n| n.chef_environment("env209"); n.save }'

[/code]

Nagios Plugins Developed @ Nagios Exchange

Long ago, while working at a previous organization, there were lots of components, such as services and servers, running in the production environment. I had deployed all the products one by one from scratch, and the count kept increasing. There were components like PLM servers, a DB server, license management, an internal portal, a container-based virtualization system, and a lot more.

But there was no proper tool to monitor all the components at once. As the count kept increasing, it became difficult to keep an eye on the up/down status of them all.

So I decided to deploy the Nagios monitoring system in the data center and developed many plugins for it.

I have open-sourced a few of the plugins, which I thought could help other people around the world who may be facing these kinds of challenges.

I also posted them on Nagios Exchange about 4 years ago, and they are now a huge success. Each has been downloaded 50k+ times, and I have received thanks from many people around the world, which makes me happy.

They can be found here: https://exchange.nagios.org/directory/Owner/divyaimca/1