Parse expect_out(buffer) into a variable : Tcl Data Structure

Sometimes we have to automate steps through Expect: run some commands on a remote machine, capture the output of a command in a variable, and use that variable in some other task.

Here is an example of how we can do that. The output from Expect is always captured in expect_out(buffer), and we have to parse it to get the result we want.

First we store expect_out(buffer) in a variable, which will contain multiple lines along with our expected result.

Then we split that variable with "\n" as the delimiter, which creates a Tcl list with one element per line.

Finally, we use lindex on that list to extract the result from a certain position.

Here is one example.

[code lang='bash']
#!/usr/bin/expect
set password somepass
set cmd "ls -Art /var/lib/docker/path_to_files/ | tail -n 1"

spawn ssh root@10.59.1.150
set prompt "#|%|>|\\\$ $"
expect {
"(yes/no)" {send "yes\r";exp_continue}
"password: " {send "$password\r";exp_continue}
-re $prompt
}
send "$cmd\r"
expect "# "

# expect_out(buffer) holds everything received up to and including the match;
# index 0 of the split is the echoed command, index 1 is the first output line
set outcome [split $expect_out(buffer) "\n"]
set filename [lindex $outcome 1]

send "exit\r"
expect eof
puts "##########################"
puts $filename
puts "##########################"
[/code]
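If you would rather drive the same flow from Python, the pexpect library follows the same model. This is a minimal sketch using the same host, password and command as above, not part of the original script; it also skips the first-connection yes/no host-key prompt that the Expect version handles:

[code language="python"]
import pexpect

## spawn the ssh session; encoding="utf-8" makes the output str instead of bytes
child = pexpect.spawn("ssh root@10.59.1.150", encoding="utf-8")
child.expect("password: ")
child.sendline("somepass")
child.expect("# ")
child.sendline("ls -Art /var/lib/docker/path_to_files/ | tail -n 1")
child.expect("# ")

## child.before is everything received before the prompt matched;
## line 0 is the echoed command, line 1 is its first output line
filename = child.before.split("\n")[1].strip()
print(filename)
[/code]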

Docker Issue : devmapper: Thin Pool is less than minimum required, use dm.min_free_space option to change behavior

Sometimes while building Docker images or doing other Docker operations, we might encounter a thin pool space issue like this:

devmapper: Thin Pool has 132480 free data blocks which is less than minimum required 163840 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior

If you do not want to use the device-tool utility, you can resize a loop-lvm thin pool manually using the following procedure.

In loop-lvm mode, a loopback device is used to store the data, and another to store the metadata. loop-lvm mode is only supported for testing, because it has significant performance and stability drawbacks.

If you are using loop-lvm mode, the output of docker info shows file paths for Data loop file and Metadata loop file:

[root@rvm-c431e558 proj_odi11g]# docker info | grep 'loop file'
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Data loop file: /data/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /data/lib/docker/devicemapper/devicemapper/metadata
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

Follow these steps to increase the size of the thin pool.

In this example, the thin pool is 100 GB, and is increased to 200 GB.

List the sizes of the devices.

[root@rvm-c431e558 proj_odi11g]# ls -lh /data/lib/docker/devicemapper/devicemapper/
total 89G
-rw------- 1 root root 100G Mar 19 08:45 data
-rw------- 1 root root 2.0G Mar 19 08:45 metadata

Increase the size of the data file to 200 GB using the truncate command, which can increase or decrease the size of a file. Note that decreasing the size is a destructive operation.

# truncate -s 200G /data/lib/docker/devicemapper/devicemapper/data
Verify the file size changed.

#  ls -lh /data/lib/docker/devicemapper/devicemapper/

total 89G
-rw------- 1 root root 200G Mar 19 08:47 data
-rw------- 1 root root 2.0G Mar 19 08:45 metadata

The loopback file has changed on disk but not in memory. List the size of the loopback device in memory, in GB. Reload it, then list the size again. After the reload, the size is 200 GB.

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

100

# losetup -c /dev/loop0

# echo $[ $(sudo blockdev --getsize64 /dev/loop0) / 1024 / 1024 / 1024 ]

200

Reload the devicemapper thin pool.

a. Get the pool name first. The pool name is the first field of the dmsetup status output, delimited by `:`. This command extracts it.

#  dmsetup status | grep ' thin-pool ' | awk -F ': ' '{print $1}'

docker-0:39-1566-pool

b. Dump the device mapper table for the thin pool.

#  dmsetup table docker-0:39-1566-pool

0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing

c. Calculate the new total sectors of the thin pool using the second field of the output. The number is expressed in 512-byte sectors. A 100 GB file has 209715200 512-byte sectors; doubling this to 200 GB gives 419430400 512-byte sectors (see the short check after these steps).

d. Reload the thin pool with the new sector number, using the following three dmsetup commands.

# dmsetup suspend docker-0:39-1566-pool

#  dmsetup reload docker-0:39-1566-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing'

# dmsetup resume docker-0:39-1566-pool

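As a quick sanity check on the arithmetic in step c, here is a minimal Python sketch; nothing Docker-specific, just the sector math:

[code language="python"]
## A sector is 512 bytes; sizes here are in GiB
GIB = 1024 ** 3
SECTOR_SIZE = 512

old_sectors = 100 * GIB // SECTOR_SIZE  # 209715200, the second field in `dmsetup table`
new_sectors = 200 * GIB // SECTOR_SIZE  # 419430400, the value passed to `dmsetup reload`

print(old_sectors, new_sectors)
[/code]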

The thin pool space is now increased to 200 GB.

chef attribute : avoiding "undefined method `[]' for nil:NilClass" error

In Chef, when referencing a nested attribute that might not exist or has not been created yet, you can use rescue as a modifier on an if statement.

For example, assuming that only some of your nodes have node['install_wls']['isManaged'] defined:

if node['install_wls']['isManaged']
  # do the stuff
end rescue NoMethodError

this will keep the Chef run from bailing out with the "undefined method `[]' for nil:NilClass" error.

The caveat: if any legitimate NoMethodError is raised inside your if block, you have just swallowed an exception you should not have.

I use this in cookbooks that gain extra functionality when some dependency of another component's recipe happens to be installed; for example, the odi cookbook depends on wls, or odi depends on oracledb.

chef knife tricks: Add a node to an environment

 

Sometimes, while automating a large deployment process, we have to bootstrap a node, create an environment, and add the node to that particular environment on the fly.

 

  1. Bootstrapping:

[code language="bash"]

knife bootstrap  myhost09vmf0209.in.ram.com -x root -P password -N node2

[/code]

2. Create the environment dynamically from inside the program (Python here):

[code language="python"]

import subprocess

## Create an envtemplate with the required values in it;
## envname and appVersion are assumed to be set earlier in the program

envtemplate = """
{
  "name": \"""" + envname + """\",
  "description": "The master development branch",
  "cookbook_versions": {
  },
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "default_attributes": {
    "revrec":
    {
      "required_version": \"""" + appVersion + """\"
    }
  },
  "override_attributes": {
  }
}
"""

## Write the envtemplate to a file

with open("/tmp/" + envname + ".json", "w") as f:
    f.write(envtemplate)

## Create the env from the template json file

subprocess.call("knife environment from file /tmp/" + envname + ".json", shell=True)

[/code]
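As an aside, building the same document as a dict and serializing it with json.dumps avoids the fiddly quote escaping; a minimal sketch, assuming the same envname and appVersion variables:

[code language="python"]
import json

## Same environment document as above, built as a plain dict
env = {
    "name": envname,
    "description": "The master development branch",
    "cookbook_versions": {},
    "json_class": "Chef::Environment",
    "chef_type": "environment",
    "default_attributes": {"revrec": {"required_version": appVersion}},
    "override_attributes": {},
}

with open("/tmp/" + envname + ".json", "w") as f:
    json.dump(env, f, indent=2)
[/code]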

3. Add the node to the environment:

[code language="bash"]

knife exec -E 'nodes.find("name:node2") {|n| n.chef_environment("env209"); n.save }'

[/code]
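If you are driving the whole flow from the same Python program as step 2, steps 1 and 3 can be wrapped with subprocess in just the same way; a rough sketch using the example host, node and environment names from above:

[code language="python"]
import subprocess

## Step 1: bootstrap the node
subprocess.call("knife bootstrap myhost09vmf0209.in.ram.com -x root -P password -N node2",
                shell=True)

## Step 3: move the node into the freshly created environment
subprocess.call("knife exec -E 'nodes.find(\"name:node2\") "
                "{|n| n.chef_environment(\"env209\"); n.save }'",
                shell=True)
[/code]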

 

Nagios Plugin Developed@NagiosExchange

Long ago, while working at one of my previous organizations, there were lots of components, services and servers, running in the production environment. I had deployed all the products one by one from scratch, and the count kept increasing: PLM servers, a DB server, license management, an internal portal, a container-based virtualization system, and a lot more.

But there was no proper tool to monitor all the components at once. As the count kept increasing, it became difficult to keep an eye on the up/down status of them all.

So I decided to deploy the Nagios monitoring system in the data center, and I developed many plugins for it.

I have open-sourced a few of the plugins that I thought could help other people facing the same kind of challenges.

I posted them on Nagios Exchange about 4 years ago, and they have been a huge success: each has been downloaded 50k+ times, and the thanks I have received from people around the world make me happy.

They can be found here: https://exchange.nagios.org/directory/Owner/divyaimca/1

Chef Recipe: Oracle DB 11gR2 EE silent deploy

Chef provides a lot of flexibility and greater choice for infrastructure automation and I prefer it over others.

We should design our recipes so that they can be used in any environment without modification, by maximizing the use of attributes.

I was working on a deployment project on the Linux x86-64 platform where I had to automate all the infra components, Oracle 11g R2 EE being one of them. I will share the cookbook here in the hope that it helps others. The recipes perform a silent installation of the DB using a response file, after pulling the media files from a remote system.

The recipes are also made idempotent, so rerunning the cookbook again and again never does any damage. After a successful compile -> run of the recipes, it automatically sets attributes for DB installed / DB running on the Chef server.

The usernames/passwords are stored in and pulled from an encrypted data bag to make things more secure.

Here is the cookbook : https://github.com/kumarprd/Ora11gR2-EE-Silent-Install-Chef-Recipe

The recipes involved run through the steps below in sequence:

  1. setupenv.rb (creates the environment that will be used by the rest of the recipes)
  2. oradb.rb (checks the default attributes to decide between a fresh install and a patch install, and routes further operations accordingly)
  3. install_oradb.rb (installs the Oracle database in an idempotent manner and sets the attributes on the server)
  4. create_schema.rb (this is application specific, but I provide a template that can be modified)

NOTE : Create an encrypted data bag with the JSON properties below, which are accessed inside the recipes.

Follow my other post : https://thegnulinuxguy.com/2016/08/09/chef-create-encrypted-data-bag-and-keep-secrets/

{
  "id": "apppass",
  "ora_db_passwd": "dbpass",
  "oracle_pass": "orapass"
}

Any issues/suggestions are welcome.

Docker Supervisord – A way to run multiple daemon processes in a container

Docker was designed with one daemon per container in mind, which keeps containers lightweight. For running a web application, say, one container serves the database, one serves as the web server, and one serves as a caching server connecting to the DB.

So while writing a Dockerfile, the limitation is: only one CMD parameter can be used inside it, to run a single foreground process, the exit of which stops the container.

But sometimes we face situations where we need to run more than one daemon process in a single container, that is, to set up the complete stack in one container.

For this we can have two approaches:

  1. A bash script that starts all the processes in the background in sequence, except the last one, which must run in the foreground (without &) so that the container stays alive.
  2. Using supervisor: an easy, templatized way of managing multiple processes in the container.

Use case : I faced a situation where I had to run sshd, httpd and mysqld in a single container, and here is how I approached it with supervisor.

Also, via supervisor's stdout, we can redirect the service logs to the terminal.

The three config files used here:

  1. Dockerfile
  2. supervisor.conf
  3. docker-compose.yml

These files can be accessed from my git repo:

https://github.com/kumarprd/docker-supervisor
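For reference, a minimal supervisor.conf along these lines would produce the startup log shown below. This is a sketch reconstructed from that log output (program names, nodaemon, the redirect_stderr settings), not the exact file from the repo; command paths and flags depend on the image:

[code]
[supervisord]
nodaemon=true                        ; stay in the foreground as the container's PID 1

[program:sshd]
command=/usr/sbin/sshd -D            ; -D keeps sshd from daemonizing
redirect_stderr=true

[program:mysqld]
command=/usr/bin/mysqld_safe
redirect_stderr=true

[program:httpd]
command=/usr/sbin/httpd -DFOREGROUND ; exact foreground flag depends on the httpd version
redirect_stderr=true
[/code]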

Next, run the commands below:

  1. docker-compose build (builds the image by reading these files)

# docker-compose build

Building web
Step 1 : FROM oraclelinux:6.8
---> 7187d444f0ce
Step 2 : ENV container docker
---> Running in 8cff18dabcc4
---> 655b5004777a

........

........

Step 20 : CMD /usr/bin/supervisord -c /etc/supervisor.conf
---> Running in 4ffed54b078f
---> dfb974e07bfb
Removing intermediate container 4ffed54b078f
Successfully built dfb974e07bfb

2. docker-compose up

# docker-compose up
Creating supervisord_web_1
Attaching to supervisord_web_1
web_1 | 2016-10-01 05:57:55,357 CRIT Supervisor running as root (no user in config file)
web_1 | 2016-10-01 05:57:55,357 WARN For [program:sshd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:mysqld], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,357 WARN For [program:httpd], redirect_stderr=true but stderr_logfile has also been set to a filename, the filename has been ignored
web_1 | 2016-10-01 05:57:55,364 INFO supervisord started with pid 1
web_1 | 2016-10-01 05:57:56,369 INFO spawned: 'httpd' with pid 7
web_1 | 2016-10-01 05:57:56,373 INFO spawned: 'sshd' with pid 8
web_1 | 2016-10-01 05:57:56,377 INFO spawned: 'mysqld' with pid 9
web_1 | Could not load host key: /etc/ssh/ssh_host_rsa_key
web_1 | Could not load host key: /etc/ssh/ssh_host_dsa_key
web_1 | 161001 05:57:56 mysqld_safe Logging to '/var/log/mysqld.log'.
web_1 | 161001 05:57:56 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
web_1 | httpd: Could not reliably determine the server's fully qualified domain name, using 172.18.0.2 for ServerName
web_1 | 2016-10-01 05:57:57,649 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: sshd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
web_1 | 2016-10-01 05:57:57,649 INFO success: mysqld entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

3. Check the process table:

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
edd870f7e3ca testimg.supervisor “/usr/bin/supervisord” 19 minutes ago Up 19 minutes 0.0.0.0:5002->22/tcp, 0.0.0.0:5000->80/tcp, 0.0.0.0:5001->3306/tcp supervisord_web_1

 

4.  Connect to the container and check the services:

ssh -p 5002 root@<FQDN of the host where docker engine is running>

root@<FQDN>'s password:
Last login: Sat Oct 1 06:07:43 2016 from <FQDN>

[root@edd870f7e3ca ~]# /etc/init.d/mysqld status
mysqld (pid 101) is running...
[root@edd870f7e3ca ~]# /etc/init.d/httpd status
httpd (pid 7) is running...
[root@edd870f7e3ca ~]# /etc/init.d/sshd status
openssh-daemon (pid 8) is running...
[root@edd870f7e3ca ~]#

 

Python : In-place JSON update while maintaining proper order

Sometimes we have to read an existing JSON property file and update some values in place.

If we don't use the proper approach, the update may end up breaking the JSON structure in the file.

We have to hook the JSON parsing with OrderedDict from Python's collections module so that the original key order is remembered.

Here old_value is updated to new_value:

"head":
{
  "name" : "old_value"
}

 

[code language="python"]

import json
import os
from collections import OrderedDict

## new_value is assumed to be set earlier in the program
propJson = os.path.dirname(os.path.abspath(__file__)) + "/props.json"
if os.path.isfile(propJson):
    with open(propJson, "r+") as f:
        ## object_pairs_hook=OrderedDict preserves the original key order
        prop = json.load(f, object_pairs_hook=OrderedDict)
        prop["head"]["name"] = str(new_value)
        ## rewind and rewrite the file in place, truncating any leftover bytes
        f.seek(0)
        f.write(json.dumps(prop, default=str, indent=4))
        f.truncate()

[/code]

Using optparse in Python

Sometimes we have to create tools that take inputs as arguments with certain options. We can build such a tool with the optparse module of Python.

Here is a small example of using this.

 

[code language="python"]

from optparse import OptionParser

parser = OptionParser(usage='usage: %prog [options] arguments')

parser.add_option('-a', help="setup/cleanup", action="store", dest="action")
parser.add_option('-m', help="email id", action="store", dest="email")
parser.add_option('-i', help="Input json props", action="store", dest="input")
(options, args) = parser.parse_args()

[/code]

For help, type the following (it will display all the arguments that can be used, with their format):

python tool.py -h

Usage: tool.py [options] arguments

Options:
-h, --help show this help message and exit
-a ACTION setup/cleanup
-m EMAIL email id
-i INPUT Input json props

Save it in a program and execute it as:

python tool.py -a setup -i file.json -m pdk@pdk.com

 

Now we can access the above inputs inside the program through these variables:

[code language="python"]

options.input

options.email

options.action

[/code]
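Putting it together, here is a short sketch of how the parsed options might be validated and used; the validation rules are illustrative, not part of the original tool:

[code language="python"]
## parser.error() prints the usage string and exits with a non-zero status
if options.action not in ("setup", "cleanup"):
    parser.error("-a must be 'setup' or 'cleanup'")
if not options.input:
    parser.error("-i <json props file> is required")

print("action:", options.action)
print("input :", options.input)
print("email :", options.email)
[/code]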