Chef – Deleting existing attributes

Sometimes we face situations like:

  1. We may need to remove a persistent attribute that was set as a flag after some work was done.
  2. We may have set an attribute wrongly, so we need to remove it and reset it.

The attribute may be set in attributes/default.rb, or inside a recipe as a node normal attribute, via node.set, or as a node override attribute.

There is a nice knife subcommand (knife exec) with which we can transform the attributes.


Suppose a node with the attribute hierarchy:

"normal": {
  "install_oradb": {
    "is_installed": "True",
    "is_running": "True"
  }
}
knife exec -E "nodes.transform(:all) {|n| n.normal_attrs[:install_oradb].delete(:is_installed) rescue nil }"
knife exec -E "nodes.transform(:all) {|n| n.normal_attrs[:install_oradb].delete(:is_running) rescue nil }"


Here we can replace normal_attrs with override_attrs OR default_attrs as required.

This will remove the attributes from all the nodes.


Suppose we want to remove the attributes from a particular node only. In that case, search for the node name in the Chef Solr index and replace :all with the required node name.


knife exec -E "nodes.transform(:node2) {|n| n.normal_attrs[:install_oradb].delete(:is_installed) rescue nil }"
knife exec -E "nodes.transform(:node2) {|n| n.normal_attrs[:install_oradb].delete(:is_running) rescue nil }"

Chef/ruby way – Read a file and expose as environment variable

Many times we have to read a property file in which the variable and value are separated by a delimiter, and we have to set those as environment variables to execute certain recipes.


e.g. property file (/u01/data/wor/app/conf/conf.prop)

ops_home = '/u01/data/work/app/ops-home-1.2.30'

node_instance = '/u01/data/work/app'


So here we have to read the file, create a hash with the LHS as key and the RHS as value, and then expose the pairs as environment variables.

Note : This is the approach I used, there may be other solution available.

Here the properties are = separated. It can be any separator.

This is a reusable function and can be called wherever required.

[code language="ruby"]
def setupenv
  hash1 = {}"/u01/data/wor/app/conf/conf.prop") do |fp|
    fp.each do |line|
      key, value = line.chomp.split("=", 2)
      hash1[key] = value
  hash1.each do |key, value|
    skey = key.to_s.gsub(/\s|"|'/, '')
    svalue = value.to_s.gsub(/\s|"|'/, '')
    ENV[skey] = svalue

Here setupenv() can be called anywhere the ENV variables are required.

Note: Here gsub(/\s|"|'/, '') is used to strip spaces, single quotes and double quotes from the key and value.
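For comparison, the same parsing can be sketched in plain shell. This is a minimal sketch, not the recipe's actual code: it writes the two example entries to a temporary file (the real path in the post is hypothetical here), splits each line on the first =, strips spaces and quotes, and exports the pair.

```shell
# Build a throwaway properties file with the two example entries.
prop=$(mktemp)
cat > "$prop" <<'EOF'
ops_home = '/u01/data/work/app/ops-home-1.2.30'
node_instance = '/u01/data/work/app'
EOF

# Split each line on the first '=', strip spaces/quotes, export the pair.
while IFS='=' read -r key value; do
  key=$(printf '%s' "$key" | tr -d " \"'")
  value=$(printf '%s' "$value" | tr -d " \"'")
  [ -n "$key" ] && export "$key=$value"
done < "$prop"

echo "$ops_home"   # /u01/data/work/app/ops-home-1.2.30
rm -f "$prop"
```

The `split("=", 2)` in the Ruby version corresponds to splitting on the first = only, which `IFS='=' read -r key value` also does: everything after the first = lands in value.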



Chef Issue – Recover deleted user pivotal

By default, "pivotal" is the only Chef server superuser with permission to create users, organizations, groups, etc. on the Chef server. So if by mistake you delete the "pivotal" user with the command below:

# chef-server-ctl user-delete pivotal

Then, if you run any further command (list, create, delete, etc.) related to users or organizations, it will fail with the following error:

Response:  Failed to authenticate as 'pivotal'. Ensure that your node_name and client key are correct.


So to overcome this issue we have to recreate "pivotal" with the required authorization in the Postgres database.

Follow the steps below:

Create pivotal's public key from /etc/opscode/pivotal.pem and store it in an accessible location:

# openssl rsa -in /etc/opscode/pivotal.pem -pubout > /var/opt/opscode/postgresql/9.2/data/

Get the pivotal user's authz_id and store it in an accessible location:

# echo "SELECT authz_id FROM auth_actor WHERE id = 1" | su -l opscode-pgsql -c 'psql bifrost -tA' | tr -d '\n' > /var/opt/opscode/postgresql/9.2/data/pivotal.authz_id

Create the pivotal user's record:

# echo "INSERT INTO users (id, authz_id, username, email, pubkey_version, public_key, serialized_object, last_updated_by, created_at, updated_at) VALUES (md5(random()::text), pg_read_file('pivotal.authz_id'), 'pivotal', '', 0, pg_read_file(''), '{\"first_name\":\"Clark\",\"last_name\":\"Kent\",\"display_name\":\"Clark Kent\"}', pg_read_file('pivotal.authz_id'), LOCALTIMESTAMP, LOCALTIMESTAMP);" | su -l opscode-pgsql -c 'psql opscode_chef'

Delete the temporary files:

# rm /var/opt/opscode/postgresql/9.2/data/ /var/opt/opscode/postgresql/9.2/data/pivotal.authz_id

Docker Private Registry Setup

We can create our own secure private Docker registry where we can store our images and access them from remote machines. In the commands below, <registry-domain> stands in for the registry's domain name; replace it with your own.

1. Go to /var/lib/docker on the server and create a certificate using the domain name:

cd /var/lib/docker && mkdir -p certs
 openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/<registry-domain>.key -x509 -days 365 -out certs/<registry-domain>.crt

2. Delete any old registry if it exists:

docker rm <old-registry-container> OR docker rmi registry:2

3. Recreate the registry using the newly created certificates, staying in /var/lib/docker so that `pwd`/certs resolves:

docker run -d -p 5000:5000 --restart=always --name bkdevregistry -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/<registry-domain>.crt -e REGISTRY_HTTP_TLS_KEY=/certs/<registry-domain>.key registry:2

4. Go to the Docker cert dir, copy the crt file as ca.crt and restart the docker service:

mkdir -p /etc/docker/certs.d/<registry-domain>\:5000/
 cp /var/lib/docker/certs/<registry-domain>.crt /etc/docker/certs.d/<registry-domain>\:5000/ca.crt
 update-ca-trust enable
 service docker restart

5. Now push an image to the private registry:

docker pull ubuntu
 docker tag ubuntu <registry-domain>:5000/ubuntu
 docker push <registry-domain>:5000/ubuntu

6. Client side configuration:

Copy the ca.crt file from the registry server to the local Docker cert dir and restart the docker service:

mkdir -p /etc/docker/certs.d/<registry-domain>\:5000/
 scp <user>@<registry-host>:/etc/docker/certs.d/<registry-domain>\:5000/ca.crt /etc/docker/certs.d/<registry-domain>\:5000/
 service docker restart

7. Pull the image from the remote registry:

docker pull <registry-domain>:5000/ubuntu

8. Check the images available in the remote registry, using the crt file or in insecure mode:

curl -X GET --cacert /etc/docker/certs.d/<registry-domain>\:5000/ca.crt https://<registry-domain>:5000/v2/_catalog


curl -X GET --insecure https://<registry-domain>:5000/v2/_catalog

Using Docker – Part 1

In this part we will go through some simple usage of docker command.

Use -D with docker for debug mode.
Docker images are Immutable and Containers are Ephemeral.

How to get help?

docker help
docker <command> --help

1. Check images:

docker images

2. Run an application in the container:

( We have already downloaded oraclelinux:6.6 image from dockerhub)

-i flag starts an interactive container.

-t flag creates a pseudo-TTY that attaches stdin and stdout

docker run -i -t --name guest companylinux:6.6 /bin/bash

--name -> creates a container instance named guest using the image companylinux:6.6
execute /bin/bash inside the container guest

NOTE: If the image doesn't exist locally, Docker will try to pull it from Docker Hub.

3. Run a container and remove it once you exit:

docker run -i -t --rm companylinux:6.6 /bin/bash


4. Show running containers (-a shows all, including stopped ones):

docker ps
docker ps -a

5. Show info of processes running inside a container(here guest)

docker top guest

6. Run additional processes inside (guest here)

docker exec -it guest <command>

7. Create a container with a name that can be started later:

docker create -it --name guest1 companylinux:7 /bin/bash

8. Start a container instance and attach the current shell to it (here guest1):

docker start -ai <container name> OR docker start -ai <container id>

9. Stop the instance and exit from the container:

docker stop <containerid>

10. remove a container instance

docker rm guest1

11. Follow all logs from a container (here guest):

docker logs -f guest

-f -> updates the output in real time

12. Get full information about a container in JSON format with inspect, or extract a single field with --format:

docker inspect --format='{{ .State.Running }}' guest1

13. Relaunch a container:

Look at the docker ps -a output and note down the CONTAINER ID. To relaunch in interactive mode use the -i option; otherwise just start it.

docker start -i cfb007d616b9


docker start cfb007d616b9

14. Attach to a running container (docker start only starts a stopped one):

docker attach <ID of container>

15. Change the behaviour of a container when it exits (add the option to the run command):

--restart=always : Docker always attempts to restart the container when the container exits.

--restart=no : Docker does not attempt to restart the container when the container exits. This is the default policy.

--restart=on-failure[:max-retries] : Docker attempts to restart the container if the container returns a non-zero exit code. You can optionally specify the maximum number of times that Docker will try to restart the container.

--rm : use this with the run command so that once you exit from the instance, it will get removed.

16. Local registry creation (use the registry image with tag 2, host port 5000 mapped to the registry container's port 5000, named localregistry):

docker run -d -p 5000:5000 --restart=always --name localregistry registry:2

17. Add images to the local registry (pull from Docker Hub or create a local image, tag it, push it into the local registry, then pull it from the local registry to create an instance):

docker pull companylinux:6.6
docker tag companylinux:6.6 localhost:5000/oel6u6
docker push localhost:5000/oel6u6
docker pull localhost:5000/oel6u6

18. Stop and remove any instance


docker stop <container id> OR docker stop <instance-name>
docker rm <container id> OR docker rm <instance-name>

19. Remove image from repository(use -f for force remove)

docker rmi <imageid> OR docker rmi <imagereponame>
docker rmi -f <imageid> OR docker rmi -f <imagereponame>

20. Remove dead container entries (shown by docker ps -a) where an instance is in the stopped state:

docker rm $(docker ps -a -q)

Docker Concept & Setup

Why Containerization ?

Up to now we have been working with monolithic applications, where the different components of a service are packaged into a single application that is easy to develop, test and deploy. But when it becomes large and complex, it is difficult for one team to work on it, and the risk of failure at deploy time is high.
To overcome this, a new trend is to work with microservices, where the components of the monolithic application are divided into small microservices. Every microservice has its own API to handle its part of the application.

  • Each smaller service can use its own technology stack.
  • Developers find it easy to understand a single service.
  • It's also quicker to build and faster to deploy.
  • The application becomes distributed; a microservice scales horizontally quicker than vertically and becomes more fault tolerant.

Virtual Machines are too big to transfer and often too slow.

So containerization is the better choice when adopting Microservices architecture.

Container ???

  • A container is all about running an application, not a whole VM.
  • A container is an OS-level virtualization method that allows running multiple isolated user-space instances on the same kernel.
  • A container is an image that contains the apps, libraries and dependencies; most importantly, the kernel-space components are provided by the host operating system:
    • Namespaces: global system resources like network, PIDs and mount points are presented in such a way that the container thinks they are available only to it.
    • CGroups: used to reserve and allocate resources to the container.
    • Union file system: merges different file systems into one virtual file system.
    • Capabilities: manage privileges like root/non-root.


Docker ??

Docker is one of the most popular container products. It was originally based on LXC and is an open platform to build, ship and run distributed applications.


  • Docker Engine: a portable, lightweight runtime and packaging tool
  • Docker Hub: a cloud service for sharing applications
  • Docker enables applications to be assembled quickly from components.
  • It removes the friction between Dev, QA and Prod environments.
  • The same app can run unchanged anywhere (laptop/PC/datacenter).

Docker images are built from Dockerfile and the containers are built from images.

:: Setup ::

Installing Docker is easy. All the commands used here were run on OEL6 at my workplace.

1. Installation:

Update the OS to at least the OEL6 UEK4 repo to use a kernel > 4.1 (yum update and confirm the kernel version, OS > 6.4):
name=Latest Unbreakable Enterprise Kernel Release 4 for company Linux $releasever ($basearch)

yum update and reboot

> use docker repo:

name=Docker Repository

2. Use the btrfs filesystem:

yum install btrfs-progs
mkfs.btrfs /dev/sdb ( add a raw disk and format it with btrfs )
(fstab entry) /dev/sdb /var/lib/docker btrfs defaults 0 0

3. Add a proxy (if any, to contact Docker Hub)

Add to /etc/sysconfig/docker (if any), or to /etc/default/docker (to use it with curl):

export HTTP_PROXY="proxy_URL:port"
export HTTPS_PROXY="proxy_URL:port"

4. Modify docker config

In /etc/init.d/docker, change

"$unshare" -m -- $exec -d $other_args &>> $logfile &

to

$exec -d $other_args &>> $logfile &

5. Start docker service

# service docker start
# chkconfig docker on

6. Check docker details

service docker status
docker info
docker version

7. Add local user to docker group

groupadd docker
usermod -a -G docker <local user>
chmod g+rx /var/lib/docker

8. Search for images on Docker Hub (check availability before pulling):

docker search oraclelinux
docker search centos
docker search registry

9. Pull the Oracle Linux 6.6 image:

docker pull oraclelinux:6.6

Here oraclelinux is the image and 6.6 is the version (tag).

10. Check images:

docker images

11. Add this environment variable to enforce the authenticity and integrity of images (enables Docker Content Trust):

export DOCKER_CONTENT_TRUST=1

My Git commandline Cheat-sheet

Creating a new repository

mkdir project

cd project

git init

git remote add origin <repo-url>

git add .

git commit -am "new repository"

git push -u origin master

Cloning existing repository

git clone <repo-url>

Creating branch

git checkout -b feature-1

# you are now in a branch, you can edit and create new files

git add .

git commit -am "new feature"

Merging branch to master

git checkout master

git merge feature-1

git push

Deleting branch

git branch -d feature-x

List all branches

git branch -a

Switch branch

git checkout feature-x

Switch to master branch

git checkout master

Listing Remote repositories

git remote -v

Replacing remote repository

# in case your remote repository changes, or you want to switch from HTTPS->SSH or SSH->HTTPS

git remote remove origin

git remote add origin <repo-url>

Forking vs Threading

What is Fork/Forking:

Fork is nothing but a new process that looks exactly like the old (parent) process, but it is still a different process with a different process ID and its own memory. The parent process creates a separate address space for the child. Both parent and child possess the same code segment, but execute independently of each other.

The simplest example of forking is when you run a command on shell in unix/linux. Each time a user issues a command, the shell forks a child process and the task is done.

When a fork system call is issued, a copy of all the pages corresponding to the parent process is created and loaded into a separate memory location by the OS for the child process. But in certain cases this is not needed: with the 'exec' family of system calls there is no need to copy the parent process pages, as execv replaces the address space of the parent process itself.
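The shell-forks-a-child behaviour described above is easy to observe: ask a freshly started child shell for its PID and compare it with the parent's. This is a minimal sketch; `sh` is assumed to be available.

```shell
# Compare the current shell's PID with the PID reported by a forked child shell.
parent=$$
child=$(sh -c 'echo $$')   # the shell forks (then execs) a new process here
echo "parent=$parent child=$child"
```

The two PIDs always differ, because the child produced by fork is a separate process with its own process ID.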

Few things to note about forking are:

  • The child process will have its own unique process ID.
  • The child process will have its own copy of the parent's file descriptors.
  • File locks set by the parent process are not inherited by the child process.
  • Any semaphores that are open in the parent process are also open in the child process.
  • The child process has its own copy of the parent's message queue descriptors.
  • The child has its own address space and memory.

Fork is more widely accepted than threads because of the following reasons:

  • Development is much easier on fork-based implementations.
  • Fork-based code is more maintainable.
  • Forking is much safer and more secure because each forked process runs in its own virtual address space. If one process crashes or has a buffer overrun, it does not affect any other process at all.
  • Threaded code is much harder to debug than forked code.
  • Forks are more portable than threads.
  • Forking is faster than threading on a single CPU as there are no locking overheads or context switching.

Some of the applications in which forking is used are: telnetd(freebsd), vsftpd, proftpd, Apache13, Apache2, thttpd, PostgreSQL.

Pitfalls in Fork:

  • In fork, every new process has its own memory/address space, hence longer startup and stopping times.
  • If you fork, you have two independent processes which need to talk to each other in some way. This inter-process communication is really costly.
  • When the parent exits before the forked child, you get a zombie process. That is all much easier with a thread: you can end, suspend and resume threads from the parent easily, and if the parent exits suddenly the thread ends automatically.
  • Insufficient storage space could cause the fork system call to fail.

What are Threads/Threading:

Threads are Light Weight Processes (LWPs). Traditionally, a thread is just a CPU state (and some other minimal state), with the process containing the remainder (data, stack, I/O, signals). Threads require less overhead than forking or spawning a new process because the system does not initialize a new virtual memory space and environment for the process. While most effective on a multiprocessor system, where the process flow can be scheduled to run on another processor to gain speed through parallel or distributed processing, gains are also found on uniprocessor systems that exploit latency in I/O and other system functions which may halt process execution.

Threads in the same process share:

  • Process instructions
  • Most data
  • open files (descriptors)
  • signals and signal handlers
  • current working directory
  • User and group id

Each thread has a unique:

  • Thread ID
  • set of registers, stack pointer
  • stack for local variables, return addresses
  • signal mask
  • priority
  • Return value: errno

Few things to note about threading are:

  • Threads are most effective on multi-processor or multi-core systems.
  • For threads, only one process/thread table and one scheduler are needed.
  • All threads within a process share the same address space.
  • A thread does not maintain a list of created threads, nor does it know the thread that created it.
  • Threads reduce overhead by sharing fundamental parts.
  • Threads are more effective in memory management because they use the same memory block as the parent instead of creating a new one.

Pitfalls in threads:

  • Race conditions: The big loss with threads is that there is no natural protection from having multiple threads working on the same data at the same time without knowing that others are messing with it. This is called race condition. While the code may appear on the screen in the order you wish the code to execute, threads are scheduled by the operating system and are executed at random. It cannot be assumed that threads are executed in the order they are created. They may also execute at different speeds. When threads are executing (racing to complete) they may give unexpected results (race condition). Mutexes and joins must be utilized to achieve a predictable execution order and outcome.
  • Thread safe code: The threaded routines must call functions which are “thread safe”. This means that there are no static or global variables which other threads may clobber or read assuming single threaded operation. If static or global variables are used then mutexes must be applied or the functions must be re-written to avoid the use of these variables. In C, local variables are dynamically allocated on the stack. Therefore, any function that does not use static data or other shared resources is thread-safe. Thread-unsafe functions may be used by only one thread at a time in a program and the uniqueness of the thread must be ensured. Many non-reentrant functions return a pointer to static data. This can be avoided by returning dynamically allocated data or using caller-provided storage. An example of a non-thread safe function is strtok which is also not re-entrant. The “thread safe” version is the re-entrant version strtok_r.

Advantages in threads:

  • Threads share the same memory space, so sharing data between them is really fast: inter-process communication (IPC) is very cheap.
  • If properly designed and implemented, threads give you more speed because there is no process-level context switching in a multi-threaded application.
  • Threads are really fast to start and terminate.

Some of the applications in which threading is used are: MySQL, Firebird, Apache2, MySQL 323

Add time stamp to commands in linux history

Sometimes we need to know at what time someone executed a command. We can add a timestamp to the commands displayed by history.

echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile ; source ~/.bash_profile

Then type the history command and it will display the executed commands with their timestamps, e.g.:

873 18/05/16 05:18:05 docker pull ubuntu
874 18/05/16 05:18:05 docker images
875 18/05/16 05:18:05 docker tag ubuntu:latest
876 18/05/16 05:18:05 docker ps
877 18/05/16 05:18:05 docker stop bkdevregistry
878 18/05/16 05:18:05 dockedr rm bkdevregistry
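The `%d/%m/%y %T` pieces are ordinary strftime tokens, so the format can be previewed with `date` before putting it into HISTTIMEFORMAT. A quick sketch (GNU date is assumed for the -d option):

```shell
# Render a fixed instant with the same strftime format used in HISTTIMEFORMAT:
# %d day, %m month, %y two-digit year, %T time as HH:MM:SS.
stamp=$(date -d "2016-05-18 05:18:05" +"%d/%m/%y %T")
echo "$stamp"   # 18/05/16 05:18:05
```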


String manipulation tricks in bash

  • In some cases we have to modify the strings in a bash variable. Let's say there are many files inside $(pwd) named a_ver10, b_ver10, c_ver10, and we have to rename them back to a, b, c respectively. The bash one-liner below achieves this.
  • Run the command below inside the directory.


for file in $(ls -1d *_ver10); do mv $file  ${file%_ver10}; done

To move some files into another dir, or delete them (change mv to rm):

for file in $(ls -1d *_ver11); do mv $file bkp/; done
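The rename trick relies on bash's `${var%pattern}` suffix removal. A quick sketch of that expansion family, using only throwaway variables so no real files are touched:

```shell
# ${var%pattern} strips the shortest matching suffix;
# ${var#pattern} / ${var##pattern} strip a matching prefix (## = longest match).
file="a_ver10"
base="${file%_ver10}"
echo "$base"            # a

path="/u01/data/work/app/ops-home-1.2.30"
echo "${path##*/}"      # ops-home-1.2.30  (everything up to the last / removed)
echo "${path%/*}"       # /u01/data/work/app  (last path component removed)
```

These expansions are POSIX shell features, so they also work in sh scripts, and they avoid spawning basename/dirname/sed for simple string surgery.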