We generally use the Python pexpect module to connect to systems remotely over SSH and execute our tasks. But sometimes pexpect is not installed on the remote systems, which creates problems. This problem can be solved with the Python select module's poll interface.
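Since select is part of the standard library, it is available even where pexpect is not. Below is a minimal sketch of reading a command's output with select.poll(); a local echo stands in for the remote ssh invocation (something like ["ssh", "user@host", "uname -a"]), which is an assumption of this example — the poll loop itself is the same either way.

```python
import os
import select
import subprocess

# A local command stands in for the ssh invocation; swap in
# ["ssh", "user@host", "<command>"] for real remote execution.
proc = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
fd = proc.stdout.fileno()

poller = select.poll()
poller.register(fd, select.POLLIN | select.POLLHUP)

chunks = []
done = False
while not done:
    # poll() blocks up to the timeout (in milliseconds) for events
    for _, event in poller.poll(2000):
        if event & select.POLLIN:          # data is ready to read
            chunks.append(os.read(fd, 4096))
        if event & select.POLLHUP:         # writer closed: command finished
            done = True

proc.wait()
output = b"".join(chunks).decode().strip()
print(output)  # -> hello
```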
Sometimes we need services that are running on the host machine to be accessible from a Docker container. For example, in one of my projects we needed to connect to an Oracle DB (port 1521) from inside the container, within code.
By default, containers can't access the host network directly unless the host machine's firewall allows the Docker network interface to ACCEPT the packets.
So the container will communicate with the host machine using the gateway IP. First, find the gateway IP inside the container.
Run the commands below inside the container to get the gateway IP, and observe that I am not able to connect to port 1521.
[code language="bash"]
# nc -vz dockerhost 1521
dockerhost [172.18.0.1] 1521 (?) : Connection timed out
# ip route | awk '/^default via /{print $3}'
172.18.0.1
[/code]
The next task is to get the interface name of the Docker network bound to the container. In most cases it's docker0, but it can also be customized, so check the ifconfig output for the interface whose inet addr matches the container gateway.
[code language="bash"]
# ifconfig
br-4e83b57c54cf Link encap:Ethernet HWaddr 02:42:AF:CD:B5:DA
[/code]
Here br-4e83b57c54cf is the interface whose inet addr matches the container gateway (172.18.0.1). Once it is identified, configure the host firewall to ACCEPT packets arriving on that interface (for example with an iptables rule), and the container can then reach host services through the gateway IP.
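Since the original goal was to reach the Oracle port from code, here is the in-code equivalent of `nc -vz dockerhost 1521` — a sketch only: port_open() is a helper name made up for this example, and 172.18.0.1 is the gateway IP found inside the container above (adjust to yours).

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Before the firewall allows the Docker interface this reports False;
# once the ACCEPT rule is in place it reports True.
print(port_open("172.18.0.1", 1521, timeout=1.0))
```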
Recently I encountered this issue in OVMM 3.2.9 while starting a VM with
xm create <vm.cfg path>
The reason turned out to be that the VM was not shut down properly, so its lock file was still present even though the VM was down.
So the places to look at :
/var/log/xen/xend-debug.log
/var/run/ovs-agent/vm-*.lock
Look at the log file, and if a lock file named with the ID of the VM that won't start is present under /var/run/ovs-agent/, just delete the lock file and the VM will start successfully.
Sometimes we have to deal with secrets like user passwords, database passwords, API keys and middleware boot properties in our Chef recipes, and these shouldn't be exposed outside.
One solution is to keep all the secrets in a data bag, encrypt them using a random secret key, and later distribute the key to the nodes where the secrets are accessed.
The other solution is using chef-vault, which we will cover in a later topic.
1. Moving a Linux job from foreground into background:
Start the program.
Ctrl+Z (pauses the program on the terminal)
jobs (find the job number in the current shell)
bg %jobnum
NOTE: Here we only send the process into the background; it is still a child of the current shell, so exiting the shell/terminal will kill the process.
2. Moving Linux jobs into the background (nohup mode, freeing the shell):
Start the program.
Ctrl+Z (pauses the program on the terminal)
bg (to run it in the background)
disown -h (the shell disowns the process, so it will not get SIGHUP; effectively nohup mode)
exit (to exit the shell)
Check in another terminal that the process is still running.
NOTE: Here the running process is moved into the background, and exiting the shell will not kill it; it keeps running.
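What disown/nohup protect against can also be arranged from inside a program. A sketch, assuming a POSIX system: the child detaches into a session of its own, so the terminal's SIGHUP no longer reaches it (this is what daemons do; nohup/disown instead shield an already-running process).

```python
import os

# The child calls os.setsid() to become the leader of a new session,
# detaching it from the controlling terminal. It reports its new
# session id back to the parent over a pipe.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                                  # child
    os.close(r)
    os.setsid()                               # new session, no controlling tty
    os.write(w, str(os.getsid(0)).encode())
    os._exit(0)

os.close(w)
child_sid = int(os.read(r, 64))
os.waitpid(pid, 0)
print(child_sid != os.getsid(0))  # -> True: the child ran in its own session
```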
Up to now we have been working with monolithic applications, where the different components of a service are packaged into a single application. That is easy to develop, test and deploy, but when the application becomes large and complex, it becomes difficult for one team to work on it, and the risk of failure at deploy time is high.
To overcome this, a new trend is to work with microservices, where the components of the monolithic application are divided into small microservices. Every microservice has its own API to handle its part of the application.
This has advantages: each smaller service can use its own technology stack.
Developers find it easy to understand a single service.
It's also quicker to build and faster to deploy.
The application becomes distributed; microservices scale horizontally more quickly than vertically and become more fault tolerant.
Virtual machines are too big to transfer and often too slow, so containerization is the better choice when adopting a microservices architecture.
Container ???
A container is all about running an application, not a whole VM.
A container is an operating-system-level virtualization method that allows running multiple isolated user-space instances on the same kernel.
A container is an image that contains the apps, libraries and dependencies; most importantly, the kernel-space components are provided by the host operating system.
Namespace: global system resources like network, PIDs and mount points are presented in such a way that the container thinks they are available only to it.
CGroup: used to reserve and allocate resources to containers.
Union file system: merges different file systems into one virtual file system.
Capabilities: manage privileges like root/non-root.
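On Linux, the namespaces a process belongs to are visible under /proc, which makes the concept easy to poke at. A quick sketch (Linux-only):

```python
import os

# Each entry is a namespace (net, pid, mnt, ...). Processes inside the
# same container share these ids; the host sees different ones.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, os.readlink(f"/proc/self/ns/{name}"))
```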
Docker ??
Docker is one of the most popular container products; originally based on LXC, it is an open platform to build, ship and run distributed applications.
– Docker Hub: a cloud service for sharing applications.
Docker enables applications to be assembled quickly from components.
It removes the friction between dev, QA and prod environments.
The same app can run unchanged anywhere (laptop/PC/datacenter).
Docker images are built from a Dockerfile, and containers are created from images.
:: Setup ::
Installing Docker is easy. All the commands used here are run on OEL6 at my workplace.
1. Installation:
Update the OS to at least the OEL6 UEKR4 repo to use a kernel > 4.1 (run yum update and confirm the kernel version; OS > 6.4):
[code language="bash"]
[ol6_UEKR4]
name=Latest Unbreakable Enterprise Kernel Release 4 for company Linux $releasever ($basearch)
baseurl=http://public-yum.company.com/repo/companyLinux/OL6/UEKR4/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-company
gpgcheck=1
enabled=1
[/code]
Fork is nothing but a new process that looks exactly like the old, or parent, process, but it is still a different process with a different process ID and its own memory. The parent process creates a separate address space for the child. Both parent and child possess the same code segment, but execute independently of each other.
The simplest example of forking is when you run a command in a Unix/Linux shell. Each time a user issues a command, the shell forks a child process and the task is done in it.
When a fork system call is issued, a copy of all the pages corresponding to the parent process is created and loaded into a separate memory location for the child (modern kernels defer this with copy-on-write). In certain cases this copy is not needed at all: with the 'exec' family of system calls there is no need to copy the parent's pages, as execv replaces the address space of the calling process.
Few things to note about forking are:
The child process has its own unique process ID.
The child process has its own copy of the parent's file descriptors.
File locks set by the parent process are not inherited by the child process.
Any semaphores that are open in the parent process are also open in the child process.
The child process has its own copy of the parent's message queue descriptors.
The child has its own address space and memory.
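The points above can be sketched with Python's os.fork() (a minimal, POSIX-only example):

```python
import os

# After fork() the child has its own PID and its own copy of the
# parent's memory, so a change made in the child stays in the child.
value = "parent-data"

pid = os.fork()
if pid == 0:                 # we are in the child
    value = "child-data"     # modifies only the child's copy
    os._exit(0)

os.waitpid(pid, 0)           # parent reaps the child (avoids a zombie)
print(value)  # -> parent-data: the child's write did not affect us
```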
Fork is more widely accepted than threads for the following reasons:
Development is much easier on fork-based implementations.
Fork-based code is more maintainable.
Forking is safer and more secure because each forked process runs in its own virtual address space. If one process crashes or has a buffer overrun, it does not affect any other process at all.
Threaded code is much harder to debug than forked code.
Forks are more portable than threads.
On a single CPU, forking can be faster than threading because there are no locking overheads.
Some of the applications that use forking are: telnetd (FreeBSD), vsftpd, proftpd, Apache 1.3, Apache 2, thttpd, PostgreSQL.
Pitfalls of fork:
With fork, every new process gets its own memory/address space, hence longer startup and shutdown times.
If you fork, you have two independent processes which need to talk to each other in some way. This inter-process communication is really costly.
When a child exits before the parent has wait()ed for it, you get a zombie process (and if the parent exits first, the child is orphaned and re-parented to init). All of this is much easier with threads: you can end, suspend and resume threads from the parent easily, and if the parent exits suddenly, its threads end with it.
Insufficient memory or swap space can cause the fork system call to fail.
What are Threads/Threading:
Threads are Light Weight Processes (LWPs). Traditionally, a thread is just a CPU context (registers and some other minimal state), with the process containing the rest (data, heap, I/O, signals). Threads require less overhead than forking or spawning a new process because the system does not initialize a new virtual memory space and environment for them. While threads are most effective on a multiprocessor system, where the flow of work can be scheduled onto other processors and gain speed through parallel or distributed processing, gains are also found on uniprocessor systems that exploit latency in I/O and other system functions which may halt process execution.
Threads in the same process share:
Process instructions
Most data
open files (descriptors)
signals and signal handlers
current working directory
User and group id
Each thread has a unique:
Thread ID
set of registers, stack pointer
stack for local variables, return addresses
signal mask
priority
Return value: errno
Few things to note about threading are:
Thread are most effective on multi-processor or multi-core systems.
For threads, only one process/thread table and one scheduler are needed.
All threads within a process share the same address space.
A thread does not maintain a list of created threads, nor does it know the thread that created it.
Threads reduce overhead by sharing fundamental parts.
Threads are more effective in memory management because they use the same memory block as the parent instead of allocating new memory.
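The shared address space is easy to see with Python's threading module; a small sketch:

```python
import threading

# All threads append to the same list object: no copying, no IPC,
# because they share the process's memory.
results = []

def worker(n):
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # -> [0, 1, 4, 9]
```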
Pitfalls in threads:
Race conditions: the big loss with threads is that there is no natural protection against multiple threads working on the same data at the same time without knowing that others are touching it. This is called a race condition. While the code may appear on the screen in the order you wish it to execute, threads are scheduled by the operating system and run in an order you cannot predict; they may also execute at different speeds. It cannot be assumed that threads execute in the order they were created. When threads race to complete, they may give unexpected results. Mutexes and joins must be used to achieve a predictable execution order and outcome.
Thread safe code: The threaded routines must call functions which are “thread safe”. This means that there are no static or global variables which other threads may clobber or read assuming single threaded operation. If static or global variables are used then mutexes must be applied or the functions must be re-written to avoid the use of these variables. In C, local variables are dynamically allocated on the stack. Therefore, any function that does not use static data or other shared resources is thread-safe. Thread-unsafe functions may be used by only one thread at a time in a program and the uniqueness of the thread must be ensured. Many non-reentrant functions return a pointer to static data. This can be avoided by returning dynamically allocated data or using caller-provided storage. An example of a non-thread safe function is strtok which is also not re-entrant. The “thread safe” version is the re-entrant version strtok_r.
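A sketch of the mutex fix for the race described above: `counter += 1` is a read-modify-write, so without the Lock concurrent threads can interleave and lose updates; with it, the critical section is serialized and the result is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:            # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # -> 40000: every update survives
```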
Advantages of threads:
Threads share the same memory space, so sharing data between them is really fast; inter-thread communication is far cheaper than IPC between processes.
If properly designed and implemented, threads give you more speed because there isn't any process-level context switching in a multi-threaded application.
Threads are really fast to start and terminate.
Some of the applications that use threading are: MySQL, Firebird, Apache2.