A few years ago, I created an SNMP poller tool using Perl and the SNMP utilities that can poll OID information from any network device, which is a kind of passive monitoring mechanism.
I have decided to make it open source under the GNU GPL.
The details of the utility and its usage can be found here, if anyone is interested in using it.
1. Moving a Linux job from the foreground into the background:
- Start the program
- Ctrl+Z (pauses the program on the terminal)
- jobs (find the job number in the current shell)
- bg %jobnum
NOTE: Here we only send the process into the background; it is still a child of the current shell, so exiting the shell/terminal will kill the process.
2. Moving a Linux job into the background (nohup mode, freeing the shell)
- Start the program
- Ctrl+Z (pauses the program on the terminal)
- bg (to run it in the background)
- disown -h (the shell disowns the process, so it will not receive SIGHUP; this is effectively nohup mode)
- exit (to exit from the shell)
- Check in another terminal that the process is still running
NOTE: Here the running process is moved into the background; exiting the shell will not kill it, and it keeps running.
What is Fork/Forking:
A fork is a new process that looks exactly like the old (parent) process, but it is still a different process, with a different process ID and its own memory. The parent process creates a separate address space for the child. Both parent and child possess the same code segment, but they execute independently of each other.
The simplest example of forking is running a command in a Unix/Linux shell. Each time a user issues a command, the shell forks a child process, and the task is done in it.
When a fork system call is issued, a copy of all the pages corresponding to the parent process is created and loaded into a separate memory location by the OS for the child process (modern kernels defer this with copy-on-write). In certain cases this copying is not needed: with the 'exec' family of system calls there is no need to copy the parent's pages, as execv replaces the address space of the calling process.
Few things to note about forking are:
- The child process has its own unique process ID.
- The child process has its own copy of the parent's file descriptors.
- File locks set by the parent process are not inherited by the child process.
- Any semaphores that are open in the parent process are also open in the child process.
- The child process has its own copy of the parent's message queue descriptors.
- The child has its own address space and memory.
Fork is more widely accepted than threads for the following reasons:
- Development is much easier on fork-based implementations.
- Fork-based code is more maintainable.
- Forking is much safer and more secure because each forked process runs in its own virtual address space. If one process crashes or has a buffer overrun, it does not affect any other process.
- Threaded code is much harder to debug than forked code.
- Forks are more portable than threads.
- Forking can be faster than threading on a single CPU, as there is no locking overhead between processes.
Some of the applications that use forking are: telnetd (FreeBSD), vsftpd, proftpd, Apache 1.3, Apache 2, thttpd, and PostgreSQL.
Pitfalls in Fork:
- With fork, every new process gets its own memory/address space, hence longer startup and shutdown times.
- If you fork, you have two independent processes which need to talk to each other in some way. This inter-process communication is costly.
- When a child exits before the parent has called wait() on it, it becomes a zombie process. This is all much easier with threads: you can end, suspend and resume threads from the parent easily, and if the parent exits suddenly, its threads are ended automatically.
- Insufficient swap or memory space can cause the fork system call to fail.
What are Threads/Threading:
Threads are Light Weight Processes (LWPs). Traditionally, a thread is just CPU state (plus some other minimal state), with the process containing the rest (data, stack, I/O, signals). Threads require less overhead than forking or spawning a new process because the system does not initialize a new virtual memory space and environment for them. Threads are most effective on a multiprocessor system, where the process flow can be scheduled to run on another processor, gaining speed through parallel or distributed processing; but gains are also found on uniprocessor systems that exploit latency in I/O and other system functions which may halt process execution.
Threads in the same process share:
- Process instructions
- Most data
- open files (descriptors)
- signals and signal handlers
- current working directory
- User and group IDs
Each thread has a unique:
- Thread ID
- set of registers, stack pointer
- stack for local variables, return addresses
- signal mask
- errno value
Few things to note about threading are:
- Threads are most effective on multi-processor or multi-core systems.
- For threads, only one process/thread table and one scheduler are needed.
- All threads within a process share the same address space.
- A thread does not maintain a list of created threads, nor does it know which thread created it.
- Threads reduce overhead by sharing fundamental parts of the process.
- Threads are more efficient in memory management because they use the parent process's existing memory instead of allocating a new address space.
Pitfalls in threads:
- Race conditions: The big loss with threads is that there is no natural protection against multiple threads working on the same data at the same time without knowing that others are modifying it. This is called a race condition. While the code may appear on screen in the order you wish it to execute, threads are scheduled by the operating system and may run in any order. It cannot be assumed that threads execute in the order they are created, and they may run at different speeds. When threads race to complete, they can give unexpected results. Mutexes and joins must be used to achieve a predictable execution order and outcome.
- Thread-safe code: Threaded routines must call functions which are "thread safe". This means there are no static or global variables which other threads may clobber or read while assuming single-threaded operation. If static or global variables are used, then mutexes must be applied, or the functions must be rewritten to avoid those variables. In C, local variables are allocated on the stack, so any function that does not use static data or other shared resources is thread-safe. Thread-unsafe functions may be used by only one thread at a time in a program, and the uniqueness of that thread must be ensured. Many non-reentrant functions return a pointer to static data; this can be avoided by returning dynamically allocated data or using caller-provided storage. An example of a non-thread-safe function is strtok, which is also not re-entrant. The thread-safe version is the re-entrant strtok_r.
Advantages in threads:
- Threads share the same memory space, hence sharing data between them is really fast; in other words, inter-thread communication is very cheap.
- If properly designed and implemented, threads give you more speed because there is no process-level context switching in a multi-threaded application.
- Threads are really fast to start and terminate.
Some of the applications that use threading are: MySQL (including the older 3.23 series), Firebird, and Apache 2.