Jenkins slave as a service in Windows to start automatically

In Jenkins we often have to add a Windows machine as a slave, where we need the agent to be up and running as a Windows service.

There are many ways to do it, but I struggled to find the correct configuration steps. I used the Windows Resource Kit Tools to make it work, and I am adding the steps here.

Configuration Steps:

(A) Adding the Windows slave in the Jenkins server:

1. Add the agent in the Jenkins master with Launch Method: Launch via Java Web Start.


(B) Creating the service in the Windows server for starting the slave agent (on Windows Server 2016):

1. Download and install Java 8.
2. Download and install the Windows Resource Kit Tools (https://www.microsoft.com/en-us/download/details.aspx?id=17657).
3. Create a blank service called "Jenkins Slave" by running the following from a command prompt:

"C:\Program Files (x86)\Windows Resource Kits\Tools\instsrv.exe" "Jenkins Slave" "C:\Program Files (x86)\Windows Resource Kits\Tools\srvany.exe"

4. Open Registry Editor and go to the location below:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Jenkins Slave

Now follow the steps below carefully.

  1. Create a string value "Description".
  2. Populate it with "Jenkins Continuous Integration Slave".


  3. Create a new key "Parameters".
  4. Under "Parameters", create a new string value "Application".
  5. Populate it with the full path to java.exe, something like "C:\sapjvm_8\bin\java.exe".
  6. Under "Parameters", create a new string value "AppParameters".
  7. Populate it with:

"-jar E:\jenkins_agent\agent.jar -jnlpUrl http://m1-hostname.lab.saas.company.corp:8080/computer/hostname-of-machine/slave-agent.jnlp -secret <secret-name> -workDir E:\jenkinsWorkSpace"

Double-check the following:

  1. The agent.jar path should point to the correct location.
  2. The Jenkins master machine name should be correct.
  3. The new Jenkins slave machine name should be correct.
  4. Make sure you use the secret for this machine that you copied from the master when adding the new node.
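For reference, the same registry entries can also be created from an elevated command prompt with reg add. This is an untested sketch equivalent to the manual Registry Editor steps above; the java.exe and agent.jar paths are the example values used in this post, and <secret-name> stays a placeholder:

```bat
rem Description value on the service key
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Jenkins Slave" /v Description /t REG_SZ /d "Jenkins Continuous Integration Slave"

rem Parameters subkey: the JVM to launch and its arguments
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Jenkins Slave\Parameters" /v Application /t REG_SZ /d "C:\sapjvm_8\bin\java.exe"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Jenkins Slave\Parameters" /v AppParameters /t REG_SZ /d "-jar E:\jenkins_agent\agent.jar -jnlpUrl http://m1-hostname.lab.saas.company.corp:8080/computer/hostname-of-machine/slave-agent.jnlp -secret <secret-name> -workDir E:\jenkinsWorkSpace"
```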


Open the Services application from Control Panel > Administrative Tools, find the "Jenkins Slave" service, right-click on the service and go to "Properties".

  1. Go to the "Recovery" tab and change "First failure" and "Second failure" to "Restart the Service" – occasionally we found it wouldn't start up the first time.
  2. Go to the "Log On" tab and set an account and password – we found that using an account with local admin rights on the slave machine worked best, but this is probably unnecessary.
  3. Go to the "General" tab and change the "Startup type" to "Automatic" – this makes sure the service starts up when you restart the slave.
  4. Click the "OK" button.
  5. Now start the service.

6. The service will run by default during startup of the Windows machine.

7. Now verify that the agent is up and running in the Jenkins web UI.


Python : read and update a Helm chart

Recently I was working on a release pipeline where the Helm charts of 30+ environments needed to be updated in git with new chart versions from Jenkins input.

Here the Helm chart was in YAML format; it was an umbrella chart, and the individual service charts needed to be updated from Jenkins.

The umbrella chart file looks like this. https://raw.githubusercontent.com/divyaimca/my_projects/main/update_helm_chart/chart.yaml

apiVersion: v2
description: Helm chart to deploy application NG
name: app-main
version: 0.0.1
dependencies:
- name: service-a
  version: 0.1.014bf574
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-b
  version: 0.1.014bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-c
  version: 0.1.014bf475
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-d
  version: 0.1.024bf575
  repository: '@helm-repo'
  tags:
  - application
  enabled: true
- name: service-e
  version: 0.1.014bf559
  repository: '@helm-repo'
  tags:
  - application
  enabled: true

Here you can see there are 5 dependent services, and each version needs to be updated from the Jenkins input.

I used the Python module pyyaml.

Here is the code that is used in one stage to achieve this task.

https://raw.githubusercontent.com/divyaimca/my_projects/main/update_helm_chart/helmchart_parser.py

The function takes as input the chart.yaml file path, plus the subcharts and versions as keyword arguments. Refer to the full code at the link above.

e.g.

    # the subchart names contain hyphens, so they must be passed via a dict:
    update_helm_chart("./chart.yaml", **{"service-b": "0.2.574", "service-d": "0.2.585", "service-e": "0.2.576"})
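The linked script does the parsing with pyyaml; a minimal sketch of such a function (my reconstruction, not necessarily identical to the linked code) could look like this:

```python
import yaml  # pyyaml


def update_helm_chart(chart_path, **versions):
    """Update dependency versions in an umbrella chart.yaml in place.

    versions maps subchart name -> new version string.
    """
    with open(chart_path) as f:
        chart = yaml.safe_load(f)
    for dep in chart.get("dependencies", []):
        if dep["name"] in versions:
            dep["version"] = versions[dep["name"]]
    with open(chart_path, "w") as f:
        yaml.safe_dump(chart, f, default_flow_style=False, sort_keys=False)
```

Since safe_dump rewrites the whole document, comments in the original chart.yaml are lost; that was acceptable in my pipeline, but keep it in mind.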

parse expect_out(buffer) in a variable : Tcl data structure

Sometimes we have to automate our steps through expect: run some commands on a remote machine, capture the output of a command in a variable and use that variable in some other task.

So here is an example of how we can do that. The output from expect is always captured in expect_out(buffer), and we have to parse it to get our expected result.

So first we store this expect_out(buffer) in a variable, which will contain multiple lines along with our expected result.

Now we split that variable with "\n" as the delimiter, which creates a Tcl list with all the lines in it.

Then we can use indexing on that list to extract the result from a certain position.

Here is one example.

[code language="bash"]
#!/usr/bin/expect
set password somepass
set cmd "ls -Art /var/lib/docker/path_to_files/ | tail -n 1"

spawn ssh root@10.59.1.150
set prompt "#|%|>|\\\$ $"
expect {
    "(yes/no)" {send "yes\r"; exp_continue}
    "password: " {send "$password\r"; exp_continue}
    -re $prompt
}
send "$cmd\r"
expect "# "

# Split the captured buffer into lines; index 0 is the echoed command,
# index 1 is the first line of the command's output.
set outcome [split $expect_out(buffer) "\n"]
set filename [lindex $outcome 1]

expect eof
puts "##########################"
puts $filename
puts "##########################"
[/code]

python : trace the line number with an exception

In Python we raise exceptions to catch anything that goes wrong, but sometimes the try block has multiple lines of code and we cannot tell which line exactly is throwing the exception.

There is a way to catch the line number from which the exception is coming.

Try this way:

[code language="python"]
import sys

try:
    line1  # placeholder for your code
    line2
except Exception as E:
    # sys.exc_info()[-1] is the traceback object; tb_lineno is the failing line
    print('Error on line {}'.format(sys.exc_info()[-1].tb_lineno), type(E).__name__, E)
[/code]

Now for any exception, you will get the offending line number as well.
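A runnable example (the list and the bad index are just for illustration):

```python
import sys

try:
    values = [1, 2, 3]
    total = sum(values)   # this line is fine
    print(values[10])     # this line raises IndexError
except Exception as E:
    # tb_lineno on the last traceback entry is the line that raised
    print('Error on line {}'.format(sys.exc_info()[-1].tb_lineno), type(E).__name__, E)
```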

ORA-12519: TNS:no appropriate service handler found

The number of database processes that can run in Oracle DB is set to 150 by default, which can be viewed from v$resource_limit.

select * from v$resource_limit where resource_name = 'processes';


RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION LIMIT_VALUE
------------- ------------------- --------------- ------------------ -----------
processes     148                 149              150                150

We face this issue – ORA-12519 – randomly when the number of DB processes used by our application goes above 150. So we need to increase the number of DB processes in order to avoid the issue. Use the steps below to increase the limit.

[code language="sql"]
sqlplus / as sysdba
SQL> alter system set processes=500 scope=spfile;
SQL> shutdown immediate;
SQL> startup
[/code]

Now query the view to see the new limits in effect.

select * from v$resource_limit where resource_name = 'processes';


RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION LIMIT_VALUE
------------- ------------------- --------------- ------------------ -----------
processes      279                  285             500                500

 

 

Accessing Host from Docker Container

Sometimes we need the services that are running on the host machine to be accessible from a Docker container. E.g., in one of my projects, we needed to connect to the Oracle DB (port 1521) from inside the container within code.

The default behaviour of containers is that they can't access the host network directly unless the firewall of the host machine allows the Docker network interface to ACCEPT the packets.

So the Docker container communicates with the host machine using the gateway IP. First find the gateway IP inside the container.

Run the commands below inside the container to get the gateway IP, and observe that I am not able to connect to port 1521.

[code language="bash"]
# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) : Connection timed out

# ip route | awk '/^default via /{print $3}'

172.18.0.1
[/code]
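The same default-gateway lookup can also be done without shelling out to ip/awk, by reading /proc/net/route directly (Linux-specific; a hedged sketch, not from the original post):

```python
import socket
import struct


def default_gateway():
    """Return the default gateway IP from /proc/net/route, or None."""
    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            # Destination 00000000 with the RTF_GATEWAY flag (0x2) set
            # marks the default route.
            if fields[1] == "00000000" and int(fields[3], 16) & 2:
                # The gateway column is a little-endian hex IPv4 address.
                return socket.inet_ntoa(struct.pack("<L", int(fields[2], 16)))
    return None
```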

The next task is to get the interface name of the Docker network that is bound to the container. In most cases it's docker0.

But it can also be customized, so check the ifconfig output for the interface whose inet addr matches the container gateway.

[code language=”bash”]

# ifconfig

br-4e83b57c54cf Link encap:Ethernet  HWaddr 02:42:AF:CD:B5:DA

inet addr:172.18.0.1  Bcast:0.0.0.0  Mask:255.255.0.0

# ip addr show br-4e83b57c54cf

10: br-4e83b57c54cf: mtu 1500 qdisc noqueue state UP

link/ether 02:42:af:cd:b5:da brd ff:ff:ff:ff:ff:ff

inet 172.18.0.1/16 scope global br-4e83b57c54cf

valid_lft forever preferred_lft forever

[/code]

Here the interface name is : br-4e83b57c54cf

Now add an iptables rule on the Linux host:

[code language=”bash”]

iptables -A INPUT -i br-4e83b57c54cf -j ACCEPT

[/code]

Or with firewalld:

[code language="bash"]
# firewall-cmd --permanent --zone=trusted --change-interface=br-4e83b57c54cf
# firewall-cmd --reload
[/code]

Now try to access the host port from the container.

[code language=”bash”]

# nc -vz dockerhost 1521

dockerhost [172.18.0.1] 1521 (?) open

[/code]

There are other ways available on the internet, but I found none of them working.
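Once the rule is in place, the same check that nc performs can also be done from application code before attempting a real connection; a small Python sketch (the gateway IP and port 1521 from the example above are placeholders for your own setup):

```python
import socket


def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# e.g. inside the container: can_reach("172.18.0.1", 1521)
```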


My MongoDB cheatsheet

A few years ago, I was working on a MongoDB project. It's a NoSQL database, easy to learn, and it's pure JSON based.

I learned it myself and created a cheatsheet for it, so I can easily recall it anytime I want. It can help anyone who is interested in learning it quickly.

I thought to share the sheet. It can be found here:

https://github.com/kumarprd/mongo-guide/blob/master/my-mongo_guide.txt


chef attribute : avoiding "undefined method `[]' for nil:NilClass" error

In Chef, when a nested attribute might not exist or has not been created yet, you can use rescue as a modifier to an if statement.

For example, assuming that only some of your nodes have node['install_wls']['isManaged'] defined:

if node['install_wls']['isManaged']
  # do the stuff
end rescue NoMethodError

will avoid the chef run bailing out with the "undefined method `[]' for nil:NilClass" error.

Note, though, that if there are any legitimate NoMethodErrors within your if block (e.g. from empty/nil/blank attributes), you've just caught an exception you shouldn't have.

I use this in cookbooks that have extra functionality if some dependency of another component's recipe happens to be installed. For example, the odi cookbook depends on wls, or odi depends on oracledb.

chef knife tricks : add a node to an environment

Sometimes during automation of a large deployment process, we have to bootstrap a node, create an environment and add the node to that particular environment on the fly.

  1. Bootstrapping:

[code language="bash"]
knife bootstrap myhost09vmf0209.in.ram.com -x root -P password -N node2
[/code]

2. Create the environment dynamically from inside the program (Python here):

[code language="python"]
import subprocess

## Create an envtemplate with the required values in it
envtemplate = """
{
"name": \"""" + envname + """\",
"description": "The master development branch",
"cookbook_versions": {
},
"json_class": "Chef::Environment",
"chef_type": "environment",
"default_attributes": {
"revrec":
{
"required_version": \"""" + appVersion + """\"
}
},
"override_attributes": {
}
}
"""

## Write the envtemplate to a file
with open("/tmp/" + envname + ".json", "w") as f:
    f.write(envtemplate)

## Create the env from the template json file
subprocess.call("knife environment from file /tmp/" + envname + ".json", shell=True)
[/code]
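As a design note, building JSON by string concatenation is fragile (quoting mistakes are easy); the same file can be produced with json.dumps. The function name and layout below are illustrative, not from the original script:

```python
import json
import subprocess


def create_environment(envname, app_version):
    # Build the environment document as a plain dict, then serialize it;
    # this avoids quoting mistakes in a hand-written JSON template.
    env = {
        "name": envname,
        "description": "The master development branch",
        "cookbook_versions": {},
        "json_class": "Chef::Environment",
        "chef_type": "environment",
        "default_attributes": {"revrec": {"required_version": app_version}},
        "override_attributes": {},
    }
    path = "/tmp/" + envname + ".json"
    with open(path, "w") as f:
        json.dump(env, f, indent=2)
    # Hand the file to knife exactly as before.
    subprocess.call("knife environment from file " + path, shell=True)
    return path
```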

3. Add the node to the environment:

[code language="bash"]
knife exec -E 'nodes.find("name:node2") {|n| n.chef_environment("env209"); n.save }'
[/code]

Chef Recipe: Oracle DB 11gR2 EE silent deploy

Chef provides a lot of flexibility and greater choice for infrastructure automation, and I prefer it over the others.

We should design our recipes in such a way that they can be used in any environment without modification, by maximizing the use of attributes.

I was working on a deployment project on the Linux x86-64 platform, where I had to automate all the infra components. Oracle 11g R2 EE was one of them. I will share the cookbook here; it may help many others. The recipes here perform a silent installation of the DB using a response file, after pulling the media files from a remote system.

Also, the recipes are made idempotent, so that rerunning the cookbook again and again never does any damage. After a successful compile and run of the recipes, an attribute for DB installed / DB running is automatically set on the Chef server.

Also, the usernames/passwords are stored in and pulled from an encrypted data bag to make it more secure.

Here is the cookbook : https://github.com/kumarprd/Ora11gR2-EE-Silent-Install-Chef-Recipe

The recipes involved perform the steps below in sequence:

  1. setupenv.rb (creates the environment that will be used by the rest of the recipes)
  2. oradb.rb (checks the default attributes for fresh install/patch install and proceeds with the corresponding operations)
  3. install_oradb.rb (installs the Oracle database in an idempotent manner and sets the attributes on the server)
  4. create_schema.rb (this is application specific, but I will provide the template, which can be modified)

NOTE: Create an encrypted data bag with the JSON properties below, which are accessed inside the recipes.

Follow my other post: https://thegnulinuxguy.com/2016/08/09/chef-create-encrypted-data-bag-and-keep-secrets/

{
  "id": "apppass",
  "ora_db_passwd": "dbpass",
  "oracle_pass": "orapass"
}

Any issues/suggestions are welcome.