Trifork Blog

Axon Framework, DDD, Microservices

Posts Tagged ‘Configuration Management’

How to manage your Docker runtime config with Vagrant

July 20th, 2014
(http://blog.trifork.com/2014/07/20/how-to-manage-your-docker-runtime-config-with-vagrant/)

In this short blog I will show you how to manage a Docker container using Vagrant. Since version 1.6 Vagrant supports Docker as a provider, next to the existing providers for VirtualBox and AWS. With the new Docker support, Vagrant boxes can be started much faster. In turn, Vagrant makes Docker easier to use, since its runtime configuration can be stored in the Vagrantfile: you won’t have to add runtime parameters on the command line every time you want to start a container. Read on if you’d like to see how I create a Vagrantfile for an existing Docker image from Quinten’s Docker cookbooks collection.

Read the rest of this entry »

Ansible – Simple module

April 18th, 2013
(http://blog.trifork.com/2013/04/18/ansible-simple-module/)

In this post, we’ll review Ansible module development.
I have chosen to make a Maven module; not very fancy, but it provides good support for the subject.
This module will execute a Maven phase for a project (designated by its pom.xml).
You can always refer to the Ansible Module Development page.

Which language?

The de facto language in Ansible is Python (you benefit from the boilerplate), but any language can be used. The only requirement is being able to read/write files and write to stdout.
We will be using bash.

Module input

The maven module needs two parameters, the phase and the pom.xml location (pom).
For non-Python modules, Ansible provides the parameters in a file (first parameter) with the following format:
pom=/home/mohamed/myproject/pom.xml phase=test

You then need to read this file and extract the parameters.

In bash you can do that in two ways:
source $1

This can cause problems because the whole file is evaluated, so any code in there will be executed. In this case we trust that Ansible will not put any harmful stuff in there.

You can also parse the file using sed (or any way you like):
eval $(sed -e "s/\([a-z]*\)=\([a-zA-Z0-9\/\.]*\)/\1='\2'/g" $1)
This is good enough for this exercise.

We now have two variables (pom and phase) with the expected values.
We can continue and execute the maven phase for the given project (pom.xml).

Module processing

Basically, we can check if the parameters have been provided and then execute the maven command:


#!/bin/bash

# Parse the key=value pairs from the arguments file Ansible passes as $1
eval $(sed -e "s/\([a-z]*\)=\([a-zA-Z0-9\/\.]*\)/\1='\2'/g" $1)

if [ -z "${pom}" ] || [ -z "${phase}" ]; then
echo 'failed=True msg="Module needs pom file (pom) and phase name (phase)"'
exit 0
fi

# Run the requested Maven phase and capture its output in a temporary file
maven_output=$(mktemp /tmp/ansible-maven.XXX)
mvn ${phase} -f ${pom} > ${maven_output} 2>&1
if [ $? -ne 0 ]; then
echo "failed=True msg=\"Failed to execute maven ${phase} with ${pom}\""
exit 0
fi

echo "changed=True"
exit 0

In order to communicate the result, the module needs to return JSON.
To simplify that step, Ansible also allows modules to output key=value pairs instead of JSON.

Module output

Notice that output is always returned. If an error happened, failed=True is returned along with an error message.
If everything went fine, changed=True is returned (or changed=False).

If the Maven command fails, a generic error message is returned. We could change that by parsing the content of the Maven output file (the temporary file created above) and returning only what we need.

In some situations your module doesn’t do anything (no action is needed). In that case you’ll need to return changed=False to let Ansible know that nothing happened (this is important if later tasks in your playbook depend on it).
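For illustration, here is a hypothetical playbook fragment (the task names and the echo follow-up are my own assumptions, not part of the module) showing why the changed flag matters: the result is registered, and a follow-up task only runs when the Maven build actually did something.

  - name: Execute maven test phase
    action: maven pom=/home/mohamed/myproject/pom.xml phase=test
    register: maven_result

  - name: React to a build that did something (hypothetical follow-up task)
    action: command echo "maven did some work"
    only_if: '${maven_result.changed}'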

Use it

You can run your module with the following command:

ansible buildservers -m maven -M /home/mohamed/ansible/mymodules/ --args="pom=/home/mohamed/myproject/pom.xml phase=test" -u mohamed -k

If it goes well, you get something like the following output:

localhost | success >> {
“changed”: true
}

Otherwise:

localhost | FAILED >> {
“failed”: true,
“msg”: “Failed to execute maven test with /home/mohamed/myproject/pom.xml”
}

To install the module, put it in your ANSIBLE_LIBRARY path (by default /usr/share/ansible), and you can start using it inside your playbooks.
It goes without saying that this module has some dependencies; an obvious one is the presence of Maven. You can ensure that Maven is installed by adding a task to your playbook before using this module.
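As a sketch, a playbook fragment could first make sure Maven is present and then call our module (the apt package name 'maven' and the Debian-only condition are assumptions; adapt them to your distribution):

  - name: Install maven package (Debian based)
    action: apt pkg='maven' state=installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Run the test phase with our maven module
    action: maven pom=/home/mohamed/myproject/pom.xml phase=test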

Conclusion

Module development is as easy as what we briefly saw here, and can be done in any language. That’s another point I wanted to make, and it is part of what makes Ansible very nice to use.

Ansible – Example playbook to setup Jenkins slave

April 2nd, 2013
(http://blog.trifork.com/2013/04/02/ansible-example-playbook-to-setup-jenkins-slave/)

As mentioned in my previous post about Ansible, we will now proceed with writing an Ansible playbook. Playbooks are files containing instructions that can be processed by Ansible; they are written in YAML. In this blog post I will show you how to create a playbook that sets up a remote computer as a Jenkins slave.

What do we need?

We need a few components to let a computer execute Jenkins jobs:

  • JVM 7
  • A dedicated user that will run the Jenkins agent
  • Subversion
  • Maven (with our configuration)
  • Jenkins Swarm Plugin and Client

Why Jenkins Swarm Plugin

We use the Swarm Plugin because it allows a slave to auto-discover a master and join it automatically. We therefore don’t need to perform any actions on the master.

JDK7

We now proceed with adding the JDK7 installation tasks. We will not use any packaged version (for example a dedicated Ubuntu PPA or the RedHat/Fedora repos); we will use the JDK7 archive from oracle.com.
There are multiple steps required:

* We need wget to be installed; it is needed to download the JDK
* To download the JDK you need to accept the license terms; we can’t do that in a batch run, so we wrap the wget call in a shell script that sends extra HTTP headers
* Set the platform-wide JDK links (the java and jar executables)

Install wget

We want to verify that wget is installed on the remote computer and, if not, install it from the distribution repos. To install packages, there are modules available: yum and apt (there are others, but we will focus on these).
To be able to run the correct task depending on the ansible_pkg_mgr value, we can use only_if:

  - name: Install wget package (Debian based)
    action: apt pkg='wget' state=installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install wget package (RedHat based)
    action: yum name='wget' state=installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"

Download JDK7

To download JDK7 from oracle.com, we normally need to accept the license terms; we can’t do that in a batch run, so we work around it with the extra HTTP header mentioned earlier:

Create a script that contains the wget call:

#!/bin/bash

wget --no-cookies --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com" http://download.oracle.com/otn-pub/java/jdk/7/$1 -O $1

The parameter is the archive name.

  - name: Copy download JDK7 script
    copy: src=files/download-jdk7.sh dest=/tmp mode=0555

  - name: Download JDK7 (Ubuntu)
    action: command creates=${jvm_folder}/jdk1.7.0 chdir=${jvm_folder} /tmp/download-jdk7.sh $jdk_archive

These two tasks copy the script to /tmp and then execute it. $jdk_archive is a variable containing the archive name; it can differ depending on the distribution and the architecture.

Ansible provides a way to load variable files:

  vars_files:

    - [ "vars/defaults.yml" ]
    - [ "vars/$ansible_distribution-$ansible_architecture.yml", "vars/$ansible_distribution.yml" ]

This will load the file vars/defaults.yml (note that all these files are written in YAML) and then look for the file vars/$ansible_distribution-$ansible_architecture.yml.
The variables are replaced by their values on the remote computer; for example, on a 32-bit (i386) Ubuntu distribution, Ansible will look for the file vars/Ubuntu-i386.yml. If it doesn’t find it, it will fall back to vars/Ubuntu.yml.

For example, Ubuntu-i386.yml would contain:

---
jdk_archive: jdk-7-linux-i586.tar.gz

Fedora-i686.yml would contain:

---
jdk_archive: jdk-7-linux-i586.rpm

Unpack/Install JDK

Notice that for Ubuntu we use the tar.gz archive, but for Fedora we use an rpm archive. That means the installation of the JDK will differ depending on the distribution.

  - name: Unpack JDK7
    action: command creates=${jvm_folder}/jdk1.7.0 chdir=${jvm_folder} tar zxvf ${jvm_folder}/$jdk_archive --owner=root
    register: jdk_installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install JDK7 RPM package
    action: command creates=${jvm_folder}/latest chdir=${jvm_folder} rpm --force -Uvh ${jvm_folder}/$jdk_archive
    register: jdk_installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"

On Ubuntu, we just unpack the downloaded archive, but on Fedora we install it using rpm.
You might want to review the condition (only_if), particularly if you use SuSE.
jvm_folder is just an extra variable that can be global or per distribution; you need to place it in a vars file.
Note that the command module takes a 'creates' parameter. It is useful if you don’t want to rerun the command: the module checks whether the file or directory provided via this parameter exists, and if it does, the task is skipped.
In these tasks we use register. With register you can store the result of a task in a variable (in this case we called it jdk_installed).

Set links

To be able to make the java and jar executables accessible to anybody (particularly our jenkins user) from anywhere, we set symbolic links (actually we just install an alternative).

  - name: Set java link
    action: command update-alternatives --install /usr/bin/java java ${jvm_folder}/jdk1.7.0/bin/java 1
    only_if: '${jdk_installed.changed}'

  - name: Set jar link
    action: command update-alternatives --install /usr/bin/jar jar ${jvm_folder}/jdk1.7.0/bin/jar 1
    only_if: '${jdk_installed.changed}'

Here we reuse the stored register, jdk_installed. We can access its changed attribute: if the unpacking/installation of the JDK actually did something, changed will be true and the update-alternatives command will be run.

Cleanup

To keep things clean, you can remove the downloaded archive using the file module.

  - name: Remove JDK7 archive
    file: path=${jvm_folder}/$jdk_archive state=absent

We are done with the JDK.

Obviously you might want to reuse this process in other playbooks; Ansible lets you do that.
Just create a file with all these tasks and include it in a playbook.

- include: tasks/jdk7-tasks.yml jvm_folder=${jvm_folder} jdk_archive=${jdk_archive}

jenkins user

Creation

With the user module, we can easily handle users.

  - name: Create jenkins user
    user: name=jenkins comment="Jenkins slave user" home=${jenkins_home} shell=/bin/bash

The variable jenkins_home can be defined in one of the vars files.
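For example, a minimal vars/defaults.yml could look like this (the paths are assumptions; adjust them to your environment):

---
jvm_folder: /usr/lib/jvm
jenkins_home: /home/jenkins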

Password less from Jenkins master

We first create the .ssh folder in the jenkins home directory with the correct rights. Then, with the authorized_key module, we add the public key of the jenkins user on the Jenkins master to the authorized keys of the jenkins user on the new slave. Finally, we verify that the new authorized_keys file has the correct rights.

  - name: Create .ssh folder
    file: path=${jenkins_home}/.ssh state=directory mode=0700 owner=jenkins

  - name: Add passwordless connection for jenkins
    authorized_key: user=jenkins key="xxxxxxxxxxxxxx jenkins@master"

  - name: Update authorized_keys rights
    file: path=${jenkins_home}/.ssh/authorized_keys state=file mode=0600 owner=jenkins

If you want the jenkins user to execute any command via sudo without having to provide a password (basically by updating /etc/sudoers), the lineinfile module can do that for you.
That module checks 'regexp' against 'dest'; if it matches, it doesn’t do anything, and if not, it adds 'line' to 'dest'.

  - name: Tomcat can run any command with no password
    lineinfile: "line='tomcat ALL=NOPASSWD: ALL' dest=/etc/sudoers regexp='^tomcat'"

Subversion

This one is straightforward.

  - name: Install subversion package (Debian based)
    action: apt pkg='subversion' state=installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install subversion package (RedHat based)
    action: yum name='subversion' state=installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"

Maven

We will put maven under /opt so we first need to create that directory.

  - name: Create /opt directory
    file: path=/opt state=directory

We then download the Maven 3 archive; this time it is simpler, as we can directly use the get_url module.

  - name: Download Maven3
    get_url: dest=/opt/maven3.tar.gz url=http://apache.proserve.nl/maven/maven-3/3.0.4/binaries/apache-maven-3.0.4-bin.tar.gz

We can then unpack the archive and create a symbolic link to the maven location.

  - name: Unpack Maven3
    action: command creates=/opt/maven chdir=/opt tar zxvf /opt/maven3.tar.gz

  - name: Create Maven3 directory link
    file: path=/opt/maven src=/opt/apache-maven-3.0.4 state=link

We again use update-alternatives to make mvn accessible platform-wide.

  - name: Set mvn link
    action: command update-alternatives --install /usr/bin/mvn mvn /opt/maven/bin/mvn 1

We put our settings.xml in place by creating the .m2 directory on the remote computer and copying a settings.xml there (we back up any already existing settings.xml).

  - name: Create .m2 folder
    file: path=${jenkins_home}/.m2 state=directory owner=jenkins

  - name: Copy maven configuration
    copy: src=files/settings.xml dest=${jenkins_home}/.m2/ backup=yes

Clean things up.

  - name: Remove Maven3 archive
    file: path=/opt/maven3.tar.gz state=absent

Swarm client

You first need to install the Swarm plugin as mentioned here.
Then you can proceed with the client installation.

First create the jenkins slave working directory.

  - name: Create Jenkins slave directory
    file: path=${jenkins_home}/jenkins-slave state=directory owner=jenkins

Download the Swarm Client.

  - name: Download Jenkins Swarm Client
    get_url: dest=${jenkins_home}/swarm-client-1.8-jar-with-dependencies.jar url=http://maven.jenkins-ci.org/content/repositories/releases/org/jenkins-ci/plugins/swarm-client/1.8/swarm-client-1.8-jar-with-dependencies.jar owner=jenkins

When you start the swarm client, it will connect to the master and the master will automatically create a new node for it.
There are a couple of parameters to start the client. You still need to provide a login/password in order to authenticate. You obviously want this information to be parameterizable.

First we need a script/configuration to start the swarm client at boot time (SysV init, Upstart or systemd; it is up to you). In that script/configuration, you need to add the swarm client run command:

java -jar {{jenkins_home}}/swarm-client-1.8-jar-with-dependencies.jar -name {{jenkins_slave_name}} -password {{jenkins_password}} -username {{jenkins_username}} -fsroot {{jenkins_home}}/jenkins-slave -master https://jenkins.trifork.nl -disableSslVerification &> {{jenkins_home}}/swarm-client.log &

Then use the template module to process the script/configuration template (written with Jinja2) into a file that will be put at a given location.

  - name: Install swarm client script
    template: src=templates/jenkins-swarm-client.tmpl dest=/etc/init.d/jenkins-swarm-client mode=0700

The file mode is 700 because we have a login/password in that file; we don’t want people who can log on to the remote computer to be able to see it.

Instead of putting jenkins_username and jenkins_password in vars files, you can prompt for them.

  vars_prompt:

    - name: jenkins_username
      prompt: "What is your jenkins user?"
      private: no
    - name: jenkins_password
      prompt: "What is your jenkins password?"
      private: yes

And then you can verify that they have been set.

  - fail: msg="Missing parameters!"
    when_string: $jenkins_username == '' or $jenkins_password == ''

You can now start the swarm client using the service module and enable it to start at boot time.

  - name: Start Jenkins swarm client
    action: service name=jenkins-swarm-client state=started enabled=yes

Run it!

ansible-playbook jenkins.yml --extra-vars "host=myhost user=myuser" --ask-sudo-pass

By passing '--ask-sudo-pass', you tell Ansible that 'myuser' needs to provide a sudo password in order to run the tasks in the playbook.
'--extra-vars' passes a list of variables on to the playbook. The beginning of the playbook will look like this:

---
 
- hosts: $host
  user: $user
  sudo: yes

'sudo: yes' tells Ansible to run all tasks as root, acquiring the privileges via sudo.
You can also use 'sudo_user: admin' if you want Ansible to sudo to admin instead of root.
Note that if you don’t need facts, you can add 'gather_facts: no'; this will speed up the playbook execution, but it requires that you already know everything you need about the remote computer.
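Putting the pieces together, the skeleton of jenkins.yml could look roughly like this (a sketch assembled from the fragments above; the names of the included task files, apart from tasks/jdk7-tasks.yml, are assumptions):

---

- hosts: $host
  user: $user
  sudo: yes

  vars_files:

    - [ "vars/defaults.yml" ]
    - [ "vars/$ansible_distribution-$ansible_architecture.yml", "vars/$ansible_distribution.yml" ]

  vars_prompt:

    - name: jenkins_username
      prompt: "What is your jenkins user?"
      private: no
    - name: jenkins_password
      prompt: "What is your jenkins password?"
      private: yes

  tasks:

    - include: tasks/jdk7-tasks.yml jvm_folder=${jvm_folder} jdk_archive=${jdk_archive}
    - include: tasks/jenkins-user-tasks.yml
    - include: tasks/maven-tasks.yml
    - include: tasks/swarm-client-tasks.yml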

Conclusion

The playbook is ready. You can now easily add new nodes for new Jenkins slaves thanks to Ansible.

Ansible – next generation configuration management

March 26th, 2013
(http://blog.trifork.com/2013/03/26/ansible-next-generation-configuration-management/)

The popularity of the cloud has taken configuration management to the next level. Tools that help system administrators and developers configure and manage large numbers of servers, like Chef and Puppet, have popped up everywhere. Ansible is next-generation configuration management. Ansible can be used to execute tasks on remote computers via SSH, so no agent is required on the remote computer. It was originally created by Michael DeHaan.
I won’t compare Ansible with Puppet or Chef here; you can check the Ansible FAQ. But the key differentiators are that Ansible does not require an agent to be installed, its tasks are executed in order, and it can be extended via modules written in any language as long as they return JSON, basically taking the best of both worlds (Puppet and Chef).

Installation

You’ll want to install Ansible on a central computer from which you can reach all the other computers.

On Fedora, it is already packaged:

sudo yum install ansible

On Ubuntu, you need to add a repo:

sudo add-apt-repository ppa:rquillo/ansible
sudo apt-get install ansible

On Mac, you can use MacPorts.

On other systems, you can build it from source: https://github.com/ansible/ansible.

Getting started

One of the core constructs in Ansible is the notion of an inventory. Ansible uses this inventory to know which computers should be included when executing a module for a given group. An inventory is a very simple file (by default it uses /etc/ansible/hosts) containing groups of computers.

Example:

[appservers]
app1.trifork.nl
app2.trifork.nl

As part of the inventory you can also initialize variables common to a group. These variables can then be reused when executing tasks for each computer.

[appservers]
app1.trifork.nl
app2.trifork.nl

[appservers:vars]
tomcat_version=7
java_version=7
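These group variables (like tomcat_version above) can then be referenced from tasks; for instance, a hypothetical task (the tomcat package name is an assumption) could use the version defined in the inventory:

  - name: Install Tomcat (Debian based)
    action: apt pkg='tomcat${tomcat_version}' state=installed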

You can set your own inventory by setting a global environment variable:

export ANSIBLE_HOSTS=my-ansible-inventory

You can then start using Ansible right away:

ansible appservers -m ping -u my-user -k

What this does is run the 'ping' module for all computers in the group appservers. It returns:

app1.trifork.nl | success >> {
"changed": false,
"ping": "pong"
}

app2.trifork.nl | success >> {
"changed": false,
"ping": "pong"
}

You see that the module executed successfully on both hosts. We’ll come back to the ‘changed’ output later.
-u tells Ansible that you want to use another user (it uses root by default) to log in on the remote computers. -k tells Ansible that you want to provide a password for this user.
In most cases you’ll probably want to set up a passwordless connection to the remote computers; ssh-copy-id will help you do that. Or better, you can rely on ssh-agent.

Gathering facts

Most of the time when using Ansible, you want to know something about the computer you are executing a task on.
The 'setup' module does just that: it gathers facts about a computer.

ansible appservers -m setup -u tomcat -k

You get a big output (I’ve removed some of it):

app1.trifork.nl | success >> {
    "ansible_facts": {
        ...
        "ansible_architecture": "x86_64",
        ...
        "ansible_distribution": "Ubuntu",
        ...
        "ansible_domain": "trifork.nl",
        ...
        "ansible_fqdn": "app1.trifork.nl",
        "ansible_hostname": "app1",
        "ansible_interfaces": [
            "lo",
            "eth0"
        ],
        ...
        "ansible_machine": "x86_64",
        "ansible_memfree_mb": 1279,
        "ansible_memtotal_mb": 8004,
        "ansible_pkg_mgr": "apt",
        ...
        "ansible_system": "Linux",
        ...
    },
    "changed": false,
    "verbose_override": true
}

app2.trifork.nl | success >> {
    "ansible_facts": {
        ...
        "ansible_architecture": "x86_64",
        ...
        "ansible_distribution": "Ubuntu",
        ...
        "ansible_domain": "trifork.nl",
        "ansible_fqdn": "app2.trifork.nl",
        "ansible_hostname": "app2",
        "ansible_interfaces": [
            "lo",
            "eth0"
        ],
        ...
        "ansible_machine": "x86_64",
        "ansible_memfree_mb": 583,
        "ansible_memtotal_mb": 2009,
        "ansible_pkg_mgr": "apt",
        ...
        "ansible_system": "Linux",
        ...
    },
    "changed": false,
    "verbose_override": true
}

These are Ansible facts; Ansible can also use extra facts gathered by ohai or facter.

Let’s review some of the Ansible facts:

ansible_pkg_mgr: This tells which package manager is in use on the remote Linux computer. This is important if you want to use the 'apt' or 'yum' module and want to make your scripts (playbooks) distro-agnostic.
ansible_distribution: This tells which Linux distribution is installed on the remote computer.
ansible_architecture: If you want to know which OS architecture it is.

Next time we’ll use these facts together with modules in a playbook example.

Automated Configuration Management with Puppet

February 15th, 2011
(http://blog.trifork.com/2011/02/15/automated-configuration-management-with-puppet/)

Puppet is a systems management platform that enables sysadmins and developers to standardise the deployment and management of the IT infrastructure. This blog entry shows you how to automate your configuration management using Puppet.

Read the rest of this entry »