Trifork Blog


Category ‘System Administration’

Ansible - Simple module

April 18th, 2013 by

In this post, we'll review Ansible module development.
I have chosen to write a Maven module; not very fancy, but it provides good support for the subject.
This module will execute a Maven phase for a project (designated by its pom.xml).
You can always refer to the Ansible Module Development page.

Which language?

The de facto language for Ansible modules is Python (you benefit from the provided boilerplate), but any language can be used. The only requirement is being able to read/write files and write to stdout.
We will be using bash.

Module input

The maven module needs two parameters, the phase and the pom.xml location (pom).
For non-Python modules, Ansible provides the parameters in a file (first parameter) with the following format:
pom=/home/mohamed/myproject/pom.xml phase=test

You then need to read this file and extract the parameters.

In bash you can do that in two ways:
source $1

This can cause problems because the whole file is evaluated, so any code in it will be executed. In this case we trust Ansible not to put anything harmful in there.

You can also parse the file using sed (or any way you like):
eval $(sed -e "s/\([a-z]*\)=\([a-zA-Z0-9\/\.]*\)/\1='\2'/g" $1)
This is good enough for this exercise.
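To convince yourself of what the sed expression does, you can exercise the parsing by hand (the file path and values below are just the ones from the example above):

```shell
# Create a sample arguments file like the one Ansible hands to the module.
args_file=$(mktemp)
printf 'pom=/home/mohamed/myproject/pom.xml phase=test' > "$args_file"

# The sed expression wraps each value in single quotes, turning the
# key=value pairs into shell assignments that eval can then execute.
eval $(sed -e "s/\([a-z]*\)=\([a-zA-Z0-9\/\.]*\)/\1='\2'/g" "$args_file")

echo "pom=${pom}"       # → pom=/home/mohamed/myproject/pom.xml
echo "phase=${phase}"   # → phase=test
```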

We now have two variables (pom and phase) with the expected values.
We can continue and execute the maven phase for the given project (pom.xml).

Module processing

Basically, we can check if the parameters have been provided and then execute the maven command:


eval $(sed -e "s/\([a-z]*\)=\([a-zA-Z0-9\/\.]*\)/\1='\2'/g" $1)

if [ -z "${pom}" ] || [ -z "${phase}" ]; then
    echo 'failed=True msg="Module needs pom file (pom) and phase name (phase)"'
    exit 0
fi

# bash variable names cannot contain dashes, so we use maven_output
maven_output=$(mktemp /tmp/ansible-maven.XXX)
mvn ${phase} -f ${pom} > ${maven_output} 2>&1
if [ $? -ne 0 ]; then
    echo "failed=True msg=\"Failed to execute maven ${phase} with ${pom}\""
    exit 0
fi

echo "changed=True"
exit 0

In order to communicate the result, the module needs to return JSON.
To simplify that output step, Ansible also accepts flat key=value pairs as output.

Module output

You noticed that output is always returned. If an error occurred, failed=True is returned along with an error message.
If everything went fine, changed=True (or changed=False) is returned.

If the maven command fails, a generic error message is returned. We can improve that by parsing the content of the maven output file and returning only what we need.

In some situations, your module doesn't do anything (no action is needed). In that case you'll need to return changed=False in order to let Ansible know that nothing happened (it is important if you need that for the rest of the tasks in your playbook).
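The no-op branch can be sketched like this in bash (the marker-file check is a hypothetical stand-in for whatever state your module actually inspects):

```shell
# Report the Ansible-style result without doing any work when the
# resource is already in the desired state. The marker file argument is
# a hypothetical stand-in for a real state check.
report_change() {
    if [ -f "$1" ]; then
        echo "changed=False"
    else
        echo "changed=True"
    fi
}

report_change /tmp/some-state-marker
```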

Use it

You can run your module with the following command:

ansible buildservers -m maven -M /home/mohamed/ansible/mymodules/ --args="pom=/home/mohamed/myproject/pom.xml phase=test" -u mohamed -k

If it goes well, you get something like the following output:

localhost | success >> {
    "changed": true
}

localhost | FAILED >> {
    "failed": true,
    "msg": "Failed to execute maven test with /home/mohamed/myproject/pom.xml"
}
To install the module, put it in your ANSIBLE_LIBRARY path (by default /usr/share/ansible), and you can start using it inside your playbooks.
It goes without saying that this module has some dependencies; an obvious one is the presence of Maven. You can ensure that Maven is installed by adding a task to your playbook before using this module.
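For example, a task along these lines could run before any maven-module task (the apt invocation and the 'maven' package name are assumptions; adjust for your distribution):

```yaml
- name: Install maven package (Debian based)
  action: apt pkg='maven' state=installed
  only_if: "'$ansible_pkg_mgr' == 'apt'"
```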


Module development is as easy as what we briefly saw here, and can be done in any language. That's another point I wanted to make, and it is part of what makes Ansible very nice to use.

Ansible - Example playbook to setup Jenkins slave

April 2nd, 2013 by

As mentioned in my previous post about Ansible, we will now proceed with writing an Ansible playbook. Playbooks are files containing instructions that can be processed by Ansible; they are written in YAML. For this blog post I will show you how to create a playbook that will set up a remote computer as a Jenkins slave.

What do we need?

We need a few components to get a computer to execute Jenkins jobs:

  • JVM 7
  • A dedicated user that will run the Jenkins agent
  • Subversion
  • Maven (with our configuration)
  • Jenkins Swarm Plugin and Client

Why Jenkins Swarm Plugin

We use the Swarm plugin because it allows a slave to auto-discover a master and join it automatically. We therefore don't need any actions on the master.


We now proceed with adding the JDK7 installation task. We will not use any packaged version (for example a dedicated Ubuntu PPA or the RedHat/Fedora repos); we will download the JDK7 archive directly.
There are multiple steps required:

* We need wget to be installed. This is needed to download the JDK.
* To download the JDK you need to accept terms; we can't do that in a batch run, so we need to wrap a wget call in a shell script that sends extra HTTP headers.
* Set the platform-wide JDK links (the java and jar executables).

Install wget

We want to verify that wget is installed on the remote computer and, if not, install it from the distribution repos. To install packages there are modules available, yum and apt (there are others, but we will focus on these).
To be able to run the correct task depending on the ansible_pkg_mgr value, we can use only_if:

  - name: Install wget package (Debian based)
    action: apt pkg='wget' state=installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install wget package (RedHat based)
    action: yum name='wget' state=installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"

Download JDK7

To download JDK7 we need to accept the terms, but we can't do that in a batch run, so we need to work around it:

Create a script that contains the wget call:


wget --no-cookies --header "Cookie:"$1 -O $1

The parameter is the archive name.

  - name: Copy download JDK7 script
    copy: src=files/ dest=/tmp mode=0555

  - name: Download JDK7 (Ubuntu)
    action: command creates=${jvm_folder}/jdk1.7.0 chdir=${jvm_folder} /tmp/ $jdk_archive

These two tasks copy the script to /tmp and then execute it. $jdk_archive is a variable containing the archive name; it can differ depending on the distribution and the architecture.

Ansible provides a way to load variable files:

  vars_files:
    - [ "vars/defaults.yml" ]
    - [ "vars/$ansible_distribution-$ansible_architecture.yml", "vars/$ansible_distribution.yml" ]

This will load the file vars/defaults.yml (note that all these files are written in YAML) and then look for the file vars/$ansible_distribution-$ansible_architecture.yml.
The variables are replaced by their value on the remote computer. For example, on a 32-bit Ubuntu distribution on i386, Ansible will look for the file vars/Ubuntu-i386.yml. If it doesn't find it, it will fall back to vars/Ubuntu.yml.

For example, Ubuntu-i386.yml would contain:

jdk_archive: jdk-7-linux-i586.tar.gz

Fedora-i686.yml would contain:

jdk_archive: jdk-7-linux-i586.rpm

Unpack/Install JDK

You'll notice that for Ubuntu we use the tar.gz archive, but for Fedora we use an rpm archive. That means that the installation of the JDK will differ depending on the distribution.

  - name: Unpack JDK7
    action: command creates=${jvm_folder}/jdk1.7.0 chdir=${jvm_folder} tar zxvf ${jvm_folder}/$jdk_archive --owner=root
    register: jdk_installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install JDK7 RPM package
    action: command creates=${jvm_folder}/latest chdir=${jvm_folder} rpm --force -Uvh ${jvm_folder}/$jdk_archive
    register: jdk_installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"

On Ubuntu we just unpack the downloaded archive, but on Fedora we install it using rpm.
You might want to review the condition (only_if), particularly if you use SuSE.
jvm_folder is just an extra variable that can be global or per distribution; you need to place it in a vars file.
Note that the command module takes a 'creates' parameter. It is useful if you don't want to rerun the command: the module checks whether the file or directory provided via this parameter exists, and if it does, it skips the task.
In this task we use register. With register you can store the result of a task in a variable (in this case we called it jdk_installed).

Set links

To be able to make the java and jar executables accessible to anybody (particularly our jenkins user) from anywhere, we set symbolic links (actually we just install an alternative).

  - name: Set java link
    action: command update-alternatives --install /usr/bin/java java ${jvm_folder}/jdk1.7.0/bin/java 1
    only_if: '${jdk_installed.changed}'

  - name: Set jar link
    action: command update-alternatives --install /usr/bin/jar jar ${jvm_folder}/jdk1.7.0/bin/jar 1
    only_if: '${jdk_installed.changed}'

Here we reuse the stored register, jdk_installed. We can access its changed attribute: if the unpacking/installation of the JDK did do something, changed will be true and the update-alternatives command will be run.


To keep things clean, you can remove the downloaded archive using the file module.

  - name: Remove JDK7 archive
    file: path=${jvm_folder}/$jdk_archive state=absent

We are done with the JDK.

Obviously you might want to reuse this process in other playbooks. Ansible lets you do that: just create a file with all these tasks and include it in a playbook.

- include: tasks/jdk7-tasks.yml jvm_folder=${jvm_folder} jdk_archive=${jdk_archive}

jenkins user


With the user module, we can easily handle users.

  - name: Create jenkins user
    user: name=jenkins comment="Jenkins slave user" home=${jenkins_home} shell=/bin/bash

The variable jenkins_home can be defined in one of the vars files.

Password less from Jenkins master

We first create the .ssh folder in the jenkins home directory with the correct rights. Then, with the authorized_key module, we add the public key of the jenkins user on the Jenkins master to the authorized keys of the jenkins user on the new slave. Finally we verify that the new authorized_keys file has the correct rights.

  - name: Create .ssh folder
    file: path=${jenkins_home}/.ssh state=directory mode=0700 owner=jenkins

  - name: Add passwordless connection for jenkins
    authorized_key: user=jenkins key="xxxxxxxxxxxxxx jenkins@master"

  - name: Update authorized_keys rights
    file: path=${jenkins_home}/.ssh/authorized_keys state=file mode=0600 owner=jenkins

If you want jenkins to execute any command via sudo without having to provide a password (basically updating /etc/sudoers), the lineinfile module can do that for you.
That module checks 'regexp' against 'dest'; if it matches, it doesn't do anything, otherwise it adds 'line' to 'dest'.

  - name: Jenkins can run any command with no password
    lineinfile: "line='jenkins ALL=NOPASSWD: ALL' dest=/etc/sudoers regexp='^jenkins'"


Subversion

This one is straightforward.

  - name: Install subversion package (Debian based)
    action: apt pkg='subversion' state=installed
    only_if: "'$ansible_pkg_mgr' == 'apt'"

  - name: Install subversion package (RedHat based)
    action: yum name='subversion' state=installed
    only_if: "'$ansible_pkg_mgr' == 'yum'"


Maven 3

We will put Maven under /opt, so we first need to create that directory.

  - name: Create /opt directory
    file: path=/opt state=directory

We then download the maven3 archive. This time it is simpler: we can directly use the get_url module.

  - name: Download Maven3
    get_url: dest=/opt/maven3.tar.gz url=

We can then unpack the archive and create a symbolic link to the maven location.

  - name: Unpack Maven3
    action: command creates=/opt/maven chdir=/opt tar zxvf /opt/maven3.tar.gz

  - name: Create Maven3 directory link
    file: path=/opt/maven src=/opt/apache-maven-3.0.4 state=link

We use update-alternatives again to make mvn accessible platform-wide.

  - name: Set mvn link
    action: command update-alternatives --install /usr/bin/mvn mvn /opt/maven/bin/mvn 1

We put our settings.xml in place by creating the .m2 directory on the remote computer and copying a settings.xml into it (we back up any existing settings.xml).

  - name: Create .m2 folder
    file: path=${jenkins_home}/.m2 state=directory owner=jenkins

  - name: Copy maven configuration
    copy: src=files/settings.xml dest=${jenkins_home}/.m2/ backup=yes

Clean things up.

  - name: Remove Maven3 archive
    file: path=/opt/maven3.tar.gz state=absent

Swarm client

You first need to install the Swarm plugin as mentioned here.
Then you can proceed with the client installation.

First create the jenkins slave working directory.

  - name: Create Jenkins slave directory
    file: path=${jenkins_home}/jenkins-slave state=directory owner=jenkins

Download the Swarm Client.

  - name: Download Jenkins Swarm Client
    get_url: dest=${jenkins_home}/swarm-client-1.8-jar-with-dependencies.jar url= owner=jenkins

When you start the swarm client, it will connect to the master and the master will automatically create a new node for it.
There are a couple of parameters to start the client. You still need to provide a login/password in order to authenticate. You obviously want this information to be parameterizable.

First we need a script/configuration to start the swarm client at boot time (SysV init, upstart or systemd; it is up to you). In that script/configuration, you need to add the swarm client run command:

java -jar {{jenkins_home}}/swarm-client-1.8-jar-with-dependencies.jar -name {{jenkins_slave_name}} -password {{jenkins_password}} -username {{jenkins_username}} -fsroot {{jenkins_home}}/jenkins-slave -master -disableSslVerification &> {{jenkins_home}}/swarm-client.log &

Then we use the template module to process the script/configuration template (using Jinja2) into a file that is put at the given location.

  - name: Install swarm client script
    template: src=templates/jenkins-swarm-client.tmpl dest=/etc/init.d/jenkins-swarm-client mode=0700

The file mode is 700 because we have a login/password in that file; we don't want people (who can log on to the remote computer) to be able to see it.

Instead of putting jenkins_username and jenkins_password in vars files, you can prompt for them.

  vars_prompt:
    - name: jenkins_username
      prompt: "What is your jenkins user?"
      private: no
    - name: jenkins_password
      prompt: "What is your jenkins password?"
      private: yes

And then you can verify that they have been set.

  - fail: msg="Missing parameters!"
    when_string: $jenkins_username == '' or $jenkins_password == ''

You can now start the swarm client using the service module and enable it to start at boot time.

  - name: Start Jenkins swarm client
    action: service name=jenkins-swarm-client state=started enabled=yes

Run it!

ansible-playbook jenkins.yml --extra-vars "host=myhost user=myuser" --ask-sudo-pass

By passing '--ask-sudo-pass', you tell Ansible that 'myuser' requires a password to be typed in order to be able to run the tasks in the playbook.
'--extra-vars' passes a list of variables to the playbook. The beginning of the playbook will look like this:

- hosts: $host
  user: $user
  sudo: yes

'sudo: yes' tells Ansible to run all tasks as root, acquiring the privileges via sudo.
You can also use 'sudo_user: admin' if you want Ansible to sudo to admin instead of root.
Note that if you don't need facts, you can add 'gather_facts: no'; this will speed up the playbook execution, but it requires that you already know everything you need about the remote computer.
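With that option added, the playbook header shown above becomes:

```yaml
- hosts: $host
  user: $user
  sudo: yes
  gather_facts: no
```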


The playbook is ready. You can now easily add new nodes for new Jenkins slaves thanks to Ansible.

Bash - A few commands to use again and again

March 28th, 2013 by


These days I spend a lot of time in the bash shell. I use it for ad-hoc scripting or driving several Linux boxes. In my current project we set up a continuous delivery environment and migrate code onto it. I lift code from CVS to SVN, mavenize Ant builds and funnel artifacts into Nexus. One script I wrote determines if a jar that was checked into a CVS source tree exists in Nexus or not. This check can be done via the Nexus REST API. More on this script at the end of the blog. But first let's have a look at a few bash commands that I use all the time in day-to-day bash usage, in no particular order.

  1. find

     Find searches files recursively in the current directory.

    $ find . -name '*.jar'

    This command lists all jars in the current directory, recursively. We use this command to figure out if a source tree has jars. If this is the case we add them to Nexus and to the pom as part of the migration from Ant to Maven.

    $ find . -name '*.jar' -exec sha1sum {} \;

    Find combined with -exec is very powerful. This command lists the jars and computes the sha1sum for each of them. The sha1sum command is put directly after the -exec flag. The {} will be replaced with the jar that is found. The \; is an escaped semicolon so find can figure out where the command ends.

  2. for

     For loops are often the basis of my shell scripts. I start with a for loop that just echoes some values to the terminal so I can check if it works, and then go from there.

    $ for i in $(cat items.txt); do echo $i; done;

    The for loop keywords should be followed by either a newline or an ';'. When the for loop is OK I will add more commands between the do and done blocks. Note that I could have also used find -exec but if I have a script that is more than a one-liner I prefer a for loop for readability.

  3. tr

     Transliterate. You can use this to get rid of certain characters or replace them, piecewise.

    $ echo 'Com_Acme_Library' | tr '_A-Z' '.a-z'

    Lowercases and replaces underscores with dots.

  4. awk

    $ echo 'one two three' | awk '{ print $2, $3 }'

    Prints the second and third columns of the output. Awk is of course a full-blown programming language, but I tend to use snippets like this a lot for selecting columns from the output of another command.

  5. sed

     Stream EDitor. A complete tool on its own, yet I use it mostly for small substitutions.

    $ echo 'foo bar baz' | sed -e 's/foo/quux/'

    Replaces foo with quux.

  6. xargs

     Run a command on every line of input on standard in.

    $ cat jars.txt | xargs -n1 sha1sum

    Run sha1sum on every line in the file. This is another alternative to a for loop or find -exec. I use this when I have a long pipeline of commands in a one-liner and want to process every line in the end result.

  7. grep

     Here are some grep features you might not know:

    $ grep -A3 -B3 keyword data.txt

    This will list the match of the keyword in data.txt including 3 lines after (-A3) and 3 lines before (-B3) the match.

    $ grep -v keyword data.txt

    Inverse match. Match everything except keyword.

  8. sort

     Sort is another command often used at the end of a pipeline. For numerical sorting use

    $ sort -n

  9. Reverse search (CTRL-R)

     This one isn't a real command but it's really useful. Instead of typing history and looking up a previous command, press CTRL-R, start typing, and have bash autocomplete your history. Use escape to quit reverse-search mode. When you press CTRL-R your prompt will look like this:

    (reverse-i-search)`':
  10. !!

     Pronounced 'bang-bang'. Repeats the previous command. Here is the cool thing:

    $ !!:s/foo/bar

    This repeats the previous command, but with foo replaced by bar. Useful if you entered a long command with a typo. Instead of manually replacing one of the arguments replace it this way.

    Bash script - checking artifacts in Nexus

    Below is the script I talked about. It loops over every jar and dll file in the current directory, calls Nexus via wget and optionally outputs a pom dependency snippet. It also adds a status column at the end of the output, either an OK or a KO, which makes the output easy to grep for further processing.

    jars=0
    ok=0
    for jar in $(find $(pwd) -name '*.jar' -o -name '*.dll' 2>/dev/null); do
        jars=$((jars + 1))
        output=$(basename $jar)-pom.xml
        sha1=$(sha1sum $jar | awk '{print $1}')
        response=$(curl -s$sha1)
        if [[ $response =~ groupId ]]; then
            ok=$((ok + 1))
            echo "findjars $jar OK"
            echo "" >> "$output"
            echo "$response" | grep groupId -A3 -m1 >> "$output"
            echo "" >> "$output"
        else
            echo "findjars $jar KO"
        fi
    done
    if [[ $jars > 0 ]]; then
        echo "findjars Found $ok/$jars jars/dlls. See -pom.xml file for XML snippet"
        exit 1
    fi


    It is amazing what you can do in terms of scripting when you combine just these commands via pipes and redirection! It's like a Pareto's law of shell scripting, 20% of the features of bash and related tools provide 80% of the results. The basis of most scripts can be a for loop. Inside the for loop the resulting data can be transliterated, grepped, replaced by sed and finally run through another program via xargs.


    The Bash Cookbook is a great overview of how to solve solutions to common problems using bash. It also teaches good bash coding style.

Ansible - next generation configuration management

March 26th, 2013 by

The popularity of the cloud has taken configuration management to the next level. Tools that help system administrators and developers configure and manage large numbers of servers, like Chef and Puppet, have popped up everywhere. Ansible is a next-generation configuration management tool. Ansible can be used to execute tasks on remote computers via SSH, so no agent is required on the remote computer. It was originally created by Michael DeHaan.
I won't compare Ansible with Puppet or Chef; you can check the Ansible FAQ for that. But the key differentiators are that Ansible does not require an agent to be installed, its commands can be ordered, and it can be extended via modules written in any language as long as they return JSON, basically taking the best of both worlds (Puppet and Chef).


You'll want to install Ansible on a central computer from which you can reach all the other computers.

On Fedora, it is already packaged:

sudo yum install ansible

On Ubuntu, you need to add a repo:

sudo add-apt-repository ppa:rquillo/ansible
sudo apt-get update
sudo apt-get install ansible

On Mac, you can use MacPorts.

On others, compile it from source.

Getting started

One of the core constructs in Ansible is the notion of an inventory. Ansible uses this inventory to know which computers should be included when executing a module for a given group. An inventory is a very simple file (by default it uses /etc/ansible/hosts) containing groups of computers.



As part of the inventory you can also initialize variables common to a group. These variables can then be reused when executing tasks for each computer.
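For example, a minimal inventory could look like this (the group name and hostnames are invented for illustration; the [group:vars] section sets variables shared by the whole group):

```
[appservers]
app1.example.com
app2.example.com

[appservers:vars]
tomcat_port=8080
```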



You can set your own inventory by setting a global environment variable:

export ANSIBLE_HOSTS=my-ansible-inventory

You can then start using Ansible right away:

ansible appservers -m ping -u my-user -k

What this does is run the module 'ping' for all computers in the group appservers. It returns:

 | success >> {
    "changed": false,
    "ping": "pong"
}
 | success >> {
    "changed": false,
    "ping": "pong"
}

You see that the module executed successfully on both hosts. We'll come back to the 'changed' output later.
-u tells Ansible that you want to use another user (it uses root by default) to log in on the remote computers. -k tells Ansible that you want to provide a password for this user.
In most cases you'll probably want to set up a passwordless connection to the remote computers; ssh-copy-id will help you do that. Or better, you can rely on ssh-agent.

Gathering facts

Most of the time when using Ansible, you want to know something about the computer you are executing a task on.
The 'setup' module does just that, it gathers facts about a computer.

ansible appservers -m setup -u tomcat -k

You get a big output (I've removed some of it):

 | success >> {
    "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_distribution": "Ubuntu",
        "ansible_domain": "",
        "ansible_fqdn": "",
        "ansible_hostname": "app1",
        "ansible_interfaces": [ ... ],
        "ansible_machine": "x86_64",
        "ansible_memfree_mb": 1279,
        "ansible_memtotal_mb": 8004,
        "ansible_pkg_mgr": "apt",
        "ansible_system": "Linux",
        ...
    },
    "changed": false,
    "verbose_override": true
}
 | success >> {
    "ansible_facts": {
        "ansible_architecture": "x86_64",
        "ansible_distribution": "Ubuntu",
        "ansible_domain": "",
        "ansible_fqdn": "",
        "ansible_hostname": "app2",
        "ansible_interfaces": [ ... ],
        "ansible_machine": "x86_64",
        "ansible_memfree_mb": 583,
        "ansible_memtotal_mb": 2009,
        "ansible_pkg_mgr": "apt",
        "ansible_system": "Linux",
        ...
    },
    "changed": false,
    "verbose_override": true
}
These are Ansible facts; Ansible can also use extra facts gathered by ohai or facter.

Let's review some of the Ansible facts:

ansible_pkg_mgr: This tells which package manager is in use on the remote Linux computer. This is important if you want to use the 'apt' or 'yum' module and want to make your scripts (playbooks) distro-agnostic.
ansible_distribution: This tells which Linux distribution is installed on the remote computer.
ansible_architecture: If you want to know which OS architecture it is.

Next time we'll use these facts together with modules in a playbook example.

Building a Captive Portal - controlling access to the internet from your network

January 15th, 2013 by

What is a captive portal?
Wikipedia says: "The captive portal technique forces an HTTP client on a network to see a special web page (usually for authentication purposes) before using the Internet normally. A captive portal turns a Web browser into an authentication device. This is done by intercepting all packets, regardless of address or port, until the user opens a browser and tries to access the Internet."

Basically, when accessing a network (in most cases a WiFi network), a captive portal will block all traffic (to, for instance, the internet) as long as the client has not gone through a predefined workflow. That workflow begins when the user opens a web browser, and via that same browser the client is required to, for instance:

  • authenticate itself
  • accept terms
  • pay fees
  • etc.

In this post, I will show you how you can build this kind of solution for your own network using several open source tools, primarily CoovaChilli.

Automated Configuration Management with Puppet

February 15th, 2011 by

Puppet is a systems management platform that enables sysadmins and developers to standardise the deployment and management of the IT infrastructure. This blog entry shows you how to automate your configuration management using Puppet.


Decrease the double click speed for Java Applications on Ubuntu Linux

October 13th, 2008 by

When running any Java Swing application (like IntelliJ IDEA) on Ubuntu, the double-click speed is by default set to 200 ms. If, like me, you find this annoying, you can decrease the double-click speed by taking the following steps.

In your home directory, create a file called .Xresources and add the following line:

*multiClickTime: 400

Then from the command line execute:

xrdb ~/.Xresources

to make the changes take effect.

Disabling URL rewriting for the Googlebot

September 8th, 2008 by

HTTP is a stateless protocol. To work around the problems caused by this, web applications have the concept of a session. When a user requests a web page for the first time, the user is assigned a unique 32-character string. This string can be sent along in subsequent requests to indicate that these requests are in fact originating from the same user. The most common way to pass along this string, or session identifier, is by sending it in a cookie. But what if a user chooses to disable cookies? In that case a servlet container will fall back on URL rewriting: the session identifier is appended at the end of any links in your application. So a link to your homepage might look like this after rewriting:
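For illustration, such a rewritten link could look like this (hostname and session id are made up):

```
http://www.example.com/home.jsp;jsessionid=2A44C21A0DF3FF0E1DD19C41C1D4A45B
```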


When you click this link, the container will parse the jsessionid value and determine that you are the same user that made the previous request. This way even privacy-conscious users may continue to use your web site. This all just works as long as you use something like the JSTL url tag. When it detects that the user has disabled cookies, it will automatically start rewriting all the URLs in your application.

Most of the time this is what you want. However, there is an unfortunate side effect to this strategy. The Googlebot that constantly spiders the internet for new content does not support cookies. This means that it will see, and index, the rewritten URLs. A quick search suggests that this is a fairly common problem. The rewritten URLs will hurt your Google rating because less of the URL will match a user's search query. So how do you solve it? It turned out to be fairly trivial.

I created a ServletResponseWrapper that modifies the encodeURL and encodeRedirectURL methods so it does not append the session identifier. The wrapper is created in a servlet filter that only applies the wrapper when it determines that the request originates from the Googlebot. You can check this fairly easily by inspecting the user agent header sent along with every request. I included the source below.

Password protecting web applications in tomcat.

January 22nd, 2007 by

A few days back I wanted to take an existing application, deploy it to a staging environment and password protect it without having to change the application code. How hard can it be, right? As it turns out, it's not that hard, but way, way harder than it should be. There doesn't seem to be any support for this built into Tomcat. So I ended up implementing my own valve that does this. Valves are components that enable Tomcat to intercept a request and pre-process it. They are similar to the filter mechanism of the Servlet specification, but are specific to Tomcat. They have a broader scope than Servlet filters and can be applied to the entire engine, to all applications for a host, or to a single web application. With this jar in my /server/lib, password protecting an application becomes as simple as:

<Context docBase="../app" debug="0" privileged="true">
  <Valve className="nl.jteam.tomcat.valves.PasswordValve"
         password="s3cr3t" exclude="/test.html" />
</Context>

Disabling the Firefox DNS cache

November 4th, 2006 by

If you, like me, make frequent changes to your hosts file, for instance because your staging and production environments both listen to the same vhost, you will probably have noticed that it takes Firefox a while to pick up on the alterations you made. This is because, in order to improve performance, Firefox by default caches DNS lookups for up to 60 seconds. If you do not feel like waiting, restarting the browser is your only option. Fortunately, you can disable DNS caching. This is what you do:

Enter about:config in the address bar.

Right-click on the list of properties and select New > Integer in the context menu.

Enter network.dnsCacheExpiration as the preference name and 0 as the integer value.

Add another integer preference; this time use network.dnsCacheEntries as the preference name and again 0 as the value.
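Alternatively, the same preferences can be set in a user.js file in your Firefox profile directory (assuming a standard profile; the file is read at startup and uses the pref names entered above):

```
user_pref("network.dnsCacheExpiration", 0);
user_pref("network.dnsCacheEntries", 0);
```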