Trifork Blog

Posts Tagged ‘linux’

Server side applications in Apple’s Swift

May 2nd, 2016 by
(http://blog.trifork.com/2016/05/02/server-side-applications-in-apples-swift/)

In 2014, Apple announced the release of Swift, a new programming language for all their platforms. Their programming language of choice on iOS and OS X has always been Objective-C, a language which is a bit dated (it predates C++) and which, having had new features (and syntaxes) bolted onto it every few years, carries quite a bit of baggage. It seems I wasn’t the only one with this opinion, as the release of Swift was greeted with great enthusiasm and has been adopted very rapidly.

Swift combines all the features that are fashionable in a general purpose language today, without the feeling that they were bolted on after the fact. While building an iOS client for our customer Gerimedica in Swift, I found myself wishing I could use this language on the server side as well as in the client. At WWDC 2015, Apple announced the intention to open source the language and release a Linux version, so it looked like it could become a reality. Since December 2015, the sources have been available on GitHub, and builds for OS X and Ubuntu are made available roughly twice per month.

PerfectLib

A number of groups and companies saw an opportunity to be among the first with something that was obviously going to be big. One of the first was PerfectSoft, a startup that aims to be the one big framework for all your server side development in Swift. They started building their framework as soon as the open source release of Swift was announced, and have been advertising their product everywhere. Because they started development before anyone outside Cupertino had a good idea of what the release would look like, it only worked on OS X at first, and it didn’t use the Swift Package Manager, the intended default build and dependency management tool for Swift. At the time, the framework compiled to one big binary that you had to include in your build manually. They have a beautiful website and good documentation, but it just wasn’t working when I tried it. I intend to try this framework out again at a later date.

IBM

The biggest player (other than Apple) to openly jump on the Swift bandwagon is IBM. As soon as the open source release of Swift was announced, IBM announced the Swift Sandbox, their Swift-based version of Google’s Go Playground. It is a web-based REPL that can be shared online by sharing a URL. Cool, but not extremely useful, as unlike Go, Swift already comes with a REPL. The real significance of this is not the Swift Sandbox itself, but the message that IBM is interested in this technology and intends to be involved. IBM isn’t the kind of company to back technologies just because they like them, so they either see an opportunity or a potential strategic interest. At the moment, IBM’s Swift-related activities seem to be associated with their PaaS solution BlueMix, so they are likely working on the Swift/IBM version of Google’s App Engine for Go. IBM offers its own web framework for Swift: Kitura. Kitura turns out to be less than trivial to install and for now somewhat bare bones, but as this is IBM, it is worth dedicating another blog post to it at a later date. Also check out their overview of the most popular, most active and most essential open source projects on GitHub for Swift.


Developing .NET software on Linux with Mono

February 19th, 2015 by
(http://blog.trifork.com/2015/02/19/developing-dotnet-software-on-linux-with-mono/)

The motivation

The obvious question here is: why would you want to develop .NET software on Linux, or for Linux? At the risk of sounding like I’m throwing buzzwords around, I will say it is because Linux dominates the cloud completely. Many cloud-related technologies, such as Docker, Mesos and others, build on Linux as a base. Sure, it is possible to run Windows in the cloud one way or another, but it is really hard to match the flexibility of Linux, especially when running more than just a few instances.

Quite recently, Microsoft announced the open-sourcing of .NET Core, paving the way for a truly cross-platform .NET implementation. It has already been possible to run a lot of .NET software on Linux and OS X for quite some time on an independent .NET implementation called Mono, and now Microsoft is saying that they will work with the Mono project on a common code base that will eventually become .NET Core. In fact, Microsoft has been close to Xamarin, the company behind Mono, for a while now, so this step is not that surprising.

But how usable is Mono right now? That is what I set out to find out in my little experiment.
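
As a point of reference, this is what the most basic Mono workflow on Linux looks like. It is just a minimal sketch, assuming the mono-devel package (or your distribution's equivalent) is installed; it is not part of the experiment itself:

cat > hello.cs <<'EOF'
using System;

class Hello
{
    static void Main()
    {
        Console.WriteLine("Hello from Mono on Linux");
    }
}
EOF

mcs hello.cs        # compile with the Mono C# compiler, producing hello.exe
mono hello.exe      # run the resulting assembly on the Mono runtime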

How to manage your Docker runtime config with Vagrant

July 20th, 2014 by
(http://blog.trifork.com/2014/07/20/how-to-manage-your-docker-runtime-config-with-vagrant/)

In this short blog I will show you how to manage a Docker container using Vagrant. Since version 1.6, Vagrant supports Docker as a provider, next to the existing providers for VirtualBox and AWS. With the new Docker support, Vagrant boxes can be started much faster. In turn, Vagrant makes Docker easier to use, since its runtime configuration can be stored in the Vagrantfile. You won’t have to add runtime parameters on the command line every time you want to start a container. Read on if you’d like to see how I create a Vagrantfile for an existing Docker image from Quinten’s Docker cookbooks collection.
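
To sketch the idea (the nginx image and the names below are just placeholders, not the cookbook image from this post): the container's runtime configuration lives in the Vagrantfile, and a plain vagrant up brings the container up.

cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "nginx"            # placeholder image
    d.ports = ["8080:80"]        # host:container port mapping
    d.name  = "vagrant-nginx"    # placeholder container name
  end
end
EOF

vagrant up --provider=docker     # starts the container with the stored configuration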


Docker From A Distance – The Remote API

December 24th, 2013 by
(http://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api/)

Many people use Docker from the command line to build images, run containers and manage Docker on their machine. However, you can also run the same Docker commands via its remote REST API. In this blog I will guide you through Docker’s remote API using curl, while pointing out a few details and tools that you might not know about. We will remotely search and pull an Elasticsearch image, run a container and clean up after ourselves.
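
To give a small taste of what that looks like, here is a sketch of a few such calls. It assumes the Docker daemon has been started with a TCP listener (for example -H tcp://127.0.0.1:4243); by default it only listens on a unix socket.

curl -s http://127.0.0.1:4243/version                                    # daemon and API version
curl -s "http://127.0.0.1:4243/images/search?term=elasticsearch"         # like 'docker search'
curl -s -X POST "http://127.0.0.1:4243/images/create?fromImage=busybox"  # like 'docker pull'
curl -s http://127.0.0.1:4243/containers/json                            # like 'docker ps'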


NLUUG DevOps Conference 2013 – Reliability, clouds and the UNIX way

November 26th, 2013 by
(http://blog.trifork.com/2013/11/26/nluug-devops-conference-2013-reliability-clouds-and-the-unix-way/)

Last Thursday I attended the NLUUG DevOps conference in Bunnik, near Utrecht. The NLUUG is the Dutch UNIX user group. In this blog I will summarize the talks I attended and some fun things I learned, and I will discuss my own talk about continuous integration at a large organization.

Using Docker to efficiently create multiple tomcat instances

August 15th, 2013 by
(http://blog.trifork.com/2013/08/15/using-docker-to-efficiently-create-multiple-tomcat-instances/)

In my previous blog article I gave a short introduction to Docker (“an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere”). In this article we’ll check out how to create an image for Tomcat 7, with the Java 7 JDK as a dependency.

So, let’s go ahead and do some ‘coding’. First, you need to install Docker. Instructions can be found here. I already mentioned you need a Linux system with a modern kernel, so if you happen to be a Mac or Windows user, there are instructions on the linked pages on how to use Vagrant to easily set up a virtual machine (VM) to use. For now we’ll work locally, but once you start installing servers you might find the Chef project for installing Docker useful as well.

As a first step after installation, let’s pick the first example from the Docker getting started page and create an Ubuntu 12.04 container, with completely separated processes, its own file system and its own network interface (but with a network connection via the host), and have it print “hello world”. Do this by running:

docker run ubuntu /bin/echo hello world

Cool huh? You probably just ran something on a different OS than that of your own machine or (in case you’re on Windows/Mac) the VM in which Docker is running! In this command ubuntu defines the image (found automatically as it is one of the standard images supplied by Docker). The run command creates an instance of the image (a container), feeding it /bin/echo hello world as the command to execute.
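
A couple more runs of the same pattern make the structure clearer. These are just sketches reusing the standard ubuntu image with different commands:

docker run ubuntu cat /etc/lsb-release   # print the release info of the container's Ubuntu
docker run -i -t ubuntu /bin/bash        # open an interactive shell inside a fresh container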


Next step in virtualization: Docker, lightweight containers

August 8th, 2013 by
(http://blog.trifork.com/2013/08/08/next-step-in-virtualization-docker-lightweight-containers/)

Lately, I have been experimenting with Docker, a new open source technology based on Linux containers (LXC). Docker is most easily compared to Virtual Machines (VMs). Both technologies allow you to create multiple distinct virtual environments which can be run on the same physical machine (host). Docker also shares characteristics with configuration management tools like Chef and Ansible: you can create build files (a Dockerfile) containing a few lines of script code with which an environment can be set up easily. It’s also a deployment tool, as you can simply pull and start images (e.g. some-webapp-2.1) from a private or public repository on any machine you’d like, be it a colleague’s laptop or a test or production server.
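
To make that concrete, here is a minimal sketch of such a build file, together with the commands to build an image from it and start a container. The image name and the installed package are placeholders, not something from this post.

cat > Dockerfile <<'EOF'
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y curl
CMD ["/bin/bash"]
EOF

docker build -t some-webapp-demo .    # build an image from the Dockerfile
docker run -i -t some-webapp-demo     # start a container from that image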

But you’re already using all those other tools, so why would you need Docker? In this blog entry, I’d like to give you an answer to that question and provide a short introduction to Docker. In my next blog entry (coming soon) I’ll dive into using Docker, specifically covering how to set up Tomcat servers.


Bash – A few commands to use again and again

March 28th, 2013 by
(http://blog.trifork.com/2013/03/28/bash-a-few-commands-to-use-again-and-again/)

Introduction

These days I spend a lot of time in the bash shell. I use it for ad-hoc scripting and for driving several Linux boxes. In my current project we set up a continuous delivery environment and migrate code onto it. I lift code from CVS to SVN, mavenize Ant builds and funnel artifacts into Nexus. One script I wrote determines whether a jar that was checked into a CVS source tree exists in Nexus or not. This check can be done via the Nexus REST API. More on this script at the end of the blog. But first let’s have a look at a few bash commands that I use all the time in day-to-day work, in no particular order.

  1. find
    Find searches files recursively in the current directory.

    $ find . -name '*.jar'

    This command lists all jars in the current directory, recursively. We use this command to figure out if a source tree has jars. If this is the case we add them to Nexus and to the pom as part of the migration from Ant to Maven.

    $ find . -name '*.jar' -exec sha1sum {} \;

    Find combined with exec is very powerful. This command lists the jars and computes the sha1sum for each of them. The sha1sum command is put directly after the -exec flag. The {} will be replaced with each jar that is found. The \; is an escaped semicolon that tells find where the command ends.

  2. for
    For loops are often the basis of my shell scripts. I start with a for loop that just echoes some values to the terminal so I can check if it works and then go from there.


    $ for i in $(cat items.txt); do echo $i; done;

    The for loop keywords should be followed by either a newline or a ‘;’. When the for loop is OK I will add more commands between the do and done keywords. Note that I could have also used find -exec, but if I have a script that is more than a one-liner I prefer a for loop for readability.

  3. tr
    Transliterate. You can use this to get rid of certain characters or replace them, piecewise.

    $ echo 'Com_Acme_Library' | tr '_A-Z' '.a-z'

    Lowercases and replaces underscores with dots.

  4. awk

    $ echo 'one two three' | awk '{ print $2, $3 }'

    Prints the second and third columns of the input. Awk is of course a full-blown programming language, but I tend to use small snippets like this a lot for selecting columns from the output of another command.

  5. sed
    Stream EDitor. A complete tool on its own, yet I use it mostly for small substitutions.


    $ echo 'foo bar baz' | sed -e 's/foo/quux/'

    Replaces foo with quux.

  6. xargs
    Run a command on every line of input on standard in.


    $ cat jars.txt | xargs -n1 sha1sum

    Run sha1sum on every line in the file. This is another alternative to a for loop or find -exec. I use this when I have a long pipeline of commands in a one-liner and want to process every line of the end result.

  7. grep
    Here are some grep features you might not know:

    $ grep -A3 -B3 keyword data.txt

    This will list the matches of keyword in data.txt, including 3 lines after (-A3) and 3 lines before (-B3) each match.

    $ grep -v keyword data.txt

    Inverse match. Match everything except keyword.

  8. sort
    Sort is another command often used at the end of a pipeline. For numerical sorting use

    $ sort -n

  9. Reverse search (CTRL-R)
    This one isn’t a real command but it’s really useful. Instead of typing history and looking up a previous command, press CTRL-R, start typing and have bash autocomplete from your history. Press escape to quit reverse search mode. When you press CTRL-R your prompt will look like this:

    (reverse-i-search)`':

  10. !!
    Pronounced ‘bang-bang’. Repeats the previous command. Here is the cool thing:

    $ !!:s/foo/bar

    This repeats the previous command, but with foo replaced by bar. Useful if you entered a long command with a typo. Instead of manually replacing one of the arguments replace it this way.

Bash script – checking artifacts in Nexus

Below is the script I talked about. It loops over every jar and dll file in the current directory, calls Nexus via curl and optionally outputs a pom dependency snippet. It also adds a status column at the end of the output, either an OK or a KO, which makes the output easy to grep for further processing.

#!/bin/bash

ok=0
jars=0

for jar in $(find "$(pwd)" -name '*.jar' -o -name '*.dll' 2>/dev/null)
do
    ((jars+=1))

    output=$(basename "$jar")-pom.xml
    sha1=$(sha1sum "$jar" | awk '{print $1}')

    response=$(curl -s "http://oss.sonatype.org/service/local/data_index?sha1=$sha1")

    if [[ $response =~ groupId ]]; then
        ((ok+=1))
        echo "findjars $jar OK"
        echo "<dependency>" >> "$output"
        echo "$response" | grep groupId -A3 -m1 >> "$output"
        echo "</dependency>" >> "$output"
    else
        echo "findjars $jar KO"
    fi
done

if [[ $jars -gt 0 ]]; then
    echo "findjars Found $ok/$jars jars/dlls. See -pom.xml file for XML snippet"
    exit 1
fi
    

Conclusions

It is amazing what you can do in terms of scripting when you combine just these commands via pipes and redirection! It’s like Pareto’s law of shell scripting: 20% of the features of bash and related tools provide 80% of the results. The basis of most scripts can be a for loop. Inside the for loop the resulting data can be transliterated, grepped, replaced by sed and finally run through another program via xargs.
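
As a contrived sketch of that pattern (nothing from the script above, just the commands from this blog chained together): a for loop feeds tr, and the result is filtered with grep, rewritten with sed and handed to xargs.

for f in $(find . -name '*.jar'); do
    basename "$f" | tr 'A-Z' 'a-z'       # transliterate each file name to lowercase
done | grep -v sources | sed -e 's/\.jar$//' | xargs -n1 echo artifact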

References

The Bash Cookbook is a great overview of how to solve common problems using bash. It also teaches good bash coding style.

Simulating a bad network for testing

July 3rd, 2012 by
(http://blog.trifork.com/2012/07/03/simulating-a-bad-network-for-testing/)

In a development environment, and often in the test and QA environments as well, we are thankfully blessed with a network that is for all intents and purposes infinitely fast, infinitely reliable and not shared with anyone else. Sometimes this causes you to miss a bug that only becomes apparent once your application has been released into the wild, where it has to deal with latency, packet loss and protocol violations.

To reproduce such bugs, it would be nice to have a network that is bad in a precisely controlled way. On a Linux machine, you can simulate one with netem. There is a wide range of possibilities with this tool, most of which are more useful to a network engineer than to a programmer or software tester, but I’ll give some simple examples, and demonstrate their effect with mtr.

First let’s take a look at the normal state of the network:

$ mtr -c 100 --report orange11.nl
HOST: cartman                    Loss%  Snt   Last   Avg  Best  Wrst StDev
1.|-- lobby                      0.0%   100    0.2   0.2   0.1   0.2   0.0
2.|-- backup1.orange11.nl        0.0%   100    2.4   4.0   2.0   9.1   1.7
3.|-- 10.0.0.30                  0.0%   100    5.0   4.0   2.2  10.4   1.6

That’s not too bad. Now we’ll simulate an average packet delay of 100 ms with a variability of 50ms, and a packet loss of 5%:

$ sudo tc qdisc add dev eth0 root netem delay 100ms 50ms loss 5%
$ mtr -c 100 --report orange11.nl
HOST: cartman                    Loss%  Snt   Last   Avg  Best  Wrst StDev
1.|-- lobby                      8.0%   100  129.3  96.6  50.2 147.8  26.0
2.|-- backup1.orange11.nl        3.0%   100  120.1 103.9  54.4 157.5  27.8
3.|-- 10.0.0.30                  4.0%   100   90.3 103.4  53.9 154.3  29.3

Pretty much as we would expect, the best ping times are around 50ms, the worst around 150ms, with an average around 100ms. The packet loss is a bit more random than I expected, but it should average out around 5% if we left mtr running for much longer than 100 cycles.

I can recommend trying out whatever project you are working on with a packet delay of 500ms, to see if strange things happen in a reasonable worst case. It is important to realize that this tool only shapes the traffic that we’re sending, not receiving, so if the networked application is running on a different server, only your uploads and ACK packets should be affected.
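
For that kind of test a single rule is enough. As a minimal sketch, assuming eth0 is the interface you want to shape:

$ sudo tc qdisc add dev eth0 root netem delay 500ms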

You don’t have to reboot to get your network back to normal:

$ sudo tc qdisc del dev eth0 root

A great deal more can be done to shape your network traffic for better or for worse, such as rate control, prioritizing one destination over another, or introducing packet corruption, duplication or reordering, but these are outside the scope of this post.

A nice tutorial with examples can be found at linuxfoundation.org, and if you are interested in reading more about the background of network traffic control in the Linux kernel, I can recommend the Linux Advanced Routing & Traffic Control HOWTO.

Let me know how you get on won’t you?

January Newsletter

January 25th, 2012 by
(http://blog.trifork.com/2012/01/25/january-newsletter/)

Once again, the festive season is behind us and we all start the year, as always, with a bunch of New Year’s resolutions. At Dutchworks we have made a few of our own too, and we’re totally committed to achieving them. Our goals are to:

  1. Recruit the top talent in the industry and engage them with colleagues, customers & projects they can be proud of
  2. Achieve maximum project delivery reliability by striving for top quality & a rock solid delivery process
  3. Explore our key business domains (even more) and gain maximum exposure and experience in these markets.

Watch this space, as we’ll keep you posted on how we are getting on as the year progresses.
