Saturday 1 October 2016

Performing nightly build steps with a Jenkinsfile

Note: 2018-12-13: I have a new post with an updated version that works with declarative pipelines. 


Using a Jenkinsfile to control your Jenkins builds is an important part of the Jenkins 2 workflow for pipeline-as-code. A Jenkinsfile allows you to control what you build, where you build it, and all other aspects of your CI flow.

Typically when using pipeline-as-code your build would be triggered by a commit or push from your source control repository. However, there can still be times when you want your build to run on a schedule to perform a long running task e.g. static analysis or a full rebuild of your repository.

Running a nightly build

Jenkins supports running jobs using a trigger which can be controlled with a cron-like format. From a Jenkinsfile this can be set up using triggers:
  
def triggers = []
triggers << cron('H H(0-2) * * *')
properties([
    pipelineTriggers(triggers)
])
This will cause your build to trigger sometime between midnight and 2 a.m. every day. The above works correctly; however, it will cause a build to trigger for every branch in your repository. To limit it to a specific branch you can change it to:


def triggers = []
if (env.BRANCH_NAME == "master") {
    triggers << cron('H H(0-2) * * *')
}
properties([
    pipelineTriggers(triggers)
])
This will limit your scheduled build to only run on the master branch.

Limiting parts of the build to only run at night

Now that you have your build running every night, how do you limit the long running tasks to only trigger from the nightly build?

To do this you must examine the cause of the build. This involves getting the rawBuild data and searching all causes for a particular phrase in their short description. Below is a handy function I've written which can be used to get that information.

// check if the job was started by a timer
@NonCPS
def isJobStartedByTimer() {
    def startedByTimer = false
    try {
        def buildCauses = currentBuild.rawBuild.getCauses()
        for ( buildCause in buildCauses ) {
            if (buildCause != null) {
                def causeDescription = buildCause.getShortDescription()
                echo "shortDescription: ${causeDescription}"
                if (causeDescription.contains("Started by timer")) {
                    startedByTimer = true
                }
            }
        }
    } catch(theError) {
        echo "Error getting build cause"
    }

    return startedByTimer
}

Note: As this is a NonCPS function it must be run outside of a node block.
Note: To get this to work correctly you may have to go to Manage Jenkins > In-process Script Approval and approve the following signatures:

method groovy.lang.Binding getVariables
method hudson.model.Cause getShortDescription
method hudson.model.Run getCause java.lang.Class
method hudson.model.Run getCauses
method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild


When I run my build, I change my trigger section to:

def triggers = []
def startedByTimer = false
if (env.BRANCH_NAME == "master") {
    triggers << cron('H H(0-2) * * *')
    startedByTimer = isJobStartedByTimer()
}
properties([
    pipelineTriggers(triggers)
])

Then later in my build I can check whether the build is a timed build and run the additional analysis checks. For example:

if ( startedByTimer ) {
    node("analysis_server") {
        sh script: "make analysis"
    }
}

Thursday 9 June 2016

Multiple Independent Instances of Gnome Terminal

My typical workflow involves SSHing to multiple servers and switching between them. As a result of this I can often end up having 3+ terminals open into 6+ servers. This results in me often having 15+ terminal windows open on top of my usual browsers, file managers, etc.

I find that it helps me find and sort windows if I can group them based on the server I am logging into, instead of the default grouping of all terminals together. To accomplish this grouping you can use a feature in GNOME called the window class. This allows you to start applications with a particular WM_CLASS attribute so that the dock, launcher, and <ALT+TAB> menu group these windows together.
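
For example, with xterm (which I had been using up to this point) the class can be set directly on the command line; the server names here are just placeholders:

# each class gets its own group in the dock and <ALT+TAB> switcher
xterm -class ssh-server1 -e ssh server1 &
xterm -class ssh-server2 -e ssh server2 &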

In my previous installations of Ubuntu, I had been using either gnome-shell or xfce as the window manager and xterm as my terminal. With this combination I could easily group my terminals and had a handy script to automatically create a menu launcher. However, after upgrading to Ubuntu 16.04 I decided to investigate using gnome-terminal to replace xterm in my workflow.

My first attempt was to just change the above script to launch gnome-terminal instead of xterm (with a slight modification of arguments). I quickly found out that this didn't work, and some googling told me that the reason is that gnome-terminal launches a background process called gnome-terminal-server which in turn launches and controls the terminal windows. I was able to find a blog on how to launch multiple gnome-terminal-servers; however, this required sudo and/or a gnome restart.

After more investigation I found that in Ubuntu /usr/bin/gnome-terminal is a python script that wraps the startup of gnome-terminal and gnome-terminal-server. With a small change to the script, to add a "--class" flag when launching gnome-terminal-server, I was able to fix the issue of terminal windows not showing in multiple groups. The changed script is available from here and the changes are on lines 52 and 53 of the script (lines 2 and 3 below):

  
        ts = Gio.Subprocess.new(['/usr/lib/gnome-terminal/gnome-terminal-server',
                                 '--class',
                                 name,
                                 '--app-id',
                                 name],
                                Gio.SubprocessFlags.NONE)

Save the script as ~/bin/gnome-terminal-custom; then, to launch a terminal in its own class, you can call:
  
~/bin/gnome-terminal-custom --disable-factory --app-id com.sshmenu.mylauncher

Wrapping the above up in this script to create a desktop menu launcher, I can easily launch a new terminal that will automatically SSH into a server and group all terminals for that server together.
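
For reference, a minimal sketch of what such a launcher entry can look like, assuming the wrapper script above; the file name, Exec path, and ssh target are all illustrative:

# write a hypothetical launcher entry (adjust paths and names to suit)
cat > ~/.local/share/applications/sshmenu-myserver.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Terminal - myserver
Exec=/home/youruser/bin/gnome-terminal-custom --disable-factory --app-id com.sshmenu.mylauncher -e "ssh myserver"
StartupWMClass=com.sshmenu.mylauncher
EOF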

Gnome-shell dash showing multiple launchers

gnome-shell dock showing multiple grouped terminal windows

Friday 8 April 2016

Implementing Git Flow with Gitlab and Jenkins

As with my last posts I'm going to cover the updates to the build and test systems that I have been making. In my previous posts I covered using CMake, and moving from SVN to Git. In this post I'm going to cover the branching strategy that I implemented after moving to Git. I will cover the branching model chosen and introduce the tools used which allow continuous integration using that model.

Git branching strategies

A git branching strategy is a policy on how to use git for development. It allows you to establish a common workflow that all team members can use and helps make updates easier.

There are a number of branching strategies available, each offering various pros and cons depending on your release and development methodologies. We have implemented a slight variation on the git flow model. The main changes we have made are:
  • master is used as the current development branch. 
  • production is used as the release branch. 
  • All changes to master and production must be pushed to Gitlab.
  • All changes to master and production must be tested via Jenkins.
  • All changes to master and production must be code reviewed.
Git flow was chosen as our model because it allows us to release software versions in a consistent and robust manner while still allowing for other developers to continue to work on new features.
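
On the command line, the release step of this variation looks roughly like the following; in practice the merge itself goes through a Gitlab merge request as described below, and the version tag is purely illustrative:

# merge the tested development work into the release branch and tag it
git checkout production
git merge --no-ff master
git tag -a v1.2.0 -m "Release 1.2.0"
git push origin production --tags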

Some of the other models, such as github flow, are more aligned with software that follows a continuous deployment model, where all changes are pushed to production after testing instead of being released to customers.

Other tools

The tools we use to help with our workflow include Gitlab for the central git server and Jenkins for continuous integration.

Gitlab

Gitlab is a git repository management tool which includes user management, code review, merge requests, a wiki, and more. It could be considered similar to github and is quickly catching up on features and usability. However, the main area where it is ahead of github, and what caused me to choose it, is that it has an open source community edition which allows for free, easy-to-install, on-site installations.

Jenkins

Jenkins is an automation server that supports continuous integration and deployment. It is easily extensible and has many plugins to support most common build and integration tools. This allows it to easily integrate with build and source control tools to receive notifications and automatically update, build, and test software.

Build System

As previously mentioned this post is about building and testing a software project which is C++ based and uses CMake as the build tool. The core platforms to build for are RedHat based systems including RedHat 5, 6, and 7.

Configuration

In this section I will cover the configuration of the various servers. First, I will look at the Gitlab configuration and how the branches and hooks are configured. Secondly, I will cover the Jenkins configuration which builds and tests the software.

Gitlab

The installation of Gitlab is via the Gitlab CE omnibus edition Debian package on an Ubuntu server. This is standard and covered here.

User accounts, groups and the repository are created. In this example the group is example-group and the project is example-project.

Protected Branches

Two branches, master and production, are created and set to protected. As described in the Gitlab UI, protected branches are designed to:
  • prevent pushes from everybody except masters
  • prevent anyone from force pushing to the branch
  • prevent anyone from deleting the branch
This means that these branches are sure to exist and that only senior developers are allowed to push to them. It also enforces the use of merge requests for code review.

Gitlab Protected Branches

Webhook

Webhooks are configured to send an HTTP request to the URL http://<jenkins-host>/gitlab/build_now for push events. As described later, this will trigger the Gitlab Hook plugin in Jenkins.
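
A quick way to confirm that Jenkins is reachable at the hook URL from the Gitlab host is a manual request; this is only a connectivity check, as the real trigger is the push event payload that Gitlab sends:

# should get an HTTP response back from Jenkins if the endpoint is reachable
curl -i -X POST "http://<jenkins-host>/gitlab/build_now"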

Gitlab webhook

Finally, a deploy key for the Jenkins user is configured on the repository to allow the jenkins user to clone the code.
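
Generating the key pair for the jenkins user can be done as follows; the Jenkins home directory shown is the Debian/Ubuntu package default and may differ on your installation:

# create a key pair for the jenkins user and print the public half,
# which is then pasted into the repository's deploy keys in Gitlab
sudo -u jenkins mkdir -p /var/lib/jenkins/.ssh
sudo -u jenkins ssh-keygen -t rsa -b 4096 -N "" -f /var/lib/jenkins/.ssh/id_rsa
sudo -u jenkins cat /var/lib/jenkins/.ssh/id_rsa.pub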

Jenkins

For this example, Jenkins v1.607 is installed on one server. All builds are performed on the slave nodes buster, earth, and jupiter, where each node has a different version of RedHat installed.

The following are the main plugins used for this example:
  • Git Plugin - Allows you to use git as a SCM with Jenkins.
  • Gitlab Hook Plugin - Allows Jenkins to receive Gitlab web hooks and trigger builds.
Note: There is a Gitlab Merge Request Builder Plugin but I have not had a chance to configure it yet.

Gitlab Hook Plugin

To configure the Gitlab Hook Plugin go to Manage Jenkins > Configure System and find the section "Gitlab Web Hook"


Gitlab Web Hook configuration


Enable "automatic project creation" and set the project master branch to "master".

Combined with the Gitlab webhook configuration above, this will cause Jenkins to create a new project for every branch that is pushed to Gitlab and have it use the project associated with the master branch as a template.

This allows Jenkins to automatically build and test every commit on every branch that is pushed to Gitlab. By having every branch tested before merging, we ensure that all changes are working as expected and that they should be safe to merge into one of the core branches.

Git Plugin

Configuration of the git plugin is from the project configuration page as shown below:

Git Plugin project configuration

The option to "Clean before checkout" will run a git clean on the project repository to remove any temporary build files from any previous runs of the job.

Job configuration

The configuration of the build job and steps is a fairly standard multi-configuration project.

A configuration matrix is configured to run the job on each of the relevant build server slaves.

Build Slave Configuration Matrix


I then created an "Execute Shell" action which will call bash scripts that are in a jenkins folder as part of the repository, as shown below. This allows the actual build commands to be under source control as part of the repository instead of in the Jenkins database. It can also allow slight variations of the build per branch.

Jenkins Execute Shell Build Step
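
The Execute Shell step itself then only needs to call those scripts; a minimal sketch, assuming they live in a jenkins/ folder at the repository root:

#!/bin/bash -ex
# run the build, test, and packaging scripts kept under source control
./jenkins/build_step.sh
./jenkins/test_step.sh
./jenkins/rpm_step.sh
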
The contents of these scripts can be as simple or as complex as required. In this example the bash scripts are as follows:

build_step.sh runs CMake and make to compile the software.

#!/bin/bash -ex

mkdir build
cd build
cmake ..
make

test_step.sh runs CTest to make sure all unit tests pass.

#!/bin/bash -ex

cd build
ctest -V

rpm_step.sh uses CPack to create both an RPM and .tar.gz package.

#!/bin/bash -ex
cd build
make package  

Finally, after the build has completed, one of two things can happen:
  • On success the .rpm and .tar.gz build artifacts are archived for use by other jobs.
  • On failure an email is sent to the relevant developer group.
Post Build Actions

Branch Jobs

All branches will by default create a new job that is a clone of the above master job. These jobs will be called "example-project_<branch name>" and will build the software on every push to Gitlab.

This is also true for the production branch, where a job "example-project_production" is created. After creation it is possible to make changes to these jobs to add additional build steps required for customer-releasable software. For example, you could add a test to make sure that release notes are available, or you could copy the rpm file over SSH to a central release server for installation at customer sites.
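
The copy step itself can be as simple as an scp from the build directory; the server name and destination path here are only examples:

# copy the generated package to a central release server
scp build/*.rpm releases@release-server:/srv/releases/example-project/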

Merge Requests

As mentioned in the git flow description, we have added the requirement that all branches must be pushed to Gitlab.

The advantage of this is that it requires a merge request to be issued. These merge requests must be approved by another developer to ensure that:
  • They meet any coding standards.
  • Changes are of sufficient quality.
  • Changes are checked by at least 2 developers to help spread knowledge.

Summary

In this post I have shown how to use Gitlab and Jenkins to help implement the git flow branching strategy. This combination allows for continuous integration and code review, and helps enable the building of high quality software.




Tuesday 16 February 2016

SVN to Git - Only migrate some branches

I have recently been moving some core repositories from SVN to Git. These repositories typically use a standard layout of trunk, branches, and tags. If you are moving a repository with the standard layout, there are many tutorials to help with this; my particular favourite, which I have followed for most of our repositories, is this one from Atlassian.

However, when I went to move our final and largest repository, I realised that it includes over 40 legacy branches and 90 legacy tags. Many of these hadn't been required for a number of years. Instead of moving these branches and cluttering up the new Git repository, I decided to only move branches and tags that are currently active. Unfortunately, doing this requires a different approach from the online tutorials. The steps to achieve this and only move selected branches are outlined below.

Create an authors file as normal

An authors.txt file links the username used for SVN commits to an email address in the Git repository. This can be done using the svn-migration-scripts.jar from Atlassian; alternatively, the script below can be run on an svn checkout.

svn log -q | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > authors.txt

Once this file is created, you need to edit it so that the section between the angle brackets contains the email address of each of your users. For example, change

jbloggs = jbloggs <jbloggs>

into

jbloggs = jbloggs <jbloggs@your-company.com>

Clone the trunk branch of the repository

Once you have an authors.txt file you can now clone your repository using git-svn. For this method, your first clone should only include the trunk branch.

git svn clone -T trunk --authors-file=authors.txt svn-repo  

Under your .git/config you should now see a new svn section such as:

[svn-remote "svn"]
 url = svn://your/path
 fetch = trunk:refs/remotes/trunk
[svn]
    authorsfile = /path/to/authors.txt

Add branches and tags

After waiting for the initial clone to complete, you can add your branches and tags. To do this you should edit the .git/config file and add them to the svn-remote section.

For branches add them as follows:

 fetch = branches/branch1:refs/remotes/branches/branch1
 fetch = branches/branch2:refs/remotes/branches/branch2
 fetch = branches/branch3:refs/remotes/branches/branch3
 branches = branches/{branch1,branch2,branch3}:refs/remotes/branches/*

For tags add them as follows:

 tags = tags/{tag1,tag2,tag3}:refs/remotes/tags/*

This should result in a final section such as:

[svn-remote "svn"]
 url = svn://your/path
 fetch = trunk:refs/remotes/trunk
        fetch = branches/branch1:refs/remotes/branches/branch1
        fetch = branches/branch2:refs/remotes/branches/branch2
        fetch = branches/branch3:refs/remotes/branches/branch3
        branches = branches/{branch1,branch2,branch3}:refs/remotes/branches/*
        tags = tags/{tag1,tag2,tag3}:refs/remotes/tags/*

Note: the `fetch =` lines were recommended here. In some cases they may not be needed.

Fetch the tags and branches

git svn fetch

Once completed, all specified branches and tags should be available and can be viewed by running:

git branch -r
git tag 

Clean the branches

At this stage in the Atlassian stdlayout tutorial you would run the clean-git command to link the SVN branches to Git branches. However, the jar file provided by Atlassian does not support the branch syntax that we used. A patch is available, but I cannot confirm if it works because of issues recompiling the program. As a result, I manually pieced together the steps required to finish the repository move from the source of the program and various blogs.

To move the tags you can use the following script:

    #!/bin/sh
    # Based on https://github.com/haarg/convert-git-dbic

    set -u
    set -e

    git for-each-ref --format='%(refname)' refs/remotes/tags/* | while read r; do
        tag=${r#refs/remotes/tags/}
        sha1=$(git rev-parse "$r")

        commiterName="$(git show -s --pretty='format:%an' "$r")"
        commiterEmail="$(git show -s --pretty='format:%ae' "$r")"
        commitDate="$(git show -s --pretty='format:%ad' "$r")"

        # Print the commit subject and body separated by a newline
        git show -s --pretty='format:%s%n%n%b' "$r" | \
        env GIT_COMMITTER_EMAIL="$commiterEmail" GIT_COMMITTER_DATE="$commitDate" GIT_COMMITTER_NAME="$commiterName" \
        git tag -a -m "Tag: ${tag} sha1: ${sha1} using '${commiterName}', '${commiterEmail}' on '${commitDate}'" "$tag" "$sha1"

        # Remove the svn/tags/* ref
        git update-ref -d "$r"
    done

To move the branches you need this script:

    #!/bin/bash

    for branch in `git branch -a | grep remotes | grep -v HEAD | grep -v master | grep -v trunk`; do
       #git branch ${branch##*/} $branch
       #git branch ${branch#*remotes/origin/} $branch
       xbc=${branch#*remotes/branches/}
       echo "$xbc    --    $branch"
       createcmd="git branch -f $xbc $branch"

       #required since git v1.8.4
       trackcmd="git config branch.$xbc.merge $branch"

       eval $createcmd
       eval $trackcmd
    done

Share your code

Your local Git repository is now following the upstream SVN repository and you can share the repository with colleagues by pushing to a central server (e.g. Github, Bitbucket). You can use standard Git commands to do this:

git remote add origin <server>
git push -u origin --all
git push --tags


If you are able to close your SVN repository, all future commits can be made to the new Git repository. However, if you have to keep the SVN repository open for commits for a period of time, you will need to sync commits from SVN to Git, as sketched below.
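
A minimal sketch of that sync, assuming the git-svn setup described above where master follows trunk:

# pull any new SVN revisions, replay them onto master, then publish to the Git server
git svn fetch
git checkout master
git svn rebase
git push origin master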

Sunday 14 February 2016

CMake Examples Repository

Overview

CMake is a cross-platform open-source meta-build system which can build, test and package software. It can be used to support multiple native build environments including make, Apple's xcode, and Microsoft Visual Studio.

The cmake-examples repository includes some example CMake configurations which I have picked up while exploring its usage for various projects. The examples are laid out in a tutorial-like format. The first examples are very basic and slowly increase in complexity, drawing on previous examples to show more complex use cases.

These examples have been tested on Ubuntu 14.04 but should work under any Linux system that supports CMake.

Examples provided include:

Requirements

The basic requirements for most examples are:
  • CMake
  • A c++ compiler [defaults to gcc]
  • make
The easiest way to install the above on Ubuntu is as follows

$ sudo apt-get install build-essential
$ sudo apt-get install cmake 
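
Once those are installed, building any of the examples follows the usual out-of-source CMake pattern; the directory name below is illustrative:

$ cd 01-basic/A-hello-cmake
$ mkdir build && cd build
$ cmake ..
$ make
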
Some specific examples may require other tools including:
  • boost
    $ sudo apt-get install libboost-all-dev
  • protobuf
    $ sudo apt-get install libprotobuf-dev
    $ sudo apt-get install protobuf-compiler
  • cppcheck
    $ sudo apt-get install cppcheck 
  • clang
    $ sudo apt-get install clang-3.6 
  • Ninja Build
    $ sudo apt-get install ninja-build