Saturday 25 February 2017

It's the end of the line

When working on an install script recently, I came across one of those bugs that make you realise just how pedantic computer programming can be. I had a file that contains a list of yum package names and a script that read the file and did some work on them.



while read PKG; do
    yum install -y ${PKG}
done < /path/to/PackageList.txt

This file had been working fine as part of our installer for a number of iterations. As part of developing a new feature I added a new package to the list and saved the file. Thinking such a small change would just work, I committed it and pushed the changes. However, when running the script our tester complained that the new package was missing.

I sat down to debug the issue by checking that the package existed, that the script hadn't changed, and that I had the package name correct. As part of this debugging I re-saved the file, and the script worked again.

After scratching my head, getting a cup of tea, and doing some searching, I discovered that the POSIX standard defines a line as ending in a newline character, so a text file should end with one. My editor of choice for development is Sublime Text, which by default doesn't add a trailing newline when saving. Without that newline, the final read in the loop fails before the last package name is processed.
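The failure mode is easy to reproduce in a shell. A minimal sketch (the /tmp path is just for illustration):

```shell
# create a two-line package list with no trailing newline on the last line
printf 'pkg-one\npkg-two' > /tmp/PackageList.txt

while read PKG; do
    echo "read: ${PKG}"
done < /tmp/PackageList.txt
# prints only "read: pkg-one" -- read exits non-zero on the final,
# unterminated line, so the loop body never runs for pkg-two
```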

To turn it on, edit your Sublime Text preferences and set the following setting to true.

// Set to true to ensure the last line of the file ends in a 
// newline character when saving
"ensure_newline_at_eof_on_save": true

You may also see a symptom of this issue when committing files to source control: at the end of a diff you will see

\ No newline at end of file

Friday 10 February 2017

Inflation Problems

Despite 64-bit operating systems having been the default for over 10 years, some of the code I use is still compiled with "-m32" for 32-bit mode. The reasons for this are mostly a lack of management will and developer time. When I got some time between projects, I decided to update the code so that we can release in both 32-bit and 64-bit modes.

Upgrading the code to be ready for 64-bit mode proved to be a slow task with many chances for error. I hope that by showing these errors and some common fixes, I can help others update their code as well.

Common Errors

int or unsigned int instead of size_t

On a 32-bit system this isn't really a problem, as all three types are 32-bit integers, so you won't get errors. However, it's not portable: on a 64-bit Linux system, size_t is a 64-bit (unsigned) integer. This can cause issues with comparisons and overflow. For example:

string s = "some string";

unsigned int pos = s.find("st");
if( pos == string::npos) {
    // code that can never be hit
}

The above causes issues because pos can never equal string::npos: unsigned int is too small to hold the value of string::npos.

This issue can be caught with the compiler flag -Wtype-limits, or preferably -Werror=type-limits to make the compilation fail with the following error:

error: comparison is always false due to limited range of data type [-Werror=type-limits]

As mentioned this can also cause overflow issues, for example:

unsigned int pos = string::npos;

This causes an overflow because string::npos is too big to fit in a 32-bit integer.

Again this can be caught by a compiler flag, in this case -Woverflow. And again I recommend using -Werror=overflow to cause a compilation error.

Wrong printf arguments

The logger in our codebase uses printf-style formatting for log lines. As a result, the most common warning in our 64-bit compile was format-related.

The most common cause was the assumption, described above, that size_t is a 32-bit integer. Below is an example of the warning:

warning: format '%u' expects argument of type 'unsigned int', but argument 2 has type 'size_t {aka long unsigned int}' [-Wformat=]
         TRACE(("Insert at position [%u]", pos));

The fix I used for this warning was the %zu format specifier for size_t. It was introduced in the C99 standard and should be available in gcc and clang; however, it may not be available in some older versions of the Visual Studio compiler.

TRACE(("Insert at position [%zu]", pos));

I have also seen the above error with other types, for example time_t, uintptr_t, and long. If you are unsure what the printf specifier for a type is, you can use the helpful macros from the C "inttypes.h" header (<cinttypes> if using C++11 or later). It provides macros with the printf specifiers for various system typedefs.

Note: before C++11 you must define __STDC_FORMAT_MACROS before including this header. For example, to print a uintptr_t you can use the macro PRIuPTR:

#include <inttypes.h>

bool MyList::insert(uintptr_t value)
{
    TRACE(("value [%" PRIuPTR "]", value));
    // ...
}

Assuming the size of a type is always the same

Again, this is somewhat related to the previous points. I saw a number of errors where it was assumed that a particular type was always the same length on different platforms.

The two most common were pointers and long.

In our code, pointer-length issues often manifested as the printf argument error, e.g. using %08x instead of %p, but I also saw some cases where a pointer was cast to an int to pass it through a particular function. This caused it to lose precision on a 64-bit system.

In the case of long, it was often assumed that long is always a 32-bit integer. I came across a number of errors caused by bitwise operations that assumed a long was 32 bits. For example:

long offset = getSomeValue();
if ( offset & (1 << 31) )

This causes errors because long is not guaranteed to be a 32-bit integer. If you need to guarantee a size, use the correct typedef for that sized integer from the C "stdint.h" header (<cstdint> for C++11), e.g.

#include <stdint.h>

int32_t i32 = get32bitInt();
int64_t i64 = get64bitInt();

These can then be used in conjunction with the PRIxxx macros from inttypes.h if you need to log or format them.

Even with stdint.h there were some ambiguous types being cast to and from other types. An example is time_t, whose underlying type is implementation-defined rather than fixed by the standard. After some googling and testing, I discovered that on Linux it has the same size as a long (4 bytes on a 32-bit arch, 8 bytes on 64-bit). So when we needed to pass a time_t value and couldn't use the time_t typedef, I defaulted to using a long.

At the end of the article I show a very simple test program and its output on RedHat Linux. This shows how the sizes of types can change depending on compilation mode.

Using the wrong type with malloc

This issue is not actually related to the 64-bit port, but its symptoms only manifested when we ran the code in 64-bit mode.

There were a couple of blocks of code that used malloc to allocate memory for an array, and these used the wrong type in the sizeof argument. For example, some code for a hash table included:

typedef struct HT {
    int num_entries;
    int size;
    HTEntry **table;
} HT;

Then, to initialize the table:

HT *newtable = NULL;
newtable = (HT*)malloc(sizeof(HT));
newtable->size = size;

newtable->table = (HTEntry**)malloc(sizeof(int)*size);

This had been deployed and run error-free for a number of years in our 32-bit software release. However, as the size of an int and the size of a pointer differ on 64-bit systems, it caused errors there.

The correct code is:

newtable->table = (HTEntry**)malloc(sizeof(HTEntry*)*size);

Unfortunately I was unable to catch this with any compiler warnings, and it caused a crash at run time. I had also run some static analyzers over the code, which missed it.
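One way to make this class of bug harder to write is to take sizeof from the pointed-to expression instead of naming a type. A sketch of that idiom (the createTable helper and its error handling are hypothetical, not from our codebase):

```cpp
#include <cstdlib>

struct HTEntry;  // entries are opaque here, as in the original snippet

typedef struct HT {
    int num_entries;
    int size;
    HTEntry **table;
} HT;

HT *createTable(int size)
{
    HT *newtable = (HT*)malloc(sizeof(*newtable));
    if (newtable == NULL)
        return NULL;

    newtable->num_entries = 0;
    newtable->size = size;

    // sizeof(*newtable->table) is always sizeof(HTEntry*), so the
    // element size can never drift out of sync with the declaration
    newtable->table = (HTEntry**)malloc(sizeof(*newtable->table) * size);
    if (newtable->table == NULL) {
        free(newtable);
        return NULL;
    }
    return newtable;
}
```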


The task of updating your code to make it 64-bit compatible is slow; however, it can be made easier if you take care to listen to your tools. This includes enabling compiler warnings, promoting some warnings to errors, and using static analysis tools. These will help catch many of the common errors that can occur.

As for the benefit of updating, it will be worth it because:
  • It improves compatibility. As most OSes and software projects are now released in 64-bit mode by default, there is less chance of finding an incompatible package.
  • It allows access to new CPU instructions and registers. Some initial tests have shown that certain sections of code can be up to 10% faster when compiled in 64-bit mode.
  • It improves the code. Keeping the code compiling and working in both environments may lead to more careful programming.


Test program to check common sizes

In order to check sizes, I created a simple test program that will print out the sizes for some common types:


#include <iostream>
#include <cstddef>
#include <cstdint>

using namespace std;

int main()
{
    cout << "sizeof(int) : " << sizeof(int) << std::endl;
    cout << "sizeof(unsigned long) : " << sizeof(unsigned long) << std::endl;
    cout << "sizeof(long int) : " << sizeof(long int) << std::endl;
    cout << "sizeof(long long int) : " << sizeof(long long int) << std::endl;
    cout << "sizeof(int32_t) : " << sizeof(int32_t) << std::endl;
    cout << "sizeof(int64_t) : " << sizeof(int64_t) << std::endl;
    cout << "sizeof(double) : " << sizeof(double) << std::endl;
    cout << "sizeof(float) : " << sizeof(float) << std::endl;
    cout << "sizeof(size_t) : " << sizeof(size_t) << std::endl;
    cout << "sizeof(intptr_t) : " << sizeof(intptr_t) << std::endl;
    cout << "sizeof(uintptr_t) : " << sizeof(uintptr_t) << std::endl;
    cout << "sizeof(void*) : " << sizeof(void*) << std::endl;
    cout << "sizeof(char) : " << sizeof(char) << std::endl;
    return 0;
}
To compile and run, you can use:

$> g++ sizes.cpp -m32 -o t32.sizes
$> ./t32.sizes 
sizeof(int) : 4
sizeof(unsigned long) : 4
sizeof(long int) : 4
sizeof(long long int) : 8
sizeof(int32_t) : 4
sizeof(int64_t) : 8
sizeof(double) : 8
sizeof(float) : 4
sizeof(size_t) : 4
sizeof(intptr_t) : 4
sizeof(uintptr_t) : 4
sizeof(void*) : 4
sizeof(char) : 1

$> g++ sizes.cpp -o t64.sizes
$> ./t64.sizes 
sizeof(int) : 4
sizeof(unsigned long) : 8
sizeof(long int) : 8
sizeof(long long int) : 8
sizeof(int32_t) : 4
sizeof(int64_t) : 8
sizeof(double) : 8
sizeof(float) : 4
sizeof(size_t) : 8
sizeof(intptr_t) : 8
sizeof(uintptr_t) : 8
sizeof(void*) : 8
sizeof(char) : 1

As you can see, a number of types have different sizes depending on the compilation mode. These results will be the same on typical Linux systems; however, they aren't guaranteed across different operating systems.

Thursday 2 February 2017

Build Clang & LLVM tooling on RHEL 7

Clang is a C (and C++) front-end for the LLVM compiler. It provides a fast compiler with really good error messages and great support for writing code-analysis and formatting tools, both official ones and third-party tools built on top of the Clang tooling (and libclang libraries). A good talk by Chandler Carruth about some of these tools and the future direction of Clang tooling is available on YouTube.

Installing Clang

Redhat 7

On RedHat 7, Clang is not included in the official repositories; however, an older version (v3.4) is included in the EPEL repository.

If you are unable to use the EPEL repository, or want a newer version of Clang, the script below can be used to fetch and install v3.9.1 of LLVM, Clang, the Clang extra tools, and the include-what-you-use tool.

mkdir clang_llvm_391_build
cd clang_llvm_391_build
svn co llvm
cd llvm/tools
svn co clang
cd ../..
cd llvm/tools/clang/tools
svn co extra
cd ../../../..
cd llvm/projects
svn co compiler-rt
cd ../..
#cd llvm/projects
#svn co libcxx
#cd ../..
cd llvm/tools/clang/tools
git clone
cd include-what-you-use
git checkout clang_3.9
cd ..
echo "" >> CMakeLists.txt
echo "add_subdirectory(include-what-you-use)" >> CMakeLists.txt
cd ../../../..
mkdir build
cd build
cmake -G Ninja -DCMAKE_INSTALL_PREFIX=/opt/software/clang -DCMAKE_BUILD_TYPE=Release ../llvm
ninja
mkdir -p /opt/software/clang
cmake -DCMAKE_INSTALL_PREFIX=/opt/software/clang -P cmake_install.cmake

As you can see, this installs the software to /opt/software/clang. If you want to install to a different location, change the CMAKE_INSTALL_PREFIX value in the two cmake commands.

The script doesn't build the version of the C++ standard library (libcxx) that ships with Clang, as I had compiler errors when building it with the default version of gcc (v4.8.5) available on RHEL 7.3.

Redhat 6

For RHEL 6, there is also an EPEL repository with v3.4 available. However, if you want a later version of Clang, you have some hoops to jump through.

This is because Clang requires a C++11 compiler, and Clang v3.9.1, mentioned above, requires at least v4.8 of gcc. The version of gcc available on RHEL 6 is too old, so you have to manually build a later version before you can build Clang. You can find instructions for doing so in this blog post.

Using Clang


To build your software using Clang with CMake, you should override the CMAKE_C_COMPILER and CMAKE_CXX_COMPILER variables. Using the install location from my script above, this would be done with:

$ cmake -DCMAKE_C_COMPILER=/opt/software/clang/bin/clang -DCMAKE_CXX_COMPILER=/opt/software/clang/bin/clang++ ..
$ make

You can see more details in my cmake-examples GitHub repository.

Similar methods of overriding the C and C++ compilers should work with other build tools, e.g. setting the CC and CXX environment variables with Makefiles.

Using Clang Static Analyzer

Using the Clang Static Analyzer is easy too, as it includes a tool, scan-build, which scans your source code at the same time as it builds it:

$ /opt/software/clang/bin/scan-build cmake ..
$ /opt/software/clang/bin/scan-build make

On RedHat the above will use gcc to build your software while scanning it with the Clang Static Analyzer.

To get extra coverage for your code, I also recommend using clang to compile it. This can be done at the same time as the static analysis by passing the --use-cc and --use-c++ flags to scan-build:

$ /opt/software/clang/bin/scan-build --use-cc=/opt/software/clang/bin/clang --use-c++=/opt/software/clang/bin/clang++ cmake ..
$ /opt/software/clang/bin/scan-build --use-cc=/opt/software/clang/bin/clang --use-c++=/opt/software/clang/bin/clang++ make

Advantages of having Clang Available

The main reason I have for using Clang on RedHat is to get access to its tooling and static analyzer.

However, as a side effect, it also makes a second compiler available, which gives you more chances of finding errors. For example, when compiling with Clang I got the following warning:

In file included from /path/to/myclass.cpp:22:
/path/to/logger.h:1:9: warning: '_LIBMYLIB_LOGGER_H_' is used as a header guard here, followed by
      #define of a different macro [-Wheader-guard]
/path/to/logger.h:2:9: note: '_LINMYLIB_LOGGER_H_' is defined here; did you mean '_LIBMYLIB_LOGGER_H_'?
6 warnings generated.

This did not cause any errors or warnings with my version of GCC, and while it didn't cause any issues in practice (because I only included that header once), it could potentially have led to a later error.