Wednesday, 18 April 2018

Rate Limiting

At work I was recently tasked with integrating a rate limiting solution into our API. The purpose of the rate limiter was to cap the number of requests a user can make against the API over a specific period of time. This can help:

  • Prevent spam attacks.
  • Stop spikes in traffic from overloading our servers and degrading performance.
  • Help identify and stop misbehaving clients.

A user in the context of rate limiting is any entity that you wish to limit. Users can be identified by any unique identifier. Some examples include:
  • IP Address
  • User ID
  • Device Identifier
  • Customer Name (for B2B customers)
  • URL. E.g. only X number of POST requests to the /withdrawal URL
Or any combination of the above.

The main requirement for the solution was that our front end services run on multiple containers in a distributed manner, so the rate limiting must work across servers. This involves using an external key-value store (redis) to hold the rate limiting information. As a result, many of the details described below are directly related to how redis works, but they can normally be adapted to other cache services as required.

For the remainder of this blog post I will discuss the various rate limiting algorithms I investigated, our chosen algorithm and some additional implementation details.

Rate Limiting Algorithms

Fixed Window Counters

Fixed window counters are the simplest method of rate limiting. They work by maintaining a counter of the number of times an entity has hit the endpoint within a fixed window. If the user makes more than the allocated number of requests, then new requests are rejected. At the start of the next window the counter is reset to 0 and the user may make additional requests. A window is typically associated with a time period and the counter is reset at the start of that period. Some examples include:
  • Day window: Reset at the same time every day, e.g. midnight
  • Hour window: Reset on the hour, e.g. 12:00:00, 13:00:00 ...
  • 15 minute window: Reset every 15 minutes e.g. 12:00:00, 12:15:00 ...
The key for the cache service would be {id}_{windowTimestamp}, where id could be the user id of the user and windowTimestamp would be the timestamp at the start of the window.

 In the following example the user is allowed to make 3 requests in a 60 second window.

Time     | Action             | Counter Value            | Additional
12:00:05 | user makes request | user1_1515120000: 0 → 1  | Key is user1_1515120000
12:00:15 | user makes request | user1_1515120000: 1 → 2  |
12:01:01 | user makes request | user1_1515120100: 0 → 1  | Counter is reset as the 60 second period is over. New key is user1_1515120100
12:01:10 | user makes request | user1_1515120100: 1 → 2  |
12:01:40 | user makes request | user1_1515120100: 2 → 3  |
12:01:50 | user makes request | user1_1515120100: 3 → 4  | User is rejected as they are over the limit
12:02:20 | user makes request | user1_1515120200: 0 → 1  | Allowed as the counter is reset and the new key is user1_1515120200

Note: old keys can be set to automatically expire a fixed period after their window has passed. This may require 2 calls to redis: an INCR followed by an EXPIRE command.
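As a rough illustration, a fixed window counter along these lines could be sketched with the redis-py client as follows. The key format follows the {id}_{windowTimestamp} scheme above; the limit, window size and key names are illustrative and error handling is omitted.

import time

import redis  # redis-py client

r = redis.Redis()  # assumes a redis instance on localhost

LIMIT = 3    # allowed requests per window
WINDOW = 60  # window length in seconds

def allow_request(user_id):
    # Key is {id}_{windowTimestamp}, e.g. user1_1515120000
    window_start = int(time.time()) // WINDOW * WINDOW
    key = "%s_%d" % (user_id, window_start)

    # INCR and EXPIRE are pipelined so only one round trip is needed
    pipe = r.pipeline()
    pipe.incr(key)
    pipe.expire(key, WINDOW * 2)  # let old keys clean themselves up
    count, _ = pipe.execute()

    return count <= LIMIT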

Pros:

The advantages of this approach are:
  • Simple to implement
  • INCR is an atomic redis command.
  • Low memory requirement.

Cons:

  • It can allow a user to go above their allowed quota in a rolling 60 second period. For example, in the above limit of 3 requests per 60 seconds, if the user made 3 requests at 12:00:59 and a further 3 requests at 12:01:00, this would allow the user to make 6 requests in a 2 second period. 
  • It may also cause bursts of traffic across many clients. For example, if every client has used their quota for the previous minute, they may retry until they are allowed. This could cause many users to hit the server in the first second of the new window.

Sliding Log

Sliding log rate limiting involves storing a history of requests for each user along with the associated timestamp. As new requests come in, you count the number of requests made within the allowed period. The log can be stored in a sorted set per user, where both the member and the score are the timestamp of the request. Entries older than the allowed period are dropped.

To recreate the previous example where the user is allowed 3 requests per 60 second window:

Time     | Action             | Set Value (user1)                                                 | Additional
12:00:05 | user makes request | {15151200050000}                                                  |
12:00:15 | user makes request | {15151200050000, 15151200150000}                                  |
12:01:01 | user makes request | {15151200050000, 15151200150000, 15151201010000}                  |
12:01:10 | user makes request | {15151200150000, 15151201010000, 15151201100000}                  | User is still under the threshold as the initial request (15151200050000) was made more than 60 seconds ago and has been dropped
12:01:40 | user makes request | {15151201010000, 15151201100000, 15151201400000}                  |
12:01:50 | user makes request | {15151201010000, 15151201100000, 15151201400000, 15151201500000}  | User is rejected as they have made 4 requests in the last 60 seconds
12:02:20 | user makes request | {15151201400000, 15151201500000, 15151202200000}                  | Request is allowed as the user is below the threshold

Pros:

  • Allows high precision on the limits
  • Avoids bursts of data at the start of a window

Cons:

  • It requires a set entry per request, so it has a large memory requirement.
  • Failed requests may still be added to the set. This could mean that even rate limited requests that perform no action can keep a user blocked.
  • May require multiple commands to perform the update and expire of old keys. 
    • This can be made atomic by using a redis MULTI command or lua script.
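For illustration, a sliding log along these lines could be sketched with redis-py and a sorted set per user. The pipeline wraps the commands in the MULTI/EXEC block mentioned above; key names and limits are illustrative.

import time

import redis

r = redis.Redis()

LIMIT = 3    # allowed requests
WINDOW = 60  # seconds

def allow_request(user_id):
    now = time.time()
    key = "ratelimit:%s" % user_id

    pipe = r.pipeline()                          # MULTI/EXEC block
    pipe.zremrangebyscore(key, 0, now - WINDOW)  # drop entries older than the window
    pipe.zadd(key, {str(now): now})              # member and score are both the timestamp
    pipe.zcard(key)                              # number of requests still in the window
    pipe.expire(key, WINDOW)                     # clean up idle users
    _, _, count, _ = pipe.execute()

    return count <= LIMIT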

Sliding Window

In a sliding window rate limiter, each user is given a limit to be used within a time frame. This is broken up into smaller buckets, which allow us to control the way the limit is consumed over a more evenly distributed window. As requests come in they are counted against the current bucket, and as old buckets expire their tokens become available again. You can add more precision by using smaller buckets to even out the flow of traffic. The data would be stored in a hash per user to allow access to the individual buckets, with the field for each bucket being the timestamp at the start of that bucket.

A simplified version allowing 3 requests per 60 seconds in 15 second buckets is:

Time     | Action             | Buckets (user1)                                | Additional
12:00:05 | user makes request | {1515120000: 1}                                |
12:00:15 | user makes request | {1515120000: 1, 1515120015: 1}                 |
12:01:01 | user makes request | {1515120015: 1, 1515120100: 1}                 | Bucket 1515120000 has expired so is deleted
12:01:10 | user makes request | {1515120015: 1, 1515120100: 2}                 |
12:01:40 | user makes request | {1515120100: 2, 1515120130: 1}                 | Bucket 1515120015 expires
12:01:50 | user makes request | {1515120100: 2, 1515120130: 1, 1515120145: 1}  | User is rejected as they have made 4 requests in the last 60 seconds
12:02:20 | user makes request | {1515120130: 1, 1515120145: 1, 1515120215: 1}  | Bucket 1515120100 expires. Request is allowed as the user is below the threshold

Pros:

  • Allows varying degrees of precision depending on bucket time frames.
  • Lower memory constraints than sliding log.
  • Avoids bursts of data at the start of a new window.

Cons:

  • The update is not atomic, so it would require a redis lua script to control the key counting / expiry.
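One way to sketch this in Python with redis-py is shown below, using a hash of buckets per user. Note that, as per the con above, the check and the increment here are separate commands and therefore not atomic; a production version would move this logic into a lua script. Names, bucket size and limits are illustrative.

import time

import redis

r = redis.Redis()

LIMIT = 3    # allowed requests per window
WINDOW = 60  # window length in seconds
BUCKET = 15  # bucket length in seconds

def allow_request(user_id):
    now = int(time.time())
    key = "ratelimit:%s" % user_id
    current_bucket = now // BUCKET * BUCKET

    # Remove buckets that have fallen out of the window and count the rest.
    buckets = r.hgetall(key)
    expired = [field for field in buckets if int(field) <= now - WINDOW]
    if expired:
        r.hdel(key, *expired)
    used = sum(int(count) for field, count in buckets.items() if int(field) > now - WINDOW)

    if used >= LIMIT:
        return False

    # Record this request against the current bucket.
    r.hincrby(key, current_bucket, 1)
    r.expire(key, WINDOW)
    return True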

Sliding Tail

Sliding tail rate limiting is an alternative implementation of the sliding window algorithm. It can also be thought of as fixed window rate limiting with history. In this case, we store a count for the current fixed window and for the previous window. When a request comes in we calculate usage based on the count of requests in the current window plus a weighted count of the previous window. For example, if the current window is 25% of the way through its period, then we use 75% of the previous window's count plus the current window's count. This weighted count gives the number of tokens currently used by the user.

The key for the information is the same as in the fixed window example, {id}_{windowTimestamp}. A server can do an atomic INCR for the current window's key and a GET of the previous window's key. To improve efficiency, after the first request in a window the server could cache the previous window's count, as it will no longer change.

To redo the example of 3 requests per 60 seconds:

Time     | Action             | Counter Values                                | Additional
12:00:05 | user makes request | user1_1515120000: 0 → 1                       | No previous window so only the current window is used
12:00:15 | user makes request | user1_1515120000: 1 → 2                       |
12:01:01 | user makes request | user1_1515120000: 2, user1_1515120100: 0 → 1  | Weighted count is (2 * ((60-1)/60)) + 1 ≈ 2.9
12:01:10 | user makes request | user1_1515120000: 2, user1_1515120100: 1 → 2  | Weighted count is (2 * ((60-10)/60)) + 2 ≈ 3.6
12:01:40 | user makes request | user1_1515120000: 2, user1_1515120100: 2 → 3  | Weighted count is (2 * ((60-40)/60)) + 3 ≈ 3.6
12:01:50 | user makes request | user1_1515120000: 2, user1_1515120100: 3 → 4  | Weighted count is (2 * ((60-50)/60)) + 4 ≈ 4.3. User is above the threshold so is rejected
12:02:20 | user makes request | user1_1515120100: 4, user1_1515120200: 0 → 1  | Now in a new window, so user1_1515120000 is dropped and the weighted count uses user1_1515120100. Weighted count is (4 * ((60-20)/60)) + 1 ≈ 3.6

In the above example you can see that the effect is the same as the fixed window for normal usage. However, if the user sent any more requests before 12:02:31, they would be rejected.

Pros:

  • Low memory usage
  • Increment is atomic, although there are 2 extra commands
    • EXPIRE of the current key
    • GET of the previous key
  • Prevents bursts of data at the start of a new window
  • Could be tuned to be more or less permissive based on weighting of the previous window
  • Could be tuned based on whether to round up or down

Cons:

  • Only an approximation of the last window as it assumes there was a constant rate of traffic.
  • New clients could send bursts of traffic at their first window
  • If using an atomic increment, rejected requests still add to the count for the window
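A rough sketch of the sliding tail approach with redis-py is shown below. The INCR and EXPIRE of the current key and the GET of the previous key are pipelined; how strictly the weighted count is compared against the limit, and whether it is rounded up or down, are the tuning decisions mentioned in the pros above. Key names and limits are illustrative.

import time

import redis

r = redis.Redis()

LIMIT = 3    # allowed requests per window
WINDOW = 60  # window length in seconds

def allow_request(user_id):
    now = int(time.time())
    current_window = now // WINDOW * WINDOW
    previous_window = current_window - WINDOW

    current_key = "%s_%d" % (user_id, current_window)
    previous_key = "%s_%d" % (user_id, previous_window)

    pipe = r.pipeline()
    pipe.incr(current_key)                # atomic count for the current window
    pipe.expire(current_key, WINDOW * 2)  # keep it around for one more window
    pipe.get(previous_key)                # count from the previous window
    current_count, _, previous_count = pipe.execute()
    previous_count = int(previous_count or 0)

    # Weight the previous window by how much of the current window is left.
    elapsed = now - current_window
    weighted = previous_count * ((WINDOW - elapsed) / WINDOW) + current_count

    return weighted <= LIMIT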

Leaky Bucket

The leaky bucket as a meter algorithm (related to the token bucket) describes how an API can be limited to avoid burstiness. Each user has a count of tokens which is incremented as requests come in. If the counter is above the threshold (the bucket size), then additional requests are discarded. To "empty" the bucket and give the user more tokens, we decrement the counter by a set amount every period until it reaches zero.

An implementation could keep a hash per user which includes the token count and the timestamp of the last time the bucket was leaked. As each request comes in, it performs 2 operations:
  1. Decrement the token count based on the time elapsed since the last leak and the steady leak rate
  2. Add a token for this request
If the resulting number is below the bucket size, the request is allowed.

In the following example, we allow a bucket size of 3 and a refresh rate of 1 per 20 seconds:

Time     | Action             | Bucket (user1)             | Additional
12:00:05 | user makes request | tokens: 1, ts: 1515120000  | Add a token to the bucket
12:00:15 | user makes request | tokens: 2, ts: 1515120000  |
12:01:01 | user makes request | tokens: 1, ts: 1515120100  | The previous ts was 1515120000, so we decrement the token count by up to 3, then add this request's token
12:01:10 | user makes request | tokens: 2, ts: 1515120100  |
12:01:40 | user makes request | tokens: 1, ts: 1515120140  | The previous ts was 1515120100, so we decrement the token count by up to 2, then add this request's token
12:01:50 | user makes request | tokens: 2, ts: 1515120140  |
12:02:20 | user makes request | tokens: 1, ts: 1515120220  | The previous ts was 1515120140, so we decrement the token count by up to 2, then add this request's token

Pros:

  • Memory efficient
  • Prevents bursts of traffic

Cons:

  • Requires a redis lock or lua script to update.
  • Harder to tune requirements and implement.
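For completeness, a (non-atomic) sketch of the leaky bucket state handling with redis-py might look like the following. As the con above notes, the read-modify-write would need a lua script or a lock to be safe under concurrent requests; key names, bucket size and leak rate are illustrative, and hset with mapping= assumes a recent redis-py.

import time

import redis

r = redis.Redis()

BUCKET_SIZE = 3   # maximum tokens the bucket can hold
LEAK_PERIOD = 20  # one token leaks away every 20 seconds

def allow_request(user_id):
    key = "bucket:%s" % user_id
    now = int(time.time())

    state = r.hgetall(key)
    tokens = int(state.get(b"tokens", 0))
    last_leak = int(state.get(b"ts", now))

    # Leak tokens for the time that has passed since the last update.
    leaked = (now - last_leak) // LEAK_PERIOD
    if leaked > 0:
        tokens = max(0, tokens - leaked)
        last_leak += leaked * LEAK_PERIOD

    if tokens >= BUCKET_SIZE:
        return False

    # Add this request's token and store the new state.
    r.hset(key, mapping={"tokens": tokens + 1, "ts": last_leak})
    return True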

Implementation

For our purposes we chose the sliding tail algorithm, as it provides some protection against bursts of traffic while having low processing and memory usage. It is also easy to implement using standard redis commands that allow for atomic operations.

Additional Details

HTTP Responses

When rejecting a user who is over the limit, we chose to return a 429 Too Many Requests response.

In addition to the 429 rejection, all responses include HTTP headers to inform the user what their limit is and how many tokens they have remaining. There appears to be no consensus on which HTTP headers to use for this information. From looking at the responses of other APIs we decided to go with:
  • X-Rate-Limit-Limit - to describe the limit the user has for this request
  • X-Rate-Limit-Remaining - to describe the number of tokens left for the user.
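For example, a rejected request might receive a response along the following lines (the header names are as chosen above; the values are illustrative):

HTTP/1.1 429 Too Many Requests
X-Rate-Limit-Limit: 100
X-Rate-Limit-Remaining: 0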

Tokens Used for each Request

As some operations are related but have different processing requirements, we decided to allow a configurable number of tokens per request type. This means that for certain endpoints we can use the same bucket of tokens to rate limit different requests. For example, a request to GET /user would use 1 token from the {id}:user:{timestamp} bucket, while a request to POST /user would remove 2 tokens from the same bucket.
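A minimal sketch of how such a cost table could be expressed is below; the endpoints and costs are just the examples above, and the lookup helper is an assumption about how a service might wire this up.

# Tokens charged per (method, endpoint); both requests below draw from
# the same {id}:user:{timestamp} bucket.
TOKEN_COSTS = {
    ("GET", "/user"): 1,
    ("POST", "/user"): 2,
}

def tokens_for(method, path):
    return TOKEN_COSTS.get((method, path), 1)  # default to 1 token per request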



Thursday, 5 October 2017

Reverse engineering the EPH Controls Ember API

I have recently had my home heating upgraded with a new gas boiler and heating controls. While looking at the controls, my plumber recommended the EPH Controls thermostat and controller. I was leaning more towards a system like Hive or Honeywell but decided to give this one a go as it ticked most of the boxes I needed. It is a 2 zone system (heating and hot water) that can be controlled via the thermostat locally in the house or remotely via an app.

The missing components of the EPH Ember system were integration with other home automation systems and voice control systems (e.g. Google Assistant). I decided it would be a good learning experience to reverse engineer the API and add it to Home Assistant, which I use for some other home automation controls. I have completed the first half of my task, reverse engineering the API, and from this have created a python library which can be used to control / monitor the system. This blog post will explain the steps I used to get the API details from their app / server.

The basic steps used to do this were:
  1. Figure out how the app and server communicate
  2. Capture that traffic 
  3. Decode and examine the traffic

Figure out how the app and server communicate

The very first thing to do is to find out how the app and server communicate. To do this I installed tPacketCapture on my Android phone and sniffed the phone's network traffic. Once I had the packet capture, I transferred it to my laptop and opened it with Wireshark. From this I could see a number of different connections going out from my phone. Once I eliminated the traffic going to Google, I could see network traffic going from my phone to the IP address 40.127.170.98 on port 443.


As 443 is the port used by HTTPS, I could be fairly confident that that is the protocol used between server and client. HTTPS meant it was secure (good for my privacy) but (slightly) harder to decipher. However, as HTTPS is widely used, there are plenty of tools available to sniff the connection when you are in control of the device and network.

Capture the traffic

The tool I chose for this was Charles Proxy, an HTTP(S) proxy which allows you to capture and display HTTP(S) traffic. Once installed, you should configure it to do SSL Proxying, get the root certificate, and install the certificate on your device. Some of these steps are explained here. In later steps you will need the IP address of the computer running Charles and the port that it is listening on (Proxy > Proxy Settings > Port).

On your Android phone, you need to edit the settings of the wifi network you are connected to (the same network your Charles instance is running on) so that it uses a proxy pointing at your Charles install. To do this on Android, follow these steps:
  • Open your wifi settings.
  • Long click on the wifi network name and select modify network.
  • Select Advanced Options
  • Select Proxy > Manual
  • Enter the IP address of the server running charles
  • Enter the port charles is listening on (8888)
  • Save 
Your phone should now route all HTTP(S) traffic through your Charles instance. Once you launch the Ember app you should see traffic going to its server at https://ember.ephcontrols.com.

Decode and examine the traffic

As mentioned earlier, this proves that the API uses HTTPS to communicate between the app and the server. Charles can show you all the information about the HTTP traffic, including URLs, headers, and data.


Looking at the folder hierarchy in Charles shows the URLs that are used. These include URLs such as:
  • api/Account/RefreshToken
  • api/Home/GetHomeById?homeId=1234
By looking at the contents of these requests I can see that the server and client communicate by sending JSON encoded messages. As JSON is a widely used, text based format, it was then just a matter of reading the Charles logs to decipher the full protocol.
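As a rough illustration of the request style (not the exact contract — the authentication scheme and response handling here are assumptions for illustration only; the captured calls are documented on github as mentioned below):

import requests

BASE_URL = "https://ember.ephcontrols.com"

def get_home_by_id(token, home_id):
    # One of the observed endpoints; the Bearer token header is an assumed
    # placeholder for whatever the RefreshToken call actually returns.
    response = requests.get(
        "%s/api/Home/GetHomeById" % BASE_URL,
        params={"homeId": home_id},
        headers={"Authorization": "Bearer %s" % token},
    )
    response.raise_for_status()
    return response.json()  # the API exchanges JSON encoded messages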

By running various scenarios in the app (e.g. login, check temperature, boost heating) I was able to capture most API calls and have documented them on github.



Sunday, 6 August 2017

CUDA in Docker

To use CUDA within docker and take advantage of parallel programming on your GPU, you need to expose the GPU inside the docker container. To do so you can use the nvidia-docker extension, which exposes your NVIDIA graphics card and drivers to the container.

Requirements

NVIDIA Driver 

The first major requirement is to make sure you are using an NVIDIA graphics card and the NVIDIA proprietary driver. On Ubuntu you can enable this from Software & Updates:



If using a laptop (or an intel chip that includes integrated graphics) you may also need to make sure that you have selected the NVIDIA graphics card as the one in use. Once the driver is installed you can select the card using the NVIDIA X Server Settings application:



If you had to change either of the above, restart your computer for them to take effect.

You can tell if your NVIDIA card is running by using the nvidia-smi command line tool:

$ nvidia-smi      
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 660M    Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   62C    P0    N/A /  N/A |    236MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
$ 

Docker and NVIDIA Docker

Obviously to use docker you must first install it. Follow the instructions on their site to download and install the latest version.

After installing docker, you should then install the nvidia-docker extension. Installers and instructions are available on the linked github page.

Running nvidia-docker

Once everything is installed you should be able to use the GPU in your container:

$ nvidia-docker run -it --rm nvidia/cuda nvidia-smi       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 660M    Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   62C    P0    N/A /  N/A |    236MiB /  1999MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
$ 
  
As you can see this is the same as the above output when running the tool outside of the container on my native host.

Choosing Type of OS

There are a number of pre-built images available using different versions of CUDA and popular OS / Containers, including:
  • Ubuntu 16.04
  • Ubuntu 14.04
  • CentOS 7
  • CentOS 6
It is also possible to view the dockerfiles used to create these images and extract the required values to re-create your own images. For example, this is the dockerfile for Ubuntu 16.04 and CUDA 8.


Clang on RHEL 7 via RPM

In a previous post I discussed how to build CLang and LLVM from source on CentOS 7. That approach works, but the problems with it are:
  1. It takes a long time to recompile for each user.
  2. It results in a large install.
As a result of these drawbacks, I wanted to build an RPM that could give a minimal install. Some investigation showed that the latest RPM I could find for CentOS 7 was on EPEL and was for CLang 3.4. However, fedora does have an RPM for CLang 3.9. Taking the SRPM for this, I was able to modify it to make an RPM for CentOS 7.

Preparation

CMake

To start with we need a version of CMake greater than v3.4.3. The easiest way to do this is via the epel repository:

$ yum install epel-release
$ yum install cmake3
$ cmake3 --version
cmake3 version 3.6.3 

Rpmbuild

You need to install rpmbuild:

$ yum install rpm-build

Spec files

The spec files are provided below and, as mentioned, are a modified version of the spec files from the fedora LLVM / CLang v3.9.1 SRPMs.

LLVM

# Components enabled if supported by target architecture:
%ifarch %ix86 x86_64
  %bcond_without gold
%else
  %bcond_with gold
%endif

Name:       llvm
Version:    3.9.1
Release:    1%{?dist}
Summary:    The Low Level Virtual Machine

License:    NCSA
URL:        http://llvm.org
Source0:    http://llvm.org/releases/%{version}/%{name}-%{version}.src.tar.xz

Source100:  llvm-config.h

BuildRequires:  cmake3
BuildRequires:  zlib-devel
BuildRequires:  libffi-devel
BuildRequires:  ncurses-devel
%if %{with gold}
BuildRequires:  binutils-devel
%endif
BuildRequires:  libstdc++-static

Requires:   %{name}-libs%{?_isa} = %{version}-%{release}

%description
LLVM is a compiler infrastructure designed for compile-time, link-time,
runtime, and idle-time optimization of programs from arbitrary programming
languages. The compiler infrastructure includes mirror sets of programming
tools as well as libraries with equivalent functionality.

%package devel
Summary:    Libraries and header files for LLVM
Requires:   %{name}%{?_isa} = %{version}-%{release}
Requires(posttrans): %{_sbindir}/alternatives

%description devel
This package contains library and header files needed to develop new native
programs that use the LLVM infrastructure.

%package doc
Summary:    Documentation for LLVM
BuildArch:  noarch
Requires:   %{name} = %{version}-%{release}

%description doc
Documentation for the LLVM compiler infrastructure.

%package libs
Summary:    LLVM shared libraries

%description libs
Shared libraries for the LLVM compiler infrastructure.

%package static
Summary:    LLVM static libraries

%description static
Static libraries for the LLVM compiler infrastructure.

%prep
%autosetup -n %{name}-%{version}.src

%build
mkdir -p _build
cd _build

# force off shared libs as cmake macros turns it on.
%cmake3 .. \
    -DBUILD_SHARED_LIBS:BOOL=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_SHARED_LINKER_FLAGS="-Wl,-Bsymbolic -static-libstdc++" \
%if 0%{?__isa_bits} == 64
    -DLLVM_LIBDIR_SUFFIX=64 \
%else
    -DLLVM_LIBDIR_SUFFIX= \
%endif
    \
    -DLLVM_TARGETS_TO_BUILD="X86;AMDGPU;PowerPC;NVPTX;SystemZ;AArch64;ARM;Mips;BPF" \
    -DLLVM_ENABLE_LIBCXX:BOOL=OFF \
    -DLLVM_ENABLE_ZLIB:BOOL=ON \
    -DLLVM_ENABLE_FFI:BOOL=ON \
    -DLLVM_ENABLE_RTTI:BOOL=ON \
%if %{with gold}
    -DLLVM_BINUTILS_INCDIR=%{_includedir} \
%endif
    \
    -DLLVM_BUILD_RUNTIME:BOOL=ON \
    \
    -DLLVM_INCLUDE_TOOLS:BOOL=ON \
    -DLLVM_BUILD_TOOLS:BOOL=ON \
    \
    -DLLVM_INCLUDE_TESTS:BOOL=ON \
    -DLLVM_BUILD_TESTS:BOOL=ON \
    \
    -DLLVM_INCLUDE_EXAMPLES:BOOL=ON \
    -DLLVM_BUILD_EXAMPLES:BOOL=OFF \
    \
    -DLLVM_INCLUDE_UTILS:BOOL=ON \
    -DLLVM_INSTALL_UTILS:BOOL=OFF \
    \
    -DLLVM_INCLUDE_DOCS:BOOL=ON \
    -DLLVM_BUILD_DOCS:BOOL=OFF \
    -DLLVM_ENABLE_DOXYGEN:BOOL=OFF \
    \
    -DLLVM_BUILD_LLVM_DYLIB:BOOL=ON \
    -DLLVM_DYLIB_EXPORT_ALL:BOOL=ON \
    -DLLVM_LINK_LLVM_DYLIB:BOOL=ON \
    -DLLVM_BUILD_EXTERNAL_COMPILER_RT:BOOL=ON \
    -DLLVM_INSTALL_TOOLCHAIN_ONLY:BOOL=OFF 

make %{?_smp_mflags}

%install
cd _build
make install DESTDIR=%{buildroot}

# fix multi-lib
mv -v %{buildroot}%{_bindir}/llvm-config{,-%{__isa_bits}}
mv -v %{buildroot}%{_includedir}/llvm/Config/llvm-config{,-%{__isa_bits}}.h
install -m 0644 %{SOURCE100} %{buildroot}%{_includedir}/llvm/Config/llvm-config.h

%check
cd _build
make check-all || :

%post libs -p /sbin/ldconfig
%postun libs -p /sbin/ldconfig

%post devel
%{_sbindir}/update-alternatives --install %{_bindir}/llvm-config llvm-config %{_bindir}/llvm-config-%{__isa_bits} %{__isa_bits}

%postun devel
[ $1 -eq 0 ] && %{_sbindir}/update-alternatives --remove llvm-config %{_bindir}/llvm-config-%{__isa_bits}

%files
%{_bindir}/*
%exclude %{_bindir}/llvm-config-%{__isa_bits}

%files libs
%{_libdir}/BugpointPasses.so
%{_libdir}/LLVMHello.so
%if %{with gold}
%{_libdir}/LLVMgold.so
%endif
%{_libdir}/libLLVM-3.9*.so
%{_libdir}/libLTO.so

%files devel
%{_bindir}/llvm-config-%{__isa_bits}
%{_includedir}/llvm
%{_includedir}/llvm-c
%{_libdir}/libLLVM.so
%{_libdir}/cmake/llvm

%files static
%{_libdir}/*.a

%changelog
* Sun Aug 06 2017 Thom Troy  - 3.9.1-1
- First build - spec a modified version of fedora25 SRPM

Compiler-rt


%ifarch s390 s390x
# only limited set of libs available on s390(x) and the existing ones (stats, ubsan) don't provide debuginfo
%global debug_package %{nil}
%endif

Name:       compiler-rt
Version:    3.9.1
Release:    1%{?dist}
Summary:    LLVM "compiler-rt" runtime libraries

License:    NCSA or MIT
URL:        http://llvm.org
Source0:    http://llvm.org/releases/%{version}/%{name}-%{version}.src.tar.xz

BuildRequires:  cmake3
BuildRequires:  python
BuildRequires:  llvm-devel = %{version}
BuildRequires:  llvm-static = %{version}

%description
The compiler-rt project is a part of the LLVM project. It provides
implementation of the low-level target-specific hooks required by
code generation, sanitizer runtimes and profiling library for code
instrumentation, and Blocks C language extension.

%prep
%setup -q -n %{name}-%{version}.src

%build
mkdir -p _build
cd _build
%cmake3 .. \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DLLVM_CONFIG_PATH:FILEPATH=%{_bindir}/llvm-config-%{__isa_bits} \
    \
%if 0%{?__isa_bits} == 64
        -DLLVM_LIBDIR_SUFFIX=64 \
%else
        -DLLVM_LIBDIR_SUFFIX= \
%endif
    -DCOMPILER_RT_INCLUDE_TESTS:BOOL=OFF # could be on?

make %{?_smp_mflags}

%install
cd _build
make install DESTDIR=%{buildroot}

# move sanitizer lists to better place
mkdir -p %{buildroot}%{_libdir}/clang/%{version}
for file in asan_blacklist.txt msan_blacklist.txt dfsan_blacklist.txt cfi_blacklist.txt dfsan_abilist.txt; do
    mv -v %{buildroot}%{_prefix}/${file} %{buildroot}%{_libdir}/clang/%{version}/ || :
done

# move sanitizer libs to better place
mkdir -p %{buildroot}%{_libdir}/clang/%{version}/lib
mv -v %{buildroot}%{_prefix}/lib/linux/libclang_rt* %{buildroot}%{_libdir}/clang/%{version}/lib
mkdir -p %{buildroot}%{_libdir}/clang/%{version}/lib/linux/
pushd %{buildroot}%{_libdir}/clang/%{version}/lib
for i in *.a *.syms *.so; do
    ln -s ../$i linux/$i
done

%check
cd _build
#make check-all

%files
%{_includedir}/*
%{_libdir}/clang/%{version}

%changelog
* Fri Jul 14 2017 Thom Troy  - 3.9.1-1
- First build - spec a modified version of fedora25 SRPM


CLang


Name:       clang
Version:    3.9.1
Release:    1%{?dist}
Summary:    A C language family front-end for LLVM

License:    NCSA
URL:        http://llvm.org
Source0:    http://llvm.org/releases/%{version}/cfe-%{version}.src.tar.xz

Source100:  clang-config.h

BuildRequires:  cmake3
BuildRequires:  llvm-devel = %{version}
BuildRequires:  libxml2-devel
BuildRequires:  llvm-static = %{version}
BuildRequires:  perl-generators
BuildRequires:  ncurses-devel

Requires:   %{name}-libs%{?_isa} = %{version}-%{release}

# clang requires gcc, clang++ requires libstdc++-devel
# - https://bugzilla.redhat.com/show_bug.cgi?id=1021645
# - https://bugzilla.redhat.com/show_bug.cgi?id=1158594
Requires:   libstdc++-devel
Requires:   gcc-c++


%description
clang: noun
    1. A loud, resonant, metallic sound.
    2. The strident call of a crane or goose.
    3. C-language family front-end toolkit.

The goal of the Clang project is to create a new C, C++, Objective C
and Objective C++ front-end for the LLVM compiler. Its tools are built
as libraries and designed to be loosely-coupled and extensible.

%package libs
Summary: Runtime library for clang
Requires: compiler-rt%{?_isa} >= %{version}

%description libs
Runtime library for clang.

%package devel
Summary: Development header files for clang.
Requires: %{name}%{?_isa} = %{version}-%{release}

%description devel
Development header files for clang.

%package analyzer
Summary:    A source code analysis framework
License:    NCSA and MIT
BuildArch:  noarch
Requires:   %{name} = %{version}-%{release}
# not picked up automatically since files are currently not installed in
# standard Python hierarchies yet
Requires:   python

%description analyzer
The Clang Static Analyzer consists of both a source code analysis
framework and a standalone tool that finds bugs in C and Objective-C
programs. The standalone tool is invoked from the command-line, and is
intended to run in tandem with a build of a project or code base.

%prep
%setup -q -n cfe-%{version}.src
%build
mkdir -p _build
cd _build
%cmake3 .. \
    -DLLVM_LINK_LLVM_DYLIB:BOOL=ON \
    -DCMAKE_BUILD_TYPE=RelWithDebInfo \
    -DLLVM_CONFIG:FILEPATH=/usr/bin/llvm-config-%{__isa_bits} \
    \
    -DCLANG_ENABLE_ARCMT:BOOL=ON \
    -DCLANG_ENABLE_STATIC_ANALYZER:BOOL=ON \
    -DCLANG_INCLUDE_DOCS:BOOL=ON \
    -DCLANG_INCLUDE_TESTS:BOOL=ON \
    -DCLANG_PLUGIN_SUPPORT:BOOL=ON \
    -DENABLE_LINKER_BUILD_ID:BOOL=ON \
    \
    -DCLANG_BUILD_EXAMPLES:BOOL=OFF \
%if 0%{?__isa_bits} == 64
        -DLLVM_LIBDIR_SUFFIX=64 \
%else
        -DLLVM_LIBDIR_SUFFIX= \
%endif
    -DLIB_SUFFIX=

make %{?_smp_mflags}

%install
cd _build
make install DESTDIR=%{buildroot}

# multilib fix
mv -v %{buildroot}%{_includedir}/clang/Config/config{,-%{__isa_bits}}.h
install -m 0644 %{SOURCE100} %{buildroot}%{_includedir}/clang/Config/config.h

# remove git integration
rm -vf %{buildroot}%{_bindir}/git-clang-format
# remove editor integrations (bbedit, sublime, emacs, vim)
rm -vf %{buildroot}%{_datadir}/clang/clang-format-bbedit.applescript
rm -vf %{buildroot}%{_datadir}/clang/clang-format-sublime.py*
rm -vf %{buildroot}%{_datadir}/clang/clang-format.el
rm -vf %{buildroot}%{_datadir}/clang/clang-format.py*
# remove diff reformatter
rm -vf %{buildroot}%{_datadir}/clang/clang-format-diff.py*

%check
# requires lit.py from LLVM utilities
#cd _build
#make check-all

%files
%{_libdir}/clang/
%{_bindir}/clang*
%{_bindir}/c-index-test

%files libs
%{_libdir}/*.so.*
%{_libdir}/*.so

%files devel
%{_includedir}/clang/
%{_includedir}/clang-c/
%{_libdir}/cmake/
%dir %{_datadir}/clang/

%files analyzer
%{_bindir}/scan-view
%{_bindir}/scan-build
%{_libexecdir}/ccc-analyzer
%{_libexecdir}/c++-analyzer
%{_datadir}/scan-view/
%{_datadir}/scan-build/
%{_mandir}/man1/scan-build.1.*

%changelog
* Sun Aug 06 2017 Thom Troy  - 3.9.1-1
- First build - spec a modified version of fedora25 SRPM


Include What You Use


Name:           iwyu
Version:        0.7
Release:        1%{?dist}
Summary:        C/C++ source files #include analyzer based on clang

License:        NCSA
Source0:        https://github.com/include-what-you-use/include-what-you-use/archive/clang_3.9.tar.gz

BuildRequires:  cmake3
BuildRequires:  clang-devel >= 3.9
BuildRequires:  llvm-devel
BuildRequires:  llvm-static
BuildRequires:  zlib-devel
# Scripts are Python 2
BuildRequires:  python2-devel
BuildRequires:  ncurses-devel

# Virtual provide the long name
Provides:  include-what-you-use = %{version}-%{release}
Provides:  include-what-you-use%{?_isa} = %{version}-%{release}

ExclusiveArch: %{ix86} x86_64


%description
"Include what you use" means this: for every symbol (type, function, variable,
or macro) that you use in foo.cc (or foo.cpp), either foo.cc or foo.h
should #include a .h file that exports the declaration of that symbol. The
include-what-you-use tool is a program that can be built with the clang
libraries in order to analyze #includes of source files to find
include-what-you-use violations, and suggest fixes for them. 


%prep
%autosetup -n include-what-you-use-clang_3.9


%build
mkdir build
cd build
%cmake3 -DIWYU_LLVM_LIB_PATH=%{_libdir}/llvm -DIWYU_LLVM_INCLUDE_PATH=%{_includedir} ..
%make_build


%install
%make_install -C build
cd %{buildroot}%{_bindir}
ln -s include-what-you-use iwyu
ln -s fix_includes.py fix_includes
ln -s iwyu_tool.py iwyu_tool


%check
# Need to have the clang header's at the correct relative path (see https://github.com/include-what-you-use/include-what-you-use/issues/100 )
ln -s %{_libdir} %{_lib}
cd build
PATH=$PWD:$PATH
ln -s ../fix_includes.py
ln -s ../fix_includes_test.py
ln -s ../iwyu_test_util.py
ln -s ../run_iwyu_tests.py
ln -s ../tests
%{__python2} run_iwyu_tests.py
%{__python2} fix_includes_test.py


%files
%{_bindir}/include-what-you-use
%{_bindir}/iwyu
%{_bindir}/fix_includes
%{_bindir}/fix_includes.py
%{_bindir}/iwyu_tool
%{_bindir}/iwyu_tool.py
%dir %{_datadir}/include-what-you-use
%{_datadir}/include-what-you-use/*.imp


%changelog
* Sun Aug 06 2017 Thom Troy  - 0.7
- Update to work on centos 7

Building the spec files

To build the spec files run:

$ spectool -g -R SPECS/llvm.spec
$ rpmbuild -ba SPECS/llvm.spec
$ sudo yum install -y RPMS/x86_64/llvm-static-3.9.1-1.el7.centos.x86_64.rpm \
 RPMS/x86_64/llvm-devel-3.9.1-1.el7.centos.x86_64.rpm \
 RPMS/x86_64/llvm-libs-3.9.1-1.el7.centos.x86_64.rpm \
 RPMS/x86_64/llvm-3.9.1-1.el7.centos.x86_64.rpm
$ spectool -g -R SPECS/compiler-rt.spec
$ rpmbuild -ba SPECS/compiler-rt.spec
$ sudo yum install -y RPMS/x86_64/compiler-rt-3.9.1-1.el7.centos.x86_64.rpm
$ spectool -g -R SPECS/clang.spec
$ rpmbuild -ba SPECS/clang.spec
$ sudo yum install -y RPMS/x86_64/clang-3.9.1-1.el7.centos.x86_64.rpm \
 RPMS/x86_64/clang-libs-3.9.1-1.el7.centos.x86_64.rpm \
 RPMS/x86_64/clang-devel-3.9.1-1.el7.centos.x86_64.rpm
$ spectool -g -R SPECS/iwyu.spec
$ rpmbuild -ba SPECS/iwyu.spec

This will result in you having all the RPMS and SRPMS needed to install and use CLang and LLVM v3.9.1.

Saturday, 25 February 2017

It's the end of the line

When working on an install script recently, I came across one of those bugs that make you realise just how pedantic computer programming can be. I had a file that contains a list of yum package names and a script that read the file and did some work on them.

PackageList.txt

redis
python

InstallScript.sh

while read PKG; do
    yum install -y ${PKG}
done < /path/to/PackageList.txt

This file had been working fine as part of our installer for a number of iterations. As part of developing a new feature, I added a new package to the list and saved the file. Thinking this was such a small change that it would just work, I committed it and pushed the changes. However, when the script was run, our tester complained that the new package was missing.

I sat down to debug the issue by checking that the package existed, that the script hadn't changed, and that I had the package name correct. As part of this debugging I resaved the file and it worked again.

After scratching my head, getting a cup of tea and doing some searching, I discovered that the POSIX standard defines a line as ending in a newline character, so the last line of a file should be terminated with one. Without that trailing newline, read hits end-of-file before it sees a complete line and returns a non-zero status, so the while loop never processes the final package. My editor of choice for development is Sublime Text, which by default doesn't add the newline character to the end of the file.

In order to turn it on, you should edit your preferences and set the following option to true.

// Set to true to ensure the last line of the file ends in a 
// newline character when saving
"ensure_newline_at_eof_on_save": true

You may also see a symptom of this issue when committing files to source control, where at the end of a diff you will see:

\ No newline at end of file



Friday, 10 February 2017

Inflation Problems

Despite 64-bit operating systems being the default for over 10 years, some of the code I use is still compiled with "-m32" for 32-bit mode. The reasons for this are mostly lack of management will and developer time. As I got time between projects, I decided to update the code so that we can release in both 32-bit and 64-bit mode.

Upgrading the code to be ready for 64-bit mode proved to be a slow task with many chances for errors. I hope that by showing these errors and some common fixes, I can help others to also update their code.

Common Errors

int or unsigned int instead of size_t


On a 32-bit system this isn't really a problem as all 3 types are 32-bit integers, so you won't get errors. However, it's not portable, and on a 64-bit Linux system size_t is a 64-bit (unsigned) integer. This can cause issues with comparisons and overflow. For example:

string s = "some string";

unsigned int pos = s.find("st");
if( pos == string::npos) {
    // code that can never be hit
}

The above causes issues because string::npos can never be equal to pos, as the data type of an unsigned int is too small to hold string::npos.


This issue can be caught with the compiler flag -Wtype-limits. Or, preferably, use -Werror=type-limits to cause the compilation to fail with the following error:

error: comparison is always false due to limited range of data type [-Werror=type-limits]

As mentioned this can also cause overflow issues, for example:

unsigned int pos = string::npos;

This causes an overflow because string::npos is too big to fit in a 32-bit integer.

Again this can be caught by a compiler flag, in this case -Woverflow. And again I recommend using -Werror=overflow to cause a compilation error.

Wrong printf arguments


The logger in our codebase uses printf style formatting for formatting log lines. As a result, the most common warning in our 64-bit compile was related to format arguments.

The most common cause was related to the above assumption that a size_t is a 32-bit integer. Below is an example of the warning showing this:

warning: format '%u' expects argument of type 'unsigned int', but argument 2 has type 'size_t {aka long unsigned int}' [-Wformat=]
         TRACE(("Insert at position [%u]", pos));

The fix that I used for this warning was to use the %zu format specifier for size_t. This was introduced in the C99 standard and should be available in gcc and clang. However, it may not be available in some older versions of the Visual Studio compiler.


TRACE(("Insert at position [%zu]", pos));

I have also seen the above error in relation to other types, for example time_t, uintptr_t, and long. If you are unsure what the printf argument for a type is, you can use the helpful macros from the C "inttypes.h" header (<cinttypes> if using C++11 or later). This header provides macros with the printf arguments for various system typedefs.

Note: Before C++11 you must define __STDC_FORMAT_MACROS before including this header. For example, to print a uintptr_t you can use the macro PRIuPTR:


#define __STDC_FORMAT_MACROS 1
#include <inttypes.h>

bool MyList::insert(uintptr_t value)
{
    ....

    TRACE(("value [%" PRIuPTR "]", value));
}

Assuming the size of a type is always the same


Again this is somewhat related to the previous points. I saw a number of errors where it was assumed that a particular type was always the same length on different platforms.

The 2 most common were pointers and long.

In our code, pointer length issues often manifested as the printf argument error, e.g. using %08x instead of %p, but I also saw some cases where a pointer was cast to an int to pass it through a particular function. This would truncate the pointer and lose information on a 64-bit system.


In the case of long, it appears that in many cases it was assumed that long was always a 32-bit integer. I came across a number of errors caused by bitwise operations which assumed that a long was 32 bits. For example:

long offset = getSomeValue();
if ( offset & (1 << 31) )

This causes errors because long is not guaranteed to be a 32-bit integer. If you need to guarantee a size then you should use the correct typedef for that sized integer from the C "stdint.h" header (<cstdint> for C++11). e.g.

#include <stdint.h>

int32_t i32 = get32bitInt();
int64_t i64 = get64bitint();
...

These can then be used in conjunction with the PRIxxx macros from inttypes.h if you need to log / format them.

Even with stdint.h there were some ambiguous types being cast to / from different types. An example of this was time_t, whose size is not actually defined by the standard. After some googling and testing, I discovered it has the same size as a long (4 bytes on a 32-bit arch, 8 bytes on 64-bit). So when we needed to pass a time_t value and couldn't use the time_t typedef, I defaulted to using a long.


At the end of the article I show a very simple test program and its output on RedHat Linux. This shows how the size of types can change depending on the compilation mode.

Using the wrong type with malloc

This issue is not actually related to the 64-bit port but the symptoms of it only manifested when we ran the code in 64-bit mode.

There were a couple of blocks of code that were using malloc to get a block of memory for an array, and these were using the wrong type for the sizeof argument. For example, some code for a hash table included:


typedef struct HT
{
    int num_entries;
    int size;
    HTEntry **table;
} HT;

Then to initialize the table

HT *newtable = NULL;
newtable = (HT*)malloc(sizeof(HT));
newtable->size = size;

newtable->table = (HTEntry**)malloc(sizeof(int)*size);


This had been deployed and run error free for a number of years in our 32-bit software release. However, as the size of an int and the size of a pointer differ on 64-bit systems, it caused errors there.

The correct code is:

newtable->table = (HTEntry**)malloc(sizeof(HTEntry*)*size);

Unfortunately I was unable to catch this with any compiler warnings and it caused a crash when run. I had also run some static analyzers over the code which missed this.

Conclusions

The task of updating your code to make it 64-bit compatible is slow; however, it can be made easier if you take care to listen to your tools. This includes enabling compiler warnings, making some warnings errors, and using static analysis tools. These will help catch many of the common errors that can occur.


As for the benefit of updating, it will be worth it because:
  • It will help improve compatibility. As most OSes and software projects are now released in 64-bit mode by default, there is less chance of finding an incompatible package.
  • Allow access to new CPU instructions. Compiling in 64-bit mode allows access to new instructions and registers. Some initial tests have shown that certain sections of code can be up to 10% faster.
  • Improved code. Keeping the code compiling and working in both environments may lead to more careful programming.

References

http://www.drdobbs.com/cpp/porting-to-64-bit-platforms/226600156?pgno=1

http://www.viva64.com/en/a/0004/

http://www.drdobbs.com/parallel/multiplatform-porting-to-64-bits/184406427

Test program to check common sizes

In order to check sizes, I created a simple test program that will print out the sizes for some common types:

#include <iostream>
#include <cstdint>
#include <cinttypes>
#include <cstddef>

using namespace std;

int main()
{
    cout << "sizeof(int) : " << sizeof(int) << std::endl;
    cout << "sizeof(unsigned long) : " << sizeof(unsigned long) << std::endl;
    cout << "sizeof(long int) : " << sizeof(long int) << std::endl;
    cout << "sizeof(long long int) : " << sizeof(long long int) << std::endl;
    cout << "sizeof(int32_t) : " << sizeof(int32_t) << std::endl;
    cout << "sizeof(int64_t) : " << sizeof(int64_t) << std::endl;
    cout << "sizeof(double) : " << sizeof(double) << std::endl;
    cout << "sizeof(float) : " << sizeof(float) << std::endl;
    cout << "sizeof(size_t) : " << sizeof(size_t) << std::endl;
    cout << "sizeof(intptr_t) : " << sizeof(intptr_t) << std::endl;
    cout << "sizeof(uintptr_t) : " << sizeof(uintptr_t) << std::endl;
    cout << "sizeof(void*) : " << sizeof(void*) << std::endl;
    cout << "sizeof(char) : " << sizeof(char) << std::endl;
}

To compile and run, you can use:

$> g++ sizes.cpp -m32 -o t32.sizes
$> ./t32.sizes 
sizeof(int) : 4
sizeof(unsigned long) : 4
sizeof(long int) : 4
sizeof(long long int) : 8
sizeof(int32_t) : 4
sizeof(int64_t) : 8
sizeof(double) : 8
sizeof(float) : 4
sizeof(size_t) : 4
sizeof(intptr_t) : 4
sizeof(uintptr_t) : 4
sizeof(void*) : 4
sizeof(char) : 1



$> g++ sizes.cpp -o t64.sizes
$> ./t64.sizes 
sizeof(int) : 4
sizeof(unsigned long) : 8
sizeof(long int) : 8
sizeof(long long int) : 8
sizeof(int32_t) : 4
sizeof(int64_t) : 8
sizeof(double) : 8
sizeof(float) : 4
sizeof(size_t) : 8
sizeof(intptr_t) : 8
sizeof(uintptr_t) : 8
sizeof(void*) : 8
sizeof(char) : 1


As you can see there are a number of types that have different sizes. These will be the same on all Linux systems, however they aren't guaranteed across different operating systems.