How to build Backend IoT services using Sigfox, Spring, Docker and Jenkins pipelines?


If we believe market researchers like IDC, we will soon be overwhelmed with connected objects and robots. But that’s just one part of the story… ^^ To handle this huge number of connected devices, there is a whole range of tools for building IoT cloud services, and the challenge is to pick the most relevant tools and frameworks: those that simplify building robust IoT services by shortening the development life cycle and the time-to-market.

Do you want to discover a toolbox (but not a silver bullet), with a real workshop and demonstration, that will help you take up the challenge? Grab a chair and a coffee, I think you will find this useful ☺.

In this article, I will present the toolbox we use at Rtone to easily build, ship, and run a full-fledged IoT application. We’ll focus on the MVP (Minimum Viable Product) needed to get our nascent IoT application moving at velocity.

Concretely, throughout this article we are going to implement a meaningful IoT application dedicated to managing connected Sigfox temperature sensors, in RECORD TIME!

Our IoT services Lego Blocks!

In this first installment, I am going to introduce the architecture big picture of the application, and the different tools used to build and run the project.

Architecture big picture

I will explain step by step every single component and concept and how to put all this together through the next sections.

Backend services with Spring boot

The majority of our backend applications are written with Spring, and recently with Spring Boot, this wonderful Java framework that brings a set of useful capabilities together into a single coherent framework and approach, streamlining the effort it takes to build a robust and resilient backend.

In a nutshell, this framework helps us create a full-fledged backend that manages a fleet of devices. In this article, we will build our IoT backend around Sigfox devices, using the Java third-party library “Sigfox Rest Client” (developed by our team), and expose a webhook in order to collect sensor data pushed by Sigfox callbacks (the way Sigfox data collection works).

Frontend with Angular

For the demo, we will implement a Single Page web Application (SPA) with real-time charts that update automatically on every new measure, through a WebSocket connection. I’ve chosen Angular 4 for this: a great framework for quickly spinning up a new web application. I am not going to dive into any details about it, since it’s not the subject of this article; however, the sources of the webapp module are available on the project’s GitHub repo.

Build, ship and run using Jenkins Pipeline and Docker

One of the greatest challenges is, as stated above, service delivery velocity, i.e., how to improve continuous delivery with short lead times in order to reduce time-to-market and thus create value for your business. For this purpose, there is only one solution: automation of the integration and delivery process. To do so, we use a magic box holding a set of tools that make building, shipping and deploying our applications much easier than before. Jenkins will be used for building (continuous integration), whereas Docker and Docker Compose will be used for delivering the application. The big challenge in all of this is automating the setup of the infrastructure: we need a reproducible infrastructure whatever the stage of the delivery pipeline (test, pre-production or production). Fortunately, Docker is great at creating the necessary infrastructure (for instance, our backend IoT server runs in containers with openjdk8-jre).
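To give an idea of how little is needed on the container side, here is a minimal Dockerfile sketch for such a Spring Boot backend. The base image tag and artifact path below are assumptions for illustration, not the project’s actual files:

```dockerfile
# Minimal sketch: base image and artifact path are assumptions
FROM openjdk:8-jre-alpine
COPY target/iot-sigfox.war /app/app.war
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.war"]
```

A Spring Boot executable archive embeds its own servlet container, which is why a bare JRE image is enough.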

Connecting devices using Sigfox IoT connectivity services

Sigfox is a Low Power Wide Area Network (LPWAN) that enables users to connect their lightweight devices. This technology is a good fit for any application that needs to send small and infrequent bursts of data. Things like basic alarm systems, location monitoring and simple metering are all examples of one-way systems that might make sense for this type of network. I will not go into any details about this technology, but you can find lots of articles and blog posts about it. All we need is a bit of configuration of our devices and of the connectivity with our IoT backend, which we will address in the next sections.

How it works

The application project introduced in this article revolves around fictitious ‘temperature sensors’ connected to the Sigfox network. These sensors record and push temperature data to our IoT backend via the Sigfox network (uplink data). A second general requirement: we assume the sensors need a weekly time sync with the server to ensure consistent measurement data (downlink data).

Sensors are managed using the Sigfox Rest Client, which interfaces with the Sigfox backend API. The SDK was developed by our team; the project is available as open source on GitHub and the dependency is also available on Maven Central.

Setting Up our project

The Spring Initializr gets our project off to a running start. There are several ways to use it, but since we’re taking the simplest path possible for this first demo, let’s just point our browsers at start.spring.io. Spring Boot gives you options, such as a Gradle-based build, various versions of Boot, Java/Groovy, and packaging (JAR vs. WAR), but we’ll stick to most of the defaults for our demo.

Here are the choices we’ll make for our project:

Spring Initializr web page


  • Maven project
  • Latest non-snapshot version of Spring Boot
  • Group: fr.rtone (feel free to use your own)
  • Artifact: iot-sigfox
  • Dependencies: Web, JPA, Devtools Repositories

On your mark, get set, go !!

Once you’ve made the above selections, simply click the ‘Generate Project’ button and the Spring Initializr will generate a skeleton project, bundle it into a .zip file, and serve it up for download. Save it locally, unzip it, and open the project in your favorite IDE to start coding.

Project structure

In order to split the project structure into 2 modules (backend and frontend), I made some packaging changes: a Maven reactor (top-level pom) and two modules with the necessary build configuration. The frontend module is served by the backend module as static resources.
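For reference, the heart of such a reactor setup is just a top-level pom.xml declaring the modules. A rough sketch could look like this; the module names are illustrative, not the project’s actual ones:

```xml
<!-- top-level (reactor) pom.xml: module names are illustrative -->
<groupId>fr.rtone</groupId>
<artifactId>iot-sigfox</artifactId>
<packaging>pom</packaging>
<modules>
    <module>backend</module>
    <module>webapp</module>
</modules>
```

Running `./mvnw package` at the top level then builds both modules in dependency order.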

Setting Up our CI & CD

Given that we have a pretty simple project, and thus a simple CI and CD workflow, our Jenkins pipeline is made of 7 stages:

  1. Checkout: pulling the codebase to test and build the project.
  2. Check tools: checking that all the necessary environment tools are installed (Java, node, npm, etc.).
  3. NPM install: installing all webapp packages and dependencies.
  4. Backend testing & frontend testing: executing all backend and frontend unit and integration tests.
  5. Packaging & building the Docker image: packaging the jar and building the Docker image.
  6. Push image: publishing the image to a private Docker repository.
  7. Deploy the infrastructure: deploying the whole infrastructure (the app & the DBMS) to a remote server.

The pipeline is declared in Groovy in the Jenkinsfile located in the root folder. This snippet shows how to declare a stage:

stage('Packaging & building docker image') {
    sh "./mvnw package -Pprod docker:build -DskipTests"
    archiveArtifacts artifacts: '**/target/*.war', fingerprint: true
}
From the sixth stage on, Docker and Docker Compose are required: first to build the Dockerfile (located in the root folder) and push the image to the private repository (or a remote server), and second to run and orchestrate the infrastructure (for instance, the backend app and the MySQL DBMS).

The application infrastructure is described in a docker-compose.yml file located in the docker-compose folder.
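To give an idea, such a compose file might look roughly like this. The service names, image tag and credentials below are assumptions for illustration, not the project’s actual file:

```yaml
# Sketch of a docker-compose.yml: names, tags and credentials are assumptions
version: '2'
services:
  iot-backend:
    image: registry.example.com/iot-sigfox:latest
    ports:
      - "8080:8080"
    depends_on:
      - iot-mysql
  iot-mysql:
    image: mysql:5.7
    environment:
      - MYSQL_DATABASE=iotsigfox
      - MYSQL_USER=iot
      - MYSQL_PASSWORD=changeme
      - MYSQL_ROOT_PASSWORD=changeme
```

A single `docker-compose up -d` then brings up the whole stack, which is exactly what makes the infrastructure reproducible across test, pre-production and production.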

Device management

To explore our connected object fleet, we need to declare our devices and synchronize our device database with the Sigfox Backend. To do that, we can use the ‘Sigfox Rest Client’ to perform recurrent device-list fetching (periodic HTTP polling) in order to create, update or delete devices from the Sigfox backend, as pictured below:

Device fleet sync mechanism

We can perform this with a simple Spring scheduling task, using the magic @Scheduled method annotation and a Sigfox API call via the Sigfox Rest Client, as shown below:

/**
 * Scheduled service (every 30 minutes) to update the device list.
 */
@Scheduled(cron = "0 0/30 * * * *")
public void updateDeviceList() {

    String deviceTypeId = "123456";
    sigfoxClient.getDeviceList(deviceTypeId, "", LIST_SIZE, 0)
            .subscribe(data -> {
                // create, update or delete local devices from the fetched list
            },
            error -> logger.error("something went wrong with sigfox", error));
}

We can even declare new devices from our backend to ensure full device management on our end. As I am keeping the project simple in order to cover and illustrate all the concepts, it’s up to you to dig further into those details if you want ☺.

Device Data collecting (Upstream data)

The first fundamental and common use case in the IoT world is “data collecting” from connected devices into the IoT backend. For our purposes, the data is nothing more than temperature measures, created by the devices and sent over the Sigfox network to the Sigfox Backend. Sigfox then pushes the data to our backend through its push notification system, to a webhook (called a callback in the Sigfox system). This webhook (or callback) can be configured in the Sigfox Backend, or directly via the REST API using the Sigfox Rest Client. The following schema shows how this mechanism works:

Data push mechanism

The collected data are stored in a classic relational database (MySQL DBMS).

Step #1: Configure Sigfox Callback System

First of all, we need to configure our device callback at the device type level. We assume that you already have a Sigfox account and have configured at least one device (to check that the device is correctly connected, you can configure an email callback).

Sigfox callback configuration screenshot


  • Our callback is a BIDIR (Bidirectional) Data Callback (bidir because it will be used for both uplink and downlink);
  • Choose the URL channel and provide the endpoint of your IoT backend server; your server must be deployed on an accessible host with an IP address or CNAME;
  • Sigfox provides the data and a set of meta information about the device and the network (device S/N, time, avgSnr, station, etc.) in variable tokens (as you can see in the screenshot above). These variables are parsed and sent in one of two ways: either in the URL (screenshot) as request parameters, or in the body (payload) of the request if the HTTP verb used is POST or PUT;
  • We can use any content type; in our case we use the famous JSON structure;
  • We can also provide headers, for example a security token to protect our endpoint.
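The webhook binds these callback variables to a DTO. As a concrete illustration, here is a minimal sketch of what a CallbackDataDTO could look like; the exact field set and names below are assumptions and should mirror whichever variables your callback is actually configured to send:

```java
// Sketch of the DTO the webhook binds the Sigfox callback variables to.
// Field names are assumptions mirroring common variables: {device}, {time},
// {data}, {avgSnr}, {station}.
public class CallbackDataDTO {

    private String device;   // device identifier
    private Long time;       // event timestamp (seconds)
    private String data;     // raw payload as a hex string
    private Double avgSnr;   // average signal-to-noise ratio
    private String station;  // base station identifier

    public String getDevice() { return device; }
    public void setDevice(String device) { this.device = device; }
    public Long getTime() { return time; }
    public void setTime(Long time) { this.time = time; }
    public String getData() { return data; }
    public void setData(String data) { this.data = data; }
    public Double getAvgSnr() { return avgSnr; }
    public void setAvgSnr(Double avgSnr) { this.avgSnr = avgSnr; }
    public String getStation() { return station; }
    public void setStation(String station) { this.station = station; }
}
```

With body-based delivery (POST), Spring deserializes the JSON payload straight into this object.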

Step #2: Implement our backend webhook (REST endpoint)

Spring makes this functionality pretty easy to implement. All we need to do is create a controller class (a Java class annotated with @Controller):

public void uplinkEndpoint(CallbackDataDTO callbackData) {
    // find device
    Device device = deviceRepository.findOne(callbackData.getDevice());
    // no device, no chocolate !
    if (device == null) {
        return;
    }
    // persist the new measure
    Measure measure = new Measure();
    Long value = Long.valueOf(callbackData.getData());
    // ...
    // push data through websocket connection
    List<SigfoxDataDTO> sigfoxData = measureRepository.findAll(new Sort(Sort.Direction.DESC, "timestamp"));
    // ...
}

Re-running our application and testing the new endpoint by emulating a device push to our IoT backend (using curl or Postman, for example) confirms that our webhook is now active.

And we are ready to receive and persist sensor data, congratulations!

Device data downlink (downstream data)

In our use case, sensor timers need to be updated once a week to make sure that the measure timestamps are accurate enough. To perform this, we need to make some changes to the callback configuration and to the IoT backend webhook as well. The following schema shows how this mechanism works:

Data push/downlink mechanism

Step #1: Configure Sigfox Downlink mode

In the device type form (in edit mode), we have two modes in the ‘downlink data’ section:

Sigfox Dowlink configuration screenshot


  • DIRECT: a static message configured directly in the Sigfox Backend, which will automatically be sent by Sigfox when the downlink flag is ticked in the callback view:

Downlink activation

  • CALLBACK: in this mode, it’s up to our IoT backend’s webhook to return the value to send back to the device on an uplink call. This mode matches our need perfectly, since we want to perform a weekly sync of the device timers.

Step #2: Enable Sigfox Downlink mode

In the same way as DIRECT mode, we need to enable the Downlink mode in the callback view, by ticking the Downlink checkbox.

Step #3: Adapt our REST endpoint to meet the changes

Our endpoint has to check the last timer update for a given device (sensor) and send back the current server timestamp if the last update was done more than a week ago:

public String uplinkEndpoint(CallbackDataDTO callbackData) {

    // find device
    Device device = deviceRepository.findOne(callbackData.getDevice());

    // no device, no chocolate !
    if (device == null) {
        return null;
    }

    // persist the new measure
    Measure measure = new Measure();
    measure.setValue(Long.valueOf(callbackData.getData()));
    measureRepository.save(measure);

    // push data through websocket connection
    List<SigfoxDataDTO> sigfoxData = measureRepository.findAll(new Sort(Sort.Direction.DESC, "timestamp"));
    // ...

    // send the downlink payload if the last timer sync is older than a week
    if (isLastUpdateBeforeWeek(device)) {
        // current server timestamp (here in seconds)
        long timestamp = Instant.now().getEpochSecond();
        // payload is an 8-byte hex string
        String payload = Converter.longToBigEndianHex(timestamp, 16);
        return payload;
    }

    return null;
}
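The endpoint relies on two helpers, isLastUpdateBeforeWeek and Converter.longToBigEndianHex. Here is a minimal, self-contained sketch of plausible implementations; the project’s actual code may differ, and for testability the date check here takes the timestamps directly instead of a Device entity:

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class DownlinkHelpers {

    // True if the last time sync happened a week or more before 'now'.
    // In the real endpoint, 'lastTimeSync' would come from the Device entity.
    public static boolean isLastUpdateBeforeWeek(Instant lastTimeSync, Instant now) {
        return ChronoUnit.DAYS.between(lastTimeSync, now) >= 7;
    }

    // Encodes a long as a zero-padded, big-endian hex string of 'hexDigits'
    // characters (16 hex digits = the 8-byte Sigfox downlink payload).
    public static String longToBigEndianHex(long value, int hexDigits) {
        String hex = Long.toHexString(value);
        StringBuilder padded = new StringBuilder();
        for (int i = hex.length(); i < hexDigits; i++) {
            padded.append('0');
        }
        return padded.append(hex).toString();
    }
}
```

For example, encoding the current epoch second with 16 hex digits yields a 16-character payload ready to be returned to Sigfox in the callback response.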

Running the demo step by step


The source code of the project is available on GitHub; you can check it out with:

git clone

Running the demo code is easy, but you’ll need the following software installed on your machine first. For reference, I’m using Ubuntu 16.04 as my OS, but as we said, the application infrastructure is reproducible, thanks to our savior, Docker.

Make sure that Docker and Docker-compose are installed (or install them)

docker -v
docker-compose -v

You also need Jenkins 2.0 installed (you can find the installation guide here).

Step #1: Configure your Jenkins Job

Jenkins pipeline job configuration

Jenkins job configuration is pretty simple, as the pipeline is described in the Jenkinsfile. All we have to do is provide the Jenkinsfile path, and the rest is up to Jenkins!

Step #2: Start your job!

To launch the CI/CD workflow, press Build Now. After a few minutes you should see something like this on the page of your new Jenkins job:

Jenkins pipeline execution

Voila, that’s it! We have successfully set up and run our first IoT backend service. Every time we launch the job, the application goes through the whole pipeline and is finally deployed.

This way we get feedback quickly, increase the reliability of our delivery process, and reduce the risk of releasing (thanks to automation).

About the author

Hani Benzouache

Full-stack software architect & coder, also a cloud engineer, passionate about open source, Java & NoSQL. See his profile