D3.5 White Paper on Sensor Clouds
© SelSus Consortium
Restricted to other programme participants
(including the Commission Services)
WP3 project team
The SelComp concept was primarily developed to create a representation of the
shop-floor equipment and synchronize it with a Cloud system. For that purpose, a base technology for device
discovery and data exchange was explored, with support for both Java and C#
programming languages. After a thorough analysis, these requirements were fulfilled by
using the Universal Plug’n’Play (UPnP) architecture. Its flexible implementation of the
publish-subscribe pattern, using both a UPnP Control Point (receiver) and a UPnP Device
(sender), allowed for the quick implementation of the base template of the SelComp. The
development of this template was supported by a set of tools available to implement the
wrapping technology for all the equipment. Furthermore, the SelComp template is easily
adaptable according to the specificities and necessities of each partner present in the
project consortium. Since different partners operate in different markets, this requirement
was contemplated beforehand.
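The publish-subscribe interaction between a UPnP Device (sender) and a UPnP Control Point (receiver) can be sketched in miniature. The following is a plain Python illustration of the pattern only, not the UPnP API; all class and method names are illustrative assumptions.

```python
# Minimal publish-subscribe sketch: a "Device" publishes events and
# "Control Points" subscribe, mirroring the UPnP roles described above.
# These names are illustrative, not the UPnP API.

class Device:
    """Sender: holds subscriber callbacks and notifies them on publish."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, event):
        for callback in self._subscribers:
            callback(event)

class ControlPoint:
    """Receiver: collects the events it is notified about."""
    def __init__(self):
        self.received = []

    def on_event(self, event):
        self.received.append(event)

device = Device()
control_point = ControlPoint()
device.subscribe(control_point.on_event)
device.publish({"temperature": 21.5})
print(control_point.received)  # [{'temperature': 21.5}]
```

The decoupling shown here is what allows a SelComp to expose data to any number of listeners without knowing them in advance.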
Since the main objective of the SelSus project is to explore
concepts like Condition-based Monitoring and Machine Diagnostics, two major types of
shop-floor equipment are considered: 1) Machines / Controllers (Machine SelComp) and
2) Wireless Sensor Networks (Sensor SelComp). This distinction between machines and
sensors is nowadays critical in industry, since there are operating
machines that are not prepared to collect data from their own processes, which impairs the use
of complex methods, e.g. to infer any deviation from the normal process behaviour.
Therefore, the use of additional sensors that 1) do not influence the process itself, 2) are
quick to deploy and reconfigurable when process requirements change and 3) are fully
integrated with a data visualization / analysis platform was the key point explored in the
Sensor Cloud technology developed in the SelSus project.
Based on this, the next sections explain the main innovative features that
compose this Sensor Cloud technology, starting with the main building blocks of the
Sensor SelComp:
1) Dynamic Modular Software Reconfiguration, which allows the data
processing logic to be changed at runtime;
2) Flexible Sensor Integration, which allows the graphical development of an
interpreter of raw byte data packets at the gateway level for automatic data
decoding;
3) Statistical Analysis, detailing some of the potentialities of using
certain methods and algorithms to analyse Machine and Sensor data,
available at the Sensor Cloud level, which can be used in the reconfiguration
process of the Sensor SelComp.
The SelSus Component concept (SelComp) envisages the creation of an innovative
smart component with enhanced reconfiguration, sensing, decision making and
communication capabilities. These smart components are capable of providing a quick
response in real-time scenarios and of collaborating in synergy with other entities in complex
industrial scenarios, such as Industrial Cyber Physical Systems (Industrial CPS). By
combining the capabilities of reconfigurable software, machine to machine communication
and sensing, this solution is capable of delivering specific and specialized computational
analysis with a great degree of flexibility.
A tentative definition states that
“Smart Components in manufacturing are components which incorporate functions of
[…]”. The proposed solution takes advantage of the advent of the Internet of Things (IoT).
Economically speaking, the hardware solutions adopted are low cost and widely
employed in recent smart manufacturing systems. Low cost embedded hardware
platforms, such as RaspberryPi and Beaglebone Black, provide the necessary
computational power, sensor interfacing and communication capabilities. The market of
low-cost sensors also offers a wide range of products for the measurement of a large set of
physical properties with several degrees of precision.
From a technological and state-of-the-art point of view, by adopting IoT hardware
and software solutions, the SelComp solution profits from cutting-edge technology
and contributes to the creation of standardized technology.
Figure 2. The component is fully modular and each module can be updated by
deploying a new plugin in the system.
Data processing modules can implement algorithms such as Neural
Networks or Control Charts. These modules are “write once, run everywhere”: once
developed, these files can run in any Sensor SelComp. The modules are maintained in the
Sensor SelComp file system, and the SelSus Cloud controls what data processing
modules are present and running in each Sensor SelComp.
Each block controls a specific functionality, and the system is mainly divided into three core blocks.
The North Gate and the SelSus Cloud share the same ontology. The North Gate is
responsible for handling Cloud interactions. Through this interface, the Sensor SelComp
exposes its services to the network, and the SelSus Cloud is aware of all the services
available. There are four static services which the cloud (or external components) can invoke:
Internal Structure Reconfiguration;
Submit Data Processing Modules;
Submit System Plugins;
Submit Sensor Parsers.
Let us discuss the Internal Structure Reconfiguration in more detail, since it is the
core of the solution. The Sensor SelComp is composed of services: the SelSus Cloud, a Sensor or an
External Component is virtualized and treated as a service which provides and/or receives
data. The Data Processing modules presented can also be instantiated; each instance is
transformed into a service which provides data after treatment.
The whole logic is that every service running in the Sensor SelComp implements
the publish-subscribe pattern and can be connected in a directed acyclic graph
arrangement, as presented in Figure 3. Data flows from the bottom of the graph to the top,
with components in the network subscribing to the Sensor SelComp services. This design puts a
strong emphasis on the “servicitization” of processes and physical devices. This
virtualization turns the shop-floor into a digital twin of the physical world, aligning the
system with the recent Industrial CPS and ISC trends.
The Dynamic Modular Software Reconfiguration is the solution proposed to meet these
industrial demands. Therefore, it must provide consistent mechanisms to configure, deploy
and dynamically reconfigure the Sensor SelComp source code responsible for data
processing, which translates into a component-based middleware. It is component-based
because each piece of software that can be reconfigured at the Sensor SelComp level is
seen as a component that can be easily updated or exchanged.
In software engineering, the terms “Modular Software” and “Software Component”
are the closest topics to what we refer to as “Reconfigurable Software”. Reusability is one
of the key benefits of using a component model. Component models can be divided into two
categories: 1) as in object-oriented programming, components are objects; 2) components
represent units in software architectures, following a generally accepted view of a software
component.
A SelComp internal logic arrangement is represented using a directed acyclic graph
(DAG). The graph structure in Figure 4 can be divided into three levels, each with a different
purpose: the Sensor Level (bottom level) includes nodes representing the physical sensors
providing data to the gateway; the Data Treatment Level (middle level) includes nodes
representing instances of algorithms embedded at the gateway that can treat information
in several ways (e.g. aggregate and validate data using techniques such as control charts,
perform trend analysis, etc.); the Network Level (top level) includes nodes where the flow
resulting from the lower-level nodes can be redirected to subscribing hosts in the network
(e.g. Sensor Cloud, industrial machines). This internal structure can be dynamically
rearranged at runtime using the Sensor Cloud, where new sensors and data modules can
be loaded and, therefore, the connections between nodes can be reformulated to
synchronize and treat data in new ways.
Figure 4 - Sensor SelComp configuration
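The three-level flow described above can be sketched as a tiny static pipeline. The node names, the aggregation algorithm and the wiring below are illustrative assumptions; the actual Sensor SelComp wires its services dynamically through the graph.

```python
# Sketch of the three-level DAG: sensor nodes feed data-treatment nodes,
# whose output is forwarded to network-level subscribers.

# Level 1: Sensor Level - raw readings entering the gateway
sensor_readings = {"temp_sensor": [20.1, 20.3, 35.0, 20.2]}

# Level 2: Data Treatment Level - an instance of a simple aggregation algorithm
def moving_average(values, window=2):
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# Level 3: Network Level - subscribers (e.g. the Sensor Cloud) receive results
network_subscribers = []

def forward_to_network(payload):
    network_subscribers.append(payload)

# Wire the graph: sensor -> treatment -> network
treated = moving_average(sensor_readings["temp_sensor"])
forward_to_network({"source": "temp_sensor", "values": treated})

print(network_subscribers)  # one payload carrying the smoothed values
```

In the real system, the reconfiguration service would replace `moving_average` by another deployed module or rewire the edges at runtime.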
Flexible Sensor Integration
The technology developed for flexible and easy Sensor Integration in the SelSus project is
a graphical solution to build parsers for Wireless Sensor Network (WSN) messages.
The construction of this WSN message parser is not only suitable for new nodes to
be integrated in the network, but also to extend existing ones to support new sensors in
the node itself. In order to build the desired parser, the major assumption is that the
message frame content is known. Based on this, a message
frame is defined as being composed of a set of items. An item is the atomic element that needs
to be found in the messages received at the gateway. An item can be defined as
a single byte or a set of bytes, depending on the user preferences, that is either at a pre-
defined position in the message, called the index (in the case of fixed-size
messages), or varies according to the encoding of the message (in the case of variable-size
messages). Based on this, the Flexible Sensor Integration solution looks for these items
in newly received messages, in a specific order ruled by the indexes defined in the
interface; if all items match, the received message can be correctly interpreted.
Normally, the first item is a SYNC, which is a sequence of chars that uniquely
identifies the start of a message. This kind of item is particularly important because it
allows the system to recognise which kind of message has arrived and to choose the right parser for decoding.
Therefore, each item is specified by: 1) Name; 2) Type (how it should be decoded);
3) Default Value (if it has one, hex-encoded); 4) Start Index; 5) Size (if it is not variable); and 6)
Variable Field (true or false, depending on whether the size is fixed or not).
If an item is variable (no fixed size for interpretation), the immediately following item is in most
cases a separating char.
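The item-based matching described above can be sketched for the fixed-size case. The item schema, field names and example frame below are assumptions for illustration, not the actual SelSus format.

```python
# Sketch of item-based parsing for fixed-size frames. Each item has a name,
# start index, size and decode type; the SYNC item's default value must match
# for the parser to accept the message. All values here are illustrative.

ITEMS = [
    {"name": "SYNC", "index": 0, "size": 2, "type": "raw",
     "default": b"\xaa\x55"},
    {"name": "node_id", "index": 2, "size": 1, "type": "uint"},
    {"name": "temperature", "index": 3, "size": 2, "type": "uint"},
]

def parse_frame(frame, items=ITEMS):
    """Return the decoded fields, or None if any item fails to match."""
    result = {}
    for item in items:
        chunk = frame[item["index"]:item["index"] + item["size"]]
        if "default" in item and chunk != item["default"]:
            return None  # SYNC (or fixed field) mismatch: wrong parser
        if item["type"] == "uint":
            result[item["name"]] = int.from_bytes(chunk, "big")
        else:
            result[item["name"]] = chunk
    return result

frame = b"\xaa\x55\x07\x01\x2c"
print(parse_frame(frame))                    # node 7, raw temperature 300
print(parse_frame(b"\x00\x00\x07\x01\x2c"))  # None (SYNC mismatch)
```

Variable-size items would additionally scan for the separating char instead of using a fixed size.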
Additionally, the user interface is also used to provide additional information such as
manufacturer, model, serial port configuration (only for protocols that need one), physical
link, protocol name and protocol norm. For ease of sensor integration, the SelSus system
already has component meta-information that can be filled in for further ease of integration.
After creating the parsers in the Cloud solution, all the parsers are converted to an
XML-encoded representation containing all the data needed to receive and decode one message
type. Moreover, at the Cloud level it is possible to deploy the parsers to any chosen Sensor
SelComp, where they become immediately available to interpret new data from the sensor nodes.
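As an illustration, such an XML representation could look like the fragment below. The element and attribute names are assumptions, since the actual SelSus schema is not shown here; only the kinds of information listed above (manufacturer, serial configuration, item definitions) are taken from the text.

```xml
<!-- Hypothetical structure; element and attribute names are assumptions. -->
<parser manufacturer="ExampleCorp" model="Mote-X" protocol="IEEE802.15.4">
  <serial baudrate="115200"/>
  <item name="SYNC" startIndex="0" size="2" type="raw" default="AA55"/>
  <item name="nodeId" startIndex="2" size="1" type="uint"/>
  <item name="temperature" startIndex="3" size="2" type="uint" variable="false"/>
</parser>
```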
The present solution for sensor integration was validated using the IEEE 802.15.4 RF protocol,
with 4 motes, 2 different baud rates (115200, 57600) and 4 sensor types (Battery, ACC, PIR and
Temperature). These tests were performed not only on a PC platform, but also on a
Raspberry Pi 2, to test the efficiency on devices with lower computational power. Two major
tests were made: 1) 4 motes with one sensor each; 2) 4 motes with 1, 2, 3 and 4 sensors,
respectively.
Some preliminary results are depicted in Figure 5 and Figure 6 where the amount
of time required to interpret a single sensor message is presented in milliseconds. The
tests were made using a sampling rate of 5 seconds (0.2 Hz) for a period of 1 h. From the
figures, we can also see that the performance of the PC is better compared with the
Raspberry Pi 2, as expected. Using the PC platform (Figure 5), the mean value over all
decoded messages is 0.056 milliseconds, with a standard deviation of 0.095 milliseconds.
For the Raspberry Pi 2 test, the mean value is 1.27 milliseconds, with a standard deviation
of 1.05 milliseconds.
One of the key aspects when assessing the data from sensors is its validity;
consequently, one of the main goals in the SelSus project is to develop algorithms and modules for statistical
sensor data processing.
Besides validation of sensor readings for the detection of sensor malfunction,
algorithms based on statistical analysis have been developed and implemented for data
fusion, multi-variable processing, process monitoring and fault detection to process the
huge amount of data generated by the sensor network.
One of the main challenges in automated sensor signal processing is sensor
signal validation. Since sensor signals are subject to drifts and noise, simple fixed-limit
strategies prove not to be very effective. A much more sophisticated way is a validation
based on Kalman filter strategies. Consider a sensor that delivers temperature data (a single
float-precision value) at certain, not equidistant, time instances. An alarm should be
generated when an unusual temperature change occurs. This could, for example, be caused
by sudden strong friction or a smouldering fire. Additionally, due to day/night and
seasonal drifts, it is not possible to apply fixed limits for alarm activation.
The aim of the algorithm is to detect if the measurement series behaves unusually,
for example overheating or certain pressure loss while tolerating normal process patterns.
Generally speaking, a system state must be observed from noisy measurements and the
state movement must be evaluated against a usual drift. The concept of the Kalman state
observer is well suited for this approach because it allows not only the state observation
but also the estimation of the typical state variance, which can be used to detect faulty states.
For this, knowledge of the process and noise characteristics is required. Based on a series of provided sensor readings, the
implemented algorithm is able to estimate these parameters in a reliable manner.
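A minimal sketch of this idea, for a scalar signal modelled as a random walk: an alarm is raised when the innovation (measurement minus prediction) exceeds a multiple of its predicted standard deviation, and the flagged sample is excluded from the state update. The noise parameters are fixed here for illustration, whereas, as noted above, the implemented algorithm estimates them from the data.

```python
# Scalar Kalman-filter validation sketch (random-walk state model).
# q: process noise variance, r: measurement noise variance (assumed values).

def kalman_validate(measurements, q=0.01, r=0.25, threshold=3.0):
    x, p = measurements[0], 1.0          # initial state estimate and variance
    alarms = []
    for k, z in enumerate(measurements[1:], start=1):
        p_pred = p + q                   # predict: random walk keeps the state
        s = p_pred + r                   # innovation variance
        innovation = z - x
        if abs(innovation) > threshold * s ** 0.5:
            alarms.append(k)             # unusual change: flag this sample
            p = p_pred                   # reject outlier, keep state estimate
            continue
        gain = p_pred / s                # measurement update with Kalman gain
        x = x + gain * innovation
        p = (1 - gain) * p_pred
    return alarms

readings = [20.0, 20.1, 19.9, 20.2, 27.5, 20.1, 20.0]
print(kalman_validate(readings))  # [4]: only the jump to 27.5 is flagged
```

Because the predicted variance adapts to the observed noise level, slow day/night drifts pass while sudden jumps trigger the alarm, which is exactly the behaviour fixed limits cannot provide.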
The second implemented statistical approach deals with the analysis of sensor
array behaviour. When there are multiple sensor readings in parallel, it would be
cumbersome to set up individual control charts for each individual sensor. Moreover, it would
not be possible to detect faults which are due to unusual correlations, and the natural
redundancy of such a sensor array would not be exploited. So there are many reasons
to use multivariate methods for assessing sensor array data as a whole. Common
methods dealing with this are the Q and T² statistics.
The Q statistic deals with outliers which are not covered by the multivariate model
but have unusual features. Both statistical values allow the detection of any unusual
environmental behaviour or sensor malfunction in the data delivered by a sensor array.
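The interplay of the two statistics can be illustrated with a deliberately simplified two-sensor example. Here the sensors are assumed redundant (nominally equal readings), so the principal direction and the model parameters are fixed by hand rather than estimated from training data via PCA, as a real implementation would do.

```python
# Simplified T² / Q monitoring for a two-sensor array. The principal
# (common-mode) direction is fixed at (1,1)/sqrt(2) by the assumed
# redundancy; MEAN and VAR_PC stand in for values learned from training data.
import math

MEAN = 20.0        # nominal reading (assumed known from training data)
VAR_PC = 4.0       # variance along the principal direction (assumed)

def t2_and_q(s1, s2):
    # centre and project onto the principal direction (1,1)/sqrt(2)
    x1, x2 = s1 - MEAN, s2 - MEAN
    score = (x1 + x2) / math.sqrt(2)
    t2 = score ** 2 / VAR_PC          # T²: unusual common-mode movement
    # Q (squared prediction error): what the model cannot explain,
    # here the disagreement between the two redundant sensors
    residual = (x1 - x2) / math.sqrt(2)
    q = residual ** 2
    return t2, q

print(t2_and_q(20.5, 20.4))   # both small: normal behaviour
print(t2_and_q(28.0, 27.9))   # large T²: unusual environmental change
print(t2_and_q(20.1, 26.0))   # large Q: sensors disagree, likely a fault
```

The split matches the text: T² flags unusual but correlated behaviour of the whole array, while Q flags readings the correlation model cannot explain, such as a single malfunctioning sensor.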
A special type of application is the use of sensors in batch processing supervision.
While a batch is processed, multiple traces are recorded for different parameters or at multiple
places. From a PCA-based decomposition, a set of characteristic base patterns is extracted
from a set of historical recordings. Applying these patterns to the traces delivered by the
sensors yields key numbers which characterize the process itself and help to assess
and monitor the process quality.
Ultimately, the algorithms have been successfully applied to the HWH and IEF
Werner demonstrators. In the case of the IEF Werner demonstrator, a linear axis moving up
and down was integrated in the system (a SelComp was created to wrap the equipment)
and a Statistical Analysis was made of the acquired data. As Figure 7 shows, the statistical
values, T² and maximum model error, were calculated and compared against alternating
limits (red dots), always one for the up and one for the down movement. If any limit is exceeded,
the run is considered faulty (red circle); the test ran for 50 cycles.
Figure 7 - Statistical evaluation of traces to detect faulty movements – IEF Werner
For the HWH demonstrator, current curves were obtained from the welding
machine. From these curves, a PCA decomposition was made to
obtain the most significant components and, using the singular values, a Statistical
Analysis was made using, again, the T² and maximum model error. Figure 8 shows that this
analysis is a really good way of monitoring deviations of shapes in process batches
captured from the welding machine. For this demonstrator, 250 welds were made to test
the approach.
The impact of the most advanced and robust technological developments in
industry changes the way the shop-floor is perceived, analysed, interpreted and operated. With the use of sensory solutions,
the factory blueprint can be obtained by simply adding new equipment to measure the
main processes, and therefore their dynamics can be quantified. With this quantification, it is
possible to analyse the key points of optimization and improve the system's performance
and reliability. For that purpose, data from the various shop-floor equipment needs to be
integrated in a manufacturing system, such as an MES or ERP, for data acquisition, and
collectively stored for further analysis.
Based on this, we can notice the increasing number of publications and citations of scientific
papers in conferences and journals relating sensory solutions, mainly Wireless Sensor
Networks (WSN), to Cloud-based solutions. This is demonstrated by Figure 9 and
Figure 10, which present the number of citations and published papers per year.
This trend is embodied in the Sensor Cloud concept,
which aims to develop a platform for sensor integration, data storage and data analysis.
The idea is to have a distributed system with a well-defined API to establish standardized
communication with heterogeneous Wireless Sensor Networks, to allow easy access to
this data for integration with other external systems, and to provide a set of graphical user tools to
increase the easiness and flexibility when dealing with such kinds of systems.
It is based on this kind of concept, and on all the major industry difficulties, that the
sensor-related developments of the SelSus project were made. This means that the
solutions presented, such as Statistical Analysis, Dynamic Modular Software
Reconfiguration and Flexible Sensor Integration, have a direct impact on manufacturing
processes. These technologies and new concepts are the starting point towards a more
connected factory: easier to perceive, easier to handle in case of sudden deviations, and
ultimately easier to act upon when a decision needs to be made.
Nowadays, the paradigm of sensing the factory dynamics can be rewritten around
smart industrial sensors. This means that, using the concepts of on-the-fly reconfiguration
and statistical analysis, sensors can provide more than raw information, more than a
set of data acquired at a high frequency that needs to be post-processed. They can
provide metrics, machine states, process drifts and so forth, to be used for decision making or
even to directly feed a machine for online calibration and optimization. The benefits are not
only in the efficiency of the machines, but also in how efficiently the system users operate the
equipment. The fact that these solutions do not require direct coding to perform
the desired changes in the system, but instead a graph-like interface to design the workflow
of data treatment, opens the door for many people to work in such industrial
environments, with the same outcome as a programmer but with less effort. This
represents a new approach to what the usual systems in industry can provide in terms of
flexibility and ease of use.
Additionally, the concept of Plug'n'Produce is also complemented by the solutions
proposed. One aspect already explored is device discovery and data exchange using
the publish-subscribe pattern, which is nowadays well implemented by technologies like
UPnP, OPC-UA, DDS and others; the effort is now focused on how to extract the
information from the shop-floor components for further broadcast. This issue is tackled by
the Flexible Sensor Integration solution, which replaces the use of a programming
language with a graphical design. With the use of the DMSR, the parsers can
be deployed at runtime in the SelComps at the shop-floor level, without the need to shut a component
down, deploy the parser and start it up again, going through the whole process of device
discovery. Once again, the participation of the user in the most important phases of building
and maintaining a process line is the key element that drives the presented work.
Figure 10 - Published papers per year
Set of keywords used for the search in Web of Science platform:
Sensor cloud computing;
Sensor cloud industry;
Sensor cloud manufacturing;
Sensor Cloud Application;
Sensor Cloud Storage;
Sensor cloud industrial;
Sensor Cloud Infrastructure;
Sensor Cloud Platform;
Sensor Cloud Servers;