Current Projects

Past Projects

LoCoVision: Low-cost embedded 1D vision sensors for smart monitoring applications

Low-cost embedded one-dimensional (1D) line-scan image sensors open up many opportunities for cost-effective embedded imaging systems that enable continuous online monitoring of condition, process and product quality. In this project we develop a generic, very low-cost 1D line-scan camera set-up. This line-scan camera will have a read-out speed of more than 10 kHz at a resolution of 128 pixels, for a total set-up cost below € 100.

We also implement motion compensation algorithms based on optimal spatio-temporal processing of time-consecutive 1D images. The set-up will then run algorithms to inspect the object under test, such as diameter calculation, motion tracking, arrival-time estimation and detection of multiple objects. Finally, we develop a lighting system that is fully controllable by the line-scan camera.
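As an illustration of the kind of measurement algorithm the set-up will run, the sketch below computes an object diameter from a single 128-pixel line-scan profile by simple thresholding; the threshold value and the dark-object-on-bright-background assumption are illustrative, not taken from the project.

```python
def object_diameter(profile, threshold=128):
    """Return the object's diameter in pixels, or 0 if no object is seen."""
    # Indices of pixels darker than the background.
    dark = [i for i, v in enumerate(profile) if v < threshold]
    if not dark:
        return 0
    # Diameter = span from the first to the last dark pixel.
    return dark[-1] - dark[0] + 1

# 128-pixel line: bright background (200) with a 32-pixel dark object (30).
line = [200] * 128
for i in range(40, 72):
    line[i] = 30
print(object_diameter(line))  # 32
```

At a 10 kHz line rate this per-line cost is trivial, which is what makes such measurements feasible on low-cost embedded hardware.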

The project received research funding from Flanders Make and IWT.

Contact: Stef Van Wolputte, stef.vanwolputte[at]; Wim Abbeloos, wim.abbeloos[at]

Detection of Abnormal Behaviour in Surveillance Applications

The growing number of surveillance cameras in public places cannot all be watched by human operators, yet a real-time response requires exactly that. In this PhD project we develop a system that supports the human operator by interpreting the scene using computer vision, combined with knowledge representation to describe normal behaviour. If the observed scene does not conform to the described allowed behaviour, the system responds in an appropriate manner.
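A minimal sketch of checking observations against a description of allowed behaviour: a tracked person's trajectory is tested against two declarative rules. The rule set (a speed limit and a restricted area) and all thresholds are invented for illustration; the project's knowledge representation is far richer.

```python
def is_abnormal(track, max_speed=5.0, forbidden=((50, 50), (80, 80))):
    """track: list of (x, y) positions of a tracked person, one per frame."""
    (x0, y0), (x1, y1) = forbidden
    # Rule 1: speed between consecutive frames must stay below max_speed.
    for (xa, ya), (xb, yb) in zip(track, track[1:]):
        if ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5 > max_speed:
            return True
    # Rule 2: the restricted area must never be entered.
    for x, y in track:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return True
    return False

print(is_abnormal([(0, 0), (2, 0), (4, 0)]))  # False: slow, allowed area
print(is_abnormal([(0, 0), (10, 0)]))         # True: speed 10 > 5
```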

The main research in this project focuses on fast and robust pedestrian detection. For more information about the publications on these topics, please visit the project website, and feel free to contact me with further questions or remarks.

Contact: Floris De Smedt, floris.desmedt[at]

RaPiDo: Vision-Guided Random Picking for Industrial Robots

In this project we equip industrial robots with vision sensors. This allows the robot to grasp and handle objects with unknown position and orientation. This eliminates the need for elaborate mechanical systems to align products correctly so that they can be grasped at predefined locations.

We explore the state of the art in vision technology and robotics. Some of the new technology we will study and demonstrate:

  1. 2D and 3D sensors, including new technology such as multi-flash cameras, structured light and time-of-flight cameras.
  2. Algorithms to recognize and localize products in images or 3D point clouds.
  3. Advanced robotic techniques such as in-hand scanning, visual servoing and real-time path planning, which allow the robot to move efficiently and without collisions.
  4. The interface between the robot, the sensors and the system.
  5. Performance assessment on a wide variety of products, in a number of important applications: conveyor picking (grasping products on a moving conveyor belt), (de)palletizing (stacking products such as boxes or bags onto pallets, ready for transport) and random bin picking (picking products piled randomly in a container).
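As a sketch of item 2 (localizing products in 3D point clouds), the snippet below estimates a rigid pose from matched 3D point pairs with the standard SVD-based (Kabsch) least-squares method; the function name and the toy data are our own, not the project's implementation.

```python
import numpy as np

def estimate_pose(model_pts, scene_pts):
    """Least-squares rigid transform (R, t) mapping model points to scene points."""
    cm, cs = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scene_pts - cs)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Toy example: rotate a small point cloud 90 degrees about z and shift it.
model = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])
scene = model @ Rz.T + np.array([2., 3, 0])
R, t = estimate_pose(model, scene)
print(np.allclose(R, Rz), np.allclose(t, [2, 3, 0]))  # True True
```

In practice the point correspondences come from a feature-matching or ICP step; this closed-form solve is the inner building block.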

Contact: Wim Abbeloos, wim.abbeloos[at]

Automatic parameter optimisation for heterogeneous clusters

Lately, the world of High-Performance Computing has seen a lot of progress in two areas. On the one hand, there is general-purpose computing on graphics processing units (GPGPU), where massively parallel streaming processors like the GPU, normally used for graphics applications, are used to solve other computationally intensive problems. On the other hand, there is the Big Data phenomenon, of which the MapReduce programming model is an essential part. A big part of MapReduce's success is due to the ease with which it allows programmers to utilise clusters of relatively low-end hardware to solve problems involving huge quantities of data.

In this project we try to improve frameworks that combine MapReduce and GPGPU computing. Recent research has already yielded frameworks (like GPMR and MGMR) that implement MapReduce on heterogeneous clusters; however, their researchers indicate that the performance of these systems is highly dependent on a correct configuration. We aim to address this shortcoming by creating a system that can automatically determine an ideal execution strategy for these heterogeneous MapReduce clusters. We do this by first using simple statistical methods (like principal component analysis) and then techniques from the fields of Machine Learning and Probabilistic Logic Programming (PLP). After that, we also try to influence the scheduler of these systems directly, with the purpose of using these models in real time.

This project can be divided into four distinct phases. In the first phase we limit ourselves to a single machine containing both a GPU and a CPU, and try to find an optimal execution strategy that utilises both processors. In the second phase we extend this system to a cluster of heterogeneous machines. The third phase uses a detailed description of the cluster (whereas the second only uses some general parameters, such as the average number of GPUs per machine). Finally, in the fourth phase we integrate our system into the scheduler of the MapReduce framework, to be able to influence performance at runtime.
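The simple statistical starting point mentioned above, principal component analysis, can be sketched as follows on a table of benchmark runs (rows = runs, columns = configuration parameters and measurements). The data here is synthetic and the variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two "real" degrees of freedom (e.g. GPU share, chunk size) observed
# through four correlated measurement columns, plus a little noise.
latent = rng.normal(size=(200, 2))
mix = np.array([[1.0, 0.5, 0.0, 2.0],
                [0.0, 1.0, 1.5, 0.5]])
runs = latent @ mix + 0.01 * rng.normal(size=(200, 4))

X = runs - runs.mean(axis=0)              # center each column
cov = X.T @ X / (len(X) - 1)              # sample covariance
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending order
ratio = eigvals[::-1] / eigvals.sum()     # explained variance, descending
print(ratio.round(3))  # the first two components carry almost everything
```

Such an analysis indicates how many independent knobs actually drive performance, before heavier machine learning models are fitted.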

Contact: Wiebe Van Ranst, wiebe.vanranst[at]

Cametron: Technology for a virtual camera crew

This PhD research project focuses on the development of a robotic flying cameraman. Within the Cametron project we aim at making the process of filming an event completely automatic. A human camera crew will be replaced by automated static PTZ cameras and camera- and microphone-equipped UAVs, and the movie director will be replaced by a virtual director, so that an event, for example a music performance or a sports game, can be captured in real time with minimal human intervention. This project focuses on the virtual cameraman part of Cametron, in which an unmanned aerial vehicle (UAV) carrying a PTZ camera and microphone is robotized so that it can gather video of an event fully autonomously, based on high-level instructions from the virtual director. These instructions are no more detailed than framing a certain (set of) actor(s) in a certain cinematographic shot composition (long shot, shoulder shot, mid shot, close-up, extreme close-up, etc.).

The main challenges to be overcome in this project are real-time actor detection and tracking, topological localization, and image-based visual servoing and motion planning.
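A minimal sketch of image-based visual servoing for the PTZ camera: a proportional law turns the pixel error between the tracked actor's bounding-box centre and the frame centre into pan/tilt rate commands. The gain, frame size and sign conventions are illustrative, not the project's.

```python
def servo_step(bbox, frame_w=1280, frame_h=720, gain=0.002):
    """bbox = (x, y, w, h) of the tracked actor; returns (pan_rate, tilt_rate)."""
    cx = bbox[0] + bbox[2] / 2
    cy = bbox[1] + bbox[3] / 2
    err_x = cx - frame_w / 2          # positive: actor right of centre
    err_y = cy - frame_h / 2          # positive: actor below centre
    # Proportional control: command rates that shrink the pixel error.
    return -gain * err_x, -gain * err_y

# Actor right of and below centre -> negative rates under this convention.
print(servo_step((800, 500, 80, 160)))
```

Real IBVS would also account for the camera's interaction matrix and the UAV's dynamics; a pure pixel-space proportional law is the simplest member of that family.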

Contact: Dries Hulens, dries.hulens[at]

3D4SURE: 3D Photogrammetry for SURveying Engineers

In all sectors of our society the demand for geo-information increases, and one of the most important technological evolutions is the extended use of sensors such as satellites, laser scanners and high-resolution cameras. The rise of these sensors within the geo-sector produces an explosion of digital images which must be processed appropriately. This means that the profile of the classical land surveyor must adapt to these technological innovations: geo-ICT within surveying becomes essential. The classical techniques (total station, GPS, ...) are increasingly being complemented by modern sensor technologies such as photogrammetry and laser scanning, which are inherently data- and analysis-intensive. Specifically for surveying there is an evolution towards 3D models and information (volume calculations, facade reconstructions), which is difficult to achieve with classical techniques alone.

Therefore the main goals of this project are (1) to assess and evaluate the possibilities and limitations of 3D photogrammetry for surveying applications, and (2) to facilitate its use by small and medium enterprises (SMEs) within the geo-industry. In the project we focus on three easily accessible image sources: the digital camera, the total station with integrated camera and UAV images. As such, large financial investments are avoided. As the distance between the sensor and the object is rather limited, the term close-range photogrammetry is used for these image sources.
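The geometric core of close-range photogrammetry, recovering a 3D point from two calibrated images, can be sketched with linear (DLT) triangulation. The camera matrices below form a toy stereo configuration; real pipelines add calibration, feature matching and bundle adjustment.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector of A (homogeneous point)
    return X[:3] / X[3]

# Two cameras with identity intrinsics, the second shifted 1 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1)
x2 = P2 @ np.append(X_true, 1)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]   # perspective division
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```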

The project aims to clarify the capabilities of the variety of (semi-)automated processing tools recently available on the market and to test the quality of the resulting 3D models. At the moment, the use of photogrammetry by land surveying SMEs is rather limited, due to a lack of understanding of the processing involved in creating 3D models. However, knowledge of the capabilities and limitations of this technique is a prerequisite for its proper use. The project will address this by transferring that knowledge to the end user.

Contact: Toon Goedemé, toon.goedeme[at]

TOBCAT: Industrial Applications of Object Categorization

Object detection is already widespread in industrial applications, as long as the appearance of the object is fixed (size, shape, colour, ...). Using simple image processing techniques like pixel-value thresholding and pattern matching, known objects can be recognized in real time in 2D images.

However, in some industrial applications the appearance of an object can vary a lot within one object class, making it much harder for these simple techniques to detect an actual object in an image. Examples of such objects are fruits and vegetables, pedestrians, animals and microscopic cells. There is a large need for more general detection techniques, able to detect a class of objects based on class characteristics rather than a single object instance. Object categorization techniques are ideal to cope with this intra-class variability.

The goal of this TETRA research project is to examine, implement and optimize these new state-of-the-art object categorization techniques (Viola & Jones, Felzenszwalb, …) in order to provide a solution for practical and realistic industrial problems. A second goal is to provide ready-to-use techniques, making it possible for industrial partners to implement these techniques themselves. Following this idea, an interface with two major computer vision libraries (Halcon, OpenCV) will be provided.
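A sketch of the mechanism underlying the Viola & Jones detector named above: the integral image, which makes the rectangular (Haar-like) feature sums of the cascade computable in constant time. The example data is arbitrary.

```python
import numpy as np

def integral_image(img):
    """One cumulative-sum pass; padded with a zero row/column for easy indexing."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] in O(1): four lookups."""
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(box_sum(ii, 1, 1, 3, 3))   # 5 + 6 + 9 + 10 = 30
print(img[1:3, 1:3].sum())       # 30, the same sum computed directly
```

A Haar feature is just a difference of such box sums, which is why thousands of them can be evaluated per window in real time.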

Contact: Steven Puttemans, steven.puttemans[at]

InSight Out: object recognition for mobile eye-tracking data analysis

In this project we aim at developing and implementing effective new methods for analyzing gaze data collected with mobile eye-tracking devices. More specifically, we opt for integrating object recognition algorithms from vision engineering, such as invariant region matching techniques, into gaze analysis software. An object-based approach may provide significant added value, in terms of analytical precision, flexibility, additional application areas and cost efficiency, over the existing (commercial) systems that use predefined areas of analysis (e.g. IR marker systems).

In order to test the actual analytical power of object recognition algorithms for the analysis of gaze data recorded in the wild, we develop a series of test cases in different real world situations, including shopping behavior, navigation, handling and usability of mobile systems. By setting up these case studies in close collaboration with key players in the relevant fields (retailers, signage consultants, market and user-experience research, and developers of eye-tracking hard- and software), we are able to sketch an accurate picture of the pros and cons of the proposed method in comparison to current analytical practice.

Contact: Stijn De Beugher, stijn.debeugher[at]

S.O.S. OpenCL - Multicore Cooking

Today, computer manufacturers are confronted with two challenges: on the one hand, it is almost impossible to increase computational performance by further increasing clock speeds; on the other hand, there is an increasing demand for systems that consume less power. An answer to these challenges may lie in multicore processors. Producers of graphics hardware have a technological advantage in this field, since current-generation GPUs are massive multicore processors that can also be used for any highly parallel computation, even non-graphical ones. One barrier still preventing market penetration, however, is the absence of a reliable software framework for general-purpose programming on such units. This has led to the development of the OpenCL framework, which consists of an extension of the C language, API libraries and a runtime system that together are claimed to form an open, performant and platform-independent programming framework for GPUs and other highly parallel platforms.

The goal of this project is twofold. First, we want to review those claims and decide whether OpenCL really is as open, performant and platform independent as claimed. Second, we will produce a ''cookbook'' of concrete, practical guidelines and best practices aimed at assisting companies that wish to migrate their software to OpenCL. Our research will be guided by hands-on experience with a number of practical case studies, designed to capture as much of the computational spectrum as possible. First, we will consider a classic set of numerical algorithms whose behaviour under parallelisation is well understood. Second, we will study a state-of-the-art image processing case that reflects the need to perform computations with high data density. A previous project concerning the Cell processor showed that data density is often a more important bottleneck than speed of computation. Third, we will examine a SAT solver, an advanced search algorithm with high computational complexity that has industrial applications in Electronic Design Automation, among others. While the image processing case mainly focuses on data parallelism, this case is concerned with task parallelism. Together, this set of cases will provide quite an exhaustive overview of how OpenCL can aid the development of cutting-edge software implementations. The cases will be implemented and studied on a diverse set of hardware platforms.
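The data-parallel pattern that the image processing case exemplifies can be sketched in plain Python (this illustrates the concept only; it is not OpenCL): the same kernel is applied independently to chunks of a large input, and the partial results are reduced afterwards.

```python
from concurrent.futures import ThreadPoolExecutor

def kernel(chunk):
    # The per-work-item computation: here, a sum of squares.
    return sum(v * v for v in chunk)

data = list(range(100_000))
chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]

# Map the kernel over independent chunks, then reduce the partial sums.
with ThreadPoolExecutor() as pool:
    parallel = sum(pool.map(kernel, chunks))

print(parallel == sum(v * v for v in data))  # True
```

In OpenCL the kernel would be compiled C-like code and each chunk element a work item; the map/reduce structure is the same.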

The main goal of this project is to shorten the learning curve for companies that want to look into OpenCL. Not only will the detailed ''cookbook'' be available, but our partners will also be invited to seminars by our researchers that offer OpenCL training, and they will be able to consult our researchers individually, thus directly benefiting from their experience with the framework. Moreover, partners that wish to be more closely involved may also propose a case study of their own.

Contact: Floris De Smedt, floris.desmedt[at]; Sander Beckers, sander.beckers[at]

The active blind spot camera: hard real-time recognition of moving objects from a moving camera

This PhD research focuses on visual object recognition under specific, demanding conditions: both the object to be recognized and the camera are moving, and the time available for the recognition task is extremely short. This generic problem is applied here to a specific one: the active blind spot camera.

Statistics show that a large number of accidents involving trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard for the truck driver to perceive. Simple solutions such as blind spot mirrors (now required by EU law) have unfortunately not been able to decrease the number of accidents per year. Another solution must be found, in which a detection system actively warns the driver. The purpose of this PhD research is therefore the realization of an active blind spot camera that detects vulnerable road users in the camera image using image processing, and warns the driver about them. To achieve this, two detection modalities will be combined. First, an appearance-based method will be developed that can detect vulnerable road users in real time. In addition, a detector will be built that uses movement information (e.g. parallax). These two sources of information will then be combined intelligently to achieve a robust detector.
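A minimal sketch of the intended cue integration: an appearance score (as a detector would produce) combined with a motion score from frame differencing inside the candidate region. The fusion weight and the data are invented; the project's actual combination will be learned and far more sophisticated.

```python
import numpy as np

def motion_score(prev, curr, bbox):
    """Mean frame-difference energy inside bbox = (x, y, w, h), in [0, 1]."""
    x, y, w, h = bbox
    diff = np.abs(curr[y:y+h, x:x+w].astype(float) - prev[y:y+h, x:x+w])
    return float(diff.mean()) / 255.0

def fused_confidence(appearance, motion, w_app=0.6):
    # Weighted combination of the two cues (weights are illustrative).
    return w_app * appearance + (1 - w_app) * motion

prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:80, 50:70] = 200                     # something moved in this region
m = motion_score(prev, curr, (50, 40, 20, 40))
print(round(fused_confidence(0.7, m), 3))    # 0.734
```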

This research subject poses a number of challenges. The very small time margin available conflicts with the very high reliability requirement; an implementation on dedicated hardware (FPGA/GPU) will certainly be part of the solution. Moreover, time-efficient recognition of a heterogeneous object class (vulnerable road users include pedestrians, cyclists, moped riders and wheelchair users) and the proposed integration of appearance and motion cues are challenging. Also, the specific sideways-looking blind spot camera position differs significantly from the forward-looking pedestrian detection commonly described in the current literature.


Contact: Kristof Van Beeck, kristof.vanbeeck[at]

ICVS: Intelligent Control of weaving machines with Vision Systems (Picanol)

In this project, we work on the development of a real-time on-loom textile camera inspection system. The aim is to detect weaving errors in real time, so that the weaving machine can be stopped before a new yarn is inserted. The next step is to observe textile parameters and predict upcoming errors, which can then be avoided by intelligent control of the weaving loom.

EAVISE is mainly involved in the real-time implementation of the image processing algorithms. Other partners are Ghent University and IBBT.

Contact: Toon Goedemé, toon.goedeme[at]

Contactless Monitoring of Activities of Daily Life

Due to the aging of the population, there will be more and more elderly people who depend on the support of others. In order to increase the quality of care and life, and to keep the cost of health care sustainable, we need to find ways to enable these elderly people to live independently and comfortably in their own environment for as long as possible.

Normally, a questionnaire (e.g., the Katz scale) is used to assess an older person's ability to perform activities of daily living (ADL) independently. However, this is time-consuming and it is often difficult to obtain consistent and valid results.

This research aims at automatic non-intrusive recognition and monitoring of ADL of elderly people living alone at home.

To recognize these ADL, we will use a combination of several detection mechanisms: contactless sensors measuring the consumption of electricity, water and gas, together with security sensors and video cameras (for position, movement and posture detection). This combination of information, or sensor fusion, will provide a robust classification of ADL. Identifying and labeling lifestyle patterns is carried out automatically using machine learning techniques.
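A toy sketch of the sensor-fusion idea: evidence from several contactless sensors is combined into an ADL label. The sensor names, activities and weights below are invented stand-ins; in the project these associations are learned from data, not hand-coded.

```python
# Hypothetical evidence weights: how strongly each sensor event
# supports each activity of daily living.
EVIDENCE = {
    "cooking":  {"stove_gas": 3, "kitchen_motion": 2, "water_tap": 1},
    "bathing":  {"water_tap": 2, "bathroom_motion": 3},
    "sleeping": {"bedroom_motion": 1, "lights_off": 2},
}

def classify_adl(events):
    """events: set of sensor events observed in one time window."""
    scores = {adl: sum(w for e, w in ev.items() if e in events)
              for adl, ev in EVIDENCE.items()}
    return max(scores, key=scores.get)   # highest-evidence activity wins

print(classify_adl({"stove_gas", "kitchen_motion"}))   # cooking
print(classify_adl({"water_tap", "bathroom_motion"}))  # bathing
```

Note how the ambiguous `water_tap` event is disambiguated by the co-occurring sensors, which is exactly the benefit fusion brings over any single sensor.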

Sudden deviations from these patterns (e.g., accidents) as well as slowly developing ones (e.g., cognitive and behavioral disturbances in dementia) are also detected, and trigger an event to alert all stakeholders. Also unique to this research is the fact that three setups will be installed and tested in a real-life environment.

The results of the research will make it more feasible for elderly people living alone to sustain their independence longer, and will provide a system that assists caregivers in their support of elderly people.

Contact: Toon Goedemé, toon.goedeme[at]

3DAMEEA: 3D Additive Manufacturing of Electrical and Electronic Applications

Nowadays manufacturers face the challenge of delivering customized products more quickly than before: multifunctional, freeform-shaped products with mechanical, electrical and electronic functions. 3D Additive Manufacturing (3D-AM) can be used to meet these challenges. 3D-AM refers to a group of technologies for building parts from 3D Computer Aided Design (CAD) data, or data from 3D scanning systems. Based on thin horizontal cross-sections taken from a 3D computer model, parts are produced layer by layer in different types of materials.

Without the constraints of conventional manufacturing technologies, which in most cases favour mass production, designers gain the freedom to create designs that were previously impossible, impractical, too expensive or too slow to manufacture. Until a few years ago these newly gained possibilities were limited to mechanical parts. Thanks to new developments in electrically conducting and semiconducting materials, mainly used for 2D printing, a wide field of new applications is opened up by 3D printing of these materials.

The aim is to explore the possibilities of new materials, in combination with new fabrication techniques, to manufacture electrical and electronic applications with high added value, either as customised one-off or small-series production (Rapid Prototyping) or as customised mass production (Rapid Manufacturing).

In particular, 3D printing technologies based on fused deposition, powder-bed printing, ink-jet printing and aerosol-jet printing will be investigated. Starting from these new possibilities, the redesign of existing applications and the design of new applications will be explored.

In this project, EAVISE develops an in-process monitoring system for different 3D printing techniques. During printing, a camera inspects the print head, enabling the parameters of the 3D printer to be tuned in real time in order to achieve good print quality.
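A sketch of such a closed monitoring loop, with the camera measurement and the printer's response replaced by a toy linear model: a proportional controller tunes the extrusion rate until the bead width seen by the camera matches a target. All names and numbers are illustrative.

```python
TARGET_WIDTH = 0.40   # mm, desired bead width
GAIN = 0.5            # proportional gain (illustrative)

def measured_width(extrusion_rate):
    # Stand-in for the camera measurement of the real printing process.
    return 0.8 * extrusion_rate

rate = 0.3                               # initial extrusion rate
for _ in range(20):                      # one iteration per camera frame
    error = TARGET_WIDTH - measured_width(rate)
    rate += GAIN * error                 # tune the printer parameter
print(round(measured_width(rate), 4))    # converges to 0.4
```

With gain 0.5 the error shrinks by a factor 0.6 per iteration, so the loop settles well within the 20 simulated frames.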

Contact: Toon Goedemé, toon.goedeme[at]

FallCam: Camera-based Elderly Fall Detection

This project focuses on building and validating a camera system for fall detection of elderly people who live at home. The system will be installed in several rooms of the house of an elderly person with an increased fall risk, and is responsible for alerting a caregiver when a fall incident occurs.

In this project we will build a prototype fall detection camera system and validate it with real elderly people with an increased fall risk. To do this, we will successively: (i) set up an experimental vision system that is capable of capturing a large part of the house; (ii) build a database of labeled image sequences, consisting of fall movements simulated by actors as well as observed real fall movements of elderly people with an increased risk of falling; (iii) implement, test and refine existing fall detection algorithms from the available literature, using the recorded database; (iv) implement this algorithm on a prototype embedded platform so that the images can be processed in real time, keeping cost and power consumption as low as possible; (v) execute detailed experiments with this prototype, observing elderly people with a high fall frequency over long periods.
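One simple fall cue that algorithms of the kind refined in step (iii) often use can be sketched from a tracked person bounding box: a sudden drop of the box centre combined with the box becoming wider than tall. The thresholds and data are invented.

```python
def detect_fall(track, drop_thresh=40, ratio_thresh=1.2):
    """track: list of (x, y, w, h) person boxes, one per frame."""
    for (xa, ya, wa, ha), (xb, yb, wb, hb) in zip(track, track[1:]):
        # Image y grows downward, so a fall makes the centre y increase.
        centre_drop = (yb + hb / 2) - (ya + ha / 2)
        lying = wb / hb > ratio_thresh       # box wider than tall
        if centre_drop > drop_thresh and lying:
            return True
    return False

standing = [(100, 50, 40, 120)] * 5
falling = standing + [(100, 150, 120, 40)]   # box drops and flattens
print(detect_fall(standing), detect_fall(falling))   # False True
```

Real systems add temporal smoothing and an inactivity check after the drop to keep the false alarm rate acceptable.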

Contact: Toon Goedemé, toon.goedeme[at]

FAST-ProMoCo: Fast Prototyping with Model-Based HW/SW Co-design

The Fast-ProMoCo project investigates the use of modern model-based design techniques for real-time prototyping of complex algorithms for high-speed applications. A relevant test case is used for the exploration and evaluation of these techniques. They offer the possibility to drastically reduce development time, to optimise the implementation, and to accelerate simulation using in-the-loop co-simulation. Flemish companies will gain insight into the potential of these techniques, and will be supported during their introduction.

This project will investigate the innovative possibility of substantially accelerating the development of complex, computationally expensive algorithms using a model-based design technique. The algorithm is described at a higher abstraction level (model level) and then automatically converted by development tools into hardware (VHDL descriptions of hardware blocks that can be realized on FPGAs) and optimised software (C). In the project, a number of available tools for model-based design will be tested and compared.

Contact: Kristof Van Beeck, kristof.vanbeeck[at]; Jan Meel, jan.meel[at]

RoboCup SSL

RoboCup is a worldwide robotic football competition. Every year, an international tournament is organised in different leagues. We are working towards participation in the Small Size League (SSL), where robots of about 15 cm compete 5 against 5 on a field of 4 by 7 metres. The development of a robot soccer team requires the combination of multiple engineering disciplines: electronics, image processing, mechanics and computer science.

In this project, we engage undergraduate students in the final year of electronics/ICT engineering. The goal is to teach students collaborative software/hardware development practices: 20-40 students work together on one big embedded electronics project.

Contact: Toon Goedemé, toon.goedeme[at]

SIVOL - Snelle Implementatie van Vormgebaseerde Objectherkenning in Landbouwtoepassingen (Fast Implementation of Shape-Based Object Recognition in Agricultural Applications)

SIVOL is a 1-year study project in which we explore the field of computer vision in agricultural applications. The idea is to present a fast implementation of shape-based object recognition to be used for agricultural purposes. Current methods are mostly based on pixel-based colour detection, which comes with many disadvantages, such as objects blending into the background. Therefore we want to explore the feasibility of more advanced image recognition methods for these applications.

The main reason why shape-based recognition techniques aren't used in agriculture yet is their need for high processing speed, and the higher demands imposed by the environmental conditions. Dust, vibrations and illumination are only a few of the factors that can influence recognition performance. The high variety of shapes to be detected also requires more sophisticated algorithms.

Currently, we are carrying out a one-year preparatory study. Our objectives are as follows:

  1. Getting an overview of the current state of robust shape-based recognition techniques and their use in agriculture.
  2. Building a contact directory of possible national and international partners, both industrial and academic.
  3. Investigating the demand and interest from the agricultural industry in these more advanced vision techniques, and possible bottlenecks when moving towards commercial implementations.
  4. Building a demonstration application to show the potential of shape-based object recognition.
  5. Composing a proposal for a follow-up project, responding to the needs of the interested (Flemish) companies.

Contact: Floris De Smedt, floris.desmedt[at]

Vision-based automatic wheelchair navigation

In this work, we present a navigation system with unique properties. With only an omnidirectional camera as sensor, the system automatically builds robust and accurate topologically organised maps of a large, complex, unmodified environment. It can localise itself in that map at any moment, both at startup (the kidnapped-robot problem) and using knowledge of former localisations. The topological nature of the map is similar to the intuitive maps humans use; it is memory-efficient and enables fast and simple path planning towards a specified goal. We developed a robust visual servoing technique to steer the system along the computed path, as well as a vision-based range scanner to detect and avoid obstacles.

The power of this approach lies in the combination of a topological world model with state-of-the-art image matching techniques based on fast wide-baseline local features. The latter make it possible to find image correspondences without the need for artificial markers in the scene, and also enable an efficient description of the environment.
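Path planning on such a topological map reduces to graph search: nodes are places (each stored as an omnidirectional image), edges mean the robot has travelled between them, and a breadth-first search gives a shortest hop-count route. The place graph below is a toy example.

```python
from collections import deque

topo_map = {
    "hall": ["kitchen", "corridor"],
    "kitchen": ["hall"],
    "corridor": ["hall", "office", "lab"],
    "office": ["corridor"],
    "lab": ["corridor"],
}

def plan_path(graph, start, goal):
    """Breadth-first search: returns the shortest node sequence, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(plan_path(topo_map, "kitchen", "lab"))  # ['kitchen', 'hall', 'corridor', 'lab']
```

This is why topological maps make planning cheap compared to metric grid maps: the search space is a handful of places, not millions of cells.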

We demonstrated our navigation approach on two applications, namely autonomous wheelchair navigation and pedestrian navigation for a virtual tourist guide.

Contact: Toon Goedemé, toon.goedeme[at]
Full PhD text: Visual Navigation

Multi-sensor motion capture via fusion of vision and inertial sensors

In this PhD project, a motion sensor will be designed that combines the advantages of both visual and inertial sensors.

In contrast to inertia-based sensors, cameras have no problem with error accumulation: the position is measured with respect to an absolute world reference frame. Unfortunately, drawbacks of using a camera are the relatively low frame rate and the high computational burden of the image processing algorithms.

Indeed, combining a miniature camera with inertial sensors would yield a measurement system that has both a small absolute error and a high sample rate; the two sensors are highly complementary. Moreover, the image processing algorithm can be simplified and optimised by taking advantage of the fast but inaccurate prior position estimate coming from the inertial sensors.
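The fusion idea can be sketched in one dimension: a fast but drifting inertial position estimate is corrected at a lower rate by absolute camera fixes, via a simple complementary blend. The rates, noise levels and blending factor below are illustrative, not the project's design.

```python
def fuse(inertial_steps, camera_fixes, alpha=0.3):
    """inertial_steps: per-sample displacement increments (e.g. 100 Hz);
    camera_fixes: {sample_index: absolute_position} (e.g. 10 Hz)."""
    est, trace = 0.0, []
    for k, step in enumerate(inertial_steps):
        est += step                       # fast prediction (drifts)
        if k in camera_fixes:             # slow, absolute correction
            est += alpha * (camera_fixes[k] - est)
        trace.append(est)
    return trace

# True motion: 1 mm per sample; the inertial path overestimates by 5 %.
true_pos = [0.001 * (k + 1) for k in range(100)]
steps = [0.00105] * 100
fixes = {k: true_pos[k] for k in range(0, 100, 10)}
trace = fuse(steps, fixes)
drift_only = sum(steps)                   # pure inertial integration
print(abs(trace[-1] - true_pos[-1]) < abs(drift_only - true_pos[-1]))  # True
```

A full design would replace the fixed blend with a Kalman filter whose gain reflects each sensor's noise, but the division of labour (inertial for bandwidth, vision for absoluteness) is the same.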

The project investigates how the low-level signal processing of all fused sensor hardware can be done in an integrated way, allowing each sensor to improve the signal processing of the others with respect to drift compensation, dynamic range and accuracy. We target an absolute accuracy better than 1 mm, with motion dynamics up to the accelerations typical of human walking or jogging, in semi-structured indoor environments.

The main novelty of the approach in this PhD is that the sensors will not be treated separately, but truly united into one integrated sensor. The result will be a portable embedded system that is small, compact and energy-efficient.

Contact: Koen Buys