Image and Interaction

E. Frenoux, D. Béroule, F. Bimbard, H. Ding, M. Gouiffès, C. Jacquemin, A. Setkov, PA Bokaris.

HTRI processing of a sequence of day and night pictures. Photo © Bertrand Planes

The theme “Image and Interaction” gathers several research fields concerning augmented reality, computer vision and art-science collaborations. Each of them aims to develop new technologies for the automatic processing of digital images and for improving human visual interaction. The problems addressed in this theme are:

  • Use of the physics of vision in camera-projector systems: geometric and colorimetric characterization, color invariance, exploitation of Human Visual System properties.
  • Scene analysis: 1) detection of saliencies and robust features (color, texture, feature points and regions); 2) spatial and temporal matching, tracking; 3) 3D reconstruction; 4) scene recognition (in collaboration with the CPU team, P. Tarroux).
  • Rendering: geometric and colorimetric adaptation; shaders for real-time calibration, adaptation, and interaction with moving targets or moving cameras.
  • Acceleration of algorithms through Graphics Processing Unit (GPU) programming.


Image processing for Augmented Reality

In Projector-based Augmented Reality (i.e. using video-projection to overlay physical space with visual digital data), the projected image must be calibrated against the physical world in order to find its optimal position. Beyond these core calibration issues, Projector-based Augmented Reality raises many image-processing challenges, such as:

  • Computing projection masks so that visual augmentation can be limited to subsets of a real scene (e.g. excluding spectators' shadows);
  • Transforming images in real time so that they can be re-projected onto the scene after modification (e.g. contour delineation);
  • Managing human-scene interaction.
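The projection-mask idea above can be illustrated with a minimal NumPy sketch (an assumption for illustration, not the group's actual GPU implementation): a region where the captured camera frame deviates from the expected appearance, such as a spectator's shadow, is excluded from the projected augmentation.

```python
import numpy as np

def projection_mask(expected, captured, thresh=30):
    """Binary mask of pixels where the captured camera frame deviates
    from the expected appearance (e.g. a spectator's shadow)."""
    diff = np.abs(captured.astype(np.int32) - expected.astype(np.int32))
    occluded = diff.max(axis=-1) > thresh   # per-pixel max over color channels
    return ~occluded                        # project only where unoccluded

def apply_mask(image, mask):
    """Black out the augmentation wherever projection is suppressed."""
    return image * mask[..., None].astype(image.dtype)

# Toy 4x4 scene with a simulated shadow in the center.
expected = np.full((4, 4, 3), 200, dtype=np.uint8)
captured = expected.copy()
captured[1:3, 1:3] = 40                     # darkened (shadowed) region
mask = projection_mask(expected, captured)
```

A real system would run this per frame on the GPU and smooth the mask to avoid flicker at shadow boundaries.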

All the algorithms are implemented on the GPU in order to optimize processing time and make them compatible with real-time interaction. Calibration makes it possible to rebuild the geometry of the physical world and to use it to compute image corrections on the planar parts of the physical scene.

Concerning calibration and real-time image compensation, a research collaboration has been developed with the IEF lab (ACCIS team); it materialized at the end of 2012 with the start of Alexander Setkov's PhD and the integration of two members of ACCIS (Michèle Gouiffès and Franck Bimbard) into our laboratory. Two project proposals have been made on these topics: 1) an “ANR blanc bilatéral” project, submitted in January 2013 in collaboration with Germany (HU and TUC universities); 2) a Digiteo “post-doctorant” project (a collaboration with F. Vernier, AMI, and C. Clavel, CPU).
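Image correction on a planar part of the scene amounts to warping by a 3x3 homography between the projector and the surface. The following sketch (illustrative only; the group's implementation runs in shaders on the GPU) shows how points are mapped through such a homography:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points (N, 2) through a 3x3 homography (projective transform)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# A pure translation expressed as a homography: shift by (5, -2).
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0,  1.0]])
corners = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
warped = apply_homography(H, corners)
```

In practice H is estimated from point correspondences between the projected and observed images, and the inverse warp is applied to the image before projection so that it appears undistorted on the surface.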

In recent years, cameras and projectors have become widespread and are integrated into many electronic devices (smartphones, pico-projectors). We can therefore use these technologies in Projector-based Augmented Reality applications. Since the pinhole model applies to both projectors and cameras, these two kinds of devices can be combined for 3D reconstruction and 3D tracking. To do so, the devices must be calibrated. For this purpose, we are working on the following two problems:

  • Calibration of cameras and projectors using seen and projected calibration grids;
  • Self-calibration of cameras and projectors only based on correspondences between seen and projected images.
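The shared pinhole model mentioned above can be written as x ~ K [R|t] X, with K the intrinsic matrix and (R, t) the device pose. A minimal sketch, with illustrative intrinsics (the numbers are assumptions, not calibrated values):

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N, 3) with the pinhole model x ~ K [R|t] X."""
    Xc = X @ R.T + t             # world -> device (camera or projector) frame
    x = Xc @ K.T                 # apply intrinsics
    return x[:, :2] / x[:, 2:3]  # perspective division

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])  # illustrative focal length / principal point
R, t = np.eye(3), np.zeros(3)          # device at the world origin
X = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
uv = project(K, R, t, X)
```

Calibration recovers K and (R, t) from correspondences with a known grid; for a projector the "observed" grid is the one it projects, which is what makes the problem symmetric with camera calibration.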

Since calibration algorithms for cameras have been widely used and refined over many years, our work focuses on projector calibration, which requires new image-processing algorithms to remain reliable despite the physical limitations inherent to these devices (luminosity …). Once calibration is done, 3D reconstruction can be computed using classical algorithms based, for example, on the essential matrix. In addition, we optimize and/or adapt these algorithms for several architectures, such as CPU/SIMD and GPGPU. This point is essential for using these algorithms in real-time applications. Our research on Projector-based Augmented Reality is used in various projects: for built-heritage augmentation, for interactive installations in public spaces, and more generally for some of the art/science applications described in the VIDA transversal theme.

Five PhD students worked on applications and extensions of image processing for Augmented Reality. Hui Ding studied the description and rendering of audio-graphic scenes in the framework of the ANR Topophonie project (PhD defended in 2013, ATER in 2013-2014); her results can be applied to audio and visual augmentations of physical scenes. Tifanie Bouchara developed comparative analysis methods for visual and auditory perception in audio-graphic scenes (post-doctoral position in 2013-2014). Sarah Fdili Alaoui's PhD proposed new perspectives for whole-body gesture interaction and motion analysis, in collaboration with IRCAM (post-doctoral position in 2013-2014). Alexander Setkov started his PhD in 2012; he is currently working on compensating geometric image distortions through color and geometric invariance for feature matching in camera-projector systems. He spent six months in 2014 at the Computer Vision Center of the Universitat Autònoma de Barcelona, and a collaboration with this laboratory has developed since his return.
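The essential-matrix-based reconstruction mentioned earlier rests on the relation E = [t]_x R between two calibrated views, with every point pair satisfying the epipolar constraint x2' E x1 = 0. A small sketch with an assumed two-view geometry:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that [t]_x @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2],  t[1]],
                     [t[2],  0.0, -t[0]],
                     [-t[1], t[0],  0.0]])

def essential(R, t):
    """Essential matrix E = [t]_x R relating two calibrated (normalized) views."""
    return skew(t) @ R

# Two views: identity rotation, baseline of length 1 along the x axis.
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])
E = essential(R, t)

# A 3D point seen in both views, in normalized image coordinates.
X = np.array([0.2, -0.1, 3.0])
x1 = X / X[2]                        # first view at the origin
x2 = R @ X + t
x2 = x2 / x2[2]                      # second view
epipolar = x2 @ E @ x1               # epipolar constraint: should vanish
```

In a reconstruction pipeline E is estimated from point matches, decomposed into (R, t), and the 3D points are then triangulated from the two projections.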
Panagiotis-Alexandros Bokaris started his PhD in 2013. His thesis is supervised in collaboration with LadHyX-CNRS at École Polytechnique and with Laboratoire Victor Vérité, a theater company. His research concerns color compensation in camera-projector systems for the concealment of human presence on stage. His first work addresses color compensation through real-time adaptive techniques, and will be extended to take moving scenes into consideration. Through his collaboration with LadHyX he also works on presence revelation.

Image Processing and Robotic Vision

This theme is the object of a collaboration with members of the CPU group of LIMSI (detailed in Topic 1 of the CPU group presentation). The PhD theses and subsequent research of Ahmad Hasasneh and Mathieu Dubois concerned the development of machine-learning methods for semantic place recognition and robot localization. Both theses were defended in 2012, and research on these topics continues through a collaboration with Philippe Tarroux (CPU). Following this work, and to prepare new research topics in this area, several members of the group were involved in the creation of the Digicosme working group “deepnets”.


HDRI (High Dynamic Range Imaging) techniques produce well-contrasted images of real-world luminance by capturing several images of the same scene through exposure bracketing. In the same vein, we have developed a new approach to image fusion from a series of photographs of the same scene taken at different timestamps: exposure bracketing at a single timestamp is replaced by timestamp variation, disregarding exposure times. Because of this parallel, the technique is called HTRI (High Time Range Imaging); it aims at capturing ephemeral events occurring over a long time period during which a sequence of images is shot. For each pixel location, the most salient colors in the series of photographs are privileged. The choice of the saliency criterion is based on an analysis of the accepted definitions of visual attention. In a second stage, a higher priority is assigned to pixels with high temporal saliency, i.e. those which appear very briefly in the sequence, jointly producing spatial and temporal changes of contrast between two successive frames. The proposed algorithm captures all these salient objects in the final image without introducing a significant amount of noise, despite the large illumination changes that may occur in the acquisition conditions from one frame to the next. This method was published in a journal paper in 2013.
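The pixel-wise fusion described above can be caricatured with a toy sketch (this simplifies the published method: here temporal saliency is approximated as deviation from the per-pixel temporal median, whereas the actual criterion is derived from visual-attention models):

```python
import numpy as np

def htri_fuse(frames):
    """Toy fusion in the spirit of HTRI: for each pixel, keep the value from
    the frame that deviates most from the temporal median, so that brief,
    salient events survive in the fused image. frames: list of (H, W) arrays."""
    stack = np.stack(frames).astype(np.float64)  # (T, H, W)
    median = np.median(stack, axis=0)            # per-pixel temporal baseline
    saliency = np.abs(stack - median)            # temporal deviation per frame
    best = saliency.argmax(axis=0)               # most salient frame per pixel
    h, w = median.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return stack[best, rows, cols]               # gather the winning values

# Three frames of a static scene plus one ephemeral bright event.
frames = [np.zeros((3, 3)) for _ in range(3)]
frames[1][1, 1] = 255.0                          # brief flash in one frame only
fused = htri_fuse(frames)
```

The ephemeral flash appears in the fused result even though it is present in only one frame, which is exactly the behavior that plain temporal averaging would destroy.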
