Ambient and Interaction

Y. Bellik, D. Béroule, A. Gharsellaoui


Classical WIMP interaction models are not adequate in the context of ambient environments, due to user mobility, the heterogeneity of interaction devices and the variability of interaction contexts. Hence, there is a need for new interaction models that suit users' needs in ambient environments.

Spatial Interaction

Ambient environments aim to equip the physical environment with various sensors and effectors in order to assist users in their daily tasks. In particular, the use of location sensors makes it possible to turn the physical space itself into a means of interaction. For example, the simple act of bringing a tag representing a video file close to another tag representing a screen in the house may trigger the playing of that video file on the corresponding screen. Some systems that use space as a means of interaction already exist (in the context of tangible interaction), but this use is still fragmented and/or ad hoc (some systems use proximity only, others orientation, etc.) and does not offer a formal and generic model of a spatial interaction language. That is why we have launched actions to explore in depth how physical space can be used as a means of interaction. To achieve this objective, a preliminary step is to formally define a language for the description of spatial interactions. This language should be based on the different physical spatial properties of, and relations between, objects: position, speed, acceleration, orientation, distance, etc. In a second step, this language will be used to implement an interactive tool for the specification of spatial interactions. Such a tool will allow us, in a third step, to quickly implement spatial interaction techniques and conduct user studies in our intelligent room platform (IRoom). These studies will allow us to explore different techniques for spatial interaction and to provide guidelines for spatial interaction design (which properties and operators to use in which cases, etc.).
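To illustrate the idea, a spatial interaction language of this kind could pair predicates over spatial properties (here, only distance) with actions to trigger. The following is a minimal hypothetical sketch, not the actual language under definition; the entity names, the proximity threshold and the rule structure are illustrative assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class Entity:
    """A tracked physical object with basic spatial properties (sketch)."""
    name: str
    x: float
    y: float
    orientation: float = 0.0  # radians; unused here, but part of the vocabulary


def distance(a: Entity, b: Entity) -> float:
    """Euclidean distance between two tracked entities."""
    return math.hypot(a.x - b.x, a.y - b.y)


class SpatialRule:
    """A rule of the language: a spatial predicate coupled with an action."""

    def __init__(self, predicate, action):
        self.predicate = predicate
        self.action = action

    def evaluate(self, *entities) -> bool:
        """Fire the action when the spatial predicate holds."""
        if self.predicate(*entities):
            self.action(*entities)
            return True
        return False


# Hypothetical proximity rule mirroring the video-tag/screen-tag scenario.
triggered = []
play_rule = SpatialRule(
    predicate=lambda tag, screen: distance(tag, screen) < 0.3,  # metres (assumed)
    action=lambda tag, screen: triggered.append((tag.name, screen.name)),
)

video_tag = Entity("video.mp4", 1.0, 1.0)
screen_tag = Entity("living-room-screen", 1.1, 1.2)
play_rule.evaluate(video_tag, screen_tag)  # distance ≈ 0.22 m, so the rule fires
```

A full language would add operators over orientation, speed and acceleration, and composition of predicates; this sketch only shows the general rule shape.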

The Affordance Problem

An important problem we face when using space as an interaction channel is affordance. Since in ambient environments it becomes possible to define certain areas of space as special places offering services, a question arises: how can the system indicate the "sensitive" areas to the user and inform him/her about the services they offer? It is also possible that the mere proximity of two or more objects induces a given system action. How can the user be told about this kind of potential interaction between physical objects? In addition, if the induced action may vary depending, for example, on the orientation of the objects, their velocities, etc., providing the user with clues about these potential interactions becomes an even more complex problem. All these considerations lead us to exclude the use of real physical objects to identify these "ambient spots" and their potential spatial interactions. We prefer instead to explore other approaches, such as augmented reality: Google Glass, for example, offers an interesting option. We have also started to explore less intrusive methods, such as using a mobile phone or a mobile pico-projector (in collaboration with the Image and Interaction topic) to inform the user about spatial interaction possibilities.

System Fault Detection and User Task Assistance

A great benefit of ambient environments is that they offer a large variety of sensors that can be used to monitor both system and user actions. However, sensors and actuators may fail. The motivation of A. Mohamed’s thesis (co-supervised with Supélec) is to equip ambient systems with self fault-detection and diagnosis capabilities, allowing them to check autonomously whether the intended actions were performed correctly by the actuators. To address this issue, we proposed an approach in which fault detection and diagnosis is performed dynamically at run-time, while actuators and sensors remain decoupled at design time. We introduced a fault detection and diagnosis framework that models the generic characteristics of actuators and sensors, together with the effects expected on the physical environment when a given action is performed by the system's actuators. These effects are then used at run-time to link the actuators that produce them with the sensors that detect them. Most importantly, the mathematical model describing each effect allows the calculation of the expected sensor readings. Comparing these predicted values with the actual values provided by the sensors allows us to achieve fault detection in dynamic and heterogeneous ambient systems.
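The prediction-versus-measurement comparison can be sketched as follows. This is a hypothetical toy effect model, not the framework's actual mathematics: the lamp/luminosity scenario, the gain value and the tolerance are all assumptions made for illustration.

```python
def predicted_luminosity(lamp_on: bool, ambient: float,
                         lamp_gain: float = 300.0) -> float:
    """Toy effect model (assumed): a lamp adds a fixed luminosity (lux)
    to the ambient level when it is switched on."""
    return ambient + (lamp_gain if lamp_on else 0.0)


def detect_fault(predicted: float, measured: float,
                 tolerance: float = 50.0) -> bool:
    """Flag a fault when the sensor reading deviates from the effect
    model's prediction by more than the tolerance."""
    return abs(predicted - measured) > tolerance


# Nominal case: the lamp is commanded on and the luminosity sensor
# reads close to ambient + gain, so no fault is reported.
pred = predicted_luminosity(lamp_on=True, ambient=120.0)  # 420.0 lux expected
print(detect_fault(pred, measured=410.0))  # False: actuator behaved as expected

# Faulty case: the "on" command was issued but luminosity barely changed,
# suggesting a broken lamp (or a broken sensor, to be diagnosed further).
print(detect_fault(pred, measured=125.0))  # True: mismatch flags a fault
```

In the framework described above, the effect is what links the actuator to the sensor at run-time, so the same comparison generalizes to any actuator/sensor pair sharing an effect model.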

Concerning user tasks, existing task models are static and used only at design time. In A. Gharsellaoui’s thesis, we propose to use the task model at runtime in order to track user actions, verify that the user has not made any errors while accomplishing his/her tasks, and provide help when asked for. In particular, we propose an extension of classical task models for ambient environments that allows their dynamic update at runtime. This extension consists in assigning tasks runtime states consistent with the information received from the environment (task started, task suspended, task resumed, task done, etc.). A second contribution is a monitoring and assistance system based on our dynamic task model. Furthermore, a simulator has been implemented and has allowed us to validate our task-tracking algorithm. A real user study exploiting our tracking and assistance system has been conducted in our intelligent room (IRoom).
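The runtime states mentioned above can be pictured as a small state machine attached to each task node. The sketch below is an illustrative assumption about how such states and transitions might be encoded, not the thesis's actual model; rejected transitions stand in for the user errors the assistance system would react to.

```python
from enum import Enum


class TaskState(Enum):
    """Runtime states a task can take as environment events arrive."""
    NOT_STARTED = "not started"
    STARTED = "started"
    SUSPENDED = "suspended"
    DONE = "done"


class Task:
    """A task-model node extended with a runtime state (hypothetical sketch)."""

    # Transitions the model allows; resuming maps SUSPENDED back to STARTED.
    TRANSITIONS = {
        TaskState.NOT_STARTED: {TaskState.STARTED},
        TaskState.STARTED: {TaskState.SUSPENDED, TaskState.DONE},
        TaskState.SUSPENDED: {TaskState.STARTED},
        TaskState.DONE: set(),
    }

    def __init__(self, name: str):
        self.name = name
        self.state = TaskState.NOT_STARTED

    def update(self, new_state: TaskState) -> bool:
        """Apply an observed event. A rejected transition can indicate a
        user error, i.e. a point where assistance may be offered."""
        if new_state in self.TRANSITIONS[self.state]:
            self.state = new_state
            return True
        return False


task = Task("prepare coffee")
task.update(TaskState.STARTED)
task.update(TaskState.SUSPENDED)
task.update(TaskState.STARTED)            # task resumed
task.update(TaskState.DONE)
print(task.state)                         # TaskState.DONE
```

A tracking algorithm would maintain such states over the whole task tree and match incoming sensor events against the transitions the model currently allows.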

