iSpace Research Agenda & Vision in a Nutshell
Virtual reality software and hardware are becoming increasingly affordable and powerful, and are increasingly used in experimental research. In fact, the possibility of conducting tightly controlled and repeatable experiments with naturalistic multi-modal stimuli in a closed action-perception loop suggests that VR could become a powerful yet flexible research tool.
Despite increasing computational power and rendering quality, though, it is debatable whether humans necessarily perceive and behave similarly in real and virtual environments – which is essential for achieving sufficient real-world transfer of experimental results gained in the lab. What might be missing? What can we learn from this? How might we be able to “cheat intelligently” in VR and, for example, provide users with a compelling sensation of moving through the simulated environment without the need for full physical locomotion? Can the mere illusion of self-motion (“vection”) be sufficient to provide benefits similar to those of actual locomotion – that is, what is the functional significance of vection? How far can we get with visual cues alone? What benefits do we gain from multi-modal stimuli?
See also our iSpace YouTube Playlist
Below are a short intro video of the iSpace Lab and a few graphics that explain our research agenda and vision. Enjoy!
[Note that there are many other areas that we work in and are interested in expanding into.]
GOAL: To investigate what constitutes effective, robust, and intuitive human spatial orientation and behaviour. This fundamental knowledge will be applied to design novel, more effective human-computer interfaces and interaction paradigms that enable similarly effective processes in computer-mediated environments like virtual reality (VR), immersive gaming, and multi-media.
MOTIVATION: While modern VR simulations can have stunning photorealism, they are typically unable to provide a life-like and compelling sensation of moving through the simulated world, thus limiting perceived realism, behavioural effectiveness, user acceptance, and commercial success. Moreover, VR users frequently get disoriented because the supported spatial behaviour is still clumsy and unnatural.
APPROACH & SHORT/MID-TERM OBJECTIVES: I propose that investigating and exploiting self-motion illusions (“vection”) might be a lean and elegant way to overcome such shortcomings and provide a truly “moving experience” in computer-mediated environments, thus enabling more affordable-yet-effective simulations for broader audiences. My team recently provided the first evidence that such embodied self-motion illusions can indeed facilitate perspective switches and spatial orientation, thus providing similar benefits as actual self-motion, but without the cost and effort involved in having to physically move the user. This research program will corroborate and further investigate the functional/behavioural significance of self-motion illusions for a wider range of spatial orientation/cognition tasks that would normally be difficult to accomplish without actual self-motion. To this end, my team will perform experiments with human participants in VR to investigate and optimize multi-modal and higher-level contributions and interactions for spatial orientation and self-motion perception/illusions, while minimizing reference frame conflicts. Specifically, we will:
(A) Research and utilize higher-level and multi-modal synergistic benefits to enhance vection;
(B) Design and evaluate user-powered motion cueing interfaces;
(C) Investigate the functional significance of vection using three complementary experimental paradigms;
(D) Design transitions into VR;
(E) Develop novel experimental paradigms; and
(F) Integrate findings into our theoretical framework.
Multi-modal, naturalistic and immersive VR provides the unique opportunity to study human perception and behaviour in reproducible, clearly defined and controllable experimental conditions.
LONG-TERM GOALS: To investigate how we can best employ self-motion illusions and research-guided interface design and development to enable life-like, robust and effortless spatial cognition, orientation and behaviour in VR and other immersive media.
SIGNIFICANCE: This research program will lead to a deeper understanding of human spatial cognition, perception and behaviour that will enable us to design more effective human-computer interfaces and interaction metaphors. These will provide improved test beds for evaluation as well as enable and inspire further research. Thus, combining fundamental and applied research perspectives will allow us to identify the essential parameters of perception/action and pinpoint the “blind spots” that allow the brain to be tricked in VR simulations. This will enable the creation of more cost-effective virtual solutions for numerous immersive and tele-presence applications such as driving/flight simulation, architectural walk-throughs, immersive gaming and recreation, space exploration, engineering, emergency training, minimally invasive surgery, and video conferencing.