Vision

iSpace Research Agenda & Vision in a Nutshell

Virtual reality (VR) software and hardware are becoming increasingly affordable and powerful, and are increasingly being used in experimental research. In fact, the possibility of conducting tightly controlled and repeatable experiments with naturalistic multi-modal stimuli in a closed action-perception loop suggests that VR could become a powerful yet flexible research tool.

Despite increasing computational power and rendering quality, though, it is debatable whether humans necessarily perceive and behave similarly in real and virtual environments, yet such equivalence is essential for achieving sufficient real-world transfer of experimental results gained in the lab. What might be missing? What can we learn from this? How might we be able to “cheat intelligently” in VR and, e.g., provide users with a compelling sensation of moving through the simulated environments without the need for full physical locomotion? Can the mere illusion of self-motion (“vection”) be sufficient to provide benefits similar to those of actual locomotion? That is, what is the functional significance of vection? How far can we get with visual cues alone? What benefits do we gain from multi-modal stimuli?

See also our iSpace YouTube Playlist

Below is a short intro video of the iSpace Lab, along with a few graphics explaining our research agenda and vision. Enjoy.

Summary of iSpace Main Research Program

[Note that there are many other areas we work in and are interested in expanding into. See also our iSpace YouTube Playlist.]

GOAL: To investigate what constitutes effective, robust, and intuitive human spatial orientation and behaviour. This fundamental knowledge will be applied to design novel, more effective human-computer interfaces and interaction paradigms that enable similarly effective processes in computer-mediated environments such as virtual reality (VR), immersive gaming, and multimedia.

MOTIVATION: While modern VR simulations can have stunning photorealism, they are typically unable to provide a life-like and compelling sensation of moving through the simulated world, thus limiting perceived realism, behavioural effectiveness, user acceptance, and commercial success. Moreover, VR users frequently get disoriented because the supported spatial behaviour is still clumsy and unnatural.

APPROACH & SHORT/MID-TERM OBJECTIVES: I propose that investigating and exploiting self-motion illusions (“vection”) might be a lean and elegant way to overcome such shortcomings and provide a truly “moving experience” in computer-mediated environments, thus enabling more affordable yet effective simulations for broader audiences. My team recently provided the first evidence that such embodied self-motion illusions can indeed facilitate perspective switches and spatial orientation, providing benefits similar to those of actual self-motion but without the cost and effort involved in physically moving the user. This research program will corroborate and further investigate the functional/behavioural significance of self-motion illusions for a wider range of spatial orientation/cognition tasks that would normally be difficult to accomplish without actual self-motion. To this end, my team will perform experiments with human participants in VR to investigate and optimize multi-modal and higher-level contributions and interactions for spatial orientation and self-motion perception/illusions, while minimizing reference frame conflicts. Specifically, we will:
(A) Research and utilize higher-level and multi-modal synergistic benefits to enhance vection;
(B) Design and evaluate user-powered motion cueing interfaces;
(C) Investigate the functional significance of vection using three complementary experimental paradigms;
(D) Design transitions into VR;
(E) Develop novel experimental paradigms; and
(F) Integrate findings into our theoretical framework.
Multi-modal, naturalistic, and immersive VR provides the unique opportunity to study human perception and behaviour under reproducible, clearly defined, and controllable experimental conditions.

LONG-TERM GOALS: To investigate how we can best employ self-motion illusions and research-guided interface design and development to enable life-like, robust, and effortless spatial cognition, orientation, and behaviour in VR and other immersive media.

SIGNIFICANCE: This research program will lead to a deeper understanding of human spatial cognition, perception, and behaviour that will enable us to design more effective human-computer interfaces and interaction metaphors. These will provide improved test beds for evaluation as well as enable and inspire further research. Combining fundamental and applied research perspectives will thus allow us to identify the essential parameters of perception/action and to pinpoint the “blind spots” that allow the brain to be tricked in VR simulations. This will enable the creation of better, more cost-effective virtual solutions for numerous immersive and tele-presence applications such as driving/flight simulation, architecture walk-throughs, immersive gaming and recreation, space exploration, engineering, emergency training, minimally invasive surgery, and video conferencing.