Learning Sciences Research
We are a team of graduate student researchers at the University of Washington working on a VR-based experiment that will measure how VR can improve cognitive performance on difficult tasks.
We have been developing with Unity Pro and the Oculus Rift since November 2013. Our second iteration, begun in September 2014, used the Oculus Rift DK2 combined with the Microsoft Kinect 2 for Windows as an input device, and our newest design, in development since early 2016, incorporates the HTC Vive and full room-scale tracking. Our experiment's final
round of pilot testing will take place during Q3 2016, with the study data to be collected in Q4.
Participants will experience an interactive, multi-player, immersive VR environment we have designed and built using Unity Pro (see screenshots of our
prototype below), framed by a story about training rescue droids to save wildlife and perform other heroic tasks. These particular droids need their spatial reasoning chips calibrated, and
the only way to accomplish this is via a cognitive tele-link, which lets the player see through the droid's visual sensors and train it through
experience. Within the story, the goal is to recalibrate the droid's spatial reasoning chip so that it can go on rescue missions without the need for human guidance. Players work
together as a team to solve problems and complete a challenging engineering task in the virtual reality environment. They also work independently to improve their own cognitive skills through a
scaffolded support program in order to "accomplish the mission."
With the launch of the HTC Vive's Lighthouse tracking system, participants have a more comfortable embodied experience because their input and movements are mapped one-to-one to physical reality,
which was not quite the case with our DK2 and Kinect design.
After the experiment
After our current experiment is completed, we have many plans for further research into educational applications using the upcoming Oculus Touch as well as our HTC Vive systems (we currently have a pair of Vives for designing additional CSCL learning spaces).
The unprecedented tracking quality of these systems' handheld controllers could open new directions for experiments measuring the advantages of fine hand interactions with
objects. This will open up affordable and accessible avenues for research into embodied cognition, collaborative building of 3D visualizations, and the design of new collaborative learning spaces.
We envision that our findings will drive our future projects, and that the assets we create will become components of immersive educational VR experiences that teach both concrete and abstract concepts.
We hope that research verifying the cognitive benefits of VR will help legitimize the medium as an active learning tool that our fellow educators will be interested in exploring as well.