Lost in Translation VR — Iteration 1

Elizabeth Larez
13 min read · Mar 25, 2019


Lost in Translation VR game environment

I was required to submit an idea for a project within the scope of my postgraduate course “Interaction Design, Web and Games” at the Fine Arts Faculty, University of Porto. The goal was to go through all the stages of the design process: product definition, ideation, prototyping, user testing and developing (and iterating on this). The course only lasts 8 months, oh well…

To make this even harder for myself, I chose a rather complex project to undertake alone: Lost in Translation is a Virtual Reality (VR) game experience with psychological horror and biofeedback. The greatest inspirations for the environment and feel of this game were the movie “Gravity” and the game “ADR1FT”.

The game’s input was designed a little differently than usual and doesn’t rely on physical controls: the interaction happens only through head movements, which makes it accessible for the physically impaired. Audio cues are also extremely important to the storyline and to the player’s subsequent actions, so the Muse 2 equipment will analyze the player’s attention and only activate the audio cues when the player is concentrated. The graphics of the game would be too hard for me to create from scratch, so they were built entirely from free assets (thank you to the contributors!) in Unity3D, in a space setting with an astronaut as the character (the darkness of the setting also helps set the mood for the psychological horror).
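
Since the attention-gating mechanic is central to the design, here is a minimal sketch of what it could look like in Unity, assuming a normalized attention score is already being streamed into the game; the Muse 2 bridge itself is not shown and GetAttention() is a hypothetical placeholder:

```csharp
using UnityEngine;

// Minimal sketch of an attention-gated audio cue (not the shipped code).
// A normalized 0..1 attention score is assumed to arrive from whatever bridge
// streams the Muse 2 data into Unity; GetAttention() stands in for that bridge.
public class AttentionGatedCue : MonoBehaviour
{
    public AudioSource cueSource;            // the narrative/audio cue to release
    public float attentionThreshold = 0.6f;  // assumed cut-off for "concentrated"
    public float requiredSeconds = 2f;       // attention must hold this long

    private float heldFor;

    void Update()
    {
        float attention = GetAttention();

        // Accumulate time only while the player stays above the threshold.
        heldFor = attention >= attentionThreshold ? heldFor + Time.deltaTime : 0f;

        if (heldFor >= requiredSeconds && !cueSource.isPlaying)
        {
            cueSource.Play();  // the player is concentrated: release the cue
            heldFor = 0f;
        }
    }

    // Placeholder: replace with the actual Muse 2 data source.
    private float GetAttention() => 0f;
}
```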

Psychological Horror

This expression means that the horror isn’t triggered by gore elements, monsters, ghosts, creepy-crawlers, etc. It is triggered purely by indirect inputs to the autonomic nervous system, particularly the sympathetic nervous system. Mirror neurons will play a big role in this, as the players are expected to imitate the character’s emotions and thus feel the emotions the game elicits.

To add some dimension to this, some emotional attachment will be developed through the story and the character’s background.

Product Definition

The product was defined by me and is intended to be a proof of concept. Even so, semi-structured interviews were performed with users to understand the viability of the product if it were to have commercial goals in the future.

The interviews were conducted at a local software engineering company. Even though I thought I had gathered quite a diverse group of users within the previously established target audience, the group showed very similar results. The interviews made it possible to answer six very important questions:

WHO — men and women between 19 and 39 years old, English speakers, software engineers, gamers, who may or may not have mental or physical impairments (except users suffering from blindness, deafness, schizophrenia, epilepsy, vertigo, limitation/absence of neck movement, or nausea);
WHEN — at night, after dinner (between 8pm and 2am) on weekdays or at any time during weekends;
WHERE — at home, sitting in the living room (not much space is necessary because the interaction will be done through head/eye movements);
WHAT — leisure, distraction, abstraction, relaxation;
HOW — individual activity using smartphone with VR headset (such as Google Cardboard), Muse 2 and headphones;
WHY — intrinsic motivations, personal taste, to unwind after a day at work.

*I was lucky enough to interview someone who needs games to have a higher focus on accessibility, and his/her feedback on the absence of physical controls was very positive.

One very important aspect of this initial contact with users both fascinated and discouraged me: 100% of the users did not own a VR headset, nor had they ever tried one. I dove head first into this project certain that people would be marveling at immersive technologies, and that wasn’t the case at all!

User Personas

After the initial interviews, I designed three personas to help me guide my design process. Whenever I had hard questions about where to guide the project, I would impersonate those personas and interpret their answers to my questions. This was extremely helpful as it allowed me to emotionally detach myself from the project and to be a lot more logical and analytical about what the best decisions would be.

Personas created for Lost in Translation VR (photos of people are not my own)

Scenarios

Two different scenarios were created, one for João Raposo and another for Eduardo Matias. Creating scenarios was important to capture each user’s motivations and how they would perform tasks in the game, providing context for the user experience.

Scenario 1

João gets home from a hard day at work at 7pm, has dinner and then sits on the couch to relax. He decides to play videogames, because he feels more relaxed when he plays, specifically game experiences that require little control and a lot of abstraction. João sits comfortably on his couch, grabs his smartphone, puts on the Muse 2 and VR headsets and starts playing the game. Bedtime comes flying by and he decides to go to bed, where he sleeps profoundly and dreams about space and being an astronaut.

Scenario 2

Eduardo is home all day, struggling to perform day-to-day activities, such as simply brushing his teeth. Because of this difficulty and his anhedonia, he doesn’t have any leisure activities. He loves to play videogames, so he watches streams on YouTube to satisfy his curiosity and to keep up with what’s happening in the gaming world. He decides to try out a new game experience that doesn’t require fine motor skills to manage physical controls, grabs his smartphone, puts on the Muse 2 and VR headsets, and gets lost in a different world where he can live in a different body. Eduardo is happy now that he “can” move around freely.

Design Requirements

This stage included writing the extended version of the storyline with dialogues, defined characters, core story, multiple endings, audio cues, horror-based music, etc., which helped me understand exactly which movements/actions the player would have to perform to be able to play the game.

*The story won’t be disclosed just yet, given the stage of the project.

Design/Interaction Framework

  • Content
    The diegetic menu options are iconographic. The narrative is spoken in English (some voices will be in distress, others will be unintelligible; the narrative shifts between low and high volume to keep the user on the correct path).
  • Visual Design
    The gameplay is interactable through head movements — right/left for movement of the character, and up/down for menu options (the options are chosen through gaze).
    The typography is intended to be used in the splash screen only and has futuristic and sci-fi inspirations — sans-serif, regular and uppercase (e.g. Hyperspace, Pirulen).
  • Physical Objects and Space
    Smartphone; VR headset (e.g. Google Cardboard); EEG headset (e.g. Muse 2); headphones.
  • Time
    The sound will be used to give the user feedback on what is happening and to set a mood for what the design is supposed to make the user feel (e.g. fast heart rate, fast breathing, screams, etc. will “stress” the user).
    The transition between the menu options and in-game will be smooth, since the menu is diegetic — the sounds will mute and only the sound of the astronaut’s breathing will remain audible.
  • Behavior
    The gameplay is interactable through head movements to the right/left, which propel the player through the setting. The menu options are called through up/down head movements, which will immediately pause/resume the game (the options are chosen through gaze). A minimal code sketch of this behavior follows this list.
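
As referenced above, here is a minimal Unity-style sketch of this behavior, with assumed angle thresholds and names. In this simplification the menu closes again when the head returns toward the horizon, whereas in the actual design the option is chosen through gaze:

```csharp
using UnityEngine;

// Minimal sketch of the head-movement interaction (assumed values, not the final code).
// Yaw (left/right) steers the astronaut; pitch (up/down) calls the diegetic menu and
// pauses the game; only the breathing loop stays audible while paused.
public class HeadInteraction : MonoBehaviour
{
    public Transform head;             // the VR camera (the player's head)
    public float turnSpeed = 30f;      // degrees per second of character yaw
    public float menuPitchAngle = 30f; // assumed up/down angle that calls the menu
    public AudioSource breathing;      // astronaut breathing loop

    private bool menuOpen;

    void Start()
    {
        // Breathing keeps playing even when the rest of the audio is paused.
        breathing.ignoreListenerPause = true;
    }

    void Update()
    {
        // Signed yaw/pitch of the head relative to the neutral forward direction.
        float yaw = Mathf.DeltaAngle(0f, head.localEulerAngles.y);
        float pitch = Mathf.DeltaAngle(0f, head.localEulerAngles.x);

        // Looking up or down past the threshold opens the menu and pauses the game.
        if (!menuOpen && Mathf.Abs(pitch) > menuPitchAngle) SetMenu(true);
        else if (menuOpen && Mathf.Abs(pitch) < menuPitchAngle * 0.5f) SetMenu(false);

        // While playing, looking left/right (beyond a small dead zone) steers the character.
        if (!menuOpen && Mathf.Abs(yaw) > 5f)
            transform.Rotate(0f, Mathf.Sign(yaw) * turnSpeed * Time.deltaTime, 0f);
    }

    void SetMenu(bool open)
    {
        menuOpen = open;
        Time.timeScale = open ? 0f : 1f;  // pause/resume the game
        AudioListener.pause = open;       // mute everything except the breathing
    }
}
```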

Sketches — VR Paper Prototyping

I guess you would say it is impossible to do paper prototyping for VR. Well, I did it, I guess… First, the user goal was segmented into user tasks so that, together with the VR headset, it would be possible to analyze any physical, cognitive or technical constraints.

User tasks

  • Equip with peripheral equipment (EEG headset — Muse 2, VR headset with smartphone, headphones)
  • Sit on a rotating chair (to facilitate navigation and avoid injuries)
  • Start new game
  • Pause game
  • Resume game
  • Navigate through the world by moving the head
  • Focus to be able to hear instructions/narrative

Paper Prototype

The paper prototype was based on VR research about the angle of views regarding the head and the eyes, shown in the image below.

Image not my own
Sketch of the user’s perspectives involving the gameplay and the menu access (middle sketch is the menu’s options on the astronaut’s hand; and right sketch is the menu’s options on the astronaut’s visor)

After understanding the constraints that could be attached to the nature of the headset, the interactions were tested in a life-size prototype. It was built from a cinnamon cookie cardboard box, tracing paper, thick paper, elastics, cotton, glue, wires, a flashlight and a sharpie.

The user turns his head according to audio cues (the flashlight was used to illuminate the inside of the box from the outside). The excess tracing paper on the left and right sides was revealed when the user turned his head left or right, and the paper on top and bottom was revealed when he looked up or down (neither is shown in this particular gif).

The feedback was incredibly positive as the prototype was viewed as very “dark, claustrophobic and intense” (sic), even though it didn’t appear as such at first glance.

The feedback was also very productive in the way that it indicated some flaws to this first approach with the paper prototype:

  • The audio used was only a voice saying “look to your left, look to your right, look at the center” and it greatly reduced the immersiveness felt by the user. Thus, the audio should be prototyped to contain the narrative (or a sample of it) and the audio simulation of spatial suits (breathing, radio communication) for greater immersiveness;
  • If the audio can’t be prototyped, the user must wear the headphones at all times as they help to isolate the surrounding environment, even when disconnected;
  • The menu must have a different control to choose options (e.g. eye tracking or eye gaze);
  • The menu location and nomenclature must be iterated on.

Prototyping Phase

The prototyping phase involved the search for free graphic and auditory assets to use in Unity3D. There is some software available to prototype VR/AR/MR, but none of them offered the immersiveness that I was looking for, so I had to develop directly in the game engine. For this, I had the help of André Ferreira, my beloved husband.

Sadly, the school year dictated when the prototyping phase would occur and an error in my planning prevented me from prototyping with the Muse 2 equipment — it will be included in the next iteration of the project.

*When the prototype gets to the Beta phase, I will be happy to show some samples of the gameplay.

User Testing

A Usability Test Report (ISO/IEC 25062) resulted from the user testing phase and it is summarized below.

Test objectives

The main objective of the test is to assess the intuitiveness of the game’s diegetic menu by analyzing how much time the users take to perform the tasks, how many errors they make and whether they perform the tasks successfully or not. For this, a task scenario, followed by two tasks, is presented to the participants.

Method

The test is performed using a OnePlus 2 smartphone, with Android OS 8.1.0 (Oreo), and a Virtual Reality headset, while the participant is in a seated position (preferably in a rotating chair).
The methods used are composed of: a participant briefing, observation, think aloud, usability testing with a task scenario, and a participant debriefing (accompanied by key questions regarding the participants’ actions during the test).

Participants

The test is conducted with five Software Engineers, gamers, English speakers, between the ages of 19 and 39 — these match the target audience and the personas elaborated previously. The Think Aloud method is used while the user performs the tasks provided in the task scenario. The participants were selected through the earlier user interviews conducted at a software engineering company, and the test sessions are held live and in person with each individual separately.
Exclusion criteria: users suffering from blindness, deafness, schizophrenia, epilepsy, vertigo, limitation/absence of neck movement, nausea.

Tasks

The tasks selected, presented below, are frequent accessory actions; they are the most troublesome in the gameplay and the ones that raise the most doubts about whether they are intuitive, easy to use and provide a good user experience.
These tasks were chosen because of the novel manner in which the user is asked to perform them.
Information is provided to the participants solely through the task scenario, which gives them some guidance while still letting them explore.
The test ends when the participant completes both tasks successfully or if the participant takes over 10 minutes to complete them. The performance criteria are based on task success/failure, the time taken to complete the tasks, and the number and frequency of errors made.

Task Scenario

“You are an astronaut wandering in space. Please (1) find the menu and (2) choose the option to pause the game.”

Test facility

The environment in which the test takes place is a quiet room equipped with a tripod camera, and a VR headset with a smartphone. The user is placed in a seated position (preferably in a rotating chair) without obstacles at reach.

Test administrator tools

The tools that were used to record data are as follows: Unity Analytics 18.3.2f1, webcam with microphone from MacBook Pro (OS 10.14), screen recorder Vysor 2.1.4 (output is a split screen with the video of the user and the screen recording of the game).
The questions asked after the test ended were included in the debriefing and referred directly to the user’s behavior.

Usability Metrics

Variables

  • Time — the time elapsed between the beginning and the end of the task (the maximum time allowed to achieve the end of the test successfully, which includes both tasks, is 10 minutes);
  • Errors — number and frequency of errors committed while performing the task;
  • Success — success (done) or failure (not done) of the task (see the sketch after this list).
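
A minimal sketch of how these three variables could be logged during a session in Unity (the names and structure are assumptions, not the actual test harness):

```csharp
using UnityEngine;

// Minimal sketch of per-task metric logging (assumed names, not the actual test harness).
// It records the three variables above: elapsed time, error count and success/failure.
public class TaskMetrics : MonoBehaviour
{
    private float taskStart;
    private int errorCount;

    public void BeginTask()
    {
        taskStart = Time.realtimeSinceStartup;  // real time, unaffected by pausing
        errorCount = 0;
    }

    public void RegisterError() => errorCount++;

    public void EndTask(string taskName, bool success)
    {
        float elapsed = Time.realtimeSinceStartup - taskStart;
        Debug.Log($"{taskName}: time={elapsed:F1}s errors={errorCount} success={success}");
    }
}
```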

Effectiveness

The participants’ success or failure in performing the task, as well as the number and frequency of errors made during the tasks, are used to assess the accuracy with which these goals can be achieved.

Completion rate

100% of the participants were able to achieve the goals of the two tasks presented to them ((1) find the menu; and (2) pause the game).

Errors

Regarding Task 1 (find the menu), all of the participants found the menu by chance, because they saw the automatic movement of the hand and it drew their attention to it (which is intentional). Also, two participants were confused by the gear icon in the top right corner of the screen when trying to find the menu — this icon is an overlay that can’t be removed. Although it doesn’t serve any purpose, it affected the test.

Assists

The test administrator had to intervene in Task 1 (find the menu) when the two participants who were confused by the gear icon didn’t understand why the icon didn’t work — the help provided was to tell them that the icon isn’t part of the test and to ignore it. The unassisted completion rate is therefore 60% (3 of 5 participants); since the assisted participants went on to complete the task successfully, the assisted Task 1 completion rate is 100%.

Efficiency

The variable used to assess the efficiency of the test is the time to achieve the task.

Results

Data analysis

Since the metrics collected are all numerical, there was no need to categorize or normalize the data set.
Due to the small number of test subjects and the absence of any indication that a test should be eliminated from the results, no data reduction or outlier removal was performed.

Presentation of the results

Interpretation of the results

The test results were considered extremely positive. Both tasks were performed much faster than anticipated when designing the test. This can be interpreted as: finding the menu and pausing the game are intuitive commands within the game, so far.
The white dot that appears when the menu is activated was viewed as positive and helpful by the participants. They all understood that they had to use it to aim at the menu options. Some doubt still remained about how the selection of the options was made — one user expressed at the end of the test that he believed the option was selected by eye gaze and blinking; another user expressed that he believed the option was selected by detection of his body/physical movements (he is seen mimicking the astronaut’s hand movement). The selection feedback should be further tested, and one design solution could be to add a loading/progress wheel on top of the intended option.
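
As a rough illustration of that suggestion, a gaze-dwell selection with a progress wheel could look something like this in Unity (names, timings and the raycast distance are assumptions):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Rough illustration of gaze-dwell selection with a progress wheel (assumed names and
// timings). The wheel is a radially filled UI Image placed over the menu option; when
// the player's gaze rests on the option's collider long enough, the option is selected.
public class GazeDwellOption : MonoBehaviour
{
    public Image progressWheel;       // Image with its fill method set to Radial360
    public float dwellSeconds = 1.5f; // how long the gaze must rest on the option

    private float dwell;

    void Update()
    {
        // Cast a ray from the centre of the player's view.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        bool lookedAt = Physics.Raycast(gaze, out RaycastHit hit, 10f) &&
                        hit.transform == transform;

        // Fill the wheel while the gaze stays on this option; reset otherwise.
        // Unscaled time is used because the game is paused while the menu is open.
        dwell = lookedAt ? dwell + Time.unscaledDeltaTime : 0f;
        progressWheel.fillAmount = Mathf.Clamp01(dwell / dwellSeconds);

        if (dwell >= dwellSeconds)
        {
            dwell = 0f;
            Select();  // e.g. pause or resume the game
        }
    }

    void Select() { /* hook the chosen menu action here */ }
}
```
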
All of the participants found the menu by chance while exploring the setting due to the intentional movement of the hand. Nonetheless, they all understood that it was the menu without the need for any assistance. In order for the location of the menu to be perceived by the users without the need for exploration, it should be included in the tutorial of the game in future iterations of the development.
One participant was able to find the menu, but he was confused and thought the gear icon on the astronaut’s hand was the option to open the menu (although when he saw the pause button on another finger, he shifted his attention to it). This event emphasizes the fact that the icons should be further tested with regard to their meaning and function.
Concerning Task 2 (pause the game), two participants experienced an unforeseen software error in which the selection of the pause button wasn’t active — the participants had to leave the menu and get back to it in order for it to function correctly. This should be fixed immediately.

Conclusion

It was very enlightening to build a paper prototype of an exclusively digital technology, such as VR. I encourage everyone to try it as it is extremely liberating and increases the creativity and problem-solving of the design thinking process.

The results from the user testing phase need to be implemented in the game so that it can be tested further. Whether the participants should be the same as in the first testing phase (for reduced learnability) or different from them (for increased discoverability) has to be analyzed in depth to understand the benefits and problems of each option.

A heuristic evaluation by experts will be requested when the prototype reaches a more advanced stage, and the results will be shown in my next Medium post.

The Muse 2 is currently being integrated, so a whole new dimension of testing needs to be added as a task scenario (separate from the one demonstrated in this post), one that involves elicited emotions, particularly fear.

See you in the next iteration! 👋
