Semester Updates and Technical Problems

As the semester comes to a close, we are looking at finishing our first prototype. We wanted to share both our triumphs and the technical problems we’ve had thus far, with Julian covering the coding, Sam the 3D environment, and Jeremy the sound design.

On coding the GSR to connect to Unity and Max/MSP

In order to get Unity to function with the GSR, a socket had to be established between the two, letting them communicate with each other. We graciously received the help of one of our computer engineering colleagues at the lab, Mehdi, who shared some of his expertise and provided us with his socket code to complete the task. Once the GSR’s variables were coming in, the next task was to devise an algorithm that could interpret, organize, and distribute the incoming data. The first plan was to create a dynamic array that would simply take in the variables and forward them to the subsystems of our project. The first problem we ran into was storing all of the variables that were constantly coming in: after some testing, Unity would crash after about 10 minutes of assimilating and storing data. This was due to the sheer size the dynamic array was growing to, which put a strain on the computer’s memory (it is important to note that the GSR was sending 256 data points a second). The next version of the algorithm needed a way to average, store, and dump data on the go while still retaining the readings flooding in from the GSR; a buffer seemed necessary so that no data would be lost. After discussing the details with Professor Shaw, we concluded that averaging the data every 5 seconds would flatten out the GSR’s reading. We agreed that half a second was a good window for an average from the GSR, so we calculate an average of 128 data points every half second and then send it to the other subsystems.
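For anyone curious about the approach, here is a minimal sketch of that averaging idea (the class and event names are illustrative, not our exact code): instead of storing every reading, we accumulate 128 samples, publish the average, and reset.

```csharp
// Minimal sketch of the half-second averaging described above.
// GsrAverager and OnAverageReady are illustrative names, not our exact code.
using System;

public class GsrAverager
{
    private const int SamplesPerAverage = 128; // 256 samples/s -> one average every 0.5 s
    private float sum;
    private int count;

    // Fired each time 128 samples have been accumulated.
    public event Action<float> OnAverageReady;

    public void AddSample(float gsrValue)
    {
        sum += gsrValue;
        count++;

        if (count == SamplesPerAverage)
        {
            OnAverageReady?.Invoke(sum / SamplesPerAverage);
            sum = 0f;   // reset instead of storing every reading,
            count = 0;  // so memory use stays constant
        }
    }
}
```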

Once the buffer was constructed, code was added to test how long it would take to add 128 data points and calculate an average. It turned out the computer could do these calculations in an incredibly small amount of time. The buffer was then changed to use a stack that overwrites itself as soon as 128 data points had filled the array and the calculation had begun. The only remaining question was whether the averaging code could finish in under half a second. We timed the algorithm as it calculated the average and estimated how much data we would lose without a buffer. It turns out we were only losing around 20% of the data, which left us with more than enough to create an accurate average while keeping the code efficient and fast.
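To give a sense of the check we ran, here is a rough sketch of timing the averaging step with a stopwatch (names and values are illustrative, not our exact test code); the point is simply that summing 128 floats is far cheaper than the half-second budget.

```csharp
// Rough sketch of timing the averaging step (illustrative, not our exact test code).
using System;
using System.Diagnostics;

public static class AveragerTiming
{
    public static void Main()
    {
        var samples = new float[128];
        var rng = new Random();
        for (int i = 0; i < samples.Length; i++)
            samples[i] = (float)rng.NextDouble();

        var watch = Stopwatch.StartNew();
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++)
            sum += samples[i];
        float average = sum / samples.Length;
        watch.Stop();

        // Summing 128 floats takes microseconds, far below the 0.5 s window.
        Console.WriteLine($"Average {average:F3} computed in {watch.Elapsed.TotalMilliseconds} ms");
    }
}
```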

Now that we had an efficient algorithm that could add up 128 data points and give us the average, without using a buffer or storing all of the GSR’s readings, we were ready to distribute the average to the sub-systems. This is where our next problem arose, due to working on multiple platforms. The GSR’s data was being communicated to Unity through a server, with Unity acting as the socket, meaning we could send the GSR average to the Unity sub-systems of our project, but the generative sound of the project exists within Max/MSP. This meant we would have to either create a multi-socket server or embed a server within the socket. The multi-socket server was the most time-efficient means of distributing data over multiple platforms, but it required multithreaded programming, which was too complicated for our current level of skill. We decided on constructing an embedded server within the socket (Unity), which takes the average of the GSR data and sends it to Max/MSP. The issue with this method is the latency between the moment Unity applies the new average and the moment Max/MSP applies the new variable, adding a half-second delay on top of the generative sound. This is an issue we must deal with during testing, because human hearing detects sound anomalies at around 10 milliseconds. One quick fix we’ve applied is to create all ambient and environmental sound in Unity. The main remaining issue is that from the user’s first reading, half a second is lost calculating the average and another half second is lost re-sending the new average to Max/MSP, leaving the generative sound a full second behind the user’s original reading. User testing will have to show whether users notice a significant inconsistency in the generative sound.
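To make the architecture concrete, here is a small sketch of how each half-second average is handed to the two sides (the names are illustrative, not our exact code): the Unity environment gets the value immediately, while the copy bound for Max/MSP goes through the extra network hop that adds the delay described above.

```csharp
// Illustrative sketch of distributing each new average to the two subsystems.
// Names are placeholders; the actual wiring in our project differs.
using System;

public class AverageRouter
{
    private readonly Action<float> applyToEnvironment; // Unity-side visuals
    private readonly Action<float> sendToMax;          // forwarded on to Max/MSP

    public AverageRouter(Action<float> applyToEnvironment, Action<float> sendToMax)
    {
        this.applyToEnvironment = applyToEnvironment;
        this.sendToMax = sendToMax;
    }

    // Called every half second when a fresh GSR average is ready.
    public void OnAverageReady(float average)
    {
        applyToEnvironment(average); // applied in Unity right away
        sendToMax(average);          // arrives in Max/MSP roughly half a second later
    }
}
```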

On the 3D environment design and working with Unity

This term we’ve been working on bringing 3D models from Maya into Unity and writing code in C# to interpret the numeric data coming in from the GSR and cause changes in the environment. For the technical demo, a simple ground was created in Maya and cut into about 12 separate chunks that could move around. After bringing these into Unity, a script was written and attached to each chunk to animate it floating away from the centre and back. A 1 to 10 scale was created to represent the ‘chaos level’ of the environment, and different chunks were set to animate at different chaos levels. After integrating the code that brings the GSR data into Unity, a script was written to convert the GSR numbers to chaos levels using minimum and maximum values; this system can be calibrated manually for different individuals. As the user attached to the GSR becomes more excited or stressed, the environment becomes more chaotic and more pieces float away; as they calm down, the pieces return. We also added the ability to switch the program to read fake GSR readings input through a slider in Unity, for testing when GSR readings are not present. In the demo video below, you can see how the chunks react as I change the chaos level by moving the test input slider.
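Here is a minimal sketch of that conversion script (field names and calibration numbers are placeholders): the GSR average is mapped onto the 1 to 10 chaos scale using the manually set minimum and maximum, and a test slider can stand in for the GSR when it isn’t connected.

```csharp
// Minimal sketch of the GSR-to-chaos-level conversion (names and values are placeholders).
using UnityEngine;

public class ChaosLevelMapper : MonoBehaviour
{
    [Header("Calibrated manually per user")]
    public float gsrMin = 1f;  // relaxed baseline (placeholder value)
    public float gsrMax = 8f;  // excited/stressed peak (placeholder value)

    [Header("Testing without a GSR")]
    public bool useTestSlider = false;
    [Range(1f, 10f)] public float testChaosLevel = 1f;

    // Convert the latest half-second GSR average into a 1-10 chaos level.
    public float GetChaosLevel(float gsrAverage)
    {
        if (useTestSlider)
            return testChaosLevel;

        float t = Mathf.InverseLerp(gsrMin, gsrMax, gsrAverage);
        return Mathf.Lerp(1f, 10f, t);
    }
}
```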

One of the most challenging aspects was bringing the GSR data into Unity. At first we were receiving the data much more slowly than we’d hoped: we wanted to calculate the current average every half second, but were only getting it after several seconds, too slow for a smooth interpretation of the user’s state. We solved the problem when I realized that we had coded it to read at the speed of the GSR (256 data points a second), but the code in Unity is only executed once every frame, at 30 frames per second. Once we adjusted the code to read 30 data points a second, it ran just as we expected, and by averaging all the points received every half second, we were able to generate a smooth, accurate representation of the user’s state.
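A stripped-down sketch of that fix looks roughly like this (GetLatestReading is a placeholder for however the socket code hands us the newest value): one reading per frame, averaged over each half-second window.

```csharp
// Sketch of the per-frame reading fix; GetLatestReading is a placeholder for the socket code.
using UnityEngine;

public class GsrFrameReader : MonoBehaviour
{
    private float sum;
    private int count;
    private float timer;

    public float CurrentAverage { get; private set; }

    void Update()
    {
        sum += GetLatestReading(); // roughly one reading per frame (~30/s at 30 fps)
        count++;
        timer += Time.deltaTime;

        if (timer >= 0.5f) // publish a fresh average every half second
        {
            CurrentAverage = sum / count;
            sum = 0f;
            count = 0;
            timer = 0f;
        }
    }

    private float GetLatestReading()
    {
        // Placeholder: the real project pulls the newest value from the socket code here.
        return 0f;
    }
}
```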

The next steps in this process are to create a larger environment with more chunks and to begin adding visual details such as trees and grass. Also to be worked on is a script that will monitor the user’s GSR levels and automatically adjust itself to suitable minimum and maximum values.
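One way that auto-adjusting script could work is sketched below (the class name and relaxation rate are assumptions, not a final design): track a running minimum and maximum of the GSR average and let them slowly drift back toward the current reading, so the 1 to 10 scale keeps tracking the individual user.

```csharp
// Sketch of the planned auto-calibration; names and the relaxation rate are assumptions.
using UnityEngine;

public class GsrAutoCalibrator
{
    private float min = float.MaxValue;
    private float max = float.MinValue;
    private const float RelaxPerSecond = 0.01f; // how quickly old extremes are forgotten

    // Call with each new half-second average; returns a 1-10 chaos level.
    public float Update(float gsrAverage, float deltaTime)
    {
        min = Mathf.Min(min, gsrAverage);
        max = Mathf.Max(max, gsrAverage);

        // Slowly pull the extremes back toward the current reading so the
        // range adapts to the user instead of sticking to old spikes.
        min = Mathf.Lerp(min, gsrAverage, RelaxPerSecond * deltaTime);
        max = Mathf.Lerp(max, gsrAverage, RelaxPerSecond * deltaTime);

        float t = Mathf.InverseLerp(min, max, gsrAverage);
        return Mathf.Lerp(1f, 10f, t);
    }
}
```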

On the Generative and Environmental Sound 

Thus far in the semester, we have focused on getting the code and visuals to function as a single being, while allowing sound to exist as a side element of the project. While sound will play a bigger part later, what we have discovered is that the sound design will play a very important role in the user’s immersion when the final prototype is created. Sound can add value to the visuals, create mood and atmosphere, and sonically tell the user how their GSR data is being read. Right now we are using two methods to produce the sound: environmental sound in Unity and generative sound in Max/MSP. This dual method presents a problem: as noted above, a multi-socket connection would need to be established to let Max/MSP take in the values. The way we are working to remedy this is to send a UDP signal from Unity to Max, avoiding the multi-socket problem.
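A minimal sketch of that UDP link from the Unity side is below (the port number and the plain-text message format are assumptions); Max/MSP would listen on the matching port inside the patch.

```csharp
// Sketch of sending the half-second GSR average from Unity to Max/MSP over UDP.
// The port and the plain-text format are assumptions for illustration.
using System.Net.Sockets;
using System.Text;

public class MaxUdpSender
{
    private readonly UdpClient udp = new UdpClient();
    private readonly string host;
    private readonly int port;

    public MaxUdpSender(string host = "127.0.0.1", int port = 7400)
    {
        this.host = host;
        this.port = port;
    }

    // Called every half second with the latest average.
    public void SendAverage(float average)
    {
        byte[] bytes = Encoding.ASCII.GetBytes(average.ToString("F4"));
        udp.Send(bytes, bytes.Length, host, port);
    }
}
```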

We are currently focusing on three methods of generative sound: harmonic feedback, binaural beats, and soundscape playback. Harmonic feedback is a term we have created to describe the sound that responds to the user’s stress levels and works in the foreground of the user’s listening experience. When the user’s stress is low, notes of major chords are played back to match their stress level; when stress is high, minor chords are played in a quicker progression. This relates to mood, with major chords sounding happier and minor chords sadder. Distortion of the audio signal also appears with high stress, and softness with low stress.
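The real harmonic feedback lives in a Max/MSP patch, but the mapping rule itself is simple enough to sketch in C# (the thresholds, note numbers, and ranges below are placeholders): low stress selects a major chord at a slow pace, high stress a minor chord played faster and with more distortion.

```csharp
// Illustrative sketch of the harmonic feedback mapping; the real version is a Max/MSP patch.
// Thresholds, MIDI note numbers, and ranges are placeholder values.
public struct FeedbackSettings
{
    public int[] Chord;               // notes to draw from
    public float NoteIntervalSeconds; // time between notes (smaller = quicker progression)
    public float Distortion;          // 0 = clean, 1 = heavily distorted
}

public static class HarmonicFeedback
{
    private static readonly int[] MajorTriad = { 60, 64, 67 }; // C major (placeholder root)
    private static readonly int[] MinorTriad = { 60, 63, 67 }; // C minor

    // stressLevel is the 1-10 chaos level derived from the GSR.
    public static FeedbackSettings Map(float stressLevel)
    {
        bool highStress = stressLevel > 5f;

        return new FeedbackSettings
        {
            Chord = highStress ? MinorTriad : MajorTriad,
            NoteIntervalSeconds = highStress ? 0.25f : 1.0f, // quicker progression when stressed
            Distortion = (stressLevel - 1f) / 9f             // 0 at calm, 1 at maximum stress
        };
    }
}
```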

Next, we have binaural beats running through different wave types throughout the experience. Binaural beats can be described as two sine waves played back in stereo, one to each ear, at slightly different frequencies to stimulate different brain states, allowing a person to go deeper into their subconscious and thus deeper into immersion. We’re hoping to use this to our advantage and induce our users into these “deeper” states, as well as using the beats to reduce anxiety and stimulate theta states. A problem we might run into is the user rejecting these sounds, but we are hoping that playing them back underneath the rest of the soundscape will allow for acceptance.
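Although our binaural layer will be generated in Max/MSP, the idea is easy to sketch on the Unity side as well: two sine tones, one per ear, whose frequencies differ by the desired beat (the carrier and the theta-range beat below are assumed values, not our final settings).

```csharp
// Sketch of generating a binaural beat pair in Unity; our actual layer lives in Max/MSP.
// Attach to a GameObject with an AudioSource so Unity calls OnAudioFilterRead.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class BinauralBeat : MonoBehaviour
{
    public float carrierHz = 200f; // base tone (assumed value)
    public float beatHz = 6f;      // theta-range difference between the ears (assumed value)

    private float sampleRate;
    private double phaseLeft;
    private double phaseRight;

    void Awake()
    {
        // Cache on the main thread; OnAudioFilterRead runs on the audio thread.
        sampleRate = AudioSettings.outputSampleRate;
    }

    void OnAudioFilterRead(float[] data, int channels)
    {
        if (channels < 2) return; // needs stereo output

        double stepLeft  = 2.0 * Mathf.PI * carrierHz / sampleRate;
        double stepRight = 2.0 * Mathf.PI * (carrierHz + beatHz) / sampleRate;

        for (int i = 0; i < data.Length; i += channels)
        {
            data[i]     = 0.1f * (float)System.Math.Sin(phaseLeft);  // left ear
            data[i + 1] = 0.1f * (float)System.Math.Sin(phaseRight); // right ear
            phaseLeft  += stepLeft;
            phaseRight += stepRight;
        }
    }
}
```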

Finally, we have soundscapes consisting of different sounds of nature being played back. These function as ambience for the listener and will play depending on the stress level of the user. Using a pitch and time stretch program we have coded in Max, these nature sounds can be pitched up and down to form a harmonic relationship with the other sound elements and hopefully add a sense of tranquility and unity to the overall auditory experience. A technical problem we currently have with the soundscapes is how to trigger the sounds without being too jarring for the user, and how they can be transitioned and crossfaded for immediate feedback. We’re hoping that user testing will make our course of action clearer.
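Since some ambience is also being moved into Unity (see the coding section above), here is one sketch of the kind of gradual transition we are after: an equal-power crossfade between two AudioSources so the switch never jumps. The duration and sources are placeholders.

```csharp
// Sketch of a smooth crossfade between two ambient AudioSources in Unity.
// Illustrative only; the generative soundscapes themselves live in Max/MSP.
using System.Collections;
using UnityEngine;

public class SoundscapeCrossfade : MonoBehaviour
{
    // Usage: StartCoroutine(Crossfade(currentAmbience, nextAmbience, 4f));
    public IEnumerator Crossfade(AudioSource from, AudioSource to, float seconds)
    {
        to.volume = 0f;
        if (!to.isPlaying) to.Play();

        float t = 0f;
        while (t < seconds)
        {
            t += Time.deltaTime;
            float x = Mathf.Clamp01(t / seconds);
            from.volume = Mathf.Cos(x * Mathf.PI * 0.5f); // equal-power fade out
            to.volume   = Mathf.Sin(x * Mathf.PI * 0.5f); // equal-power fade in
            yield return null;
        }
        from.Stop();
    }
}
```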

As for environmental sound in Unity, core sounds are being connected to the environment, for example rocks shifting, wind, rumbling, and meteors flying. These “sync sounds” will make the objects feel less virtual and more real.

A chat with Ted Esser

Recently, we had the chance to talk to Ted Esser, a Graduate Student/College Instructor/Spiritual Counselor who is pursuing his PhD in Consciousness Studies, specifically looking at the phenomenon of lucid dreaming. We approached him with the intention of finding out more about altered states and getting his opinion on our project so far. He was able to connect with us online (and with his cute cat too!) to provide us with some great information. These are some of the highlights of what we talked about.

On surrealism, immersion and altered states:

  • With the visual and overall environment, simplicity is good – new stimulus can excite too much and throw off the user
  • Slowly changing the environment and making it more surreal is good for driving people “deeper” (more immersed)
  • Different people dream differently
  • Movies change people’s perspectives of what dreams are; scene changes happen, and surrealism can be exaggerated in Hollywood
    • Right-brained: more “fantastic”, nature-oriented
    • Left-brained: more “realistic”, like physical reality
  • Making the world TOO surreal has the possibility to distract, drive the user to waking consciousness

Next, we chatted with him about our process with sound. He had a lot to say about our plan to use an underlying layer of “binaural beats”, a type of audio track that plays back two similar frequencies, one in each ear, to stimulate altered states of mind:

  • Binaural beats allow people to stay grounded but they must be willing to let go – they facilitate places that are already there, or capacity for what is there. Can help get people deeper and more relaxed
  • Different people have different views on consciousness – beats won’t necessarily connect with everyone
  • Visuals could take away from immersion… closed eyes as an option, more focus on sound and imagination
  • To achieve out-of-body states, the optimal setup is to have the user comfortably lying down
  • Nature sounds are best for environmental sounds – music can distract if the user doesn’t like it, or can’t connect with it
  • Try out a naturalistic approach with sound

Lastly, he mentioned some further resources for us to check out.

We are looking forward to following up with Ted and finding out more. He has provided us with some great ideas that we think will drive our project forward.

Update: A Fantasy World

For our project, we decided to move in a slightly different direction with the visuals of the virtual world, one which we think will benefit the overall experience. Our world now has a more fantasy-based theme, complete with floating objects and “puzzle piece”-like landscape blocks that break apart and come back together based on the user’s stress levels. All of these factors are driven by the variables from the GSR attached to the user.

We think this will benefit our design, as stress is a parameter more easily connected to pain, which is a problem our project wishes to address in the long run. Another reason for focusing on “stress” is that our research suggests the GSR represents stress more accurately than it does specific emotions.

For visuals, we have two extremes of the environment that the user can visit. The “peaceful side” is a world of serene fields, mountain ranges, oak trees, and birds flying around, along with other peaceful elements we hope to add as the project progresses. When the participant’s stress rises, the “chaos” of the world kicks in: the pieces of the landscape break apart into sections of flying earth, and the elements mentioned above spin, fly, and change colours to fit the theme of chaos. The weather also changes between the two: the peaceful environment will be sunny and tranquil, while the chaotic environment will be stormy and rainy.

Below you can see some of our developed concept art.

Update: Virtual Environment “Mirror Spaces”

A screenshot of our flower environment in progress

For our project, we will be making a series of environments that correspond to different mind states. We are experimenting with what I describe as ‘mirrored spaces’, spaces that have similar elements but different themes and feelings. For example, where one environment has a large leafy tree, another has the same tree, only dead and bare. At present we are focusing on only two opposing mind states (and therefore environments): a binary pair of positive and negative, happy and sad. This might change in the future, but for now we are focusing on these two rudimentary emotions.

Bringing the environment into Unity as a test revealed several points that need to be addressed. First off, unlike Maya, Unity uses backface culling by default, meaning that polygons can only be seen from one side. Flat objects such as leaves, flowers, and grass will have to be rebuilt to be double-sided. The tree will also have to be rebuilt, because the built-in tree from Maya has too many vertices to be imported into Unity. To make updating assets and building out the environment easier in the future, we believe it would be more efficient to export individual models into Unity and assemble the environment there, rather than import the whole environment as one large asset with multiple copies of models. As such, all the models will be rebuilt or copied as individual files, and the environment reassembled in Unity.
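We will most likely just rebuild those flat assets in Maya, but for reference, the backface problem can also be worked around in Unity by duplicating a mesh’s triangles with the winding order reversed. The sketch below only illustrates the idea (UVs are duplicated; other channels such as tangents and colours would need the same treatment and are omitted).

```csharp
// Illustrative sketch: duplicate a flat mesh's triangles with reversed winding so it
// renders from both sides in Unity.
using UnityEngine;

public static class DoubleSidedMesh
{
    public static void MakeDoubleSided(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        Vector3[] normals = mesh.normals;
        Vector2[] uvs = mesh.uv;
        int[] tris = mesh.triangles;
        int vCount = verts.Length;

        var newVerts = new Vector3[vCount * 2];
        var newNormals = new Vector3[vCount * 2];
        var newUvs = new Vector2[vCount * 2];
        verts.CopyTo(newVerts, 0);
        verts.CopyTo(newVerts, vCount);
        uvs.CopyTo(newUvs, 0);
        uvs.CopyTo(newUvs, vCount);
        for (int i = 0; i < vCount; i++)
        {
            newNormals[i] = normals[i];
            newNormals[vCount + i] = -normals[i]; // flipped normals for the back faces
        }

        var newTris = new int[tris.Length * 2];
        tris.CopyTo(newTris, 0);
        for (int i = 0; i < tris.Length; i += 3)
        {
            // Reverse the winding order so the copied faces point the other way.
            newTris[tris.Length + i]     = tris[i]     + vCount;
            newTris[tris.Length + i + 1] = tris[i + 2] + vCount;
            newTris[tris.Length + i + 2] = tris[i + 1] + vCount;
        }

        mesh.vertices = newVerts;
        mesh.normals = newNormals;
        mesh.uv = newUvs;
        mesh.triangles = newTris;
    }
}
```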

Playing Through “Flow, Flower, and Journey” (10/25/12)

On the recommendation of Dr. Gromala, we decided to try out some PlayStation 3 games that would help us understand the ideas of immersion to a greater extent. She pointed us towards three games by “thatgamecompany”, a studio well known for its attention to art direction and ease of play. The company describes its games as “experimental” and rewarding if the player feels something from playing them.

We first played Flow, a game which places you as a creature in the sea looking to grow by absorbing other creatures. This is done by touching the different creatures (it only works if you’re bigger than them!), with all the different lifeforms represented abstractly. Some of the things we liked were:

  • the game interface is immersive… no menu. never interrupted. simple directions given, even when paused, you are in the game
  • dynamic environment…  everything is always moving. particles create depth and energy
  • it feels like you can play “forever”, gameplay never ends
  • different layers can be seen, gives you an idea of the “next step”; aided by opacity
  • no game over, can’t lose: creates sense of endlessness
  • sound adds to the movement, flow
  • the different stages have different sound design, ambience, music; creates unique environments
  • light patterns are interesting to watch and keep your attention

The next game is called Flower. In this game, you are a petal that has to collect other petals to continue through the game, always moving and flying through virtual space. Dr. Gromala suggested that this game can induce motion sickness, something we should aim to avoid with our own virtual movement, and we wanted to see if we found this to be true. We found this congruent with our experience to an extent, but for us the movement was not a deterrent from playing the game. In fact, we found the abstract representation of “petals flying” to be relaxing. Some of the things we found good:

  • motion is very fluid… feels like you’re always moving
  • grass reacts in an interesting way, very flowy; adds to the motion of the petals
  • flowers react to movement and the colour makes it feel alive
  • zoom in and out are used to add speed, “vertigo effect”; focal length gives impression of depth
  • simple to play… very obvious what you have to do
  • music reacts to movement, “Chinese chord strikes”; flower sounds add melodic complements to the main soundtrack
  • clear progression adds interest; rewarded with sound and colour when objectives are completed
  • motion graphics introduce different areas, use of mixed media

The final game we tried out was called Journey. It is an adventure game where you are a mysterious person who is exploring a desert landscape, solving puzzles and meeting strange beings along the way. What we liked:

  • world feels very empty but full of things you want to explore at the same time: there is a want to explore because of the mystery surrounding the game
  • sand looks very good: constantly moving and reacting to your movement. begs you to interact with it.
  • lots of mystery surrounding the character and story, makes you want to find out more
  • option for multiplayer adds dimension to experience “makes you feel less alone”.

We had a lot of takeaways from these games and we definitely enjoyed playing them in between our other research. We look forward to finding more games that relate to our project.

Inspiration Log #4: 3 clues to understanding your brain

In his TED Talk, neuroscientist Vilayanur S. Ramachandran explores the human brain and what it means to study it. He goes over three major “clues”: Capgras delusion (where a patient believes a familiar person has been replaced by an impostor), phantom limbs (where amputees still feel a limb that is no longer there), and synesthesia (where stimulation of one sense triggers sensations in another).

He approaches these conditions using body illusions, finding ways to trick the brain. For the phantom limb case, Dr. Ramachandran used a mirror box so that the reflection of the intact arm appears where the missing arm would be, tricking the person’s mind into feeling that the phantom limb was moving. This worked so well that it helped patients find relief from their phantom limb sensations.

We found this inspirational for our project because these researchers used innovative ways to tackle real-world problems. Digital technology opens things up even further: using body-responsive biofeedback technology, we hope to take full advantage of the human system and help pain patients and other users understand their bodies to a greater extent.

Source: http://www.ted.com/talks/vilayanur_ramachandran_on_your_mind.html

Inspiration Log #3: The Ultimate Display

The article “The Ultimate Display” by Ivan Sutherland explores how we can represent and control data, examining our physical reality through the realm of mathematics. With mathematics we can describe physical phenomena, and since a computer works off of mathematics, we can simulate experiments that couldn’t be conducted in our reality, where we are bound by physical laws. For example, in a virtual realm that controls friction and how it acts on objects, we can toy with the friction parameters, or even turn friction off, simulating an environment not possible in our known world. As processing power increases, we come closer to re-creating physical phenomena and even to simulating physics that is not possible in the real world.

This relates to our project because we wish to create a visual representation of psychological stimuli, meaning we can create feedback for what we feel as human beings. As users spend more time with the application, their feelings may become clearer to them through their connection to visual and auditory cues. Such a surreal audio/visual experience would only be possible in a virtual environment, hence our desire to work with this medium.

Source: http://www.eng.utah.edu/~cs6360/Readings/UltimateDisplay.pdf

Trying out the Sensory Deprivation Chamber (10/06/12)

This weekend, we visited a sensory deprivation tank facility in Abbotsford called Cloud 9 Spa. To give you some background, these tanks cut off nearly all sensory input: upon entering the tank, you float in a dense salt solution in complete darkness, void of any sound. The tank is also kept at room temperature, which makes it easier for your body to adapt.

All three of us on the team tried it out for an hour, and each of us had a different experience at the end of it. Among the most prominent sensations were a feeling of lost time, literally feeling like we were “floating” in blank, empty space, and large amounts of thinking (while also devoting time to trying to let go of it).

Some takeaways from the experience were wanting to make the subject feel that they are floating, to aid immersion, and preparing the room to feel empty and quiet so that the HMD can have its full effect. It goes to show that even an empty space can really change your perception of what happens around you. The Pain Lab was gracious enough to provide us with a space to work in, and we’re hoping to use it to its full potential. We are still dabbling with the idea of producing a CAVE instead of using an HMD, but this is something we will explore more in the future.

Inspiration Log #1: Group Ehrsson

As we go along, we plan on posting a number of “inspirations”: other researchers and artists whose work has functioned as a precedent for ours and is helping to guide it. Our first log focuses on the work of “Group Ehrsson”.

“Group Ehrsson”, the Brain, Body & Self Laboratory, is based out of the Department of Neuroscience at the Karolinska Institutet in Sweden and is run by Henrik Ehrsson. Here, the researchers look into making people feel “out of body”, using mannequins, rubber arms, and virtual reality to create body illusions. These illusions find ways to convince people that they have swapped bodies with another person, gained a third arm, or grown to giant proportions.

One of the experiments we found particularly interesting was one where the subject felt like they had shrunk to the size of a Barbie doll. This was accomplished by having the subject lie down wearing a head-mounted display (HMD) while being touched with a stick. On the display, they are shown live footage of a doll’s body being touched in the same way. This is enough to trick people’s minds into thinking they are a small doll.

What we found most interesting about Henrik’s experiments is that they use technology to alter people’s perception of self, in a fashion that could only be accomplished with a camera and an HMD. In the above example, the “eye of the camera”, in collaboration with the HMD, swaps the person’s view for another, allowing for controlled immersion. With our project, we hope to pull off this kind of radical immersion and give visitors an experience of a changed self and a greater awareness of their body after using our system.

Source: http://www.nature.com/news/out-of-body-experience-master-of-illusion-1.9569