
Sound Objects – Final

Towards the end of this semester I had the opportunity to mix two of my favorite classes into a single final: Code Lab 1, which was about programming for the Unity game engine, and Interactive Music, which, as the name suggests, was about interactive music. This was also a great opportunity to collaborate with Scott Reitherman, as we had been talking for a while about a continuing piece for his Ambient Machine project and had been looking for a time to create its 3D virtual reality ‘big brother’. This was the birth of Sound Objects VR.

At its core, Sound Objects is a VR composition app that lets you compose music by augmenting the physics of objects in space. The music is generated by object collisions, while the physics determine repetition and speed.
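
To make that concrete, here is a minimal sketch of the idea (not our actual component; the class name and numbers are placeholders): each sphere carries an AudioSource and a Rigidbody, and every collision triggers its clip, so the bounce rate of the physics body becomes the rhythm and the impact strength becomes the dynamics.

    using UnityEngine;

    // Hypothetical sketch: a bouncing sphere that plays its clip on every collision.
    // It also needs a Collider (e.g. a SphereCollider) on the same GameObject.
    [RequireComponent(typeof(Rigidbody), typeof(AudioSource))]
    public class SoundSphere : MonoBehaviour
    {
        AudioSource source;

        void Awake()
        {
            source = GetComponent<AudioSource>();
            source.spatialBlend = 1f; // fully 3D, so walking around the object mixes it
        }

        void OnCollisionEnter(Collision collision)
        {
            // The harder the impact, the louder the hit – physics drives the dynamics.
            float strength = Mathf.Clamp01(collision.relativeVelocity.magnitude / 10f);
            source.PlayOneShot(source.clip, strength);
        }
    }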

Prototyping the idea

We started by creating a really simple 3D world that lets the user spawn new sound spheres and bounce them continuously to create musical patterns. This was our first working demo, so I tested it in both Interactive Music and Code Lab, and was surprised to find that people found the interaction very playful and mainly commented on additional experience elements such as scenery, effects and compositional changes. One thing that struck me while demoing at the midterm was the power of spatial audio as a mixing tool: instead of having to mix the piece for users, if they get to walk around the sound-emitting objects, they will intuitively mix it themselves.

Beyond the midterm

After midterm, the main goals were:

  1. Work on the world (scene, graphics, effects)
  2. Work on the audio elements and the compositional aspect of the experience
  3. Implement the scene using a VR headset
  4. Figure out and refine the interaction

We started by changing the world. Our midterm environment was essentially a gray sandbox, so we had to create the environment from scratch. After some brainstorming and user feedback, we decided to go with a desert scene in which you are surrounded by sand and mountains. This works well because the environment is familiar (i.e. it is not beyond conceived reality), yet it is peaceful and minimal, letting the composition act as the main thing. We designed the terrain with Unity's terrain tools and worked with E-on Vue to create specific mountain geometries. We also used keijiro's amazing HexBokeh shader to add some depth of field to the scene.

Alongside getting the environment to work well, we continued to develop the sounds for the experience, and even built a day-to-night scale transition, which we will implement in the future as part of a story arc in the experience. The sounds are all loaded into a main static dictionary that is shared between all the sound objects for playing clips. This approach also reduces adding new sounds to a single call to the buildSoundList method.
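
A stripped-down sketch of that pattern (buildSoundList is the method from our project; everything else here is an assumption for illustration): a static dictionary maps a sound name to its clips, is filled once, and every sound object reads from it.

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical sketch of the shared clip store: built once, read by every sound object.
    public static class SoundLists
    {
        public static Dictionary<string, List<AudioClip>> Clips =
            new Dictionary<string, List<AudioClip>>();

        // Adding a new sound set is just another call to buildSoundList.
        public static void buildSoundList(string name, string resourceFolder)
        {
            AudioClip[] loaded = Resources.LoadAll<AudioClip>(resourceFolder);
            Clips[name] = new List<AudioClip>(loaded);
        }
    }

A sphere can then grab a clip with something like SoundLists.Clips["bells"][i], and wiring up a new set of sounds is a single buildSoundList call.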

Another realization we had along the way was that we wanted to be able to control the properties of objects that share the same sound. For this, we added a SoundProps class, which uses a structure similar to SoundLists and essentially stores properties that the objects then use at a later stage.
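
Sketched in the same hedged way (the field names are assumptions), SoundProps mirrors the keying of SoundLists, so every object that shares a sound name also shares its playback and physics properties:

    using System.Collections.Generic;

    // Hypothetical sketch: per-sound properties shared by all objects using that sound.
    public class SoundProps
    {
        public float volume = 1f;
        public float pitch = 1f;
        public float bounciness = 0.9f; // fed into the sphere's physics material

        // Keyed by the same sound names as SoundLists.Clips.
        public static Dictionary<string, SoundProps> Props =
            new Dictionary<string, SoundProps>();
    }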

With the sounds in place, we also worked on bringing this world to a VR headset. Initially we wanted to go with the Vive, but since we had access to more Rifts at ITP, we used the Rift alongside the Oculus Touch controllers as hands. After some time learning the API, one thing we had to tackle right away was walking in VR. We decided to use the joystick on the left Touch controller, but the provided Oculus code only offered a method that requires you to calibrate the forward vector every time you run your game, so it was time to hack. To fix this I added a public field for the right eye camera to the OVRPlayerController script and used it to create the forward vector for the joystick; that way, rotating your head also changes the joystick controls. The full script can be found here.
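
The gist of the fix, sketched outside the actual OVRPlayerController (the class below is a hypothetical stand-in, not our full script): project the right eye camera's forward and right vectors onto the ground plane and use them to orient the thumbstick input, so turning your head also turns the joystick's notion of 'forward'.

    using UnityEngine;

    // Hypothetical sketch of head-relative joystick locomotion.
    // rightEyeCamera is assigned in the Inspector (in our case the OVR right eye anchor).
    [RequireComponent(typeof(CharacterController))]
    public class HeadRelativeMove : MonoBehaviour
    {
        public Transform rightEyeCamera;
        public float speed = 2f;
        CharacterController controller;

        void Awake() { controller = GetComponent<CharacterController>(); }

        void Update()
        {
            Vector2 stick = OVRInput.Get(OVRInput.Axis2D.PrimaryThumbstick);

            // Flatten the head's axes so looking up or down doesn't tilt the movement.
            Vector3 forward = Vector3.ProjectOnPlane(rightEyeCamera.forward, Vector3.up).normalized;
            Vector3 right   = Vector3.ProjectOnPlane(rightEyeCamera.right,   Vector3.up).normalized;

            Vector3 velocity = (forward * stick.y + right * stick.x) * speed;
            controller.SimpleMove(velocity);
        }
    }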

After testing many different interaction approaches, we decided the sound objects won't be spawned but placed in trays that share compositional qualities (i.e. they work well together), and the user navigates the space, creating one big composition out of three spatial areas of smaller compositions.

Demo, demo, demo, demo

Here is a video showing a demo we made in the Interactive Music final class.

Here is a short making-of video showing some of the aforementioned stages.

Next up

We would like to continue working on Sound Objects and address the following:

  • More compositional elements
  • Audio-reactive scenery
  • ‘Arc’ story for the experience that changes over time
  • Some effects for raycasting balls, and ball groups of the same sound

I would also like to thank Matt Parker and Yotam Mann for their guidance and help, for the knowledge each course provided, and for the way it shaped this project – THANK YOU!

Hi, I am Or. I am a director, developer and artist. My current research interests are sound interaction, computer vision and immersive media development.
