Interactive Music

Sound Objects – Final

Towards the end of this semester I had the opportunity to combine two of my favorite classes into a single final: Code Lab 1, which was about programming for the Unity game engine, and Interactive Music, which, as the name suggests, was about interactive music. This was also a great opportunity to collaborate with Scott Reitherman, as we had been talking about a continuing piece for his Ambient Machine project for a while and had been looking for a time to create its 3D virtual reality ‘big brother’. This was the birth of Sound Objects VR.

At its core, Sound Objects is a VR composition app that lets you compose music by manipulating the physics of objects in space. The music is generated by object collisions, while the physics determine repetition and speed.
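
To make that core mechanic concrete, here is a minimal sketch of what a collision-driven sound object could look like in Unity; the component and field names are my own for illustration, not the project's actual code.

```csharp
using UnityEngine;

// Illustrative sketch: a sphere that turns every physical impact into a musical event.
[RequireComponent(typeof(AudioSource), typeof(Rigidbody))]
public class SoundSphere : MonoBehaviour
{
    public AudioClip clip;   // the sound this object contributes to the composition

    void OnCollisionEnter(Collision collision)
    {
        // Impact velocity maps to loudness, so the physics of the bounce
        // shape the dynamics of the resulting pattern.
        float volume = Mathf.Clamp01(collision.relativeVelocity.magnitude / 10f);
        GetComponent<AudioSource>().PlayOneShot(clip, volume);
    }
}
```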

Prototyping the idea

We started by creating a really simple 3D world that lets the user spawn new sound spheres and bounce them continuously to create musical patterns. This was our first working demo, so I tested it in both Interactive Music and Code Lab, and was surprised to find that people found the interaction very playful and mainly commented on additional experience elements, such as scenery, effects and composition changes. One of the main things that struck me while demoing at the midterm was the power of spatial audio as a mixing tool: instead of having to mix everything for the user, if they get to walk around the sound-emitting objects, they will intuitively mix it themselves.
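
As a rough illustration of that spatial-mixing point, the sketch below marks a sound object's AudioSource as fully 3D so that the listener's distance and direction do the mixing; the specific rolloff values are assumptions, not the project's settings.

```csharp
using UnityEngine;

// Illustrative sketch: a fully 3D AudioSource lets the player's position act as the mixer.
[RequireComponent(typeof(AudioSource))]
public class SpatializeSphere : MonoBehaviour
{
    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                          // 1 = fully 3D, 0 = plain 2D
        source.rolloffMode = AudioRolloffMode.Logarithmic; // natural-feeling distance falloff
        source.minDistance = 1f;                           // full volume inside this radius
        source.maxDistance = 25f;                          // fades out toward this radius
    }
}
```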

Beyond the midterm

After midterm, the main goals were:

  1. Work on the world (scene, graphics, effects)
  2. Work on the audio elements and the compositional aspect of the experience
  3. Implement the scene using a VR headset
  4. Figure out and refine the interaction

We started by changing the world. Since our midterm environment was essentially a gray sandbox, we had to create the environment from scratch. After some brainstorming and user feedback, we decided to go with a desert scene in which you are surrounded by sand and mountains. This works well because the environment is familiar (i.e. it’s not beyond conceived reality), yet it is very peaceful and minimal, allowing the composition to remain the main focus. We designed the terrain with the Unity terrain plugin and worked with E-on Vue to create specific mountain geometries. We also used keijiro’s amazing HexBokeh shader to add some depth of field to the scene.

Alongside getting the environment to work well, we continued to develop the sounds for the experience, and actually developed a day-to-night scale transition, which we will implement in the future as part of an ‘arc’ story for the experience. The sounds all get loaded into a main static dictionary, which is shared between all the sound objects in order to play clips. This approach also reduces adding new sounds to a single call to the buildSoundList method.
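
A minimal sketch of that shared storage idea, assuming clips live in a Resources folder and are keyed by name; the SoundLists and buildSoundList names come from the project, everything else here is illustrative.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the shared sound storage: one static dictionary used by every sound object.
public static class SoundLists
{
    public static Dictionary<string, AudioClip> Clips = new Dictionary<string, AudioClip>();

    // Adding a new set of sounds reduces to one call per folder of clips.
    public static void buildSoundList(string folder)
    {
        foreach (AudioClip clip in Resources.LoadAll<AudioClip>(folder))
        {
            Clips[clip.name] = clip;   // objects later look clips up by name
        }
    }
}
```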

Another realization we had along the way was that we wanted to be able to control the properties of objects that share the same sound. For this, we added a SoundProps class, which uses a structure similar to SoundLists and essentially stores properties that are then used by the objects at a later stage.
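
Here is a rough sketch of what such a SoundProps store could look like; the specific properties (volume, pitch) are assumptions for illustration.

```csharp
using System.Collections.Generic;

// Sketch: per-sound properties keyed the same way as SoundLists, so every object
// sharing a sound also shares its playback settings. Fields are illustrative.
public class SoundProperties
{
    public float volume = 1f;
    public float pitch = 1f;
}

public static class SoundProps
{
    public static Dictionary<string, SoundProperties> Props =
        new Dictionary<string, SoundProperties>();

    public static SoundProperties Get(string soundName)
    {
        // Objects query their shared properties at playback time.
        SoundProperties props;
        return Props.TryGetValue(soundName, out props) ? props : new SoundProperties();
    }
}
```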

With the sounds in place, we were also working on bringing this world to a VR headset. Initially we wanted to go with the Vive, but since we had access to more Rifts at ITP, we used the Rift alongside the Oculus Touch controllers as hands. After some time learning the API, one thing we had to tackle right away was being able to walk in VR. We decided to use the joystick on the left Touch controller, but the Oculus code only provided a method that requires you to calibrate the forward vector every time you run your game, so it was time to hack. To fix that, I added a public field to the OVRPlayerController script for the right eye camera and used it to create the forward vector for the joystick; that way, if you rotate your head, you also change the joystick controls. The full script can be found here.
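
The gist of the fix, sketched independently of the actual OVR code (the field names and input call below are placeholders): derive the walk direction from the eye camera's current orientation every frame instead of from a one-time calibrated forward vector.

```csharp
using UnityEngine;

// Simplified sketch of head-relative locomotion; not the actual OVRPlayerController code.
public class HeadRelativeLocomotion : MonoBehaviour
{
    public Transform eyeCamera;   // assigned to the right eye camera, as in the hack
    public float speed = 2f;

    void Update()
    {
        Vector2 stick = GetLeftStick();   // placeholder for reading the left Touch joystick

        // Flatten the camera's forward/right onto the ground plane so looking
        // up or down doesn't tilt the walk direction.
        Vector3 forward = Vector3.ProjectOnPlane(eyeCamera.forward, Vector3.up).normalized;
        Vector3 right   = Vector3.ProjectOnPlane(eyeCamera.right,   Vector3.up).normalized;

        transform.position += (forward * stick.y + right * stick.x) * speed * Time.deltaTime;
    }

    Vector2 GetLeftStick()
    {
        // Stand-in for the Touch thumbstick query (e.g. OVRInput's primary thumbstick axis).
        return new Vector2(Input.GetAxis("Horizontal"), Input.GetAxis("Vertical"));
    }
}
```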

After testing many different interaction approaches, we decided the sound objects wouldn’t be spawned, but placed in trays that share compositional qualities (i.e. they work well together), and the user would navigate the space, creating one big composition made up of three spatial areas of smaller compositions.

Demo, demo, demo, demo

Here is a video showing a demo we made in the Interactive Music final class.

Here is a short making-of video showing some of the aforementioned stages.

Next up

We would like to continue working on Sound Objects and address the following:

  • More compositional elements
  • Audio-reactive scenery
  • ‘Arc’ story for the experience that changes over time
  • Some effects for raycasting balls, and ball groups of the same sound

I would also like to thank Matt Parker and Yotam Mann for their guidance and help, and for the knowledge each course provided and the way it shaped the project – THANK YOU

Interactive Music – Midterm

For the midterm project in Interactive Music I decided to continue developing a project I started working on this semester with Scott Reitherman. The project grew out of a set of meetings in which we discussed stochastic approaches to music creation and our shared interest in rethinking how compositions could be created using intuitive methods and tools, enabling essentially anybody (regardless of prior musical knowledge) to compose intuitively.

As we started laying the foundations for the project with a virtual reality HMD in mind, I decided to focus on building a ‘demo’ scene that could emphasize, or rather outline, our main objectives for the experience while it is still a work in progress.

Realizing the gesture

One of the main aspects of the assignment was the use of gesture, and since I decided to use a virtual reality headset, I needed a ‘3D agent’ to bridge the virtual and the physical. I chose the Oculus Touch controller, which conveys a hands-like feeling during the experience.

The experience

The experience consists of a virtual world in which you get to spawn spheres that trigger sounds when they collide with the floor. From an interaction standpoint, the main focus is your interaction with the physics engine controlling the spheres’ movement through space after they are created. To demonstrate that point, I decided to create a physical model in which the bouncing spheres maintain their energy, or simply put, they bounce forever.
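
A minimal sketch of that ‘bounce forever’ model, assuming it is enough to restore each sphere's speed after a bounce (Unity's solver otherwise bleeds a little energy even at full bounciness); the target speed is an arbitrary assumption.

```csharp
using UnityEngine;

// Sketch: keep a sphere's kinetic energy constant so its bouncing pattern never decays.
[RequireComponent(typeof(Rigidbody))]
public class EternalBounce : MonoBehaviour
{
    public float targetSpeed = 5f;   // the energy level the sphere should keep

    Rigidbody body;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
    }

    void OnCollisionExit(Collision collision)
    {
        // After every bounce, restore the speed while keeping the new direction.
        if (body.velocity.sqrMagnitude > 0.0001f)
        {
            body.velocity = body.velocity.normalized * targetSpeed;
        }
    }
}
```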

In the above examples, spheres are spawned by the mouse’s X and Y coordinates on the screen at a fixed distance from the player. After getting that part working, I moved on to the VR integration.
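
A minimal sketch of that spawning scheme: the mouse position is projected into the world at a fixed distance along the camera ray. The prefab reference and distance value are placeholder assumptions.

```csharp
using UnityEngine;

// Sketch: click to drop a sound sphere a fixed distance in front of the player.
public class MouseSpawner : MonoBehaviour
{
    public GameObject spherePrefab;
    public float spawnDistance = 5f;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Convert the 2D mouse position into a point along the camera ray.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Vector3 spawnPoint = ray.GetPoint(spawnDistance);
            Instantiate(spherePrefab, spawnPoint, Quaternion.identity);
        }
    }
}
```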

The demo above demonstrates ball spawning and also the ability to ‘pause’ the balls in mid-air (which in turn pauses their sound).
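
A rough sketch of how such a pause can work, assuming the rigidbody is frozen and the AudioSource paused together; the details are illustrative, not the demo's actual code.

```csharp
using UnityEngine;

// Sketch: freezing the ball also pauses its sound; releasing it resumes both.
[RequireComponent(typeof(Rigidbody), typeof(AudioSource))]
public class PausableBall : MonoBehaviour
{
    Rigidbody body;
    AudioSource source;
    Vector3 storedVelocity;
    bool paused;

    void Awake()
    {
        body = GetComponent<Rigidbody>();
        source = GetComponent<AudioSource>();
    }

    public void TogglePause()
    {
        paused = !paused;
        if (paused)
        {
            storedVelocity = body.velocity;  // remember the motion before freezing
            body.isKinematic = true;         // hold the ball in place
            source.Pause();                  // which in turn pauses its sound
        }
        else
        {
            body.isKinematic = false;
            body.velocity = storedVelocity;  // resume the bounce where it left off
            source.UnPause();
        }
    }
}
```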

A link to desktop versions of the demo can be found here for both Win/Mac.

What next?

  1. Adding interactions with the physical model that enable the user to further understand the connection between the physical characteristics of the world and the composition they create (e.g. less gravitational force makes repetitions less frequent, which results in a slower overall composition – see the sketch after this list).
  2. A sound-selection GUI that enables the user to both change and audition different musical components.
  3. Figuring out a way to deal with non-rhythmic sounds (e.g. drones, pads, ambient components)
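
As a sketch of point 1, scaling the global gravity is one simple way to turn a physical property into a tempo control; the slider range below is an arbitrary assumption.

```csharp
using UnityEngine;

// Sketch: weaker gravity means longer arcs between floor hits, i.e. a slower composition.
public class GravityTempo : MonoBehaviour
{
    [Range(0.1f, 2f)]
    public float gravityScale = 1f;   // 1 = normal gravity, lower = slower repetitions

    void Update()
    {
        Physics.gravity = new Vector3(0f, -9.81f * gravityScale, 0f);
    }
}
```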

Interactive Music – Rethinking the score

For my score realization I chose to focus on making generative music using ‘unconscious’, or rather ‘subconscious’, interaction. I started researching how I could use browsing history to generate a score based on the user’s choice of online content. After reviewing the ideas in class and explaining mine to a partner, I realized that this idea was still vague to me, and given the nature of the assignment I decided to take a different route with an idea that was clearer to me.

The Score

I have created a couple of music visualizers over the past years, and one thing that always attracted me was the concept of driving visual effects, animations and visual occurrences with data generated from audio analysis and/or MIDI.

A web audio visualizer

One thing that always intrigued me was the ability to ‘reverse engineer’ the audio-reactive approach used to create visualizers into something best described as ‘sonification of graphics’ – essentially using graphics to generate audio.

The Execution

For my realization I chose to focus on a graphical simulation system that generates the score. More specifically, I used a ‘metaball’ simulation to define the behavior of 3D spheres in space, and that simulation is also responsible for generating the score. Since I was aiming for a certain style, the score uses samples played through a granular sampler built with Tone.js. That way, the generation of the score is to some extent pre-determined, but the user, by controlling the simulation, still controls the texture and the turning on and off of the samples.

The result is a granular-sampler-based composition where the user gets to ‘spawn’ metaballs into the simulation, which in turn triggers different samples in the composition. The position of a metaball in space changes the panning of its sound in the composition.

“This song ain’t my song” – Manifesto

Growing up, I found my way into the musical world through classical piano studies and, down the road, percussion and drums. Later on, I found myself cleaning and operating a rehearsal room, which was a dream come true – I get to help people create music? YES, YES and YES!

After some time, I started studying cinema and shifted my interest in music into a broader interest in sound in cinema and media, which would lead me to experiment with programming and audio-reactive art, and also to write a thesis paper about interactivity, listening modes and the similarities between cinema and interactive experiences from a sonic point of ‘listening’, which I ended up naming “The choices we hear” – original, right?

*Disclaimer: I do have tremendous respect for the craft of writing, producing and recording music; everything mentioned in this post is just my ‘five cents’ on personal aspirations for experiments in music creation, publishing and distribution.

So what’s wrong?

From a creation, publishing and distribution point of view, even though all three topics did transform as the web developed (just one example out of many), the points listed below are still things I personally think are worth investigating:

  • To some extent, we have lost fandom; artwork, for instance, hasn’t been ‘reinvented’ to accommodate the new mediums available.
  • Even though a major portion of music is consumed through streaming, it is still a one-sided dialogue: the listener has no ability to affect, control or ‘personalise’ their listening experience.
  • More generally, since digital distribution services brand themselves as the messenger, I personally feel the gap between artist and listener is actually getting bigger – “it’s just a catalogue, look me up”.
  • One major point from a creation standpoint is that one of the bigger trends in music creation has been emulation – by which I mean emulation of analogue and dynamic processors. While I do think this rise has brought many interesting and well-made tools, I rarely come across experimental tools that aim to break the paradigm of ‘music-making’, whether algorithmic composition, experimental sound design or bizarre sound processing; these are way outside the mainstream, leaving little-to-no financial reason for developers to investigate these options.

Sound Particles is a prime example of one of the few experimental and interesting sound processing and design tools. The Roli Seaboard and the Therevox (http://therevox.com/) are prime examples of a wild reimagination of the fundamental concepts of a keyboard.

So what’s next?

During this semester I would like to create musical and sound experiments using the following as guidelines:

  • Inclusion, Inclusion, Inclusion – use the web for what it’s good for: making things accessible to the masses. Design, implement and code things so they can be used intuitively; music COULD be for everyone.
  • Avoid sticking to the creation<->publishing<->distribution paradigm; the experiences should mix these elements into one cohesive piece, whose creation is part of its distribution.
  • Continuing point one, examine how music could be made using inputs other than music theory knowledge (continuing the line of my ICM final, Forever).
  • Examine possibilities for sound creation and shaping using a 3D environment (the opposite of audio-reactive – maybe graphics-reactive synthesis?).
  • Try getting good sleep, because it helps explain yourself to yourself

With this tone in mind, I look forward to a semester full of sonic experimentation.