icm


Forever – ICM Final Project

My ICM final project started the same way many projects do: brainstorming, erasing, writing again, rethinking and feeling confused. Given my past and current interest in sound, and the fact that I took the ‘Software Synthesis’ class this semester in the Music Technology department at NYU Steinhardt, I was fairly interested in challenging myself in the realms of synthesis, sound and composition. And so ‘Forever’ was born (only conceptually, of course).

Class presentation of a very early version of the idea behind ‘Forever’

And so, the first step was nailing the idea down: what is it? How does it work (i.e. how does it sound)? And what does it require me to know that I might have to pick up as I build it?

Conceptualisation:

I decided to build a multiplayer web app that uses each user’s GPS location, together with the locations of all other connected users, to generate a collective musical composition. Essentially, I wanted this project to ask questions such as:

  1. What feels like a good balance between the users’ impact and the server’s impact in generating the composition? How will this balance translate to sound?
  2. What are components that signify progression in the piece?
  3. What data is meaningful data in respect to this composition?

And that’s the point where I think about Michel Foucault for a while and start sketching…

Some references that are worth mentioning are:

  • Iannis Xenakis – and his ‘stochastic’ compositions
  • David Cope – and his writings and compositions which include algorithmic composition and artificial intelligence
  • Musique concrète compositions
  • Musimathics book

Implementation:

The first thing I realized is that I knew nearly nothing about server-side programming, WebSockets and server-to-client communication, so I started learning socket.io and Express and programming my own server. I started with a very basic Express Node server.
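For reference, a minimal sketch of the kind of server I started with could look like the following (the file names, port number and ‘public’ folder are illustrative assumptions, not the project’s exact code):

// server.js – a very basic Express + socket.io server (illustrative sketch)
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);

// Serve the client sketch from a static folder
app.use(express.static('public'));

// Keep track of connections as clients join and leave
io.on('connection', function(socket) {
 console.log('User connected: ' + socket.id);
 socket.on('disconnect', function() {
  console.log('User disconnected: ' + socket.id);
 });
});

http.listen(3000, function() {
 console.log('Listening on port 3000');
});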

After building my first Express server, I moved on to sketching some client code that gets the client’s current position and logs it on the page (this position was later transmitted back to the server via WebSockets).

After getting these two to work, and with the help of Dror Ayalon, I started sending each client’s GPS coordinates to the server.
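A hedged sketch of that client-side flow (the event name and page output are illustrative; it assumes the socket.io client script is loaded on the page):

// client.js – get the browser's position and send it to the server (illustrative sketch)
var socket = io(); // connect back to the socket.io server

if (navigator.geolocation) {
 navigator.geolocation.getCurrentPosition(function(position) {
  var coords = {
   lat: position.coords.latitude,
   lng: position.coords.longitude
  };
  // Log the position on the page, then transmit it over the web socket
  document.body.innerHTML += 'Lat: ' + coords.lat + ' Lng: ' + coords.lng;
  socket.emit('userPosition', coords);
 });
}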

This later led to a full week of map projections and Mapbox API integration: getting the map, mapping latitude and longitude to x and y on the canvas, and finally drawing icons for the users themselves on a separate p5.js canvas that is overlaid on the map.
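The mapping itself can be sketched as a simple linear interpolation with p5’s map() (a rough approximation that ignores the map projection; the bounds object is an assumption, not the project’s actual code):

// Convert latitude/longitude to canvas x/y within the visible map bounds (illustrative sketch)
function latLngToXY(lat, lng, bounds) {
 // bounds = {north, south, east, west} of the area currently shown by the map
 var x = map(lng, bounds.west, bounds.east, 0, width);
 var y = map(lat, bounds.north, bounds.south, 0, height); // north sits at the top of the canvas
 return {x: x, y: y};
}

function drawUser(user, bounds) {
 var pos = latLngToXY(user.lat, user.lng, bounds);
 ellipse(pos.x, pos.y, 10, 10); // the user's icon
}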

Musical interpretation of the data:

Once I had the user locations, I divided the screen’s height into 5 zones. My musical scales are stored in 15-slot arrays, so each zone on the map has a range of 3 notes from which it can choose.

This image illustrates the 5-zone division well

This was built in a way that allows scales to be changed just by changing the input to the changeNote() function, which sets the notes for each user based on their ‘zone’.
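A sketch of that zone-to-note logic, under the 5-zone / 15-slot assumption described above (the note names and function names here are illustrative, not the actual changeNote() implementation):

// Pick a note for a user from the 3-note slice that matches their vertical zone (illustrative sketch)
var scale = ['C3', 'D3', 'E3', 'G3', 'A3', 'C4', 'D4', 'E4', 'G4', 'A4', 'C5', 'D5', 'E5', 'G5', 'A5'];

function noteForUser(user) {
 var zone = constrain(floor(map(user.y, 0, height, 0, 5)), 0, 4); // which of the 5 zones the user is in
 var slice = scale.slice(zone * 3, zone * 3 + 3); // the 3 notes available to that zone
 return random(slice); // choose one of them
}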

From this point on it was all about building the functionality in a flexible way that allowed me to change things instantly and test them again rapidly. For instance, the styling of the map went through a couple of revisions.

Dark map

Visual work:

I decided to use p5.js for the drawing because of its flexibility and ease of use. Using p5 I draw a second canvas which is used for the cursor and user graphics. This canvas is also used for interactions such as cursor control with the mouse (or touch) and the looping feature.
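A hedged sketch of how such an overlay canvas can be set up in p5 (the z-index value is an assumption):

// Create a transparent p5 canvas that sits on top of the map (illustrative sketch)
function setup() {
 var cnv = createCanvas(windowWidth, windowHeight);
 cnv.position(0, 0);         // pin the canvas to the top-left corner of the page
 cnv.style('z-index', '10'); // stack it above the Mapbox element
}

function draw() {
 clear(); // keep the canvas transparent so the map stays visible
 // draw the cursor and user graphics here
}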

<<video coming soon>>

The repository for the project, containing all the code written, can be found here

The live link for the website can be found here

 

 

ICM – Lesson #7

For this week’s assignment I wanted to create a small application that lives entirely on the canvas, to practice everything we have gone over so far. I started by conceptualizing an audio player app that uses hand gestures to determine which music I am interested in, and the SoundCloud API to get a playlist of the selected genre.


With that in mind, I also wanted to practice using constructor functions, so I created an app constructor that holds all of the initialization, screens and variables inside it.
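A stripped-down sketch of that constructor pattern (the property and method names are illustrative):

// An app constructor that bundles initialization, screens and state (illustrative sketch)
function App() {
 this.currentScreen = 'menu'; // which screen is currently shown
 this.genre = null;           // the genre chosen by a gesture
 this.tracks = [];            // playlist fetched from the SoundCloud API

 this.init = function() {
  // create the canvas, load assets, register gesture listeners...
 };

 this.drawScreen = function() {
  // draw whatever matches this.currentScreen
 };
}

var player = new App();
player.init();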

I used LeapTrainer.js to train the Leap Motion to listen for specific gestures (i.e. combinations of the different positions and rotations it picks up), and registered them as event listeners. Following that, I used an image and an overlaid video to create an ‘animation on hover’ effect using p5’s dist() function.
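The hover effect boils down to a distance check; a hedged sketch (the positions, sizes and threshold are illustrative):

// Swap a static image for a video when the cursor gets close enough (illustrative sketch)
// img is a p5.Image, vid is a p5.MediaElement created with createVideo()
function drawGenreButton(x, y, img, vid) {
 var d = dist(mouseX, mouseY, x, y); // distance from the cursor to the button's centre
 if (d < 50) {
  image(vid, x - 50, y - 50, 100, 100); // the 'animation on hover'
 } else {
  image(img, x - 50, y - 50, 100, 100);
 }
}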

Currently only the Metal genre works, even though event listeners have been registered for all the gestures. One of the topics I got confused about was the audio player structure (e.g. keeping track of the current track, and building a modular system that could jump a song forward, backwards, etc.).

A link to Github

Synthesis // icm + physcomp = ♥

To kick off the long weekend we had the Synthesis workshop, and even though at first I was sure we were going to deal with sound synthesis, we actually synthesised the Intro to Computational Media and Physical Computing courses, which boils down to endless possibilities.

I was paired with Cristobal Valenzuela, and so we started by researching which sensor we would like to use for our ‘Physical Variable’ challenge. We decided to use the Colorpal RGB Sensor and send the red, green and blue data over to p5. While we were figuring out how to best serve the data over serial to p5, we came up with the simple idea of making a game for children, where they get the name of a fruit and have to find something with the same color as the fruit and scan it.

Demo time:

To determine whether the correct color was being sensed, we used p5’s dist() function, treating the sensed RGB values and the target fruit color as points in 3D space.
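A hedged reconstruction of that check (the threshold and variable names are illustrative, not the original snippet):

// Compare the sensed color to the target fruit color in RGB space (illustrative sketch)
var target = {r: 255, g: 165, b: 0}; // e.g. an orange

function isMatch(sensedR, sensedG, sensedB) {
 var d = dist(sensedR, sensedG, sensedB, target.r, target.g, target.b); // p5's 3D dist()
 return d < 60; // close enough in color space counts as a find
}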

Some of the things we didn’t have time to build, but discussed and agreed could be interesting to try, are timing the actual process of ‘search & scan’, keeping score and varying difficulty levels.

Link to github

ICM – Lesson #2

For the second lesson of ICM, I decided to tackle a music visualiser. After much thought I ended up creating a 3D music visualiser that analyses the sound using the Web Audio API and controls a noise function applied to a 3D sphere using Three.js and a GLSL shader. Let’s dive into it!

I would like to thank Ian McEwan, who wrote the shader used in the attached example, which really helped me find my way into this visualiser. And a huge thanks to Marco Guarino (who is also an ITP student and a great friend).

  • First step – Get the audio analysed:

I start by creating a Web Audio context, which enables me to create an audio element and hook it up to an analyser. The analyser writes its frequency data into an unsigned byte array (a Uint8Array), which I will come back to later on, since it is read in the update function for the whole scene.

//Web Audio Stuff//
var audio = new Audio();
audio.src = 'marco.mp3';
audio.controls = true;
document.body.appendChild(audio);

var context = new AudioContext();
var analyser = context.createAnalyser();
var freqByteData = new Uint8Array(analyser.frequencyBinCount); // filled on every update

window.addEventListener('load', function(e) {
 // Our <audio> element will be the audio source.
 var source = context.createMediaElementSource(audio);
 source.connect(analyser);
 analyser.connect(context.destination);
 audio.play();
}, false);

  • Second step – Make the 3D scene:

The 3D scene is actually a lot simpler than it might look at first. I started by rendering a 360° equirectangular image of the terrain using Vue (an environment modelling suite), in two resolutions: 3840×2160 for the background and 1280×720 for the projected reflections on the actual sphere.

The equirectangular terrain render used for the sphere projection

This image gets wrapped onto a sphere as a texture using Three.js’s THREE.ImageUtils.loadTexture and, as part of the sphere element, is fed into the shader for displacement based on a Perlin noise function. The last element in the scene is a very simple perspective camera that is moved around the object by mouse-down events.
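A hedged sketch of that setup, using the Three.js API of the time (the uniform names, texture path and shader script tags are illustrative assumptions):

// Wrap the equirectangular render onto a sphere and feed it to the displacement shader (illustrative sketch)
// Assumes an existing THREE.Scene named 'scene'
var texture = THREE.ImageUtils.loadTexture('plate_1_projection.jpg'); // illustrative path

var uniforms = {
 tReflection: { type: 't', value: texture }, // the rendered terrain image
 uAudioLevel: { type: 'f', value: 0.0 }      // updated from the analyser every frame
};

var material = new THREE.ShaderMaterial({
 uniforms: uniforms,
 vertexShader: document.getElementById('vertexShader').textContent,
 fragmentShader: document.getElementById('fragmentShader').textContent
});

var sphere = new THREE.Mesh(new THREE.SphereGeometry(20, 128, 128), material);
scene.add(sphere);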

  • UPDATE, UPDATE & UPDATE:

The update function plays a major role in every WebGL scene, but here it also serves as the containing function for the audio analysis on each draw cycle.

var aveLevel = 0; // average level across all frequency bins, read by the rest of the scene

function update() {
 requestAnimationFrame(update);
 analyser.getByteFrequencyData(freqByteData); // fill the array with the current spectrum
 var length = freqByteData.length;

 var sum = 0;
 for (var j = 0; j < length; ++j) {
  sum += freqByteData[j];
 }
 aveLevel = sum / length;
}

By using a for loop to iterate over the frequency data and dividing the sum of the amplitudes by the array’s length, I determine the average volume, but I can also extract individual values for specific frequency ranges.
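For example, averaging only a narrow slice of the bins isolates a rough low-frequency level (the bin count here is an illustrative choice):

// Average only the lowest bins for a rough 'bass' level (illustrative sketch)
var bassSum = 0;
for (var k = 0; k < 8; ++k) {
 bassSum += freqByteData[k];
}
var bassLevel = bassSum / 8;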


Lastly, I included a dat.GUI interface, both as a means of learning to use it and as a set of controls for the shader’s response to sound, and even the camera’s reaction to sound.
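Wiring that up with dat.GUI takes only a few lines; a hedged sketch (the parameter names and ranges are illustrative):

// Expose shader and camera response parameters through dat.GUI (illustrative sketch)
var params = {
 noiseAmount: 1.0, // how strongly the shader displaces the sphere
 cameraShake: 0.2  // how much the camera reacts to the audio level
};

var gui = new dat.GUI();
gui.add(params, 'noiseAmount', 0.0, 3.0);
gui.add(params, 'cameraShake', 0.0, 1.0);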

The link to the visualiser’s repo & demo is https://juniorxsound.github.io/music_visualiser/

In the future, I would like to get a better grasp of GLSL and shader development to gain more confidence in perhaps writing my own shaders for these types of experiences.

ICM – Lesson #1

We kicked off ICM with an introduction to P5.js. Given the opportunity to learn a new library I was very keen to get my hands on the keyboard and start working, but first things first: CONCEPT, CONCEPT and, yeah, CONCEPT.

After thinking (at times out loud) about possible ideas, I decided to pay tribute to one of my all-time favorite desktop drawing apps, Alchemy. One feature I particularly love about it is its audio-reactive brush. Simply put, it allows the user to hook up their microphone input signal to an analyser and assign its values to brush size and pattern. Time to get to work.


I started by investigating the p5.dom.js and p5.js references to get a sense of how the library operates, and found the learning curve to be surprisingly small. I then went on to use p5.sound.js’s abstraction of the Web Audio API to capture the user’s microphone input and get values back into a variable.
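A minimal sketch of that capture step with p5.sound (the canvas size and the brush-size mapping are illustrative):

// Capture the microphone with p5.sound and drive the brush size from its level (illustrative sketch)
var mic;

function setup() {
 createCanvas(800, 600);
 mic = new p5.AudioIn();
 mic.start(); // asks the browser for microphone access
}

function draw() {
 var level = mic.getLevel();          // amplitude between 0.0 and 1.0
 var size = map(level, 0, 1, 2, 200); // scale it into a usable brush size
 ellipse(mouseX, mouseY, size, size); // the audio-reactive brush
}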


For finishing touches I added jQuery to control some UI elements in the DOM (I guess you can get out of jQuery, but you can’t get the jQuery out of you) and wrapped it up with a slider that acts as a multiplier on the sound input for quieter environments and/or effects. Last but not least, I added colors and functionality.


I am currently working on a few improvements, including noise patterns controlled by sound, different brush types and better audio analysis.

 

The link to the code & demo can be found here: https://github.com/juniorxsound/ICM-Fall-2016/tree/master/Modo