This week we had to decide on a concept and start prototyping week 8’s board and its components. After some thought, I narrowed it down to either a musical instrument or a weight scale, and given the time scope of the assignment, I went with the weight scale board.
Having settled on a weight scale, I started researching load cells with Arduinos. I used SparkFun’s guide to load cells and amplifiers and this Instructable to get comfortable with the idea of using a load cell, then ordered an HX711 amplifier breakout board and a 100-gram load cell. As the board came without header pins, I started by soldering header pins onto the breakout board.
After soldering the HX711 amp breakout board, I downloaded the HX711 Arduino library, connected the load cell to the amp board and the amp board to the Arduino, and started measuring. Unfortunately, the load cell didn’t come with mounting screws, and it has to be mounted before it can measure accurately, because otherwise the cell rests against the amplifier board and throws off the measurement (I ordered 3mm screws so I can mount the scale between two laser-cut panels I’ll cut).
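For context, the conversion the HX711 library performs once you calibrate is simple: the amp hands you a raw count, and grams come out of a tare offset plus a calibration factor. The function and numbers below are my own illustration, not the library’s API:

```cpp
// Convert a raw HX711 reading to grams. tareOffset is the raw count
// with nothing on the scale; countsPerGram is found empirically by
// placing a known weight on the (properly mounted) load cell.
float rawToGrams(long raw, long tareOffset, float countsPerGram) {
    return (raw - tareOffset) / countsPerGram;
}
```

Calibrating against a known weight once the cell is mounted between the panels is what pins down the counts-per-gram factor.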
Designing a board
To get started with the board design, I found SparkFun’s GitHub repo with the Eagle schematics for the HX711 breakout board, downloaded it, and started deleting the parts I don’t really need for my board. I also added an ATTiny85 to the schematic and laid out basic headers for power and ground, and for the load cell connections.
For this week’s class we had to make an LED letter controlled by a sensor. After some class debate, I got the letter R and started working on the design. This blog post recaps my whole process in the creation of my LED letter. To make it short and concise, here is a rundown of everything that happened over the last week:
I cut my finger 3 times
I broke a drill 1/32″ bit
I drilled 3 R’s, soldered them entirely and only then realized I had a problem
When I came out of class, the first thing I did was cut last week’s board (kind of getting my feet wet before cutting this week’s assignment). The process went surprisingly smoothly (which says nothing about what was about to come next).
I started by designing the R. I wanted something classic, but also wanted to fit all the components nicely without having to struggle to get everything on top of it.
One important thing I realized during the making process is that it’s probably better (at least for my soldering skills) to use an end mill and make very wide traces (the 1/64″ version is in the picture above).
Fab, Fab & Fab
I started by using an engraving bit, which yielded really nice-looking circuits but was literally impossible to solder, and it got me injured while I was putting the bit on (yes, it’s sharp, who would have thought, right?).
After four versions of the same R I was able to get it right, using the 1/32″ end mill bit and a ton of patience. One thing this homework taught me well was soldering, as every single one of the tryouts was actually soldered to completion. Some of the reasons the first tryouts didn’t work: using the engraving bit, using a double-sided board (both sides are conductive, I know now), and poor soldering skills, which left some of the boards essentially non-functional.
Coding and making it light up
I decided to use a pot to drive the LEDs. Instead of having it just turn the LEDs on and off, I decided to continue with the breathing effect from the previous week and add an off state when the chip reads a pot value lower than 10; all other values control the ‘breathing speed’ of the effect. Here is how the part that controls the LEDs looks:
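A minimal sketch of that logic (my own names and timing constants, written to match the description rather than copied from the sketch):

```cpp
// Pot below 10 = LEDs off; any other value scales the breathing speed.
// Returns a 0..255 brightness suitable for analogWrite().
int breathingBrightness(int potValue, unsigned long nowMillis) {
    if (potValue < 10) return 0;                               // off zone
    unsigned long period = 500 + (unsigned long)potValue * 4;  // ms per breath
    float phase = (float)(nowMillis % period) / period;        // 0..1
    // triangle wave: smooth rise then fall
    float tri = phase < 0.5f ? phase * 2.0f : (1.0f - phase) * 2.0f;
    return (int)(tri * 255.0f);
}
```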
For my score realization I chose to focus on making generative music using ‘unconscious’, or rather ‘subconscious’, interaction. I started researching how I could use browsing history to generate a score based on a user’s choice of online content. After reviewing the ideas in class, getting paired up, and explaining my idea to a partner, I realized the idea was still vague to me, and given the nature of the assignment I decided to choose a different route, with an idea that was clearer to me.
I have created a couple of music visualizers over the past years, and one thing that always attracted me was the concept of driving visual effects, animations and visual occurrences with data generated from audio analysis and/or MIDI.
One thing that always intrigued me was the ability to ‘reverse engineer’ the audio-reactive approach used to create visualizers into something best described as ‘sonification of graphics’: essentially using graphics to generate audio.
For my realization I chose to focus on a graphical simulation system that generates the score. More specifically, I used a ‘metaball’ simulation to define the behavior of 3D spheres in space, which is also responsible for generating the score. As I was aiming for a certain style, score-wise I used samples played through a granular sampler built with Tone.js; that way the generation of the score is to some extent pre-determined, but the user, via the simulation, still controls the texture and the turning on and off of the samples.
The result is a granular-sampler-based composition where the user gets to ‘spawn’ metaballs into the simulation, which in turn triggers different samples in the composition. The position of a metaball in space changes the pan of its sound in the composition.
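The position-to-pan mapping is simple in principle; roughly, a ball’s x position within the simulation bounds normalizes into the -1..1 pan range a stereo panner (like Tone.js’s Panner) expects. This is an assumed sketch, not the project’s actual mapping:

```cpp
// Map a metaball's x position inside [minX, maxX] to a stereo pan
// in -1 (left) .. +1 (right).
float panForX(float x, float minX, float maxX) {
    float t = (x - minX) / (maxX - minX);  // normalize to 0..1
    float pan = t * 2.0f - 1.0f;
    if (pan < -1.0f) pan = -1.0f;          // clamp strays outside bounds
    if (pan > 1.0f) pan = 1.0f;
    return pan;
}
```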
For this week’s homework, we had to finish soldering our chip-programming shield for the Arduino and design our circuit in Eagle. I started by practicing soldering time and time again (and got burned multiple times in the process), and after finishing the shield we started in class, it looked something like this:
During the weekend, I got my soldering station delivered (thanks, Amazon) and decided to re-watch the videos in the lab, plus the NASA soldering tutorials, and practice more. To do so, I used an Arduino proto shield and soldered on two sockets, one for the ATTiny85 and one for the 328, so I have scaling options in the future.
As these proto boards are double-sided, I kept most of the wiring on the bottom, resulting in a clean shield that is easy to carry and to put on and take off. One thing I would improve is adding female headers on the I/O, so I could keep the shield on while using the Arduino as a power source for other circuits. Even though it could use a couple of further mods, the shield actually saved me a lot of time programming chips, since using it feels more like ‘plug and play’ instead of reorganizing the breadboard whenever something doesn’t work.
After the soldering, I started practicing Eagle in order to create the board for the next class. I actually found the workflow of typing commands into Eagle quite meditative once you dive into a long board-design session (with the headphones on) and discover a couple of hours have passed (oh no). As discussed in class, I started in the schematic view and looked for an ATTiny85 schematic symbol online, but all the ones I found were surface-mount, and as we will be covering that topic down the road, I decided to use a standard 8-pin socket, whose holes fit the ATTiny85 socket perfectly.
After thinking about how the sensor would connect, I suddenly realized that since it has its own logic board, it would probably make the most sense to have headers that connect to the appropriate place in the circuit but allow me to mount the sensor’s logic board in a different location (nice! Now to the board design part).
After trying many different configurations, I landed on this one, which fits all the electronics needed in the circuit into a rather small form factor while keeping appropriate space between traces. One thing I did realize while designing in board view is that my VCC and GND were just symbols, meaning I had no way to ‘wire’ them up, so I had to go back into schematic view and create headers where power and ground actually connect to the circuit.
After this semester’s round of confusion, class swaps and drops, I joined the Homemade Hardware class. Prototyping, building and thinking about logic in hardware was something I really enjoyed last semester, so this seemed like a direct line along which I could keep pursuing knowledge in ‘hardware-land’. Enough talking, off to building!
I started by reviewing some of the labs and essentially working out my soldering, drawing and ‘Arduino-ing’ skills, which have faded a little since winter break started. I chose the ATTiny85 chip and decided to use a moisture sensor to drive two status LEDs: one that shows whether there is enough moisture in a plant’s soil, and one to show that it needs more water. After burning the bootloader and following the tutorial on programming the ATTiny using the Arduino as the ISP, I was able to get the Blink demo sketch running (YAY, great success, well, kind of).
After getting it to work, I decided to test my setup, and so I used the Arduino to program a small application that reads a moisture sensor on analog port 1 (A1) and creates a smooth ‘breathing’ LED effect on one of two status LEDs depending on the readings.
After getting the application to function appropriately and checking the moisture sensor’s readings in the Arduino’s serial monitor, it was time to build the circuit using the ATTiny85.
Once all that was in place, I only had to re-implement the code using the correct naming conventions for the ATTiny85’s I/O (which sounds trivial, but it did take me a reasonable amount of time to figure out), upload it using the Arduino as an ISP, and assemble everything on the breadboard.
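The selection logic itself is trivial once the pin names are sorted out; what changes between the Uno and the ATTiny85 is only the pin numbering. A sketch of the part that picks which LED breathes (the threshold and names are my placeholders, not the original code):

```cpp
// Which of the two status LEDs should breathe, based on the 0..1023
// moisture reading. The threshold is a placeholder; calibrate it
// against your own sensor and soil.
enum Led { LED_NEEDS_WATER, LED_MOIST_ENOUGH };

Led activeLed(int moistureReading, int threshold) {
    return moistureReading >= threshold ? LED_MOIST_ENOUGH : LED_NEEDS_WATER;
}
```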
Here is a video demonstrating the final circuit doing its thing
Growing up, I found my way into the musical world through classical piano studies and, down the road, percussion and drums. Later on, I found myself cleaning and operating a rehearsal room, which was a dream come true. I get to work helping people create music? YES, YES and YES!
After some time, I started studying cinema and shifted my interest in music into a broader interest in sound in cinema and media, which led me to experiment with programming and audio-reactive art, and to write a thesis paper about interactivity, listening modes and the similarities between cinema and interactive experiences from a sonic point of ‘listening’, which I ended up naming “The Choices We Hear” (original, right?).
*Disclaimer: I have tremendous respect for the craft of writing, producing and recording music; everything mentioned in this post is just my ‘five cents’ of personal aspirations for experiments in music creation, publishing and distribution.
So what’s wrong?
From a creation, publishing and distribution point of view, even though all three have transformed as the web developed (just one example out of many), the points listed below are still things I personally think are worth investigating:
To some extent, we have lost fandom; artwork, for instance, hasn’t been ‘reinvented’ to accommodate the new mediums available.
Even though a major portion of music is consumed through streaming, it is still a one-sided dialogue; the listener has no ability to affect, control or ‘personalize’ their listening experience.
A general note: since these digital distribution services brand themselves as the messenger, I personally feel that the gap between artist and listener is actually getting bigger (“it’s just a catalogue, look me up”).
One major point from a creation standpoint is that one of the bigger trends in music creation has been emulation, by which I mean emulation of analogue gear and dynamics processors. While I do think this rise has brought many interesting and well-made tools, I rarely come across experimental tools that aim to break the paradigm of ‘music-making’. Whether it’s algorithmic composition, experimental sound design or bizarre sound processing, these are way out of the mainstream, leaving little-to-no financial reason for developers to investigate these options.
Sound Particles is a prime example of the few experimental and interesting sound processing and design tools. The ROLI Seaboard and the Therevox (http://therevox.com/) are prime examples of a wild reimagining of the fundamental concepts of a keyboard.
So what’s next?
During this semester I would like to create musical and sound experiments using the following guidelines:
Inclusion, Inclusion, Inclusion – use the web for what it’s good at: making things accessible to the masses. Design, implement and code things so they can be used intuitively; music COULD be for everyone.
Avoid sticking to the creation<->publishing<->distribution paradigm; the experiences should mix these elements into one cohesive piece, where creation is part of distribution.
Continuing point one, examine how music could be made using inputs other than music-theory knowledge (continuing the line of my ICM final, Forever).
Examine possibilities for sound creation and shaping using 3D graphics (the opposite of audio-reactive; maybe graphics-reactive synthesis?).
Try getting good sleep, because it helps you explain yourself to yourself.
With this tone in mind, I look forward to a semester full of sonic experimentation.
For my Physical Computing final project, I initially started by sketching ideas for things I am interested in making (more on this in this post). After some thought, I decided to mix two courses that really inspired me this semester: ‘Physical Computing’ at ITP and ‘Software Synthesis’ in NYU’s Music Technology dept. After brainstorming ideas (special thanks to Dror Ayalon & Roi Lev for that), I decided to reimagine two concepts I am fairly interested in that touch on both courses, synthesis and modularity, and that’s how DMS – Different Modular System was born (at least conceptually).
I started by examining the features I think make modular synthesizers (and modular systems in general) powerful, both in terms of synthesis and interaction, alongside their downsides, and so the list began.
Flexibility – modular systems are, by definition, flexible instruments, as they tend to allow more than a single configuration of the same components. This manifests in the audible realm too, as you can process signals through the same components in a different order and get different sonic outcomes (e.g. filtering a signal before the delay vs. filtering after it).
Interaction – modular systems tend to involve more interaction on the user’s side, which can, and at times does, translate into a sense of customized ownership over the device. Simply put: rather than ‘a synth’, it’s ‘my special recipe using the synth’.
Terminology – this is a point I’ll talk more about in regard to user play-testing, but it is worth mentioning here that the synthesizer arena in general, and the modular one specifically, tends to overcomplicate terms back to (at times) their mathematical and electrical origins, which sounds cryptic to most people and distances some users from even trying these systems.
Form Factor – as powerful as modular synthesizers are, they tend to be rather big, installed devices; thinking about these machines in a portable form factor felt worth exploring.
*It is worth mentioning that this list presents my perspective and does not imply that the current state of modular synthesizers is wrong or invalid; rather, it is me trying to reimagine it in a different way.
Some of the references I used conceptually were:
Little Bits Korg edition – a ‘bare-bones’ modular synth kit meant to serve as both an educational and a musical modular system.
Palette Gear – a modular controller ecosystem for software and MIDI control.
This project posed some big fabrication challenges for me, so I decided to start the ‘making’ process with the design and fabrication of the units.
Magnets, magnets and magnets!
After play-testing and discussing the project with both Danny Rozin and Ben Light, I decided I would start by building two working modules: the main hub (i.e. ‘the Brain’) and a second effect unit (i.e. ‘the Mouth’). In the first iteration I had around six magnets on each of the boxes, and after a eureka moment I decided to put an Arduino in each box and let them communicate over serial. The initial bill of materials looked something like the following:
I started the process by cutting acrylic top panels, leaving holes the exact diameter of the magnets that would later be glued to the top. I used the protective plastic rings that came with the magnets (i.e. separators) to elevate the magnets 1mm above the plastic panel so they always connect without obstruction (thanks, Ben Light).
And finally, I soldered wires onto springs that push against the glued magnets and carry the electrical signal. One note on this: as Ben Light mentioned to me, soldering directly to a magnet tampers with the magnet’s properties and therefore might introduce unknown interference into the electrical signal. The spring method, however, works fairly well; just remember to use epoxy.
As I mentioned before, terminology was something I really wanted to simplify into icons and language people feel comfortable interacting with (or that, at the very least, don’t scare them away). For the synth’s interface I went through multiple iterations, each time changing the icon or the text and letting people react. This is how the final interface looked before the actual cutting and etching process:
Last but not least, here are the two units after the fabrication process was finished:
Physical computing process:
The main reason for choosing the Arduino Mega for the brain module was its ability to communicate over serial with the computer (in my case a Csound application) while using an additional hardware serial port (pins 16/17) to communicate with the smaller Arduino in the mouth module.
The code below demonstrates the communication function on the brain module for sending events to the computer. I use single-byte messaging and divide the 0-255 range among all the synth parameters.
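In sketch form, the idea is that one byte carries both the parameter and its value: the 0-255 range is partitioned into slices, one per parameter, and each 10-bit pot reading is squeezed into its parameter’s slice. The slice size below is an assumption for illustration, not the project’s actual split:

```cpp
#include <cstdint>

// 256 values split across 8 parameters = 32 values per parameter.
const uint8_t VALUES_PER_PARAM = 32;

// Encode a parameter index (0..7) and a 10-bit analogRead() value
// (0..1023) into a single serial byte.
uint8_t encodeEvent(uint8_t param, int analogValue) {
    uint8_t scaled = (uint8_t)((long)analogValue * VALUES_PER_PARAM / 1024);
    return param * VALUES_PER_PARAM + scaled;
}
```

The receiving side can recover the parameter with an integer division and the value with a modulo, which keeps the protocol down to one byte per event.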
A feature that really helped me, related to both fabrication and implementation, was putting a serial switch in the second module; that way I can change between module-to-module communication and uploading new code to the second module without disconnecting the serial lines.
Building the synth:
As this is my final project for both Physical Computing and Software Synthesis, I used Csound, a synthesis library and engine, to build all the actual sound-generating logic.
I started by laying down the communication logic over the serial port, using Csound’s serialBegin and serialRead opcodes (‘opcode’ being the Csound name for the logic functions built into the library).
After laying down the serial communication, I started building the oscillators using Csound’s poscil opcode, storing the actual wavetable values for the waveshapes in an ftable.
During the making of the actual synth I discovered some functionality I was really missing; for example, the Arduino’s map() function didn’t have any sibling on the Csound side, so I challenged myself to extend Csound and build two mapping functions, which I used to map the incoming Arduino serial data to different parameters in the Csound instrument I created.
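For reference, this is the behavior of Arduino’s map() that the custom Csound functions needed to reproduce, re-stated as a plain function:

```cpp
// Linear re-mapping with the same integer semantics as Arduino's map():
// x in [inMin, inMax] maps proportionally to [outMin, outMax].
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```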
After a lot of explanations, here are an image and a demo composition using the final synth.
This is my final composition made entirely with the synth using Ableton Live as a looper and a MIDI controller hooked up to the synth.
My ICM final project started the same way many projects do: brainstorming, erasing, writing again, rethinking and feeling confused. Given my past and current interest in sound, and the fact that I took the ‘Software Synthesis’ class this semester in the Music Technology dept. at NYU Steinhardt, I was fairly interested in challenging myself in the realms of synthesis, sound and composition. And so ‘Forever’ was born (only conceptually, of course).
And so, the first step was nailing down the idea: what is it? How does it work (e.g. how does it sound)? And what knowledge does it require that I might have to pick up as I build it?
I decided to build a multiplayer web app that uses the user’s GPS location, together with the locations of all the other connected users, to generate a collective musical composition. Essentially, I wanted this project to ask questions such as:
What feels like a good balance between the users’ share of impact and the server’s in generating the composition? How will this balance translate to sound?
What are the components that signify progression in the piece?
What data is meaningful in respect to this composition?
And that’s the point where I think about Michel Foucault for a while and start sketching…
Some references that are worth mentioning are:
Iannis Xenakis – and his ‘stochastic’ compositions
David Cope – and his writings and compositions which include algorithmic composition and artificial intelligence
Musique concrète compositions
The first thing I realized is that I knew nearly nothing about server-side programming, web sockets and server-to-client communication, so I started learning socket.io and express and programming my own server. I started with a very basic express Node server.
After building my first express server, I moved on to sketching some client code that gets the client’s current position and logs it on the page (later it was transmitted back to the server via web sockets).
After getting these two to work, and with the help of Dror Ayalon, I started sending each client’s GPS position to the server.
This later led to a full week of map projections and Mapbox API integration: getting the map, mapping latitude and longitude to x and y on the canvas, and finally drawing icons for the users themselves on a separate p5.js canvas overlaid on the map.
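The latitude/longitude-to-canvas step boils down to a Web Mercator projection, the projection Mapbox tiles use. A general sketch of the math (not the project’s Mapbox-specific code):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Project longitude/latitude onto a width x height canvas using the
// Web Mercator formulas: x is linear in longitude, y compresses the
// poles via the Mercator log/tan term.
void lonLatToXY(double lon, double lat, double width, double height,
                double &x, double &y) {
    x = (lon + 180.0) / 360.0 * width;
    double latRad = lat * PI / 180.0;
    double mercN = std::log(std::tan(PI / 4.0 + latRad / 2.0));
    y = (0.5 - mercN / (2.0 * PI)) * height;
}
```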
Musical interpretation of the data:
Once I had the user locations, I divided the screen’s height into 5 zones. My musical scales are stored in 15-slot arrays; therefore, each zone on the map has a range of 3 notes from which it can choose.
This was built in a way that allows changing scales by only changing the input to the changeNote() function, which sets the notes for a user based on their ‘zone’.
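The zone math behind changeNote() can be sketched like this (my reconstruction; picking which of a zone’s 3 notes to play is left out):

```cpp
// 5 vertical zones, 3 notes each, backed by a 15-slot scale array.
int zoneForY(float y, float screenHeight) {
    int zone = (int)(y / screenHeight * 5.0f);
    return zone > 4 ? 4 : zone;  // keep the bottom edge in zone 4
}

// Pick one of the zone's 3 candidate notes (choice is 0..2).
int noteForZone(const int scale[15], int zone, int choice) {
    return scale[zone * 3 + choice];
}
```

Swapping the whole 15-slot array is what changes the scale, which is exactly why the rest of the code never needs to know which scale is active.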
From this point on it was all about building the functionality in a flexible way that allowed me to change things instantly and test them again rapidly. For instance, the styling of the map went through a couple of revisions.
I decided to use p5.js for the drawing due to its flexibility and ease of use. Using p5 I draw a second canvas, which is used for the cursor and user graphics. This canvas also handles interactions such as cursor control with the mouse (or touch) and the looping feature.
For my final project in Physical Computing, I started by brainstorming all the ideas I had put away during the course and some inspirations that actually grew from class readings and discussions.
List of possible ideas included:
DJ Midi controller
Ferrofluid art installation
Modular synthesizer – building a different modular synth that allows you to physically assemble the instrument of your liking (e.g. by joining pieces together).
Jambot (audio to MIDI) – a small listening bot that lets you jam with it, picking up audio and converting it into MIDI notation in real time.
I decided to keep only options 3 and 4, and after breaking down the devices into abstracted interactions, I found myself leaning towards the modular synthesizer idea.
I am fascinated by modular synthesizers, but they have always struck me as devices designed ‘to feel complicated’, and by way of that they tend to scare away people who are not ‘into’ synthesis.
I started designing what I refer to as ‘DMO’ – Different Modular Synthesiser. As we had to play-test our chosen idea, I laser-cut a simple interface and a couple of ‘modules’, let people intuitively try to connect things, wrote down the most common patterns, and tried to figure out which modules make sense and which don’t.
My initial idea for the system was having:
A brain module that holds the basic functionality (wave shape, master volume).
Control modules that magnetically link up to the sides of the brain and control elements like the envelope, LFO (low-frequency oscillator) and effects. The control modules were essentially variable resistors, in the form of potentiometers, sliders and FSRs (force-sensing resistors), that could connect to each of the parameters on the sides of the brain in whatever way suits the player’s liking.
Play-testing brought up a few takeaways. Generally, people didn’t understand words such as envelope, LFO and wave shape types; I should devise a new terminology that doesn’t insult experienced users but doesn’t scare away hobbyists and non-musicians.
Use iconography to emphasize sound concepts as something visual (perhaps using metaphors people can relate to).
People didn’t immediately understand that the modules are controls; think about using the modules as sound components instead of control values ‘on the brain’.
Danny Rozin mentioned fabrication as a main point: fabricating modular systems is a tough thing to do (I actually want my final to deal with fabrication; it’s challenging for me).
**Update – after speaking with Ben Light about the fabrication of the device, I decided to build two modules (i.e. the brain and one more), which will connect using neodymium magnets that also transfer electricity between the two.
For our midterm project, I was paired with Amanda MJ Lee. We sat down and started sketching out ideas, which after a while started looking very obvious: most of our ideas dealt in some way or another with sound & music. So we decided to make a musical instrument: luma, a color-reactive audio synthesizer that uses color plates (or discs) to create musical patterns in real time.
One of the points that kept coming up while discussing ideas was the ability to take a well-known music-consumption tool (e.g. a turntable, gramophone or phonograph) and repurpose its interaction to create a musical instrument. This discussion turned into our project, which uses the turntable as a symbolic relation to music but repurposes its interaction to serve as a musical creation tool.
We decided to build upon this idea, but instead of having the whole device motorized, to divide the instrument into two decks, rhythmic and melodic, of which only the rhythmic one is motorized, requiring the participant (or better yet, musician) to learn and develop their own way of playing it.
With the aforementioned in mind, we started thinking about visual references. We wanted the instrument to feel timeless (classic) yet have a very organic feel, since the color plates would introduce playfulness to the visual experience of using the instrument. Brown was a good visual reference.
With that in mind, we started assembling a ‘look’ and decided we were going to use a wooden box with a beige acrylic cover to convey this ‘classic look’.
From sketches to fabrication:
What we used (i.e our bill of materials):
Container Store Bamboo Drawer Organizer (6″ x 15″ x 2″ h) – $7.99 – link
After we got all the parts, we started thinking about how we would assemble our enclosure. During this discussion we realized there were a couple of challenges everybody around us seemed to be facing too, and Amanda suggested it would be cool to try to tackle these as well for everyone’s benefit. And so, to the 3D printer we GO!
We 3D-modeled all the mounts inside the case that hold the components, but we also decided to use the 3D printer to tackle design issues. For instance, we needed a hole in the wooden box to connect the Arduino’s USB cable, and drilling a square hole was mission impossible for us, so we designed a circular USB Type B adapter that fits the round hole the drill press creates (the adapter can be found in Dror Ayalon and Mint’s awesome project Video Manipulations too; YAY, we helped ITP).
* All 3D models used in the project are available here
We used a hot glue gun to fix the 3D-printed mounts into the enclosure and started placing the sensors and components using (really sticky) double-sided tape. After that, we started designing the interface and controls and laser cutting the actual discs. One lesson learned from that process: when you prepare down to the smallest detail, the process is actually a very enjoyable one.
Coding & Implementation:
We chose to have the Arduino read all the inputs from the sensors, potentiometers and sliders and communicate with the computer over the serial port. With that in mind, we placed all the logic on the Arduino side first and later moved on to creating the synthesizer.
We started coding with only one deck assembled, as we were multithreading design and code, trying to touch up and implement together. The fact that we had only one deck available at the start actually gave birth to a cleaner coding approach, where we decided to break the functionality into small functions that each deal with one part of the functionality chain. Here is our final loop function, with comments, which lets us control every step of the process and even scale it very fast (one deck or two is just a matter of commenting or uncommenting a function).
Another thing we implemented at this point was the ability to tell whether a color has just started. We did this with the same logic as detecting a button-press state change, just using entering/leaving a color range as the change, which is handled by matchLastColorState().
A couple of things we realized along the way were:
If you code individual blocks of logic, it is easier to debug them separately
If you communicate over serial with binary data, it is useful to have a function that you can switch on and off to debug with strings, that way you can actually read it
When using sensors that are affected by ambient factors (light, sound, etc.), prepare to test extensively (and then test some more).
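The serial-debug point above looked roughly like this in practice (a sketch, with a returned string standing in for Serial.print and names of my own):

```cpp
#include <string>

// One flag flips the output between the compact binary byte the synth
// reads and a human-readable string for debugging.
std::string formatReading(unsigned char value, bool debugStrings) {
    if (debugStrings) {
        return "value=" + std::to_string((int)value);  // readable
    }
    return std::string(1, (char)value);                // raw byte
}
```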
Building the synthesizer:
As I am also taking a software synthesis course this semester in Steinhardt’s Music Technology department, I suggested we use Csound, the system used in that course; Amanda was in, and so we started writing the synth in Csound. Some of the challenges we had to face were the binary serial communication between the Arduino and Csound, building interesting instruments to play, and deciding on the logic for triggering notes so the same note doesn’t repeat every time a color is detected.
We ended up building a synth that uses 10 oscillators and 5 envelopes to create rich ambient and percussive sound textures, iterating over a pentatonic scale, which makes the playing experience more engaging.
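To give a sense of the note-selection idea: stepping through a pentatonic scale with a wrap-around index is one simple way to avoid repeating the same note on every color hit (the scale below is an illustrative C minor pentatonic, not necessarily the one we used):

```cpp
// MIDI note numbers for C minor pentatonic, one octave.
const int SCALE[5] = {60, 63, 65, 67, 70};

// Advance through the scale, wrapping around, so consecutive color
// detections always get a fresh note.
int nextNote(int &step) {
    int note = SCALE[step % 5];
    step++;
    return note;
}
```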