ITP Blog

My journey in making things

DMS – Physical Computing & Csound Final

For my Physical Computing final project, I initially started by sketching ideas for things I am interested in making (more on this in this post). After some thought, I decided to mix two courses that really inspired me this semester: ‘Physical Computing’ at ITP and ‘Software Synthesis’ in the Music Tech dept. of NYU. After brainstorming ideas (special thanks to Dror Ayalon & Roi Lev for that), I decided I would like to reimagine two concepts I am fairly interested in that touch both courses, synthesis and modularity, and that’s how DMS – Different Modular System was born (at least conceptually).

Sketches of early stage ideas for the project
  • Conceptualisation:

I started by examining the features I think make modular synthesizers (and modular systems in general) powerful, both in terms of synthesis and interaction, alongside their downsides, and so the list began.

From http://gizmodo.com

Advantages:

  • Flexibility – modular systems are, by definition, flexible instruments, as they tend to allow more than a single configuration with the same components. This manifests in the audible realm too: you can process signals using the same components in a different order and get different sonic outcomes (e.g. filtering a signal before the delay vs. filtering after it).
  • Interaction – modular systems tend to include more interaction from the user’s side, which can and at times does translate into a sense of customized ownership over the device. Simply put, rather than a synth, ‘it’s my special recipe using the synth’.

Disadvantages:

  • Terminology – This is a point that I’ll talk more about in regards to user play-testing, but it is worth mentioning here that the synthesizer arena in general, and the modular one specifically, tends to complicate terms back to (at times) their mathematical and electrical origins, which sounds cryptic to most people and distances some users from even trying these systems.
  • Form Factor – As powerful as modular synthesizers are, they tend to be rather large, stationary devices; thinking about these machines in a portable form factor felt like an opportunity worth exploring.

*It is worth mentioning that this list presents my perspective; it does not imply that the current state of modular synthesizers is wrong or invalid, but rather reflects my attempt to reimagine it in a different way.

Some of the references I used conceptually were:

  1. Little Bits Korg edition – a ‘bare-bones’ modular synth kit meant to serve as both an educational and a musical modular system.
  2. Palette Gear – a modular controller ecosystem for software and MIDI control.
  3. Korg MS-20 Mini – a semi-modular synth ‘do it yourself’ kit.
  4. Moog Werkstatt – another semi-modular synth ‘do it yourself’ kit.
  • Design:

This project imposed some big fabrication challenges for me, and because of that I decided to start the ‘making’ process with the design and fabrication of the units.

Magnets, magnets and magnets!

Upon play-testing and discussing the project with both Danny Rozin and Ben Light, I decided I would start by building two working modules: the main hub (i.e. ‘the Brain’) and a second effect unit (i.e. ‘the Mouth’). In the first iteration I had around 6 magnets on each of the boxes; after a eureka moment I decided to put an Arduino in each box and let them communicate over serial. The initial bill of materials looked something like the following:

I started the process by cutting acrylic top panels, leaving holes the exact diameter of the magnets that would later be glued to the top. I used the protective plastic rings that came with the magnets (i.e. separators) to elevate the magnets 1mm above the plastic panel so they always connect without disturbance (thanks Ben Light).

Final drilling of the cables soldered to a spring that pushes against the magnets

And finally, I soldered the wires onto springs that push against the glued magnets and pass the electrical current. One note on this: as Ben Light mentioned to me, soldering directly to a magnet tampers with the magnet’s properties and therefore might introduce unknown interference into the electrical signal. The spring method, however, works fairly well – just remember to use epoxy.

As I mentioned before, terminology is something I really wanted to simplify into icons and language people feel comfortable interacting with (or that at the very least doesn’t scare them away). For the synth’s interface I went through multiple iterations, each time changing the icon or the text and letting people react to it. This is how the final interface looked before the actual cutting and etching process:

Last but not least, here are the two units after the finished fabrication process

Physical computing process:

The main reason for choosing the Arduino Mega for the brain module was its ability to both communicate over serial with the computer (in my case a Csound application) and use the additional serial ports (pins 16/17) to communicate with another, smaller Arduino in the mouth module.

The code below demonstrates the communication function on the brain module for sending events to the computer. I use single-byte messaging and divide the 0–255 range among all the synth parameters.
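Since the embedded snippet isn’t shown here, this is a hypothetical sketch (in JavaScript, for illustration) of how a single-byte scheme like this can work; the parameter names and the four-way split are my assumptions, not the actual DMS protocol.

```javascript
// Hypothetical sketch: divide the single 0-255 byte range among four
// synth parameters, giving each one a 64-value slot. The parameter
// names and the four-way split are illustrative, not the real protocol.
const PARAMS = ['pitch', 'cutoff', 'resonance', 'volume'];
const SLOT = 256 / PARAMS.length; // 64 values per parameter

// Brain side: pack a parameter index and a normalized 0-1 value into one byte.
function encode(paramIndex, normValue) {
  const quantized = Math.min(SLOT - 1, Math.floor(normValue * SLOT));
  return paramIndex * SLOT + quantized;
}

// Computer side: recover which parameter a byte belongs to and its value.
function decode(byte) {
  return {
    param: PARAMS[Math.floor(byte / SLOT)],
    value: (byte % SLOT) / (SLOT - 1), // back to 0-1
  };
}
```

With a scheme like this, a single byte written to serial is enough per event, and the receiver never has to parse multi-byte messages.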

A feature that really helped me, related to both fabrication and implementation, was putting a serial switch in the 2nd module. That way I can change between module-to-module communication and uploading new code to the 2nd module without disconnecting the serial lines.

  • Building the synth:

As this is my final project for both Physical Computing and Software Synthesis, I used Csound, a synthesis library and engine, to build all the actual sound-generating logic.

I started by laying down the communication logic for the serial port, using Csound’s serialBegin and serialRead opcodes (opcode being the Csound name for the logic functions built into the library).

After laying down the serial communication, I started building the oscillators using Csound’s poscil opcode, while storing the actual wavetable values for the waveshapes in an ftable.

During the making of the actual synth I discovered some functionality I was really missing; for example, Arduino’s map() function didn’t have any sibling on the Csound side. So I challenged myself to extend Csound and build two mapping functions, which I used to map the incoming Arduino serial data to different parameters in the Csound instrument I created.
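Arduino’s map() is just a linear re-mapping; a sketch of the equivalent logic (shown here in JavaScript for illustration rather than as a Csound opcode) looks like this. The cutoff range in the usage line is a made-up example.

```javascript
// A sketch of the linear re-mapping that Arduino's map() performs and
// that I rebuilt on the Csound side (JavaScript used for illustration).
function mapRange(value, inMin, inMax, outMin, outMax) {
  return (value - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// e.g. an incoming serial byte (0-255) mapped to a filter cutoff in Hz
// (the 200-8000 range is just an example value):
const cutoff = mapRange(128, 0, 255, 200, 8000);
```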

 

After a lot of explanations, here is an image and a demo composition using the final synth

This is my final composition made entirely with the synth using Ableton Live as a looper and a MIDI controller hooked up to the synth.

  • Links:

Link to the Arduino Mega and Uno code

Link to the Csound program

  • Moving forward – things I would like to improve next:
  1. I would like to try to implement the synth on an embedded system, so that everything can run from the box itself.
  2. I would like to build more modules to demonstrate more aspects of the synthesis process (e.g. ‘the Spine’ as an arpeggiator).
  3. Implement a better method to detect when modules are connected to and disconnected from the brain.

*Special thanks to Tom Igoe and Jean-Luc Cohen; this process has both taught me a lot and challenged me throughout. Your help and guidance made this possible!

Forever – ICM Final Project

My ICM final project started the same way many projects do: brainstorming, erasing, writing again, rethinking and feeling confused. Given my past and current interest in sound, and the fact that I took the ‘Software Synthesis’ class this semester in the Music Technology dept. at Steinhardt, NYU, I was fairly interested in challenging myself in the realms of synthesis, sound and composition. And so ‘Forever’ was born (only conceptually, of course).

Class presentation of a very early version of the idea behind ‘Forever’

And so, the first step was nailing the idea down: what is it? How does it work (e.g. how does it sound)? And what does it require that I might have to pick up as I build it?

Conceptualisation:

I decided to build a multiplayer web app that uses the user’s GPS location, together with the locations of all other connected users, to generate a collective musical composition. Essentially, I wanted this project to ask questions such as:

  1. What feels like a good balance between the users’ impact and the server’s in generating the composition? How will this balance translate to sound?
  2. What are components that signify progression in the piece?
  3. What data is meaningful data in respect to this composition?

And that’s the point where I think about Michel Foucault for a while and start sketching…

Some references that are worth mentioning are:

  • Iannis Xenakis – and his ‘stochastic’ compositions
  • David Cope – and his writings and compositions which include algorithmic composition and artificial intelligence
  • Musique concrète compositions
  • Musimathics book

Implementation:

The first thing I realized is that I knew nearly nothing about server-side programming, web sockets and server-to-client communication, and so I started learning socket.io and express and programming my own server. I started with a very basic express Node server.

After building my first express server, I moved to sketching some client code that gets the client’s current position and logs it on the page (which later was transmitted back to the server via web sockets).

After getting these two to work, and with the help of Dror Ayalon, I started sending the server each client’s GPS position.

This later led to a full week of map projections and Mapbox API integration: getting the map, mapping latitude and longitude to x and y on the canvas, and finally drawing icons for the users themselves on a separate p5.js canvas that gets overlaid on the map.
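As a rough illustration of the ‘latitude and longitude to x and y’ step, here is a simple equirectangular mapping. The actual project used the Mapbox API (which projects with Web Mercator), so this is a simplified stand-in, not the real code.

```javascript
// Simplified stand-in for the projection step: an equirectangular
// mapping from latitude/longitude to canvas pixels.
function geoToCanvas(lat, lon, width, height) {
  const x = ((lon + 180) / 360) * width;  // longitude -180..180 -> 0..width
  const y = ((90 - lat) / 180) * height;  // latitude   90..-90  -> 0..height
  return { x, y };
}
```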

Musical interpretation of the data:

Once I had the user locations, I divided the screen’s height into 5 zones. My musical scales are stored in 15-slot arrays; therefore, each zone in the map has a range of 3 notes from which it can choose.

This image illustrates the 5-zone division well

This was built in a way that allows me to change scales by only changing the input to the changeNote() function, which sets the notes for the user based on their ‘zone’.
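The zone logic above can be sketched like this; the scale values and changeNote()’s exact signature are my guesses, since the original snippet isn’t embedded here.

```javascript
// Sketch of the zone logic: the canvas height is split into 5 zones,
// and a 15-slot scale array gives each zone 3 candidate notes.
// Example scale: C major pentatonic over three octaves, as MIDI notes.
const scale = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81, 84, 86, 88, 91, 93];

function changeNote(y, height) {
  const zone = Math.min(4, Math.floor(y / (height / 5))); // 0..4, clamped
  const candidates = scale.slice(zone * 3, zone * 3 + 3); // 3 notes per zone
  return candidates[Math.floor(Math.random() * candidates.length)];
}
```

Swapping scales is then only a matter of replacing the 15-slot array.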

From this point on it was all about building the functionality in a flexible way that allowed me to change things instantly and rapidly test them again. For instance, the styling of the map went through a couple of revisions.

Dark map

Visual work:

I decided to use p5.js for the drawing due to its flexibility and ease of use. Using p5 I draw a second canvas, which is used for the cursor and user graphics. This canvas also handles interactions such as cursor control with the mouse (or touch) and the looping feature.

<<video coming soon>>

The repository for the project containing all code written can be found here

The live link for the website can be found here

 

 

Physical Computing – Final Project / Proposal

For my final project in Physical Computing, I started by brainstorming all the ideas I had put away during the course and some inspirations that actually grew from class readings and discussions.

notebook_sketching

List of possible ideas included:

  1. DJ Midi controller
  2. Ferrofluid art installation
  3. Modular synthesizer – Building a different modular synth that allows you to assemble the instrument of your liking physically (e.g by joining pieces together).
  4. Jambot (Audio to midi) – I was thinking of creating a small listening bot that allows you to jam, and picks up audio that gets converted into MIDI notation in realtime.

I decided to keep only options 3 and 4, and after breaking down the devices into abstracted interactions, I found myself leaning towards the modular synthesizer idea.

Concept

I am fascinated by modular synthesizers, but they always struck me as devices that were actually designed ‘to feel complicated’, and by way of that, they tend to scare away people who are not ‘into’ synthesis.

My list of reference sources included:

Playtesting the idea:

I started designing what I refer to as ‘DMS’ – Different Modular Synthesiser. As we had to playtest our chosen idea, I decided to laser cut a simple interface and a couple of ‘modules’, let people intuitively try to connect things, then write down the most common patterns and try to figure out which modules make sense and which don’t.

img-20161108-wa0004

My initial idea for the system was having:

  1. A brain module that holds the basic functionality (wave shape, master volume)
  2. Control modules that magnetically link up to the sides of the brain and control elements like the envelope, LFO (low-frequency oscillation) and effects. The control modules were essentially variable resistors in the form of potentiometers, sliders and FSRs (force-sensing resistors) that could connect to each of the parameters on the sides of the brain in whatever way suits the player’s liking.

Let’s playtest

Playtest notes:

  1. Generally, people didn’t understand words such as envelope, LFO and wave shape types. Take-away: devise new terminology that doesn’t insult experienced users but doesn’t scare away hobbyists and non-musicians.
  2. Use iconography to emphasize sound concepts as a visual thing (perhaps use metaphors people can relate to).
  3. People didn’t understand immediately that the modules are controls; think about using the modules as sound components instead of control values ‘on the brain’.
  4. Danny Rozin mentioned fabrication as a main point: fabricating modular systems is a tough thing to do (I actually want my final to deal with fabrication – it’s challenging for me).

**Update – After speaking with Ben Light about the fabrication of the device, I decided to build two modules (i.e. a brain and one more) which will connect using neodymium magnets to transfer electricity between the two.

 

 

Physical Computing – Midterm

ezgif-4057205676

For our midterm project, I was paired with Amanda MJ Lee. We sat down and started sketching out ideas, and after a while something started looking very obvious: most of our ideas dealt in some way or another with sound & music. So we decided to make a musical instrument: Luma, a color-reactive audio synthesizer that uses color plates (or discs) to create musical patterns in real time.

Sketch
Sketches of different ideas after brainstorming
  • Conceptualisation:

One of the points that kept coming up while discussing ideas was the ability to take a well-known music consumption tool (e.g. the turntable, gramophone or phonograph) and repurpose its interaction to create a musical instrument. This discussion turned into our project, which used the turntable as a means of symbolic relation to music but repurposed its interaction to serve as a musical creation tool.

We decided to build upon this idea, but instead of having the whole device motorized, to divide the instrument into two decks, rhythmic and melodic, of which only the rhythmic one is motorized, requiring the participant (or better yet, musician) to learn and develop their own way of playing it.

ezgif-565323512

  • Design:

With the aforementioned in mind, we started thinking about visual references. We wanted the instrument to feel timeless (classic) yet have a very organic feel, since the color plates would introduce playfulness to the visual experience of using the instrument. Braun was a good visual reference.

sk6_braun_450pxls
Visual reference – Braun personal turntable

With that in mind, we started assembling a ‘look’ and decided we were going to use a wooden box and a beige acrylic cover to convey this ‘classic look’.

  • From sketches to fabrication:

What we used (i.e our bill of materials):

  1. Container Store Drawer Organizer, Bamboo (6″ x 15″ x 2″ h) – $7.99 – link
  2. Arduino Mega Rev 3 – $45.95 – link
  3. 2x color sensors – $9.99 each – link
  4. Linear potentiometer (B50K) – $2.99 – link
  5. 60mm slider (Phidgets 1112) – $11 – link
  6. 5 sheets of acrylic in different colors – $60
  7. Some basic breadboard wires

After we got all the parts, we started thinking about how we would assemble our enclosure. During this discussion we realized there were a couple of challenges everybody around us seemed to be facing too, and so Amanda suggested it would be cool to try to tackle these as well for the benefit of everyone – so to the 3D printer we GO!

img-20161024-wa0005
Measuring all the components and prototyping design ideas with cardboard

We 3D modeled all the mounts inside the case that hold the components, but we also decided to use the 3D printer to tackle design issues. For instance, as we needed to drill a USB hole in the wooden box to connect the Arduino, we realized drilling a square hole was mission impossible for us, so we decided to design a circular USB type B adapter that would fit inside the hole the drill press created (the adapter can be found in Dror Ayalon and Mint’s awesome project Video Manipulations too – YAY, we helped ITP)

* All 3D models used in the project are available here

We used a hot glue gun to fix the 3D printed mounts into the enclosure and started placing the sensors and components using double-sided (really sticky) tape. From that point we started designing the interface and controls and laser cutting the actual discs. One lesson learned from that process: when you prepare down to the smallest detail, the process actually becomes a very enjoyable one.

output_4psna4

  • Coding & Implementation:

We chose to have the Arduino analyze all the inputs from the sensors, potentiometers and sliders and communicate with the computer over the serial port. With that in mind, we started placing all the logic on the Arduino side first, and later moved on to creating the synthesizer.

breadboard
Arduino breadboard illustration

We started coding with only one deck assembled, as we were multithreading design and code, trying to polish and implement in parallel. The fact that we had only one deck available at the start actually gave birth to a cleaner coding approach, where we decided to break the functionality into small functions that deal with every part of the functionality chain separately. Here is our final loop function, with comments, which lets us control every step of the process and even scale it very fast (1 deck or 2 decks is only a matter of commenting and uncommenting a function).

Another thing we implemented at this point was the ability to tell when a color has just started being detected. We did this with the same logic as a button state-change press, just using a color range as the thing being changed into/out of, which is handled by matchLastColorState().
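A sketch of that logic in plain JavaScript (the real implementation ran on the Arduino; the threshold value and data shapes here are assumptions):

```javascript
// Sketch of the matchLastColorState idea: "color within range" is
// treated like a button state, and only the transition into range is
// reported. The threshold is an assumed value.
const THRESHOLD = 40; // RGB distance that counts as a match

function colorDist(a, b) {
  return Math.hypot(a.r - b.r, a.g - b.g, a.b - b.b);
}

let lastInRange = false;

// Returns true only on the reading where the sensed color first enters
// the target's range, mirroring button change-press logic.
function matchLastColorState(sensed, target) {
  const inRange = colorDist(sensed, target) < THRESHOLD;
  const justEntered = inRange && !lastInRange;
  lastInRange = inRange;
  return justEntered;
}
```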

A couple of things we realized on the way were:

  1. If you code individual blocks of logic, it is easier to debug them separately.
  2. If you communicate over serial with binary data, it is useful to have a function you can switch on and off to debug with strings – that way you can actually read it.
  3. When using sensors that are affected by ambient factors (light, sound, etc.), prepare to test extensively (and then test some more).
  • Building the synthesizer:

As I am also taking a software synthesis course this semester in Steinhardt’s Music Technology department, I suggested we use Csound, the system used in that course; Amanda was in, and so we started writing the synth in Csound. Some of the challenges we had to face were the binary serial communication between the Arduino and Csound, building interesting instruments to play, and deciding on the logic by which we trigger different notes, so the same note doesn’t repeat whenever a color is detected.

ezgif-955914941

We ended up building a synth that uses 10 oscillators and 5 envelopes to create rich ambient and percussive sound textures, iterating over a pentatonic scale which makes the playing experience more engaging.

The code for the Arduino app can be found here, and the code for the synthesizer can be found here

Here is the final graphic layout we made to explain the project:

illustration2-01

Demo video:

And here is the live demo we did in class on presentation day:

Visual Language – City Branding

The first time I visited Andorra, a small sovereign state situated between France and Spain, I was shocked that such a small and unknown place exists, being what you might call ‘off the grid’. Andorra during winter time is home to some of the best snow in the area. That being said, it is nothing like other over-priced, over-rated, tea-drinking-in-the-evening spots for snow lovers; rather, it’s all about riding hard and partying hard. And so this is how my love story with Andorra La Vella starts.

1688714_10153871020490531_628079916_n
Snowboard trip to Andorra during 2012
  • Step 1 – Analysis:

Andorra La Vella is home to 22,000 residents and is the highest capital city in Europe, rising to 3,356 ft. During the summer it enjoys very warm weather for a short period, which changes to very cold winters. What makes Andorra especially worthwhile is its unbelievable snow and wide mountain ridges, which are nothing short of paradise for skiing and snowboarding.

That being said, in terms of design, branding and visual language, Andorra hasn’t really used its strengths to build itself a ‘brand’, which might be one reason why it has remained relatively unknown.

25

To illustrate the aforementioned point, here is Andorra La Vella’s current logo:

comuandorra

  • Conceptualisation:

After some research and identification, here are some of the problems and ideas the new logo (and visual language) needs to address:

  1. Use Andorra La Vella’s breathtaking mountain view, which is also its biggest selling point: snow. With that being said, also try to speak the visual language patterns found in other snow wear, ski resorts and snow-related design.
  2. Refresh its current identity into a younger, more modern and bolder design.
  3. Since Andorra is a sovereign state with several cities and regions, devise a design that could be modular to serve other areas in Andorra.
  • Execution:

As I started examining related snow designs, I decided the design for Andorra must feature its breathtaking Pyrenees mountain ridge. I used a vector drawing technique to outline a mountain ridge stencil.

screen-shot-2016-10-22-at-23-56-44

For the typography, I wanted to include a heavy, geometric sans-serif font for the first part of the name (i.e. Andorra), and have the second part of the name in a different font. The main reason for this is both a design decision and a modularity concern: as Andorra is the name of the sovereign state, but every city has a different add-on name, the logo could be used as an overarching design language for every city. The fonts used are Simona Sans-Serif and HucklebuckJF for the script.

screen-shot-2016-10-23-at-0-04-42
Andorra La Vella and Pas De La Casa shown as two examples using the same modular design

The resulting logo

screen-shot-2016-10-23-at-0-09-29

After some thought, and research into other snow-related design, I found that emblem design is quite common, and decided it would make sense to have this logo in a badge (emblem) format as well, for merchandise and specific uses. So I went on to make this badge derivative of the logo:

screen-shot-2016-10-23-at-0-11-50

To emphasize how the design could be used for more tactile applications, I made a t-shirt (a very common city-branding commodity), which features the city’s redesigned logo on the back.

screen-shot-2016-10-23-at-0-12-56

Visual Language – Business Cards

For this week’s assignment we had to come up with a visual language for ourselves and produce a business card design. This blog post is meant to describe the process of conceptualizing through execution, and lessons I learned along the way.

  • Conceptualising:

I started by examining design patterns I have liked before, and things I created for myself before coming to ITP, but this time looking at everything critically with some tools acquired in this course. The following example is a logo I designed for myself about a year ago:

penrose

This logo was inspired by the Penrose triangle, part of the impossible geometry group of shapes, first conceived by Swedish designer Oscar Reutersvärd.

  • References and influences:

Before I actually started designing, I took some time looking for inspiration – designs and studios that create work I can relate to – so I could analyse the elements of design I would like to incorporate into my visual language.

A list of influences:

http://www.stockholmdesignlab.se/

http://bunchdesign.com/

http://the-studio.se/

Here are some elements I discovered about myself while analysing reference designs:

  1. I like BOLD typography
  2. I prefer sans-serif fonts, if readability allows
  3. In terms of colors I lean towards slightly under-saturated colors (very generally – it’s case specific)
  4. I love minimalism, brutalism and a couple of other isms.
  5. I really like Nordic designers in general
  • Designing:

Some time after I started designing, I realised I wanted to tackle what I felt most uncomfortable with before this course: TYPOGRAPHY.

carddesign
final card with grid

I actually started with the font choice, deciding to use ‘Mr Eaves XL Mod OT’. I tested some combinations of serifs with sans-serifs but ended up going with the all-sans-serif option.

screen-shot-2016-10-16-at-20-06-44
attempt at combining Serif and Sans Serif fonts
screen-shot-2016-10-16-at-19-50-25
the font I eventually chose

I then started to mix color combinations for the front & back. I knew I wanted some sort of contrast between them, but preferred light-colored contact information, and so ended up with a palette that looks like this:

color
‘why so serious’ color palette

I have a love-hate relationship with ‘placeholders’, but in this design process I used the well-known (and arguably infamous) ‘Lorem Ipsum’ text to decide about fonts. At some point, after many tryouts, I actually had a ‘eureka’ moment: I figured a business card is somewhat of a placeholder for me. It is ‘what I think I am’, ‘what I want to be’ and, when the design is bad, ‘what I am totally not’. Following this realization I decided to use the ‘Lorem Ipsum’ text on the back side of the card, in a color combination dark against the background, to create some subtlety against the very ‘bold’ font choice and the amount of text.

backside
I would love to have had these characters extruded in print

For the front side, I chose to minimise the data to the very bare minimum, using slight color and size variations to create hierarchy.

frontside

  • Printing (A.K.A why I hate Staples):

After completing the design, I was referred to Staples as a print house. I did their online graphic submission, which was way too easy for such a delicate process, but given my very limited knowledge it looked pretty reasonable.

I ended up picking up the cards only to find out the colors were misinterpreted in the print process: I actually got an invisible back side, plus print artifacts (smudges and stains, to name a few).

20161016_202736

  • Lessons I learned:
    • Good designs take time and careful work, from conception all the way to execution
    • Design never ends in design – you always have to perfect the medium too; if it’s cards, make sure they’re perfectly printed. USE THE MEDIUM.
    • I actually learned Illustrator (if that counts as a lesson)
    • Never force an idea; you might dislike your rushed execution while the idea might be very good
    • Always print a sample to see that everything is good and sharp
    • Never go to Staples unless you’re shopping for Sharpies

 

ICM – Lesson #7

For this week’s assignment I wanted to create a small application that lives entirely on the canvas, to practice everything we have gone over so far. I started by conceptualizing an audio player app that uses hand gestures to determine which music I am interested in, and uses the SoundCloud API to get a playlist of the selected genre.

screen-shot-2016-10-13-at-10-58-52

With that in mind, I also wanted to practice using constructor functions, therefore I created an app constructor that holds all the initialization, screens and variables inside it.

I used LeapTrainer.js to train the Leap Motion to listen to specific gestures (i.e. combinations of different positions and rotations it picks up), and registered them as event listeners. Following that, I used an image and an overlaying video to create an ‘animation on hover’ effect using p5’s dist() function.

Currently only the Metal genre works, even though event listeners have been registered for all the gestures. One of the topics I got confused about was the audio player structure (e.g. keeping track of the current track, and building a modular system that could jump a song forward, backwards, etc.).

A link to Github

Physical Computing – Lesson#7

Following Synthesis I had so many ideas, and I decided to scale up the conversation between the Arduino & p5. After reviewing the labs, I decided to build a drum machine that uses buttons on the Arduino to trigger oscillators in the p5.sound library that mimic the sound of electronic drums (inspired by the great 808 drum machine).

I started by sketching the idea for the board

breadboard

From that point, I started assembling the breadboard

dsc05251

After finishing assembly I started writing the software on the Arduino. Since I am using a lot of the same component (buttons), I decided to use arrays to store the button pins, current states and last states of each button. It looks something like this:

Once I read a value that is different from the last state && has a HIGH value, I serial print each button as 1, 2, 3, 4 or 5. In p5 I pick up these values and assign each of them to a specific function – in essence, the on/off trigger of each oscillator by a specific button on the Arduino.
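The Arduino snippet isn’t embedded in this post, so here is the same edge-detection logic sketched in JavaScript; reading the pins is simulated by passing in an array of states.

```javascript
// Sketch of the Arduino loop logic described above: an array holds each
// button's last state, and a button number (1-5) is emitted only on a
// LOW -> HIGH transition.
const HIGH = 1, LOW = 0;
const lastStates = [LOW, LOW, LOW, LOW, LOW];

// readings stands in for digitalRead() on the five button pins.
function pressedButtons(readings) {
  const toSend = [];
  for (let i = 0; i < readings.length; i++) {
    if (readings[i] !== lastStates[i] && readings[i] === HIGH) {
      toSend.push(i + 1); // buttons go over serial as 1..5
    }
    lastStates[i] = readings[i];
  }
  return toSend;
}
```

Because a button is only reported on the transition, holding it down doesn’t retrigger the oscillator on every loop pass.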

Time to demo

Link to github

Synthesis // icm + physcomp = ♥

To kick off the long weekend we had the Synthesis workshop, and even though at first I was sure we were going to deal with sound synthesis, we actually synthesised the Intro to Computational Media and Physical Computing courses, which boils down to endless possibilities.

I was paired with Cristobal Valenzuela, and so we started by researching which sensor we would like to use for our ‘Physical Variable’ challenge. We decided to use the ColorPAL RGB sensor and send the red, green and blue data over to p5. While we were figuring out how best to serve the data over serial to p5, we came up with the simple idea of making a game for children, where they get the name of a fruit and have to find something with the same color as the fruit and scan it.

Demo time:

To determine whether the correct color is being sensed, we used p5’s dist() function in the following manner:
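The embedded snippet is missing from this post; the check was presumably something like the following, with p5’s dist() re-implemented so the sketch runs standalone, and treating the two RGB triplets as points in 3D space. The tolerance value here is an assumption.

```javascript
// Standalone re-implementation of p5's 3D dist() for this sketch.
function dist(x1, y1, z1, x2, y2, z2) {
  return Math.hypot(x2 - x1, y2 - y1, z2 - z1);
}

const TOLERANCE = 60; // assumed: how close a scan must be to count

// Treat the sensed RGB reading and the fruit's reference RGB as two
// points in 3D space and accept anything closer than the tolerance.
function isCorrectColor(sensed, fruit) {
  return dist(sensed.r, sensed.g, sensed.b, fruit.r, fruit.g, fruit.b) < TOLERANCE;
}
```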

Some of the things we didn’t have time to build, but discussed and agreed could be interesting to try, are timing the actual ‘search & scan’ process, keeping score and varying difficulties.

Link to github

Visual Language – Color

This week’s assignment is divided into two sections: a test, and a project demonstrating self-expression with color.

Test Results:

screen-shot-2016-10-03-at-0-45-31
After taking the test I was rated 4 (where 0 is the highest).

Self expression with color:

This week I chose to focus on a really private thing that has been happening in my life, as it has many implications in terms of color. Two months ago I moved to NYC and had to part from my partner (girlfriend); we decided we would do our best to maintain our relationship as a long-distance one. After some adjusting, we started messaging each other ‘selfie’ images of our lives as we go about the day (our human version of emoji, one could say), and before I realised it, this was happening multiple times each day. While thinking about what kind of self-documentation I could use for this week’s assignment, I suddenly realised it was in front of me all along: my very own documentary.

5 Tones of Emotion

header

I decided to pack together all of these images (plus the ones that will come as time passes) into a website (i.e. ‘5 Tones of Emotion’) that analyses these self-documented pieces of emotion and comes up with a 5-tone palette representing the most common tones found across the entire library.
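One plausible sketch of such a palette engine, assuming coarse RGB quantization (the bucket size and pixel format are my choices, not necessarily what the site actually does):

```javascript
// Quantize every pixel into coarse RGB buckets and keep the five most
// common buckets as the palette. BUCKET is an assumed quantization step.
const BUCKET = 32; // step per channel

function fiveTonePalette(pixels) { // pixels: [{ r, g, b }, ...]
  const counts = new Map();
  for (const p of pixels) {
    const key = [p.r, p.g, p.b]
      .map(c => Math.floor(c / BUCKET) * BUCKET)
      .join(',');
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most common bucket first
    .slice(0, 5)
    .map(([key]) => key.split(',').map(Number)); // [[r, g, b], ...]
}
```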

screen-shot-2016-10-02-at-17-28-06

In creating the website I suddenly realised that, as most of the images contain a rather ‘warm’ palette of colors, the library has a very calming effect when viewed as a whole. At bottom, this whole project was a homage to my relationship with Katia, and a way for me to deal with absence through abstraction of data.

screen-shot-2016-10-02-at-0-49-07

I am not sure if this project is worth investigating further, but I would like to make the color-analysing engine better, in ways that could possibly infer more based on parameters like color temperature and facial recognition, and perhaps even open the engine up for users to analyse their own galleries of images.

The link to the website