ITP Blog

My journey in making things

Visual Language – City Branding

The first time I visited Andorra, a small sovereign state nestled between France and Spain, I was struck that such a small, little-known place exists, being what you might call ‘off the grid’. During winter, Andorra is home to some of the best snow in the region, but it is nothing like other over-priced, over-rated, tea-in-the-evening spots for snow lovers; it’s all about riding hard and partying hard. And so this is how my love story with Andorra La Vella starts.

Snowboard trip to Andorra during 2012
  • Step 1 – Analysis:

Andorra La Vella is home to 22,000 residents and, at 3,356 ft, is the highest capital city in Europe. It enjoys very warm weather for a short period during the summer, which gives way to very cold winters. What makes Andorra especially worthwhile is its unbelievable snow and wide mountain ridges, which are nothing short of paradise for skiing and snowboarding.

That being said, in terms of design, branding and visual language, Andorra hasn’t really used its strengths to build itself a ‘brand’, which might be one reason why it has remained relatively unknown.


To illustrate the point above, here is Andorra La Vella’s current logo


  • Conceptualisation:

After some research and identification, here are some of the problems and ideas the new logo (and visual language) needs to address:

  1. Use Andorra La Vella’s breathtaking mountain views, which are also its biggest selling point: snow. At the same time, try to speak the visual language patterns found in snow wear, ski resorts and other snow-related design.
  2. Refresh its current identity into a younger, more modern and bolder design.
  3. Since Andorra is a sovereign state with several cities and regions, devise a modular design that could serve other areas of Andorra.
  • Execution:

As I started examining snow-related design, I decided the design for Andorra must feature its breathtaking Pyrenees mountain ridge. I used vector drawing techniques to outline a mountain-ridge stencil.


For the typography, I wanted a heavy, geometric Sans-Serif font for the first part of the name (i.e. Andorra), with the second part of the name in a different font. This is both a design decision and a modularity concern: Andorra is the name of the sovereign state, but every city has a different add-on name, so the logo could be used as an overarching design language for every city. The fonts used are Simona for the Sans-Serif and HucklebuckJF for the script.

The example shows Andorra La Vella and Pas De La Casa as two uses of the same modular design

The resulting logo


After some thought, and research into other snow-related design, I found that emblem design is quite common. I decided it would make sense to offer this logo in a badge (emblem) format as well, for merchandise and other specific uses, and so went on to make this badge derivative of the logo


To emphasize how the design could be used in more tactile applications, I made a t-shirt (a very common city-branding commodity) featuring the city’s redesigned logo on the back.


Visual Language – Business Cards

For this week’s assignment we had to come up with a visual language for ourselves and produce a business card design. This blog post is meant to describe the process of conceptualizing through execution, and lessons I learned along the way.

  • Conceptualising:

I started by examining design patterns I had liked before, and things I created for myself before coming to ITP, this time looking at everything critically with some of the tools acquired in this course. The following example is a logo I designed for myself about a year ago:


This logo was inspired by the Penrose triangle, part of the impossible-geometry family of shapes, first conceived by the Swedish artist Oscar Reutersvärd.

  • References and influences:

Before I actually started designing, I took some time looking for inspiration: designs and studios creating work I could relate to, so I could analyse the elements of design I would like to incorporate in my visual language.

A list of influences:

Here are some elements I discovered about myself while analysing reference designs:

  1. I like BOLD typography
  2. I prefer Sans-Serif fonts, if readability allows
  3. In terms of colors I lean towards slightly under-saturated colors (very generally; it’s case-specific)
  4. I love minimalism, brutalism and a couple of other isms
  5. I really like Nordic designers in general
  • Designing:

Once I started designing, I realised I wanted to tackle what I had felt most uncomfortable with before this course: TYPOGRAPHY.

final card with grid

I actually started with the font choice, deciding to use ‘Mr Eaves XL Mod OT’. I tested some combinations of Serifs with Sans-Serifs but ended up going with the all-Sans-Serif option.

attempt at combining Serif and Sans-Serif fonts
the font I eventually chose

I then started mixing color combinations for the front & back. I knew I wanted some contrast between the two, but preferred light contact information, and so ended up with a palette that looks like this:

‘why so serious’ color palette

I have a love-hate relationship with placeholders, but in this design process I used the well-known (and arguably infamous) ‘Lorem Ipsum’ text to decide on fonts. At some point, after many tryouts, I had a ‘eureka’ moment: a business card is itself something of a placeholder for me. It is ‘what I think I am’, ‘what I want to be’ and, when the design is bad, ‘what I am totally not’. Following this realisation, I decided to use the ‘Lorem Ipsum’ text on the back side of the card, in a dark color sitting close to the background, to create some subtlety against the very bold font choice and the amount of text.

I would love to have had these characters extruded in print

For the front side, I chose to reduce the information to the bare minimum, using slight color and size variations to create hierarchy.


  • Printing (A.K.A why I hate Staples):

After completing the design, I was referred to Staples as a print house. I used their online graphic submission, which felt way too easy for such a delicate process but, given my very limited knowledge, looked pretty reasonable.

I ended up picking up the cards only to find the colors had been misinterpreted in the print process: I actually got an invisible back side, along with print artifacts (smudges and stains, to name a few).


  • Lessons I learned:
    • Good designs take time, from careful conception all the way to execution
    • Design never ends in design; you always have to perfect the medium too. If it’s cards, make sure they’re perfectly printed. USE THE MEDIUM.
    • I actually learned Illustrator (if that counts as a lesson)
    • Never force an idea; you might dislike your rushed execution while the idea itself might be very good
    • Always print a sample to confirm everything is good and sharp
    • Never go to Staples unless you’re shopping for Sharpies


ICM – Lesson #7

For this week’s assignment I wanted to create a small application that lives entirely on the canvas, to practice everything we have gone over so far. I started by conceptualizing an audio player app that uses hand gestures to determine which music I am interested in, and uses the SoundCloud API to get a playlist of the selected genre.


With that in mind, I also wanted to practice using constructor functions, so I created an app constructor that holds all of the initialization, screens and variables.

I used LeapTrainer.js to train the Leap to listen for specific gestures (i.e. combinations of positions and rotations it picks up), and registered them as event listeners. Following that, I used an image with an overlaying video to create an ‘animation on hover’ effect using p5’s ‘dist()’ function.
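The hover check itself boils down to a distance threshold. Here is a minimal sketch of that logic in plain JavaScript (a hedged illustration: the names `handX`, `handY` and the tile coordinates are hypothetical, and p5’s `dist()` is reproduced as plain Euclidean distance so the snippet stands alone):

```javascript
// p5's dist() is just Euclidean distance, reproduced here so the
// logic is self-contained.
function dist(x1, y1, x2, y2) {
  return Math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2);
}

// Returns true when the tracked hand position falls inside a genre
// tile; that is the moment the overlaid video starts playing.
function isHovering(handX, handY, tileX, tileY, tileRadius) {
  return dist(handX, handY, tileX, tileY) < tileRadius;
}
```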

Currently only the Metal genre works, even though event listeners have been registered for all the gestures. One of the topics I got confused about was the audio player’s structure (e.g. keeping track of the current track, and building a modular system that could jump a song forward, backward, etc.).

A link to Github

Physical Computing – Lesson#7

Following Synthesis I had so many ideas that I decided to scale up the conversation between the Arduino & p5. After reviewing the labs, I decided to build a drum machine that uses buttons on the Arduino to trigger oscillators from the p5.sound library, mimicking the sound of electronic drums (inspired by the great 808 drum machine).

I started by sketching the idea for the board


From that point, I started assembling the breadboard


After finishing assembly I started writing the software on the Arduino. Since I am using a lot of the same component (buttons), I decided to use arrays to store the button pins, current states and last states for each button. It looks something like

Once I read a value that differs from the last state && has a ‘HIGH’ value, I serial-print that button as 1, 2, 3, 4 or 5. In p5 I pick up these values and assign each of them to a specific function: in essence, each button on the Arduino toggles a specific oscillator on or off.
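The dispatch on the p5 side can be sketched roughly like this (a hedged illustration, not the actual sketch code: the p5.sound oscillator calls are stubbed out as on/off flags so the toggle logic stands alone, and `serialEvent` is a placeholder name for wherever the serial value arrives):

```javascript
// One on/off flag per drum voice; in the real sketch each flag would
// start or stop a p5.sound oscillator instead.
var drums = { 1: false, 2: false, 3: false, 4: false, 5: false };

// Called with each value ('1'..'5') printed by the Arduino.
function serialEvent(value) {
  var id = parseInt(value, 10);
  if (drums[id] === undefined) return; // ignore noise on the line
  drums[id] = !drums[id];              // toggle that drum's oscillator
}
```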

Time to demo

Link to github

Synthesis // icm + physcomp = ♥

To kick off the long weekend we had the Synthesis workshop. Even though at first I was sure we were going to deal with sound synthesis, we actually synthesised the Intro to Computational Media and Physical Computing courses, which boils down to endless possibilities.

I was paired with Cristobal Valenzuela, and we started by researching which sensor we would like to use for our ‘Physical Variable’ challenge. We decided to use the ColorPAL RGB sensor and send the red, green and blue data over to p5. While figuring out how best to serve the data over serial to p5, we came up with the simple idea of a children’s game: players get the name of a fruit and have to find something the same color as the fruit and scan it.

Demo time:

To determine whether the correct color is being sensed, we used p5’s ‘dist()’ function in the following manner:
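Since the original snippet isn’t shown here, this is a rough reconstruction of the check in plain JavaScript: `dist()` over two RGB triplets is just 3D Euclidean distance, and the tolerance of 50 is an assumed value, not the one we actually used.

```javascript
// 3D Euclidean distance, as p5's dist(x1,y1,z1,x2,y2,z2) computes it,
// applied to RGB triplets.
function dist3(r1, g1, b1, r2, g2, b2) {
  return Math.sqrt((r2 - r1) ** 2 + (g2 - g1) ** 2 + (b2 - b1) ** 2);
}

// The scanned color "matches" the target fruit color when the
// distance in RGB space falls under a tolerance (50 is an assumption).
function isMatch(scanned, target, tolerance) {
  tolerance = tolerance || 50;
  return dist3(scanned.r, scanned.g, scanned.b,
               target.r, target.g, target.b) < tolerance;
}
```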

Some of the things we didn’t have time to build, but discussed and agreed could be interesting to try: timing the actual ‘search & scan’ process, keeping score, and varying difficulty levels.

Link to github

Visual Language – Color

This week’s assignment is divided into two sections: a test, and a project demonstrating self-expression with color.

Test Results:

After taking the test I was rated 4 (where 0 is the highest score).

Self expression with color:

This week I chose to focus on a really private thing that has been happening in my life, as it has many implications in terms of color. Two months ago I moved to NYC and had to part with my partner (girlfriend); we decided to do our best to maintain the relationship long-distance. After some adjusting, we started messaging each other ‘selfie’ images of our lives as we go about the day (our human version of emoji, one could say), and before I realised it, this was happening multiple times a day. While thinking about what kind of self-documentation I could use for this week’s assignment, I suddenly realised it was in front of me all along: my very own documentary.

5 Tones of Emotion


I decided to pack all of these images (plus the ones that will come as time passes) into a website (i.e. ‘5 Tones of Emotion’) that analyses these self-documented pieces of emotion and comes up with a 5-tone palette representing the most common average tones found across the entire library.
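The core of that analysis is an average over pixel values. A minimal sketch of just the averaging step, assuming the pixels arrive as a flat [r, g, b, r, g, b, …] array (the real engine extracts 5 tones; this shows only the single-tone average):

```javascript
// Average color of a flat [r, g, b, ...] pixel array (alpha stripped).
function averageTone(pixels) {
  var r = 0, g = 0, b = 0;
  var count = pixels.length / 3;
  for (var i = 0; i < pixels.length; i += 3) {
    r += pixels[i];
    g += pixels[i + 1];
    b += pixels[i + 2];
  }
  return [Math.round(r / count), Math.round(g / count), Math.round(b / count)];
}
```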


While creating the website I suddenly realised that, since most of the images contain a rather warm palette of colors, the library has a very calming effect when viewed as a whole. To that point, this whole project is a homage to my relationship with Katia, and a way for me to deal with absence through the abstraction of data.


I am not sure this project is worth investigating further, but I would like to make the color-analysing engine better, so it could possibly infer more based on parameters like color temperature and facial recognition, and perhaps even open the engine up for users to analyse their own galleries of images.

The link to the website


Physical Computing – Lesson#3 – Observation


For this week’s interaction observation assignment I chose to focus on the LinkNYC booth that was recently installed close to my home, since I found the device to be an interesting case study that blurs the lines of its seemingly main use.


Interaction observation:

I chose to focus my observation on a young couple’s interaction with the machine over a period of approximately one minute. The man, equipped with a camera, and the woman, holding a water bottle and a bag, approached the machine and instantly started touching the screen. After one tap a map appeared; the woman then looked at the screen, observing as the man moved his fingers and the map moved with them. After about 20 seconds the man pointed towards 72nd St. and started a conversation with the woman, who began pointing in the other direction. As they conversed, they often referred back to the screen for reference, until they started walking towards 72nd St.

My first assumption was that the couple are tourists visiting and exploring the city. Secondly, and probably more importantly, it seems the couple initially approached the device due to its similarity to a handheld smartphone in both design and interface. Since they most likely had experience with that ‘interaction language’, use of the device started right as they approached it, i.e. a very short learning curve.

Connecting this observation to Bret Victor’s ‘A Brief Rant On The Future Of Interaction Design’, which defines a tool as something that addresses human needs by amplifying human capabilities, the LinkNYC booth seems very suitable for that category. The man’s ability to orient himself around NYC is limited by his knowledge of the city’s neighborhoods, streets and routes, and in this sense, given that the booth stands on the street itself, it is able to amplify his orientation by providing the missing knowledge. Furthermore, given Victor’s point that our hands are the means by which we interact with interfaces, LinkNYC gains a lot from people’s familiarity with the smartphone interface and its bundled applications, making the touch screen the perfect choice in that sense.

General observation notes:

  • The machine itself is a noticeably big metal container with ad spaces on both sides facing the street, and a bevel housing a touch screen, a metal numeric keypad and a headphone jack.
  • The device has multiple features: a free WiFi distributor, a built-in tablet and a charging port, to name the most popular ones.
  • It seems the device was conceptualised to solve two main types of problems: a free NYC WiFi network, and an information hub for people (maps and an internet browser being the most commonly used). With that in mind, it solves a local problem for NY residents while also helping tourists orient themselves in the city.


Physical Computing – Lesson#3

For this week’s class, I kept practicing the lab exercises, plus started experimenting with Arduino code & the Arduino IDE.

Digital I/O:

I started by building simple applications: turning an LED on/off, using buttons and counting presses, predominantly learning the ‘digitalRead()’ and ‘digitalWrite()’ functions.
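The press-counting part relies on edge detection: a press registers only when the reading changes from LOW to HIGH, not on every loop iteration while the button is held. Sketched here in plain JavaScript (the Arduino version makes the same comparison with values read by digitalRead()):

```javascript
// 0/1 stand in for the LOW/HIGH values a digitalRead() would return.
var lastState = 0;
var presses = 0;

// Called once per loop iteration with the current button reading;
// a press counts only on the rising (LOW-to-HIGH) edge.
function readButton(currentState) {
  if (currentState === 1 && lastState === 0) {
    presses++;
  }
  lastState = currentState;
}
```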

Assembling lights and buttons

Analog I/O:

Having read and written digital values (essentially high & low values), I went on to learn how analogRead & PWM work, and found a house project I was excited to do that incorporated all of these elements.

The Moisture Checker:

I decided to create a moisture checker that lights up one of three state LEDs to tell you whether the soil is wet enough, OK or too dry. Since my actual plants are on my balcony, having an LED on at all times would both waste power and possibly annoy the neighbors, so I decided to incorporate distance sensing to turn the LEDs on or off.


I started by defining the goals and the scenario in which the thing would be used, and once that was done I moved on to sketching the schematic for the assembly.


For the LEDs themselves, I decided it would be calmer to use a ‘breathing’ effect, having them fade from on to off in a wave-like way. With that in mind, I used a loop that iterates between 0-255, turning the correct LED on and off.

void ledEffect(int moistureState){
  //Mimic the effect of a sine wave breathing effect for the leds
  //Is there any way I can generate these values without delaying the loop?
  //Fade up
  for(counter = 0; counter < 255; counter++){
    analogWrite(moistureState, counter);
    delay(2);
  }
  //Fade back down
  for(counter = 255; counter > 0; counter--){
    analogWrite(moistureState, counter);
    delay(2);
  }
}

The LED breathing effect

Reading the moisture sensor over serial:

Questions for class based on assignment:

  1. I am using a ‘for’ loop to iterate over values ranging from 0-255 in order to create the breathing effect for the LEDs. However, I am also using the ‘delay()’ function, which delays the whole loop while the LEDs are on. Is there any other way to achieve this effect? Perhaps a sine-wave generator I could map to the values above without delaying the loop.
  2. I find the sensor readings very jumpy. What would be a good technique to ‘filter’ the values I get back from the moisture sensor?
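For what it’s worth, both questions have non-blocking answers whose math is sketched here in plain JavaScript, since the same expressions port directly to Arduino C (a hedged illustration, not tested on hardware): map elapsed milliseconds through a sine wave instead of a delay()-driven loop, and smooth jumpy readings with a running average.

```javascript
// 1) Non-blocking breathing: derive a 0-255 brightness from the
//    elapsed milliseconds, so the main loop never blocks.
function breathe(millis, periodMs) {
  var phase = (millis % periodMs) / periodMs;          // 0..1
  var wave = (Math.sin(phase * 2 * Math.PI) + 1) / 2;  // 0..1
  return Math.round(wave * 255);                       // 0..255
}

// 2) Running-average filter: each new reading nudges the smoothed
//    value by a small factor, damping jitter from the sensor.
var smoothed = 0;
function smooth(reading, factor) {
  factor = factor || 0.1;
  smoothed = smoothed + factor * (reading - smoothed);
  return smoothed;
}
```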

Visual Language – Typography & Expression

For this week’s class the assignment was to express yourself through different typefaces and uses of typography. I started by rounding up all the typefaces I had already used and liked, and began researching the different features and characteristics, discussed in class, that each of them has, in order to better understand why I ‘like a font’.

  • My name in 3 Serifs and 3 Sans-Serifs:

I started with the Sans-Serifs, as I have more experience choosing Sans-Serif fonts. I decided to use the following three fonts, as each of them has different variations


Modern Sans


Some of the features I liked about Modern Sans are its very geometric capital ‘O’ and ‘F’; I especially like the fact that the lowercase ‘s’ is slightly angled on its axis.



Simona

Some of the features I liked about Simona are its Serif-like lowercase ‘l’, the rather low crossbar seen in the lowercase ‘e’, and the contrast between these rather unique features and its overall geometric approach.

* Disclosure: this font was designed by a good friend, Ben Nathan, and I especially like it since it also has a Hebrew set. Link to his website.

Avenir Next


Avenir. Cliché, right? Some of the features I like about Avenir Next are its geometric uppercase ‘O’ and ‘F’, its ascenders in the lowercase ‘l’ and ‘h’, which span very high (actually rising above the cap height), and the circular dot on the lowercase ‘i’.

Big Caslon


Some of the features I like about Big Caslon are its unique capital stress angles (e.g. the uppercase ‘F’), its high crossbar seen in the lowercase ‘e’, and especially the very subtle brackets connecting the bilateral serifs to the stems (e.g. the lowercase ‘r’, ‘l’ and ‘i’).

Palatino Linotype


Features I like about Palatino Linotype are its round and subtle finial on the lowercase ‘e’, its overall rounded Serif edges, especially visible in the lowercase ascenders (e.g. ‘l’, ‘h’), and the beak at the tip of the lowercase ‘r’, which adds a really nice yet subtle touch.



Bentham

Bentham has a very noticeable beak compared to Palatino, especially in the lowercase ‘r’. I like the contrast between the very pointy Serifs at the bottoms of the letters and the more curved ones noticeable in the ascenders (e.g. the lowercase ‘l’).

Expressive Words:








ICM – Lesson#2

For the second lesson of ICM, I decided to tackle a music visualiser. After much thought I ended up creating a 3D music visualiser that analyses the sound using the Web Audio API and controls a noise function applied to a 3D sphere using Three.js and a GLSL shader. Let’s dive into it!

I would like to thank Ian McEwan, who wrote the shader used in the attached example, which really helped me find my way into this visualiser. And a huge thanks to Marco Guarino (who is also an ITP student and a great friend).

  • First step – Get the audio analysed:

I start by creating a Web Audio context, which enables me to create an audio element and hook it up to an analyser. The analyser outputs its frequency data into an unsigned byte array, which I will cover later on, as this happens in the update function for the whole scene.

//Web Audio Stuff//
var audio = new Audio();
audio.src = 'marco.mp3';
audio.controls = true;

var context = new AudioContext();
var analyser = context.createAnalyser();
var freqByteData = new Uint8Array(analyser.frequencyBinCount);

window.addEventListener('load', function(e) {
  // Our <audio> element will be the audio source.
  var source = context.createMediaElementSource(audio);
  source.connect(analyser);              // feed the analyser
  analyser.connect(context.destination); // keep the audio audible
}, false);
  • Second Step – Make the 3D scene:

The 3D scene is actually a lot simpler than it might look at first. I started by rendering a 360° equirectangular image of the terrain using Vue (an environment-modelling suite), which I rendered in two resolutions: 3840×2160 for the background, and 1280×720 for the projected reflections on the actual sphere.


This image gets wrapped on a sphere as a texture using Three.js’s THREE.ImageUtils.loadTexture and, as part of the sphere element, is fed into the shader for displacement based on a Perlin noise function. The last element in the scene is a very simple perspective camera that is moved around the object by mouse-down events.


The update function plays a major role in every WebGL scene, but here it also serves as the containing function for the audio analysis on each draw cycle.

function update() {
  // Pull the current frequency spectrum into the byte array
  analyser.getByteFrequencyData(freqByteData);

  var length = freqByteData.length;
  var sum = 0;
  for (var j = 0; j < length; ++j) {
    sum += freqByteData[j];
  }
  aveLevel = sum / length; // average amplitude across all bins
}
By using a for loop to iterate over the frequency data and dividing the sum of the amplitudes by the number of bins, I determine the average volume, but I can also extract individual values for specific frequency ranges.


Lastly, I included a dat.GUI interface, both as a means of learning to use it and as a set of controls for the shader’s response to sound, and even the camera’s reaction to sound.

The link to the visualiser’s repo & demo:

In the future, I would like to get a better grasp of GLSL and shader development, to gain more confidence in perhaps writing my own shaders for these types of experiences.