ITP Blog

My journey in making things

Data Art – Data Critique

For my final project in Data Art, I collaborated with Shelly Hu on a conceptual project that would serve as a “Data Critique”. Our starting point was a discussion of Rem Koolhaas’s canonical article “Junkspace”. The article, which deals mainly with modern architecture, is a poetic and blunt criticism of modern (and possibly “Americanized”) architecture, and an examination of the human-driven use behind some of these spaces.

Understanding a “Junkspace”

As the article is quite long and dense, here are some of our impressions of it:

  • Koolhaas refers to Junkspace as a byproduct of modernization.
  • Koolhaas attacks the concept of modern architecture as one that no longer exists; we only produce junkspaces that facilitate uses.
  • Koolhaas embraces extreme romanticism in his old vs. new paradigm.

Themes from the article:

  • “Junkspace is political: it depends on the central removal of the critical faculty in the name of comfort and pleasure.”
  • “Junkspace pretends to unite, but it actually splinters. It creates communities not of shared interest or free association, but of identical statistics and unavoidable demographics, an opportunistic weave of vested interests.”
  • “Junkspace seems an aberration, but it is the essence, the main thing… product of the encounter between escalator and air conditioning, conceived in an incubator of sheetrock (all three missing from the history books).”

From “Junkspace” to “Junkdata”

For our data critique we decided to examine an interpretation of Koolhaas’s article in the data domain. Our discussion included the notion of modern technology as layers of abstracting coherent meaning out of 0’s and 1’s. By that we mean that simply by using a map service, we generate piles of junk data for the sole purpose of seeing directions on a map on our smartphone. That being said, the junk lives on, as it’s stored, curated and then analyzed to produce stats that are used to increase revenue on the other end. Simply put, everybody generates junk data, but we almost never face the data itself. During the discussion we also referred to the fourth wall in cinema: the illusion of reality works as long as the fourth wall remains intact. It seems nobody wants to break the illusion of meaning with junk data, since it doesn’t make a whole lot of sense to see something like this:

When you are actually looking for directions to a restaurant.

Junk Data

Our proposal consists of an Augmented Reality experience that is meant to visualize junk data as it is being generated. The purpose of the experiment is to examine what it would feel like to see the raw material generated by our day-to-day interactions with technology.

Setup

Our stack would be composed of the following (a rough sketch of the sniffing layer follows the list):

  • A router that would broadcast Wi-Fi
  • Wi-Fi sniffing software that would pick up the packets sent by the connected clients
  • A server that would then distribute the sniffed packets to connected clients
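We haven’t built this yet, so the following is only a minimal Python sketch of what the sniffing layer could look like (using scapy; the interface name and the `broadcast` stub are placeholders for whatever distribution server we end up writing):

```python
from scapy.all import sniff  # scapy handles the low-level packet capture


def broadcast(junk):
    """Stand-in for the distribution server (e.g. a WebSocket push to the AR clients)."""
    print(junk)


def handle_packet(packet):
    """Reduce every captured packet to the 'junk' we want to visualize:
    a timestamp, the raw length, and a short hex preview of the bytes."""
    broadcast({
        "time": packet.time,
        "length": len(packet),
        "preview": bytes(packet).hex()[:64],
    })


# "wlan0" is an assumption -- whatever interface sees the router's traffic.
sniff(iface="wlan0", prn=handle_packet, store=False)
```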

 

Reading Impressions – Data Art

IMG MGMT: The Nine Eyes of Google Street View

Jon Rafman’s article outlines the development and cultural impact of Google’s Street View. Rafman uses language that reveals, to some extent, his perception of Google’s mission to index the world. Continuing on that, Rafman shows examples of special moments captured by the Google lens, such as naked people, glitch errors, a man carrying a gun, and other questionable moments that blur the line between unnoticed “documentation” and augmented scenarios.

Notes and open questions:

  1. Google Street View, to some extent, feels to me like a mockumentary film: it captures ‘a’ reality through conventions we associate with realistic qualities, but nevertheless disturbs the medium prior to its capture.
  2. Some interesting projects have been made using Google Street View (as Rafman mentions); one of the ones I like the most is the Hyperlapse.js library (which is unfortunately no longer maintained or developed).
  3. This article by James George outlines some of the uncanny moments reconstructed by Google’s new photogrammetry engine, which brings 3D buildings and streets into Maps. It feels, to some extent, like an interesting parallel conversation about authenticity in Google’s documentation and the ‘uncanny valley’.

 

Where is the line with public data?

For this experiment I decided to tackle public data from a rather different angle. While making it I came across some interesting questions, which I will highlight during the walk-through of the creation and documentation of the project. I collaborated with Anastasis Germanidis to create ‘Death Mask’.

Leon & Kenzo behind the data lens

Where is the line when it comes to public data?

The concept behind this experiment is to predict the ages of people captured by the camera and draw a representation of how long they have left to live (if nothing goes wrong, that is) in Augmented Reality. As dystopian as it may sound, the idea is based on some controversial machine learning research that claims ‘state-of-the-art’ success rates in age prediction.

Social Security life expectancy calculator

The project also serves as a commentary on the distinction between what’s referred to as public data and user-contributed information in the age of deep learning. Some of the questions that arose during the conceptualization phase were:

  1. Is there an inherent moral difference between conventional statistical methods and deep learning when it comes to predicting personal information such as age?
  2. Since the deep learning approach was already trained on information (in this case public information), isn’t the prediction process considered public information too?
  3. Why are we (rough generalization, sorry) so much more sensitive to information when it is decrementing rather than incrementing? E.g. people responded to this experiment a lot “better” when it showed the predicted age than when it showed the age minus the life expectancy.

Creating the experiment

We started out by searching for a CoreML implementation of the AgeNet model. Thanks to this amazing repo, we were able to form a working demo of the machine learning prediction functionality.

From that point we designed the graphics around it, so that it feels like a mesh of a face is being displaced based on how long you still have to live.
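The app itself is built with Swift, CoreML and ARKit, so the snippet below is only a Python illustration of the mapping we had in mind: a predicted age bucket becomes “estimated years left”, which in turn becomes a displacement amount for the face mesh. The life-expectancy figure and the bucket format are illustrative placeholders, not the values the app uses.

```python
LIFE_EXPECTANCY = 78.0  # illustrative average only, not the SSA table we referenced


def years_left(age_bucket: str) -> float:
    """Turn an AgeNet-style bucket such as '25-32' into an estimate of years remaining."""
    low, high = (int(x) for x in age_bucket.split("-"))
    midpoint = (low + high) / 2
    return max(LIFE_EXPECTANCY - midpoint, 0.0)


def mesh_displacement(age_bucket: str, max_displacement: float = 1.0) -> float:
    """Fewer estimated years left -> larger displacement of the face mesh."""
    return max_displacement * (1.0 - years_left(age_bucket) / LIFE_EXPECTANCY)


print(mesh_displacement("25-32"))  # a younger prediction displaces the mesh less
```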

Demo

To summarize this experiment we created a small video that demonstrates the app and its usage. Thanks to Scott Reitherman for the amazing track.

Con’text’ with Twitter AR

For this experiment I chose to focus on spatializing information in Augmented Reality. The idea of spatializing information is not new in any sense and dates back perhaps to the invention of signage (or arguably even earlier). That being said, it seems that advancements in the accessibility of Augmented Reality consumption models, predominantly the release of Apple’s ARKit and Google’s ARCore, call for different approaches and models when spatializing information, or to be precise, when drawing digital information in physical space. Given our shared interest in the subject, I collaborated with Anastasis Germanidis to produce a speculative experiment using Twitter data in Augmented Reality.

Spatializing information in AR feels much more like cave glyphs than street signs: graphic, associative and story-driven.

 

Why AR?

During the past couple of years I have been experimenting with VR quite a lot. Through creating fantasy-driven, narrative and documentary VR experiences, ‘mimicking life’ has always struck me as an impossible goal when designing these experiences. The paradigm shift that AR suggests is that at the core of the experience, you are the focus (iPod, iPhone, iLife). As content ‘interacts’ with your environment in a place of your choosing, we become numb to our ‘spidey-sense’ for telling fiction from non-fiction and buy fully into more hybrid experiences. A good analogy is that ‘realistic’ VR experiences feel like mockumentary films, while AR feels more like documentary (to me!).

 

Twitter in Augmented Reality

First off, in order to contextualize the real world to the digital world, we need a bridge that lets us understand some (very little, but still) of the thought process we go through between seeing things and thinking about ideas (yes, this ties perfectly into Peirce’s theory of signs and semiotics in general). To do that, we started by looking into another of Apple’s new and upcoming innovations, CoreML. At its essence, CoreML is an optimized engine for running pretrained machine learning models on iDevices. Apple also released quite a few pre-trained models themselves, and given our desire to classify objects from the real world, we decided to use the Inception v3 model, which is trained to detect objects in images and classify them into 1,000 categories.

*We also found this example to be super useful when starting an ARKit/CoreML project

The art of association

Even though the machine learning model worked better than either of us anticipated, it is nothing like how our brain operates (sometimes I really am happy I took media studies in film school). Continuing on that, since our brain is such a phenomenal ‘associative computing engine’, we are able to bridge the gap with our own context of the scenario even when the machine learning classification is wrong. Which raises the question: what is wrong?

An early version of the experiment that shows the classification categories

 

From index to tweet

Once we got the machine learning apparatus running, it was time to get some data based on it. We hooked up to the Twitter API using a Swift library and started parsing tweets. By adding some filters to the parsing process, we were able to get to a decent point where the tweets are closely related to the classification category.
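The app is written in Swift, but the filtering idea is simple enough to sketch in Python: keep only the tweets whose text actually mentions (part of) the classification label. The tweet structure and keyword check below are simplified stand-ins for what the Swift code does.

```python
def filter_tweets(tweets, label):
    """Keep tweets whose text plausibly relates to the CoreML classification label.

    `tweets` is a list of dicts with a 'text' key, standing in for the Twitter API response.
    """
    label_words = set(label.lower().split())
    return [t for t in tweets if any(word in t["text"].lower() for word in label_words)]


sample = [
    {"text": "Nothing beats morning coffee", "user": "a"},
    {"text": "Just landed in Tokyo", "user": "b"},
]
print(filter_tweets(sample, "coffee mug"))  # only the first tweet survives
```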

Disturbing people on the floor in AR

Once we had all the rather technical parts in place we started sketching a design that would work in delivering the message of this experiment.

We wanted it to feel natural, but also disruptive. 

We added the profile picture of each presented tweet inside a sphere, roughly located next to the tweet, and used Twitter’s color palette to color the text and the username.

the ICON, the INDEX and the SYMBOL.

 

Where does the magic happen?

Personally, I found small moments of magic when it almost felt like machine learning and augmented reality extended my perceptual senses and brought the emotional impact of objects up to the conscious surface. Wow, that was not very descriptive, right? Perhaps an example would help: when I looked at a fence in the subway station, the classification algorithm predicted that I was looking at a prison. Since we disabled the user’s ability to see what the machine learning model classifies, it pulled the following tweet:

“You are the prisoner, the prison and the prison keeper. Only you hold the key to your freedom” – Ricky Mathieson

A moment of magic

Another example of this magic occurred when I looked at the coffee machine (a thing I spend quite a lot of time doing every day):

“Still life with coffee pot” – Man Ray (1962)

Which refers to the following painting by Man Ray:

Still Life with Coffee Pot – Man Ray

 

Enough with the talking

To illustrate how this works in practice, we made a video of the experiment in use throughout one morning.

Self Portrait – ‘Mean Emotion’

The idea of a self-portrait is a rather challenging one. It holds the premise of conveying the ‘DNA’ of an artist, but more importantly to me, it tells someone’s story through the period of time in which the portrait was taken or made. Portraits are therefore inherently a document of a period of time that no longer exists for someone, and on the shoulders of this understanding I decided to pursue ‘Mean Emotion’.

The story

I moved to New York City a little over a year ago to attend ITP. Before coming to NYC, I had lived in Israel with my partner Katia for (at that point) over five years. When we got the message informing us of my acceptance into ITP, we both knew what it meant. The reason behind this mutual understanding is that Katia was still attending school in Israel, so we decided we would live apart but do everything we could to support and maintain the relationship. Fast forward one year: as I sat down thinking about the portrait, I realized that one of the tools that helped us through this crazy period of time is our ability to express emotion through the selfies we send each other.

Data-set collection from WhatsApp

The project

Recently I have begun digging deeper into data science and machine learning, with an emphasis on graphics and imagery. One thing I found myself doing repeatedly is aligning and averaging data sets, to be able to clearly see the variances and deviations that a so-called ‘learning’ model could potentially pick up. So I decided to use the aforementioned selfie data to try and average a ‘year’s worth of emotions’. The data set came to 230 images, taken from August 2016 to August 2017.

I collected all the images while paying special attention to WhatsApp’s naming convention, which stores the date and time in the file name itself. From that point I started building a face aligner that would parse the images and align them in 3D, which makes for much better face matches. I started by examining Leon Eckert’s ‘Facemesh Workshop’ GitHub repo and went on to build my own FaceMesh class in Python. The project requires Python 3+, OpenCV, dlib and numpy, all of which have very good installation guides online. With dlib in place, you can use the pretrained face-landmarks .dat file, which gives very good results out of the box for detecting faces. Another great resource was pyimagesearch’s facial tracking tutorial.
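For reference, the core of the alignment step looks roughly like the sketch below (my FaceMesh class does more, including the 3D part; this 2D affine version only shows the mechanics). It assumes dlib’s 68-point `shape_predictor_68_face_landmarks.dat` file and OpenCV:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def key_points(image):
    """Return left-eye center, right-eye center and nose tip from dlib's 68 landmarks."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    pts = np.array([[p.x, p.y] for p in predictor(gray, faces[0]).parts()], dtype=np.float32)
    return np.float32([pts[36:42].mean(axis=0), pts[42:48].mean(axis=0), pts[30]])


def align_to(reference, image):
    """Warp `image` so its eyes and nose land on the reference image's eyes and nose."""
    ref_pts, img_pts = key_points(reference), key_points(image)
    if ref_pts is None or img_pts is None:
        return None
    matrix = cv2.getAffineTransform(img_pts, ref_pts)
    h, w = reference.shape[:2]
    return cv2.warpAffine(image, matrix, (w, h))
```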

The console application running

The GitHub repository for the project and the tool can be found here.

After some coding and testing I was able to run the script and start aligning faces against a reference image. My plan was to run the app and have it output a print of the entire year, 8/2016 – 8/2017.

The failure

After getting the code to work, and spending hours on alignment, resolution and compression (haaaaaaa), I was able to produce a good mean image from the entire data set. Feeling uplifted, I decided to go to the LaGuardia Studio and print the portrait on high-quality paper using their museum-grade inkjet printer. Upon getting there, and after a brief ‘up-to-speed’ tutorial by the staff, I realized my image didn’t meet the printer’s DPI requirement, and I sat down to redo the part of the code that dealt with resolution. Unfortunately, after adjusting it, the program took 3 hours to run on my computer and the LaGuardia Studio had already closed.

Lesson learned: “measure twice and cut once”.
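In retrospect, the check I should have run before heading to the studio is a single multiplication: required pixels = print size in inches × printer DPI. The numbers below are illustrative, not the actual print specs:

```python
# Illustrative values only -- not the actual print size or the studio's requirement.
print_width_in, print_height_in = 16, 20
dpi = 300

required_px = (print_width_in * dpi, print_height_in * dpi)
print(required_px)  # (4800, 6000): the aligned images need to support at least this
```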

Shooting a printer I never got to print anything on (meta)

 

The result

Here is the printed result for 8/2016 – 8/2017: 228 images shot during the year, averaged into a single image.

I also took the time to sort the averaging process by month, creating 12 more images, each containing the selfies sent during that month.
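Assuming the aligned images are all the same size, the averaging itself is just an accumulation in floating point, and the monthly grouping leans on the date encoded in WhatsApp’s file names (the `IMG-YYYYMMDD-WA####.jpg` pattern below is my reading of that convention; the folder names are placeholders):

```python
import glob
import re
from collections import defaultdict

import cv2
import numpy as np


def mean_image(paths):
    """Average a list of same-sized, aligned images into a single image."""
    acc = None
    for path in paths:
        img = cv2.imread(path).astype(np.float64)
        acc = img if acc is None else acc + img
    return (acc / len(paths)).astype(np.uint8)


def group_by_month(paths):
    """Bucket image paths by YYYY-MM using the date in WhatsApp's file names."""
    groups = defaultdict(list)
    for path in paths:
        match = re.search(r"IMG-(\d{4})(\d{2})\d{2}-WA\d+", path)
        if match:
            groups[f"{match.group(1)}-{match.group(2)}"].append(path)
    return groups


paths = sorted(glob.glob("selfies/aligned/*.jpg"))
cv2.imwrite("mean_year.jpg", mean_image(paths))
for month, month_paths in group_by_month(paths).items():
    cv2.imwrite(f"mean_{month}.jpg", mean_image(month_paths))
```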

I also created a speculative video of how the piece could be displayed:

Sound Objects – Final

Towards the end of this semester I had the opportunity to combine two of my favorite classes into a single final: Code Lab 1, which was about programming for the Unity game engine, and Interactive Music, which, as the name suggests, was about interactive music. This was also a great opportunity to collaborate with Scott Reitherman, as we had been talking about a piece continuing his Ambient Machine project for a while, and had been looking for a time to create a 3D virtual reality ‘big brother’ project. This was the birth of Sound Objects VR.

At its core, Sound Objects is a VR composition app that lets you compose music by augmenting the physics of objects in space. The music is generated by object collisions, while the physics determine repetition and speed.

Prototyping the idea

We started by creating a really simple 3D world which lets the user spawn new sound spheres and bounce them continuously to create musical patterns. This was our first working demo, so I tested it in both Interactive Music and Code Lab, and was surprised to find that people found the interaction very playful and mainly commented on additional experience elements, such as scenery, effects and composition changes. One of the main things that struck me while demoing at the midterm was the power of spatial audio as a mixing tool: instead of having to mix things for users, if they get to walk around the sound-emitting objects, they will intuitively mix the piece themselves.

Beyond the midterm

After midterm, the main goals were:

  1. Work on the world (scene, graphics, effects)
  2. Work on the audio elements and the compositional aspect of the experience
  3. Implement the scene using a VR headset
  4. Figure out and refine the interaction

We started by changing the world. As our midterm environment was essentially a gray sandbox, we had to create the environment from scratch. After some brainstorming and user feedback, we decided to go with a desert scene in which you are surrounded by sand and mountains. It works well because the environment is familiar (i.e. it’s not beyond conceived reality), yet it is peaceful and minimal, allowing the composition to act as the main thing. We designed the terrain with Unity’s terrain tools and worked with E-on Vue to create specific mountain geometries. We also used keijiro’s amazing HexBokeh shader to add some depth of field to the scene.

Alongside getting the environment to work well, we continued to develop the sounds for the experience, and actually developed a day-to-night scale transition, which we will implement in the future as part of a story arc in the experience. The sounds all get loaded into a main static dictionary, which is shared between all the sound objects in order to play clips. This approach also reduces the implementation of a new sound to just calling the buildSoundList method.

Another realization we had along the way is that we wanted to be able to control the properties of objects that share the same sound. For this, we added a SoundProps class which uses a similar structure to SoundLists and essentially stores properties that are then used by the objects at a later stage.
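The project is written in C# for Unity, but the structure is easy to describe in a few lines of Python: one shared dictionary of clips that every sound object reads from, plus a parallel dictionary of per-sound properties, so adding a new sound is a single registration call. The names below mirror the C# classes rather than quoting the actual code:

```python
# Shared registries, analogous to the static sound list and SoundProps classes in the Unity project.
SOUND_LISTS = {}  # sound name -> list of clip file paths
SOUND_PROPS = {}  # sound name -> per-sound properties (volume, pitch spread, ...)


def build_sound_list(name, clips, **props):
    """Register a sound once; every sound object then looks it up by name."""
    SOUND_LISTS[name] = clips
    SOUND_PROPS[name] = props


build_sound_list("kalimba", ["kalimba_c.wav", "kalimba_e.wav"], volume=0.8, pitch_spread=0.1)
build_sound_list("pad", ["pad_a_minor.wav"], volume=0.5, loop=True)
```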

With the sounds in place, we were also working on implementing this world on a VR headset. Initially we wanted to go with the Vive, but since we had access to more Rifts at ITP, we used the Rift together with the Oculus Touch controllers as hands. After some time learning the API, one thing we had to tackle right away was being able to walk in VR. We decided to use the joystick on the left Touch controller, but the Oculus code only provided a method that requires you to calibrate the forward vector every time you run your game, and so it was time to hack. To fix that, I added a public reference to the right-eye camera in the OVRPlayerController script and used it to create the forward vector for the joystick; that way, when you rotate your head, you also change the joystick controls. The full script can be found here.
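The actual fix lives in the OVRPlayerController C# script, but the underlying math is small enough to show on its own: rotate the joystick’s 2D input by the camera’s yaw so that pushing forward always means “the way the headset is facing”. A rough Python version of that rotation (axis conventions are simplified here):

```python
import math


def stick_to_world(stick_x, stick_y, camera_yaw):
    """Rotate joystick input (x = strafe, y = forward) by the camera's yaw (radians)
    and return a world-space (x, z) movement direction."""
    forward = (math.sin(camera_yaw), math.cos(camera_yaw))  # camera forward in the XZ plane
    right = (math.cos(camera_yaw), -math.sin(camera_yaw))   # camera right in the XZ plane
    world_x = stick_y * forward[0] + stick_x * right[0]
    world_z = stick_y * forward[1] + stick_x * right[1]
    return world_x, world_z


# Head turned 90 degrees to the right: pushing the stick forward now moves along +X.
print(stick_to_world(0.0, 1.0, math.pi / 2))
```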

After testing many different interaction approaches, we decided the sound objects won’t be spawned but placed in trays that share compositional qualities (i.e. they work well together), and the user would navigate the space, creating a big composition made up of three spatial areas of smaller compositions.

Demo, demo, demo, demo

Here is a video showing a demo we made in the Interactive Music final class

Here is a short making-of video showing some of the aforementioned stages.

Next up

We would like to continue and work on Sound Objects and deal with the following:

  • More compositional elements
  • Audio-reactive scenery
  • ‘Arc’ story for the experience that changes over time
  • Some effects for raycasting balls, and ball groups of the same sound

I would also like to thank Matt Parker and Yotam Mann for the guidance, help and knowledge each course provided, and in turn the way it shaped the project – THANK YOU

Homemade Hardware – Final

 

For my Homemade Hardware final, I decided to continue and pursue the keyboard design. The keyboard itself is highly influenced by the Roli Seaboard and its approach towards multidimensional MIDI controllers, but rather than an expensive, software-specific solution, I wanted to make a ‘cheap-as-possible’ multidimensional MIDI keyboard controller.

The Roli Seaboard (small version)

Prototyping

I started prototyping how the keyboard would actually function and ended up using a dual-sensor setup for each key, where pressure is determined by an FSR (force-sensing resistor) sitting at the bottom of each key, and your finger position is determined by a soft-pot at the top.

Once everything was working on the breadboard, I moved to Eagle to start designing the board that would read the sensors and send the readings over serial to the Raspberry Pi, which would function as the synth, turning the data into sound and/or MIDI commands sent to the computer.
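The synth side isn’t built yet, but on the Raspberry Pi the glue could be as small as the Python sketch below: read one `key,pressure,position` line per reading from the serial port and translate it into MIDI note-on and polyphonic aftertouch messages. The port name, baud rate and message format are assumptions, not the final protocol (note-offs are also omitted for brevity):

```python
import serial  # pyserial, for reading the board's serial output
import mido    # for sending MIDI messages to a synth or the computer

ser = serial.Serial("/dev/ttyACM0", 115200)  # assumed port and baud rate
midi_out = mido.open_output()                # default MIDI output port

BASE_NOTE = 48  # assumed mapping: key index 0 -> C3

while True:
    line = ser.readline().decode(errors="ignore").strip()
    try:
        key, pressure, position = (int(v) for v in line.split(","))
    except ValueError:
        continue  # skip malformed or partial lines
    note = BASE_NOTE + key
    velocity = max(1, min(127, pressure // 8))  # scale a 0-1023 ADC reading down to 1-127
    midi_out.send(mido.Message("note_on", note=note, velocity=velocity))
    midi_out.send(mido.Message("polytouch", note=note, value=max(0, min(127, position // 8))))
```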

Schematics

Bill of materials:

  1. ATmega328 microcontroller
  2. 2x 4051 multiplexers
  3. 16 MHz resonator
  4. Resistors, capacitors and header pins

I created two boards, one prototype (through-hole) and one final (SMD). I started by laying down the parts I would need and positioning them; the two boards are nearly identical, only the parts (symbols) differ to match the different PCB techniques.

Board schematics

After making the through-hole version I wasn’t able to find a 16-channel multiplexer in a surface-mount version, so I decided to use two 8-channel multiplexers sharing the same control pins, chained to the same select lines coming from the microcontroller.

 

Left: through hole board design | Right: surface mount board design

Fabrication

I started with toner transfer, which went surprisingly well. My timings were:

  1. Run the board with the vinyl print through the laminator 5 times
  2. Iron each side of the board for 4 minutes, keep constant movement (I listened to dub music which really helped set the down tempo loopy mood for ironing)

After the toner transfer was done I acid-etched the boards, which took about 25 minutes for 3 boards.

Assembly

After cleaning the remaining toner off the boards, I started placing the parts I would need to solder and created a solder stencil. One thing that really helped smooth the solder-stencil process was getting rid of the unused pins on the ATmega328, which was quite straightforward in Illustrator.

Solder stencil after some Illustrator work

The settings I used for the laser cutter:

  1. Raster mode
  2. Speed: 10
  3. Power: 15

After some more dub music and quality time with the pick-and-place machine, I was able to get 4 boards soldered and reflowed. Out of the 4 boards, I got 2 working ones, which is A LOT compared to my previous ratio when making boards with the Othermill.

Programming was a breeze thanks to this nifty little thing, and I was able to burn the bootloader and upload my sketch in a matter of minutes. Here is the test app I uploaded:

After beep-testing the boards, I drilled and hand-soldered all the header pins in place and tested my board with a second Arduino that reads the serial output from the board, and IT WORKS!

Prototype board

Final board

Special thanks to Shir David for help with shooting.

Up ahead

I am currently working on the fabrication aspects of the keyboard, such as the enclosure, soft key molding and general design, and would like to continue developing this into a functional ‘multidimensional’ keyboard.

Homemade Hardware – Week 08 Board

For this week’s assignment we had to make our very own acid etched SMD board. Since we’re getting closer and closer to the final, I decided it would be a good idea to start realizing the final project.

The project

I decided to try and build a two-dimensional MIDI keyboard, yes, much like the ROLI Seaboard, but different. To start, I realized this project would depend on my ability to plan one key correctly and then scale up to the full keyboard, so I started sketching how just one key module would look, work and function.

I started prototyping the key and decided to use two sensors per key:

  1. A linear variable resistor (for your finger’s Y position on the key)
  2. An FSR, to sense how hard you’re pressing down on the key

After getting it to work on the Arduino, I went on to make the actual schematics using the ATtiny85 as my MCU, attaching two status LEDs that indicate how hard you press each of the sensors.

I took the time to fully brush up on my understanding of Eagle’s named nets, so my designs can stay modular and I don’t have to decipher where lines are going once the project becomes bigger. For the MCU, I finished the ATtiny85 design we started in class and adjusted its size a bit so it fits. After finishing the schematics I made the board and tried to tidy it up, so it’s small, but not too small (the taste of a bad experience with small boards still remains from the Othermill).

After finishing the design I went on to printing and making vinyl toner-transfer sheets in order to start my board.

The toner transfer went pretty well; out of the 8 boards I designed, only 4 came out right, and the rest had at least one issue somewhere. Because of that, it made no sense placing the problematic boards in the acid bath (that would just take time for nothing), so I cut them off using the band saw and went over to acid etching, with the help of the wonderful David Lockard.

After about 30 minutes in the acid, I took the board out, let it cool in water, and started placing the parts.

Drilling holes
One of the final boards

Interactive Music – Midterm

For the midterm project in Interactive Music I decided to continue developing a project I started working on this semester with Scott Reitherman. The project grew out of a set of meetings in which we discussed stochastic approaches to music creation and our own interest in rethinking how a composition could be created using intuitive methods and tools, enabling essentially anybody (in terms of prior musical knowledge, that is) to compose intuitively.

As we started laying the foundations for the project with a Virtual Reality HMD in mind, I decided to focus on building a ‘demo’ scene for the experience which could emphasize, or rather outline, our main objectives for the experience as a work in progress.

Realizing the gesture

One of the main aspects of the assignment was the use of gesture, and since I decided to use a virtual reality headset, I needed a ‘3D agent’ to bridge the virtual and the physical. I decided to use the Oculus Touch controllers, which convey a ‘hands-like’ feeling during the experience.

The experience

The experience consists of a virtual world in which you get to spawn spheres that trigger sounds on collision with the floor. The main focus, from an interaction standpoint, is your interaction with the physics engine that controls the spheres’ movement in space after they are created. To demonstrate that point, I decided to create a physical model in which the bouncing spheres maintain their energy, or simply put, they bounce forever.
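A quick way to see why the physics engine ends up being the sequencer: with perfect restitution a dropped sphere keeps bouncing with a fixed period, and that period depends only on the drop height and on gravity, which is exactly the knob mentioned in the ‘What next?’ list below. A back-of-the-envelope Python check:

```python
import math


def bounce_period(drop_height, gravity=9.81):
    """Seconds between floor hits for a perfectly elastic sphere dropped from `drop_height`:
    it falls for sqrt(2h/g) and takes the same time to rise back, so the period is 2*sqrt(2h/g)."""
    return 2 * math.sqrt(2 * drop_height / gravity)


print(bounce_period(1.0))               # ~0.9 s between hits under normal gravity
print(bounce_period(1.0, gravity=2.0))  # ~2.0 s: lower gravity means slower repetition
```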

In the demo videos above, spheres are spawned at the mouse’s X and Y coordinates on the screen, at a fixed distance from the player. After realizing that part, I went on to the VR integration.

The demo above demonstrates the use of ball spawning and also the ability to ‘pause’ the balls in mid-air (which in turn pauses their sound).

A link to desktop versions of the demo can be found here for both Win/Mac.

What next?

  1. Adding interactions with the physical model, enabling the user to further understand the connection between the physical characteristics of the world and the composition they create (e.g. less gravitational force will make repetitions less frequent, which will result in a slower overall composition).
  2. Sound selection GUI which enables the user to both change and audition different musical components.
  3. Figure out a way to deal with non-rhythmic sounds (e.g drones, pads, ambient components)

Homemade Hardware – Scale board

For this week we had to start prototyping week 8’s board. We had to decide on a concept and start prototyping the board and its components. After some thought, I decided to go for either a musical instrument or a weight scale. Given the time scope of the assignment, I decided to go with the weight-scale board.

 

Load cells

Since I decided to go with a weight-scale board, I started researching load cells with Arduinos. I used SparkFun’s guide to load cells and amplifiers and this Instructable to get comfortable with the idea of using a load cell. I ordered an HX711 amplifier breakout board and a 100-gram load cell. As the board came without header pins, I started by soldering header pins onto the breakout board.

 

 

After soldering the HX711 amp breakout board, I downloaded the HX711 library for the Arduino, connected the load cell to the amp board and the amp board to the Arduino, and started measuring. Unfortunately, the load cell didn’t come with threads (screws) and has to be mounted before it can be used accurately, because otherwise it hits the amplifier during measurement (I ordered 3mm threads so I can mount the scale between two laser-cut panels I’ll cut).

Designing a board

To get started with designing the board, I found a SparkFun GitHub repo that has Eagle schematics for the HX711 breakout board, so I downloaded it and started deleting the parts I don’t really need for my board. I also inserted an ATtiny85 into the board and got a basic header layout for power and ground, and for the load cell connections.

Initial board design schematic in Eagle