
Data Art – Data Critique

For my final, conceptual project in Data Art, I collaborated with Shelly Hu on a project that would serve as a “Data Critique”. Our starting point was a discussion of Rem Koolhaas’s canonical article “Junkspace”. The article, which deals mainly with modern architecture, is a poetic and blunt criticism of modern (and possibly “Americanized”) architecture, and an examination of the human-driven use behind some of these spaces.

Understanding a “Junkspace”

As the article is quite long and dense, here are some of our impressions of it:

  • Koolhaas refers to Junkspace as a byproduct of modernization.
  • Koolhaas attacks the concept of modern architecture as one that no longer exists; we only produce junk spaces that facilitate uses.
  • Koolhaas embraces extreme romanticism in his old-vs.-new paradigm.

Themes from the article:

  • “Junkspace is political: it depends on the central removal of the critical faculty in the name of comfort and pleasure.”
  • “Junkspace pretends to unite, but it actually splinters. It creates communities not of shared interest or free association, but of identical statistics and unavoidable demographics, an opportunistic weave of vested interests.”
  • “Junkspace seems an aberration, but it is the essence, the main thing… product of the encounter between escalator and air conditioning, conceived in an incubator of sheetrock (all three missing from the history books).”

From “Junkspace” to “Junkdata”

For our data critique we decided to examine an interpretation of Koolhaas’s article in the data domain. Our discussion included the notion of modern technology as layers of abstraction that extract coherent meaning out of 0’s and 1’s. By that we mean that by using a map service, we generate piles of junk data for the sole purpose of seeing directions on a map on our smartphones. That said, the junk lives on: it is stored, curated, and then analyzed to produce stats that are used to increase revenue on the other end. Simply put, everybody generates junk data, but we almost never face the data itself. During the discussion we also referred to the fourth wall in cinema: the illusion of reality works as long as the fourth wall remains intact. It seems nobody wants to break the illusion of meaning with junk data, since it doesn’t make a whole lot of sense to see something like this:

When you are actually looking for directions to a restaurant.

Junk Data

Our proposal consists of an Augmented Reality experience meant to visualize junk data as it is being generated. The purpose of this experiment is to examine what it would feel like to see the raw material generated in our day-to-day interactions with technology.

Setup

Our stack would be composed of:

  • A router that would broadcast a Wi-Fi network
  • Wi-Fi sniffing software that would pick up the packets sent by the connected clients
  • A server that would then distribute the sniffed packets to connected clients
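We have not settled on a sniffing stack yet, but the middle piece could look something like this minimal Python sketch: each captured packet is reduced to a small metadata blob and serialized as JSON for the server to broadcast to the AR clients. The field names and addresses are illustrative, not tied to any particular capture library.

```python
import json
from datetime import datetime, timezone

def summarize_packet(src, dst, length, protocol):
    """Reduce a sniffed packet to the 'junk' metadata we would visualize.

    The fields here are our own invention; any capture library
    (scapy, for instance) exposes equivalents per packet.
    """
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "src": src,
        "dst": dst,
        "bytes": length,
        "proto": protocol,
    }

# Example: one packet's worth of "junk", serialized for broadcast.
blob = summarize_packet("192.168.0.12", "172.217.4.78", 1500, "TCP")
message = json.dumps(blob)
```

In practice the server would push one such message per sniffed packet to every connected AR client, which is exactly the firehose of meaningless detail the experiment wants to make visible.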

 

Reading Impressions – Data Art

IMG MGMT: The Nine Eyes of Google Street View

Jon Rafman’s article outlines the development and cultural impact of Google’s Street View. Rafman uses language that reveals, to some extent, his perception of Google’s mission to index the world. Building on that, Rafman shows examples of special moments captured by the Google lens, such as naked people, glitch errors, a man carrying a gun, and other questionable moments that blur the line between unnoticed “documentation” and augmented scenarios.

Notes and open questions:

  1. Google Street View, to some extent, feels to me like a mockumentary film. It captures ‘a’ reality through conventions we associate with realistic qualities, but nevertheless disturbs the medium prior to its capture.
  2. Some interesting projects have been made using Google Street View (as Rafman mentioned); one of the ones I like the most is the Hyperlapse.js library (which unfortunately is no longer maintained or developed).
  3. This article by James George outlines some of the uncanny moments reconstructed by Google’s new photogrammetry engine, which brings 3D buildings and streets into maps. It feels to some extent like an interesting parallel conversation about authenticity in Google’s documentation and the ‘uncanny valley’.

 

Where is the line with public data?

For this experiment I decided to tackle public data from a rather different angle. Through the making of this experiment I came across some interesting questions, which I will highlight during the walk-through of the creation and documentation of this project. I collaborated with Anastasis Germanidis to create ‘Death Mask’.

Leon & Kenzo behind the data lens

Where is the line when it comes to public data?

The concept behind this experiment is to predict the age of people captured by the camera and draw a representation of how long they have left to live (if nothing goes wrong, that is) in Augmented Reality. As dystopian as it may sound, the idea is based on some controversial machine learning research that claims state-of-the-art success rates in age prediction.

Social Security life expectancy calculator

The project also serves as a commentary on the distinction between what’s referred to as public data and user-contributed information in the age of deep learning. Some of the questions that arose during the conceptualization phase were:

  1. Is there an inherent moral difference between conventional statistical methods and deep learning when it comes to predicting personal information such as age?
  2. Since the deep learning model was already trained on information (in this case public information), isn’t the prediction process considered public information too?
  3. Why are we (rough generalization, sorry) so much more sensitive to information when it counts down rather than up? E.g., people responded to this experiment a lot “better” when it was showing the predicted age than when it was showing the remaining years (life expectancy minus the age).

Creating the experiment

We started out by searching for a CoreML implementation of the AgeNet model. Thanks to this amazing repo, we were able to form a working demo of the machine learning prediction functionality.

From that point we designed the graphics around it, so that it feels like a face mesh being displaced based on how long you still have to live.
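To make the mapping concrete, here is a rough Python sketch of how a predicted age could drive the mesh displacement. The life-expectancy numbers below are placeholder values for illustration only (the project referenced the Social Security calculator for real figures), and the nearest-key lookup is just one simple way to approximate a table.

```python
# Illustrative life-expectancy lookup -- these numbers are rough
# placeholders, NOT actual Social Security data.
LIFE_EXPECTANCY = {0: 79, 20: 60, 40: 41, 60: 23, 80: 9}

def remaining_years(predicted_age):
    """Map a predicted age to estimated years left, via the nearest table key."""
    key = min(LIFE_EXPECTANCY, key=lambda k: abs(k - predicted_age))
    return max(LIFE_EXPECTANCY[key] - (predicted_age - key), 0)

def displacement(predicted_age, max_offset=1.0):
    """Scale the mesh displacement: fewer years left -> larger displacement."""
    years = remaining_years(predicted_age)
    return max_offset * (1 - years / LIFE_EXPECTANCY[0])
```

The displacement value would then feed whatever vertex offset the rendering side uses, so the face visibly "erodes" as the remaining years shrink.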

Demo

To summarize this experiment we created a small video that demonstrates the app and its usage. Thanks to Scott Reitherman for the amazing track.

Con’text’ with Twitter AR

For this experiment I chose to focus on spatializing information in Augmented Reality. The idea of spatializing information is not new in any sense, and dates back perhaps to the invention of signage (or arguably even earlier examples). That being said, it seems that advancements in the accessibility of Augmented Reality consumption models, predominantly the release of Apple’s ARKit and Google’s ARCore, call for different approaches and models when spatializing information, or, to be precise, when drawing digital information in physical space. Given our shared interest in the subject, I collaborated with Anastasis Germanidis to produce a speculative experiment using Twitter data in Augmented Reality.

Spatializing information in AR feels much more like cave glyphs than street signs: it is graphic, associative, and story-driven.

 

Why AR?

During the past couple of years I have been experimenting with VR quite a lot. Through creating fantasy-driven, narrative, and documentary VR experiences, the feeling of ‘mimicking life’ has always struck me as an impossible goal when designing these experiences. The paradigm shift that AR suggests is that at the core of the experience, you are the focus (iPod, iPhone, iLife). As content ‘interacts’ with your environment in a place of your choosing, we become numb to our ‘spidey-sense’ for separating fiction from non-fiction and buy fully into more hybrid experiences. A good analogy is that ‘realistic’ VR experiences feel like mockumentary films, while AR feels more like documentary (to me!).

 

Twitter in Augmented Reality

First off, in order to contextualize the real world to the digital world, we need a bridge that lets us understand some (very little, but still) of the thought process we go through between seeing things and thinking about ideas (yes, this ties perfectly into Peirce’s theory of signs and semiotics in general). To do that, we started by looking into another of Apple’s new and upcoming innovations, CoreML. At its essence, CoreML is an optimized engine for running pretrained machine learning models on iDevices. Apple also released quite a few pretrained models themselves, and so, given our desire to classify objects from the real world, we decided to use the Inception v3 model, which is trained to detect objects in images and classify them into 1,000 categories.

*We also found this example to be super useful when starting an ARKit/CoreML project.
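The classification itself runs on-device through CoreML, but “classify into 1,000 categories” boils down to a softmax over the model’s raw scores followed by picking the top labels. Here is a small Python sketch with a toy four-class example (the labels and scores are made up for illustration):

```python
import math

def top_labels(logits, labels, k=3):
    """Softmax over raw classifier scores, return the k most likely labels."""
    m = max(logits)                             # subtract max for numerical stability
    exp = [math.exp(x - m) for x in logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    ranked = sorted(zip(labels, probs), key=lambda lp: lp[1], reverse=True)
    return ranked[:k]

# Toy 4-class example standing in for Inception v3's 1,000 categories.
labels = ["prison", "fence", "coffee pot", "espresso maker"]
logits = [2.5, 2.1, 0.3, 0.1]
best = top_labels(logits, labels, k=2)
```

The top label is what the rest of the pipeline treats as “what the user is looking at”.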

The art of association

Even though the machine learning model worked better than both of us had anticipated, it is nothing like how our brain operates (sometimes I really am happy I took media studies in film school). That said, since our brain is such a phenomenal ‘associative computing engine’, we are able to bridge the gap with our own context of the scenario even when the machine learning classification is wrong. Which raises the question: what is wrong?

An early version of the experiment that shows the classification categories

 

From index to tweet

Once we got the machine learning apparatus running, it was time to fetch some data based on it. We hooked up to the Twitter API using a Swift library and started parsing tweets. By adding some filters to the parsing process, we were able to get to a decent point where the tweets are closely related to the classification category.
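The actual filtering lived in the Swift client, but the logic was roughly what this Python sketch shows: keep tweets that mention the classification label, drop retweets and link-heavy posts. The example tweets below are invented for illustration.

```python
def relevant_tweets(tweets, label):
    """Keep tweets that mention the classification label, skipping
    retweets and link-heavy posts -- a rough stand-in for the filters
    we applied in the client."""
    results = []
    for text in tweets:
        lowered = text.lower()
        if lowered.startswith("rt @"):
            continue  # drop retweets
        if "http" in lowered:
            continue  # drop link spam
        if label.lower() in lowered:
            results.append(text)
    return results

tweets = [
    "RT @someone: prison reform now",
    "You are the prisoner, the prison and the prison keeper.",
    "check this out http://spam.example",
    "Nothing to see here.",
]
matches = relevant_tweets(tweets, "prison")
```

Even a crude substring match like this got us surprisingly usable tweets once combined with the classifier’s label.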

Disturbing people on the floor in AR

Once we had all the rather technical parts in place we started sketching a design that would work in delivering the message of this experiment.

We wanted it to feel natural, but also disruptive. 

We added the profile picture of each tweet inside a sphere, roughly located next to the tweet, and used Twitter’s color palette to color the text and the username.

the ICON, the INDEX and the SYMBOL.

 

Where does the magic happen?

Personally, I found small moments of magic when it almost felt like machine learning and augmented reality extended my perceptual senses and brought the emotional impact of objects to the conscious surface. Wow, that was not very descriptive, right? Perhaps an example would help: when I was looking at a fence in the subway station, the classification algorithm predicted I was looking at a prison. Since we hid the classification from the user, the app simply pulled the following tweet:

“You are the prisoner, the prison and the prison keeper. Only you hold the key to your freedom” – Ricky Mathieson

A moment of magic

Another example of this magic occurred when I was looking at the coffee machine (a thing I spend quite a lot of time doing every day):

“Still life with coffee pot” – Man Ray (1962)

Which refers to the following painting by Man Ray:

Still Life with Coffee Pot – Man Ray

 

Enough with the talking

To illustrate how this works, we made a video of using this experiment throughout one morning.

Self Portrait – ‘Mean Emotion’

The idea of a self-portrait is a rather challenging one. It holds the premise of conveying the ‘DNA’ of an artist, but more importantly to me, it tells the story of someone during the period of time when the portrait was taken or made. Portraits are therefore inherently a document of a period of time that no longer exists for someone, and on the shoulders of this understanding, I decided to pursue ‘Mean Emotion’.

The story

I moved to New York City a little over a year ago to attend ITP. Before coming to NYC, I had lived in Israel with my partner, Katia, for (at that time) over five years. When we got the message informing us of my acceptance into ITP, we both knew what it meant. The reason behind this mutual understanding is that Katia was still attending school in Israel, and so we decided we would live apart but do everything we could to support and maintain the relationship. Fast-forward one year: as I sat down thinking about the portrait, I realized that one of the tools that helped us through this crazy period of time is our ability to express emotion through the selfies we send each other.

Data-set collection from WhatsApp

The project

Recently I have begun digging deeper into data science and machine learning, with an emphasis on graphics and imagery. One thing I found myself doing repeatedly is aligning and averaging data sets to be able to clearly see the variances and deviations that a so-called ‘learning’ model could potentially pick up. And so I decided to use the aforementioned selfie data to try and average a ‘year’s worth of emotions’. The data set came to 230 images taken from August 2016 to August 2017.
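The averaging step itself is simple once the faces are aligned: stack the images, take a per-pixel mean in floating point, and convert back to 8-bit. A minimal numpy sketch, with two tiny synthetic “images” standing in for the real selfies:

```python
import numpy as np

def mean_image(images):
    """Average a list of aligned images (H x W x 3 uint8 arrays) into one.

    Accumulate in float to avoid uint8 overflow, then round back to 8-bit.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return np.clip(stack.mean(axis=0).round(), 0, 255).astype(np.uint8)

# Two tiny 2x2 "images" in place of the 230 aligned selfies.
a = np.full((2, 2, 3), 100, dtype=np.uint8)
b = np.full((2, 2, 3), 200, dtype=np.uint8)
avg = mean_image([a, b])
```

The float accumulation matters: summing hundreds of uint8 frames directly would silently wrap around at 255.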

I collected all the images while paying special attention to WhatsApp’s naming convention, which stores the date and time in the file name itself. From that point I started building a face aligner that would parse the images and align them in 3D, which makes for much better face matches. I began by examining Leon Eckert’s ‘Facemesh Workshop’ GitHub repo and then built my own FaceMesh class in Python. The project requires Python 3+, OpenCV, dlib, and numpy, all of which have very good installation guides online. With dlib in place, you can use a pretrained facial-landmarks .dat file, which gives very good out-of-the-box results for detecting faces. Another great resource was PyImageSearch’s facial tracking tutorial.
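For the filename parsing, my WhatsApp images were named along the lines of IMG-YYYYMMDD-WA####.jpg; if your export uses a different convention, the regex below needs adjusting. A small Python sketch that extracts the embedded date and sorts the files chronologically:

```python
import re
from datetime import date

# Assumed WhatsApp convention, e.g. "IMG-20160812-WA0003.jpg".
PATTERN = re.compile(r"IMG-(\d{4})(\d{2})(\d{2})-WA\d+")

def taken_on(filename):
    """Extract the date embedded in a WhatsApp image filename, or None."""
    m = PATTERN.search(filename)
    if m is None:
        return None
    year, month, day = map(int, m.groups())
    return date(year, month, day)

files = ["IMG-20170803-WA0001.jpg", "IMG-20160812-WA0003.jpg", "notes.txt"]
dated = sorted((f for f in files if taken_on(f)), key=taken_on)
```

The same parsed dates also make it trivial to bucket the selfies by month later on.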

The console application running

The GitHub repository for the project and the tool can be found here.

After some coding and testing, I was able to run the script and start aligning faces against a reference image. My plan was to run the app and have it output a print of the entire year, 8/2016 – 8/2017.

The failure

After getting the code to work, and spending hours on alignment, resolution, and compression (haaaaaaa), I was able to produce a good mean image from the entire data set. Feeling uplifted, I decided to go to Laguardia Studio and print the portrait on high-quality paper using their museum-grade inkjet printer. Upon getting there, and after a brief ‘up-to-speed’ tutorial by the staff, I realized my image didn’t meet the printer’s DPI requirement, and I sat down to redo the part of the code responsible for that. Unfortunately, after I adjusted it, the program took 3 hours to run on my computer, and by then Laguardia Studio had already closed.

Lesson learned: “measure twice and cut once”.

Shooting a printer I never got to print anything on (meta)

 

The result

Here is the printed result for 8/2016 – 8/2017: 228 images shot during the year, averaged into a single image.

I also took the time to run the averaging process by month, creating 12 more images, each containing the selfies sent during that month.

I also created a speculative video of how the piece could be displayed: