
Con’text’ with Twitter AR

For this assignment I chose to focus on spatializing information in Augmented Reality. The idea of spatializing information is not new in any sense and dates back perhaps to the invention of signage (or arguably even earlier). That said, recent advances in the accessibility of Augmented Reality, most notably the release of Apple’s ARKit and Google’s ARCore, call for different approaches and models when spatializing information, or to be precise, when drawing digital information in physical space. Given our shared interest in the subject, I collaborated with Anastasis Germanidis to produce a speculative experiment using Twitter data in Augmented Reality.

Spatializing information in AR feels much more like cave glyphs than street signs: graphic, associative and story-driven.


Why AR?

During the past couple of years I have been experimenting with VR quite a lot. Across fantasy-driven, narrative and documentary VR experiences, ‘mimicking life’ has always struck me as an impossible goal when designing them. The paradigm shift that AR suggests is that, at the core of the experience, you are the focus (iPod, iPhone, iLife). As content ‘interacts’ with your environment in a place of your choosing, our ‘spidey-sense’ for telling fiction from non-fiction goes numb and we buy fully into more hybrid experiences. A good analogy is that ‘realistic’ VR experiences feel like mockumentary films, while AR feels more like documentary (to me!).


Twitter in Augmented Reality

First off, in order to connect the real world to the digital one, we need a bridge that captures at least some (very little, but still) of the thought process we go through between seeing things and thinking about ideas (yes, this ties perfectly into Peirce’s theory of signs and semiotics in general). To do that, we started by looking into another of Apple’s new and upcoming technologies, CoreML. At its essence, CoreML is an optimized engine for running pre-trained machine learning models on iDevices. Apple also released quite a few pre-trained models, so given our desire to classify objects in the real world, we decided to use the Inception v3 model, which is trained to detect objects in images and classify them into 1,000 categories.

*We also found this example to be super useful when starting an ARKit/CoreML project
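To make that classification step concrete, here is a rough sketch of running Inception v3 on a single camera frame. The project itself used Apple’s CoreML build of the model inside a Swift/ARKit app; the Python/Keras version below only illustrates the same idea, and the file name is a placeholder.

```python
# Rough sketch of the classification step using Keras' ImageNet-trained Inception v3
# (the actual project ran Apple's CoreML Inception v3 model on-device in Swift).
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")  # 1,000 ImageNet categories

def classify(path, top=3):
    # Inception v3 expects 299x299 RGB input.
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    # Each prediction is a (class_id, label, probability) tuple.
    return decode_predictions(preds, top=top)[0]

print(classify("camera_frame.jpg"))  # placeholder path for a captured frame
```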

The art of association

Even though the machine learning model worked better than either of us had anticipated, it is nothing like how our brains operate (sometimes I really am happy I took media studies in film school). Since our brain is such a phenomenal ‘associative computing engine’, we are able to bridge the gap with our own context of the scenario even when the machine learning classification is wrong. Which raises the question: what is wrong?

An early version of the experiment that shows the classification categories


From index to tweet

Once we got the machine learning apparatus running, it was time to get some data based on it. We hooked up to the Twitter API using a Swift library and started parsing tweets. By adding some filters to the parsing process, we got to a decent point where the fetched tweets were closely related to the classification category.
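As a hedged illustration of that query-and-filter flow (the project itself used a Swift Twitter library inside the ARKit app), a minimal Python/tweepy sketch might look like the following; the bearer token and filters are assumptions, not the project’s actual settings.

```python
# Hypothetical sketch: fetch recent tweets related to a classification label.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

def tweets_for_label(label, limit=10):
    # Search recent tweets mentioning the label, skipping retweets and
    # non-English results to keep the overlaid text readable.
    query = f'"{label}" -is:retweet lang:en'
    response = client.search_recent_tweets(query=query, max_results=limit)
    return [tweet.text for tweet in (response.data or [])]

for text in tweets_for_label("prison"):
    print(text)
```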

Disturbing people on the floor in AR

Once we had all of the rather technical parts in place, we started sketching a design that would deliver the message of this experiment.

We wanted it to feel natural, but also disruptive. 

We presented each tweet’s profile picture inside a sphere floating roughly next to the tweet, and used Twitter’s color palette to color the text and the username.

the ICON, the INDEX and the SYMBOL.


Where does the magic happen?

Personally, I found small moments of magic when machine learning and augmented reality almost felt like they extended my perceptual senses and brought the emotional impact of objects up to the conscious surface. Wow, that was not very descriptive, right? Perhaps an example would help: when I looked at a fence in the subway station, the classification algorithm predicted I was looking at a prison. Since we kept the user from seeing what the machine learning model classifies, the app simply pulled the following tweet:

“You are the prisoner, the prison and the prison keeper. Only you hold the key to your freedom” – Ricky Mathieson

A moment of magic

Another example of this magic occurred when looking at the coffee machine (a thing I spend quite a lot of time doing every day):

“Still life with coffee pot” – Man Ray (1962)

Which refers to the following Man Ray painting:

Still Life with Coffee Pot – Man Ray


Enough with the talking

To illustrate how this works, we made a video of using the experiment throughout one morning:

Self Portrait – ‘Mean Emotion’

The idea of a self-portrait is a rather challenging one. It holds the premise of conveying the ‘DNA’ of an artist, but more importantly to me, it tells the story of someone during the period of time in which the portrait was taken or made. Portraits are therefore inherently a document of a period that no longer exists for someone, and on the shoulders of this understanding, I decided to pursue ‘Mean Emotion’.

The story

I moved to New York City a little over a year ago to attend ITP. Before coming to NYC, I had lived in Israel with my partner, Katia, for (at that point) over five years. When we got the message informing us about my acceptance into ITP, we both knew what it meant. The reason behind this mutual understanding is that Katia was still attending school in Israel, so we decided we would split up geographically but do everything we could to support and maintain the relationship. Fast forward one year: as I sat down to think about the portrait, I realized that one of the tools that helped us through this crazy period was our ability to express emotion through the selfies we send each other.

Data-set collection from WhatsApp

The project

Recently I have begun digging deeper into data science and machine learning, with an emphasis on graphics and imagery. One thing I found myself doing repeatedly is aligning and averaging data sets in order to clearly see the variances and deviations that a so-called ‘learning’ model could potentially pick up. So I decided to use the aforementioned selfie data to try and average a ‘year’s worth of emotions’. The data set came to be 230 images taken from August 2016 to August 2017.
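To make the averaging step concrete, here is a minimal sketch (not the project’s actual code, which lives in the repository linked below): stack the aligned images as floating-point arrays and take the per-pixel mean. The folder name is a placeholder.

```python
# Minimal sketch of per-pixel averaging over an already-aligned image set.
# Assumes every image shares the same dimensions after alignment.
import glob
import cv2
import numpy as np

paths = sorted(glob.glob("aligned/*.jpg"))  # hypothetical folder of aligned selfies
stack = np.stack([cv2.imread(p).astype(np.float64) for p in paths])

mean_image = stack.mean(axis=0).astype(np.uint8)  # the 'mean emotion'
cv2.imwrite("mean_emotion.jpg", mean_image)
```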

I collected all the images while paying special attention to WhatsApp’s naming convention, which stores the date and time in the file name itself. From that point I started building a face aligner that could parse the images and align them in 3D, which makes for much better face matches. I started by examining Leon Eckert’s ‘Facemesh Workshop’ GitHub repo and then built my own FaceMesh class in Python. The project requires Python 3+, OpenCV, dlib and numpy, all of which have very good installation guides online. With dlib in place, you can use a pre-trained facial-landmarks .dat file, which gives very good out-of-the-box results for detecting faces. Another great resource was PyImageSearch’s facial tracking tutorial.
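For illustration, here is a simplified version of that alignment step: dlib’s 68-point landmark model plus a 2D similarity transform, rather than the full 3D alignment the FaceMesh class performs. The file names are placeholders, and this is a sketch rather than the project’s actual code.

```python
# Simplified face alignment: detect landmarks with dlib, then warp each face
# onto a reference face using a similarity transform estimated from the landmarks.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def face_landmarks(img):
    """Return the 68 landmarks of the first detected face, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

def align_to(img, ref_points, size):
    """Warp img so its landmarks line up with the reference landmarks."""
    points = face_landmarks(img)
    if points is None:
        return None
    # Estimate rotation + scale + translation between the two landmark sets.
    matrix, _ = cv2.estimateAffinePartial2D(points, ref_points)
    return cv2.warpAffine(img, matrix, size)

reference = cv2.imread("reference_selfie.jpg")  # placeholder reference image
ref_points = face_landmarks(reference)
aligned = align_to(cv2.imread("IMG-20170312-WA0007.jpg"), ref_points,
                   (reference.shape[1], reference.shape[0]))
```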

The console application running

The GitHub repository for the project and the tool can be found here.

After some coding and testing I was able to run the script and start aligning faces to a reference image. My plan was to run the app and have it output a print of the entire year, 8/2016 – 8/2017.

The failure

After getting the code to work, and spending hours on alignment, resolution and compression (haaaaaaa), I was able to produce a good mean image from the whole data set. Feeling uplifted, I decided to go to LaGuardia Studio and print the portrait on high-quality paper using their museum-grade inkjet printer. Upon getting there, and after a brief ‘up-to-speed’ tutorial by the staff, I realized my image didn’t meet the printer’s DPI requirement and sat down to redo the part of the code that handled resolution. Unfortunately, after adjusting it, the program took three hours to run on my computer, and by then LaGuardia Studio had already closed.
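For reference, a quick way to sanity-check resolution before heading to the printer is to compute the pixel dimensions a print requires at the printer’s DPI. The numbers below are purely illustrative; the post doesn’t state the actual print size or DPI requirement.

```python
# Back-of-the-envelope check: does the source image have enough pixels
# for a given print size at the printer's required DPI?
def required_pixels(width_in, height_in, dpi=300):
    return int(width_in * dpi), int(height_in * dpi)

print(required_pixels(16, 20))  # -> (4800, 6000) pixels for a 16x20" print at 300 DPI
```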

Lesson learned: “measure twice and cut once”

Shooting a printer I never got to print anything on (meta)


The result

Here is the printed result of 8/2016 – 8/2017: 228 images shot during the year, averaged into a single image.

I also took the time to sort the averaging process by month, creating 12 more images, each averaging the selfies sent during that month.
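Grouping by month falls out of WhatsApp’s naming convention, which embeds the date in each file name (e.g. IMG-20170312-WA0007.jpg). Here is a minimal sketch of that grouping, assuming that convention and a placeholder folder:

```python
# Group WhatsApp selfies by month using the date embedded in the file name,
# e.g. IMG-20170312-WA0007.jpg -> "2017-03" (IMG-YYYYMMDD-WA####.jpg convention).
import glob
import os
import re
from collections import defaultdict

by_month = defaultdict(list)
for path in glob.glob("selfies/IMG-*-WA*.jpg"):  # placeholder folder
    match = re.search(r"IMG-(\d{4})(\d{2})\d{2}-WA\d+", os.path.basename(path))
    if match:
        by_month[f"{match.group(1)}-{match.group(2)}"].append(path)

for month, paths in sorted(by_month.items()):
    print(month, len(paths))  # each month's list can then be aligned and averaged as above
```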

I also created a speculative video of how the piece could be displayed: