School for poetic computation

In fall 2014, I was part of the second School for Poetic Computation in New York.

15 awesome developers/artists working together in one room for 10 weeks + a handful of awesome and inspiring teachers + visiting artists to share their work and ideas.

One of my favorite classes was Ramsey Nasser's Radical Computing class, focused on how text is translated into a programming language, including writing our own abstract syntax trees to create our own languages. Another favorite was Zach Lieberman's course. My top take-away from that class was that a lot of new media art boils down to what data you use as an input, how you transform it, and how you express that data as an output in interesting ways. Tega Brain helped us contextualize our work with an overview of the history of new media art.

Here are two of the projects I did along the way.

Gif of the dancing line drawing tool.

dancing drawings

As a student at the 10-week School for Poetic Computation, I learned about animation techniques from artist and technologist Zach Lieberman. Inspired by his work, I built an animated sketchpad online using D3.
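
The post doesn't go into how the sketchpad worked, so the snippet below is only a guess at one way to get a "dancing" effect with D3; every name in it is illustrative, and the original tool was likely built differently. The idea: record the points of each stroke as it's drawn, then redraw every stroke each frame with a small time-varying offset so the lines wobble.

// Illustrative sketch only, not the original project's code.
const svg = d3.select("body").append("svg")
    .attr("width", 600)
    .attr("height", 400);

const strokes = [];      // each stroke is an array of {x, y} points
let current = null;      // stroke currently being drawn

svg.on("mousedown", () => {
    current = [];
    strokes.push(current);
  })
  .on("mousemove", (event) => {
    if (!current) return;
    const [x, y] = d3.pointer(event);
    current.push({ x, y });
  })
  .on("mouseup", () => { current = null; });

const line = d3.line()
    .x(d => d.x)
    .y(d => d.y)
    .curve(d3.curveBasis);

// Redraw every frame, nudging each point a little so the lines "dance".
d3.timer((elapsed) => {
  svg.selectAll("path")
    .data(strokes)
    .join("path")
      .attr("fill", "none")
      .attr("stroke", "black")
      .attr("d", stroke => line(stroke.map((p, i) => ({
        x: p.x + 2 * Math.sin(elapsed / 150 + i),
        y: p.y + 2 * Math.cos(elapsed / 150 + i)
      }))));
});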

Final version of the project, shown as a gif. This version takes in video from the computer's camera, shown on the left, and reorders the incoming pixels based on the brightness of pixels in an image of a flower to create the image on the right.

The first version simply takes in a flat black-and-white image on the left and reorders its pixels by brightness to output the image on the right.

Mid-process version: takes in images from the computer's camera and outputs them on the right after sorting their pixels by brightness.

Same pixels, different picture

My final SFPC project, which I shared at the December SFPC open house, was about playing with how a set of pixels could paint a totally different picture when rearranged. The gif illustrates this (minus compression artifacts).

Every pixel on the left side shows up somewhere on the right side, with the same exact color. You might not believe it when you see how different the images look, but it’s guaranteed in the code.

On the right, I first simply ordered the pixels in arcs radiating from the upper-left corner. In other versions I used a seed image to determine where each pixel lands: the colors from the left image are arranged according to the brightness of the corresponding pixels in the seed image, so the left image's colors sort-of paint the right image.
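
To make that seed-image idea concrete, here is a rough sketch in plain JavaScript with canvas; the function and variable names are mine for illustration, and the original project's code may have been structured quite differently. Sort the source pixels and the seed pixels by brightness, then place the k-th darkest source color at the position of the k-th darkest seed pixel, so every source color is reused exactly once.

// Rough sketch, not the original code: repaint the source image's pixels
// in the brightness order of a seed image, keeping every color exactly once.
function repaintBySeed(sourceCtx, seedCtx, outCtx, width, height) {
  const src  = sourceCtx.getImageData(0, 0, width, height).data;
  const seed = seedCtx.getImageData(0, 0, width, height).data;

  // Approximate brightness of the pixel starting at byte offset i (RGBA data).
  const brightness = (data, i) => data[i] + data[i + 1] + data[i + 2];

  // Every pixel index, sorted from darkest to brightest, for each image.
  const n = width * height;
  const byBrightness = (data) =>
    Array.from({ length: n }, (_, i) => i)
      .sort((a, b) => brightness(data, 4 * a) - brightness(data, 4 * b));
  const srcOrder  = byBrightness(src);
  const seedOrder = byBrightness(seed);

  // The k-th darkest source color lands where the k-th darkest seed pixel
  // sits, so the output contains exactly the same set of colors as the source.
  const out = outCtx.createImageData(width, height);
  for (let k = 0; k < n; k++) {
    const from = 4 * srcOrder[k];
    const to   = 4 * seedOrder[k];
    for (let c = 0; c < 4; c++) out.data[to + c] = src[from + c];
  }
  outCtx.putImageData(out, 0, 0);
}

Feeding camera frames in as the source while keeping the seed image fixed gives the live effect shown in the gifs above.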

While the raw “data” (the set of colored pixels) is exactly the same on the left and the right, it takes on a completely different meaning when reordered. What's more, the reordering changes our perception of the colors, so it no longer even appears that both images contain the same information. Holding colorful objects in front of the camera and watching the right image react is what convinces us that, yes, the pixels really are the same: we see the same set of hues appear or disappear in both.

On a more poetic level, because of context we seem to see the world as more colorful than it “really” is. Which, perhaps, is a nice thing after all.

Read more here.