## Voronoi Diagrams of “Starry Night”

After getting very excited about Voronoi diagrams and this post, I took a break from PHP to make what I think is essentially a custom image-compression algorithm.  I’ve applied it to Van Gogh’s famous painting “Starry Night” at varying levels.  The program, written in Processing, looks at each pixel’s neighbors and, if the colors are similar enough, sets the neighbor’s value to that of the tested pixel.  The images above range from a maximum allowed color difference of 250 (very high compression/low similarity) down to 25, stepping down in increments of 25.
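For the curious, here is a minimal Java sketch of the similarity test described above (the original is a Processing sketch; the packed-RGB layout and the sum-of-channel-differences metric are my assumptions about how “similar enough” is measured):

```java
// Hypothetical sketch of the neighbor-similarity test described above.
public class ColorQuantize {
    // Sum of absolute RGB channel differences between two packed 0xRRGGBB colors.
    static int colorDifference(int c1, int c2) {
        int dr = Math.abs(((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF));
        int dg = Math.abs(((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF));
        int db = Math.abs((c1 & 0xFF) - (c2 & 0xFF));
        return dr + dg + db;
    }

    // If the neighbor is within the threshold, it takes the tested pixel's color;
    // otherwise it keeps its own.
    static int mergeNeighbor(int pixel, int neighbor, int threshold) {
        return colorDifference(pixel, neighbor) <= threshold ? pixel : neighbor;
    }
}
```

Lower thresholds merge only near-identical neighbors; at 250 large swaths of the painting collapse into single colors.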

## 3D Random “Pi Walk”

Random walk based on the decimal expansion of pi (from the previous post), but this time in 3D space. Created using Processing and OpenGL.

Click on images for full-size.

## Random “Pi Walk”

While John Venn is best known for the Venn diagram, Alex Bellos mentions Venn’s other invention in his quite-good book “Here’s Looking at Euclid” (page 231).  Venn was the first to create a “random walk” or “drunk walk”.  Using the decimal expansion of pi, each digit is treated as a compass direction.  I’ve updated Venn’s experiment slightly (his ignored the digits 8 and 9) – each digit from 0-9 rotates the direction of movement by a multiple of 36º and takes a 20-pixel step forward.
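The rule can be sketched in plain Java (the original is a Processing sketch; treating the rotation as cumulative, with each digit turning the current heading by digit × 36º, is my reading of the description above):

```java
// Hypothetical sketch of the pi-walk stepping rule described above.
public class PiWalk {
    static final double STEP = 20.0;  // pixels per step

    // State: position (x, y) and current heading in degrees.
    double x = 0, y = 0, heading = 0;

    // Each digit rotates the heading by digit * 36 degrees, then steps forward.
    void step(int digit) {
        heading += digit * 36.0;
        double a = Math.toRadians(heading);
        x += STEP * Math.cos(a);
        y += STEP * Math.sin(a);
    }

    // Walk an entire string of decimal digits, e.g. "14159265358979...".
    void walk(String digits) {
        for (char c : digits.toCharArray()) step(c - '0');
    }
}
```

A digit of 0 continues straight ahead, while a 5 reverses direction exactly.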

The above image is the first 1120 decimal places of pi, starting at the gray dot.  Created using Processing.

## New York Times Timelapse Video

A really fantastic project (even if it was initially accidental) by Phillip Mendonça-Vieira: “Due to an errant cron task that ran twice an hour from September 2010 to July 2011, I accidentally collected about 12,000 screenshots of the front page of the nytimes.com”.  I find I’m most interested in the small quirks in the resulting time-lapse video, such as the wiggling date/last-updated text below the masthead, and the fact that some ads stay longer than others.

Also really nice is Phillip’s post about how the piece was compiled into a video.  As someone who ends up turning a lot of stills into video (usually accomplished very slowly in Final Cut), I find his suggestion of ffmpeg an interesting one.
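For reference, an ffmpeg invocation along those lines might look like this (the filename pattern, frame rate, and output settings here are my assumptions, not Phillip’s exact command):

```shell
# Assemble numbered stills (frame0001.png, frame0002.png, ...) into an H.264 video.
# Adjust -framerate and the input pattern to match your own captures.
ffmpeg -framerate 24 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p timelapse.mp4
```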

## Further Text Interpolation Experiments

Some more wrangling in Processing, and some new results from experimenting with interpolating texts.  The problem with the previous tests was that if the files weren’t exactly the same length, the remaining characters were simply dumped at the end of the resulting file.  While character-accurate, that isn’t really an interpolation.  Instead, this new version finds the ratio between the number of characters in the two texts.  For example:

File one = 351,155 characters
File two = 194,138 characters

This makes the ratio between the two files roughly 2:1.  The code reads two characters from the longer file, interpolates them, and then interpolates that result against a single character from the shorter file.  Two examples are below, using Shakespeare’s sonnets #51 and #117.
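A minimal Java sketch of this 2:1 ratio-interpolation (the original is in Processing; averaging character codes is my assumption about what “interpolate” means here):

```java
// Hypothetical sketch of the ratio-interpolation described above.
public class TextInterpolate {
    // Interpolate two characters by averaging their code points.
    static char mix(char a, char b) {
        return (char) ((a + b) / 2);
    }

    // 2:1 merge: consume two characters of the longer text per character
    // of the shorter text, pair-averaging first, then mixing across texts.
    static String interpolate(String longer, String shorter) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < shorter.length() && 2 * i + 1 < longer.length(); i++) {
            char pair = mix(longer.charAt(2 * i), longer.charAt(2 * i + 1));
            out.append(mix(pair, shorter.charAt(i)));
        }
        return out.toString();
    }
}
```

The output is as long as the shorter text, so nothing gets dumped at the end.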

The sonnets interpolated using a ratio-interpolation:

The text above, run through Microsoft Word’s spell-check:

## Text as Number

I wrote a simple Processing sketch that returns the binary values of the characters in a split file (one letter per line), with “0.” prepended in front – this essentially turns the whole text into a single, very precise number that holds all the information of the text.  Above is Shakespeare’s 51st sonnet stored as a number.

Based on a thought in Gary William Flake’s “The Computational Beauty of Nature” (pg 21).  As Flake describes it, “Now take a long number and put a zero and a decimal point in front of it.  We’ve just translated one huge number into a rational number between 0 and 1.  By placing this single point at exactly the right spot on the number line, we can store an unlimited amount of information.”

Especially interesting, I think, is the idea that rather than the sonnet being translated to a number between 0 and some giant value (say 100 trillion, etc), the number lies only between 0 and 1 but occupies a very specific point on the number line.  The resulting number is unique; no other text has that exact value.
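The encoding can be sketched in a few lines of Java (the original is a Processing sketch; the fixed 8-bit width per character is my assumption, and it only holds for ASCII text):

```java
// Hypothetical sketch of the text-to-number encoding described above.
public class TextAsNumber {
    // Concatenate each character's 8-bit binary value behind "0.",
    // yielding one number in (0, 1) that encodes the whole text.
    // Assumes ASCII input: every character must fit in 8 bits.
    static String encode(String text) {
        StringBuilder sb = new StringBuilder("0.");
        for (char c : text.toCharArray()) {
            // Left-pad to 8 bits so every character occupies a fixed width.
            String bits = Integer.toBinaryString(c);
            sb.append("0".repeat(8 - bits.length())).append(bits);
        }
        return sb.toString();
    }
}
```

Because each character gets a fixed-width slot, the original text can be recovered exactly by reading the digits back eight at a time.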

Source code:

## Super-Abstract Software

There is a risk, however, in aestheticising computation, which should be obvious given the historical lessons that tie Futurist enthusiasm for a machine aesthetic to Fascist politics.  It is far too easy to slip from what appears to be a critical exploration of the aesthetic possibilities of computation to the capitulation to, if not a celebration of, the mechanisms of domination practiced by global capital: the massive and rapid transfer and manipulation of data as capital and capital as data by digital means.  In the end, the risk is unavoidable since a reflection on software is crucial exactly to the degree it serves as a tool for domination.  Software must have its politics, its aesthetics, its poetics, and its criticism.

If there is a chance that software will contribute significantly to a new politically relevant aesthetics, it lies in the way software shows us a way out of order, in and through order.  It engages the tensions between possibility and constraint.  Software gives us not objects, but instances – occasions for experience.  We see our own embeddedness in networks of abstraction, structuration, and system making, and in seeing, find ways of inhabiting this situation of constraint as if it were possibility.  Software can create systems of production that present us with the generation of endless variation within programmatic limitations.  When freed from its instrumentalist telos, it is possible for software to exist solely on its own terms: it stages its own abstraction and serves nothing save its own play, display, and critique, that of abstraction itself.  If it is possible for software to exist solely on its own terms then it may become Super Abstract.

Via: “Super-Abstract: Software Art and a Redefinition of Abstraction” by Brad Borevitz (from read_me: Software Art & Cultures, page 298)

## Brownian Triangles – Animation

One more for today (best seen in HD on Vimeo)…

## 864 newspaper front pages – June 27, 2011

The front pages from 864 newspapers on June 27, 2011 – culled from the Newseum in Washington DC – with all pixels sorted by RGB values and recombined into an image.

Click on image for full-resolution file (approx. 12MB).
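The sort itself is simple; here is a minimal Java sketch of it (treating each packed 0xRRGGBB value as a single integer key is one plausible reading of “sorted by RGB values” – the actual sort order used may differ):

```java
import java.util.Arrays;

// Hypothetical sketch of the pixel sort described above.
public class PixelSort {
    // Sort packed 0xRRGGBB pixels in ascending numeric order, which orders
    // primarily by red, then green, then blue.
    static int[] sortPixels(int[] pixels) {
        int[] sorted = Arrays.copyOf(pixels, pixels.length);
        Arrays.sort(sorted);
        return sorted;
    }
}
```

Applied to all 864 front pages at once, the sorted pixels form smooth bands of color when written back into a single image.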