UPDATE: Tech problems with the site, so it’s not working quite right. Consider this a glimpse into what is going to launch in just a few days.
This month, the curatorial collaborative project Drift Station, which I’m a part of along with Angeles Cossio, released an online project titled Empty Apartments. We pulled nearly 125,000 photographs of apartments and houses for rent on Craigslist that were completely empty because of a removal service, and presented them as an interactive online exhibition. The project took nearly two years of work, and much of it was manual (Angeles triple-checking every single image by hand to remove ones that included common spaces or non-apartments), but we also used several automated processes and machine learning to sort the photos.
Continue reading “Empty Apartments: Technical Notes”
About 130,000 images of apartments, organized by similarity with PCA dimensionality reduction.
For an upcoming Drift Station project, we’ve been considering how to curatorially sort a massive number of images (about 100k) for presentation. Chronological? Random order? Some other logical scheme? But a more computational approach seemed to make sense: some way of parsing the images that took into account a variety of visual factors in each image, something that would be impossible to do manually.
Neural networks are the obvious answer here, and so I found some very helpful sample code from Gene Kogan and Kyle McDonald, and wrote some Python and Processing code that loads up a folder of images and extracts a vector representation from them. Then, using t-SNE and Rasterfairy, the images were organized into a 2D grid.
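The actual pipeline uses Gene Kogan’s and Kyle McDonald’s sample code, which isn’t reproduced here. As a rough sketch of the same idea: random vectors stand in for the CNN feature vectors extracted from each image, scikit-learn’s TSNE does the 2D reduction, and a simple greedy nearest-cell assignment stands in for the far more careful grid-snapping that RasterFairy performs. All sizes and parameters below are placeholder values.

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for CNN feature vectors: in the real pipeline, each row would be
# a vector extracted from one image by a pretrained network.
rng = np.random.default_rng(0)
features = rng.normal(size=(64, 512))  # 64 "images", 512-dim features

# Reduce to 2D with t-SNE (perplexity must be smaller than the sample count)
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)

# Greedily snap the scattered 2D points onto an 8x8 grid -- a crude stand-in
# for what RasterFairy does properly.
side = 8
cells = np.array([(x, y) for y in range(side) for x in range(side)], dtype=float)

# Normalize the embedding into grid coordinates
norm = (embedding - embedding.min(axis=0)) / np.ptp(embedding, axis=0) * (side - 1)

assignment = np.full(len(norm), -1)
taken = np.zeros(len(cells), dtype=bool)
for i in range(len(norm)):
    d = np.linalg.norm(cells - norm[i], axis=1)
    d[taken] = np.inf           # each grid cell holds exactly one image
    j = int(np.argmin(d))
    assignment[i] = j
    taken[j] = True
```

Each image `i` would then be drawn at grid cell `cells[assignment[i]]`, giving the even grid seen in the images above.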
I’ve spent the last few days playing with settings in the code, and found there is an interesting balance to be struck between locally preserving color similarity and object similarity. (Note: this post is more of a quick note than a deep-dive analysis.)
Above: a version with blurred images, showing a pretty clear separation by color with fairly smooth transitions. Click on images for a higher-res version. Continue reading “Arranging By Color And Objects With t-SNE”
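The blurred version above pushes the arrangement toward color similarity by destroying the fine detail that object features depend on. A minimal Pillow sketch of that preprocessing step (the radius is a made-up value, not the one used for the images above):

```python
from PIL import Image, ImageFilter

def blur_for_color(img: Image.Image, radius: float = 12.0) -> Image.Image:
    """Heavily blur an image so fine object detail is destroyed and
    mostly the overall color distribution survives."""
    return img.convert("RGB").filter(ImageFilter.GaussianBlur(radius))
```

Feeding blurred copies (rather than the originals) to the feature extractor biases the resulting t-SNE layout toward color rather than object content.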
Too real, looks just like our cat.
Word2Vec is cool. So is t-SNE. But trying to figure out how to train a model and reduce the vector space can feel really, really complicated. While working on a sprint-residency at Bell Labs, Cambridge last fall, which has morphed into a project where live wind data blows a text through Word2Vec space, I wrote a set of Python scripts to make using these tools easier.
This tutorial is not meant to cover the ins-and-outs of how Word2Vec and t-SNE work, or about machine learning more generally. Instead, it walks you through the basics of how to train a model and reduce its vector space so you can move on and make cool stuff with it. (If you do make something awesome from this tutorial, please let me know!)
Above: a Word2Vec model trained on a large language dataset, showing the telltale swirls and blobs from the tsne reduction.
Continue reading “Using Word2Vec and TSNE”
Some WIP for an upcoming performance: 1,047 syllables from H.G. Wells’ The Time Machine fed into Word2Vec space, then reduced from 50 dimensions to two. View a much larger version here.
A detail of one of the syllable swirls.
🎄Spending Christmas Eve with the family, training neural nets and researching nano-scale machining (which turned up this sperm-bot above). #merrychristmas 🎄
Some progress, trying to caption images using sentences from novels. This first step is working OK; next will be to train a neural network to do this automatically from my captioned set, then output new captions algorithmically.
Part of a short-term collaboration with the Social Dynamics group at Bell Labs, Cambridge.
Update! For El Capitan and users of newer versions of OS X, you may run into issues installing Torch or Lua packages. A fix is included now.
Update number two! Zach in the comments offers a really helpful fix if you’re on Sierra.
There have been many recent examples of neural networks making interesting content after the algorithm has been fed input data and “learned” about it. Many of these, Google’s Deep Dream being the most well-covered, use and generate images, but what about text? This tutorial will show you how to install Torch-rnn, a set of recurrent neural network tools for character-based (i.e. single-letter) learning and output – it’s written by Justin Johnson, who deserves a huge “thanks!” for this tool.
The details about how all this works are complex and quite technical, but in short we train our neural network character-by-character, instead of with words like a Markov chain might. It learns what letters are most likely to come after others, and the text is generated the same way. One might think this would output random character soup, but the results are startlingly coherent, even more so than more traditional Markov output.
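For contrast with what the RNN learns, here is a minimal sketch of the character-level Markov baseline mentioned above: count which character follows each short context, then sample one character at a time. A char-rnn learns a far richer version of this same next-character distribution; the context length and corpus below are placeholder values.

```python
import random
from collections import defaultdict, Counter

def train_char_model(text, order=3):
    """Count which character follows each `order`-length context."""
    counts = defaultdict(Counter)
    for i in range(len(text) - order):
        counts[text[i:i + order]][text[i + order]] += 1
    return counts

def generate(counts, seed, length=200, rng=random.Random(0)):
    """Emit text one character at a time, sampling from the counts
    for the most recent `order` characters."""
    order = len(seed)
    out = seed
    for _ in range(length):
        ctx = out[-order:]
        if ctx not in counts:
            break  # dead end: this context never appeared in training
        chars, weights = zip(*counts[ctx].items())
        out += rng.choices(chars, weights=weights)[0]
    return out
```

Even this crude model produces word-like output on a small corpus; the RNN’s advantage is that its “context” isn’t a fixed window but a learned hidden state, which is where the startling coherence comes from.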
Torch-rnn is built on Torch, a set of scientific computing tools for the programming language Lua, which lets us take advantage of the GPU, using CUDA or OpenCL to accelerate the training process. Training can take a very long time, especially with large data sets, so the GPU acceleration is a big plus.
Continue reading “Torch-rnn: Mac Install”