Word2Vec is cool. So is tsne. But trying to figure out how to train a model and reduce the vector space can feel really, really complicated. While working on a sprint-residency at Bell Labs, Cambridge last fall (which has since morphed into a project where live wind data blows a text through Word2Vec space), I wrote a set of Python scripts to make these tools easier to use.
This tutorial is not meant to cover the ins-and-outs of how Word2Vec and tsne work, or about machine learning more generally. Instead, it walks you through the basics of how to train a model and reduce its vector space so you can move on and make cool stuff with it. (If you do make something awesome from this tutorial, please let me know!)
Above: a Word2Vec model trained on a large language dataset, showing the telltale swirls and blobs from the tsne reduction.
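To give a taste of what you end up with after training, the core object is just a mapping from words to vectors, and "similar" words point in similar directions. The sketch below uses tiny made-up vectors purely for illustration; in the real scripts, a library like gensim learns vectors with hundreds of dimensions from a large text corpus, and tsne squashes them down to 2D for plotting.

```python
import math

# Toy "model": word -> vector. A trained Word2Vec model gives you
# exactly this kind of mapping, just with learned, high-dimensional vectors.
# These three vectors are invented for demonstration only.
model = {
    "wind":   [0.9, 0.1, 0.3],
    "storm":  [0.8, 0.2, 0.4],
    "pencil": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / mag

# Related words should score higher than unrelated ones.
print("wind~storm: ", round(cosine(model["wind"], model["storm"]), 2))
print("wind~pencil:", round(cosine(model["wind"], model["pencil"]), 2))
```

Once you have real vectors, the tsne step is just a dimensionality reduction over this same table, which is what produces the swirls and blobs in the image above.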
Continue reading “Using Word2Vec and TSNE”
Update! For users of El Capitan and newer versions of OS X, you may run into issues installing Torch or Lua packages. A fix is now included.
There have been many recent examples of neural networks making interesting content after the algorithm has been fed input data and “learned” about it. Many of these, Google’s Deep Dream being the most well-covered, use and generate images, but what about text? This tutorial will show you how to install Torch-rnn, a set of recurrent neural network tools for character-based (i.e. single-letter) learning and output. It’s written by Justin Johnson, who deserves a huge “thanks!” for this tool.
The details about how all this works are complex and quite technical, but in short we train our neural network character-by-character, instead of with words like a Markov chain might. It learns what letters are most likely to come after others, and the text is generated the same way. One might think this would output random character soup, but the results are startlingly coherent, even more so than more traditional Markov output.
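To make the character-by-character idea concrete, here is a bare-bones Python sketch of the simpler, Markov-style version of that process: count which character tends to follow which, then walk those probabilities to generate new text. Torch-rnn replaces the count table with a recurrent network, which is what buys the extra coherence; this snippet is just for intuition.

```python
import random
from collections import defaultdict

def train_char_model(text):
    """For each character, record which characters follow it (with repeats,
    so sampling naturally respects the observed frequencies)."""
    follows = defaultdict(list)
    for a, b in zip(text, text[1:]):
        follows[a].append(b)
    return follows

def generate(follows, seed, length=30):
    """Generate text one character at a time, always sampling a
    likely successor of the last character emitted."""
    out = seed
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out += random.choice(candidates)
    return out

model = train_char_model("the theory then was that the thing thrived")
print(generate(model, "th"))
```

Even this toy version produces vaguely word-shaped output; a character-level RNN learns much longer-range structure than a single preceding letter, which is why its results read so much better.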
Torch-rnn is built on Torch, a set of scientific computing tools for the programming language Lua, which lets us take advantage of the GPU, using CUDA or OpenCL to accelerate the training process. Training can take a very long time, especially with large data sets, so the GPU acceleration is a big plus.
Continue reading “Torch-rnn: Mac Install”
There are a few things I always do when starting a project: make a NotesAndIdeas.txt and a Readme.md file. But opening a blank text file and saving it to the right location is a pain. To the rescue: a Finder keyboard shortcut.
1. WRITE A LITTLE SCRIPT
We’ll use an AppleScript to create the files. This requires two parts: getting the current directory and creating the file using a bash script. In the notes file, I’m adding a header to the top of the file, but you could add any text you want. Newlines must be escaped with two backslash characters:
tell application "Finder"
	select the front Finder window
	set targetFolder to insertion location as alias
	set folderPath to POSIX path of targetFolder
	set makeNoteFile to "echo '\\nNOTES && IDEAS:\\n' >> " & quoted form of folderPath & "/NotesAndIdeas.txt"
	do shell script makeNoteFile
	-- create the (empty) readme the same way
	do shell script "touch " & quoted form of folderPath & "/Readme.md"
end tell
I save my scripts to a folder called Hacks so I can tweak them later, if necessary.
Continue reading “Tutorial: Create A Readme-File Finder Shortcut”
Almost all my bots have been written in Python, but I’ve been meaning to try Node.js for more interactive bots for some time. Daniel Shiffman’s excellent new tutorials were enough to get me jump-started, and I created @BotSuggestion, a bot whose only activity is following accounts suggested by Twitter, slowly conforming to their algorithm.
I run all my bots on a Raspberry Pi under my desk (see my tutorial on how to get that set up), but getting an ongoing Node server running took a little more work.
Continue reading “Tutorial: Node on Raspberry Pi (for Bots)”
The corner of my studio where my mini mill sits is definitely under-lit. When I got my mill, I first installed a cheap IKEA gooseneck LED lamp, which worked pretty well but was often in the way. So I built an LED ring light for the mill, which gives broad and even light, moves with the cutting tool, and is super low-profile.
Continue reading “Mini Mill Ring Light”
The HackRF One is a very nice software-defined radio (SDR). Though a good bit more expensive than other SDR hardware, it is very well made and Michael Ossmann of Great Scott Gadgets has put together an extensive set of free video tutorials. Of course, those only help if you have everything set up correctly to begin with.
It appears that most SDR work is done through Linux, which makes sense: SDR is classic hardware/software hacking. But for a Mac user, I found it somewhat difficult to get started. This short tutorial will hopefully help kickstart that process for you!
Continue reading “SDR/HackRF One: Mac Setup and Basics”
While services like OSH Park let you upload your Eagle CAD files directly for PCB manufacture, most other services, especially for production runs, require the industry-standard Gerber format. Gerbers are essentially a set of text files, one for each part of the board (ex: top copper, bottom silkscreen, bottom solder-mask, etc), and generating them in the right format can be a bit tricky.
This tutorial walks you through this process, with a specific example of sending files to Seeed Studio’s excellent PCB service (no financial stake here – just like their service!). However, you could use these directions for most any fab house.
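For reference, the set of files a fab house typically expects looks something like the list below. These are the common Protel-style extensions; double-check the naming your particular fab asks for, since conventions vary slightly from house to house.

```
board.GTL  - top copper
board.GBL  - bottom copper
board.GTS  - top solder mask
board.GBS  - bottom solder mask
board.GTO  - top silkscreen
board.GBO  - bottom silkscreen
board.GKO  - board outline
board.TXT  - drill file (Excellon format)
```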
Special thanks to Luca Dentella’s post, which helped me figure out this process.
Continue reading “Exporting Gerber Files From Eagle CAD”
I recently bought a cheap VU meter on Amazon, which looks very cool but needs some circuitry to get running. Unlike vintage meters, which can be driven by the audio signal directly, newer (and especially cheap) meters require DC current. A simple circuit, based on this example by Rod Elliott, uses four diodes to convert the AC audio signal into DC, plus a resistor and capacitor to dampen the movement of the needle.
See Rod’s post for lots more technical detail and a more complex driver circuit. Of course, this is pretty lo-fi and not studio-quality equipment… it also didn’t cost $1000.
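If you want to tune how sluggish the needle is, the damping comes down to the RC time constant of that resistor/capacitor pair: a bigger time constant means a slower, smoother needle. A quick back-of-the-envelope calculation, using made-up example values rather than the ones from Rod’s schematic:

```python
# The needle's response time is governed by tau = R * C.
# These component values are illustrative only.
R = 10_000    # resistance in ohms (10k)
C = 47e-6     # capacitance in farads (47 uF)

tau = R * C   # time constant in seconds
print(f"time constant: {tau:.2f} s")
```

Swapping in a larger capacitor (or resistor) slows the needle's swing proportionally, so it's an easy knob to experiment with.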
Continue reading “Simple VU Meter Circuit”
I recently bought a small aquarium air pump, not really realizing that “pump” in this case meant blowing air, not sucking. A quick search turned up almost no information online, but converting it into a vacuum pump turned out to be quite easy.
Why might you need a vacuum pump? I’m using it for picking up swarf while cutting vinyl records, but this would also be useful for getting rid of bubbles in mold-making or casting!
I’m using this pump which I got for $13 on Amazon. It’s very small and fairly quiet. To convert it to a vacuum pump, remove the four screws on the back. Inside is a small transformer and two arms. These arms have a magnet at each end, which is pulled by the transformer to activate their rubber bellows. Fortunately, to switch the pump, we just have to reverse the air inlet/outlets, so that the pump sucks air in through the tube instead of pushing it out.
Pry up the plastic bellows assembly with a small screwdriver, as shown above.
The inlet/outlet are above, sealed with a rubber O-ring. Pull the assembly out carefully and spin it 90º. Then just push it back into place. Be careful that the O-rings are seated properly, as they seem to want to pop out.
Close up the pump! You may need to seal it using caulk or hot glue if you’re not getting good suction, but mine seemed pretty good without.
Finally, attach the tubes. For more suction, the two tubes can be combined using a T-connector (my pump came with one).
Nothing earth-shattering, but hopefully this helps someone with a similar need!
FFMPEG is one of those tools I use when I just want to quickly hack together a video and don’t need fancy things like editing, titles, or a user-interface. Compiling on a regular computer isn’t easy, but compiling for the Raspberry Pi takes a little more patience and care. I also wanted to include support for H264 video, which needs to be installed before compiling FFMPEG.
There are lots of examples on the web, but what worked for me was a combination of a few of them, so here’s what I did. Note that many of these commands may need sudo prepended to run correctly.
- INSTALL H264 SUPPORT
Run the following commands, one at a time.
git clone git://git.videolan.org/x264
cd x264
./configure --host=arm-unknown-linux-gnueabi --enable-static --disable-opencl
make
sudo make install
- INSTALL OTHER LIBRARIES/FORMATS
Anything else you would like to install should be done now, before compiling FFMPEG. This includes MP3, AAC, etc.
- INSTALL FFMPEG
Add flags similar to --enable-libx264 for anything else you installed above. This may take a REALLY long time, so be patient.
git clone https://github.com/FFmpeg/FFmpeg.git
cd FFmpeg
sudo ./configure --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
make
sudo make install
As Malcshour notes, if you have a Model B+ you can use make -j4 instead of just make to take advantage of all four cores!
DONE! TEST IT
To test your new install, simply run the command ffmpeg. If you don’t get any errors, you’re all good. What did I do with my new tool? I built a bot that generates random guitar chords.
A FEW MORE RESOURCES