You are looking at archived content from my "Bookworm" blog, an experiment that ran from 2014 to 2016. Some of it may no longer work. For current posts, see here.
Jan 18 2016

Andrew Piper announced yesterday that the McGill text lab is releasing their corpus of modern novels in three languages. One of my first thoughts with any new corpus is: what existing Bookworm methods might add some value here? It only took about ten minutes to write the code to import it into a bookworm; the challenge is figuring out how methods developed for millions of books can be useful on a set of just 450.

Dec 14 2015

A first pass at understanding the potential of the Hansard corpus through a Bookworm browser.

Dec 09 2015

Nov 19 2015

Oct 29 2015

My last post provided a general introduction to the new word embedding models (WEMs), and introduced an R package for easily performing basic operations on them. It was geared mostly towards people in the Digital Humanities community. This post looks more closely at a single word2vec model I've trained, on about 14 million reviews of faculty members from ratemyprofessors.com.1 The point of this one is to provide a more concrete exploration of how these models can help us think about gendered language. I hope it will be interesting even to people who aren't interested in training a machine learning model themselves; there's code in here, but it's freely skippable.
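For readers who want to poke at a model like this themselves, the basic query pattern is short. (A minimal sketch, assuming the wordVectors package and a pre-trained model file; the file name here is a placeholder.)

library(wordVectors)

# Load a trained word2vec model from disk (placeholder path).
model = read.vectors("rate_my_professor_vectors.bin")

# Words closest to the "he"-minus-"she" direction: one rough way of
# asking which parts of the vocabulary skew male in the reviews.
nearest_to(model, model[["he"]] - model[["she"]], 20)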

Oct 24 2015

Recent advances in vector-space representations of vocabularies have created an extremely interesting set of opportunities for digital humanists. These models, known collectively as word embedding models, may hold nearly as many possibilities for digital humanists modeling texts as do topic models. Yet although they're gaining some headway, they remain far less used than other methods (such as modeling a text as a network of words based on co-occurrence) that have considerably less flexibility. "As useful as topic modeling" is a large claim, given that topic models are used so widely. DHers use topic models because it seems at least possible that each individual topic can offer a useful operationalization of some basic and real element of humanities vocabulary: topics (Blei), themes (Jockers), or discourses (Underwood/Rhody).1 The word embedding models offer something slightly more abstract, but equally compelling: a spatial analogy to relationships between words. WEMs (a blanket abbreviation I'm making up for this post to cover the two major methods)2 take an entire corpus, and try to encode the various relations between words into a spatial analogue.
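To make the spatial analogy concrete: if every word is assigned a point in a high-dimensional space, the relation between two words can be operationalized as the cosine of the angle between their vectors. Here's a toy illustration in base R, with made-up three-dimensional vectors (real models learn hundreds of dimensions from a corpus):

# Three words as made-up points in a 3-dimensional space.
words = rbind(
  king  = c(0.9, 0.8, 0.1),
  queen = c(0.9, 0.1, 0.8),
  apple = c(0.1, 0.0, 0.9)
)

# Cosine similarity: the standard measure of closeness in these spaces.
cosine = function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))

cosine(words["king", ], words["queen", ])  # relatively close
cosine(words["king", ], words["apple", ])  # much less similar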

Oct 19 2015

There's no full description of the D3 bookworm package yet, because it's still something of a moving target.

Sep 17 2015

Bookworm 0.4 is now released on GitHub. It contains a number of improvements to the code from over the summer. Based on the experience of the many people using it so far, it makes the existing code much, much more sensible for anyone wanting to build a bookworm on their own collection of texts. All the stages (installation, configuration, and testing) are now a lot easier. So if you have a collection of texts you wish to explore, I welcome you to test it out. (I'll explain at more length later, but for the absolute lowest investment of time you can just run a prebuilt bookworm virtual machine using Vagrant.)

Sep 13 2015

This post is just kind of playing around in code rather than making any particular argument. It shows the outlines of using the features stored in a Bookworm for all sorts of machine learning, by testing how well a logistic regression classifier can predict IMDB genre based on the subtitles of television episodes.

Jon Fitz Fitzgerald was asking me for an example of training a genre classifier on textual data. To reduce the dimensionality of the model, we have been thinking of using topics from a topic model, rather than the tokens themselves, as the classifier's features. The idea is that classifiers with more than several dozen variables tend to get finicky and hard to interpret, and with more than a few hundred become completely unmanageable. If you want to classify texts based on their vocabularies, you have two choices:

  1. Only use some of the words as classifiers. This is the normal approach, used from Mosteller and Wallace on the Federalist Papers through to Ted Underwood's work on classifying genre in books.

  2. Aggregate the words somehow.1 The best way, from an information-theoretic point of view, is to use the first several principal components of the term-document matrix as your aggregators (see the sketch below). This is hard, though, because principal-component vectors are hard to interpret, and on very large corpora (for which the term-document matrix doesn't fit in memory) they are somewhat tedious to calculate.
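For what option 2 looks like in practice, here's a minimal sketch on a tiny, made-up term-document matrix (a real corpus would need a sparse or out-of-core approach):

# Made-up term-document matrix: 20 documents (rows) by 10 word counts (columns).
set.seed(1)
tdm = matrix(rpois(200, lambda = 2), nrow = 20,
             dimnames = list(NULL, paste0("word", 1:10)))

# The first few principal components compress the whole vocabulary
# into a handful of aggregate features for a classifier.
pca = prcomp(tdm)
features = pca$x[, 1:3]
dim(features)  # 20 documents by 3 aggregate features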

Using topic models as classifiers is somewhat appealing. They should be worse at classification than principal components, but they should also be readable like words, to some degree. I haven't seen it done that much, because there are some obvious problems: topic models are time-consuming to fit, and they usually throw out stopwords, which tend to be extremely successful at classification problems. That's where a system like Bookworm, which will just add in a one-size-fits-all topic model with a single command, can help; it lets you try loading in a pre-computed model to see what works.

So this post just walks through some of the problems with genre classification in a corpus of 44,000 television episodes and a pre-fit topic model. I don't compare it directly to existing methods, in large part because it quickly becomes clear that IMDB genre is such a flexible thing that it's all but impossible to assess whether a classifier is working on anything but a subjective level. But I do include all of the code for anyone who wants to try fitting something else.

Code, descriptions, and charts

Note: all the code below assumes the libraries dplyr, bookworm, and tidyr are loaded.
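Concretely (the movies data frame is assumed to come back from the Bookworm API in "long" format, one row per episode-topic pair):

library(dplyr)
library(tidyr)
library(bookworm)

# `movies` is assumed to be a long data frame with one row per
# episode-topic pair: columns TV_show, season, episode, primary_genre,
# topic_label, and WordsPerMillion.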

First we make the data wide (columns as topic labels). That gives us 127 topics across 44,258 episodes of television, each tagged with a genre by IMDB.

wide = movies %>% spread(topic_label, WordsPerMillion, fill = 0)

Now we'll train a model. We're going to do logistic regression (in R, a glm with family=binomial), but I'll define a more general function that can take an svm or something more exotic for testing.

# Our feature set is a matrix of the topic columns: drop the categorical
# variables and a junk `0` column that gets introduced somewhere upstream.
modeling_matrix = wide %>% select(-TV_show, -primary_genre, -season, -episode, -`0`) %>% as.matrix
# Randomly assign each episode to the training or the test set.
training = sample(c(TRUE, FALSE), nrow(modeling_matrix), replace = TRUE)

dim(modeling_matrix)
## [1] 44258   127
training_frame = data.frame(modeling_matrix[training,])
training_frame$match = NA  # placeholder; filled in per genre below
build_model = function(genre, model_function = glm, ...) {
  # genre is a string indicating one of the primary_genre fields;
  # model_function is something like "glm" or "svm";
  # ... are further arguments passed to that function.
  training_frame$match = as.numeric(wide$primary_genre == genre)[training]
  # We model against a matrix: the columns are the topics, which we get
  # by dropping out the other four elements.
  model = model_function(match ~ ., training_frame, ...)
  model
}
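To fit a single genre and score the held-out episodes, the usage looks like this (a sketch; "Comedy" is just an example label):

# Logistic regression for one genre, then predicted probabilities
# for the held-out (test) rows.
comedy_model = build_model("Comedy", glm, family = binomial)
test_data = data.frame(modeling_matrix[!training, ])
comedy_probs = predict(comedy_model, newdata = test_data, type = "response")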

Here's a plot of the top genres. I'll model on the first ten, because there's a nice break before game show, reality, and fantasy.

library(ggplot2)

wide %>% filter(training) %>%
  group_by(primary_genre) %>%
  summarize(episodes = n()) %>%
  mutate(rank = rank(-episodes)) %>%
  arrange(rank) %>%
  ggplot() +
  geom_bar(aes(y = episodes, x = reorder(primary_genre, episodes), fill = rank <= 7), stat = "identity") +
  coord_flip() +
  labs(title = "most common genres, by number of episodes in training set")
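Fitting one model per genre is then just a loop over the top labels (a sketch, using the first ten per the note above; each glm takes a little while on 127 topics):

# The ten most common genres in the training set.
genre_counts = wide %>% filter(training) %>% count(primary_genre) %>% arrange(desc(n))
top_genres = head(genre_counts$primary_genre, 10)

# One logistic regression per genre.
models = lapply(top_genres, build_model, family = binomial)
names(models) = top_genres
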
Jul 01 2015

I just saw Matt Wilkens's talk at the Digital Humanities conference on places mentioned in books; I wanted to put up, mostly for him, a quick stab at running the equivalents on the raw data in my movie bookworm.

May 21 2015

This is a quick post to share some ideas for interacting with the data underlying the recent article by Ted Underwood and Jordan Sellers on the pace of change in literary standards for poetry.

May 10 2015

Here are some interactives I've made in preparation for my talk at the Literary Lab at Stanford on Tuesday, on plot arcs in television shows based on their underlying language.

May 07 2015

Even if you think you don't know Usenet, you probably do. It's the Cambrian explosion of the modern Internet, among the first places that an online culture emerged, but modern enough that it can seamlessly blend into the contemporary web. (I was recently trying to work out through Google where I might buy a clavichord in Boston; my hopes were briefly raised about one particular seller until I realized that the modern-looking Google Groups page I was reading was actually a presentation of a discussion from the Usenet archives in 1992.)

Apr 20 2015

Just a day after launching this blog (the RSS feed, by the way, is now up here) I came across a perfect little example question to look at. The Guardian ran an article about appearance in teaching evaluations that touches on some issues that my Rate My Professor Bookworm can answer, with a few new interactive charts.

Apr 19 2015

Though more and more outside groups are starting to adopt Bookworm for their own projects, I haven't yet written quite as much as I'd like about how it should work. This blog is an attempt to rectify that, and to begin to explain how a combination of blogging software, interactive textual visualizations, and an exploratory data analysis API for bag-of-words models can make it possible to quickly and usefully share texts through a Bookworm installation.