On Friday, hidden away in the West Village, one block from the Hudson, the very necessary School for Poetic Computation showcased a quickly growing branch of digital literature: bot poetry. The event had been organized at the last minute by Allison Parrish to make up for Darius Kazemi’s official bot summit, which was put off until next year.
Parrish is herself an author of code poetry, including one of the most iconic and influential bots out there: @everyword, which, over a period of 7 years, tweeted every word in the English language, and has now made it into (e)book form. In the event description, she gave a short definition of what she understands bots to be:
“For the purposes of this meetup, a bot is an automated agent (like, say, a computer program) that makes content on the Internet.”
But even this innocent working definition was undermined over the course of the evening. What separates a bot from other generative literature became increasingly unclear, and that question ran through the evening like a common thread.
That “content” rather than “text” is an important word in Parrish’s definition was aptly demonstrated by the first entry, Casey Kolderup’s @OminousZoom. This Twitter bot takes a stock photo and, in a series of four steps, zooms in on a detail it recognizes as a face. The effect can be funny, as in the dramatic-zoom meme (minus the punch line of contagious grimaces). The most interesting results, however, were those in which the algorithm failed to find a face, producing an unresolved tension in the progression of details: significance is assigned to structures that are, to us, insignificant. What is left is nothing but the sense of significance itself – the bot sees something, but we cannot know what, or why.
— Ominous Zoom (@OminousZoom) October 22, 2015
Calling into question Parrish’s term “automated agent,” another bot presented was human-triggered: building on Kenneth Goldsmith’s quip that “the new memoir is our browser history,” Patrick Steadman’s Twitter account simply tweets everything he types into Google. He scrolled through the feed while speaking, and the list of search strings included the SFPC address; baseball scores; McDonald’s restaurants in Chelsea; programming inquiries; news on Edward Snowden and Chelsea Manning; and Augustinian musings such as:
If God Knows Our Free Will Choices, Do We Still Have Free Will …
— Patrick Steadman (@ptsteadman) October 22, 2015
It is not hard to see why this bot doesn’t have a clever handle but simply bears the name of its inventor: this is Patrick Steadman. And yet I kept wondering how much self-censoring had already happened before a search was performed and a tweet went out. The piece might be less about biography in the digital age, or the psychology of surveillance, than about a type of self-scrutiny and self-control more reminiscent of Hawthorne’s guilt-ridden Salem Puritans, for whom sin begins in thought, thought is transparent to God, and thus thought itself must be censored.
The first expressly literary bot was Ross Goodwin’s MeterMap. Again, it defied Parrish’s definition, since it doesn’t make “content on the Internet.” Rather, MeterMap is a Python script that works on two corpora of input text: the first, a poem, is analyzed for meter and rhyme; the script then uses the resulting structure to craft a new poem from the second corpus, drawing on the sentences that fit each of the original lines. Goodwin even added command-line switches for sentiment analysis, to produce more positive- or more negative-sounding poems. Mapping William Faulkner onto e. e. cummings, the result looks like this:
This time as ‘too shocking.’
Paced by the turning outraged faces,
It’s quite chilly this morning,
Wind-gnawed face and bleak,
I know I damn well hate you.
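MeterMap’s actual analysis works on meter and rhyme; a minimal sketch of the same line-matching idea can get by with a crude vowel-group syllable count standing in for real phonetic analysis. All names here are illustrative, not Goodwin’s code:

```python
import random
import re

def syllables(word):
    # Crude syllable estimate: count vowel groups. A stand-in for
    # MeterMap's real meter and rhyme analysis.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def line_syllables(line):
    return sum(syllables(w) for w in re.findall(r"[a-zA-Z']+", line))

def meter_map(template_poem, corpus_sentences, seed=None):
    # For each line of the template poem, pick a corpus sentence
    # whose (estimated) syllable count matches that line's.
    rng = random.Random(seed)
    out = []
    for line in template_poem:
        target = line_syllables(line)
        candidates = [s for s in corpus_sentences
                      if line_syllables(s) == target]
        out.append(rng.choice(candidates) if candidates else "")
    return out
```

With a real pronunciation dictionary in place of `syllables`, the same loop could match stress patterns and rhyme endings rather than bare counts.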
Nick Montfort, MIT Trope Tank director and code poet, straddled the on/offline question in the definition of botness: his @one_algorithm operates as a Twitter bot permuting a set of phrases. One example:
It is not easy to be an algorithm. I just picked some strings for syntactical arrangements. I am an algorithm.
— An Algorithm (@one_algorithm) October 24, 2015
The set of possible phrases can be written like this:
“It is (not/sometimes) (easy/fun/neat) to be an algorithm. I just (picked/determined/selected) some strings for (syntactical/lexical/verbal) (fluids/arrangements/configurations). I am an algorithm.”
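The slotted phrase set above is straightforward to operationalize; a small sketch (the names are mine, not Montfort’s code) that can either enumerate the whole grammar or sample one tweet from it:

```python
import itertools
import random

# The slotted template quoted above, with each (a/b/c) choice as a slot.
TEMPLATE = ("It is {} {} to be an algorithm. I just {} some strings "
            "for {} {}. I am an algorithm.")
SLOTS = [
    ["not", "sometimes"],
    ["easy", "fun", "neat"],
    ["picked", "determined", "selected"],
    ["syntactical", "lexical", "verbal"],
    ["fluids", "arrangements", "configurations"],
]

def all_variants():
    # Every sentence the grammar can produce: 2*3*3*3*3 = 162 variants.
    return [TEMPLATE.format(*combo) for combo in itertools.product(*SLOTS)]

def random_tweet(rng=random):
    # One random permutation, as the bot would tweet it.
    return TEMPLATE.format(*(rng.choice(options) for options in SLOTS))
```

The tweet quoted above is one of the 162 sentences this grammar generates.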
Montfort also presented a second, offline permutational poem, whose output includes:
The indigent turns to the librarian.
She surrenders to him.
The more powerful one triumphs.
For Montfort, the point of this poem (which, again, is permutational) is neither the generative nor the medial aspect, but the hermeneutical work it provokes: Consisting of mini-stories, the piece reveals the interpreter’s own assumptions about power relations, and inferences about gender (who surrenders to whom? who is “she” or “he” here?).
Jia Zhang’s project applied this focus on interpretation to data analysis. Zhang is part of MIT’s Social Computing Group and co-founder of the website youarehere.cc, which visualizes sociopolitically relevant data by turning it into maps. If this topographical macro perspective works by abstracting from the individual, Zhang also wanted to find a way to re-personalize what is hidden in the data heaps. Using information from the US census, she wrote the Twitter bot @censusamericans, which gives the statistics a face again: a Python script takes each of the 15 million data rows of the long-form census and turns it into a first-person sentence (it will take a couple of thousand years to finish its task). By picking the least likely features of each demographic, the bot highlights the unique in the average. The result is a stream of micro-autobiographies that can have a devastating effect on the reader. Zhang mentioned noticing a pattern in the retweets: the most crushing lines were also the most shared. “How unique you are is how unfortunate you are.”
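The pipeline described in the talk – select a record’s rarest features, then render them as a first-person sentence – can be sketched as follows. The field names here are invented stand-ins, not the actual long-form census schema:

```python
def rarest_features(row, freq, k=2):
    # Keep only the k least common (field, value) pairs, echoing the bot's
    # trick of foregrounding what is statistically unusual about a person.
    # freq maps (field, value) tuples to how often they occur.
    return dict(sorted(row.items(),
                       key=lambda kv: freq.get(kv, float("inf")))[:k])

def sentence_for(row):
    # Render one microdata record as a first-person sentence.
    # Keys like "age" and "job" are illustrative, not real census fields.
    bits = []
    if "age" in row:
        bits.append(f"I am {row['age']} years old")
    if "job" in row:
        bits.append(f"I work as a {row['job']}")
    if "commute_minutes" in row:
        bits.append(f"I commute {row['commute_minutes']} minutes to work")
    if not bits:
        return "I was counted."
    if len(bits) > 1:
        return ", ".join(bits[:-1]) + " and " + bits[-1] + "."
    return bits[0] + "."
```

Run over millions of rows, a loop like this yields exactly the stream of micro-autobiographies the bot produces.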
The evening ended with Leonard Richardson’s talk on the taxonomy of Twitter bots. Contrasting additive with subtractive techniques (or, as he called them, painting bots vs. carving bots), Richardson laid out what he takes to be the most fundamental modes of bot production: either content is produced through combination, addition, and juxtaposition – the ideal type being Ranjit Bhatnagar’s @pentametron, which retweets two rhyming lines of iambic pentameter – or through cropping, subtraction, and reduction – illustrated by Richardson’s own @hapaxhegemon, which tweets words occurring only once in the Project Gutenberg corpus. The mixed use of these techniques results in collage, and most bots fall into this category (although one could argue that there is no addition without a preceding choice, and thus subtraction, which renders the distinction somewhat moot).
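The “carving” side of the taxonomy is simple to sketch: reduce a corpus to its hapax legomena, the words that occur exactly once, in the spirit of @hapaxhegemon. A toy over a string rather than the Gutenberg corpus, with my own naming:

```python
import re
from collections import Counter

def hapaxes(text):
    # Count every word in the corpus, then "carve" away everything
    # that appears more than once, leaving only the hapax legomena.
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    return sorted(word for word, n in counts.items() if n == 1)
```

A “painting” bot would instead run this in reverse: gather found material and juxtapose pieces that happen to fit together, as @pentametron does with rhyming pentameter lines.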
Here, finally, the attempts at defining what bots are imploded for good. The taxonomy of addition, subtraction, and their combination can be applied to all digital literature, even to all art. The “grammar of bots” Richardson held out – the idea that, as with the empty spots in the periodic table, missing bot types could be derived analytically – suffers from the fact that it abstracts from its subjects to such a degree that it becomes accidental. When asked how he defined “bot,” his answer – “pulling a lever” and getting a result – was appropriately followed up by the reply that this is basically the definition of conceptual art. That code and concept are related is nothing new (think of Casey Reas and Benjamin Fry’s implementation of Sol LeWitt drawings for Processing); the question would be how they differ specifically, and that was answered neither by Richardson nor by the otherwise very interesting evening.
(For an almost exhaustive list of bots, consult the BotDB.)