How do you get insight from a giant pile of documents when you can't read them all? I develop natural language processing and machine learning methods to analyze social questions in text corpora, such as news or social media. For example, I've analyzed Twitter to understand how new slang spreads between cities, and how textual sentiment corresponds to public opinion polls. Other applications include detecting censorship in Chinese microblogs and extracting international relations events from news text. Statistical language patterns can give insight into underlying social variables (text as measurement); or, they can reveal the socially embedded process of language generation. I am interested in the wide variety of linguistic, computational, and statistical methodologies necessary to tackle these questions -- Bayesian inference, optimization, probabilistic graphical models, syntactic parsing, sentiment analysis, crowdsourcing, and more. One current project is a tool for interactive text exploration and visualization (a prototype is available at http://brenocon.com/mte/); I'm looking for new collaborations, which will inform how best to develop this and other methods going forward.