This week’s issue explores the terrain of digital mapping (and “unmapping”), the economics and legislation of open educational resources (OER), and the kinds of humanities data analysis that have been around for quite a bit longer than the term “DH” or the vogue of big data.
Feel free to add comments or questions below!
Mapping uncertainty at UCLA
For the last few weeks, we’ve been pretty busy here at UCLA, hosting a National Endowment for the Humanities-sponsored institute for digital cultural mapping. We’re using the term “digital cultural mapping” to describe the process of using (or reinventing) maps so that they make humanistic arguments.
Doing this raises an interesting set of problems. Maps have evolved to be compact, authoritative, semiotically efficient devices. There’s something about a map that seems transparent, as though it has a special relationship to reality. And yet we humanists want to make arguments that are, well, inefficient: complicated, reflexive, sometimes backtracking, sometimes nebulous. So should we make illegible maps? Alter mapping conventions? Jettison maps entirely?
As is the case at all good digital humanities events, we spent some time philosophizing and some time building. Each of the 12 participants came equipped with a project, selected because we thought it was tricky, interesting, and potentially illuminating. Annie Danis, for example, is documenting a 1937 archaeological investigation, and obviously place is important to her argument. But she also can’t reveal the exact location of culturally sensitive artifacts. The solution? Experimenting with ways of injecting uncertainty into objects’ locations: everything from enforced zooming-out to transposing one site to another.
It was really exciting, too, when the Scholars’ Lab released Neatline, a tool for mapping objects in space and time, near the end of our institute! It’s wonderful to see humanities-inflected mapping tools like these, and we’re really looking forward to putting them to good use.
We also devoted a couple of days to a conversation with a group of publishers and journal editors, talking about practical problems that are familiar to all digital humanists: How can we ensure our projects have a decent lifespan? How can we properly attribute complex projects so that everyone gets credit? Where do publishers fit into this process? What financial model will support projects like these?
By the end of the institute, we’d solved all these problems. Just kidding, no we didn’t. But we did have some great discussions, which are well-documented in our collaborative notes and in our notes from the closing presentations. We even started working on a shared vocabulary for digital cultural mapping. We’re hoping to keep the conversation going, on shared documents, blogs, and Twitter, and we hope you’ll join in!
Data big and small, digital and analog
It’s been almost two years now since the New York Times proclaimed that “The next big idea in language, history and the arts [is] data.” This lofty declaration came in the context of a discussion about the rise of digital scholarship in the humanities, implying that “data” and “digital” are necessarily yoked concepts. And for many of us who work in the digital humanities, they are. One of the commonplaces of digital humanities scholarship is that we need computational methods to help us deal with the vast amounts of data that no individual researcher could possibly comb through on her own. Scholars such as Tanya Clement, Sara Steger, John Unsworth, and Kirsten Uszkalo have warned us, pointedly, not to try to “read a million books,” suggesting instead that we use digital tools to process large data sets. But do we always have to use digital tools to work with data? And does data always have to be large (i.e., of the “million books” variety) to be worth our attention as scholars?
One of my colleagues posted the following photo on her Facebook page the other day, with the caption “Sometimes work looks like this; Or, Humanists play with numbers, too.”
In this photo, my colleague Sari Altschuler was keeping track of the chronological distribution of articles, books, and dissertations on a research topic. What caught my attention is how obvious it is from Sari’s tallies, circles, and strike-throughs that she is making real and meaningful discoveries from a small (but still important) data set, and she’s doing so with the low-tech means of a pencil and some Post-it notes. (Plus, she didn’t have to apply for an NEH grant to get this work done!)
Sari’s “analog humanities” data analysis reminds me that there is a longer history to such number-crunching than the 2010 New York Times piece on the rise of the digital humanities would allow. This history would certainly include such classics as William Charvat’s The Profession of Authorship in America, 1800-1870 (Ohio State Press, 1968), which includes chapters with titles like “Longfellow’s Income from His Writings, 1842-1852” that no doubt involved culling research from many a handwritten tally sheet, as well as more recent works like Meredith L. McGill’s American Literature and the Culture of Reprinting, 1834-1853 (University of Pennsylvania Press, 2003), which required McGill to count dozens, if not hundreds (but certainly not a million), of reprinted texts from the antebellum United States.
So why does this matter? I think it behooves us as digital humanists to be aware (1) of the history of data analysis that precedes our current moment of fascination with digital computation, (2) of the continuing work of scholars in the digital era who opt not to use digital methods but who are nevertheless still working with quantitative data sets, and (3) that data need not be of the “million books” variety to be worth studying.
Why Open Access Textbooks Matter
I firmly believe that the first duty of anyone involved in graduate education these days is figuring out how to lower costs for graduate students, especially since the US government has decided to abandon subsidizing loans for students while they work toward their degrees. One possibility is the creation of a cross-institutional open access repository of texts and other resources that can be adopted to offset the cost of expensive textbooks. Before we get to the point where such a repository is available to everyone, I’m going to look at one group that is leading the way in adopting open textbooks.
Open Access Textbooks is funded by a grant from FIPSE (Fund for the Improvement of Postsecondary Education), and sees as its goal the creation “of a sustainable model for Florida and other states to discover, produce, and disseminate open textbooks.” The site includes a number of resources designed to show your university’s administration how to adopt open textbooks more widely. For example, the site links to a talk called “Developing a Digital Textbook Strategy for Your Campus,” which covers the use of open textbooks and was presented at the Florida Distance Learning Symposium on February 8, 2012.
The site also archives a number of useful webinars on the topic of open textbooks. One webinar focuses on the Writing Commons, a website supported by the Writing Program at the University of South Florida. WritingCommons.org offers students in the Writing Program “free access to an award-winning college textbook.” Note: Most of the webinars are archived on YouTube, but this one requires that you use the Elluminate Live! software associated with the presentation.
The site hosts information on state legislation regarding open textbooks. Did you realize that the State of Washington is currently enacting a bill that requires the Superintendent of Public Instruction to help develop a plan for open educational resources and open courseware? Further, Florida has created the “Florida Virtual Campus” to serve as a clearinghouse for free and distance learning resources. The Open Access Textbook site provides links to open access textbook websites, open access repository initiatives like the University Press of Florida’s Orange Grove Texts Plus, a site covering online tools for creating open textbooks, suggestions for promoting adoption and authorship, and guidelines for securing and maintaining funding. The Open Access Textbook site is a powerful example of the kind of collaboration that is urgently needed on college campuses, especially if federal funding for graduate education continues to decline.