Today marks the inaugural launch of DigitalCultureWeek. The aim of this new weekly review is to provide a forum for amplifying, (re)absorbing, and asking questions about each week’s digital humanities and new media news. This first week I am delighted to be joined by Matt Burton, Daniel Chamberlain, and Miriam Posner. Together our DCW authors confront how we (as scholars, teachers, librarians, and writers) access, compile, and just generally work with (and in) digital media to foster DH scholarship. Equally important to the discussion is how we resist transforming the multidisciplinary fields of DH and new media studies into a one-size-fits-all rubric.
So: modes of acceptance, modes of resistance. Plugging in and plugging out. This week we ask: What does it mean for a tech journalist to go internet-free? How can we create effective learning spaces on campus? Who needs to learn to code (and why…and how much)? And what is the library’s role in teaching digital research workflows? All of this week’s (and every DigitalCultureWeek’s) topics concern discourses and demands that surround DH scholars and scholarship—a week’s worth of thought that deserves both further celebration and renewed critique.
Feel free to join the conversation by posting comments below!
On the Normality of Networked Life
I want to open my DCW contributions with a link to an ongoing experiment at a technology blog, The Verge. One of their writers, Paul Miller, has decided to leave the Internet…FOR A YEAR (gasp). He writes:
I’m abandoning one of my “top 5” technological innovations of all time for a little peace and quiet. If I can survive the separation, I’m going to do this for a year. Yeah, I’m serious. I’m not leaving The Verge, and I’m not becoming a hermit, I just won’t use the internet in my personal or work life, and won’t ask anyone to use it for me.
Now we might instinctively descend into abject snarkery and quip about his “bravery” or point out that he really isn’t “off the internet.” He is still highly dependent upon an INTER-NETworked assemblage of human and non-human actors to support his everyday existence. While he may have removed the internet from his direct experience, he is still a netizen-once-removed. Such criticisms, which remind me of those who point out that Thoreau’s mother did his laundry, seem to miss the point (note: Paul Miller is not a Thoreau). We can criticize and dismiss his project, but then we miss an utterly fascinating story about a meeting of 40,000 ultra-orthodox Jews coming together to try to understand the Internet’s impact upon the elementary forms of their religious life (spoiler: the Internet is profane, but can no longer be rejected outright).
Another option might be to use Miller’s project as an opportunity to reflexively foster a kind of technological mindfulness, especially among those who might not otherwise be mindful of how information technology is transforming their interpersonal relationships (hint: I’m saying readers of The Verge, like myself, might be addicted to “teh internets”).
As scholars of highly mediated digital environments, how often do we take a moment and contemplate a life outside the hegemony of the screen? How many of you, when you first wake up in the morning, reach over to the nightstand, pick up your smartphone, and check your email and Twitter? I know I do, and I feel a competing tension because of this habit. On the one hand, my individual and collective identity is increasingly constituted by the communicative possibilities afforded by the internet (i.e. I am what I tweet). But, on the other, I have yet to fully reconcile the influence such technology has had upon the practices of my everyday life (I’m not sure I could give up the internet for a week). Unlike the ultra-orthodox Jewish community, I have no framework to evaluate the sacred or profane impacts of technology in my life. So I wonder: what can I learn about myself and my relationship to people and technology by observing how Paul Miller or the ultra-orthodox Jewish community reconcile their relationships to the Internet?
Designing Learning Spaces
One of the nice things about a job that takes me in and out of the classroom is that I get opportunities to play both sides of the fence on a regular basis, which allows me to bring an instructor’s perspective to administrative questions and occasionally apply an administrator’s budget to challenges faced by professors. I could (and in the future likely will) point out plenty of examples where this helps me, my colleagues, and my institution solve problems; for today I want to highlight the challenge of designing learning spaces, which was the topic of discussion at a recent THATCamp and in ProfHacker.
What I really like about these discussions (and the associated notes) is that they attend to both the strategic institutionalization of space and the tactical interventions that can allow for productive, unauthorized, and fun uses of space in a learning context. At some level, I think that what instructors generally need are clean, well-lighted spaces with a baseline of (working!) technology, flexible furniture, and plenty of vertical writing surfaces. Ideally, all of these parameters can be addressed in a manner that communicates to students the importance of the work that goes on in the space (stained carpets, flickering lights, and spotty wi-fi suggest the opposite) and encourages students to take ownership of the space and their own learning.

We don’t have too many spaces that fit that description at my institution, so I have been working with colleagues to make some changes. So far we have developed a small, flexible learning lab that decenters the classroom with full-perimeter writing surfaces, portable and hangable white boards, and multi-screen projection from each project table; an open-plan work space with configurable seating, wireless projection, and plenty of vertical writing space; and, most importantly, a regular process through which these spaces are shared and feedback is gathered from colleagues who teach in spaces all across the campus. As others have, we have also begun the process of experimenting with loosely structured spaces for students to claim and use as their own. Of course, now that everyone wants to teach in these spaces, I will likely have to resort to hacking whatever space I get assigned to next year.
On Competence: Mark Sample’s “5 BASIC Statements on Computational Literacy”
Mark Sample’s recent five-minute position talk at the Computers and Writing Conference (hosted at North Carolina State University this past Saturday) presents a welcome and nuanced alternative to recent do-or-die edicts that DH scholars strap themselves in for long thoughts of Visual C# (or Perl, or Ruby, or Python…).
His advice: learn what you need to learn to be a good scholar.
Using statements from the programming language BASIC—developed at Dartmouth in the 1960s as a “Beginner’s All-Purpose Symbolic Instruction Code”—Sample derives a series of important lessons from early coding culture. In brief: programming languages are “social texts” (think of the words behind the traditional exercise “PRINT ‘HELLO WORLD’”: a simple greeting that indicates how programming and programmers direct the products of code toward a public). They’re also aesthetic texts—there’s “spaghetti code” and there’s elegance. And code can, like any text, be evocative and artful. But it can also be impenetrably illegible—something that leads to the sorts of in/out elitism helpful to no one (as poignantly discussed by Bethany Nowviskie and DCW’s own Miriam Posner).
Rather than reiterate this elitism with a term like literacy—which, Sample writes, is “often misused as a gatekeeping concept, an either/or state”—he urges us to think instead in terms of contextualized competency: not an all-encompassing or idealized knowledge, but a situational, need-based one.
As a (sometime) historian of linguistics, I was struck by how different this notion of competence is from the sort that Noam Chomsky famously asserts as part of his theory of generative grammar. Linguistic competence is, for the linguist, a way of condensing everything speakers know (and everything they don’t know they know) about their native language. In Chomskyan grammar, linguistic competence is precisely the kind of abstracted, idealized version of syntactic knowledge that Sample wants to avoid in talking about computational literacy.
In other words, you don’t (or shouldn’t) need to be a native speaker of code to be able to speak critically about it.
I like Sample’s notion of a highly context-driven coding competence. But something that still needs to be confronted is the shadowy ideal of perfect knowledge that continues to lurk behind freighted terms like competence (though, to be fair, maybe this is simply the linguistic baggage I attach to the term). Competence anxiety is at the root of much humanist angst—the fear that we’ll be called out for not demonstrating complete fluency with the subject matter we describe, critique, explore, analyze, historicize, and interpret. So the question remains: how much competence is enough to be a critic or historian of code? Or, if “how much” is too quantitative, then how do we know when we know enough to be good critics and good historians?
Research workflows: An opportunity for libraries?
This week I’ve been thinking a lot about research workflows: a not-entirely-satisfactory name for the file-capturing, data-wrangling, information-retrieving process that results (if all goes well) in scholarly work. Here at UCLA (and I suspect elsewhere) grad students in particular are very curious about how to optimize these processes, and more and more they’re sharing information with each other, in blogs and forums and in person.
I’m fascinated by what I think is a real shift in the way we’re doing and thinking about our scholarship. Traditionally, the rhetoric of research suggests that the researcher’s problem is one of scarcity; he or she must hunt down scraps of significance and piece them together. But what I’m seeing, more and more, is that researchers are beleaguered by a surfeit of significance. At one research workshop, a scholar described taking 300 photographs of archival material per hour.
How can a human being possibly process, organize, and retrieve that much information? There are, I think, methods that can really help with this: tools like OCR, Automator, and Hazel — even techniques like topic modeling and network analysis. Keeping up with these tools, though, requires constant vigilance and a geeky predilection for such things.
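To make the idea concrete, here is a minimal sketch of the kind of automation a tool like Hazel performs — filing a session’s worth of archival photos into dated subfolders — written with only the Python standard library. The folder names and the function itself (`file_photos`) are hypothetical, purely for illustration; a real workflow would key off EXIF data or archive metadata rather than file timestamps.

```python
"""Sketch: file an 'inbox' of archival photos into dated folders,
in the spirit of Hazel-style rule-based automation.
Paths and function names are hypothetical examples."""
import shutil
from datetime import datetime
from pathlib import Path


def file_photos(inbox: Path, archive: Path) -> None:
    """Move each .jpg in `inbox` into archive/YYYY-MM-DD/,
    keyed on the file's last-modified timestamp."""
    for photo in sorted(inbox.glob("*.jpg")):
        shot_date = datetime.fromtimestamp(photo.stat().st_mtime)
        dest = archive / shot_date.strftime("%Y-%m-%d")
        dest.mkdir(parents=True, exist_ok=True)  # one folder per shoot day
        shutil.move(str(photo), str(dest / photo.name))
```

A researcher returning from the archive with several hundred photos could run something like this once per session instead of filing images by hand — the sort of small, personal automation a workflow consultation might surface.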
Here, I think, is a real opportunity for libraries. I think libraries might usefully offer research workflow consultations: one-on-one appointments with researchers to understand and optimize a scholar’s workflow. I suggest one-on-one consultations because research methods are so deeply personal, inflected both by one’s subject matter and by one’s (often eccentric) work habits. A librarian might develop a set of diagnostics to first understand and then fine-tune a scholar’s workflow, taking into account both the kinds of material he or she works with and the researcher’s personality.
We know that the library is the place to find information professionals. It seems natural, then, to look there for expertise in sorting through research data.