DCW Volume 1 Issue 2 – Open, Shameless, (Un)filtered

by Korey Jackson on June 1, 2012

In this week’s issue, our authors discuss the many faces of shame and web writing, the business (and commodification) of academic publishing, big data at the library, creating encoding standards for archives and archival materials, and open education and digital pedagogy.

If there’s a through-line to trace in the five articles below, it’s the idea that the openness and accessibility made possible by distributed networks, distributed (and big) data, and distributed (and timely) writing bring with them a need both to filter and unfilter: to vet resources and to resist the urge to continually self-vet; to bravely put forward information for the good of the crowd, and to trust communities to be their own best content filters.

As always, feel free to add your comments below!


20 Minutes or Less to More Writing and Less Shame
Brian Croxall

Last week, Matt wrote about the normality of the networked life, and asked us to think about how our predilection toward technology use affects our daily lives. What does it mean when we reach for our phones or other connected devices before getting out of bed? Should my day really start with Twitter? These are the sorts of questions designed to be provoked by events such as the National Day of Unplugging or the Great ProfHacker Offline Challenge. (Fortunately, both are in the past, so you needn’t worry that you will be disconnected anytime soon.)

On the other hand, our networks can function like the tides, which occasionally bring you pretty and shiny things. Such was the case earlier this week when Twitter drew my attention to a new post from Kathleen Fitzpatrick on the need for shamelessness in writing. Kathleen’s own post was inspired by Collin Gifford Brooke’s meditation on Nietzsche’s Ecce Homo and the work of makers on Etsy. At the risk of doing both Kathleen and Collin the severe injustice of paraphrasing, they both ask what function our writing should serve, what affect we should attach to the written word, and when we should share our writing with others. Kathleen’s penultimate paragraph cuts to the chase:

As I read Collin’s post, I was drawn to this notion of shamelessness as a condition for writing of the sort in which I hope to immerse myself. Shedding shame is a necessary precursor to blogging, I think, and that blogging is likely to be a key component in helping me around the main obstacle keeping me from writing these days: not being at all sure that I have anything worth saying.

When each paragraph has to bear the weight of the next Big Project, its fragility and its apparent emptiness become all too visible. When each paragraph is just a passing thought, a throwaway, something that might lead to the next thought, or might simply drift off on the breeze, that fragility and emptiness might be transformed into virtues.

I find this notion of shamelessness about writing incredibly provocative. And I think that my experience on Twitter would make it easy for me to just throw thoughts out into the ether. And yet…I regularly have thoughts about blog posts that I want to write but never get around to, because I’ve constructed my idea of blogging as something that MUST SAY SOMETHING. That must be well thought through and well written, as I see on the blogs of friends and colleagues (including those of the DCW crew). But through lack of time and my desire to create or preserve a particular brand image for myself, my thoughts go unwritten.

I’m sure that it’s well and meet that I haven’t foisted every blog post idea I’ve had upon the world. But in the coming weeks, I hope to loosen up a bit in my writing and share those longer-than-140-character thoughts more frequently. Let this post be my own example, since I wrote it in 20 minutes. Perhaps—you think—the lack of time spent here is obvious. I hope it is and that it remains so for me.


The Business of Publishing: Reporting from SSP
Korey Jackson

I’m at the Society for Scholarly Publishing’s Annual Meeting this week, so my DCW musings come amid attending sessions, meeting other folks in the academic and trade publishing world (over 800 in attendance!), and generally operating under the hyper-caffeinated haze of Beltway conference-going. What follows is a broad overview of the questions and conversations that have dominated the past two days.

The marquee header for this year’s convention—“Social, Mobile, Agile, Global: Are You Ready?”—has the slight smack of a marketing anxiety-booster: the question itself implying that scholarly publishers aren’t quite prepared for the ascendance of real-time, mobile-first, networked information exchange. (Though there are certainly plenty of exhibitors here eager to offer their version of readiness.)

It’s worth mentioning, however, that this state of not-quite-ready isn’t peculiar to scholarly publishing. As Lee Rainie, Director of the Pew Research Center’s Internet and American Life Project, remarked during his plenary talk yesterday, the very fact that the map of Web 3.0 (or, more to the point, the next potentially web-agnostic breakthrough in mobile technology) is still “90% blank” means that, if anything, academic publishers stand just as much of a chance of being in the vanguard as any other industry.

But first we need to figure out how we want to incorporate mobility, social networks, and real-time expectations into the kinds of products we make. Rainie’s #1 question for the audience was, tellingly, “What is your commodity?” If online self-publishing, self-marketing, and crowd-based post-publication review are becoming de rigueur (and that’s certainly still a big ‘if’ in many academic sectors), what is the value proposition of an institutional publisher…aside from institutionality?

Dan Cohen, director of the Roy Rosenzweig Center for History and New Media at George Mason University, offered one answer during his keynote speech Wednesday night. With PressForward, CHNM’s publishing arm, Cohen is putting process in front of (or at least alongside) product. It’s a method he’s referred to in the recent past as “catching the good.” There is no shortage of high-quality (and refreshingly shameless) scholarly writing on the web, says Cohen. If anything, the 20-year history of web writing shows a marked trend toward more and more robust user-generated content. What PressForward aims to do is offer a “part algorithmic, part editorial” filter for this content. And they are attempting to do so in a tiered way, with broadly defined community web content at one end of the filter chain, Digital Humanities Now (whose version 2.0 marks its sixth month this week!) somewhere in the middle, and the highly vetted (both pre- and post-publication) Journal of Digital Humanities at the other end. In this way, the commodity is less about some fetish object of “big ‘O’” original content and more about the evolved filter…not to mention the human and technological infrastructure behind that filter. (You can read more about reactions to Cohen’s talk in Todd Carpenter’s post at The Scholarly Kitchen.)
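Cohen framed the filter in editorial terms, but the tiered, “part algorithmic, part editorial” idea is easy to picture in miniature. What follows is a toy sketch of my own, not PressForward’s actual code; the scoring signals and weights are invented for illustration. An algorithmic pass shortlists community content by crude popularity signals, and a human editorial pass selects from that shortlist.

    # Toy sketch of a tiered, "part algorithmic, part editorial" filter.
    # Not PressForward's implementation; signals and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        links_in: int   # inbound links from other sites
        shares: int     # social-media shares

    def score(item: Item) -> float:
        """Algorithmic tier: rank content by crude social signals."""
        return 2.0 * item.links_in + 0.5 * item.shares

    def algorithmic_tier(items, keep=2):
        """Forward only the top-scoring items up the chain."""
        return sorted(items, key=score, reverse=True)[:keep]

    def editorial_tier(shortlist, approved):
        """Editorial tier: a human editor picks from the shortlist."""
        return [i for i in shortlist if i.title in approved]

    web_content = [
        Item("Notes on TEI encoding", links_in=12, shares=40),
        Item("Conference recap", links_in=3, shares=5),
        Item("Topic-modeling tutorial", links_in=8, shares=90),
    ]
    shortlist = algorithmic_tier(web_content)  # roughly, the DH Now tier
    journal = editorial_tier(shortlist, {"Topic-modeling tutorial"})
    print([i.title for i in journal])          # roughly, the JDH end of the chain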

Of course, it remains to be seen whether this process (and others like it) can be successfully construed as a commodity, or, indeed, whether commodification should be the prime mover for a system that is, in the end, about the freer exchange of information.


Libraries and the Future of Research
Kathryn Tomasek

Because I came into digital humanities through projects associated with libraries, I often have opportunities to hear from some of the people at the forefront of imagining how libraries might manage the many transitions that come with digital texts.

Last Friday, I had the good fortune to introduce the speakers at the BNN Future of the Academy Speaker Series, which is co-sponsored by NERCOMP, the NorthEast Regional Computing Program; NITLE, the National Institute for Technology in Liberal Education; and BLC, the Boston Library Consortium.  The series focuses on strategic issues surrounding the integration of information resources and technology in support of higher education.  Friday’s symposium, entitled “The Hathi Trust, Google Books, and the Future of Research,” featured Paul Courant and John Unsworth speaking from their experiences at the University of Michigan and the University of Illinois, Urbana-Champaign, respectively.  Both Unsworth and Courant were part of the group responsible for “Our Cultural Commonwealth,” the 2006 report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities and Social Sciences.

Courant, who is the University Librarian and Dean of Libraries at the University of Michigan, spoke on “The Google Project, the Hathi Trust, and the Digital Public Library of America: Where They Came From, Where They Are Going.”  At least that was the title he had announced before Friday.  I don’t seem to have written down the changed title he used last week.  (Rather brilliant on my part, I must say.)

He told the story of the digitization of print books at the University of Michigan, a story that includes the pre-Google digitization of the Evans Collection at the University of Michigan Libraries with OCR under the title “Early American Imprints.” When Larry Page undertook to digitize everything in the University of Michigan Libraries, part of the University’s agreement with Google guaranteed the university its own copies of the digital files, to use as it saw fit. Courant talked about some of the possibilities the Google settlement might have opened up, but the settlement never happened. The University’s files are now part of the content that makes up HathiTrust, which is run by academic libraries rather than private enterprise. At this point, HathiTrust contains over ten million volumes, and almost thirty percent of those are in the public domain. Another measure of the content: 462 terabytes of data. The potential this content represents for data mining led us into John Unsworth’s talk.

In “Challenges of Computational Research and Copyright,” Unsworth focused on the HathiTrust Research Center, a combined effort of Indiana University and the University of Illinois, Urbana-Champaign, with other institutions in the process of joining. After mentioning the MOUs that are necessary when multiple institutions share data, Unsworth described the creation of the HTRC. The organization is now in Phase One, building out the cyberinfrastructure for the collection and testing use cases. In Phase Two, the data will be opened to researchers, with access provided through an API. The features of the HTRC are being developed in response to queries to recipients of Google DH grants. Acknowledging the shortcomings of OCR data, Unsworth mentioned the interest of Laura Mandell and Martin Mueller in crowdsourcing data cleanup. Unsworth closed with some discussion of the kinds of work future humanists will do and what they will need to know. One challenge will lie in reading data visualizations and distinguishing meaning-bearing characteristics from arbitrary ones. He mentioned the work of Elijah Meeks at Stanford University as well as that of Ted Underwood at the University of Illinois, Urbana-Champaign. Unsworth gave no indication that his recent move to Brandeis University would change the development of the HTRC, and digital humanists in the Northeast are thrilled to have him in the area.

Since both of the speakers mentioned the challenges presented by copyright, many questions from the audience focused on these challenges.  Other questions involved the relationships between the book-focused projects described by Courant and Unsworth and digital preservation projects.

The audience appeared invigorated by the opportunities and challenges digital libraries pose for research and for our institutions’ abilities to support the research and teaching of staff, students, and faculty members.  For this audience, Big Data sounds inviting, and they are eager to explore its uses.


Building a National Archival Authorities Infrastructure
Edward Whitley

Last week I had the privilege of attending a meeting organized by Daniel Pitti and his colleagues at the Social Networks and Archival Context (SNAC) project called, rather optimistically, “Building a National Archival Authorities Infrastructure.” (The Chronicle of Higher Education has already covered both SNAC and this meeting.) Pitti thinks big. I like that about him. He thinks about big data sets, about continent-spanning coalitions between institutions both large and small, and about the hefty grants from the NEH and the Mellon Foundation necessary to make his big ideas possible.

Pitti had gathered a group of archivists, scholars, and librarians from across the United States in an effort to convince them to create metadata for their holdings that would, as my colleague Micki McGee likes to say, “play nice with each other.” As the SNAC website explains, the goal is to have archives across the country use “a recently released Society of American Archivists communication standard for encoding information about persons, corporate bodies, and families, Encoded Archival Context-Corporate Bodies, Persons, and Families (EAC-CPF). EAC-CPF standardizes descriptions of people and groups who are documented in archival records.” With these standardized descriptions in place, Pitti and co. could then use a suite of digital tools that they have created to do really cool things with this national data set, like visualize the social network of an individual mentioned in an archive somewhere.
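To make the standard a little more concrete: an EAC-CPF record is structured XML describing an entity and its relations. Below is a schematic sketch built with Python’s standard library. The element names follow the EAC-CPF tag library as I understand it, but this is a simplified illustration, not a complete, schema-valid record (the real standard requires a fuller control section, namespace declarations, and more).

    # Schematic sketch of an EAC-CPF-style record for a person entity.
    # Simplified for illustration; not a complete, schema-valid record.
    import xml.etree.ElementTree as ET

    root = ET.Element("eac-cpf")

    # Control metadata: at minimum, an identifier for the record itself.
    control = ET.SubElement(root, "control")
    ET.SubElement(control, "recordId").text = "example-anthony-susan-b"

    # The description of the person (or corporate body, or family).
    desc = ET.SubElement(root, "cpfDescription")
    identity = ET.SubElement(desc, "identity")
    ET.SubElement(identity, "entityType").text = "person"
    name_entry = ET.SubElement(identity, "nameEntry")
    ET.SubElement(name_entry, "part").text = "Anthony, Susan B., 1820-1906"

    # Relations are what let aggregated records form a social network.
    relations = ET.SubElement(desc, "relations")
    relation = ET.SubElement(relations, "cpfRelation")
    ET.SubElement(relation, "relationEntry").text = "Barton, Clara, 1821-1912"

    print(ET.tostring(root, encoding="unicode"))

Once records like this exist across many repositories, the aggregated relations are what make network views possible.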

For instance, if I look up Susan B. Anthony on the SNAC website, not only do I get links to the various archival holdings from across the nation that relate to this antebellum feminist, but I also get a social network visualization showing me how Anthony is related to other people and groups (such as Clara Barton, Frederick Douglass, or the National American Woman Suffrage Association) whose holdings also reside in one of the dozens of archival repositories currently participating in the SNAC project.
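That kind of view is easy to mock up once the relations are in hand. The snippet below sketches the Anthony network described above using networkx and matplotlib; these are my tool choices for illustration, not necessarily what SNAC itself runs.

    # Minimal sketch of a SNAC-style social network view.
    # networkx/matplotlib are illustrative choices, not SNAC's stack.
    import networkx as nx
    import matplotlib.pyplot as plt

    G = nx.Graph()
    G.add_edges_from([
        ("Susan B. Anthony", "Clara Barton"),
        ("Susan B. Anthony", "Frederick Douglass"),
        ("Susan B. Anthony", "National American Woman Suffrage Association"),
    ])

    nx.draw_networkx(G, node_color="lightgray")  # node labels drawn by default
    plt.axis("off")
    plt.show()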

I was completely sold on Pitti’s proposal. As a scholar who makes use of archival data from the nineteenth century and as a teacher who frequently inflicts upon students the joys and frustrations of archival research, I would love to have this kind of resource at my disposal. But I wasn’t Pitti’s primary audience for his sales pitch; the nation’s archivists and librarians were, and as exciting as this project may seem (and it seems pretty exciting to me), they would have to muster the time, energy, resources, and labor to realize Pitti’s dream. Do ever-diminishing library budgets have enough in the black to make this happen? Would the U.S. National Archives step up to foot the bill? Is this the kind of work that could be effectively (and reliably) crowdsourced?

This is a big task and, as I said, I like that Pitti thinks big. I just hope that he can find enough people willing to help him with the heavy lifting to make a National Archival Authorities Infrastructure a reality.


Opening Education
Roger Whitson

I’m becoming fascinated by the way that digital pedagogy is transforming relationships between students, teachers, scholars, and administration. Hybrid Pedagogy features a post by Marylhurst University undergraduate Teo Bishop called “A Letter from a Hybrid Student” in which he reflects on the recent #digped Twitter conversation about teachers and students. Bishop is particularly powerful when he mentions the tendency of teachers in that conversation to use “pedagogical jargon,” which caused him to back away “from the conversation,” and when he challenges his teachers to see students as “experts in their own right.”

The challenges of more open modes of education are not just cultural; they also have to do with university infrastructure and administration. Lisa Spiro and Bryan Alexander’s “Open Education in the Liberal Arts: A NITLE Working Paper” outlines the various ways universities are beginning to incorporate open education into their curricula. Since I’m participating in an open courseware pilot project in the fall at Washington State, I was delighted to see the broad range of ways open education figures into universities: from making content available, to creating open learning tools (“wikis, blogging platforms, and open Learning Management Systems”), to adopting open standards and protocols (“Creative Commons licenses and the IMS metadata standard”), to offering open courses (“The Open Learning Initiative and the ‘Change: Education, Learning and Technology MOOC’”) and open universities (“OERu”). Spiro and Alexander also provide a set of practical recommendations for universities, intriguingly discussing how open resources can save money by reducing the work replicated among faculty designing courses, and how they can inspire further educational innovation by building on the insights of other teachers.

Finally, Frank Ambrosio, William Garr, Eddie Maloney, and Theresa Schlafly at the Journal of Interactive Technology and Pedagogy discuss how their digital edition MyDante creates a collaborative reading environment for an undergraduate philosophy course. They argue that collaborative reading environments connect readers “to a reality that is shared by other readers” and “strengthen his or her sense of a communal worldview of human culture.” For me, MyDante illustrates the power of collaborative teaching and scholarship, yet I’d also like to see applications that are more open to different texts and different classroom environments, in addition to single-author sites.
