Archive for category open access

[siu] Accessing content

Alex Hodgson of ReadCube is leading a panel called “Accessing Content: New Thinking and New Business Models for Accessing Research Literature” at the Shaking It Up conference.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Robert McGrath is from ReadCube, a platform for managing references. You import your PDFs, read them with their enhanced reader, and can annotate them and discover new content. You can click on references in the PDF and go directly to the sources. If you hit a paywall, they provide a set of options, including a temporary “checkout” of the article for $6. Academic libraries can set up a fund to pay for such access.

Eric Hellman talks about Unglue.it. Everyone in the book supply chain wants a percentage. But free e-books break the system because there are no percentages to take. “Even libraries hate free ebooks.” So, how do you give access to Oral Literature in Africa in Africa? Unglue.it ran a campaign, raised money, and liberated it. How do you get free textbooks into circulation? Teachers don’t know what’s out there. Unglue.it is creating MARC records for these free books to make it easy for libraries to include them. The novel Zero Sum Game is a great book that the author put out under a Creative Commons license, but how do you find out that it’s available? Likewise for Barbie: A Computer Engineer, which is a legal derivative of a much worse book. Unglue.it has over 1,000 Creative Commons-licensed books in their collection. One of Unglue.it’s projects: an author pledges to make the book available for free after a revenue target has been met. [Great! A bit like the Library License project from the Harvard Library Innovation Lab.] They’re now doing Thanks for Ungluing, which aggregates free ebooks and lets you download them for free or pay the author for them. [Plug: John Sundman’s Biodigital is available there. You definitely should pay him for it. It’s worth it.]

Marge Avery, ex of MIT Press and now at MIT Library, says the traditional barriers to access are price, time, and format. There are projects pushing on each of these. But she mainly wants to talk about format. “What does content want to be?” Academic authors often have research that won’t fit in the book. Univ presses are experimenting with shorter formats (MIT Press Bits), new content (Stanford Briefs), and publishing developing, unfinished content that will become a book (U of Minnesota). Cambridge Univ Press published The History Manifesto, which was created start to finish in four months and is available as Open Access as well as for a reasonable price; they’ve sold as many copies as free copies have been downloaded, which is great.

William Gunn of Mendeley talks about next-gen search. “Search doesn’t work.” Paul Kedrosky was looking for a dishwasher and all he found was spam. (Dishwashers, and how Google Eats Its Own Tail). Likewise, Jeff Atwood of StackExchange: “Trouble in the House of Google.” And we have the same problems in scholarly work. E.g., Google Scholar includes this as a scholarly work. Instead, we should be favoring push over pull, as at Mendeley. Use behavior analysis, etc. “There’s a lot of room for improvement” in search. He shows a Mendeley search. It auto-suggests keyword terms and then lets you facet.

Jenn Farthing talks about JSTOR’s “Register and Read” program. JSTOR has 150M content accesses per year, 9,000 institutions, 2,000 archival journals, 27,000 books. Register and Read: Free limited access for everyone. Piloted with 76 journals. Up to 3 free reads over a two week period. Now there are about 1,600 journals, and 2M users who have checked out 3.5M articles. (The journals are opted in to the program by their publishers.)

Q&A

Q: What have you learned in the course of these projects?

ReadCube: UI counts. Tracking onsite behavior is therefore important. Iterate and track.

Marge: It’d be good to have more metrics outside of sales. The circulation of the article is what really matters to the scholar.

Mendeley: Even more attention to the social relationships among the contributors and readers.

JSTOR: You can’t search for only the content that’s available to you through Register and Read. We’re adding that.

Unglue.it started out as a crowdfunding platform for free books. We didn’t realize how broken the supply chain is. Putting a book on a Web site isn’t enough. If we were doing it again, we’d begin with what we’re doing now, Thanks for Ungluing, gathering all the free books we can find.

Q: How to make it easier for beginners?

Unglue.it: The publishing process is designed to prevent people from doing stuff with ebooks. That’s a big barrier to the adoption of ebooks.

ReadCube: Not every reader needs a reference manager, etc.

Q: Even beginning students need articles to interoperate.

Q: When ReadCube negotiates prices with publishers, how does it go?

ReadCube: In our pilots, we haven’t seen any decline in the PDF sales. Also, the cost per download in a site license is a different sort of thing than a $6/day cost. A site license remains the most cost-effective way of acquiring access, so what we’re doing doesn’t compete with those licenses.

Q: The problem with the pay model is that you can’t appraise the value of the article until you’ve paid. Many pay models don’t recognize that barrier.

ReadCube: All the publishers have agreed to first-page previews, and often to showing the diagrams. We also show a blurred-out version of the pages that gives you a sense of the structure of the article. It remains a risk, of course.

Q: What’s your advice for large legacy publishers?

ReadCube: There’s a lot of room to explore different ways of brokering access — different potential payers, doing quick pilots, etc.

Mendeley: Make sure your revenue model is in line with your mission, as Geoff said in the opening session.

Marge: Distinguish the content from the container. People will pay for the container for convenience. People will pay for a book in Kindle format, while the content can be left open.

Mendeley: Reading a PDF is of human value, but computing across multiple articles is of emerging value. So we should be getting past the single reader business model.

JSTOR: Single article sales have not gone down because of Register and Read. They’re different users.

Unglue.it: Traditional publishers should cut their cost basis. They have fancy offices in expensive locations. They need to start thinking about how they can cut the cost of what they do.


[siu] Panel: Capturing the research lifecycle

It’s the first panel of the morning at Shaking It Up. Six men from six companies give brief overviews of their products. The session is led by Courtney Soderberg from the Center for Open Science, which sounds great. [Six panelists means that I won’t be able to keep up. Or keep straight who is who, since there are no name plates. So, I’ll just distinguish them by referring to them as “Another White Guy,” ‘k?]


Riffyn: “Manufacturing-grade quality in the R&D process.” This can easily double R&D productivity “because you stop missing those false negatives.” It starts with design

Github: “GitHub is a place where people do software development together.” 10M people. 15M software repositories. He points to Zenodo, a repository for research outputs. Open source communities are better at collaborating than most academic research communities are. The principles of open source can be applied to private projects as well. A key principle: everything has a URL. Also, the processes should be “lock-free” so they can be done in parallel and the decision about branching can be made later.
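[The “lock-free” point can be made concrete with a quick git sketch of my own — the branch and file names below are invented for illustration. Two people commit in parallel on their own branches, nothing is ever locked, and the decision about what to merge is deferred until later:]

```shell
# Throwaway repo illustrating lock-free, branch-later collaboration.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email demo@example.org
git config user.name "Demo User"

echo "baseline analysis" > results.txt
git add results.txt
git commit -qm "initial results"
base=$(git rev-parse --abbrev-ref HEAD)   # default branch (master or main)

# Two collaborators branch and work in parallel -- no coordination needed.
git checkout -qb alice
echo "alice's alternative model" >> results.txt
git commit -qam "alice: try an alternative model"

git checkout -q "$base"
git checkout -qb bob
echo "bob's notes" > notes.txt
git add notes.txt
git commit -qm "bob: add notes"

# Only later do we decide which line of work to merge.
git checkout -q "$base"
git merge -q bob
```

[Any version control system with cheap branching supports the same pattern; the point is that coordination happens at merge time, not up front.]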

Texas Advanced Computing Center: Agave is a Science-as-a-Service platform that provides lots of services as well as APIs. “It’s SalesForce for science.”

CERN is partnering with GitHub. “GitHub meets Zenodo.” But it also exports the software into INSPIRE, which links the paper with the software. [This might be the INSPIRE he’s referring to. Sorry. I know I should know this.]

Overleaf was inspired by Etherpad, the collaborative editor. But Etherpad doesn’t do figures or equations. Overleaf does that and much more.

Publiscize helps researchers translate their work into terms that a broader audience can understand. He sees three audiences: intradisciplinary, interdisciplinary, and the public. The site helps scientists create a version readable by the public, and helps them disseminate it through social networks.

Q&A

Some white guys provided answers I couldn’t quite hear to questions I couldn’t hear. They all seem to favor openness, standards, users owning their own data, and interoperability.

[They turned on the PA, so now I can hear. Yay. I missed the first couple of questions.]

Github: Libraries have uploaded 100,000 open access books, all for free. “Expect the unexpected. That happens a lot.” “Academics have been among the most abusive of our platform…in the best possible way.”

Zenodo: The most unusual users are the ones who want to install a copy at their local institutions. “We’re happy to help them fork off Zenodo.”

Q: Where do you see physical libraries fitting in?

AWG: We keep track of some people’s libraries.

AWG: People sometimes accidentally delete their entire company’s repos. We can get it back for you easily if you do.

AWG: Zenodo works with Chris Erdmann at Harvard Library.

AWG: We work with FigShare and others.

AWG: We can provide standard templates for Overleaf so, for example, your grad students’ theses can be managed easily.

AWG: We don’t do anything particular with libraries, but libraries are great.

Courtney: We’re working with ARL on a shared notification system.

Q: Mr. GitHub (Arfon Smith), you said in your comments that reproducibility is a workflow issue?

GitHub: You get reproducibility as a by-product of using tools like the ones represented on this panel. [The other panelists agree. Reproducibility should be just part of the infrastructure that you don’t have to think about.]


[2b2k] Digital Humanities: Ready for your 11AM debunking?

The New Republic continues to favor articles debunking claims that the Internet is bringing about profound changes. This time it’s an article on the digital humanities, titled “The Pseudo-Revolution,” by Adam Kirsch, a senior editor there. [This seems to be the article. Tip of the hat to Jose Afonso Furtado.]

I am not an expert in the digital humanities, but it’s clear to the people I know in the field that the meaning of the term is not yet settled. Indeed, the nature and extent of the discipline is itself a main object of study of those in the discipline. This means the field tends to attract those who think that the rise of the digital is significant enough to warrant differentiating the digital humanities from the pre-digital humanities. The revolutionary tone that bothers Adam so much is a natural if not inevitable consequence of the sociology of how disciplines are established. That of course doesn’t mean he’s wrong to critique it.

But Adam is exercised not just by revolutionary tone but by what he perceives as an attempt to establish claims through the vehemence of one’s assertions. That is indeed something to watch out for. But I think it also betrays a tin-eared reading by Adam. Those assertions are being made in a context the authors I think properly assume readers understand: the digital humanities is not a done deal. The case has to be made for it as a discipline. At this stage, that means making provocative claims, proposing radical reinterpretations, and challenging traditional values. While I agree that this can lead to thoughtless triumphalist assumptions by the digital humanists, it also needs to be understood within its context. Adam calls it “ideological,” and I can see why. But making bold and even over-bold claims is how discourses at this stage proceed. You challenge the incumbents, and then you challenge your cohort to see how far you can go. That’s how the territory is explored. This discourse absolutely needs the incumbents to push back. In fact, the discourse is shaped by the assumption that the environment is adversarial and the beatings will arrive in short order. In this case, though, I think Adam has cherry-picked the most extreme and least plausible provocations in order to argue against the entire field, rather than against its overreaching. We can agree about some of the examples and some of the linguistic extensions, but that doesn’t dismiss the entire effort the way Adam seems to think it does.

It’s good to have Adam’s challenge. Because his is a long and thoughtful article, I’ll discuss the thematic problems with it that I think are the most important.

First, I believe he’s too eager to make his case, which is the same criticism he makes of the digital humanists. For example, when talking about the use of algorithmic tools, he talks at length about Franco Moretti’s work, focusing on the essay “Style, Inc.: Reflections on 7,000 Titles.” Moretti used a computer to look for patterns in the titles of 7,000 novels published between 1740 and 1850, and discovered that they tended to get much shorter over time. “…Moretti shows that what changed was the function of the title itself.” As the market for novels got more crowded, the typical title went from being a summary of the contents to a “catchy, attention-grabbing advertisement for the book.” In addition, says Adam, Moretti discovered that sensationalistic novels tend to begin with “The” while “pioneering feminist novels” tended to begin with “A.” Moretti tenders an explanation, writing “What the article ‘says’ is that we are encountering all these figures for the first time.”

Adam concludes that while Moretti’s research is “as good a case for the usefulness of digital tools in the humanities as one can find” in any of the books under review, “its findings are not very exciting.” And, he says, you have to know which questions to ask the data, which requires being well-grounded in the humanities.

That you need to be well-grounded in the humanities to make meaningful use of digital tools is an important point. But here he seems to me to be arguing against a straw man. I have not encountered any digital humanists who suggest that we engage with our history and culture only algorithmically. I don’t profess expertise in the state of the digital humanities, so perhaps I’m wrong. But the digital humanists I know personally (including my friend Jeffrey Schnapp, a co-author of a book, Digital_Humanities, that Adam reviews) are in fact quite learned lovers of culture and history. If there is indeed an important branch of digital humanities that says we should entirely replace the study of the humanities with algorithms, then Adam’s criticism is trenchant…but I’d still want to hear from less extreme proponents of the field. In fact, in my limited experience, digital humanists are not trying to make the humanities safe for robots. They’re trying to increase our human engagement with and understanding of the humanities.

As to the point that algorithmic research can only “illustrate a truism rather than discovering a truth” — a criticism he levels even more fiercely at the Ngram research described in the book Uncharted — it seems to me that Adam is missing an important point. If computers can now establish quantitatively the truth of what we have assumed to be true, that is no small thing. For example, the Ngram work has established not only that Jewish sources were dropped from German books during the Nazi era, but also the timing and extent of the erasure. This not only helps make the humanities more evidence-based — remember that Adam criticizes the digital humanists for their argument-by-assertion — but also opens the possibility of algorithmically discovering correlations that overturn assumptions or surprise us. One might argue that we therefore need to explore these new techniques more thoroughly, rather than dismissing them as adding nothing. (Indeed, the NY Times review of Uncharted discusses surprising discoveries made via Ngram research.)

Perhaps the biggest problem I have with Adam’s critique I’ve also had with some digital humanists. Adam thinks of the digital humanities as being about the digitizing of sources. He then dismisses that digitizing as useful but hardly revolutionary: “The translation of books into digital files, accessible on the Internet around the world, can be seen as just another practical tool…which facilitates but does not change the actual humanistic work of thinking and writing.”

First, that underplays the potential significance of making the works of culture and scholarship globally available.

Second, if you’re going to minimize the digitizing of books as merely the translation of ink into pixels, you miss what I think is the most important and transformative aspect of the digital humanities: the networking of knowledge and scholarship. Adam in fact acknowledges the networking of scholarship in a twisty couple of paragraphs. He quotes the following from the book Digital_Humanities:

The myth of the humanities as the terrain of the solitary genius…— a philosophical text, a definitive historical study, a paradigm-shifting work of literary criticism — is, of course, a myth. Genius does exist, but knowledge has always been produced and accessed in ways that are fundamentally distributed…

Adam responds by name-checking some paradigm-shifting works, and snidely adds “you can go to the library and check them out…” He then says that there’s no contradiction between paradigm-shifting works existing and the fact that “Scholarship is always a conversation…” I believe he is here completely agreeing with the passage he thinks he’s criticizing: genius is real; paradigm-shifting works exist; these works are not created by geniuses in isolation.

Then he adds what for me is a telling conclusion: “It’s not immediately clear why things should change just because the book is read on a screen rather than on a page.” Yes, that transposition doesn’t suggest changes any more worthy of research than the introduction of mass market paperbacks in the 1940s [source]. But if scholarship is a conversation, might moving those scholarly conversations themselves onto a global network raise some revolutionary possibilities, since that global network allows every connected person to read the scholarship and its objects, lets everyone comment, provides no natural mechanism for promoting any works or comments over any others, inherently assumes a hyperlinked rather than sequential structure of what’s written, makes it easier to share than to sequester works, is equally useful for non-literary media, makes it easier to transclude than to include so that works no longer have to rely on briefly summarizing the other works they talk about, makes differences and disagreements much more visible and easily navigable, enables multiple and simultaneous ordering of assembled works, makes it easier to include everything than to curate collections, preserves and perpetuates errors, is becoming ubiquitously available to those who can afford connection, turns the Digital Divide into a gradient while simultaneously increasing the damage done by being on the wrong side of that gradient, is reducing the ability of a discipline to patrol its edges, and a whole lot more.

It seems to me reasonable to think that it is worth exploring whether these new affordances, limitations, relationships and metaphors might transform the humanities in some fundamental ways. Digital humanities too often is taken simply as, and sometimes takes itself as, the application of computing tools to the humanities. But it should be (and for many, is) broad enough to encompass the implications of the networking of works, ideas and people.

I understand that Adam and others are trying to preserve the humanities from being abandoned and belittled by those who ought to be defending the traditional in the face of the latest. That is a vitally important role, for as a field struggling to establish itself digital humanities is prone to over-stating its case. (I have been known to do so myself.) But in my understanding, that assumes that digital humanists want to replace all traditional methods of study with computer algorithms. Does anyone?

Adam’s article is a brisk challenge, but in my opinion he argues too hard against his foe. The article becomes ideological, just as he claims the explanations, justifications and explorations offered by the digital humanists are.

More significantly, focusing only on the digitizing of works and ignoring the networking of their ideas and the people discussing those ideas glosses over the locus of the most important changes occurring within the humanities. Insofar as the digital humanities focus on digitization instead of networking, I intend this as a criticism of that nascent discipline even more than as a criticism of Adam’s article.


[2b2k] Big Data and the Commons

I’m at the Engaging Big Data 2013 conference put on by Senseable City Lab at MIT. After the morning’s opener by Noam Chomsky (!), I’m leading one of 12 concurrent sessions. I’m supposed to talk for 15-20 mins and then lead a discussion. Here’s a summary of what I’m planning on saying:

Overall point: To look at the end state of the knowledge network/Commons we want to get to

Big Data started as an Info Age concept: magnify the storage and put it on a network. But you can see how the Net is affecting it:

First, there are a set of values that are being transformed:
- From accuracy to scale
- From control to innovation
- From ownership to collaboration
- From order to meaning

Second, the Net is transforming knowledge, which is changing the role of Big Data
- From filtered to scaled
- From settled to unsettled and under discussion
- From orderly to messy
- From done in private to done in public
- From a set of stopping points to endless links

If that’s roughly the case, then we can see a larger Net effect. The old Info Age hope (naive, yes, but it still shows up at times) was that we’d be able to create models that ultimately interoperate and provide an ever-increasing and ever-more detailed integrated model of the world. But in the new Commons, we recognize that not only won’t we ever derive a single model, there is tremendous strength in the diversity of models. This Commons then is enabled if:

  • All have access to all
  • There can be social engagement to further enrich our understanding
  • The conversations default to public

So, what can we do to get there? Maybe:

  • Build platforms and services
  • Support Open Access (and, as Lewis Hyde says, “beat the bounds” of the Commons regularly)
  • Support Linked Open Data

Questions if the discussion needs kickstarting:

  • What Big Data policies would help the Commons to flourish?
  • How can we improve the diversity of those who access and contribute to the Commons?
  • What are the personal and institutional hesitations that are hindering the further development of the Commons?
  • What role can and should Big Data play in knowledge-focused discussions? With participants who are not mathematically or statistically inclined?
  • Does anyone have experience with Linked Data? Tell us about it?

 


I just checked the agenda, which of course I should have done earlier, and discovered that of the 12 sessions today, 11 are being led by men. Had I done that homework, I would not have accepted their invitation.


Aaron Swartz and the future of libraries

I was unable to go to our local Aaron Swartz Hackathon, one of twenty around the world, because I’d committed (very happily) to give the after dinner talk at the University of Rhode Island Graduate Library and Information Studies 50th anniversary gala last night.

The event brought together an amazing set of people, including Senator Jack Reed, the current and most recent presidents of the American Library Association, Joan Ress Reeves, 50 particularly distinguished alumni (out of the three thousand (!) who have been graduated), and many, many more. These are heroes of libraries. (My cousin’s daughter, Alison Courchesne, also got an award. Yay, Alison!)

Although I’d worked hard on my talk, I decided to open it differently. I won’t try to reproduce what I actually said because the adrenalin of speaking in front of a crowd, especially one as awesome as last night’s, wipes out whatever short term memory remains. But it went very roughly something like this:

It’s awesome to be in a room with teachers, professors, researchers, a provost, deans, and librarians: people who work to make the world better…not to mention the three thousand alumni who are too busy do-ing to be able to be here tonight.

But it makes me remember another do-er: Aaron Swartz, the champion of open access, open data, open metadata, open government, open everything. Maybe I’m thinking about Aaron tonight because today is his birthday.

When we talk about the future of libraries, I usually promote the idea of libraries as platforms — platforms that make openly available everything that libraries know: all the data, all the metadata, what the community is making of what they get from the library (privacy accommodated, of course), all the guidance and wisdom of librarians, all the content especially if we can ever fix the insane copyright laws. Everything. All accessible to anyone who wants to write an application that puts it to use.

And the reason for that is because in my heart I don’t think librarians are going to invent the future of libraries. It’s too big a job for any one group. It will take the world to invent the future of libraries. It will take 14 year olds like Aaron to invent the future of libraries. We need to supply them with platforms that enable them.

I should add that I co-direct a Library Innovation Lab where we do work that I’m very proud of. So, of course libraries will participate in the invention of their future. But it’ll take the world — a world that contains people with the brilliance and commitment of an Aaron Swartz — to invent that future fully.

 


Here are wise words delivered at an Aaron Hackathon last night by Carl Malamud: Hacking Authority. For me, Carl is reminding us that the concept of hacking over-promises when the changes threaten large institutions that represent long-held values and assumptions. Change often requires the persistence and patience that Aaron exhibited, even as he hacked.


Elsevier acquires Mendeley + all the data about what you read, share, and highlight

I liked the Mendeley guys. Their product is terrific — read your scientific articles, annotate them, be guided by the reading behaviors of millions of other people. I’d met with them several times over the years about whether our LibraryCloud project (still very active but undergoing revisions) could get access to the incredibly rich metadata Mendeley gathers. I also appreciated Mendeley’s internal conflict about the urge to openness and the need to run a business. They were making reasonable decisions, I thought. At the very least they felt bad about the tension :)

Thus I was deeply disappointed by their acquisition by Elsevier. We could have a fun contest to come up with the company we would least trust with detailed data about what we’re reading and what we’re attending to in what we’re reading, and maybe Elsevier wouldn’t win. But Elsevier would be up there. The idea of my reading behaviors adding economic value to a company making huge profits by locking scholarship behind increasingly expensive paywalls is, in a word, repugnant.

In tweets back and forth with Mendeley’s William Gunn [twitter: mrgunn], he assures us that Mendeley won’t become “evil” so long as he is there. I do not doubt Bill’s intentions. But there is no more perilous position than standing between Elsevier and profits.

I seriously have no interest in judging the Mendeley folks. I still like them, and who am I to judge? If someone offered me $45M (the minimum estimate that I’ve seen) for a company I built from nothing, and especially if the acquiring company assured me that it would preserve the values of that company, I might well take the money. My judgment is actually on myself. My faith in the ability of well-intentioned private companies to withstand the brute force of money has been shaken. After all this time, I was foolish to have believed otherwise.

MrGunn tweets: “We don’t expect you to be joyous, just to give us a chance to show you what we can do.” Fair enough. I would be thrilled to be wrong. Unfortunately, the real question is not what Mendeley will do, but what Elsevier will do. And in that I have much less faith.

 


I’ve been getting the Twitter handles of Mendeley and Elsevier wrong. Ack. The right ones: @Mendeley_com and @ElsevierScience. Sorry!


[annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

Rob Sanderson [twitter:@azaroth42] of Los Alamos is talking about annotating Medieval manuscripts.


He says many Medieval manuscripts are being digitized. The Mellon Foundation is funding many such projects. But these have tended to reinvent the same tech, and have not been designed for interoperability with other projects. So the Digital Medieval Initiative was founded, with a long list of prestigious partners. They thought about what they’d like: distributed, linked data, interoperable, etc. For this they need a shared description format.

The traditional approach is to annotate an image of a page. But it can be very difficult to know which images to annotate; he gives as an example a page that has fold-outs. “The naive assumption is that an image equals a page.” But there may be fragments, or only portions of the page have been digitized (e.g., the illuminations), etc. There may be multiple images on a page, revealed by multi-spectral imaging. There may be multiple orientations of the page, etc.

The solution? The canvas paradigm. A canvas is an empty space corresponding to the rectangle (or whatever) of the page. You allow rich resources to be associated with it, and allow users to comment. For this, they use Open Annotation. You can specify a choice of images. You can associate text with an area of the canvas. There are lots of different ways to visualize those comments: overlays, side-by-side, etc.
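[As a rough sketch of the data model he’s describing — my own illustration, not the official SharedCanvas or Open Annotation serialization, and all URIs below are hypothetical — each annotation pairs a body (an image, or a scholar’s comment) with a target, which is either the whole canvas or a region of it:]

```python
import json

# A canvas: an abstract coordinate space standing in for the physical page.
canvas = {
    "@id": "http://example.org/ms/canvas/1",
    "@type": "sc:Canvas",
    "label": "folio 1r",
    "height": 1800,   # abstract units, not pixels of any particular image
    "width": 1200,
}

# Associate a digitized image with the whole canvas.
image_anno = {
    "@type": "oa:Annotation",
    "motivation": "sc:painting",
    "body": {"@id": "http://example.org/ms/images/f1r.jpg",
             "@type": "dctypes:Image"},
    "target": canvas["@id"],
}

# Associate a scholar's comment with a region of the canvas, picking the
# region out with an xywh fragment (x, y, width, height).
comment_anno = {
    "@type": "oa:Annotation",
    "motivation": "oa:commenting",
    "body": {"@type": "dctypes:Text",
             "chars": "Note the gold leaf in this initial."},
    "target": canvas["@id"] + "#xywh=100,150,400,300",
}

print(json.dumps([canvas, image_anno, comment_anno], indent=2))
```

[Because image and comment both point at the canvas rather than at each other, a viewer is free to render them however it likes — overlays, side-by-side, and so on — and new images (say, a multi-spectral capture) can be added later without touching existing annotations.]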

You can build hybrid pages. For example, an old scan might have a new color scan of its illustrations pointing at it. Or you could have a recorded performance of a piece of music pointing at the musical notation.

In summary, the SharedCanvas model uses open standards (HTML5, Open Annotation, TEI, etc.) and can be implemented in a distributed fashion across repositories, encouraging engagement by domain experts.


[2b2k] Cliff Lynch on preserving the ever-expanding scholarly record

Cliff Lynch is giving a talk this morning to the extended Harvard Library community on information stewardship. Cliff leads the Coalition for Networked Information, a project of the Association of Research Libraries and Educause, that is “concerned with the intelligent uses of information technology and networked information to enhance scholarship and intellectual life.” Cliff is helping the Harvard Library with the formulation of a set of information stewardship principles. Originally he was working with IT and the Harvard Library on principles, services, and initial projects related to digital information management. Given that his draft set of principles are broader than digital asset management, Cliff has been asked to address the larger community (says Mary Lee Kennedy).

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Cliff begins by saying that the principles he’s drafted are for discussion; how they apply to any particular institution is always a policy issue, with resource implications, that needs to be discussed. He says he’ll walk us through these principles, beginning with some concepts that underpin them.

When it comes to information stewardship, “university community” should include grad students whose research materials the university supports and maintains. Undergrads, too, to some extent. The presence of a medical school here also extends and smudges the boundaries.

Cliff then raises the policy question of the relation of the alumni to the university. There are practical reasons to keep the alumni involved, but particularly for grads of the professional schools, access to materials can be crucial.

He says he uses “scholarly record” for human-created things that convey scholarly ideas across time and space: books, journals, audio, web sites, etc. “This is getting more complicated and more diverse as time goes on.” E.g., author’s software can be part of that record. And there is a growing set of data, experimental records, etc., that are becoming part of the scholarly record.

Research libraries need to be concerned about things that support scholarship but are not usually considered part of the historical record. E.g., newspapers, popular novels, movies. These give insight into the scholarly work. There are also datasets that are part of the evidentiary record, e.g., data about the Earth gathered from sensors. “It’s so hard to figure out when enough is enough.” But as more of it goes digital, it requires new strategies for acquisition, curation and access. “What are the analogs of historical newspapers for the 21st century?” he asks. They are likely to be databases from corporations that may merge and die and that have “variable and often haphazard policies about how they maintain those databases.” We need to be thinking about how to ensure that data’s continued availability.

Provision of access: Part of that is being able to discover things. This shouldn’t require knowing which Harvard-specific access mechanism to come to. “We need to take a broad view of access” so that things can be found through the “key discovery mechanisms of the day,” beyond the institution’s. (He namechecks the Digital Public Library of America.)

And access isn’t just for “the relatively low-bandwidth human reader.” [APIs, platforms and linked data, etc., I assume.]

Maintaining a record of the scholarly work that the community does is a core mission of the university. So, he says, in his report he’s used the vocabulary of obligation; that is for discussion.

The 5 principles

1. The scholarly output of the community should be captured, preserved, organized, and made accessible. This should include the evidence that underlies that output. E.g., the experimental data that underlies a paper should be preserved. This takes us beyond digital data to things like specimens and cell lines, and requires including museums and other partners. (Congress is beginning to delve into this, Cliff notes, especially with regard to preserving the evidence that enables experiments to be replicated.)

The university is not alone in addressing these needs.

2. A university has the obligation to provide its community with the best possible access to the overall scholarly record. This is something to be done in partnership with research libraries around the world. But Harvard has a “leadership role to play.”

Here we need to think about providing alumni with continued access to the scholarly record. We train students and then send them out into the world and cut off their access. “In many cases, they’re just out of luck. There seems to be something really wrong there.”

Beyond the scholarly record, there are issues about providing access to the cultural record and sources. No institution alone can do this. “There’s a rich set of partnerships” to be formed. It used to be easier to get that cultural record by buying it from book jobbers, DVD suppliers, etc. Now it’s data with differing license terms and subscription limitations. A lot of it is out on the public Web. “We’re all hoping that the Internet Archive will do a good job,” but most of our institutions of higher learning aren’t contributing to that effort. Some research libraries are creating interesting partnerships with faculty, collecting particular parts of the Web in support of particular research interests. “Those are signposts toward a future where the engagement to collect and preserve the cultural records scholars need is going to get much more complex” and require much more positive outreach by libraries, and much more discussion with the community (and the faculty in particular) about which elements are going to be important to preserve.

“Absolutely the desirable thing is to share these collections broadly,” as broadly as possible.

3. “The time has come to recognize that good stewardship means creating digital records of physical objects” in order to preserve them and make them accessible. They should be stored away from the physical objects.

4. A lot goes on here in addition to faculty research. People come through putting on performances, talks, colloquia. “You need a strategy to preserve these and get them out there.”

“The stakes are getting much higher” when it comes to archives. The materials are not just papers and graphs. They include old computers and storage materials, “a microcosm of all of the horrible consumer recording technology of the 20th century,” e.g., 8mm film, Sony Betamax, etc.

We also need to think about what to archive of the classroom. We don’t have to capture every calculus discussion section, but you want to get enough to give a sense of what went on in the courses. The documentation of teaching and learning is undergoing a tremendous change. The new classroom tech and MOOCs are creating lots of data, much of it personally identifiable. “Most institutions have little or no policies around who gets to see it, how long they keep it, what sort of informed consent they need from students.” It’s important data and very sensitive data. Policy and stewardship discussions are needed. There are also record management issues.

5. We know that scholarly communication is…being transformed by the affordances of digital technology (though not as fast as some of us would like; online scientific journals often look like paper versions). “Create an ongoing partnership with the community and with other institutions to extend and broaden the way scholarly communication happens. The institutional role is terribly important in this. We need to find the balance between innovation and sustainability.”

Q&A

Q: Providing alumni with remote access is expensive. Harvard has about 100,000 living alumni, which includes people who spent one semester here. What sort of obligation does a university have to someone who, for example, spent a single semester here?

A: It’s something to be worked out. You can define alumnus as someone who has gotten a degree. You may ask for a co-payment. At some institutions, active members of the alumni association get some level of access. Also, grads of different schools may get access to different materials. Also, the most expensive items are typically those for which there is a commercial market. For example, professional-grade resources for the financial industry probably won’t be licensable to alumni because that would cannibalize their market. On the other hand, it’s probably not expensive to make JSTOR available to alumni.

Q: [robert darnton] Very helpful. We’re working on all 5 principles at Harvard. But there is a fundamental problem: we have to advance simultaneously on the digital and analog fronts. More printed books are published each year, and the output of the digital increases even faster. The pressures on our budget are enormous. What do you recommend as a strategy? And do you think Harvard has a special responsibility, since our library is bigger than any other except the Library of Congress? Smaller libraries can rely on Hathi etc. to acquire works.

A: “Those are really tough questions.” [audience laughs] It’s a large task but a finite one. Calculating how much money would take an institution how far “is a really good opportunity for fund raising.” Put in place measures that talk about the percentage of the collection that’s available, rather than a raw number of images. But, we are in a bad situation: continuing growth of traditional media (e.g., books), enormous expansion of digital resources. “My sense is…that for Harvard to be able to navigate this, it’s going to have to get more interdependent with other research libraries.” It’s ironic, because Harvard has been willing to shoulder enormous responsibility, and so has become a resource for other libraries. “It’s made life easier for a lot of the other research libraries” because they know Harvard will cover around the margins. “I’m afraid you may have to do that a little more for your scholars, and we are going to see more interdependence in the system. It’s unavoidable given the scope of the challenge.” “You need to be able to demonstrate that by becoming more interdependent, you’re getting more back than you’re giving up.” It’s a hard core problem, and “the institutional traditions make the challenge here unique.”


[2b2k] Why Is Open-Internet Champion Darrell Issa Supporting an Attack on Open Science?

I’ve swiped the title of this post from Rebecca J. Rosen’s excellent post at The Atlantic. Darrell Issa has been generally good on open Internet issues, so why is he supporting a bill that would forbid the government from requiring researchers to openly post the results of their research? [Later that day: I revised the previous sentence, which was gibberish. Sorry.]

Rebecca cites danah boyd’s awesome post: Save Scholarly Ideas, Not the Publishing Industry (a rant). InfoDocket has a helpful roundup, including a pointer to Peter Suber’s Google+ discussion.
