Archive for category everythingIsMiscellaneous

[liveblog] The future of libraries

I’m at a Hubweek event called “Libraries: The Next Generation.” It’s a panel hosted by the Berkman Center with Dan Cohen, the executive director of the DPLA; Andromeda Yelton, a developer who has done work with libraries; and Jeffrey Schnapp of metaLab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Sue Kriegsman of the Center introduces the session by explaining Berkman’s interest in libraries. “We have libraries lurking in every corner…which is fabulous.” Also, Berkman incubated the DPLA. And it has other projects underway.

Dan Cohen speaks first. He says if he were to give a State of the Union Address about libraries, he’d say: “They are as beloved as ever and stand at the center of communities” here and around the world. He cites a recent Pew survey about perspectives on libraries: “…libraries have the highest approval rating of all American institutions.” But, he warns, that’s fragile. There are many pressures, and libraries are chronically under-funded, which is hard to understand given how beloved they are.

First among the pressures on libraries: the move from print. E-book adoption hasn’t stalled, although the purchase of e-books from the Big Five publishers compared to print has slowed. But Overdrive is lending lots of ebooks. Amazon has 65% of the ebook market, “a scary number,” Dan says. In the Pew survey a couple of weeks ago, 35% said that libraries ought to spend more on ebooks even at the expense of physical books. But 20% thought the opposite. That makes it hard to be the director of a public library.

If you look at the ebook market, there’s more reading going on at places like the DPLA. (He mentions the StackLife browser they use, which came out of the Harvard Library Innovation Lab that I used to co-direct.) Many of the ebooks are being provided straight to a platform (mainly Amazon) by the authors.

There are lots of jobs public libraries do that are unrelated to books. E.g., the Boston Public Library is heavily used by the homeless population.

The way forward? Dan stresses working together, collaboration. “DPLA is as much a social, collaborative project as it is a technical project.” It is run by a community that has gotten together to run a common platform.

And digital is important. We don’t want to leave it to Jeff Bezos who “wants to drop anything on you that you want, by drone, in an hour.”

Andromeda: She says she’s going to talk about “libraries beyond Thunderdome,” echoing a phrase from Sue Kriegsman’s opening comments. “My real concern is with the skills of the people surrounding our crashed Boeing.” Libraries need better skills to evaluate and build the software they need. She gives some examples of places where we see a tension between library values and code.

1. The tension between access and privacy. Physical books leave no traces. With ebooks the reading is generally tracked. Overdrive did a deal so that library patrons who access ebooks get notices from Amazon when their loan period is almost up. Adobe does rights management, with reports coming page by page about what people are reading. “Unencrypted over the Internet,” she adds. “You need a fair bit of technical knowledge to see that this is happening,” she says. “It doesn’t have to be this way.” “It’s the DRM and the technology that have these privacy issues built in.”

She points to the NYPL Library Simplified program that makes it far easier for non-techie users. It includes access to Project Gutenberg. Libraries have an incentive to build open architectures that support privacy. But they need the funding and the technical resources.

She cites the Library Freedom Project that teaches librarians about anti-surveillance technologies. They let library users browse the Internet through TOR, preventing (or at least greatly inhibiting) tracking. They set up the first library TOR node in New Hampshire. Homeland Security quickly suggested that they stop. But there was picketing against the shutdown, and the library turned it back on. “That makes me happy.”

2. Metadata. She has us do an image search for “beautiful woman” at Google. They’re basically all white. Metadata is sometimes political. She goes through the 200s of the Dewey Decimal system: 90% Christian. “This isn’t representative of human knowledge. It’s representative of what Melvil Dewey thought maps to human knowledge.” Libraries make certain viewpoints more computationally accessible than others. “Our ability to write new apps is only as good as the metadata under them.” “As we go on to a more computational library world — which is awesome — we’re going to fossilize all these old prejudices. That’s my fear.”

“My hope is that we’ll have the support, conviction and empathy to write software, and to demand software, that makes our libraries better, and more fair.”

Jeffrey: He says his peculiar interest is in how we use space to build libraries as architectures of knowledge. “Libraries are one of our most ancient institutions.” “Libraries have constantly undergone change,” from mausoleums, to cloisters, to warehouses, places of curatorial practice, and civic spaces. “The legacy of that history…has traces of all of those historical identities.” We’ve always faced the question “What is a library?” What are its services? How does it serve its customers? Architects and designers have responded to this, assuming a set of social needs, opportunities, fantasies, and the practices by which knowledge is created, refined, shared. “These are all abiding questions.”

Contemporary architects and designers are often excited by library projects because it crystallizes one of the most central questions of the day: “How do you weave together information and space?” We’re often not very good at that. The default for libraries has been: build a black box.

We have tended to associate libraries with collections. “If you ask what is a library?, the first answer you get is: a collection.” But libraries have also always been about the making of connections, i.e., how the collections are brought alive. E.g., the Alexandrian Library was a performance space. “What does this connection space look like today?” In his book with Matthew Battles, they argue that while we’ve thought of libraries as being a single institution, in fact today there are now many different types of libraries. E.g., the research library as an information space seems to be collapsing; the researchers don’t need reading rooms, etc. But civic libraries are expanding their physical practices.

We need to be talking about many different types of libraries, each with their own services and needs. The Library as an institution is on the wane. We need to proliferate and multiply the libraries to serve their communities and to take advantage of the new tools and services. “We need spaces for learning,” but the stack is just one model.


Dan: Mike O’Malley says that our image of reading is in a salon with a glass of port, but in grad school we’re taught to read a book the way a sous chef guts a fish. A study of academic ebooks says that 75% of scholars read fewer than 50 pages of them. [I may have gotten that slightly wrong. Sorry.] Assuming a proliferation of forms, what can we do to address them?

Jeffrey: The presuppositions about how we package knowledge are all up for grabs now. “There’s a vast proliferation of channels. And that’s a design opportunity.” How can we create audiences that would never have been part of the traditional distribution models? “I’m really excited about getting scholars and creative practitioners involved in short-form knowledge and the spectrum of ways you can intersect” the different ways we use these different forms. “That includes print.” There’s “an extraordinary explosion of innovation around print.”

Andromeda: “Reading is a shorthand. Library is really about transforming people and one another by providing access to information.” Reading is not the only way of doing this. E.g., in maker spaces people learn by using their hands. “How can you support reading as a mode of knowledge construction?” Ten years ago she toured the Olin College library, which was just starting. The library had chairs and whiteboards on castors. “This is how engineers think”: they want to be able to configure a space on the fly, and have toys for fidgeting. E.g., her eight-year-old has to be standing and moving if she’s asked a hard question. “We need to think of reading as something broader than dealing with a text in front of you.”

Jeffrey: The DPLA has a location in the name — America. The French National Library wants to collect “the French Internet.” But what does that mean? The Net seems to be beyond locality. What role does place play?

Dan: From the beginning we’ve partnered with Europeana. We reused Europeana’s metadata standard, enabling us to share items. E.g., Europeana’s 100th anniversary of the Great War web site was able to seamlessly pull in content from the DPLA via our API, and from other countries. “The DPLA has materials in over 400 languages,” and actively partners with other international libraries.

Dan acknowledges Amy Ryan (the DPLA chairperson, who is in the audience) and points to the construction of glass walls that let people see into the Boston Public Library. This increases “permeability.” When she was head of the BPL, she lowered the stacks on the second floor so now you can see across the entire floor. Permeability “is a very smart architecture” for both physical and digital spaces.

Jeff: Rendering visible a lot of the invisible stuff that libraries do is “super-rich,” assuming the privacy concerns are addressed.

Andromeda: Is there scope in the DPLA metadata for users to address the inevitable imbalances in the metadata?

Dan: We collect data from 1,600 different sources. We normalize the data, which is essential if you want to enable it for collaboration. Our Metadata Application Profile v. 4 adds a field for annotation. Because we’re only a dozen people, we haven’t created a crowd-sourcing tool, but all our data is CC0 (public domain) so anyone who wants to can create a tool for metadata enhancement. If people do enhance it, though, we’ll have to figure out whether we import that data into the DPLA.

Jeffrey: The politics of metadata and taxonomy has a long history. The Enlightenment fantasy is for a universal metadata school. What does the future look like on this issue?

Andromeda: “You can have extremely crowdsourced metadata, but then you’re subject to astroturfing” and popularity boosting results for bad reasons. There isn’t a great solution except insofar as you provide frameworks for data that enable many points of view and actively solicit people to express themselves. But I don’t have a solution.

Dan: E.g., at DPLA there are lots of ways of entering dates. We don’t want to force a scheme down anyone’s throat. But the tension between crowdsourced and more professional curation is real. The Indianapolis Museum of Art allowed freeform tagging and compared the crowdsourced tags with the professional ones. Among the crowdsourced tags, “sea” and “orange” were big, terms that curators generally don’t use.


Q: People structure knowledge differently. My son has ADHD. Or Nepal, where I visited recently.

A: Dan: It’s great that the digital can be reformatted for devices but also for other cultural views. “That’s one of the miraculous things about the digital.” E.g., digital book shelves like StackLife can reorder themselves depending on the query.

Jeff: Yes, these differences can be profound. “Designing for that is a challenge but really exciting.”

Andromeda: This is why it’s so important to talk with lots of people and to enable them to collaborate.

me: Linked data seems to resolve some of these problems with metadata.

Dan: Linked Data provides a common reference for entities. Allows harmonizing data. The DPLA has a slot for such IDs (which are URIs). We’re getting there, but it’s not our immediate priority. [Blogger’s prerogative: Having many references for an item linked via “sameAs” relationships can help get past the prejudice that can manifest itself when there’s a single canonical reference link. But mainly I mean that because Linked Data doesn’t have a single record for each item, new relationships can be added relatively easily.]
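
To make that aside concrete, here’s a tiny sketch of my own (the URIs are invented, and this is not DPLA code) showing what owl:sameAs equivalences look like in Python’s rdflib:

```python
# My sketch, not DPLA code: made-up URIs asserted to denote the
# same item via owl:sameAs, so no single link has to be canonical.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
item = URIRef("https://example.org/item/42")

g.add((item, OWL.sameAs, URIRef("https://example.com/records/abc")))
g.add((item, OWL.sameAs, URIRef("https://example.net/works/xyz")))

# A new equivalence later is just one more triple; no master
# record has to be rewritten.
print(g.serialize(format="turtle"))
```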

Q: How do business and industry influence libraries? E.g., Google has images for every place in the world. They have scanned books. “I can see a triangulation happening. Virtual libraries? Virtual spaces?”

Andromeda: (1) Virtual tech is written outside of libraries, almost entirely. So it depends on what libraries are able to demand and influence. (2) Commercial tech sets expectations for what user experiences should be like, which libraries may not be able to support. (3) “People say ‘Why do we need libraries? It’s all online and I can pay for it.’ No, it’s not, and no, not everyone can.” Libraries should up their tech game, but there’s an existential threat.

Jeffrey: People use other spaces to connect to knowledge, e.g. coffee houses, which are now being incorporated into libraries. Some people are anxious about that loss of boundary. Being able to eat, drink, and talk is a strong “vision statement” but for some it breaks down the world of contemplative knowledge they want from a library.

Q: The National Science and Technology Library in China last week said they have the right to preserve all electronic resources. How can we do that?

Dan: Libraries have long been sites for preservation. In the 21st century we’re so focused on getting access now now now, we lose sight that we may be buying into commercial systems that may not be able to preserve this. This is the main problem with DRM. Libraries are in the forever business, but we don’t know where Amazon will be. We don’t know if we’ll be able to read today’s books on tomorrow’s devices. E.g., “I had a subscription to the Oyster ebook service, but they just went out of business. There go all my books.” Open Access advocacy is going to play a critical role. Sure, Google is a $300B business and they’ll stick around, but they drop services. They don’t have a commitment like libraries and nonprofits and universities do to being in the forever business.

Jeff: It’s a huge question. It’s really important to remember that the oldest digital documents we have are 50 years old, which isn’t even a drop in the bucket. There’s far from universal agreement about the preservation formats. Old web sites, old projects, chunks of knowledge of mine have disappeared. What does it mean to preserve a virtual world? We need open standards, and practices [missed the word]. “Digital stuff is inherently fragile.”

Andromeda: There are some good things going on in this space. The Rapid Response Social Media project is archiving (e.g., #Ferguson). Preserving software is hard: you need the software system, the hardware, etc.

Q: Disintermediation has stripped out too much value. What are your thoughts on the future of curation?

Jeffrey: There’s a high level of anxiety in the librarian community about their future roles. But I think their role comes out reinforced. It requires new skills, though.

Andromeda: In one pottery class the assignment was to make one pot. In another, it was to make 50 pots. The best pots came out of the latter. When lots of people can author lots of stuff, it’s great. That makes curation all the more critical.

Dan: The DPLA has a Curation Corps: librarians helping us organize our ebook collection for kids, which we’re about to launch with President Obama. Also: Given the growth in authorship, yes, a lot of it is Sexy Vampires, but even setting that aside, we’ll need librarians to sort through it.

Q: How will Digital Rights Management and copyright issues affect ebooks and libraries? How do you negotiate that or reform that?

Dan: It’s hard to accession a lot of things now. For many ebooks there’s no way to extract them from their DRM and they won’t move into the public domain for well over 100 years. To preserve things like that you have to break the law — some scholars have asked the Library of Congress for exemptions to the DMCA to archive films before they decay.

Q: Lightning round: How do you get people and the culture engaged with public libraries?

Andromeda: Ask yourself: Who’s not here?

Jeffrey: Politicians.

Dan: Evangelism



Restoring the Network of Bloggers

It’s good to have Hoder — Hossein Derakhshan — back. After spending six years in an Iranian jail, his voice is stronger than ever. The changes he sees in the Web he loves are distressingly real.

Hoder was in the cohort of early bloggers who believed that blogs were how people were going to find their voices and themselves on the Web. (I tried to capture some of that feeling in a post a year and a half ago.) Instead, in his great piece in Medium he describes what the Web looks like to someone extremely off-line for six years: endless streams of commercial content.

Some of the decline of blogging was inevitable. This was made apparent by Clay Shirky’s seminal post that showed that the scaling of blogs was causing them to follow a power law distribution: a small head followed by a very long tail.

Blogs could never do what I, and others, hoped they would. When the Web started to become a thing, it was generally assumed that everyone would have a home page that would be their virtual presence on the Internet. But home pages were hard to create back then: you had to know HTML, you had to find a host, you had to be so comfortable with FTP that you’d use it as a verb. Blogs, on the other hand, were incredibly easy. You went to one of the blogging platforms, got yourself a free blog site, and typed into a box. In fact, blogging was so easy that you were expected to do it every day.

And there’s the rub. The early blogging enthusiasts were people who had the time, skill, and desire to write every day. For most people, that hurdle is higher than learning how to FTP. So, blogging did not become everyone’s virtual presence on the Web. Facebook did. Facebook isn’t for writers. Facebook is for people who have friends. That was a better idea.

But bloggers still exist. Some of the early cohort have stopped, or blog infrequently, or have moved to other platforms. Many blogs now exist as part of broader sites. The term itself is frequently applied to professionals writing what we used to call “columns,” which is a shame since part of the importance of blogging was that it was a way for amateurs to have a voice.

That last value is worth preserving. It’d be good to boost the presence of local, individual, independent bloggers.

So, support your local independent blogger! Read what she writes! Link to it! Blog in response to it!

But I wonder if a little social tech might also help. What follows is a half-baked idea. I think of it as BOAB: Blogger of a Blogger.

Yeah, it’s a dumb name, and I’m not seriously proposing it. It’s an homage to Libby Miller [twitter:LibbyMiller] and Dan Brickley’s [twitter:danbri] FOAF — Friend of a Friend — idea, which was both brilliant and well-named. While social networking sites like Facebook maintain a centralized, closed network of people, FOAF enables open, decentralized social networks to emerge. Anyone who wants to participate creates a FOAF file and hosts it on her site. Your FOAF file lists who you consider to be in your social network — your friends, family, colleagues, acquaintances, etc. It can also contain other information, such as your interests. Because FOAF files are typically open, they can be read by any application that wants to provide social networking services. For example, an app could see that Libby’s FOAF file lists Dan as a friend, and that Dan’s lists Libby, Carla and Pete. And now we’re off and running in building a social network in which each person owns her own information in a literal and straightforward sense. (I know I haven’t done justice to FOAF, but I hope I haven’t been inaccurate in describing it.)
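
For the curious, here’s roughly what generating a FOAF file looks like in Python with the rdflib library. This is a sketch of mine with made-up names and URLs, not anyone’s canonical tooling:

```python
# A sketch with made-up people and URLs, not anyone's real FOAF file.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
alice = URIRef("https://example.org/alice#me")

g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
# Each foaf:knows triple is one open, self-owned edge in the graph.
g.add((alice, FOAF.knows, URIRef("https://example.org/bob#me")))
g.add((alice, FOAF.knows, URIRef("https://example.org/carla#me")))

# Publish the result at a URL you control; any app can crawl it.
print(g.serialize(format="turtle"))
```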

BOAB would do the same, except it would declare which bloggers I read and recommend, just as the old “blogrolls” did. This would make it easier for blogging aggregators to gather and present networks of bloggers. Add in some tags and now we can browse networks based on topics.

In the modern age, we’d probably want to embed BOAB information in the HTML of a blog rather than in a separate file hidden from human view, although I don’t know what the best practice would be. Maybe both. Anyway, I presume that the information embedded in HTML would be similar to what Schema.org does: information about what a page talks about is inserted into the HTML tags using a specified vocabulary. The great advantage of Schema.org is that the major search engines recognize and understand its markup, which means the search engines would be in a position to construct the initial blog networks.

In fact, Schema.org has a blog specification already. I don’t see anything like markup for a blogroll, but I’m not very good at reading specifications. In any case, how hard could it be to extend that specification? Mark a link as being to a blogroll pal, and optionally supply some topics? (Dan Brickley works on Schema.org, after all.)
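
Here’s a strawman of what such an extension might look like, built as JSON-LD from Python. To be loud about it: the blogroll property below is my invention; Schema.org defines no such thing:

```python
# Strawman BOAB markup. The "blogroll" property is invented here;
# Schema.org defines no such property (yet).
import json

boab = {
    "@context": "https://schema.org",
    "@type": "Blog",
    "url": "https://example.org/blog",  # placeholder URL
    "author": {"@type": "Person", "name": "Alice"},
    # Hypothetical property: blogs this blogger reads and recommends.
    "blogroll": [
        {"@type": "Blog", "url": "https://example.com/blog",
         "about": ["libraries", "metadata"]},
        {"@type": "Blog", "url": "https://example.net/blog",
         "about": ["indexing"]},
    ],
}

# Embedded in a page as JSON-LD, where crawlers already look.
print('<script type="application/ld+json">')
print(json.dumps(boab, indent=2))
print('</script>')
```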

So, imagine a BOAB widget that any blogger can easily populate with links to her favorite blog sites. The widget can then be easily inserted into her blog. Hidden from the users in this widget is the appropriate markup. Not only could the search engines then see the blogger network, so could anyone who wanted to write an app or a service.

I have 0.02 confidence that I’m getting the tech right here. But enhancing blogrolls so that they are programmatically accessible seems to me to be a good idea. So good that I have 0.98 confidence that it’s already been done, probably 10+ years ago, and probably by Dave Winer :)

Ironically, I cannot find Hoder’s personal site; it is down, at least at the moment.

More shamefully than ironically, I haven’t updated this blog’s blogroll in many years.

My recent piece in The Atlantic about whether the Web has been irremediably paved touches on some of the same issues as Hoder’s piece.



[2b2k] In defense of the library Long Tail

Two percent of Harvard’s library collection circulates every year. A high percentage of the works that are checked out are the same as the books that were checked out last year. This fact can cause reflexive tsk-tsking among librarians. But — with some heavy qualifications to come — this is as it should be. The existence of a Long Tail is not a sign of failure or waste. To see this, consider what it would be like if there were no Long Tail.

Harvard’s 73 libraries have 16 million items [source]. There are 21,000 students and 2,400 faculty [source]. If we guess that half of the library items are available for check-out, which seems conservative, that would mean that 160,000 different items (2% of the 8 million available) are checked out every year. If there were no Long Tail, then no book would be checked out more than any other. In that case, it would take the Harvard community an even fifty years before anyone would have read the same book as anyone else. And a university community in which across two generations no one has read the same book as anyone else is not a university community.
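
Spelling out that arithmetic:

```python
# The back-of-envelope numbers from the paragraph above.
total_items = 16_000_000
available = total_items // 2                   # guess: half can circulate
checked_out_per_year = int(available * 0.02)   # the 2% circulation figure
print(checked_out_per_year)                    # 160000

# With no Long Tail, i.e. no repeats, years to exhaust the collection:
print(available / checked_out_per_year)        # 50.0
```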

I know my assumptions are off. For example, I’m not counting books that are read in the library and not checked out. But my point remains: we want our libraries to have nice long tails. Library long tails are where culture is preserved and discovery occurs.

And, having said that, it is perfectly reasonable to work to lower the difference between the Fat Head and the Long Tail, and it is always desirable to help people to find the treasures in the Long Tail. Which means this post is arguing against a straw man: no one actually wants to get rid of the Long Tail. But I prefer to put it that this post argues against a reflex of thought I find within myself and have encountered in others. The Long Tail is a requirement for the development of culture and ideas, and at the same time, we should always help users to bring riches out of the Long Tail.


[misc][2b2k] Making Twitter better for disasters

I had both CNN and Twitter on yesterday all afternoon, looking for news about the Boston Marathon bombings. I have not done a rigorous analysis (nor will I, nor have I ever), but it felt to me that Twitter put forward more, and more varied, claims about the situation, and reacted faster to misstatements. CNN plodded along, but didn’t feel more reliable overall. This seems predictable given the unfiltered (or post-filtered) nature of Twitter.

But Twitter also ran into some scaling problems for me yesterday. I follow about 500 people on Twitter, which gives my stream a pace and variety that I find helpful on a normal day. But yesterday afternoon, the stream roared by, and approached filter failure. A couple of changes would help:

First, let us sort by most retweeted. When I’m in my “home stream,” let me choose a frequency of tweets so that the scrolling doesn’t become unwatchable; use the frequency to determine the threshold for the number of retweets required. (Alternatively: simply highlight highly re-tweeted tweets.)

Second, let us mute based on hashtag or by user. Some Twitter cascades I just don’t care about. For example, I don’t want to hear play-by-plays of the World Series, and I know that many of the people who follow me get seriously annoyed when I suddenly am tweeting twice a minute during a presidential debate. So let us temporarily suppress tweet streams we don’t care about.
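
Neither feature exists in Twitter’s interface, but the client-side logic is simple. Here’s a minimal sketch, with a made-up tweet structure rather than Twitter’s actual API:

```python
# A sketch of both proposed filters over a local stream; the Tweet
# structure is invented for illustration and is not Twitter's API.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    user: str
    text: str
    retweets: int
    hashtags: set = field(default_factory=set)

def filter_stream(tweets, min_retweets=0, muted_tags=frozenset(),
                  muted_users=frozenset()):
    """Keep tweets that clear the retweet threshold and hit no mutes."""
    for t in tweets:
        if t.retweets < min_retweets:
            continue
        if t.user in muted_users or t.hashtags & muted_tags:
            continue
        yield t

stream = [
    Tweet("reporter", "marathon update", retweets=500, hashtags={"boston"}),
    Tweet("fan", "home run!", retweets=80, hashtags={"worldseries"}),
]
# During a crisis: raise the threshold and mute the World Series.
for t in filter_stream(stream, min_retweets=100, muted_tags={"worldseries"}):
    print(t.user, t.text)
```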

It is a lesson of the Web that as services scale up, they need to provide more and more ways of filtering. Twitter had “follow” as an initial filter, and users then came up with hashtags as a second filter. It’s time for a new round as Twitter becomes an essential part of our news ecosystem.


[2b2k] My world leader can beat up your world leader

There’s a knowingly ridiculous thread at Reddit at the moment: Which world leader would win if pitted against other leaders in a fight to the death?

The title is a straight line begging for punchlines. And it is a funny thread. Yet, I found it shockingly informative. The shock comes from realizing just how poorly informed I am.

My first reaction to the title was “Putin, duh!” That just shows you what I know. From the thread I learned that Joseph Kabila (Congo) and Boyko Borisov (Bulgaria) would kick Putin’s ass. Not to mention that Jigme Khesar Namgyel Wangchuck (Bhutan), who would win on good looks.

Now, when I say that this thread is “shockingly informative,” I don’t mean that it gives sufficient or even relevant information about the leaders it discusses. After all, it focuses on their personal combat skills. Rather, it is an interesting example of the haphazard way information spreads when that spreading is participatory. So, we are unlikely to have sent around the Wikipedia article on Kabila or Borisov simply because we all should know about the people leading the nations of the world. Further, while there is more information about world leaders available than ever in human history, it is distributed across a huge mass of content from which we are free to pick and choose. That’s disappointing at best and disastrous at worst.

On the other hand, information is now passed around if it is made interesting, sometimes in jokey, demeaning ways, like an article that steers us toward beefcake (although the president of Ireland does make it quite high up in the Reddit thread). The information that gets propagated through this system is thus spotty and incomplete. It only becomes an occasion for serendipity if it is interesting, not simply because it’s worthwhile. But even jokey, demeaning posts can and should have links for those whose interest is piqued.

So, two unspectacular conclusions.

First, in our despair over the diminishing of a shared knowledge-base of important information, we should not ignore the off-kilter ways in which some worthwhile information does actually propagate through the system. Indeed, it is a system designed to propagate that which is off-kilter enough to be interesting. Not all of that “news,” however, is about water-skiing cats. Just most.

Second, we need to continue to have the discussion about whether there is in fact a shared news/knowledge-base that can be gathered and disseminated, whether there ever was, whether our populations ever actually came close to living up to that ideal, the price we paid for having a canon of news and knowledge, and whether the networking of knowledge opens up any positive possibilities for dealing with news and knowledge at scale. For example, perhaps a network is well-informed if it has experts on hand who can explain events at depth (and in interesting ways) on demand, rather than assuming that everyone has to be a little bit expert at everything.


[2b2k][eim] Over my head

I’m not sure how I came into possession of a copy of The Indexer, a publication by the Society of Indexers, but I thoroughly enjoyed it despite not being a professional indexer. Or, more exactly, because I’m not a professional indexer. It brings me joy to watch experts operate at levels far above me.

The issue of The Indexer I happen to have — Vol. 30, No. 1, March 2012 — focuses on digital trends, with several articles on the Semantic Web and XML-based indexes as well as several on broad trends in digital reading and digital books, and on graphical visualizations of digital indexes. All good.

I also enjoyed a recurring feature: Indexes reviewed. This aggregates snippets of book reviews that mention the quality of the indexes. Among the positive reviews, the Sunday Telegraph thinks that for the book My Dear Hugh, “the indexer had a better understanding of the book than the editor himself.” That’s certainly going on someone’s resumé!

I’m not sure why I enjoy works of expertise in fields I know little about. It’s true that I know a little about indexing because I’ve written about the organization of digital information, and even a little about indexing. And I have a lot of interest in the questions about the future of digital books that happen to be discussed in this particular issue of The Indexer. That enables me to make more sense of the journal than might otherwise be the case. But even so, what I enjoy most are the discussions of topics that exhibit the professionals’ deep involvement in their craft.

But I think what I enjoy most of all is the discovery that something as seemingly simple as generating an index turns out to be indefinitely deep. There are endless technical issues, but also fathomless questions of principle. There’s even indexer humor. For example, one of the index reviews notes that Craig Brown’s The Lost Diaries “gives references with deadpan precision (‘Greer, Germaine: condemns Queen, 13-14…condemns pineapple, 70…condemns fat, thin and medium sized women, 93…condemns kangaroos, 122’).”

As I’ve said before, everything is interesting if observed at the right level of detail.


[eim][2b2k] The DSM — never entirely correct

The American Psychiatric Association has approved its new manual of diagnoses — Diagnostic and Statistical Manual of Mental Disorders — after five years of controversy [nytimes].

For example, it has removed Asperger’s as a diagnosis, lumping it in with autism, but it has split out hoarding from the more general category of obsessive-compulsive disorder. Lumping and splitting are the two most basic activities of cataloguers and indexers. There are theoretical and practical reasons for sometimes lumping things together and sometimes splitting them, but they also characterize personalities. Some of us are lumpers, and some of us are splitters. And all of us are a bit of each at various times.

The DSM runs into the problems faced by all attempts to classify a field. Attempts to come up with a single classification for a complex domain try to impose an impossible order:

First, there is rarely (ever?) universal agreement about how to divvy up a domain. There are genuine disagreements about which principles of organization ought to be used, and how they apply. Then there are the Lumper vs. the Splitter personalities.

Second, there are political and economic motivations for dividing up the world in particular ways.

Third, taxonomies are tools. There is no one right way to divide up the world, just as there is no one way to cut a piece of plywood and no one right thing to say about the world. It depends what you’re trying to do. DSM has conflicting purposes. For one thing, it affects treatment. For example, the NY Times article notes that the change in the classification of bipolar disease “could ‘medicalize’ frequent temper tantrums,” and during the many years in which the DSM classified homosexuality as a syndrome, therapists were encouraged to treat it as a disease. But that’s not all the DSM is for. It also guides insurance payments, and it affects research.

Given this, do we need the DSM? Maybe for insurance purposes. But not as a statement of where nature’s joints are. In fact, it’s not clear to me that we even need it as a single source to define terms for common reference. After all, biologists don’t agree about how to classify species, but that science seems to be doing just fine. The Encyclopedia of Life takes a really useful approach: each species gets a page, but the site provides multiple taxonomies so that biologists don’t have to agree on how to lump and split all the forms of life on the planet.

If we do need a single diagnostic taxonomy, DSM is making progress in its methodology. It has more publicly entered the fray of argument, it has tried to respond to current thinking, and it is now going to be updated continuously, rather than every 5 years. All to the good.

But the rest of its problems are intrinsic to its very existence. We may need it for some purposes, but it is never going to be fully right…because tools are useful, not true.


[2b2k][eim]Digital curation

I’m at the “Symposium on Digital Curation in the Era of Big Data” held by the Board on Research Data and Information of the National Research Council. These liveblog notes cover (in some sense — I missed some folks, and have done my usual spotty job on the rest) the morning session. (I’m keynoting in the middle of it.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Alan Blatecky [pdf] from the National Science Foundation says science is being transformed by Big Data. [I can't see his slides from the panel at front.] He points to the increase in the volume of data, but we haven’t paid enough attention to the longevity of the data. And, he says, some data is centralized (LHC) and some is distributed (genomics). And, our networks are unable to transport large amounts of data [see my post], making where the data is located quite significant. NSF is looking at creating data infrastructures. “Not one big cloud in the sky,” he says. Access, storage, services — how do we make that happen and keep it leading edge? We also need a “suite of policies” suitable for this new environment.

He closes by talking about the Data Web Forum, a new initiative to look at a “top-down governance approach.” He points positively to the IETF’s “rough consensus and running code.” “How do we start doing that in the data world?” How do we get a balanced representation of the community? This is not a regulatory group; everything will be open source, and progress will be through rough consensus. They’ve got some funding from gov’t groups around the world. (Check for more info.)

Now Josh Greenberg from the Sloan Foundation. He points to the opportunities presented by aggregated Big Data: the effects on social science, on libraries, etc. But the tools aren’t keeping up with the computational power, so researchers are spending too much time mastering tools, plus it can make reproducibility and provenance trails difficult. Sloan is funding some technical approaches to increasing the trustworthiness of data, including in publishing. But Sloan knows that this is not purely a technical problem. Everyone is talking about data science. Data scientist defined: Someone who knows more about stats than most computer scientists, and can write better code than typical statisticians :) But data science needs to better understand stewardship and curation. What should the workforce look like so that the data-based research holds up over time? The same concerns apply to business decisions based on data analytics. The norms that have served librarians and archivists of physical collections now apply to the world of data. We should be looking at these issues across the boundaries of academics, science, and business. E.g., economics research now rests on data from Web businesses, the US Census, etc.

[I couldn't liveblog the next two — Michael and Myron — because I had to leave my computer on the podium. The following are poor summaries.]

Michael Stebbins, Assistant Director for Biotechnology in the Office of Science and Technology Policy in the White House, talked about the Administration’s enthusiasm for Big Data and open access. It’s great to see this degree of enthusiasm coming directly from the White House, especially since Michael is a scientist and has worked for mainstream science publishers.

Myron Gutmann, Ass’t Dir of the National Science Foundation, likewise expressed commitment to open access, and said that there would be an announcement in Spring 2013 that in some ways will respond to the recent UK and EC policies requiring the open publishing of publicly funded research.

After the break, there’s a panel.

Anne Kenney, Dir. of Cornell U. Library, talks about the new emphasis on digital curation and preservation. She traces this back at Cornell to 2006 when an E-Science task force was established. She thinks we now need to focus on e-research, not just e-science. She points to Walters and Skinner’s “New Roles for New Times: Digital Curation for Preservation.” When it comes to e-research, Anne points to the need for metadata stabilization, harmonizing applications, and collaboration in virtual communities. Within the humanities, she sees more focus on curation, the effect of the teaching environment, and more of a focus on scholarly products (as opposed to the focus on scholarly process, as in the scientific environment).

She points to Youngseek Kim et al.’s “Education for eScience Professionals”: digital curators need not just subject domain expertise but also project management and data expertise. [There’s lots of info on her slides, which I cannot begin to capture.] The report suggests an increasing focus on people-focused skills: project management, bringing communities together.

She very briefly talks about Mary Auckland’s “Re-Skilling for Research” and Williford and Henry, “One Culture: Computationally Intensive Research in the Humanities and Sciences.”

So, what are research libraries doing with this information? The Association of Research Libraries has a job announcements database. And Tito Sierra did a study last year analyzing 2011 job postings. He looked at 444 job descriptions. 7.4% of the jobs were “newly created or new to the organization.” New management-level positions were significantly over-represented, while subject specialist jobs were under-represented.

Anne went through Tito’s data and found 13.5% have “digital” in the title. There were more digital humanities positions than e-science. She posts a list of the new titles jobs are being given, and they’re digilicious. 55% of those positions call for a library science degree.

Anne concludes: It’s a growth area, with responsibilities more clearly defined in the sciences. There’s growing interest in serving the digital humanists. “Digital curation” is not common in the qualifications nomenclature. MLS or MLIS is not the only path. There’s a lot of interest in post-doctoral positions.

Margarita Gregg of the National Oceanic and Atmospheric Administration, begins by talking about challenges in the era of Big Data. They produce about 15 petabytes of data per year. It’s not just about Big Data, though. They are very concerned with data quality. They can’t preserve all versions of their datasets, and it’s important to keep track of the provenance of that data.

Margarita directs one of NOAA’s data centers that acquires, preserves, assembles, and provides access to marine data. They cannot preserve everything. They need multi-disciplinary people, and they need to figure out how to translate this data into products that people need. In terms of personnel, they need: Data miners, system architects, developers who can translate proprietary formats into open standards, and IP and Digital Rights Management experts so that credit can be given to the people generating the data. Over the next ten years, she sees computer science and information technology becoming the foundations of curation. There is no currently defined job called “digital curator” and that needs to be addressed.

Vicki Ferrini at the Lamont-Doherty Earth Observatory at Columbia University works on data management, metadata, discovery tools, educational materials, best practice guidelines for optimizing acquisition, and more. She points to the increased communication between data consumers and producers.

As data producers, the goal is scientific discovery: data acquisition, reduction, assembly, visualization, integration, and interpretation. And then you have to document the data (= metadata).

Data consumers: They want data discoverability and access. Increasingly they are concerned with the metadata.

The goal of data providers is to provide access, preservation and reuse. They care about data formats, metadata standards, interoperability, the diverse needs of users. [I’ve abbreviated all these lists because I can’t type fast enough.]

At the intersection of these three domains is the data scientist. She refers to this as the “data stewardship continuum” since it spans all three. A data scientist needs to understand the entire life cycle, have domain experience, and have technical knowledge about data systems. “Metadata is key to all of this.” Skills: communication and organization, understanding the cultural aspects of the user communities, people and project management, and a balance between micro- and macro perspectives.

Challenges: Hard to find the right balance between technical skills and content knowledge. Also, data producers are slow to join the digital era. Also, it’s hard to keep up with the tech.

Andy Maltz, Dir. of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences. AMPAS is about arts and sciences, he says, not about The Business.

The Science and Technology Council was formed in 2005. They have lots of data they preserve. They’re trying to build the pipeline for next-generation movie technologists, but they’re falling behind, so they have an internship program and a curriculum initiative. He recommends we read their study The Digital Dilemma. It says that there’s no digital solution that meets film’s requirement to be archived for 100 years at a low cost. It costs $400/yr to archive a film master vs $11,000 to archive a digital master (as of 2006) because of labor costs. [Did I get that right?] He says collaboration is key.

In January they released The Digital Dilemma 2. It found that independent filmmakers, documentarians, and nonprofit audiovisual archives are loosely coupled, widely dispersed communities. This makes collaboration more difficult. The efforts are also poorly funded, and people often lack technical skills. The report recommends the next gen of digital archivists be digital natives. But the real issue is technology obsolescence. “Technology providers must take archival lifetimes into account.” Also system engineers should be taught to consider this.

He highly recommends the Library of Congress’ “The State of Recorded Sound Preservation in the United States,” which rings an alarm bell. He hopes there will be more doctoral work on these issues.

Among his controversial proposals: Require higher math scores for MLS/MLIS students since they tend to score lower than average on that. Also, he says that the new generation of content creators has no curatorial awareness. Executives and managers need to know that this is a core business function.

Demand side data points: 400 movies/year at 2PB/movie. CNN has 1.5M archived assets, and generates 2,500 new archive objects/wk. YouTube: 72 hours of video uploaded every minute.


  • Show business is a business.

  • Need does not necessarily create demand.

  • The nonprofit AV archive community is poorly organized.

  • Next gen needs to be digital natives with strong math and sci skills.

  • The next gen of executive leaders needs to understand the importance of this.

  • Digital curation and long-term archiving need a business case.


Q: How about linking the monetary value of the metadata to the metadata? That would encourage the generation of metadata.

Q: Weinberger paints a picture of a flexible world of flowing data, and now we’re back in the academic, scientific world where you want good data that lasts. I’m torn.

A: Margarita: We need to look at how the data are being used. Maybe in some circumstances the quality of the data doesn’t matter. But there are other instances where you’re looking for the highest quality data.

A: [audience] In my industry, one person’s outtakes are another person’s director’s cut.

A: Anne: In the library world, we say that if a little metadata would be good, a lot would be great. We need to step away from trying to capture the most metadata and toward capturing the most useful (since we can’t capture everything). And how do you produce data in a way that’s opened up to future users, as well as being useful for its primary consumers? It’s a very interesting balance that needs to be struck. Maybe the short-term need ranks higher and the long-term lower.

A: Vicki: The scientists I work with use discrete data sets, spreadsheets, etc. As we go along we’ll have new ways to check the quality of datasets so we can use the messy data as well.

Q: Citizen curation? E.g., a lot of antiques are curated by being put into people’s attics…Not sure what that might imply as a model. Two parallel models?

A: Margarita: We’re going to need to engage anyone who’s interested. We need to incorporate citizen curation.

Anne: That’s already underway where people have particular interests. E.g., Cornell’s Lab of Ornithology where birders contribute heavily.

Q: What one term will bring people info about this topic?

A: Vicki: There isn’t one term, which speaks to the linked data concept.

Q: How will you recruit people from all walks of life to have the skills you want?

A: Andy: We need to convince people way earlier in the educational process that STEM is cool.

A: Anne: We’ll have to rely to some degree on post-hire education.

Q: My shop produces and integrates lots of data. We need people with domain and computer science skills. They’re more likely to come out of the domains.

A: Vicki: As long as you’re willing to take the step across the boundary, it doesn’t matter which side you start from.

Q: 7 yrs ago in library school, I was told that you need to learn a little programming so that you understand it. I didn’t feel like I had to add a whole other profession on to the one I was studying.


[everythingismisc] Scaling Japan

MetaFilter popped up a three-year-old post from Derek Sivers about how street addresses work in Japan. The system does a background-foreground duck-rabbit Gestalt flip on Western addressing schemes. I’d already heard about it — book-larnin’ because I’ve never been to Japan — but the post got me thinking about how things scale up.

What we would identify by street address, the Japanese identify by house number within a block name. Within a block, the addresses are non-sequential, reflecting instead the order of construction.

I can’t remember where I first read about this (I’m pretty sure I wrote about it in Everything Is Miscellaneous), but it pointed out some of the assumptions and advantages of this system: it assumes local knowledge, confuses invaders, etc. But my reaction then was the same as when I read Derek’s post this morning: Yeah, but it doesn’t scale. Confusing invaders is a positive outcome of a failure to scale, but getting tourists lost is not. The math just doesn’t work: 4 streets intersected by 4 avenues creates 9 blocks, but add just 2 more streets and 2 more avenues and you’ve enclosed another 16 blocks. So, to navigate a large Western city you have to know many, many fewer streets and avenues than the number of existing blocks.
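
The grid arithmetic, spelled out:

```python
# Street names grow linearly; enclosed blocks grow quadratically.
def grid(streets, avenues):
    names = streets + avenues              # what a navigator memorizes
    blocks = (streets - 1) * (avenues - 1)
    return names, blocks

print(grid(4, 4))    # (8, 9):   8 names cover 9 blocks
print(grid(6, 6))    # (12, 25): 4 more names cover 16 more blocks
print(grid(20, 20))  # (40, 361): the gap keeps widening
```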

But of course I’m wrong. Tokyo hasn’t fallen apart because there are too many blocks to memorize. Clearly the Japanese system does scale.

In part that’s because according to the Wikipedia article on it, blocks are themselves located within a nested set of named regions. So you can pop up the geographic hierarchy to a level where there are fewer entities in order to get a more general location, just as we do with towns, counties, states, countries, solar system, galaxy, the universe.

But even without that, the Japanese system scales in ways that peculiarly mirror how the Net scales. Computers have scaled information in the Western city way: bits are tucked into chunks of memory that have sequential addresses. (At least they did the last time I looked in 1987.) But the Internet moves packets to their destinations much the way a Japanese city’s inhabitants might move inquiring visitors along: You ask someone (who we will call Ms. Router) how to get to a particular place, and Ms. Router sends you in a general direction. After a while you ask another person. Bit by bit you get closer, without anyone having a map of the whole.
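
Here’s a toy version of that hop-by-hop idea. It captures only the shape of packet forwarding, not real IP routing:

```python
# Toy forwarding tables: each "router" knows only a next hop per
# destination, never the whole map.
tables = {
    "A": {"tokyo": "B", "default": "B"},
    "B": {"tokyo": "C", "default": "A"},
    "C": {"tokyo": "C", "default": "B"},  # C delivers "tokyo" locally
}

def route(start, dest, max_hops=10):
    path, here = [start], start
    for _ in range(max_hops):
        nxt = tables[here].get(dest, tables[here]["default"])
        if nxt == here:                    # delivered
            return path
        path.append(nxt)
        here = nxt
    raise RuntimeError("undeliverable: no router claimed the destination")

print(route("A", "tokyo"))                 # ['A', 'B', 'C']
```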

At the other end of the stack of abstraction, computers have access to such absurdly large amounts of information either locally or in the cloud — and here namespaces are helpful — that storing the block names and house numbers for all of Tokyo isn’t such a big deal. Point your mobile phone at Google Maps’ Tokyo map if you need proof. With enough memory, we do not need to scale physical addresses by using schemes that reduce them to streets and avenues. We can keep the arrangement random and just look stuff up. In the same way, we can stock our warehouses in a seemingly random order and rely on our computers to tell us where each item is; this has the advantage of letting us put the most requested items up front, or on the shelves that require humans to do the least bending or stretching.
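
In code, the random-shelf-plus-index trick is just a dictionary. A quick sketch:

```python
# Chaotic storage: items go into any free slot; only the index
# knows where anything is, so lookup stays O(1).
import random

slots = [None] * 100    # shelf positions in no meaningful order
index = {}              # item name -> slot number

def store(item):
    free = [i for i, s in enumerate(slots) if s is None]
    slot = random.choice(free)
    slots[slot] = item
    index[item] = slot

def find(item):
    return index[item]  # the computer remembers so humans don't

store("umbrella")
store("teapot")
print(find("teapot"))
```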

So, I’m obviously wrong. The Japanese system does scale. It just doesn’t scale in the ways we relied on when memory spaces were relatively small.