
[2b2k] Citizen scientists

Alex Wright has an excellent article in the New York Times today about the great work being done by citizen scientists. (Alex follows up in his blog with some more worthy citizen science efforts.)

Alex, whom I met a few years ago at a conference because we had written books on similar topics — his excellent Glut and my Everything Is Miscellaneous — quotes me a couple of times in the article. The first time, I say that the people who are gathering data and classifying images “are not doing the work of scientists.” Some in the comments have understandably taken issue with that characterization. It’s something I deal with at some length in Too Big to Know. Because of the curtness of the comment, it could easily be taken as dismissive, which was not my intent; these volunteers are making a real contribution, as Alex’s article documents. But, in many of the projects Alex discusses (and that I discuss in my manuscript), the volunteers are doing work for which they need no scientific training. They are doing the work of science — gathering data certainly counts — but not the work of scientists. But that’s what makes it such an exciting time: You don’t need a degree or even training beyond the instructions on a Web page, and you can be part of a collective effort that advances science. (I think commenter kc makes a good argument against my position on this.)

FWIW, the origins of my participation in the article were a discussion with Alex about why, in this age of the amateur, it’s so hard to find serious leaps in scientific thinking coming from amateurs. Amateurs drove science more in the 19th century than now. Of course, that’s not an apples-to-apples comparison because of the professionalization of science in the 20th century. Also, so much of basic science now requires access to equipment far too expensive for amateurs. (Although that’s scarily not the case for gene sequencers.)


[2b2k] Understanding as the deepest driver

I’m working on a talk that asks why our greatest institutions have trembled, if not shattered, before the tiny silver hammer of the hyperlink. One tap and, boom, down come newspapers, the recording industry, traditional encyclopedias… Why?

I recognize there are many ways of explaining any complex event. When it comes to understanding the rise of the Net, I tend to pay insufficient attention to economic explanations and to historic explanations based around large players. I’m not justifying that inattention; I’m just copping to it. I tend instead to look first at the Net as a communications medium, and to see the changes in light of how what moves onto the Net takes on the properties of the Internet’s sort of network: loose, huge, center-less, without shape, etc.

But, then you have to ask why we flocked to that sort of medium. Why did it seem so inviting? Again, there are multiple explanations, and we need them all. But, perhaps because of some undiagnosable quirk, I tend to understand this in terms of our mental model of who we are and how we live together. My explanatory model hits rock bottom (possibly in both senses) when I see the new network model as more closely fitting what (I believe) we’ve known all along: we are more social than the old model thought, the world is more interesting than the old model thought, we are more fallible and confused than the old model wanted us to believe.

(Now that I think of it, that’s pretty much what my book Small Pieces Loosely Joined was about. So, eight years later, to my surprise, I still basically agree with myself!)

My preference for understanding-based explanations undoubtedly reflects my own personality and unexplored beliefs. I don’t believe there is one bedrock that is bedrockier than all the others.


[2b2k] ExpertNet for OpenGov

From the ExpertNet site:

The United States General Services Administration (GSA) and the White House Open Government Initiative are soliciting your feedback on a concept for next generation citizen consultation, namely a government-wide software tool and process to elicit expert public participation (working title “ExpertNet”). ExpertNet could:

Enable government officials to circulate notice of opportunities to participate in public consultations to members of the public with expertise on a topic.

Provide those volunteer experts with a mechanism to provide useful, relevant, and manageable feedback back to government officials.

The proposed concept is intended to be complementary to two of the ways the Federal government currently obtains expertise to inform decision-making, namely by convening Federal Advisory Committees and announcing public comment opportunities in the Federal Register.

Take a look at the example in the editable part of the wiki. (And, yes, I did say that parts of the wiki are editable. Thank you for trusting us, my government!)

The only thing I object to in this brilliant idea is that it comes too late for inclusion as an example in my book. Why, those dirty government dogs!

(via Craig Newmark)


[2b2k] Too Many Leaks to Know

Jeremy Wagstaff has a terrific post looking at the leaked cables not as a security problem but as an information problem. Too much data, not enough metadata, not enough sharing, not enough ability to sort and make sense of them all.

I hesitate to excerpt some key paragraphs from it for fear of distracting you from the post in its entirety. Nevertheless:

…the problem that WikiLeaks unearths is that the most powerful nation on earth doesn’t seem to have any better way of working with all this information than anyone else. Each cable has some header material—who it’s intended for, who it’s by, and when it was written. Then there’s a line called TAGS, which, in true U.S. bureaucratic style, doesn’t actually mean tags but “Traffic Analysis by Geography and Subject”—a state department system to organize and manage the cables. Many are two letter country or regional tags—US, AF, PK etc—while others are four letter subject tags—from AADP for Automated Data Processing to PREL for external political relations, or SMIG for immigration related terms.

Of course there’s nothing wrong with this—the tag list is updated regularly (that last one seems to be in January 2008). You can filter a search by, say, a combination of countries, a subject tag and then what’s called a program tag, which always begins with K, such as KPAO for Public Affairs Office.

This is all very well, but it’s very dark ages. The trouble is, as my buff friend in the Kabul garden points out, there’s not much out there that’s better. A CIA or State Department analyst may use a computer to sift through the tags and other metadata, but that seems to be the only real difference between him and his Mum or Dad 50 years before.

Read the whole thing here.
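Wagstaff’s tag system amounts to a simple faceted query over the cables’ metadata. Here is a hypothetical sketch of that kind of filtering; the cable records below are invented, and only the tag codes come from the excerpt above:

```python
# Hypothetical sketch of faceted filtering by TAGS codes. The cable records are
# invented; only the tag codes (US, AF, PK, PREL, KPAO, AADP) appear in the excerpt above.
cables = [
    {"id": "cable-001", "tags": {"PK", "PREL", "KPAO"}},
    {"id": "cable-002", "tags": {"AF", "PREL"}},
    {"id": "cable-003", "tags": {"US", "AADP"}},
]

def filter_by_tags(cables, required_tags):
    """Return only the cables carrying every one of the required tags."""
    required = set(required_tags)
    return [c for c in cables if required <= c["tags"]]

# Cables about external political relations that also touch Pakistan:
print(filter_by_tags(cables, {"PK", "PREL"}))   # -> [cable-001]
```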


[bigdata] Panel: A Thousand Points of Data

Paul Ohm (law prof at U of Colorado Law School — here’s a paper of his) moderates a panel among those with lots of data. Panelists: Jessica Staddon (research scientist, Google), Thomas Lento (Facebook), Arvind Narayanan (post-doc, Stanford), and Dan Levin (grad student, U of Mich).

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Dan Levin asks what Big Data could look like in the context of law. He shows a citation network for a Supreme Court decision. “The common law is a network,” he says. He shows a movie of the citation network of first thirty years of the Supreme Court. Fascinating. Marbury remains an edge node for a long time. In 1818, the net of internal references blooms explosively. “We could have a legalistic genome project,” he says. [Watch the video here.]
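Levin’s citation network is straightforward to model: each decision is a node, each citation a directed edge. A minimal sketch using networkx; the edge list here is invented for illustration, not drawn from his data:

```python
# Minimal sketch of a citation network: decisions are nodes, citations are
# directed edges. networkx is assumed installed; the edges below are invented
# for illustration, not taken from Levin's data.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Case B (1816)", "Marbury v. Madison (1803)"),
    ("Case C (1819)", "Marbury v. Madison (1803)"),
    ("Case C (1819)", "Case B (1816)"),
])  # an edge A -> B means "A cites B"

# In-degree is a crude proxy for a precedent's influence in the network.
print(sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True))
```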

What will we be able to do with big data?

Thomas Lento (Facebook): Google flu tracking. Predicting via search terms.

Jessica Staddon (Google): Flu tracking works pretty well. We’ll see more personalization to deliver more relevant info. Maybe even tailor privacy and security settings.

Dan: If someone comes to you as a lawyer and asks if she has a case, you’ll do a better job deciding if you can algorithmically scour the PACER database of court records. We are heading for a legal informatics revolution.

Thomas: Imagine someone could tell you everything about yourself, and cross ref you with other people, say you’re like those people, and broadcast it to the world. There’d be a high potential for abuse. That’s something to worry about. Further, as data gets bigger, the granularity and accuracy of predictions gets better. E.g., we were able to beat the polls by doing sentiment analysis of msgs on Facebook that mention Obama or McCain. If I know who your friends are and what they like, I don’t actually have to know that much about you to predict what sort of ads to show you. As the computational power gets to the point where anyone can run these processes, it’ll be a big challenge…

Jessica: Companies have a heck of a lot to lose if they abuse privacy.

Helen Nissenbaum: The harm isn’t always to the individual. It can be harm to the democratic system. It’s not about the harm of getting targeted ads. It’s about the institutions that can be harmed. Could someone explain to me why, to get the benefits of something like Flu Trends, you have to be targeted down to the individual level?

Jessica: We don’t always need the raw data for doing many types of trend analysis. We need the raw data for lots of other things.

Arvind: There are misaligned incentives everywhere. For the companies, it’s collect data first and ask questions later; you never know what you’ll need.

Thomas: It’s hard to understand the costs and benefits at the individual level. We’re all looking to build the next great iteration or the next great product. The benefits of collecting all that data are not clearly defined. The cost to the user is unclear, especially down the line.

Jessica: Yes, we don’t really understand the incentives when it comes to privacy. We don’t know if giving users more control over privacy will actually cost us data.

Arvind describes some of his work on re-identification, i.e., taking anonymized data and de-anonymizing it. (Arvind worked on the deanonymizing of Netflix records.) Aggregation is a much better way of doing things, although we have to be careful about it.

Q: In other fields, we hear about distributed innovation. Does big data require companies to centralize it? And how about giving users more visibility into the data they’ve contributed — e.g., Judith Donath’s data mirrors? Can we give more access to individuals without compromising privacy?

Thomas: You can do that already at FB and Google. You can see what your data looks like to an outside person. But it’s very hard to make those controls understandable. There are capital expenditures to be able to do big data processing. So, it’ll be hard for individuals, although distributed processing might work.

Paul: Help us understand how to balance the costs and benefits? And how about the effect on innovation? E.g., I’m sorry that Netflix canceled round 2 of its contest because of the re-identification issue Arvind brought to light.

Arvind: No silver bullets. It can help to have a middleman, which helps with the misaligned incentives. This would be its own business: a platform that enables the analysis of data in a privacy-enabled environment. Data comes in one side. Analysis is done in the middle. There’s auditing and review.

Paul: Will the market do this?

Jessica: We should be thinking about systems like that, but also about the impact of giving the user more controls and transparency.

Paul: Big Data promises vague benefits — we’ll build something spectacular — but that’s a lot to ask for the privacy costs.

Paul: How much has the IRB (institutional review board) internalized the dangers of Big Data and privacy?

Daniel: I’d like to see more transparency. I’d like to know what the process is.

Arvind: The IRB is not always well suited to the concerns of computer scientists. Maybe the current monolithic structure is not the best way.

Paul: What mode of solution of privacy concerns gives you the most hope? Law? Self-regulation? Consent? What?

Jessica: The one getting the least attention is the data itself. At the root of a lot of privacy problems is the need to detect anomalies. Large data sets help with this detection. We should put more effort into turning the data around and using it for privacy protection.

Paul: Is there an incentive in the corporate environment?

Jessica: Google has taken some small steps in this direction. E.g., Google’s “Got the wrong Bob” tool for Gmail, which warns you if you seem to have included the wrong person in a multi-recipient email. [It's a useful tool. I send more email to the Annie I work with than to the Annie I'm married to, so my autocomplete keeps wanting to send the Annie I work with information about my family. Got the wrong Bob catches those errors.]

Dan: It’s hard to come up with general solutions. The solutions tend to be highly specific.

Arvind: Consent. People think it doesn’t work, but we could reboot it. M. Ryan Calo at Stanford is working on “visceral notice,” rather than burying consent at the end of a long legal notice.

Thomas: Half of our users have used privacy controls, despite what people think. Yes, our controls could be simpler, but we’ve been working on it. We also need to educate people.

Q: FB keeps shifting the defaults more toward disclosure, so users have to go in and set them back.
Thomas: There were a couple of privacy migrations. It’s painful to transition users, and we let them adjust privacy controls. There is a continuum between the value of the service and privacy: if everything were private, the service would have no value. It also wouldn’t work if everything were open: people will share more if they feel they control who sees it. We think we’ve stabilized it and are working on simplification and education.

Paul: I’d pick a different metaphor: The birds flying south in a “privacy migration”…

Thomas: In FB, you have to manage all these pieces of content that are floating around; you can’t just put them in your “house” for them to be private. We’ve made mistakes but have worked on correcting them. We’re struggling with a mode of control over info and privacy that is still very new.


[bigdata] Ensuring Future Access to History

Brewster Kahle, Victoria Stodden, and Richard Cox are on a panel, chaired by the National Archives’ Director of Litigation, Jason Baron. The conference is being put on by Princeton’s Center for Information Technology Policy.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Brewster goes first. He’s going to talk about “public policy in the age of digital reproduction.” “We are in a jam,” he says, because of how we have viewed our world as our tech has changed. Brewster founded the Internet Archive, a non-profit library. The aim is to make freely accessible everything ever published, from the Sumerian texts on. “Everyone everywhere ought to have access to it” — that’s a challenge worthy of our generation, he says.

He says the time is ripe for this. The Internet is becoming ubiquitous. If there aren’t laptops, there are Internet cafes. And there are mobiles. Plus, storage is getting cheaper and smaller. You can record “100 channel years” of HD TV in a petabyte for about $200,000, and store it in a small cabinet. For about $1,200, you could store all of the text in the Library of Congress. Google’s copy of the WWW is about a petabyte. The WayBack Machine uses 3 petabytes, and has about 150 billion pages. It’s used by 1.5M/day. A small organization, like the Internet Archive, can take this task on.
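Those storage figures hang together as rough arithmetic. Here is a back-of-envelope check, assuming 2010-era disk at roughly $200 per terabyte and the often-cited ballpark of ~10 TB for the Library of Congress’s text (both assumptions are mine, not Brewster’s):

```python
# Back-of-envelope check on the storage figures above. The $/TB price and the
# ~10 TB Library of Congress text estimate are assumptions for illustration.
DOLLARS_PER_TB = 200          # rough 2010-era cost of raw disk
PETABYTE_IN_TB = 1000

print(PETABYTE_IN_TB * DOLLARS_PER_TB)   # 200000 -> ~$200K per petabyte

LOC_TEXT_TB = 10              # oft-cited ballpark for all LoC text
print(LOC_TEXT_TB * DOLLARS_PER_TB)      # 2000 -> same order of magnitude as the ~$1,200 figure
```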

This archive is dynamic, he says. The average Web page has 15 links. The average Web page changes every 100 days.

There are downsides to the archive. E.g., the WayBack Machine gets used to enable lawsuits. We don’t want people to pull out of the public sphere. “Get archived, go to jail,” is not a useful headline. Brewster says that they once got an FBI letter asking for info, which they successfully fought (via the EFF). The Archive gets lots of lawyer letters. They get about 50 requests per week to have material taken out of the Archive. Rarely do people ask for other people’s stuff to be taken down. Once, the Scientologists wanted some copyright-infringing material taken down from someone else’s archived site; the Archive finally agreed to this. The Archive held a conference and came up with the Oakland Archive Policy for issues such as these.

Brewster points out that Jon Postel’s taxonomy is sticking: .com, .org, .gov, .edu, .mil … Perhaps we need separate policies for each of these, he says. And how do we take policy ideas and make them effective? E.g., if you put up a robots.txt exclusion, you will nevertheless get spidered by lots of people.

“We can build the Library of Alexandria,” he concludes, “but it might be problematic.”

Q: I’ve heard people say they don’t need to archive their sites because you will.
A: Please archive your own. More copies make us safe.

Q: What do you think about the Right to Oblivion movement, which says that we want some types of content to self-destruct on a schedule, e.g., on Facebook?
A: I have no idea. It’s really tough. Personal info is so damn useful. I wish we could keep our computers from being used against us in court; if we defined the Fifth Amendment so that who “we” are includes our computers…


Richard Cox says if you golf, you know about info overload. It used to be that you had one choice of golf ball, Top-Flite. Now they have twenty varieties.

Archives are full of stories waiting to be told, he says. “When I think about Big Data…most archivists would think we’re talking about big science, the corporate world, and government.” Most archivists work in small cultural, public institutions. Richard is going to talk about the shifting role of archivists.

As early as the 1940s, archivists were talking about machine-readable records. The debates and experiments have been going on for many decades. One early approach was to declare that electronic records were not archives, because the archives couldn’t deal with them. (Archivists and records managers have always been at odds, he says, because RM is about retention schedules, i.e., deleting records.) Over time, archivists came up to speed. By 2000, some were dealing with electronic records. In 2010, many do, but many do not. There is a continuing debate. Archivists have spent too long debating among themselves when they need to be talking with others. But, “archivists tend not to be outgoing folks.” (Archivists have had issues with the National Archives because their methods don’t “scale down.”)

There are many projects these days. E.g., we now have citizen archivists who maintain their own archives and who may contribute to public archives. Who are today’s archivists? Archival educators are redefining the role. Richard believes archives will continue, but the profession may not. He recommends reading the Clair report [I couldn't get the name or the spelling, and can't find it on Google :( ] on audio-visual archives. “I read it and I wept.” It says that we need people who understand the analog systems so that they can be preserved, but there’s no funding.


Victoria Stodden’s talk has the gloomy title “The Coming Dark Ages in Scientific Knowledge.”

She begins by pointing to the pervasive use of computers and computational methods in the sciences, and even in the humanities and law schools. E.g., Northwestern is looking at the word counts in Shakespearean works. It’s changing the type of scientific analysis we’re doing. We can do very complicated simulations that give us a new way of understanding our world. E.g., we do simulations of math proofs, quite different from the traditional deductive processes.

This means what we’re doing as scientists is being stored in scripts, code, data, etc. But science only is science when it’s communicated. If the data and scripts are not shared, the results are not reproducible. We need to act as scientists to make sure that this data etc. are shared. How do we communicate results based on enormous data sets? We have to give access to those data sets. And what happens when those data sets change (corrected or updated)? What happens to results based on the earlier sets? We need to preserve the prior versions of the data. How do we version it? How do we share it? E.g., there’s an experiment at NSF: All proposals have to include a data management plan. The funders and journals have a strong role to play here.
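One concrete (and entirely hypothetical) way to picture the versioning problem Stodden raises: pin each published result to a content hash of the exact dataset it was computed from, so later corrections to the data can’t silently detach results from their evidence. A minimal sketch; the file names and manifest layout are invented:

```python
# Sketch of pinning a result to a specific dataset version via a content hash.
# File names and the manifest format are hypothetical, for illustration only.
import hashlib
import json

def dataset_fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a data file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record, alongside the paper, exactly which bytes the analysis ran against.
manifest = {
    "dataset": "survey_2010.csv",
    "sha256": dataset_fingerprint("survey_2010.csv"),
    "analysis_script": "analysis.py",
}
print(json.dumps(manifest, indent=2))
```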

Sharing scientific knowledge is harder than it sounds, but is vital. E.g., a recent study showed that a cancer therapy will be particularly effective based on individual genomes. But, it was extremely hard to trace back the data and code used to get this answer. Victoria notes that peer reviewers do not check the data and algorithms.

Why a dark age? Because “without reproducibility, knowledge cannot be recreated or understood.” We need ways and processes of sharing. Without this, we only have scientists making proclamations.

She gives some recommendations: (1) Assessment of the expense of data/code archiving. (2) Enforcement of funding agency guidelines. (3) Publication requirements. (4) Standards for scientific tools. (5) Versioning as a scientific principle. (6) Licensing to realign scientific intellectual property with longstanding scientific norms (Reproducible Research Standard). [verbatim from her slide] Victoria stresses the need to get past the hurdles copyright puts in the way.

Q: Are you a pessimist?
A: I’m an optimist. The scientific community is aware of these issues and is addressing them.

Q: Do we need an IRS for the peer review process?
A: Even just the possibility that someone could look at your code and data is enough to make scientists very aware of what they’re doing. I don’t advocate code checking as part of peer review because it takes too long. Instead, throw your paper out into the public while it’s still being reviewed and let other scientists have at it.

Q: [rick] Every age has lost more info than it has preserved. This is not a new problem. Every archivist from the beginning of time has had to cope with this.


Jason Baron of the National Archives (who is not speaking officially) points to the volume of data the National Archives (NARA) has to deal with. E.g., in 2001, 32 million emails were transferred to NARA; in 2009, it was 250+ million. He predicts there will be a billion presidential emails by 2017 held at NARA. The first lawsuit over email was filed in 1989 (email=PROFS). Right now, the official policy of 300 govt agencies is to print email out for archiving. We can no longer deal with the info flow with manual processes. Processing of printed pages occurs when there’s a lawsuit or a FOIA request. Jason is pushing on the value of search as a way of encouraging systematic intake of digital records. He dreams of search algorithms that retrieve all relevant materials. There are clustering algorithms emerging within law that hold hope. He also wants to retrieve docs other than via key words. Visual analytics can help.
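The clustering Baron alludes to can be illustrated in a few lines: group documents by the similarity of their text rather than by whether they match a keyword. A toy sketch using scikit-learn; the sample “emails” are invented, and this is nobody’s actual pipeline:

```python
# Toy sketch of clustering documents by textual similarity, in the spirit of the
# clustering approaches mentioned above. The sample texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

emails = [
    "Meeting on records retention schedule next Tuesday",
    "Updated retention schedule attached for review",
    "FOIA request received regarding contract awards",
    "Second FOIA request on the same contract, please advise",
]

X = TfidfVectorizer(stop_words="english").fit_transform(emails)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. [0 0 1 1]: retention-schedule emails vs. FOIA emails
```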

There are three languages we need: Legal, Records Management, and IT. How do we make the old ways work in the new? We need new filtering techniques, but also traditional notions of appraisal. “The neutral archivist may serve as an unbiased resource for the filtering of information in an increasingly partisan (untrustworthy) world” [from the slide].


From facts to data to commons

I’m keynoting a conference on Big Data at Princeton’s Center for Information Technology Policy on Tuesday. No, I don’t know why they asked me either. Here’s an outline of what I plan on saying, although the talk is far from gelled.

“How sweet is the perception of a new natural fact,” Thoreau rhapsodized, and Darwin spent seven years trying to pin down one fact about barnacles. We were enamored with facts in the 19th century, when facts were used to nail down social policy debates. We have hoped and wished ever since that facts would provide a bottom to arguments: We can dig our way down and come to agreement. Indeed, in 1963, Bernard Forscher wrote a letter to Science, “Chaos in the Brickyard,” complaining that young scientists were generating too many facts and not enough theories; too many facts lead to chaos.

This is part and parcel of the traditional Western strategy for knowing our world. In a world too big to know™, our basic strategy has been to filter, reduce, and fragment knowledge. This was true all the way through the Information Age. Our fear of information overload now seems antiquated. Not only is there “no such thing as information overload, only filter failure” (Clay Shirky, natch), but in the digital age the nature of filters changes. On the Net, we do not filter out. We filter forward. That is, on the Net, a filter merely shortens the number of clicks it takes to get to an object; all the other objects remain accessible.

This changes the role and nature of expertise, and of knowledge itself. For traditional knowledge is a system of stopping points for inquiry. This is very efficient, but it’s based on the limitations of paper as knowledge’s medium. Indeed, science has viewed itself as a type of publishing: It is done in private and not made public until it’s certain-ish.

But the networking of knowledge gives us a new strategy. We will continue to use the old one where appropriate. Networked knowledge is abundant, unsettled, never done, public, imperfect, and contains disagreements within itself.

So, let’s go back to facts. [Work on that transition!] In the Age of Facts, we thought that facts provided a picture of the world — the Book of Nature was written in facts. Now we are in the Age of Big Data. This is different from the Info Age, when data actually was fairly scarce. The new data is super-abundant, linked, fallible, and often recognizably touched by frail humans. Unlike with facts, these data are [Note to self: Remember to use plural...the sign of quality!] often used to unnail things rather than to nail them down. While individual arguments, of course, use data to support hypotheses or to nail down conclusions, the system we’ve built for ourselves overall is more like a resource to support exploration, divergence, and innovation. Despite Bernard Forscher, too many facts, in the form of data, do not lead to chaos but to a commons.


[2b2k] Curation = Relevancy ranking

I was talking with Sophia Liu at Defrag, and disagreed with her a bit about one thing she said as she was describing the dissertation she’s working on. I said that relevancy ranking and curation are different things. But then I thought about it for a moment and realized that given what my book (“Too Big to Know”) says, and what I had said that very morning in my talk at Defrag, they are not different at all, and Sophia was right.

Traditional filters filter out. You don’t see what fails to pass through them: You don’t see the books the library decided not to buy or the articles the newspaper decided not to publish. Filtering on the Web is different: When I blog my list of top ten movie reviews, all the other movie reviews are still available to you. Web filters filter forward, not out. Thus, curation on the Web consists of filtering forward, which is indistinguishable from relevancy ranking (although the relevancy is to my own sense of what’s important).
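The distinction is easy to put in code: filtering out keeps a subset and discards the rest, while filtering forward merely reorders, so everything stays reachable. A minimal sketch with made-up reviews and relevance scores:

```python
# Minimal sketch of "filtering out" vs. "filtering forward."
# The review titles and relevance scores are made up for illustration.
reviews = [
    ("Review A", 0.91), ("Review B", 0.40),
    ("Review C", 0.77), ("Review D", 0.12),
]

def filter_out(items, threshold):
    """Old-style filter: anything below the threshold disappears."""
    return [r for r in items if r[1] >= threshold]

def filter_forward(items):
    """Web-style filter: nothing disappears; the relevant is just fewer clicks away."""
    return sorted(items, key=lambda r: r[1], reverse=True)

print(filter_out(reviews, 0.5))    # Reviews B and D are gone
print(filter_forward(reviews))     # all four remain, reordered by relevance
```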


Jay Rosen’s view from somewhere

Jay Rosen expounds on his use of the phrase “the view from nowhere” and its application to journalism. It’s a self-interview, with an exceptionally smart interviewer.


[2b2k] World Bank’s open data … now in contest form!

The World Bank has done an admirable job of opening its data for public access. It has lots of data, much of it at the national level, and throwing it into the public arena — which it did in April — was a gutsy move, and the right one.

They now have a contest, with $45K in prizes, to encourage the development of apps that make use of that data via its APIs. Here’s more about the data:

The World Bank Indicators API lets you programmatically access more than 3,000 indicators and query the data in several ways, using parameters to specify your request. Many data series date back 50 years, and can be used to create interesting applications. You can read more about the data itself in the API Sources section. The projects API provides access to all World Bank projects, including closed projects, active projects, and those in the pipeline. The dataset includes pilot geocode data on project locations; note that these data are collected through a desk study of existing project documents and are being released as a test database — further work is required for data validation and quality enhancements…
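For the curious, pulling one of those indicator series takes only a few lines. A sketch, assuming the REST URL pattern and JSON layout described in the World Bank’s API documentation; the endpoint version and the indicator code SP.POP.TOTL (total population) are my assumptions and may have changed since this was written:

```python
# Sketch of querying the World Bank Indicators API for a single series.
# The URL shape and JSON layout are assumptions based on the public API docs.
import requests

url = "https://api.worldbank.org/v2/country/BR/indicator/SP.POP.TOTL"
resp = requests.get(url, params={"format": "json", "date": "2000:2010", "per_page": 20})
resp.raise_for_status()

paging, rows = resp.json()            # a two-element list: paging info, then the data
for row in rows:
    print(row["date"], row["value"])  # e.g. Brazil's total population, year by year
```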

Releasing all this data must have required a lot of cultural transformation work. Wow.
