Archive for June, 2010

Why we don’t remember how science works

I was listening this morning to an NPR Morning Edition story by Allison Aubrey about a study that found that if mice drink lots o’ joe, they’re less likely to suffer from little tiny cases of Alzheimer’s. It was a fine piece, but to a large degree because it spent most of its time undoing the very reason that the story was on the air. The story’s pitch was: Coffee prevents Alzheimer’s! The bulk of the story was: In mice! Maybe! Other studies on humans are provocative but inconclusive! There are other factors! We don’t know! Maybe! Mouse study isn’t really all that significant!

On the one hand, it’s admirable that NPR spent so much of its time getting us past the headline. On the other hand, isn’t it a little bit depressing that we need to be told over and over again that scientific studies rarely are conclusive about big points and biological correlations? Are we still that unschooled in the scientific method that 450 years after the birth of Francis Bacon (and a thousand years after Abu Ali al-Hasan ibn al-Haytham, if you want to get technical about it) we need a refresher course in science’s nervous stepwise progress every time the media report on a scientific study? Apparently, yes.

Then, as if NPR were thinking exactly the same thoughts, the very next piece (by Alix Spiegel) was about how a tiny study got turned into a cultural meme:

In the spring of 1993 a psychologist named Frances Rauscher played 10 minutes of a Mozart Piano Sonata to 36 college students, and after the excerpt, gave the students a test of spatial reasoning. Rauscher also asked the students to take a spatial reasoning test after listening to 10 minutes of silence, and after listening to 10 minutes of a person with a monotone speaking voice.

And Rauscher says, the results of this experiment seemed pretty clear. “What we found was that the students who had listened to the Mozart Sonata scored significantly higher on the spatial temporal task.”

The story tracks how this modest research among a tiny, non-random group led to a small industry of Mozart for Babies CDs, the state of Georgia distributing free Mozart CDs to every newborn, and even death threats against Rauscher for having the temerity to report that she did not observe the same beneficial results from listening to rock and roll.

Why did this basically insignificant study generate so much interest?

It’s probably a couple of things, Rauscher says. Americans believe in self-improvement, but also are fond of quick fixes. And as Rauscher points out, parents care desperately about their children.

Sure. But that’s missing the primary cause in the sequence of events:

The first call came from Associated Press before Rauscher had even realized that her paper was due to be published. Once the Associated Press printed its story the Mozart Effect was everywhere.

“I mean we were on the nightly news with Tom Brokaw. We had people coming to our house for live television,” Rauscher says. “I had to hire someone to manage all the calls I had coming in.”

The headlines in the papers were less subtle than her findings: “Mozart makes you smart” was the general idea.

Americans may have embraced the Mozart-makes-babies-smart meme because we love our poor dumb babies so much, but we got the idea from the AP and the rest of the media that followed AP’s lead. The media played on Americans’ love of babies, self-improvement, and quick fixes to serve up exactly what we wanted to hear.

So, I’m willing to acknowledge that we have a stupidity gene that causes strong conclusions to wipe out the reasoning that led to them. But the media are supposed to be helping us to get past our natural tendency toward blunt-edged thinking. Instead, over and over they dangle juicy conclusions in front of us, appealing to our fear of disease and our urgent desire to give our babies the competitive edge they need to crush lesser babies whose parents do not love them as much. The good science reports — like this morning’s on caffeinated mice — dangle exciting conclusions in front of us but then explain why we shouldn’t have gotten so excited by them. The bad ones — most of them — play upon the fact that for some reason, we seem unable to remember how science actually works…and then reinforce that forgetting, over and over.


By the way, I wonder if one other reason we forget how science works is that we are taught about the scientific method by performing experiments in school that establish known results. When the lima beans kept in the dark don’t grow, we’re told that the experiment worked because it proves that lima bean sprouts need light. The teacher doesn’t mention that maybe it was because that side of the jar happened to be in the path of hostile bacteria or that the distribution of the beans was not sufficiently randomized. Only many years later is it broken to us that the scientific method is more about eliminating false hypotheses than proving positive causation.

Tags:

[2b2k] Four-chapter re-write

I’ve been holding off writing about what’s up with the book I’m writing because it’s been oscillating and I didn’t want to choose a state yet. But I think I’m getting close.

This book feels like it’s been struggling to become something that I’ve been trying to keep it from being. Sorry for the anthropomorphizing, but that’s how it feels. I’ve been trying to keep the topic small enough to be manageable, but it’s hard to write about the subset of topics I’ve been working on without going large. But, going large — at its largest: What is the Net doing to knowledge? — is not a writable book. At least, not by me.

But, after the past couple of weeks, after finishing a draft of the troublesome Chapter 4 (which argues that we need much less diversity of opinion in a conversation than we generally think, and that a radical diversity of opinions not only is not ideal, it makes conversation impossible … and then I spend the next two-thirds of the chapter trying to find a way through the “echo chamber” arguments), a thematic way of finding a path through the huge question of knowledge occurred to me. It means a significant reorg, and possibly some serious rewriting of those first four chapters, but I think it might be workable. It would enable me to cover some particular topics without feeling like I had to say everything there is to be said about knowledge on the Net.

So, that’s encouraging. On the other hand, by last December, I had written four chapters, and then started over. It’s taken me until June to write these four chapters, and here I am again. Except this time I think I can mainly just reframe the chapters I’ve written. I hope.

Ok, back to rewriting the preface from the ground up. At least the old preface (which I actually sort of liked) may find a home as the opening of Chapter 2 (which used to be Chapter 3).

Tags:

Data.gov goes semantic

Data.gov has announced that it’s making some data sets available as RDF triples so Semantic Webbers can start playing with it. There’s an index of data here. The site says that even though only a relative handful of datasets have been RDF’ed, there are 6.4 billion triples available. They’ve got some examples of RDF-enabled visualizations here and here, and some more as well.

Data.gov also says they’re working with RPI to come up with a proposal for “a new encoding of datasets converted from CSV (and other formats) to RDF” to be presented for worldwide consideration: “We’re looking forward to a design discussion to determine the best scheme for persistent and dereferenceable government URI naming with the international community and the World Wide Web Consortium to promote international standards for persistent government data (and metadata) on the World Wide Web.” This is very cool. A Uniform Resource Identifier points to a resource; it is dereferenceable if there is some protocol for getting information about that resource. So, Data.gov and RPI are putting together a proposal for how government data can be given stable Web addresses that will predictably yield useful information about that data.

I think.
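To make the CSV-to-RDF idea concrete, here’s a minimal sketch in plain Python (stdlib only). Everything specific in it — the base URI, the dataset name, the row/column scheme, and the sample data — is invented for illustration; it is not Data.gov’s or RPI’s actual naming proposal:

```python
import csv
import io

# Hypothetical base URI, made up for this example -- not a real
# government naming scheme.
BASE = "http://example.gov/dataset/budget"

def csv_to_ntriples(csv_text):
    """Turn each CSV row into RDF triples in N-Triples syntax.

    Each row gets a stable ("persistent") subject URI, and each column
    becomes a predicate. A server that answers HTTP requests at those
    URIs with useful information is what would make them
    "dereferenceable"."""
    triples = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader):
        subject = f"<{BASE}/row/{i}>"  # persistent identifier for the row
        for column, value in row.items():
            predicate = f"<{BASE}/schema#{column}>"
            triples.append(f'{subject} {predicate} "{value}" .')
    return triples

# Invented sample data: two rows become four subject-predicate-object triples.
data = "agency,amount\nNASA,18700\nNIH,31000\n"
for t in csv_to_ntriples(data):
    print(t)
```

The point of the sketch is just the shape of the mapping: stable subject URIs per row, predicates per column, literal values per cell. A real proposal would also have to settle datatypes, escaping, and what a URI returns when you dereference it.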

Tags:

[berkman] [2b2k] Carolina Rossini – The Industrial Cooperation Project

Carolina Rossini is giving a Berkman talk about the preliminary conclusions from Yochai Benkler’s Industrial Cooperation Project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The project looked at industries to see if they are moving to collaborative peer production. They looked at knowledge-embedded products: data, text, and tools. The key takeaway, she says: “The nature of the knowledge embedded product has an impact on the emergence of commons-based production.” This matters because, if you care about the emergence of commons-based production (or a knowledge governance system), in some sectors you may need some kind of intervention in the market rather than waiting for the organic emergence of CBP.

She goes through the research methodology: standardized observations across sectors and “a huge literature review.” How does innovation happen? What’s the effect of IP? etc. All of this will be open and available soon. From this came synthesizing papers and the ICP Wiki (open in September) where you can see the entire research process.

The main verticals studied: biotech, alternative energy, educational materials. These three are a cross-section of the contemporary economy. They are not the more typical sectors where you find the usual examples, such as threadless.com.

Within alternative energy, the project looked at wind, solar, and sea power because they are at different levels of maturity, with a lot of patenting, complicated manufacturing and governance processes, and no evidence of commons-based production. The academics have no incentives for sharing. Sharing is happening, however, in pre-product spaces, e.g., OpenEI, the U.S. gov’t OpenLabs, and the Database of State Incentives for Renewables and Energy. There is also the emergence of a DIY spirit among users, and the Danish offshore wind industry is interesting. Prizes have been created to encourage CBP, and there have been attempts to cluster people to encourage collaboration.

Another vertical: educational materials. Here there’s lots of CBP. E.g., the Open Educational Resources Commons (which Carolina has long been involved with). Incumbents as well as start-ups are exploring new models, both open and closed. On the other hand, publishers are suing users now and then. Pres. Obama says he wants to invest in OER, but there is no money around. On the other hand, the textbook adoption process now accepts open textbooks, which Carolina says is a “huge huge turnaround.” There are also governmental interventions around the world, including in Brazil. There are also pushes toward closedness. E.g., Pearson is rethinking its strategy so that the end product isn’t a textbook but a new education model that includes assessment systems, exam models, etc. [Vertical integration strikes again! When your product becomes open or commoditized, it's one of the obvious business responses.] Does this lessen the importance of copyright for Pearson? The guy from Pearson whom Carolina asked just laughed.

In biotech there’s a mix of CBP and closed practices. “Big science” shows the most evidence of CBP. The commons in gene sequences has helped, as has the fact that most big science happens through government investments, the data and text results of which are open by default (thanks to the Bermuda rules and the NIH Public Access Policy). But there’s no evidence of commons-based industrial disruption in biological materials used as research tools. This is due in part to the fact that VCs who fund “translational research” like patents. The end-products market has been the most resistant to commons-based effects. And patents are aggressively enforced. One result is that there can be conflicts. E.g., you can get a genomic profile from a company like 23andme, but if a woman wants to get a specific analysis

Summary: Biotech: some of the most open and closed practices. Alternative energy: not a lot of CBP and no structure for it. Educational material: intermediate in development of CBP.

If you want to see more CBP in these industries, we need interventions. Carolina doesn’t want to talk much about what is needed, but she points to a 2×2 (closed-open, regulated-unregulated) that shows the effect of government intervention in biotech: some companies left the market, but new open databases enabled other projects and new business models. [She shows the same chart for the other two sectors, but I can't capture them in text.]

There’s also the possibility of industry intervention. E.g., the Danish offshore wind farmers needed to prove to the government that their industry is viable. There’s community pressure to share knowledge and not to patent. They organized around work teams, not companies; teams were cross-company, so patenting wasn’t possible. They were able to cooperate because they knew they could only solve the problems together.

Conclusion: It’s easy to find CBP within copyright-based materials. Industries with more complex manufacturing and distribution are more resistant. But even within them, you can find CBP within some of their processes. Within those resistant sectors, we should probably look for more commons-based sharing of knowledge, rather than in their core development processes — knowledge diffusion rather than innovation.

Q: What was most surprising?
A: Openness may not be the answer in every sector.

Q: Denmark’s team-oriented approach — does that mean that the teams have members from universities, companies, etc.?
A: They found a problem, they figured which companies had some relation to that problem — e.g., the cement for holding the towers — and they picked employees from a diversity of companies.

Q: When Paul Romer talks about economic growth, he talks in terms of sharing recipes. The copyright industries have more openness than patent-based ones. Maybe that’s because copyright is about sharing documents and files, and the recipe is there. Maybe the obstacle is in how to write out the recipe…
A: Also, the copyright industries are dealing with more easily digitized products that can be more easily transformed and shared.

Q: Is the educational system doing what the music business has done: Product is commoditized, so they sell services. E.g., Verizon tells you that the music is free or cheap, but you pay for fast download, etc.
A: I see an easier parallel in software — customization, extension to make it work better for particular clients, etc. Now that’s being done for textbooks.

Q: How do you measure the impact of CBP on the development of an industry, especially compared to the effect of social networks, etc.?
A: You’re thinking about wikis, etc. We asked how the sectors are sharing knowledge, how much patenting, are there access issues, etc.

Q: Could this be delivered on a Google platform or Facebook? How people share info?
A: We didn’t focus on this, but a lot of people looking at how companies are sharing info. E.g., the Pfizer internal wiki. Also, the OpenEI.

Q: What does it mean to share? People chatting or contracting parties? Why would people want to share?
A: Buying is a way of sharing. There’s a gradient.

Q: Can you compare the Brazilian experiment with the US government’s?
A: In Brazil, the federal government buys all the textbooks for the public schools. They have now started to debate whether the gov’t should also buy the IP and make it available. In the US, we are putting money toward training the teachers. Pres. Obama has hired many leaders from the OER movement. But there is no policy being pushed here for public access to educational materials because the fed gov’t is not the main buyer. But Texas and California are asking for more openness because it will reduce their costs.

Q: How is OER trying to create sustainable business models?
A: Most of the big projects in the US are supported by universities. Many are cutting funding, so these projects are going to foundations for money. They’re trying to find business models. Publishing on demand. E.g., Flat World Knowledge is getting major authors and paying them better than traditional publishers; you have access to a free version online, but you can buy an iPad version, etc.

Q: Long term?
A: Biotech: We have mandates for openness, but the structure so dictates how the market works that I don’t see a lot of change there. Same in alternative energy. I hope they share more if the target is mitigation of climate change, but I don’t see that happening in practice. We’ll see some changes. I think we’ll see more political debate around open access. And there will be some international agreements, probably in terms of more tech transfer. Educational materials: Transitioning.

Tags:

Democratized curation

JP Rangaswami has an excellent post about the democratizing of curation.

He begins by quoting Eric Schmidt (found at 19:48 in this video):

“…. the statistic that we have been using is between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating. And virtually all of that is what we call user-generated what-have-you. So this is a very, very big new phenomenon.”

He concludes — and I certainly agree — that we need digital curation. He says that digital curation consists of “Authenticity, Veracity, Access, Relevance, Consume-ability, and Produce-ability.” “Consume-ability” means, roughly, that you can play it on any device you want, and “produce-ability” means something like how easy it is to hack it (in the good O’Reilly sense).

JP seems to be thinking primarily of knowledge objects, since authenticity and veracity are high on his list of needs, and for that I think it’s a good list. But suppose we were to think about this not in terms of curation — which implies (against JP’s meaning, I think) a binary acceptance-rejection that builds a persistent collection — and instead view it as digital recommendations? In that case, for non-knowledge-objects, other terms will come to the fore, including amusement value, re-playability, and wiseacre-itude. In fact, people recommend things for every reason we humans may like something, not to mention the way we’re socially defined in part by what we recommend. (You are what you recommend.)

Anyway, JP is always a thought-provoking writer…

Tags:

Book formats and formatting books

AKMA points to an excellent post by Jacqui Cheng at Ars Technica about the fragmentation of ebook standards. AKMA would love to see a publisher offer an easy way of “pouring” text into an open format that creates a useful, beautiful digital book. Jacqui points to the major hurdle: Ebook makers like owning their own format so they can “vertically integrate,” i.e., lock users into their own bookstores.

Even if there weren’t that major economic barrier, it’d be hard to do what AKMA and we all want because books are incredibly complex objects. You can always pour text into a new container, but it’s much harder to capture the structure of the book: this is a title, that is body text, this is a caption. The structure is then reflected in the format of the book (titles are bolded, etc.) and in the electronic functionality of the book (tables of contents are linked to the contents, etc.). We are so well-trained in reading books that even small errors interrupt our reading: My Kindle doesn’t count hyphens as places where words can be broken across lines, resulting in some butt-ugly layouts. A bungled drop cap can mystify you for several seconds. White-space breaks between sections that are not preserved when they occur at the end of a page can ruin a good mid-chapter conclusion. It’s not impossible to get all this right, but it’s hard.

And getting a standard format that captures the right degrees of structure and of format, and that is sufficiently forgiving that just about anything can be poured into it, is really difficult because there are no such right degrees. E.g., epub is not great at layout info, at least according to Wikipedia.

All I’m saying is: It’s really really hard.
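A toy sketch of the structure/format split, for what it’s worth. The roles and tag mapping below are invented for illustration (this is not epub or any real ebook format): structure says what a span is, a renderer decides how to display it, and “pouring” as plain text keeps the words while throwing the roles away.

```python
# Toy model of a book as (role, text) spans. Roles are invented here,
# not drawn from any real ebook format.
book = [
    ("title",   "Chapter 1: Knowledge, Networked"),
    ("body",    "We used to think knowledge was shaped like a pyramid."),
    ("caption", "Figure 1: The data-to-wisdom pyramid."),
]

def render_html(spans):
    """One renderer's choice of how each structural role should look."""
    tags = {"title": "h1", "body": "p", "caption": "figcaption"}
    return "\n".join(f"<{tags[role]}>{text}</{tags[role]}>"
                     for role, text in spans)

def pour_plain_text(spans):
    """'Pouring' keeps the words but discards the roles: downstream,
    a caption is indistinguishable from body text."""
    return "\n".join(text for _, text in spans)

print(render_html(book))
print(pour_plain_text(book))
```

Going from the structured form to plain text is trivial; going back (recovering which line was a caption) is the hard direction, and that asymmetry is why conversions degrade.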

Tags:

[pdf] Aneesh Chopra, Federal CTO

Aneesh Chopra, the U.S. Chief Technology Officer, is opening the second day of Personal Democracy Forum.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other peoples ideas and words. You are warned, people.

He says Katie Stanton at the State Dept. says that the difference between consumer and government cultures is “There’s an app for that” vs. “There’s a form for that.”

President Obama’s first act in office was to require far more openness. This means changing the defaults, a cultural change. Aneesh says they’re making progress. A year ago, Vivek Kundra, the federal CIO, announced at PDF the IT Dashboard for browsing the new Data.gov, a tool for accountability. Aneesh points to reform at the Veterans Admin, resulting in cost savings and faster service. Another example: US Citizenship and Immigration Services now lets you opt in to having the status of your application pushed to you, rather than you having to check in. This type of change has little cost, brings benefits, and is beginning to change the culture of government.

Aneesh is announcing “the blue button to liberate personal health data.” Press it and you can get your data from government databases. “Do with it whatever you want. It’s your choice. It’s your data.” This will begin this fall with medical and veteran info.

Another example of the change in culture: The Dept. of Agriculture wants to inform us about healthy nutrition choices, part of the First Lady’s efforts. The Dept. has nutritional info on 30,000 products. What to do with it? The government is holding “game jams” across the country — “Apps for Healthy Kids.”

They’ve been building tools to find widely dispersed knowledge. E.g., NASA has today released a report on its experience with the Innocentive prize system. A semi-retired radio-frequency engineer won with an idea that exceeded NASA’s requirements. “No RFP, no convoluted process, just a smart person” that the prize system uncovered.

Aneesh talks about the Health and Human Services Community Health Data Initiative that debuted two days ago. It launched with twenty programs that take advantage of the newly opened data. The OMB has required agencies to make data available at every agency site, at a /open address. Microsoft Bing is now showing on maps the info available at hospitalcompare.gov, a site few have gone to. Here’s an idea from a citizen: Asthmapolis crowdsources data to help visualize outbreaks; participants have GPS-aware inhalers.

[And then my computer crashed...]

Tags:

[pdf] Non-deliberative deliberation

I moderated a panel yesterday at Personal Democracy Forum on deliberative democracy. Because I was the moderator, I didn’t express my own unease with the emphasis on deliberation. Don’t get me wrong: I like deliberative processes and wish there were more of them. I’m just not as bullish on their ability to resolve real differences.

But there are non-rational deliberative processes. For example, Morgan Spurlock’s TV series, Thirty Days, puts together people who deeply disagree. But they learn more and better by living with one another for thirty days than they do through their rational discussions. If “deliberation” refers to a fair weighing, living with someone with whom you disagree is more likely to right the scales. The issues over which we struggle the most and that divide us the deepest cannot be bridged through careful, quiet discussion. Or, at least, the role of rational deliberation often is, in my opinion, over-stated. When rational discussion fails to change our minds, sympathy based on lived understanding can change our souls.

Tags:

[pdf] Aneesh Chopra

My computer froze as I was near the end of live-blogging Aneesh Chopra’s keynote at PDF. (Ubuntu.) I’ll try to recover it later.

The quick overview is that it’s amazing to have a person like him in the White House in charge of the change in culture and change in defaults, from processing forms to releasing info out into the wild so that people with ideas can do something with them. It’s our data, he says.

The shift in attitude is astounding.

Tags:

[pdf] Truth, factchecking and online media

Brendan Greeley is moderating a panel on truth and factchecking. He begins by wondering if we need argument-checking as well as fact-checking.

Bill Adair of Politifact.com gives the first brief talk. He says (in response to Brendan’s question) that he is an Internet optimist because the Net can reinvent political discourse. He was a political reporter, but in 2008 felt that he and other reporters had been letting candidates get away with falsehoods. It was easy and comforting to cover politics as horse races. So, he started his fact-checking site.

The site researches claims and scores them. The research is done by paid journalists. The sources are transparent. They compile records on elected officials. (Cf. Michele Bachmann.) They also check pundits. And they have an Obameter that tracks how Obama is delivering on his 500+ campaign promises. They have also begun state sites. “It’s a whole new form of journalism,” he concludes.

Brendan: Couldn’t you have done this before the Net? No, says Bill. You couldn’t do the research. Plus, the corrections would have only run in the one edition of the paper, and if you missed it, you would have missed it forever.

There is a jurisprudence to the Truth-O-Meter, Bill says. They’ve had to invent how to distinguish an “untrue” from a “pants on fire.”

Jay Rosen says that 58 yrs ago, Joe McCarthy exploited defects in the media to make a name for himself, at great cost. Charges are news. What happens today is news. Senators are worth reporting on and have some credibility. News can’t be untold. Eventually, the media figured out that they’d been exploited; the press had been put in the service of untruth. So, reporters changed the rules: It suddenly became ok to do “interpretation.” I.e., it was ok for them to point out that a public official might have another motive for what he said. Fifty years later, politicians are exploiting different weaknesses. The best known is “he said she said” journalism. That’s a response to the quest for innocence, i.e., a demonstration that you are neutral in the cultural/political wars. Rather than having an agenda for the left or right, the press has an innocence agenda. He-said-she-said also helps journalists make their deadlines: you don’t have time to interrupt, so you get someone to state the other side.

In December, Jay tweeted that Meet the Press ought to fact check its guests and run the results on Wednesday. ABC has started doing it, for ABC This Week. MtP has refused, possibly because the person who’s been the most frequent guest is John McCain, who Politifact rates as a pants-on-fire liar. But, some college students have put up MeetTheFacts.com to

Marc Ambinder says that he’s getting more comfortable going outside of traditional journalism’s box, and getting angry about being told to stay inside of it. E.g., there’s nothing to the story of the White House offering a job to Sestak, but the press is covering it as if it’s an issue. The solution is for reputable journalists to say that it isn’t a story and then cover something else, but you’re dealing with an entrenched set of habits.

Bill points to TechCrunch as his favorite voice on the Web, which, as he says, is strongly voiced and non-neutral. Jay says that it used to be that you lost credibility if you judged, but that has flipped. This is part of a culture war in which the press is an object of attack, Jay says.

Brendan says that Jay was right 5 yrs ago to say that the war between journalism and blogging is over. Now there’s the same sort of controversy over factchecking. How do we get past the conflict, Brendan asks. Bill says we need to get past the “bucket of quotes” mentality. Factchecking should be a standard part of the journalist’s toolkit. Jay says that the birther phenomenon is interesting. That Obama was born in the US is as verified as a fact can get. But, within politics, the overriding of that fact has given rise to a political movement. There is no journalistic response to this. They can’t treat it as a claim within the spectrum; it’s actually a repudiation of journalism. Marc and Jay agree that the remedy is not within journalism but within the political system: Republicans ought to shame the birthers.

Q: What about factchecking that goes wrong?
A: There’s still room for journalists.
A: (jay) Reputation systems work.
A: (brendan) But email is anonymous.

Q: Reputation systems can be gamed. And we need the Sunday shows to do the factchecking on the same episode so people can see it.
A: Yes. We’re seeing progress, but… ABC deserves credit.

Brendan: There’s selection bias in factchecking. Factcheckers decide what to count as worth checking?
Bill: Is it something that Mabel — our typical reader — would wonder about?
Jay: News orgs used to establish trust by advertising their viewlessness. Now they need to say where they’re coming from.

Tags: