Archive for category media

High-contrast transparency – How Glenn Greenwald could look like a monopolist

Glenn Greenwald mounts a mighty and effective defense against the charge leveled by Mark Ames at Pando.com that Greenwald and Laura Poitras are “monopolizing” and “privatizing” the 50,000-200,000 NSA documents entrusted to them by Edward Snowden.

Unlike Greenwald, I do think “it’s a question worth asking,” as Ames puts it — rather weaselly, since his post really is an attempt at supplying an answer. It’s worth asking because of the new news venture funded by Pierre Omidyar that has hired Greenwald and Poitras. Greenwald argues (among other things) that the deal has nothing to do with profiting from their access to the Snowden papers; in fact, he says, by the time the venture gets off the ground, there may not be any NSA secrets left to reveal. But one can imagine a situation in which a newspaper hires a journalist with unique access to some highly newsworthy information in order to acquire and control that information. In this case, we have contrary evidence: Greenwald and Poitras have demonstrated their courage and commitment.

Greenwald’s defense overall is, first, that he and Poitras (Bart Gellman plays a lesser role in the article) have not attempted to monopolize the papers so far. On the contrary, they’ve been generous and conscientious in spreading the revelations to papers around the world. Second, getting paid for doing this is how journalism works.

To be fair, Ames’ criticism isn’t simply that Greenwald is making money, but that Omidyar can’t be trusted. I disagree, albeit without pretending to have any particular insight into Omidyar’s (or anyone’s) soul. (I generally have appreciated Omidyar’s work, but so what?) We do have reason to trust Greenwald, however. It’s inconceivable to me that Greenwald would let the new venture sit on NSA revelations for bad reasons.

But I personally am most interested in why these accusations have traction at all.

Before the Web, the charge that Greenwald is monopolizing the information wouldn’t even have made sense because there wasn’t an alternative. Yes, he might have turned the entire cache over to The Guardian or the New York Times, but then would those newspapers look like monopolists? No, they’d look like journalists, like stewards. Now there are options. Snowden could have posted the cache openly on a Web site. He could have created a torrent so that the files would circulate forever. He could have given them to Wikileaks to curate. He could have sent them to 100 newspapers simultaneously. He could have posted them in encrypted form and given the key to the Dalai Lama or Jon Stewart. There is no end of options.

But Snowden didn’t. Snowden wanted the information curated, and redacted when appropriate. He trusted his hand-picked journalists more than any newspaper to figure out what “appropriate” means. We might disagree with his choice of method or of journalists, but we can understand it. The cache needs editing, contextualization, and redaction so that we understand it, and so that the legitimate secrets of states are preserved. (Are there legitimate state secrets? Let me explain: Yes.) Therefore, it needs stewardship.

Not so incidentally, the fact that we understand without a hiccup why Snowden entrusted individual journalists with the information, rather than giving it to even the most prestigious of newspapers, is another convincing sign of the collapse of our institutions.

It’s only because we have so many other options that entrusting the cache to journalists committed to stewarding it into the public sphere could ever be called “monopolizing” it. The word shouldn’t make any sense to us in this environment, yet it is having enough traction that Greenwald reluctantly wrote a long post defending himself. The fact that the three recipients of the Snowden cache have been publishing it in newspapers all over the world makes them much less “monopolists” than traditional reporters are. Greenwald only needed to defend himself from this ridiculous charge because we now have a medium that can do what was never before possible: immediately and directly publish sets of information of any size. And we have a culture (with which I happily and proudly associate) that says openness is the default. But defaults were made to be broken. That’s why they’re defaults and not laws of nature or morality.

Likewise, when Ames criticizes Greenwald for profiting from these secrets because he gets paid as a journalist (which is separate from the criticism that working for Omidyar endangers the info — a charge I find non-credible), the charge makes even the slightest sense only because of the Web’s culture of Free, which, again, I am greatly enthusiastic about. As an institution of democracy, one might hope that newspapers would be as free as books in the public library — which is to say, the costs are hidden from the user — but it’s obvious what the problems are with government-funded news media. So, journalists get paid by the companies that hire them, and this by itself could only ever look like a criticism in an environment where Free is the default. We now have that environment, even if enabling journalism is one of the places where Free just doesn’t do the entire job.

That the charge that Glenn Greenwald is monopolizing or privatizing the Snowden information is even comprehensible to us is evidence of just how thoroughly the Web is changing our defaults and our concepts. Many of our core models are broken. We are confused. These charges are further proof, as if we needed it.

Tags:

[2b2k] Does the Net make us stoopid?

Yesterday I participated as a color commentator in a 90-minute debate between Clive Thompson [twitter:pomeranian99] and Steve Easterbrook [twitter:smeasterbrook], put on by the CBC’s Q program. The topic was “Does the Net Make Us Smart or Stupid?” It airs today, and you can hear it here.

It was a really good discussion between Clive and Steve, without any of the trumped up argumentativeness that too often mars this type of public conversation. It was, of course, too short, but with a topic like this, we want it to bust its bounds, don’t we?

My participation was minimal, but that’s why we have blogs, right? So, here are two points I would have liked to pursue further.

First, if we’re going to ask if the Net makes us smart or stupid, we have to ask who we’re talking about. More exactly, who in what roles? So, I’d say that the Net’s made me stupider in that I spend more of my time chasing down trivialities. I know more about Miley Cyrus than I would have in the old days. Now I find that I’m interested in the Miley Phenomenon — the media’s treatment, the role of celebrity, the sexualization of everything, etc. — whereas before I would never have felt it worth a trip to the library or the purchase of an issue of Tiger Beat or whatever. (Let me be clear: I’m not that interested. But that’s the point: it’s all now just a click away.)

On the other hand, if you ask if the Net has made scholars and experts smarter, I think the answer has to be an almost unmitigated yes. Find me a scholar or expert who would turn off the Net when pursuing her topic. All discussions of whether the Net makes us smarter I think should begin by considering those who are in the business of being smart, as we all are at some points during the day.

Now, that’s not really as clear a distinction as I’d like. It’s possible to argue that the Net’s made experts stupider because it’s enabled people to become instant “experts” on topics. (Hat tip to Visiona-ary [twitter:0penCV] who independently raised this on Twitter.) We can delude ourselves into thinking we’re experts because we’ve skimmed the Wikipedia article or read an undergrad’s C- post about it. But is it really a bad thing that we can now get a quick gulp of knowledge in a field that we haven’t studied and probably never will study in depth? Only if we don’t recognize that we are just skimmers. At that point we find ourselves seriously arguing with a physicist about information’s behavior at the event horizon of a black hole as if we actually knew what we were talking about. Or, worse, we find ourselves disregarding our physician’s advice because we read something on the Internet. Humility is 95% of knowledge.

Here’s a place where learning some of the skills of journalists would be helpful for us all. (See Dan Gillmor’s MediActive for more on this.) After all, the primary skill of a particular class of journalists is their ability to speak for experts in a field in which the journalist is not her/himself expert. Journalists, however, know how to figure out whom to consult, and don’t confuse themselves with the experts. Modern media literacy means learning some of the skills and all of the humility of good journalists.

Second, Clive Thompson made the excellent and hugely important point that knowledge is now becoming public. In the radio show, I tried to elaborate on that in a way that I’m confident Clive already agrees with by saying that it’s not just public, it’s social, and not just social, but networked. Jian Ghomeshi, the host, raised the question of misinformation on the Net by pointing to Reddit’s misidentification of one of the Boston bombers. He even played a touching and troubling clip by the innocent person’s brother talking about the permanent damage this did to the family. Now, every time you look up “Sunil Tripathi” on the Web, you’ll see him misidentified as a suspect in the bombing.

I responded ineffectively by pointing to Judith Miller’s year of misreporting for the NY Times that helped move us into a war, to make the point that all media are error prone. Clive did a better job by citing a researcher who fact checked an entire issue of a newspaper and uncovered a plethora of errors (mainly small, I assume) that were never corrected and that are preserved forever in the digital edition of that paper.

But I didn’t get a chance to say the thing that I think matters more. So, go ahead and google “Sunil Tripathi”. You will have to work at finding anything that identifies him as the Boston Bomber. Instead, the results are about his being wrongly identified, and about his suicide (which apparently occurred before the false accusations were made).

None of this excuses the exuberantly irresponsible way a subreddit (i.e., a topic-based discussion) at Reddit accused him. And it’s easy to imagine a case in which such a horrible mistake could have driven someone to suicide. But that’s not my point. My point here is twofold.

First, the idea that false ideas once published on the Net continue forever uncorrected is not always the case. If we’re taking as our example ideas that are clearly wrong and are important, the corrections will usually be more obvious and available to us than in the prior media ecology. (That doesn’t relieve us of the responsibility of getting facts right in the first place.)

Second, this is why I keep insisting that knowledge now lives in networks the way it used to live in books or newspapers. You get the truth not in any single chunk but in the web of chunks that are arguing, correcting, and arguing about the corrections. This, however, means that knowledge is an argument, or a conversation, or is more like the webs of contention that characterize the field of living scholarship. There was an advantage to the old ecosystem in which there was a known path to authoritative opinions, but there were problems with that old system as well.

That’s why it irks me to take any one failure, such as the attempt to crowdsource the identification of the Boston murderers, as a trump card in the argument that the Net makes us stupider. To do so is to confuse the Net with an aggregation of public utterances. That misses the transformative character of the networking of knowledge. The Net’s essential character is that it’s a network, that it’s connected. We therefore have to look at the network that arose around those tragically wrong accusations.

So, search for Sunil Tripathi at Reddit.com and you will find a list of discussions at Reddit about how wrong the accusation was, how ill-suited Reddit is for such investigations, and how the ethos and culture of Reddit led to the confident condemning of an innocent person. That network of discussion — which obviously extends far beyond Reddit’s borders — is the real phenomenon…”real” in the sense that the accusations themselves arose from a network and were very quickly absorbed into a web of correction, introspection, and contextualization.

The network is the primary unit of knowledge now. For better and for worse.

Tags:

[2b2k] The public ombudsman (or Facts don’t work the way we want)

I don’t care about expensive electric sports cars, but I’m fascinated by the dustup between Elon Musk and the New York Times.

On Sunday, the Times ran an article by John Broder on driving the Tesla S, an all-electric car made by Musk’s company, Tesla. The article was titled “Stalled Out on Tesla’s Electric Highway,” which captured the point quite concisely.

Musk on Wednesday in a post on the Tesla site contested Broder’s account, and revealed that every car Tesla lends to a reviewer has its telemetry recorders set to 11. Thus, Musk had the data that proved that Broder was driving in a way that could have no conceivable purpose except to make the Tesla S perform below spec: Broder drove faster than he claimed, drove circles in a parking lot for a while, and didn’t recharge the car to full capacity.

Boom! Broder was caught red-handed, and it was data that brung him down. The only two questions left were why did Broder set out to tank the Tesla, and would it take hours or days for him to be fired?

Except…

Rebecca Greenfield at Atlantic Wire took a close look at the data — at least at the charts and maps that express the data — and evaluated how well they support each of Musk’s claims. Overall, not so much. The car’s logs do seem to contradict Broder’s claim to have used cruise control. But the mystery of why Broder drove in circles in a parking lot seems to have a reasonable explanation: he was trying to find exactly where the charging station was in the service center.

But we’re not done. Commenters on the Atlantic piece have both taken it to task and provided some explanatory hypotheses. Greenfield has interpolated some of the more helpful ones, as well as updating her piece with testimony from the tow-truck driver, and more.

But we’re still not done. Margaret Sullivan [twitter:sulliview], the NYT “public editor” — a new take on what in the 1960s we started calling “ombudspeople” (although actually in the ’60s we called them “ombudsmen”) — has jumped into the fray with a blog post that I admire. She’s acting like a responsible adult by withholding judgment, and she’s acting like a responsible webby adult by talking to us even before all the results are in, acknowledging what she doesn’t know. She’s also been using social media to discuss the topic, and even to try to get Musk to return her calls.

Now, this whole affair is both typical and remarkable:

It’s a confusing mix of assertions and hypotheses, many of which are dependent on what one would like the narrative to be. You’re up for some Big Newspaper Schadenfreude? Then John Broder was out to do dirt to Tesla for some reason your own narrative can supply. You want to believe that old dinosaurs like the NYT are behind the curve in grasping the power of ubiquitous data? Yup, you can do that narrative, too. You think Elon Musk is a thin-skinned capitalist who’s willing to destroy a man’s reputation in order to protect the Tesla brand? Yup. Or substitute “idealist” or “world-saving environmentally-aware genius,” and, yup, you can have that narrative too.

Not all of these narratives are equally supported by the data, of course — assuming you trust the data, which you may not if your narrative is strong enough. Data signals but never captures intention: Was Broder driving around the parking lot to run down the battery or to find a charging station? Nevertheless, the data do tell us how many miles Broder drove (apparently just about the amount that he said) and do nail down (except under the most bizarre conspiracy theories) the actual route. Responsible adults like you and me are going to accept the data and try to form the story that “makes the most sense” around them, a story that likely is going to avoid attributing evil motives to John Broder and evil conspiratorial actions by the NYT.

But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk produced the numbers thinking that they’d bring us to accept his account. Greenfield went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.

That’s depressing. As always, I am not saying there are no facts, nor that they don’t matter. I’m just reporting empirically that facts don’t settle arguments the way we were told they would. Yet there is something profoundly wonderful and even hopeful about this case that is so typical and so remarkable.

Margaret Sullivan’s job is difficult in the best of circumstances. But before the Web, it must have been so much more terrifying. She would have been the single point of inquiry as the Times tried to assess a situation in which it has deep, strong vested interests. She would have interviewed Broder and Musk. She would have tried to find someone at the NYT or externally to go over the data Musk supplied. She would have pronounced as fairly as she could. But it would have all been on her. That’s bad not just for the person who occupies that position; it’s a bad way to get at the truth. But it was the best we could do. In fact, most of the purpose of the public editor/ombudsperson position before the Web was simply to reassure us that the Times does not think it’s above reproach.

Now every day we can see just how inadequate any single investigator is for any issue that involves human intentions, especially when money and reputations are at stake. We know this for sure because we can see what an inquiry looks like when it’s done in public and at scale. Of course lots of people who don’t even know that they’re grinding axes say all sorts of mean and stupid things on the Web. But there are also conversations that bring to bear specialized expertise and unusual perspectives, that let us turn the matter over in our hands, hold it up to the light, shake it to hear the peculiar rattle it makes, roll it on the floor to gauge its wobble, sniff at it, and run it through sophisticated equipment perhaps used for other purposes. We do this in public — I applaud Sullivan’s call for Musk to open source the data — and in response to one another.

Our old idea was that the thoroughness of an investigation would lead us to a conclusion. Sadly, it often does not. We are likely to disagree about what went on in Broder’s review, and how well the Tesla S actually performed. But we are smarter in our differences than we ever could be when truth was a lonelier affair. The intelligence isn’t in a single conclusion that we all come to — if only — but in the linked network of views from everywhere.

There is a frustrating beauty in the way that knowledge scales.

Tags:

[2b2k] Science as social object

An article published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.

The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.

First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].

Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:

Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…

Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”

They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:

Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.

I agree with so much of this article, including its call for action, yet it felt odd to me that scientists will be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You only are surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.

But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.

So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.

We therefore cannot hope to peel the peer-to-peer commentary from research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests that. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.

 


Notes, copied straight from the article:

[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).

[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43

[3] P. Ladwig et al., Mater. Today 13, 52 (2010)

[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc.com/home/2011/06/ctec-2011-abstracts

Tags:

[2b2k] Science as social object

An article in published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.

The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.

First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].

Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:

Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…

Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”

They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:

Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.

I agree with so much of this article, including its call for action, yet it felt odd to me that scientists will be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You only are surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.

But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.

So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.

We therefore cannot hope to peel the peer-to-peer commentary from research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests that. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.

 


Notes, copied straight from the article:

[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).

[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43

[3] P. Ladwig et al., Mater. Today 13, 52 (2010)

[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc. com/home/2011/06/ctec-2011-abstracts

Tags:

[2b2k] Science as social object

An article in published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.

The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.

First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].

Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:

Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…

Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”

They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:

Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.

I agree with so much of this article, including its call for action, yet it struck me as odd that scientists would be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You are only surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys the signal with a minimum of noise.

But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.

So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.

We therefore cannot hope to peel the peer-to-peer commentary from research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests that. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.

 


Notes, copied straight from the article:

[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).

[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43

[3] P. Ladwig et al., Mater. Today 13, 52 (2010)

[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc.com/home/2011/06/ctec-2011-abstracts


[2b2k] My world leader can beat up your world leader

There’s a knowingly ridiculous thread at Reddit at the moment: which world leader would win if pitted against other leaders in a fight to the death?

The title is a straight line begging for punchlines. And it is a funny thread. Yet, I found it shockingly informative. The shock comes from realizing just how poorly informed I am.

My first reaction to the title was “Putin, duh!” That just shows you what I know. From the thread I learned that Joseph Kabila (Congo) and Boyko Borisov (Bulgaria) would kick Putin’s ass. Not to mention Jigme Khesar Namgyel Wangchuck (Bhutan), who would win on good looks.

Now, when I say that this thread is “shockingly informative,” I don’t mean that it gives sufficient or even relevant information about the leaders it discusses. After all, it focuses on their personal combat skills. Rather, it is an interesting example of the haphazard way information spreads when that spreading is participatory. So, we are unlikely to have sent around the Wikipedia article on Kabila or Borisov simply because we all should know about the people leading the nations of the world. Further, while there is more information about world leaders available than ever in human history, it is distributed across a huge mass of content from which we are free to pick and choose. That’s disappointing at the least and disastrous at the worst.

On the other hand, information is now passed around if it is made interesting, sometimes in jokey, demeaning ways, like an article that steers us toward beefcake (although the president of Ireland does make it quite high up in the Reddit thread). The information that gets propagated through this system is thus spotty and incomplete. It only becomes an occasion for serendipity if it is interesting, not simply because it’s worthwhile. But even jokey, demeaning posts can and should have links for those whose interest is piqued.

So, two unspectacular conclusions.

First, in our despair over the diminishing of a shared knowledge-base of important information, we should not ignore the off-kilter ways in which some worthwhile information does actually propagate through the system. Indeed, it is a system designed to propagate that which is off-kilter enough to be interesting. Not all of that “news,” however, is about water-skiing cats. Just most.

Second, we need to continue to have the discussion about whether there is in fact a shared news/knowledge-base that can be gathered and disseminated, whether there ever was, whether our populations ever actually came close to living up to that ideal, the price we paid for having a canon of news and knowledge, and whether the networking of knowledge opens up any positive possibilities for dealing with news and knowledge at scale. For example, perhaps a network is well-informed if it has experts on hand who can explain events at depth (and in interesting ways) on demand, rather than assuming that everyone has to be a little bit expert at everything.


[2b2k] Information overload? Not so much. (Part 2)

Yesterday I tried to explain my sense that we’re not really suffering from information overload, while of course acknowledging that there is vastly more information out there than anyone could ever hope to master. Then a comment from Alex Richter helped me clarify my thinking.

We certainly do at times feel overwhelmed. But consider why you don’t feel like you’re suffering from information overload about, say, the history of stage costumes, Chinese public health policy, the physics of polymers, or whatever topic you would never have majored in, even though each of these topics contains an information overload. I think there are two reasons those topics don’t stress you.

First, and most obviously, because (ex hypothesi) you don’t care about that topic, you’re not confronted with having to hunt down some piece of information, and that topic’s information is not in your face.

But I think there’s a second reason. We have been taught by our previous media that information is manageable. Give us 22 minutes and we’ll give you the world, as the old radio slogan used to say. Read the daily newspaper — or Time or Newsweek once a week — and now you have read the news. That’s the promise implicit in the old media. But the new medium promises us instead edgeless topics and endless links. We know there is no possibility of consuming “the news,” as if there were such a thing. We know that whatever topic we start with, we won’t be able to stay within its bounds without doing violence to that topic. There is thus no possibility of mastering a field. So, sure, there’s more information than anyone could ever take in, but that relieves us of the expectation that we will master it. You can’t be overwhelmed if whelming is itself impossible.

So, I think our sense of being overwhelmed by information is an artifact of our being in a transitional age, with old expectations for mastery that the new environment gives the lie to.

No, this doesn’t mean that we lose all responsibility for knowing anything. Rather, it means we lose responsibility for knowing everything.


[2b2k] Linking is a public good

Mathew Ingram at GigaOm has posted the Twitter stream that followed upon his tweet criticizing the Wall Street Journal for running an article based on a post by TechCrunch’s MG Siegler, who responded in an angry post.

Mathew’s point is that linking is a good journalistic practice, even if the author of the second article independently confirmed the information in the first, as happened in this case. Mathew thinks it’s a matter of trust, and if the repeater gets caught failing to link, that would indeed erode trust. Of course, they probably won’t be caught, and even if you did read the WSJ article after reading the TechCrunch post, you’d probably assume that the news was coming from a common source.

I think there’s another reason why reporters ought to link to their, um, inspirations: Links are a public good. They create a web that is increasingly rich, useful, diverse, and trustworthy. We should all feel an obligation to be caretakers of and contributors to this new linked public.

And there’s a further reason. In addition to building this new infrastructure of curiosity, linking is a small act of generosity that sends people away from your site to some other that you think shows the world in a way worth considering. Linking is a public service that reminds us how deeply we are social and public creatures.

Which I think helps explain why newspapers often are not generous with their links. A paper like the WSJ believes its value — as well as its self-esteem — comes from being the place you go for news. It covers the stories worth covering, and the stories tell you what you need to know. It is thus a stopping point in the ecology of information. And that’s the operational definition of authority: the last place you visit when you’re looking for an answer. If you are satisfied with the answer, you stop your pursuit of it. Take the links out and you think you look like more of an authority. To this mindset, links are a sign of weakness.

This made more sense when knowledge was paper-based, because in practical terms that’s pretty much how it worked: You got your news rolled up and thrown onto your porch once a day, and if you wanted more information about an article in it, you were pretty much SOL. Paper masked just how indebted the media were to one another. The media have always been an ecology of knowledge, but paper enabled them to pretend otherwise, and to base much of their economic value on that pretense.

Until newspapers are as heavily linked as GigaOm, TechCrunch, and Wikipedia, until newspapers revel in pointing away from themselves, they are depending on a value that was always unreal and now is unsustainable.


[2b2k] Is HuffPo killing the news?

Mathew Ingram has a provocative post at Gigaom defending HuffingtonPost and its ilk from the charge that they over-aggregate news to the point of thievery. I’m not completely convinced by Mathew’s argument, but that’s because I’m not completely convinced by any argument about this.

It’s a very confusing issue if you think of it from the point of view of who owns what. So, take the best of cases, in which HuffPo aggregates from several sources and attributes the reportage appropriately. It’s important to take a best case since we’ll all agree that if HuffPo lifts an article in toto without attribution, it’s simple plagiarism. But that doesn’t tell us if the best cases are also plagiarisms. To make it juicier, assume that in one of these best cases, HuffPo relies heavily on one particular source article. It’s still not a slam-dunk case of theft because in this example HuffPo is doing what we teach every schoolchild to do: If you use a source, attribute it.

But, HuffPo isn’t a schoolchild. It’s a business. It’s making money from those aggregations. Ok, but we are fine in general with people selling works that aggregate and attribute. Non-fiction publishing houses that routinely sell books that have lots of footnotes are not thieves. And, as Mathew points out, HuffPo (in its best cases) is adding value to the sources it aggregates.

But, HuffPo’s policy even in its best case can enable it to serve as a substitute for the newspapers it’s aggregating. It thus may be harming the sources it’s using.

And here we get to what I think is the most important question. If you think about the issue in terms of theft, you’re thrown into a moral morass where the metaphors don’t work reliably. Worse, you may well mix in legal considerations that are not only hard to apply, but that we may not want to apply given the new-ness (itself arguable) of the situation.

But, I find that I am somewhat less conflicted about this if I think about it in terms of what direction we’d like to nudge our world. For example, when it comes to copyright I find it helpful to keep in mind that a world full of music and musicians is better than a world in which music is rationed. When it comes to news aggregation, many of us will agree that a world in which news is aggregated and linked widely through the ecosystem is better than one in which you—yes, you, since a rule against HuffPo aggregating sources wouldn’t apply just to HuffPo—have to refrain from citing a source for fear that you’ll cross some arbitrary limit. We are a healthier society if we are aggregating, re-aggregating, contextualizing, re-using, evaluating, and linking to as many sources as we want.

Now, beginning by thinking where we want the world to be —which, by the way, is what this country’s Founders did when they put copyright into the Constitution in the first place: “to promote the progress of science and useful arts”—is useful but limited, because to get the desired situation in which we can aggregate with abandon, we need the original journalistic sources to survive. If HuffPo and its ilk genuinely are substituting for newspapers economically, then it seems we can’t get to where we want without limiting the right to aggregate.

And that’s why I’m conflicted. I don’t believe that even if all rights to aggregate were removed (which no one is proposing), newspapers would bounce back. At this point, I’d guess that the Net generation is interested in news mainly insofar as it’s woven together and woven into the larger fabric. Traditional reportage is becoming valued more as an ingredient than a finished product. It’s the aggregators—the HuffingtonPosts of the world, but also the millions of bloggers, tweeters and retweeters, Facebook likers and Google plus-ers, redditors and slashdotters, BoingBoings and Ars Technicas—who are spreading the news by adding value to it. News now only moves if we’re interested enough in it to pass it along. So, I don’t know how to solve journalism’s deep problems with its business models, but I can’t imagine that limiting the circulation of ideas will help, since in this case, the circulatory flow is what’s keeping the heart beating.

 


[A few minutes later] Mathew has also posted what reads like a companion piece, about how Amazon’s Kindle Singles are supporting journalism.
