
[liveblog] Robin Sproul, ABC News

I’m at a Shorenstein Center brownbag talk. Robin Sproul is talking about journalism in the changing media landscape. She’s been Washington Bureau Chief of ABC News for 20 years, and now is VP of Public Affairs for that network. (Her last name rhymes with “owl,” by the way.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

This is an “incredibly exciting time,” Robin begins. The pace has been fast and is only getting faster. E.g., David Plouffe says that Obama’s digital infrastructure from 2008 didn’t apply in 2012, and the 2012 infrastructure won’t apply in 2016.

A few years ago, news media were worried about how to reach you wherever you are. Now it’s how to reach you in a way that makes you want to pay attention. “How do we get inside your brain, through the firehose, in a way that will break through everything you’re exposed to?” We’re all adapting to getting more and smaller bites. “Digital natives swerve differently than the older generation, from one topic to another.”

In this social media world, “each of us is a news reporter.” Half of people on social networks repost news videos, and one in ten post news videos they’ve recorded themselves.

David Carr: “If the Vietnam War brought war into our living rooms,” now “it’s at our fingertips.” But we see the world through narrow straws. We’re not going back from that, but we need to get better at curating them and making sure they’re accurate and contextualized.

On the positive side: “I was so moved by the Ferguson coverage: how a community of color, in this case, could tell their own story” and connect with people around the country, in real-time. “The people of that community were ahead of the cables.” Sure, some of the info was wrong, but we could watch people bearing witness to history. Also, the Ray Rice video has stimulated conversations on domestic violence around the country. How do you tap into these discussions? Sort them? Curate them? “A lot of it comes down to curation.”

People are not coming into ABCnews.com directly. “They’re coming in through side doors.” “And the big stories we do compete with the animal stories, the recipes,” etc. “We see a place like Buzzfeed” that now has 200 employees. They’ve hired someone from The Guardian, they’ve been reporting from the ground in Liberia. Yahoo’s hired Katie Couric. Vice. Michael Isikoff. Reddit’s AMAs. Fusion has just hired Tim Pool from Vice Media. “All of these things are competing in a rapidly shifting universe.”

ABC is creating partnerships, e.g., with Facebook for identifying what’s trending, which is then discussed on their Sunday morning show. [See Ethan Zuckerman’s recent post on why Twitter is a better news source than Facebook. Also, John McDermott’s Why Facebook is for ice buckets, Twitter is for Ferguson. Both suggest that ABC maybe should rethink its source for what’s trending.] ABC uses various software platforms to evaluate video coming in of breaking news. “We need help, so we’re partnering.” ABC now has a social desk. “During a big story, we activate a team…and they are in a deep deep dive of social media,” vetting it for accuracy and providing context. “Six in ten Americans watch videos online and half of those watch news videos. This is a big growth area.” But, she adds ruefully, it’s “not a big revenue growth area.”

So, ABC is tapping into social media, but is wary of those who have their own aims. E.g., Whitehouse.gov does reports that look like news reports but are not. The photos the White House hands out never show a yawning, exhausted, or weeping president. “I joke with the press secretary that we’re one step away from North Korea.” We’re heading toward each candidate having their own network, in effect, a closed circle.

Q&A

Q: You’ve described the fragmentation in the supply of news. But how about the demand? “Are you getting a sense of your audience?” What circulates? What sticks? What sets the agenda? etc.

A: We do a lot of audience research. Our mainstream TV shows attract an aging audience. No matter what we do, they’re not bringing in a new audience. Pretty much the older the audience, the more they like hard news. We’ve changed the pace of the Sunday shows. We think people want a broader lens from us. “We’re not as focused on horse race politics, or what John McCain thinks of every single issue. We’re open to new voices.”

Q: The future of health reporting? I’m disappointed with what I see. E.g., there’s little regard to the optics of how we’re treating Ebola, particularly with regard to the physicians getting treated back in the US.

A: Dr. Richard Besser, who ran the CDC, is at ABC and has reported from Africa. But it’s hit or miss. We did cover the white doctors getting the serum, but it’s hard to find in the firehose.

Q: How do you balance quality news with short attention spans?

A: For the Sunday shows we’ve tried to maintain a balance.

Q: Does ABC try to maintain its own pace, or go with the new pace? If the latter, how do you maintain quality?

A: We used to make a ton of money producing the news and could afford to go anywhere. Now we have the same number of hours of news on TV, but the audiences are shrinking and we’re trying to grow. It’s not as deep. It’s broader. We will want to find you…but you have to be willing to be found.

Q: How do you think about the segmentation of your news audience? And what are the differences in what you provide them?

A: We know which of our shows skew older (Sunday shows), or more female (Good Morning America), etc. We don’t want to leave any segment behind. We want our White House reporter to go into depth, but he also has to tweet all day, does a Yahoo show, does radio, accompanies Nancy Pelosi on a fast-walk, etc.

Q: Some of your audiences matter from a business point of view. But historically ABC has tried to supply news to policy makers etc. The 11-year-old kids may give you large numbers, but…

A: When we sit in our morning editorial meetings we never say that we will do a story because the 18-24 year olds are interested. The need to know, what we think is important, drives decisions. We used to be programming for “people like us” who want the news. Then we started getting thousands of “nutjob” emails. [I’m doing a bad job paraphrasing here. Sorry] Sam Donaldson was shocked. “This digital age has made us much more aware of all those different audiences.” We’re in more contact with our audience now. E.g., when the media were accused of pulling their punches in the run-up to the Iraq War, we’d get pushback saying we’re anti-American. Before, we didn’t get these waves.

Q: A fast-walk with Nancy Pelosi, really?

A: [laughs] It got a lot of hits.

Q: Can you elaborate on your audience polling? And do people not watch negative stories?

A: A Harvard prof told me last night that s/he doesn’t like watching the news any more because it’s just so depressing. But that’s a fact of life. Anyway, it used to be that the posted comments were very negative, and sometimes from really crazy people. We learned to put that into perspective. Now Twitter provides instant feedback. We’re slammed whatever we do. So we try to come up with a mix. For World News Tonight, people with different backgrounds talk about the stories, how they play off the story before it, etc. Recently we’ve been criticized for doing too much “news you can use”, how to live your life, etc. We want to give people news that isn’t always just terrible. There’s a lot of negative stuff that we’re exposed to now. [Again, sorry for the choppiness. My fault.]

Q: TV has always had the challenge of the limited time for news. With digital, how are you linking the on-screen reporting with the in-depth online stories, given the cutbacks? How do you avoid answering every tweet? [Not sure I got that right.]

A: We have a mix of products.

Q: What is the number one barrier to investigative journalism? How have new media changed that balance?

A: There are investigative reporting non-profits springing up all the time. There’s an appetite from the user for it. All of the major news orgs still have units doing it. But what is the business model? How much money do you apportion to each vertical in your news division? It’s driven by the appetite for it, how much money you have, what you’re taking it away from. Investigative is a growth industry.

Q: I was a spokesperson for Healthcare.gov and was interested in your comments about this Administration being more closed to the media.

A: They are more closed than prior admins. There’s always a message. When the President went out the other day to talk, no other admin members were allowed to talk with the media. I think it’s a response to how many inquiries are coming and how out of control info is, and how hard it is to respond to inaccuracies that pop up. The Obama administration has clamped down a little more because of that.

Q: You can think of Vice in Liberia as an example of boutique reporting: they do that one story. But ABC News has to cover everything. Do you see a viable future for us?

A: As we go further down this path and it becomes more overwhelming, there are some brands that stand for something. Curation is what we do well. Cyclically, people will go back to these brands.

Q: In the last couple of years, there’s a trend away from narrative to Gestalt. They were called news stories because they had a plot. Recent news events like Ferguson or Gaza were more like just random things. Very little story.

A: Twitter is a tool, a platform. It’s not really driving stories. Maybe it’s the nature of the stories. It’ll be interesting to see how social media are used by the candidates in the 2016 campaign.

Q: Why splitting the nightly news anchor from …

A: Traditionally the evening news anchor has been the chief anchor for the network. George Stephanopoulos anchors GMA, which makes most of the money. So no one wanted to move him to the evening news. And the evening news has become a little less relevant to our network. There’s been a diminishment in the stature of the evening news anchor. And it plays to GS’s strengths.


High-contrast transparency – How Glenn Greenwald could look like a monopolist

Glenn Greenwald mounts a mighty and effective defense against the charge leveled by Mark Ames at Pando.com that Greenwald and Laura Poitras are “monopolizing” and “privatizing” the 50,000-200,000 NSA documents entrusted to them by Edward Snowden.

Unlike Greenwald, I do think “it’s a question worth asking,” as Ames puts it — rather weaselly, since his post really is an attempt to supply an answer. It’s worth asking because of the new news venture funded by Pierre Omidyar that has hired Greenwald and Poitras. Greenwald argues (among other things) that the deal has nothing to do with profiting from their access to the Snowden papers; in fact, he says, by the time the venture gets off the ground, there may not be any NSA secrets left to reveal. But one can imagine a situation in which a newspaper hires a journalist with unique access to some highly newsworthy information in order to acquire and control that information. In this case, we have contrary evidence: Greenwald and Poitras have demonstrated their courage and commitment.

Greenwald’s defense overall is, first, that he and Poitras (Bart Gellman plays a lesser role in the article) have not attempted to monopolize the papers so far. On the contrary, they’ve been generous and conscientious in spreading the revelations to papers around the world. Second, getting paid for doing this is how journalism works.

To be fair, Ames’ criticism isn’t simply that Greenwald is making money, but that Omidyar can’t be trusted. I disagree, albeit without pretending to have any particular insight into Omidyar’s (or anyone’s) soul. (I generally have appreciated Omidyar’s work, but so what?) We do have reason to trust Greenwald, however. It’s inconceivable to me that Greenwald would let the new venture sit on NSA revelations for bad reasons.

But I personally am most interested in why these accusations have traction at all.

Before the Web, the charge that Greenwald is monopolizing the information wouldn’t even have made sense because there wasn’t an alternative. Yes, he might have turned the entire cache over to The Guardian or the New York Times, but then would those newspapers look like monopolists? No, they’d look like journalists, like stewards. Now there are options. Snowden could have posted the cache openly on a Web site. He could have created a torrent so that the documents would circulate forever. He could have given them to Wikileaks to curate. He could have sent them to 100 newspapers simultaneously. He could have posted them in encrypted form and given the key to the Dalai Lama or Jon Stewart. There is no end of options.

But Snowden didn’t. Snowden wanted the information curated, and redacted when appropriate. He trusted his hand-picked journalists more than any newspaper to figure out what “appropriate” means. We might disagree with his choice of method or of journalists, but we can understand it. The cache needs editing, contextualization, and redaction so that we understand it, and so that the legitimate secrets of states are preserved. (Are there legitimate state secrets? Let me explain: Yes.) Therefore, it needs stewardship.

Not so incidentally, the fact that we understand without a hiccup why Snowden entrusted individual journalists with the information, rather than giving it to even the most prestigious of newspapers, is another convincing sign of the collapse of our institutions.

It’s only because we have so many other options that entrusting the cache to journalists committed to stewarding it into the public sphere could ever be called “monopolizing” it. The word shouldn’t make any sense to us in this environment, yet it is having enough traction that Greenwald reluctantly wrote a long post defending himself. That the three recipients of the Snowden cache have been publishing it in newspapers all over the world makes them much less “monopolists” than traditional reporters are. Greenwald only needed to defend himself from this ridiculous charge because we now have a medium that can do what was never before possible: immediately and directly publish sets of information of any size. And we have a culture (with which I happily and proudly associate myself) that says openness is the default. But defaults were made to be broken. That’s why they’re defaults and not laws of nature or morality.

Likewise, when Ames criticizes Greenwald for profiting from these secrets because he gets paid as a journalist (which is separate from the criticism that working for Omidyar endangers the info — a charge I find non-credible), the charge makes even the slightest sense only because of the Web’s culture of Free, which, again, I am greatly enthusiastic about. As an institution of democracy, one might hope that newspapers would be as free as books in the public library — which is to say, the costs are hidden from the user — but it’s obvious what the problems are with government-funded news media. So, journalists get paid by the companies that hire them, and this by itself could only ever look like a criticism in an environment where Free is the default. We now have that environment, even if enabling journalism is one of the places where Free just doesn’t do the entire job.

That the charge that Glenn Greenwald is monopolizing or privatizing the Snowden information is even comprehensible to us is evidence of just how thoroughly the Web is changing our defaults and our concepts. Many of our core models are broken. We are confused. These charges are further proof, as if we needed it.


[2b2k] Does the Net make us stoopid?

Yesterday I participated as a color commentator in a 90 minute debate between Clive Thompson [twitter:pomeranian99] and Steve Easterbrook [twitter:smeasterbrook], put on by the CBC’s Q program. The topic was “Does the Net Make Us Smart or Stupid?” It airs today, and you can hear it here.

It was a really good discussion between Clive and Steve, without any of the trumped up argumentativeness that too often mars this type of public conversation. It was, of course, too short, but with a topic like this, we want it to bust its bounds, don’t we?

My participation was minimal, but that’s why we have blogs, right? So, here are two points I would have liked to pursue further.

First, if we’re going to ask if the Net makes us smart or stupid, we have to ask who we’re talking about. More exactly, who in what roles? So, I’d say that the Net’s made me stupider in that I spend more of my time chasing down trivialities. I know more about Miley Cyrus than I would have in the old days. Now I find that I’m interested in the Miley Phenomenon — the media’s treatment, the role of celebrity, the sexualization of everything, etc. — whereas before I would never have felt it worth a trip to the library or the purchase of an issue of Tiger Beat or whatever. (Let me be clear: I’m not that interested. But that’s the point: it’s all now just a click away.)

On the other hand, if you ask if the Net has made scholars and experts smarter, I think the answer has to be an almost unmitigated yes. Find me a scholar or expert who would turn off the Net when pursuing her topic. All discussions of whether the Net makes us smarter I think should begin by considering those who are in the business of being smart, as we all are at some points during the day.

Now, that’s not really as clear a distinction as I’d like. It’s possible to argue that the Net’s made experts stupider because it’s enabled people to become instant “experts” on topics. (Hat tip to Visiona-ary [twitter:0penCV] who independently raised this on Twitter.) We can delude ourselves into thinking we’re experts because we’ve skimmed the Wikipedia article or read an undergrad’s C- post about it. But is it really a bad thing that we can now get a quick gulp of knowledge in a field that we haven’t studied and probably never will study in depth? Only if we don’t recognize that we are just skimmers. At that point we find ourselves seriously arguing with a physicist about information’s behavior at the event horizon of a black hole as if we actually knew what we were talking about. Or, worse, we find ourselves disregarding our physician’s advice because we read something on the Internet. Humility is 95% of knowledge.

Here’s a place where learning some of the skills of journalists would be helpful for us all. (See Dan Gillmor’s MediActive for more on this.) After all, the primary skill of a particular class of journalists is their ability to speak for experts in a field in which the journalist is not her/himself expert. Journalists, however, know how to figure out who to consult, and don’t confuse themselves with experts. Modern media literacy means learning some of the skills and all of the humility of good journalists.

Second, Clive Thompson made the excellent and hugely important point that knowledge is now becoming public. In the radio show, I tried to elaborate on that in a way that I’m confident Clive already agrees with by saying that it’s not just public, it’s social, and not just social, but networked. Jian Ghomeshi, the host, raised the question of misinformation on the Net by pointing to Reddit’s misidentification of one of the Boston bombers. He even played a touching and troubling clip by the innocent person’s brother talking about the permanent damage this did to the family. Now, every time you look up “Sunil Tripathi” on the Web, you’ll see him misidentified as a suspect in the bombing.

I responded ineffectively by pointing to Judith Miller’s year of misreporting for the NY Times that helped move us into a war, to make the point that all media are error prone. Clive did a better job by citing a researcher who fact checked an entire issue of a newspaper and uncovered a plethora of errors (mainly small, I assume) that were never corrected and that are preserved forever in the digital edition of that paper.

But I didn’t get a chance to say the thing that I think matters more. So, go ahead and google “Sunil Tripathi”. You will have to work at finding anything that identifies him as the Boston Bomber. Instead, the results are about his being wrongly identified, and about his suicide (which apparently occurred before the false accusations were made).

None of this excuses the exuberantly irresponsible way a subreddit (i.e., a topic-based discussion) at Reddit accused him. And it’s easy to imagine a case in which such a horrible mistake could have driven someone to suicide. But that’s not my point. My point here is twofold.

First, the idea that false ideas once published on the Net continue forever uncorrected is not always the case. If we’re taking as our example ideas that are clearly wrong and are important, the corrections will usually be more obvious and available to us than in the prior media ecology. (That doesn’t relieve us of the responsibility of getting facts right in the first place.)

Second, this is why I keep insisting that knowledge now lives in networks the way it used to live in books or newspapers. You get the truth not in any single chunk but in the web of chunks that are arguing, correcting, and arguing about the corrections. This, however, means that knowledge is an argument, or a conversation, or is more like the webs of contention that characterize the field of living scholarship. There was an advantage to the old ecosystem in which there was a known path to authoritative opinions, but there were problems with that old system as well.

That’s why it irks me to take any one failure, such as the attempt to crowdsource the identification of the Boston murderers, as a trump card in the argument that the Net makes us stupider. To do so is to confuse the Net with an aggregation of public utterances. That misses the transformative character of the networking of knowledge. The Net’s essential character is that it’s a network, that it’s connected. We therefore have to look at the network that arose around those tragically wrong accusations.

So, search for Sunil Tripathi at Reddit.com and you will find a list of discussions at Reddit about how wrong the accusation was, how ill-suited Reddit is for such investigations, and how the ethos and culture of Reddit led to the confident condemning of an innocent person. That network of discussion — which obviously extends far beyond Reddit’s borders — is the real phenomenon…”real” in the sense that the accusations themselves arose from a network and were very quickly absorbed into a web of correction, introspection, and contextualization.

The network is the primary unit of knowledge now. For better and for worse.


[2b2k] The public ombudsman (or Facts don’t work the way we want)

I don’t care about expensive electric sports cars, but I’m fascinated by the dustup between Elon Musk and the New York Times.

On Sunday, the Times ran an article by John Broder on driving the Tesla S, an all-electric car made by Musk’s company, Tesla. The article was titled “Stalled Out on Tesla’s Electric Highway,” which captured the point quite concisely.

Musk on Wednesday in a post on the Tesla site contested Broder’s account, and revealed that every car Tesla lends to a reviewer has its telemetry recorders set to 11. Thus, Musk had the data that proved that Broder was driving in a way that could have no conceivable purpose except to make the Tesla S perform below spec: Broder drove faster than he claimed, drove circles in a parking lot for a while, and didn’t recharge the car to full capacity.

Boom! Broder was caught red-handed, and it was data that brung him down. The only two questions left were why did Broder set out to tank the Tesla, and would it take hours or days for him to be fired?

Except…

Rebecca Greenfield at Atlantic Wire took a close look at the data — at least at the charts and maps that express the data — and evaluated how well they support each of Musk’s claims. Overall, not so much. The car’s logs do seem to contradict Broder’s claim to have used cruise control. But the mystery of why Broder drove in circles in a parking lot seems to have a reasonable explanation: he was trying to find exactly where the charging station was in the service center.

But we’re not done. Commenters on the Atlantic piece have both taken it to task and provided some explanatory hypotheses. Greenfield has interpolated some of the more helpful ones, as well as updating her piece with testimony from the tow-truck driver, and more.

But we’re still not done. Margaret Sullivan [twitter:sulliview], the NYT “public editor” — a new take on what in the 1960s we started calling “ombudspeople” (although actually in the ’60s we called them “ombudsmen”) — has jumped into the fray with a blog post that I admire. She’s acting like a responsible adult by withholding judgment, and she’s acting like a responsible webby adult by talking to us even before all the results are in, acknowledging what she doesn’t know. She’s also been using social media to discuss the topic, and even to try to get Musk to return her calls.

Now, this whole affair is both typical and remarkable:

It’s a confusing mix of assertions and hypotheses, many of which are dependent on what one would like the narrative to be. You’re up for some Big Newspaper Schadenfreude? Then John Broder was out to do dirt to Tesla for some reason your own narrative can supply. You want to believe that old dinosaurs like the NYT are behind the curve in grasping the power of ubiquitous data? Yup, you can do that narrative, too. You think Elon Musk is a thin-skinned capitalist who’s willing to destroy a man’s reputation in order to protect the Tesla brand? Yup. Or substitute “idealist” or “world-saving environmentally-aware genius,” and, yup, you can have that narrative too.

Not all of these narratives are equally supported by the data, of course — assuming you trust the data, which you may not if your narrative is strong enough. Data signals but never captures intention: Was Broder driving around the parking lot to run down the battery or to find a charging station? Nevertheless, the data do tell us how many miles Broder drove (apparently just about the amount that he said) and do nail down (except under the most bizarre conspiracy theories) the actual route. Responsible adults like you and me are going to accept the data and try to form the story that “makes the most sense” around them, a story that likely is going to avoid attributing evil motives to John Broder and evil conspiratorial actions by the NYT.

But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk produced the numbers thinking that they’d bring us to accept his account. Greenfield went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.

That’s depressing. As always, I am not saying there are no facts, nor that they don’t matter. I’m just reporting empirically that facts don’t settle arguments the way we were told they would. Yet there is something profoundly wonderful and even hopeful about this case that is so typical and so remarkable.

Margaret Sullivan’s job is difficult in the best of circumstances. But before the Web, it must have been so much more terrifying. She would have been the single point of inquiry as the Times tried to assess a situation in which it has deep, strong vested interests. She would have interviewed Broder and Musk. She would have tried to find someone at the NYT or externally to go over the data Musk supplied. She would have pronounced as fairly as she could. But it would have all been on her. That’s bad not just for the person who occupies that position; it’s a bad way to get at the truth. But it was the best we could do. In fact, most of the purpose of the public editor/ombudsperson position before the Web was simply to reassure us that the Times does not think it’s above reproach.

Now every day we can see just how inadequate any single investigator is for any issue that involves human intentions, especially when money and reputations are at stake. We know this for sure because we can see what an inquiry looks like when it’s done in public and at scale. Of course lots of people who don’t even know that they’re grinding axes say all sorts of mean and stupid things on the Web. But there are also conversations that bring to bear specialized expertise and unusual perspectives, that let us turn the matter over in our hands, hold it up to the light, shake it to hear the peculiar rattle it makes, roll it on the floor to gauge its wobble, sniff at it, and run it through sophisticated equipment perhaps used for other purposes. We do this in public — I applaud Sullivan’s call for Musk to open source the data — and in response to one another.

Our old idea was that the thoroughness of an investigation would lead us to a conclusion. Sadly, it often does not. We are likely to disagree about what went on in Broder’s review, and how well the Tesla S actually performed. But we are smarter in our differences than we ever could be when truth was a lonelier affair. The intelligence isn’t in a single conclusion that we all come to — if only — but in the linked network of views from everywhere.

There is a frustrating beauty in the way that knowledge scales.


[2b2k] Science as social object

An article published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.

The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.

First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].

Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:

Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…

Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”

They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:

Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.

I agree with so much of this article, including its call for action, yet it felt odd to me that scientists would be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You are only surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.

But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.

So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.

We therefore cannot hope to peel the peer-to-peer commentary away from research as it circulates broadly on the Net (not that Brossard and Scheufele suggest we should). Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth, since that is where their research has its broadest effect.

 


Notes, copied straight from the article:

[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).

[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43.

[3] P. Ladwig et al., Mater. Today 13, 52 (2010).

[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc.com/home/2011/06/ctec-2011-abstracts.

Tags:

[2b2k] My world leader can beat up your world leader

There’s a knowingly ridiculous thread at Reddit at the moment: Which world leader would win if pitted against other leaders in a fight to the death.

The title is a straight line begging for punchlines. And it is a funny thread. Yet, I found it shockingly informative. The shock comes from realizing just how poorly informed I am.

My first reaction to the title was “Putin, duh!” That just shows you what I know. From the thread I learned that Joseph Kabila (Congo) and Boyko Borisov (Bulgaria) would kick Putin’s ass. Not to mention Jigme Khesar Namgyel Wangchuck (Bhutan), who would win on good looks.

Now, when I say that this thread is “shockingly informative,” I don’t mean that it gives sufficient or even relevant information about the leaders it discusses. After all, it focuses on their personal combat skills. Rather, it is an interesting example of the haphazard way information spreads when that spreading is participatory. So, we are unlikely to have sent around the Wikipedia article on Kabila or Borisov simply because we all should know about the people leading the nations of the world. Further, while there is more information about world leaders available than ever in human history, it is distributed across a huge mass of content from which we are free to pick and choose. That’s disappointing at the least and disastrous at the worst.

On the other hand, information is now passed around if it is made interesting, sometimes in jokey, demeaning ways, like an article that steers us toward beefcake (although the president of Ireland does make it up quite high in the Reddit thread). The information that gets propagated through this system is thus spotty and incomplete. It only becomes an occasion for serendipity if it is interesting, not simply because it’s worthwhile. But even jokey, demeaning posts can and should have links for those whose interest is piqued.

So, two unspectacular conclusions.

First, in our despair over the diminishing of a shared knowledge-base of important information, we should not ignore the off-kilter ways in which some worthwhile information does actually propagate through the system. Indeed, it is a system designed to propagate that which is off-kilter enough to be interesting. Not all of that “news,” however, is about water-skiing cats. Just most.

Second, we need to continue to have the discussion about whether there is in fact a shared news/knowledge-base that can be gathered and disseminated, whether there ever was, whether our populations ever actually came close to living up to that ideal, what price we paid for having a canon of news and knowledge, and whether the networking of knowledge opens up any positive possibilities for dealing with news and knowledge at scale. For example, perhaps a network is well-informed if it has experts on hand who can explain events in depth (and in interesting ways) on demand, rather than assuming that everyone has to be a little bit expert at everything.

Tags:

[2b2k] Information overload? Not so much. (Part 2)

Yesterday I tried to explain my sense that we’re not really suffering from information overload, while of course acknowledging that there is vastly more information out there than anyone could ever hope to master. Then a comment from Alex Richter helped me clarify my thinking.

We certainly do at times feel overwhelmed. But consider why you don’t feel like you’re suffering from information overload about, say, the history of stage costumes, Chinese public health policy, the physics of polymers, or whatever topic you would never have majored in, even though each of these topics contains an information overload. I think there are two reasons those topics don’t stress you.

First, and most obviously, because (ex hypothesi) you don’t care about that topic, you’re not confronted with having to hunt down some piece of information, and that topic’s information is not in your face.

But I think there’s a second reason. We have been taught by our previous media that information is manageable. “Give us 22 minutes and we’ll give you the world,” as the old radio slogan used to say. Read the daily newspaper — or Time or Newsweek once a week — and now you have read the news. That’s the promise implicit in the old media. But the new medium promises us instead edgeless topics and endless links. We know there is no possibility of consuming “the news,” as if there were such a thing. We know that whatever topic we start with, we won’t be able to stay within its bounds without doing violence to that topic. There is thus no possibility of mastering a field. So, sure, there’s more information than anyone could ever take in, but that relieves us of the expectation that we will master it. You can’t be overwhelmed if whelming is itself impossible.

So, I think our sense of being overwhelmed by information is an artifact of our being in a transitional age, with old expectations for mastery that the new environment gives the lie to.

No, this doesn’t mean that we lose all responsibility for knowing anything. Rather, it means we lose responsibility for knowing everything.

Tags:

[2b2k] Linking is a public good

Mathew Ingram at GigaOm has posted the Twitter stream that followed upon his tweet criticizing the Wall Street Journal for running an article based on a post by TechCrunch’s MG Siegler, who responded in an angry post.

Mathew’s point is that linking is a good journalistic practice, even if the author of the second article independently confirmed the information in the first, as happened in this case. Mathew thinks it’s a matter of trust, and if the repeater gets caught at it, it would indeed erode trust. Of course, they probably won’t get caught, and even if you did read the WSJ article after reading the TechCrunch post, you’d probably assume that the news was coming from a common source.

I think there’s another reason why reporters ought to link to their, um, inspirations: Links are a public good. They create a web that is increasingly rich, useful, diverse, and trustworthy. We should all feel an obligation to be caretakers of and contributors to this new linked public.

And there’s a further reason. In addition to building this new infrastructure of curiosity, linking is a small act of generosity that sends people away from your site to some other that you think shows the world in a way worth considering. Linking is a public service that reminds us how deeply we are social and public creatures.

Which I think helps explain why newspapers often are not generous with their links. A paper like the WSJ believes its value — as well as its self-esteem — comes from being the place you go for news. It covers the stories worth covering, and the stories tell you what you need to know. It is thus a stopping point in the ecology of information. And that’s the operational definition of authority: the last place you visit when you’re looking for an answer. If you are satisfied with the answer, you stop your pursuit of it. Take the links out and you think you look like more of an authority. To this mindset, links are a sign of weakness.

This made more sense when knowledge was paper-based, because in practical terms that’s pretty much how it worked: You got your news rolled up and thrown onto your porch once a day, and if you wanted more information about an article in it, you were pretty much SOL. Paper masked just how indebted the media were to one another. The media have always been an ecology of knowledge, but paper enabled them to pretend otherwise, and to base much of their economic value on that pretense.

Until newspapers are as heavily linked as GigaOm, TechCrunch, and Wikipedia, until newspapers revel in pointing away from themselves, they are depending on a value that was always unreal and now is unsustainable.

Tags: