Posts Tagged 2b2k

[2b2k] “In Over Our Heads” – My Simmons commencement address

On Friday, I had the tremendous honor of being awarded a Doctor of Letters degree from Simmons College, and giving the Commencement address at the Simmons graduate students’ ceremony.

Simmons is an inspiring place, and not only for its deep commitment to educating women. Being honored this way — especially along with Ruth Ellen Fitch, Madeleine M. Joullié, and Billie Jean King — made me very, very happy.

Thank you so much to Simmons College, President Drinan, and the Board of Trustees for this honor, which means the world to me. I’m just awed by it. Also, Professor Candy Schwartz and Dean Eileen Abels, a special thank you. And this honor is extra-specially meaningful because my father-in-law, Marvin Geller, is here today, and his sister, Jeannie Geller Mason, was, as she called herself, a Simmons girl, class of 1940. Afterwards, Marvin will be happy to sing you the old “We are the girls of Simmons C” college song if you ask.

So, first, to the parents: I have been in your seat, and I know how proud – and maybe relieved – you are. So, congratulations to you. And to the students, it’s such an honor to be here with you to celebrate your being graduated from Simmons College, a school that takes seriously the privilege of helping its students not only to become educated experts, but to lead the next cohort in their disciplines and professions.

Now, as I say this, I know that some of you may be shaking your inner heads, because a commencement speaker is telling you about how bright your futures are, but maybe you have a little uncertainty about what will happen in your professions and with your careers. That’s not only natural, it’s reasonable. But some of you – I don’t know how many – may be feeling, beyond that, an uncertainty about your own abilities. You’re being officially certified with an advanced degree in your field, but you may not quite feel the sense of mastery you expected.

In other words, you feel the way I do now. And the way I did in 1979 when I got my doctorate in philosophy. I knew well enough the work of the guy I wrote my dissertation on, but I looked out at the field and knew just how little I knew about so much of it. And I looked at other graduates, and especially at the scholars and experts who had been teaching us, and I thought to myself, “They know so much more than I do.” I could fake it pretty well, but actually not all that well.

So, I want to reassure those of you who feel the way that I did and do, I want to reassure you that that feeling of not really knowing what you should, that feeling may stay with you forever. In fact, I hope it does — for your sake, for your profession, and for all of us.

But before explaining, I need to let you in on the secret: You do know enough. It’s like Gloria Steinem’s response, when she was forty, to people saying that she didn’t look forty. Steinem replied, “This is what forty looks like.” And this is what being a certified expert in your field feels like. Simmons knows what deserving your degree means, and its standards are quite high. So, congratulations. You truly earned this and deserve it.

But here’s why it’s good to get comfortable with always having a little lack of confidence. First, if you admit no self-doubt, you lose your impulse to learn. Second, you become a smug know-it-all, and no one likes you. Third, and what’s even worse, you become a soldier in the army of ignorance. Your body language tells everyone else that their questions are a sign of weakness, which shuts down what should have been a space for learning.

The one skill I’ve truly mastered is asking stupid questions. And I don’t mean questions that I pretend are stupid but then, like Socrates, they show the folly of all those around me. No, they’re just dumb questions. Things I really should know by now. And quite often it turns out that I’m not the only one in the room with those questions. I’ve learned far more by being in over my head than by knowing what I’m talking about. And, as I’ll get to, we happen to be in the greatest time for being in over our heads in all of human history.

Let me give you just one quick example. In 1986 I became a marketing writer at a local tech startup called Interleaf that made text-and-graphic word processors. In 1986 that was a big deal, and what Interleaf was doing was waaaay over my head. So, I hung out with the engineers, and I asked the dumbest questions. What’s a font family? How can the spellchecker look up words as fast as you type them? When you fill a shape with, say, purple, how does the purple know where to stop? Really basic. But because it was clear that I was a marketing guy who was genuinely interested in what the engineers were doing, they gave me a lot of time and an amazing education. Those were eight happy years being in over my head.

I’m still way over my head in the world of libraries, which are incredibly deep institutions. Compared to “normal” information technology, the data libraries deal with is amazingly profound and human. And librarians have been very generous in helping me learn just a small portion of what they know. Again, this is in part because they know my dumb questions are spurred by a genuine desire to understand what they’re doing, down to the details.

In fact, going down to the details is one very good way to make sure that you are continually over your head. We will never run out of details. The world’s just like that: there’s no natural end to how closely you can look at a thing. And one thing I’ve learned is that everything is interesting if looked at at the appropriate level of detail.

Now, it used to be that you’d have to seek out places to plunge in over your head. But now, in the age of the Internets, all we have to do is stand still and the flood waters rise over our heads. We usually call this “information overload,” and we’re told to fear it. But I think that’s based on an old idea we need to get rid of.

Here’s what I mean. So, you know Flickr, the photo sharing site? If you go there and search for photos tagged “vista,” you’ll get two million photos, more vistas than you could look at if you made it your full-time job.

If you go to Google and search for apple pie recipes, you’ll get over 1.3 million of them. Want to try them all out to find the best one? Not gonna happen.

If you go to Google Images and search for “cute cats,” you’ll get over seven million photos of the most adorable kittens ever, as well as some ads and porn, of course, because Internet.

So that’s two million vista photos. 1.3 million apple pie recipes. 7.6 million cute cat photos. We’re constantly warned about information overload, yet we never hear a single word about the dangers of Vista Overload, Apple Pie Overload, or Cute Kitten Overload. How have the media missed these overloads! It’s a scandal!

I think there’s actually a pretty clear reason why we pay no attention to these overloads. We only feel overloaded by that which we feel responsible for mastering. There’s no expectation that we’ll master vista photos, apple pie recipes, or photos of cute cats, so we feel no overload. But with information it’s different because we used to have so much less of it that back then mastery seemed possible. For example, in the old days if you watched the daily half hour broadcast news or spent twenty minutes with a newspaper, you had done your civic duty: you had kept up with The News. Now we can see before our eyes what an illusion that sense of mastery was. There’s too much happening on our diverse and too-interesting planet to master it, and we can see it all happening within our browsers.

The concept of Information Overload comes from that prior age, before we accepted what the Internet makes so clear: There is too, too much to know. As we accept that, the idea of mastery will lose its grip. We’ll stop feeling overloaded even though we’re confronted with exactly the same amount of information.

Now, I want to be careful because we’re here to congratulate you on having mastered your discipline. And grad school is a place where mastery still applies: in order to have a discipline — one that can talk with itself — institutions have to agree on a master-able set of ideas, knowledge, and skills that are required for your field. And that makes complete sense.

But, especially as the Internet becomes our dominant medium of ideas, knowledge, culture, and entertainment, we are all learning just how much there is that we don’t know and will never know.

And it’s not just the quantity of information that makes true mastery impossible in the Age of the Internet. It’s also what it’s doing to the domains we want to master — the topics and disciplines. In the Encyclopedia Britannica — remember that? — an article on a topic extends from the first word to the last, maybe with a few suggested “See also’s” at the end. The article’s job is to cover the topic in that one stretch of text. Wikipedia has a different idea. At Wikipedia, the articles are often relatively short, but they typically have dozens or even hundreds of links. So rather than trying to get everything about, say, Shakespeare into a couple of thousand words, Wikipedia lets you click on links to other articles about what it mentions — to Stratford-on-Avon, or iambic pentameter, or about the history of women in the theater. Shakespeare at Wikipedia, in other words, is a web of linked articles. Shakespeare on the Web is a web. And it seems to me that that webby structure actually is a more accurate reflection of the shape of knowledge: it’s an endless series of connected ideas and facts, limited by interest, not an article that starts here and ends there. In fact, I’d say that Shakespeare himself was a web, and so am I, and so are you.

But if topics and disciplines are webs, then they don’t have natural and clear edges. Where does the Shakespeare web end? Who decides if the article about, say, women in the theater is part of the Shakespeare web or not? These webs don’t have clearcut edges. But that means that we also can’t be nearly as clear about what it means to master Shakespeare. There’s always more. The very shape of the Web means we’re always in over our heads.

And just one more thing about these messy webs. They’re full of disagreement, contradiction, argument, differences in perspective. Just a few minutes on the Web reveals a fundamental truth: We don’t agree about anything. And we never will. My proof of that broad statement is all of human history. How do you master a field, even if you could define its edges, when the field doesn’t agree with itself?

So, the concept of mastery is tough in this Internet Age. But that’s just a more accurate reflection of the way it always was even if we couldn’t see it because we just didn’t have enough room to include every voice and every idea and every contradiction, and we didn’t have a way to link them so that you can go from one to another with the smallest possible motion of your hand: the shallow click of a mouse button.

The Internet has therefore revealed the truth of what the less confident among us already suspected: We’re all in over our heads. Forever. This isn’t a temporary swim in the deep end of the pool. Being in over our heads is the human condition.

The other side of this is that the world is far bigger, more complex, and more unfathomably interesting than our little brains can manage. If we can accept that, then we can happily be in over our heads forever…always a little worried that we really are supposed to know more than we do, but also, I hope, always willing to say that out loud. It’s the condition for learning from one another…

…And if the Internet has shown us how overwhelmed we are, it’s also teaching us how much we can learn from one another. In public. Acknowledging that we’re just humans, in a sea of endless possibility, within which we can flourish only in our shared and collaborative ignorance.

So, I know you’re prepared because I know the quality of the Simmons faculty, the vision of its leadership, and the dedication of its staff. I know the excellence of the education you’ve participated in. You’re ready to lead in your field. May that field always be about this high over your head — the depth at which learning occurs, curiosity is never satisfied, and we rely on one another’s knowledge, insight, and love.

Thank you.


[2b2k] Digital Humanities: Ready for your 11AM debunking?

The New Republic continues to favor articles debunking claims that the Internet is bringing about profound changes. This time it’s an article on the digital humanities, titled “The Pseudo-Revolution,” by Adam Kirsch, a senior editor there. [This seems to be the article. Tip of the hat to Jose Afonso Furtado.]

I am not an expert in the digital humanities, but it’s clear to the people in the field whom I know that the meaning of the term is not yet settled. Indeed, the nature and extent of the discipline is itself a main object of study for those in the discipline. This means the field tends to attract those who think that the rise of the digital is significant enough to warrant differentiating the digital humanities from the pre-digital humanities. The revolutionary tone that bothers Adam so much is a natural if not inevitable consequence of the sociology of how disciplines are established. That of course doesn’t mean he’s wrong to critique it.

But Adam is exercised not just by revolutionary tone but by what he perceives as an attempt to establish claims through the vehemence of one’s assertions. That is indeed something to watch out for. But I think it also betrays a tin-eared reading by Adam. Those assertions are being made in a context the authors I think properly assume readers understand: the digital humanities is not a done deal. The case has to be made for it as a discipline. At this stage, that means making provocative claims, proposing radical reinterpretations, and challenging traditional values. While I agree that this can lead to thoughtless triumphalist assumptions by the digital humanists, it also needs to be understood within its context. Adam calls it “ideological,” and I can see why. But making bold and even over-bold claims is how discourses at this stage proceed. You challenge the incumbents, and then you challenge your cohort to see how far you can go. That’s how the territory is explored. This discourse absolutely needs the incumbents to push back. In fact, the discourse is shaped by the assumption that the environment is adversarial and the beatings will arrive in short order. In this case, though, I think Adam has cherry-picked the most extreme and least plausible provocations in order to argue against the entire field, rather than against its overreaching. We can agree about some of the examples and some of the linguistic extensions, but that doesn’t dismiss the entire effort the way Adam seems to think it does.

It’s good to have Adam’s challenge. Because his is a long and thoughtful article, I’ll discuss the thematic problems with it that I think are the most important.

First, I believe he’s too eager to make his case, which is the same criticism he makes of the digital humanists. For example, when talking about the use of algorithmic tools, he talks at length about Franco Moretti’s work, focusing on the essay “Style, Inc.: Reflections on 7,000 Titles.” Moretti used a computer to look for patterns in the titles of 7,000 novels published between 1740 and 1850, and discovered that they tended to get much shorter over time. “…Moretti shows that what changed was the function of the title itself.” As the market for novels got more crowded, the typical title went from being a summary of the contents to a “catchy, attention-grabbing advertisement for the book.” In addition, says Adam, Moretti discovered that sensationalistic novels tend to begin with “The” while “pioneering feminist novels” tended to begin with “A.” Moretti tenders an explanation, writing “What the article ‘says’ is that we are encountering all these figures for the first time.”
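(For what it’s worth, the computation involved is simple to sketch. Here is a toy Python version of the kind of counting Moretti describes: average title length by decade, plus a tally of how often titles open with “The” versus “A.” The four hard-coded titles are real novels from the period standing in for his 7,000; nothing here is Moretti’s actual code or corpus.)

```python
# A toy version of Moretti-style title analysis: average title length by
# decade, and a tally of opening words. Four real titles stand in for
# the full corpus of 7,000 novels published between 1740 and 1850.
from collections import defaultdict

titles = [
    (1744, "The Fortunate Foundlings"),
    (1791, "A Simple Story"),
    (1794, "The Mysteries of Udolpho"),
    (1811, "Sense and Sensibility"),
]

length_by_decade = defaultdict(list)
first_words = defaultdict(int)

for year, title in titles:
    decade = (year // 10) * 10
    words = title.split()
    length_by_decade[decade].append(len(words))
    first_words[words[0]] += 1

for decade in sorted(length_by_decade):
    lengths = length_by_decade[decade]
    print(decade, sum(lengths) / len(lengths), "words per title, on average")

print("Titles starting with 'The':", first_words["The"])
print("Titles starting with 'A':", first_words["A"])
```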

Adam concludes that while Moretti’s research is “as good a case for the usefulness of digital tools in the humanities as one can find” in any of the books under review, “its findings are not very exciting.” And, he says, you have to know which questions to ask the data, which requires being well-grounded in the humanities.

That you need to be well-grounded in the humanities to make meaningful use of digital tools is an important point. But here he seems to me to be arguing against a straw man. I have not encountered any digital humanists who suggest that we engage with our history and culture only algorithmically. I don’t profess expertise in the state of the digital humanities, so perhaps I’m wrong. But the digital humanists I know personally (including my friend Jeffrey Schnapp, a co-author of a book, Digital_Humanities, that Adam reviews) are in fact quite learned lovers of culture and history. If there is indeed an important branch of digital humanities that says we should entirely replace the study of the humanities with algorithms, then Adam’s criticism is trenchant…but I’d still want to hear from less extreme proponents of the field. In fact, in my limited experience, digital humanists are not trying to make the humanities safe for robots. They’re trying to increase our human engagement with and understanding of the humanities.

As to the point that algorithmic research can only “illustrate a truism rather than discovering a truth” — a criticism he levels even more fiercely at the Ngram research described in the book Uncharted — it seems to me that Adam is missing an important point. If computers can now establish quantitatively the truth of what we have assumed to be true, that is no small thing. For example, the Ngram work has established not only that Jewish sources were dropped from German books during the Nazi era, but also the timing and extent of the erasure. This not only helps make the humanities more evidence-based — remember that Adam criticizes the digital humanists for their argument-by-assertion — but also opens the possibility of algorithmically discovering correlations that overturn assumptions or surprise us. One might argue that we therefore need to explore these new techniques more thoroughly, rather than dismissing them as adding nothing. (Indeed, the NY Times review of Uncharted discusses surprising discoveries made via Ngram research.)
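(Mechanically, this sort of finding rests on a very simple computation: the relative frequency of a term, year by year, across a year-stamped corpus. Here is a minimal sketch; the corpus and the search term are stand-ins, since the real Ngram work runs over millions of scanned books.)

```python
# A minimal sketch of the mechanics behind Ngram-style findings: relative
# frequency of a term, per year, over a year-stamped corpus. The corpus
# below is an invented stand-in for illustration only.
corpus = {
    1930: ["... full text of a book published in 1930 ...", "..."],
    1938: ["... full text of a book published in 1938 ..."],
    1944: ["... full text of a book published in 1944 ..."],
}

def per_thousand_words(term, documents):
    """Occurrences of `term` per thousand words across `documents`."""
    term = term.lower()
    hits = total = 0
    for doc in documents:
        words = doc.lower().split()
        hits += words.count(term)
        total += len(words)
    return 1000 * hits / total if total else 0.0

for year in sorted(corpus):
    print(year, per_thousand_words("einstein", corpus[year]))
```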

Perhaps the biggest problem I have with Adam’s critique I’ve also had with some digital humanists. Adam thinks of the digital humanities as being about the digitizing of sources. He then dismisses that digitizing as useful but hardly revolutionary: “The translation of books into digital files, accessible on the Internet around the world, can be seen as just another practical tool…which facilitates but does not change the actual humanistic work of thinking and writing.”

First, that underplays the potential significance of making the works of culture and scholarship globally available.

Second, if you’re going to minimize the digitizing of books as merely the translation of ink into pixels, you miss what I think is the most important and transformative aspect of the digital humanities: the networking of knowledge and scholarship. Adam in fact acknowledges the networking of scholarship in a twisty couple of paragraphs. He quotes the following from the book Digital_Humanities:

The myth of the humanities as the terrain of the solitary genius…— a philosophical text, a definitive historical study, a paradigm-shifting work of literary criticism — is, of course, a myth. Genius does exist, but knowledge has always been produced and accessed in ways that are fundamentally distributed…

Adam responds by name-checking some paradigm-shifting works, and snidely adds “you can go to the library and check them out…” He then says that there’s no contradiction between paradigm-shifting works existing and the fact that “Scholarship is always a conversation…” I believe he is here completely agreeing with the passage he thinks he’s criticizing: genius is real; paradigm-shifting works exist; these works are not created by geniuses in isolation.

Then he adds what for me is a telling conclusion: “It’s not immediately clear why things should change just because the book is read on a screen rather than on a page.” Yes, that transposition doesn’t suggest changes any more worthy of research than the introduction of mass market paperbacks in the 1940s [source]. But if scholarship is a conversation, might moving those scholarly conversations themselves onto a global network raise some revolutionary possibilities, since that global network allows every connected person to read the scholarship and its objects, lets everyone comment, provides no natural mechanism for promoting any works or comments over any others, inherently assumes a hyperlinked rather than sequential structure of what’s written, makes it easier to share than to sequester works, is equally useful for non-literary media, makes it easier to transclude than to include so that works no longer have to rely on briefly summarizing the other works they talk about, makes differences and disagreements much more visible and easily navigable, enables multiple and simultaneous ordering of assembled works, makes it easier to include everything than to curate collections, preserves and perpetuates errors, is becoming ubiquitously available to those who can afford connection, turns the Digital Divide into a gradient while simultaneously increasing the damage done by being on the wrong side of that gradient, is reducing the ability of a discipline to patrol its edges, and a whole lot more.

It seems to me reasonable to think that it is worth exploring whether these new affordances, limitations, relationships and metaphors might transform the humanities in some fundamental ways. Digital humanities too often is taken simply as, and sometimes takes itself as, the application of computing tools to the humanities. But it should be (and for many, is) broad enough to encompass the implications of the networking of works, ideas and people.

I understand that Adam and others are trying to preserve the humanities from being abandoned and belittled by those who ought to be defending the traditional in the face of the latest. That is a vitally important role, for, as a field struggling to establish itself, digital humanities is prone to overstating its case. (I have been known to do so myself.) But in my understanding, that criticism assumes that digital humanists want to replace all traditional methods of study with computer algorithms. Does anyone?

Adam’s article is a brisk challenge, but in my opinion he argues too hard against his foe. The article becomes ideological, just as he claims the explanations, justifications and explorations offered by the digital humanists are.

More significantly, focusing only on the digitizing of works and ignoring the networking of their ideas and the people discussing those ideas glosses over the locus of the most important changes occurring within the humanities. Insofar as the digital humanities focus on digitization instead of networking, I intend this as a criticism of that nascent discipline even more than as a criticism of Adam’s article.


[2b2k] In defense of the library Long Tail

Two percent of Harvard’s library collection circulates every year. A high percentage of the works that are checked out are the same as the books that were checked out last year. This fact can cause reflexive tsk-tsking among librarians. But — with some heavy qualifications to come — this is as it should be. The existence of a Long Tail is not a sign of failure or waste. To see this, consider what it would be like if there were no Long Tail.

Harvard’s 73 libraries have 16 million items [source]. There are 21,000 students and 2,400 faculty [source]. If we guess that half of the library items are available for check-out, which seems conservative, that would mean that 160,000 different items (two percent of the eight million circulating items) are checked out every year. If there were no Long Tail, then no book would be checked out more than any other. In that case, it would take the Harvard community an even fifty years before anyone would have read the same book as anyone else. And a university community in which across two generations no one has read the same book as anyone else is not a university community.
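(Here is the back-of-the-envelope arithmetic spelled out — my reconstruction of it, anyway:)

```python
# The post's arithmetic, made explicit. The numbers come from the text;
# the no-Long-Tail scenario assumes every checkout is of a different book.
items_total = 16_000_000                         # items across Harvard's 73 libraries
circulating = items_total // 2                   # guess: half are available for checkout
checked_out_per_year = int(0.02 * circulating)   # 2% circulate each year

print(checked_out_per_year)                      # 160000 different items per year
print(circulating / checked_out_per_year)        # 50.0 years before any repeat
```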

I know my assumptions are off. For example, I’m not counting books that are read in the library and not checked out. But my point remains: we want our libraries to have nice long tails. Library long tails are where culture is preserved and discovery occurs.

And, having said that, it is perfectly reasonable to work to lower the difference between the Fat Head and the Long Tail, and it is always desirable to help people to find the treasures in the Long Tail. Which means this post is arguing against a straw man: no one actually wants to get rid of the Long Tail. But I prefer to put it this way: this post argues against a reflex of thought I find within myself and have encountered in others. The Long Tail is a requirement for the development of culture and ideas, and at the same time, we should always help users to bring riches out of the Long Tail.


[2b2k] Protein Folding, 30 years ago

Simply in terms of nostalgia, this 1985 video called “Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project” from the Stanford archives is charming right down to its Tron-like digital soundtrack.

But it’s also really interesting if you care about the way we’ve thought about knowledge. The Stanford Heuristic Programming Project under Edward Feigenbaum did groundbreaking work in how computers represent knowledge, emphasizing the content and not just the rules. (Here is a 1980 article about the Project and its projects.)

And then at the 8:50 mark, it expresses optimism that an expert system would be able to represent not only every atom of proteins but how they fold.

Little could anyone have predicted that even 30 years later protein folding would be recognized better by human brains than by computers, and that humans playing a game — Fold.It — would produce useful results.

It’s certainly the case that we have expert systems all over the place now, from Google Maps to the Nest thermostat. But we also see another type of expert system that was essentially unpredictable in 1985. One might think that the domain of computer programming would be susceptible to being represented in an expert system because it is governed by a finite set of perfectly knowable rules, unlike the fields the Stanford project was investigating. And there are of course expert systems for programming. But where do the experts actually go when they have a problem? To StackOverflow where other human beings can make suggestions and iterate on their solutions. One could argue that at this point StackOverflow is the most successful “expert system” for computer programming in that it is the computer-based place most likely to give you an answer to a question. But it does not look much like what the Stanford project had in mind, for how could even Edward Feigenbaum have predicted what human beings can and would do if connected at scale?

(Here’s an excellent interview with Feigenbaum.)


[shorenstein] Andy Revkin on communicating climate science

I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.

In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.

He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog, in a conversation drawing upon other people’s expertise, than in an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.

Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.

A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 (“concerned,” not yet “alarmed”). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45 mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn’t changed since Gallup began asking the question in 1989. [link] Furthermore, global warming doesn’t show up as one of the issues they worry about.

The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.

Q: Are we heading toward a climate tipping point?

A: There isn’t evidence that tipping points in climate are real, and if they are, we can’t really predict them. [link]

Q: The permafrost isn’t going to melt?

A: No, it is melting. But we don’t know if it will be catastrophic.

Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.

“Journalism is a shrinking wedge of a growing pie of ways to tell stories.”

“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.

Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?

A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.

Q: Why is it so hard to get out of catastrophism and into thinking about solutions?

A: Journalism usually focuses on the down side. If there’s no “Woe is me” element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp-farming in Belize. It’s got thousands of views but it’s not on the nightly news. How do we shift our norms in the media?

[david ropiek] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”

A: Yes. And polls show that what people say about global warming depends on the weather outside that day.

Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?

A: It was good. But is it going to change things? The Extremes report likewise. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel, who says that most politics is people pulling on disconnected levers.

Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?

A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.

Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?

A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I’m not sure it will change things. But I’m all for transparency. It’s an incredibly powerful tool, like when the US Embassy was doing its own twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who’s using the palm oil: Buying a KitKat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.

Q: In the past 6 months there seems to have been a jump in climate change coverage. No?

A: I don’t think there’s more coverage.

Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?

A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.

Q: How about using data visualizations instead of words?

A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.

Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn't hear the rest of the question].

A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.

Q: [me] Should we still be engaging with deniers? With whom should we be talking?

A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wildfire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That’s an issue that people agree on across the spectrum. When it comes to deniers, we have to ask: what exactly are you denying? Particular data? Scientific method? Physics? I’ve come to the conclusion that even if we had perfect information, we still wouldn’t galvanize the action we need.

[Andy ends by singing a song about liberated carbon. That's not something you see every day at the Shorenstein Center.]

[UPDATE (the next day): I added some more links.]


Why I love the Web, Reason #4,763: The Pulp-o-mizer

So much beautiful work has gone into the free service that is the Pulp-o-mizer — a brilliant way to create your own retro sf covers. It took under 5 minutes to create each of these:

[Two Pulp-o-mizer cover images]

Thank you, Pulp-o-mizer! Thank you, Web!


Linked Data for Libraries: And we’re off!

I’m just out of the first meeting of the three universities participating in a Mellon grant — Cornell, Harvard, and Stanford, with Cornell as the grant instigator and leader — to build, demonstrate, and model using library resources expressed as Linked Data as a tool for researchers, students, teachers, and librarians. (Note that I’m putting all this in my own language, and I was certainly the least knowledgeable person in the room. Don’t get angry at anyone else for my mistakes.)

This first meeting, two days long, was very encouraging indeed: it’s a superb set of people, we are starting out on the same page in terms of values and principles, and we enjoyed working with one another.

The project is named Linked Data for Libraries (LD4L) (minimal home page), although that doesn’t entirely capture it, for the actual beneficiaries of it will not be libraries but scholarly communities taken in their broadest sense. The idea is to help libraries make progress with expressing what they know in Linked Data form so that their communities can find more of it, see more relationships, and contribute more of what the communities learn back into the library. Linked Data is not only good at expressing rich relations, it makes it far easier to update the dataset with relationships that had not been anticipated. This project aims at helping libraries continuously enrich the data they provide, and making it easier for people outside of libraries — including application developers and managers of other Web sites — to connect to that data.
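To make that concrete, here is a minimal sketch of a catalog record expressed as Linked Data triples, using Python’s rdflib. The URIs, the example namespace, and the assignsReading property are hypothetical illustrations of the general idea, not the ontologies LD4L has actually chosen:

```python
# A minimal sketch of a catalog record as Linked Data, using rdflib.
# The example.edu URIs and the assignsReading property are invented
# for illustration; they are not LD4L's actual vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, FOAF

LIB = Namespace("http://example.edu/library/")

g = Graph()
book = LIB["item/12345"]
author = LIB["person/woolf"]

g.add((book, DCTERMS.title, Literal("To the Lighthouse")))
g.add((book, DCTERMS.creator, author))
g.add((author, FOAF.name, Literal("Virginia Woolf")))

# The point about Linked Data: a relationship nobody anticipated at
# design time -- say, that a course assigns this book -- is just
# another triple. No schema migration required.
course = LIB["course/engl101"]
g.add((course, LIB.assignsReading, book))

print(g.serialize(format="turtle"))
```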

As the grant proposal promised, we will use existing ontologies, adapting them only when necessary. We do expect to be working on an ontology for library usage data of various sorts, an area in which the Harvard Library Innovation Lab has done some work, so that’s very exciting. But overall this is the opposite of an attempt to come up with new ontologies. Thank God. Instead, the focus is on coming up with implementations at all three universities that can serve as learning models, and that demonstrate the value of having interoperable sets of Linked Data across three institutions. We are particularly focused on showing the value of the high-quality resources that libraries provide.

There was a great deal of emphasis in the past two days on partnerships and collaboration. And there was none of the “We’ll show ‘em where they got it wrong, by gum!” attitude that in my experience all too often infects discussions on the pioneering edge of standards. So, I just got to spend two days with brilliant library technologists who are eager to show how a new generation of tech, architecture, and thought can amplify the already immense value of libraries.

There will be more coming about this effort soon. I am obviously not a source for tech info; that will come soon and from elsewhere.


A gift from God

I know I’m late to the love fest, but I’ve been down with the flu. I read Pope Francis’ Message for World Communication Day when it was issued on Jan. 24, and I only get happier upon re-reading it.

NOTE please that I am outside of my comfort zone in this posting, for two reasons. First, I am not a Christian and I know I may be misreading the Pope’s words. Second, I am going to evaluate and expound on what a Pope says. Chutzpah* has a new poster boy! So, please think of this only as me trying to make personal sense of a message that I find profoundly hopeful. *[The joke this links to is not really about chutzpah, but it's a pretty good joke.]


The first thing to note is the ways the message refuses to go wrong. The Catholic Church put the “higher” in “hierarchy,” so it’d be understandable if it viewed the Internet as a threat to its power. Or as a source of sinful temptation. Because it’s both of those things. The Pope might even have seen the Internet quite positively as a powerful communication medium for getting out the Church’s message.

But he doesn’t. He sees the Internet as “Communication at the Service of an Authentic Culture of Encounter,” as the post’s subtitle puts it. This is because he views the Internet not within the space of communication, but within the despair of a fragmented world. Not only are there vast inequalities, but these inequalities are literally before our eyes:

Often we need only walk the streets of a city to see the contrast between people living on the street and the brilliant lights of the store windows. We have become so accustomed to these things that they no longer unsettle us.

Traditional media can show us that other world, but we need something more. We need to be unsettled. “The internet, in particular, offers immense possibilities for encounter and solidarity,” Pope Francis says, and then adds a remarkable characterization:

This is something truly good, a gift from God.

Not: The Internet is a source of temptations to be resisted. Not: The Internet is just the latest over-hyped communication technology, and remember when we thought telegraphs would bring world peace? Not: The Internet is merely a technology and thus just another place for human nature to reassert itself. Not: The Internet is just a way for the same old powers to extend their reach. Not: The Internet is an opportunity to do good, but be wary because we can also do evil with it. It may be many of those. But first: The Internet — its possibilities for encounter and solidarity — is truly good. The Internet is a gift from God.

This is not the language I would use. I’m an agno-atheistical Jew who lives in solidarity with an Orthodox community. (Long story.) But I think – you can never tell with these cross-tradition interpretations – that the Pope’s words express the deep joy the Internet brings me. “This is not to say that certain problems do not exist,” the Pope says in the next paragraph, listing the dangers with a fine concision. But still: The Internet is truly good. Why?

For the Pope, the Internet is an opportunity to understand one another by hearing one another directly. This understanding of others, he says, will lead us to understand ourselves in the context of a world of differences:

If we are genuinely attentive in listening to others, we will learn to look at the world with different eyes and come to appreciate the richness of human experience as manifested in different cultures and traditions.

This will change our self-understanding as well, without requiring us to abandon our defining values:

We will also learn to appreciate more fully the important values inspired by Christianity, such as the vision of the human person, the nature of marriage and the family, the proper distinction between the religious and political spheres, the principles of solidarity and subsidiarity, and many others.

(This seems to me to be a coded sentence, with meanings not readily apparent to those outside the fold. Sorry if I’m misunderstanding its role in the overall posting.)

The Pope does not shy away from the difficult question this idea raises, and pardon me for having switched the order of the following two sentences:

What does it mean for us, as disciples of the Lord, to encounter others in the light of the Gospel? How…can communication be at the service of an authentic culture of encounter?

That is (I think), how can a Catholic engage with others who deny beliefs that the Catholic holds with all the power of faith? (And this is obviously not a question only for Catholics.) The Pope gives a beautiful answer: we should “see communication in terms of ‘neighbourliness’.”

“Communication” as the transferring of meaning is a relatively new term. The Pope’s answer asks us to bring it back from its abstract understanding. Certainly the Pope’s sense takes “communication” out of the realm of marketing that sees it as the infliction of a message on a market. It also enriches it beyond the information science version of communication as the moving of an encoded message through a medium. (Info science of course understands that its view is not the complete story.) It instead looks at communication as something that humans do within a social world:

Those who communicate, in effect, become neighbours. The Good Samaritan not only draws nearer to the man he finds half dead on the side of the road; he takes responsibility for him. Jesus shifts our understanding: it is not just about seeing the other as someone like myself, but of the ability to make myself like the other. Communication is really about realizing that we are all human beings, children of God.

From my point of view [more here, here, and here], the problem with our idea of communication is that it assumes it’s the overcoming of apartness. We imagine individuals with different views of themselves and their world who manage to pierce their solitude by spewing out some sounds and scribbles. Communication! But, those sounds and scribbles only work because they occur within a world that is already shared, and we only bother because the world we share and those we share it with matter to us. Communication implies not isolation and difference but the most profound togetherness and sameness imaginable. Or, as I wouldn’t put it, we are all children of G-d.

Then Pope Francis gets to his deeper critique, which I find fascinating: “Whenever communication is primarily aimed at promoting consumption or manipulating others, we are dealing with a form of violent aggression…” And “Nowadays there is a danger that certain media so condition our responses that we fail to see our real neighbour.” The primary threat to the Internet, then, is to treat it as if it were a traditional medium that privileges the powerful and serves their interests. Holy FSM!

Pope Francis then goes on to draw the deeper conclusion he has led us to: the threat isn’t fundamentally that the old media will use the Net for their old purposes. The actual threat is considering the Internet to be a communications medium at all. “It is not enough to be passersby on the digital highways, simply ‘connected’; connections need to grow into true encounters.”

The impartiality of media is merely an appearance; only those who go out of themselves in their communication can become a true point of reference for others.

The most basic image we have of how communication works is wrong: messages do not simply move through media. Rather, in the Pope’s terms, they are acts of engagement. This is clear in face-to-face communication among neighbors, and it seems clear to me on the Net: A tweet that no one retweets goes silent because its recipients have chosen not to act as its medium. A page that no one links to is only marginally on the Web because its recipients have chosen not to create a new link (a channel or medium) that incorporates that page more deeply into the network. The recipient-medium distinction fails on the Net, and the message-medium distinction fails on the Web.

Now, this does not mean that Internet communication is all about people always encountering one another as neighbors. Not hardly. So the Pope’s post then considers how Christians can engage with others on the Net without simply broadcasting their beliefs. Here his Catholic particularity starts to shape his vision in a way that differentiates it from my own. He sees the Internet as “a street teeming with people who are often hurting, men and women looking for salvation or hope,” whereas I would probably have begun with something about joy. (I’m pointing out a difference, not criticizing!)

Given the tension between his belief that faith has a way to alleviate the pain he perceives and his desire for truly mutual engagement, he talks about “Christian witness.” I don’t grasp the nuances of this concept (“We are called to show that the Church is the home of all” – er, no thank you), but I do appreciate the Pope’s explicit contrasting Christian witness with “bombarding people with religious messages.” Rather, he says (quoting his predecessor), it’s about

… our willingness to be available to others “by patiently and respectfully engaging their questions and their doubts as they advance in their search for the truth and the meaning of human existence.”

Since that seems to imply (I think) a dialogue in which one side assumes superiority and refuses the possibility of changing, the new Pope explains that

To dialogue means to believe that the “other” has something worthwhile to say, and to entertain his or her point of view and perspective. Engaging in dialogue does not mean renouncing our own ideas and traditions, but the claim that they alone are valid or absolute.

The Pope is dancing here. He’s dancing, I believe, because he is adopting the language of communication. If the role of the Catholic is to engage in dialogue, then we are plunged into the problems of the world’s plural beliefs. We western liberals like to think that in an authentic dialogue, both sides are open to change, but the Pope does not want to suggest that Catholics put their faith up for grabs. So, the best he can do is say that the “other’s” viewpoint be “entertained” and treated as worthwhile…although apparently not worthwhile enough to be adopted by the faithful Catholic.

There are two points important for me to make right now. First, I’m not carping about the actual content. This sort of pluralism (or whatever label you want to apply) takes the pressure off a world that simply cannot survive absolutism. So, thank you, Pope Francis! Second, I personally think it’s bunk to insist that for a dialogue to be “authentic” both sides have to be open to change. Such an insistence comes from a misunderstanding about how understanding works. So while I personally would prefer that everyone carry a mental reservation that appends “…although I might be wrong” to every statement,* I don’t have a problem with the Pope’s formulation of what an authentic Christian dialogue looks like. *[I simply don't know the Catholic Church's position on faith and doubt.]

I find this all gets much simpler – you get a nice walk instead of a dance – if you stick with the program announced in the Pope’s post’s subtitle: dethroning communication and putting it into the service of neighborliness. I believe the Pope’s vision of the Net as a place where neighbors can help one another lovingly and mercifully gives us a better way to frame the Net and the opportunity it presents. I assume his talk of “Christian witness” and becoming “a true point of reference for others” also gets around the “dialoguing” difficulty.

In fact, the whole problem recedes if you drop all language of communication from the Pope’s message. For example, when the Pope says the faithful should “dialogue with people today … to help them encounter Christ,” the hairs on my Jewish pate go up; if there’s one thing I don’t want to do, it’s to dialogue with a Christian who wants to help me encounter Christ. Framing the Net as being about communication (or information, for that matter) leads us back into the incompatible ideas of truth we encounter. But if we frame the Internet as being about people being human to one another, people being neighbors, the differences in belief are less essential and more tolerable. Neighbors manifest love and mercy. Neighbors find value in their differences. Neighbors first, communicators on occasion and preferably with some beer or a nice bottle of wine.

Neighbors first. I take that as the Pope’s message, and I think it captures the gift the Internet gives us. It also makes clear the challenge. The Net of course poses challenges to our souls or consciences, to our norms and our expectations, to our willingness to accept others into our hearts, but also a challenge to our understanding: Stop thinking about the Net as being about communication. Start thinking about it as a place where we can choose to be more human to one another.

That I can say Amen to.

I apologize for I am forcing the Pope’s comments into my own frame of understanding. I am happy to have that frame challenged. I ask only that you take me as, well, your neighbor.

In a note from the opposite end of the spectrum, Eszter Hargittai has posted an op-ed. You probably know Eszter as one of the most respected researchers into the skills required to succeed with the Internet – no, not everyone can just waltz onto the Net and benefit equally from it – and she is not someone who finds antisemitism everywhere she looks. What’s going on in Hungary is scary. Read her op-ed.


[2b2k] Social Science in the Age of Too Big to Know

Gary King [twitter:kinggarry], Director of Harvard’s Institute for Quantitative Social Science, has published an article (Open Access!) on the current status of this branch of science. Here’s the abstract:

The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit focused inward to having a major impact on public policy, commerce and industry, other academic fields, and some of the major problems that affect individuals and societies. In the midst of all this productive chaos, we have been building the Institute for Quantitative Social Science at Harvard, a new type of center intended to help foster and respond to these broader developments. We offer here some suggestions from our experiences for the increasing number of other universities that have begun to build similar institutions and for how we might work together to advance social science more generally.

In the article, Gary argues that Big Data requires Big Collaboration to be understood:

Social scientists are now transitioning from working primarily on their own, alone in their offices — a style that dates back to when the offices were in monasteries — to working in highly collaborative, interdisciplinary, larger scale, lab-style research teams. The knowledge and skills necessary to access and use these new data sources and methods often do not exist within any one of the traditionally defined social science disciplines and are too complicated for any one scholar to accomplish alone.

He begins by giving three excellent examples of how quantitative social science is opening up new possibilities for research.

1. Latanya Sweeney [twitter:LatanyaSweeney] found “clear evidence of racial discrimination” in the ads served up by newspaper websites.

2. A study of all 187M registered voters in the US showed that a third of those listed as “inactive” in fact cast ballots, “and the problem is not politically neutral.”

3. A study of 11M social media posts from China showed that the Chinese government is not censoring speech but is censoring “attempts at collective action, whether for or against the government…”

Studies such as these “depended on IQSS infrastructure, including access to experts in statistics, the social sciences, engineering, computer science, and American and Chinese area studies.”

Gary also points to “the coming end of the quantitative-qualitative divide” in the social sciences, as new techniques enable massive amounts of qualitative data to be quantified, enriching purely quantitative data and extracting additional information from the qualitative reports.

Instead of quantitative researchers trying to build fully automated methods and qualitative researchers trying to make do with traditional human-only methods, now both are heading toward using or developing computer-assisted methods that empower both groups.
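As a toy illustration of what “computer-assisted” can mean at its simplest, here is a sketch that applies a human analyst’s coding scheme to open-ended survey answers so that qualitative responses can be counted and compared. The answers and keyword lists are invented, and real projects use far more sophisticated methods:

```python
# A toy example of computer-assisted coding of qualitative data: tag
# open-ended survey answers with topic labels a human analyst defined,
# so the responses can be tallied. All data here is invented.
from collections import Counter

codes = {
    "economy": {"jobs", "wages", "prices"},
    "health": {"hospital", "insurance", "doctor"},
}

answers = [
    "I worry most about jobs and wages in my town",
    "Getting a doctor appointment takes months",
    "Prices keep going up and my insurance does not help",
]

tally = Counter()
for answer in answers:
    words = set(answer.lower().split())
    for code, keywords in codes.items():
        if words & keywords:
            tally[code] += 1

print(tally)  # Counter({'economy': 2, 'health': 2})
```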

We are seeing a redefinition of social science, he argues:

We instead use the term “social science” more generally to refer to areas of scholarship dedicated to understanding, or improving the well-being of, human populations, using data at the level of (or informative about) individual people or groups of people.

This definition covers the traditional social science departments in faculties of schools of arts and science, but it also includes most research conducted at schools of public policy, business, and education. Social science is referred to by other names in other areas but the definition is wider than use of the term. It includes what law school faculty call “empirical research,” and many aspects of research in other areas, such as health policy at schools of medicine. It also includes research conducted by faculty in schools of public health, although they have different names for these activities, such as epidemiology, demography, and outcomes research.

The rest of the article reflects on pragmatic issues, including what this means for the sorts of social science centers to build, since community is “by far the most important component leading to success…” If academic research became part of the X-games, our competitive event would be “extreme cooperation.”


[2b2k] From thinkers to memes

The history of Western philosophy usually has a presumed shape: there’s a known series of Great Men (yup, men) who in conversation with their predecessors came up with a coherent set of ideas. You can list them in chronological order, and cluster them into schools of thought with their own internal coherence: the neo-Platonists, the Idealists, etc. Sometimes, the schools and not the philosophers are the primary objects in the sequence, but the topology is basically the same. There are the Big Ideas and the lesser excursions, the major figures and the supporting players.

Of course the details of the canon are always in dispute in every way: who is included, who is major, who belongs in which schools, who influenced whom. A great deal of scholarly work is given over to just such arguments. But there is some truth to this structure itself: philosophers traditionally have been shaped by their tradition, and some have had more influence than others. There are also elements of a feedback loop here: you need to choose which philosophers you’ll teach in philosophy courses, so you act responsibly by first focusing on the majors, and by so doing you confirm for the next generation that the ones you’ve chosen are the majors.

But I wonder if in one or two hundred years philosophers (by which I mean the PT-3000 line of Cogbots™) will mark our era as the end of the line — the end of the linear sequence of philosophers. Rather than a sequence of recognized philosophers in conversation with their past and with one another, we now have a network of ideas being passed around, degraded by noise and enhanced by pluralistic appropriation, but without owners — at least without owners who can hold onto their ideas long enough to be identified with them in some stable form. This happens not simply because networks are chatty. It happens not simply because the transmission of ideas on the Internet occurs through a p2p handoff in which each of the p’s re-expresses the idea. It happens also because the discussion is no longer confined to a handful of extensively trained experts with strict ideas about what is proper in such discussions, and who share a nano-culture that supersedes the values and norms of their broader local cultures.

If philosophy survives as anything more than the history of thought, perhaps we will not be able to outline its grand movements by pointing to a handful of thinkers but will point to the webs through which ideas passed, or, more exactly, the ideas around which webs are formed. Because no idea passes through the Web unchanged, it will be impossible to pretend that there are “ideas-in-themselves” — nothing like, say, Idealism, which has a core definition, albeit with a history of significant variations. There is no idea that is not incarnate, and no incarnation that is not itself a web of variations in conversation with itself.

I would spell this out for you far more precisely, but I don’t know what I’m talking about, beyond an intuition that the tracks end at the trampled field in which we now live.
