Archive for category philosophy

[2b2k] From thinkers to memes

The history of Western philosophy usually has a presumed shape: there’s a known series of Great Men (yup, men) who in conversation with their predecessors came up with a coherent set of ideas. You can list them in chronological order, and cluster them into schools of thought with their own internal coherence: the neo-Platonists, the Idealists, etc. Sometimes, the schools and not the philosophers are the primary objects in the sequence, but the topology is basically the same. There are the Big Ideas and the lesser excursions, the major figures and the supporting players.

Of course the details of the canon are always in dispute in every way: who is included, who is major, who belongs in which schools, who influenced whom. A great deal of scholarly work is given over to just such arguments. But there is some truth to this structure itself: philosophers traditionally have been shaped by their tradition, and some have had more influence than others. There are also elements of a feedback loop here: you need to choose which philosophers you’ll teach in philosophy courses, so you act responsibly by first focusing on the majors, and by so doing you confirm for the next generation that the ones you’ve chosen are the majors.

But I wonder if in one or two hundred years philosophers (by which I mean the PT-3000 line of Cogbots™) will mark our era as the end of the line — the end of the linear sequence of philosophers. Rather than a sequence of recognized philosophers in conversation with their past and with one another, we now have a network of ideas being passed around, degraded by noise and enhanced by pluralistic appropriation, but without owners — at least without owners who can hold onto their ideas long enough to be identified with them in some stable form. This happens not simply because networks are chatty. It happens not simply because the transmission of ideas on the Internet occurs through a p2p handoff in which each of the p’s re-expresses the idea. It happens also because the discussion is no longer confined to a handful of extensively trained experts with strict ideas about what is proper in such discussions, and who share a nano-culture that supersedes the values and norms of their broader local cultures.

If philosophy survives as anything more than the history of thought, perhaps we will not be able to outline its grand movements by pointing to a handful of thinkers but will point to the webs through which ideas passed, or, more exactly, the ideas around which webs are formed. Because no idea passes through the Web unchanged, it will be impossible to pretend that there are “ideas-in-themselves” — nothing like, say, Idealism which has a core definition albeit with a history of significant variations. There is no idea that is not incarnate, and no incarnation that is not itself a web of variations in conversation with itself.

I would spell this out for you far more precisely, but I don’t know what I’m talking about, beyond an intuition that the tracks end at the trampled field in which we now live.


[2b2k] Margaret Sullivan on Objectivity

Margaret Sullivan [twitter:Sulliview] is the public editor of the New York Times. She’s giving a lunchtime talk at the Harvard Shorenstein Center [twitter:ShorensteinCtr]. Her topic: how is social media changing journalism? She says she’s open to any other topic during the Q&A as well.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Margaret says she’s going to talk about Tom Kent, the standards editor for the Associated Press, and Jay Rosen [twitter:jayrosen_nyu]. She begins by saying she respects them both. [Disclosure: Jay is a friend] She cites Tom [which I'm only getting roughly]: At heart, objective journalism sets out to establish the facts, state the range of opinions, and take a first cut at which arguments are the most rigorous. Journalists should show their commitment to balance by keeping their opinions to themselves. Tom wrote a memo to his staff (leaked to Romenesko) about expressing personal opinions on social networks. [Margaret wrote an excellent column about this a month ago.]

Jay Rosen, she says, thinks that objectivity is an outdated concept. Journalists should tell their readers where they’re coming from so you can judge their output based on that. “The grounds for trust are slowly shifting. The view from nowhere is getting harder to trust, and ‘here’s where I’m coming from’ is becoming more trustworthy.” [approx] Objectivity is a cop out, says Jay.

Margaret says that these are the two poles, although both are very reasonable people.

Now she’s going to look at two real situations. The NYT Jerusalem bureau chief Jodi Rudoren is relatively new. It is one of the most difficult positions. Within a few weeks she had sent some “twitter messages” (NYT won’t allow the word “tweets,” she says, although when I tweeted this, some people disagreed; Alex Jones and Margaret bantered about this, so she was pretty clear about the policy.). She was criticized for who she praised in the tweets, e.g., Peter Beinart. She also linked without comment to a pro-Hezbollah newspaper. The NYT had an editor “work with her” on her social media; that is, she no longer had free access to those media. Margaret notes that many believe “this is against the entire ethos of social media. If you’re going to be on social media, you don’t want a NYT editor sitting next to you.”

The early reporting from Newtown was “pretty bad” across the entire media, she says. In the first few hours, a shooter was named — Ryan Lanza — and a Facebook photo of him was shown. But it was the wrong Ryan Lanza. And then it turned out it was that other Ryan Lanza’s brother. The NYT in its early Web reporting said “according to early Web reports” the shooter was Ryan Lanza. Lots of other wrong information was floated, and got into early Web reports (although generally not into the NYT). “Social media was a double-edged sword because it perpetuated these inaccuracies and then worked to correct them.” It often happens that way, she says.

So, where’s the right place to be on the spectrum between Tom and Jay? “It’s no longer possible to be completely faceless. Journalists are on social media. They’re honing their personal brands. Their newspapers are there…They’re trying to use the Web to get their message out, and in that process they’re exposing who they are. Is that a bad thing? Is it a bad thing for us to know what a political reporter’s politics are? I don’t think that question is easily answerable now. I come down a little closer to where Tom Kent is. I think that it makes a lot of sense for hard news reporters … for the White House reporter, I think it makes a lot of sense to keep their politics under wraps. I don’t see how it helps for people to be prejudging and distrusting them because ‘You’re in the tank for so-and-so.’” Phil Corbett, the standards editor for the NYT, rejects the idea there is no impartial journalism. He rejects that it’s a pretense or charade.

Margaret says, “The one thing I’m very sure of is that this business of impartiality and balance should no longer mean” going down the middle in a he-said-she-said. That’s false equivalence. “That’s changing and should change.” There are facts that we fully believe are true. Evolution and Creationism are not equivalents.

Q&A

Q: Alex Jones: It used to be that the NYT wouldn’t let you cite an anonymous negative comment, along the lines of “This or that person sucks.”

A: Everyone agrees doing so is bad, but I still see it from time to time.

Q: Alex Jones: The NYT policy used to be that you must avoid an appearance of conflict of interest. E.g., a reporter’s son was in the Israeli Army. Should that reporter be forbidden from covering Israel?

A: When Ethan Bronner went to cover Israel, his son wasn’t in the military. But then his son decided to go join up. “It certainly wasn’t ideal.” Should Ethan have been yanked out the moment his son joined? I’m not sure, Margaret says. It’s certainly problematic. I don’t know the answer.

Q: Objectivity doesn’t always draw a clear line. How do you engage with people whose ideas are diametrically opposed to yours?

A: Some issues are extremely difficult and you’re probably not going to come to a meeting of the minds on it. Be respectful. Accept that you’re not going to make much headway.

Q: Wouldn’t transparency fragment the sources? People will only listen to sources that agree.

A: Yes, this further fractures a fractured environment. It’s useful to have some news sources that set out to be in neither camp. The DC bureau chief of the NYT knows a lot about economics. For him to tell us about his views on that is helpful, but it doesn’t help to know who he voted for.

Q: [Martin Nisenholz] The NYT audience is smart but it hasn’t lit up the NYT web site. Do you think the NYT should be a place where people can freely offer their opinions/reviews even if they’re biased? E.g., at Yelp you don’t know if the reviewer is the owner, a competitor… How do you feel about this central notion of user ID and the intersection with commentary?

A: I disagree that readers haven’t lit up the web site. The commentary beneath stories is amazing…

Q: I meant in reviews, not hard news…

A: A real ID policy improves the tenor.

Q: How about the snarkiness of Twitter?

A: The best way to be mocked on Twitter is to be earnest. It’s a place to be snarky. It’s regrettable. Reporters should be very careful before they hit the “tweet” button. The tone is a problem.

Q: If you want to build a community — and we reporters are constantly pushed to do that — you have to engage your readers. How can you do that without disclosing your stands? We all have opinions, and we share them with a circle we feel safe in. But sometimes those leak. I’d hope that my paper would protect me.

A: I find Twitter to be invaluable. Incredible news source. Great way to get your message out. The best thing for me is not people’s sarcastic comments. It’s the link to a story. It’s “Hey, did you see this?” To me that’s the most useful part. Even though I describe it as snarky, I’ve also found it to be a very supportive place. When you take a stand, as I did on Sunday about the press not holding things back for national security reasons, you can get a lot of support there. You just have to be careful. Use it for the best possible reasons: to disseminate info, rather than to comment sarcastically.

Q: Between Kent and Rosen, I don’t think there is some higher power of morality that decides this. It depends on where you sit and what you own. If you own NYT, you have billions of dollars in good will you’ve built up. Your audience comes to you with a certain expectation. There’s an inherent bias in what they cover, but also expectations about an effort toward objectivity. Social media is a distribution channel, not a place to bare your soul. A foreign correspondent for Time made a late-night blog post. (“I’d put a breathalyzer on keyboards,” he says.) A seasoned reporter said offhandedly that maybe the victim of some tragedy deserved it. This got distributed via social media as Time Mag’s position. Reporters’ tweets should be edited first. The institution has every right to have a policy that constrains what reporters say on social media. But now there are legal cases. Social media has become an inalienable right. In the old days, the WSJ fired a reporter for handing out political leaflets in a subway station. If you’re Jay Rosen and your business is to throw bombs at the institutional media, and to say everything you do is wrong [!], then that’s ok. But if you own a newspaper, you have to stand up for objectivity.

A: I don’t disagree, although I think Jay is a thoughtful person.

Q: I blog on the HuffPo. But at Harvard, blogging is not considered professional. It’s thought of as tossed off…

A: A blog is just a delivery system. It’s not inherently good or bad, slapdash or well-researched. It’s a way to get your message out.

A: [Alex Jones] Actually there’s a fair number of people who blog at Harvard. The Berkman Center, places like that. [Thank you, Alex :)]

Q: How do you think about the evolution of your job as public editor? Are you thinking about how you interact with the readers and the rhythm of how you publish?

A: When I was brought in 5 months ago, they wanted to take it to the new media world. I was very interested in that. The original idea was to get rid of the print column altogether. But I wanted to do both. I’ve been doing both. It’s turned into a conversation with readers.

Q: People are deeply convinced of wrong ideas. Goebbels’ diaries show an upside down world in which Churchill is a gangster. How do you know what counts as fact?

A: Some things are just wrong. Paul Ryan was wrong about criticizing Obama for allowing a particular GM plant to close. The plant closed before Obama took office. That’s a correctable. When it’s more complex, we have to hear both sides out.


Then I got to ask the last question, which I asked so clumsily that it practically forced Margaret to respond, “Then you’re locking yourself into a single point of view, and that’s a bad way to become educated.” Ack.

I was trying to ask the same question as the prior one, but to get past the sorts of facts that Margaret noted. I think it’d be helpful to talk about the accuracy of facts (about which there are questions of their own, of course) and focus the discussion of objectivity at least one level up the hermeneutic stack. I tried to say that I don’t feel bad about turning to partisan social networks when I need an explanation of the meaning of an event. For my primary understanding I’m going to turn to people with whom I share first principles, just as I’m not going to look to a Creationism site to understand some new paper about evolution. But I put this so poorly that I drew the Echo Chamber rebuke.

What it really comes down to, for me, is the theory of understanding and knowledge that underlies the pursuit of objectivity. Objectivity imagines a world in which we understand things by considering all sides from a fresh, open start. But in fact understanding is far more incremental, far more situated, and far more pragmatic than that. We understand from a point of view and a set of commitments. This isn’t a flaw in understanding. It is what enables understanding.

Nor does this free us from the responsibility to think through our opinions, to sympathetically understand opposing views, and to be open to the possibility that we are wrong. It’s just to say that understanding has a job to do. In most cases, it does that job by absorbing the new into our existing context. There is a time and place for revolution in our understanding. But that’s not the job we need to do as we try to make sense of the world pressing in on us. Reason can’t function in the world the way objectivity would like it to.


I’m glad the NY Times is taking these questions seriously, and Margaret is impressive (and not just because she takes Jay Rosen very seriously). I’m a little surprised that we’re still talking about objectivity, however. I thought that the discussion had usefully broken the concept up into questions of accuracy, balance, and fairness — with “balance” coming into question because of the cowardly he-said-she-said dodges that have become all too common, and that Margaret decries. I’m not sure what the concept of objectivity itself adds to this mix except a set of difficult assumptions.


Attending to appearances

I picked up a copy of Bernard Knox’s 1994 Backing into the Future because somewhere I saw it referenced in connection with the weird fact that the ancient Greeks thought that the future was behind them. Knox presents evidence from The Odyssey and Oedipus the King to back this up, so to speak. But that’s literally on the first page of the book. The rest of it consists of brilliant and brilliantly written essays about ancient life and scholarship. Totally enjoyable.

True, he undoes one of my favorite factoids: that Greeks in Homer’s time did not have a concept of the body as an overall unity, but rather only had words for particular parts of the body. This notion comes most forcefully from Bruno Snell in The Discovery of the Mind, although I first read about it — and was convinced — by a Paul Feyerabend essay. In his essay “What Did Achilles Look Like?,” Knox convincingly argues that the Greeks had both a word and a concept for the body as a unity. In fact, they may have had three. Knox then points to Homeric uses that seem to indicate, yeah, Homer was talking about a unitary body. E.g., “from the bath he [Odysseus] stepped, in body [demas] like the immortals,” and Poseidon “takes on the likeness of Calchas, in bodily form,” etc. [p. 52] I don’t read Greek, so I’ll believe whatever the last expert tells me, and Knox is the last expert I’ve read on this topic.

In a later chapter, Knox comes back to Bernard Williams’s criticism, in Shame and Necessity, of the “Homeric Greeks had no concept of a unitary body” idea, and also discusses another wrong thing that I had been taught. It turns out that the Greeks did have a concept of intention, decision-making, and will. Williams argues that they may not have had distinct words for these things, but Homer “and his characters make distinctions that can only be understood in terms of” those concepts. Further, Williams writes that Homer has

no word that means, simply, “decide.” But he has the notion… All that Homer seems to have left out is the idea of another mental action that is supposed necessarily to lie between coming to a conclusion and acting on it: and he did well in leaving it out, since there is no such action, and the idea of it is the invention of bad philosophy. [p. 228]

Wow. Seems pretty right to me. What does the act of “making a decision” add to the description of how we move from conclusion to action?

Knox also has a long appreciation of Martha Nussbaum’s The Fragility of Goodness (1986), which makes me want to go out and get that book immediately, although I suspect that Knox is making it considerably more accessible than the original. But it sounds breathtakingly brilliant.

Knox’s essay on Nussbaum, “How Should We Live,” is itself rich with ideas, but one piece particularly struck me. In Book 6 of the Nicomachean Ethics, Aristotle dismisses one of Socrates’ claims (that no one knowingly does evil) by saying that such a belief is “manifestly in contradiction with the phainomena.” I’ve always heard the word “phainomena” translated in (as Knox says) Baconian terms, as if Aristotle were anticipating modern science’s focus on the facts and careful observation. We generally translate phainomena as “appearances” and contrast it with reality. The task of the scientist and the philosopher is to let us see past our assumptions to reveal the thing as it shows itself (appears) free of our anticipations and interpretations, so we can then use those unprejudiced appearances as a guide to truths about reality.

But Nussbaum takes the word differently, and Knox is convinced. Phainomena are “the ordinary beliefs and sayings” and the sayings of the wise about things. Aristotle’s method consisted of straightening out whatever confusions and contradictions are in this body of beliefs and sayings, and then showing that at least the majority of those beliefs are true. This is a complete inversion of what I’d always thought. Rather than “attending to appearances” meaning dropping one’s assumptions to reveal the thing in its untouched state, it actually means taking those assumptions — of the many and of the wise — as containing truth. It is a confirming activity, not a penetrating and an overturning. Nussbaum says for Aristotle (and in contrast to Plato), “Theory must remain committed to the ways human beings live, act, see.” (Note that it’s entirely possible I’m getting Aristotle, Nussbaum, and Knox wrong. A trifecta of misunderstanding!)

Nussbaum’s book sounds amazing, and I know I should have read it, oh, 20 years ago, but it came out the year I left the philosophy biz. And Knox’s book is just wonderful. If you ever doubted why we need scholars and experts — why would you think such a thing? — this book is a completely enjoyable reminder.


[2b2k] Facts, truths, and meta-knowledge

Last night I gave a talk at the Festival of Science in Genoa (or, as they say in Italy, Genova). I was brought over by Codice Edizioni, the publisher of the just-released Italian version of Too Big to Know (or, as they say in Italy “La Stanza Intelligente” (or as they say in America, “The Smart Room”)). The event was held in the Palazzo Ducale, which ain’t no Elks Club, if you know what I mean. And if you don’t know what I mean, what I mean is that it’s a beautiful, arched, painted-ceiling room that holds 800 people and one intimidated American.

[Image: Palazzo Ducale, Genova]


After my brief talk, Serena Danna of Corriere della Sera interviewed me. She’s really good. For example, her first question was: If the facts no longer have the ability to settle arguments the way we hoped they would, then what happens to truth?


Yeah, way to pitch the ol’ softballs, Serena!


I wasn’t satisfied with my answer, which had three parts. (1) There are facts. The world is one way and not all the other ways that it isn’t. You are not free to make up your own facts. [Yes, I'm talking to you, Mitt!] (2) The basing of knowledge primarily on facts is a relatively new phenomenon. (3) I explicitly invoked Heidegger’s concept of truth, with a soupçon of pragmatism’s view of truth as a tool intended to serve a purpose.


Meanwhile, I’ve been watching The Heidegger Circle mailing list contort itself trying to understand Heidegger’s views about the world that existed before humans entered the scene. Was there Being? Were there beings? It seems to me that any answer has to begin by saying, “Of course the world existed before we did.” But not everyone on the list is comfortable with a statement that simple. Some seem to think that acknowledging that most basic fact somehow diminishes Heidegger’s analysis of the relation of Being and disclosure. Yo, Heideggerians! The world shows itself to us as independent of us. We were born into it, and it keeps going after we’ve died. If that’s a problem for your philosophy, then your philosophy is a problem. And for all of the problems with Heidegger’s philosophy, that just isn’t one. (To be fair, no one on the list suggests that the existence of the universe depends upon our awareness of it, although some are puzzled about how to reconcile Heidegger’s conception of “world” (which does seem to depend on us) with that which survives our awareness of it. Heidegger, after all, offers phenomenological ontology, so there is a question about what Being looks like when there is no one for it to show itself to.)


So, I wasn’t very happy with what I said about truth last night. I said that I liked Heidegger’s notion that truth is the world showing itself to us, and it shows itself to us differently depending on our projects. I’ve always liked this idea for a few reasons. First, it’s phenomenologically true: the onion shows itself differently whether you’re intending to cook it, whether you’re trying to grow it as a cash crop, whether you’re trying to make yourself cry, whether you’re trying to find something to throw at a bad actor, etc. Second, because truth is the way the world shows itself, Heidegger’s sense contains the crucial acknowledgement that the world exists independently of us. Third, because this sense of truth looks at our projects, it contains the crucial acknowledgement that truth is not independent of our involvement in the world (which Heidegger accurately characterizes not with the neutral term “involvement” but as our caring about what happens to us and to our fellow humans). Fourth, this gives us a way of thinking about truth without the correspondence theory’s schizophrenic metaphysics that tells us that we live inside our heads, and our mental images can either match or fail to match external reality.


But Heidegger’s view of truth doesn’t do the job that we want done when we’re trying to settle disagreements. Heidegger observes (correctly in my and everybody’s opinion) that different fields have different methodologies for revealing the truth of the world. He speaks coldly (it seems to me) of science, and warmly of poetry. I’m much hotter on science. Science provides a methodology for letting the world show itself (= truth) that is reproducible precisely so that we can settle disputes. For settling disputes about what the world is like regardless of our view of it, science has priority, just as the legal system has priority for settling disputes over the law.


This matters a lot not just because of the spectacular good that science does, but because the question of truth only arises because we sense that something is hidden from us. Science does not uncover all truths but it uniquely uncovers truths about which we can agree. It allows the world to speak in a way that compels agreement. In that sense, of all the disciplines and methodologies, science is the closest to giving the earth we all share its own authentic voice. That about which science cannot speak in a compelling fashion across all cultures and starting points is simply not subject to scientific analysis. Here the poets and philosophers can speak and should be heard. (And of course the compulsive force science manifests is far from beyond resistance and doubt.)


But, when we are talking about the fragmenting of belief that the Internet facilitates, and the fact that facts no longer settle arguments across those gaps, then it is especially important that we commit to science as the discipline that allows the earth to speak of itself in its most compelling terms.


Finally, I was happy that last night I did manage to say that science provides a model for trying to stay smart on the Internet because it is highly self-aware about what it knows: it does not simply hold on to true statements, but is aware of the methodology that led us to see those statements as true. This type of meta awareness — not just within the realm of science — is crucial for a medium as open as the Internet.


[2b2k] Truth, knowledge, and not knowing: A response to “The Internet Ruins Everything”

Quentin Hardy has written up on the NYT Bits blog the talk I gave at UC Berkeley’s School of Information a few days ago, refracting it through his intelligence and interests. It’s a terrific post and I appreciate it. [Later that day: Here's another perspicacious take on the talk, from Marcus Banks.]

I want to amplify the answer I gave to Quentin’s question at the event. And I want to respond to the comments on his post that take me as bemoaning the fate of knowledge in the age of the Net. The post itself captures my enthusiasm about networked knowledge, but the headline of Quentin’s post is “The Internet ruins everything,” which could easily mislead readers. I am overall thrilled about what’s happening to knowledge.

Quentin at the event noted that the picture of networked knowledge I’d painted maps closely to postmodern skepticism about the assumption that there are stable, eternal, knowable truths. So, he asked, did we invent the Net as a tool based on those ideas, or did the Net just happen to instantiate them? I replied that the question is too hard, but that it doesn’t much matter that we can’t answer it. I don’t think I did a very good job explaining either part of my answer. (You can hear the entire talk and questions here. The bit about truth starts at 46:36. Quentin’s question begins at 1:03:19.)

It’s such a hard question because it requires us to disentangle media from ideas in a way that the hypothesis of entanglement itself doesn’t allow. Further, the play of media and ideas occurs on so many levels of thought and society, and across so many forms of interaction and influence, that the results are emergent.

It doesn’t matter, though, because even if we understood how it works, we still couldn’t stand apart from the entanglement of media and ideas to judge those ideas independent of our media-mediated involvement with them. We can’t ever get a standpoint that isn’t situated within that entanglement. (Yes, I acknowledge that the idea that ideas are always situated is itself a situated idea. Nothing I can do about that.)

Nevertheless, I should add that almost everything I’ve written in the past fifteen years is about how our new medium (if that’s what the Net is (and it’s not)) affects our ideas, so I obviously find some merit in looking at the particulars of how media shape ideas, even if I don’t have a general theory of how that chaotic dance works.

I can see why Quentin may believe that I have “abandoned the idea of Truth,” even though I don’t think I have. I talked at the I School about the Net being phenomenologically more true to avoid giving the impression that I think our media evolve toward truth the way we used to think (i.e., before Thomas Kuhn) science does. Something more complex is happening than one approximation of truth replacing a prior, less accurate approximation.

And I have to say that this entire topic makes me antsy. I have an awkward, uncertain, unresolved attitude about the nature of truth. The same as many of us. I claim no special insight into this at all. Nevertheless, here goes…

My sense that truth and knowledge are situated in one’s culture, history, language, and personal history comes from Heidegger. I also take from Heidegger my sense of “phenomenological truth,” which takes truth as being the ways the world shows itself to us, rather than as an inner mental representation that accords with an outer reality. This is core to Heidegger and phenomenology. There are many ways in which we enable the world to show itself to us, including science, religion and art. Those ways have their own forms and rules (as per Wittgenstein). They are genuinely ways of knowing the world, not mere “games.” Nor are the truths these engagements reveal “pictures of reality” (to use Quentin’s phrase). They are — and I’m sorry to get all Heideggerian on you again — ways of being in the world. We live them. They are engaged, embodied truths, not mere representations or cognitions.

So, yes, I am among the many who have abandoned the idea of Truth as an inner representation of an outer reality from which we are so essentially detached that some of the greatest philosophers in the West have had to come up with psychotic theories to explain how we can know our world at all. (Leibniz, Spinoza, and Descartes, you know who I’m talking about.) But I have not abandoned the idea that the world is one way and not another. I have not abandoned the idea that beliefs can seem right but be wrong. I have not abandoned the importance of facts and evidence within many crucial discourses. Nor have I abandoned the idea that it is supremely important to learn how the world is. In fact, I may have said in the talk, and do say (I think) in the book that networked knowledge is becoming more like how scientists have understood knowledge for generations now.

So, for me the choice isn’t between eternal verities that are independent of all lived historical situations and the chaos of no truth at all. We can’t get outside of our situation, but that’s ok because truth and knowledge are only possible within a situation. If the Net’s properties are closer to the truth of our human condition than, say, broadcast’s properties were, that truth of our human condition itself is situated in a particular historical-cultural moment. That does not lift the obligation on us poor human beings to try to understand, cherish, and engage with our world as truthfully as we possibly can.

But the main thing is, no, I don’t think the Net is ruining everything, and I am (overall) thrilled to see how the Net is transforming knowledge.


What “I know” means

If meaning is use, as per Wittgenstein and John Austin, then what does “know” mean?

I’m going to guess that the most common usage of the term is in the phrase “I know,” as in:

1. “You have to be careful what you take Lipitor with.” “I know.”
2. “The science articles have gotten really hard to read in Wikipedia.” “I know.”
3. “This cookbook thinks you’ll just happen to have strudel dough just hanging around.” “I know.”
4. “The books are arranged by the author’s last name within any one topic area.” “I know.”
5. “They’re closing the Red Line on weekends.” “I know!”

In each of these, the speaker is not claiming to have an inner state of belief that is justifiable and true. The speaker is using “I know” to shape the conversation and the social relationship with the initial speaker.

1., 4. “You can stop explaining now.”
2., 3. “I agree with you. We’re on the same side.”
5. “I agree that it’s outrageous!”

And I won’t even mention words like “surely” and “certainly” that are almost always used to indicate that you’re going to present no evidence for the claim that follows.


[2b2k] Re-reading myselves

When I run into someone who wants to talk with me about something I’ve written in a book, they quite naturally assume that I am more expert about what I’ve written than they are. But it’s almost certainly the case that they’re more familiar with it than I am because they’ve read it far more recently than I have. I, like most writers, don’t sit around re-reading myself. I therefore find myself having to ask the person to remind me of what I’ve said. Really, I said that?

But, over the past twenty-four hours, I’ve re-read myself in three different modes.

I’ve been wrapped up in a Library Innovation Lab project that we submitted to the Digital Public Library of America on Thursday night, with 1.5 hours to spare before the midnight deadline. Our little team worked incredibly hard all summer long, and what we submitted we think is pretty spectacular as a vision, a prototype of innovative features, and in its core, workhorse functionality. (That’s why I’ve done so little blogging this summer.)

So, the first example of re-reading is editing a bunch of explanatory Web pages — a FAQ, a non-tech explanation of some hardcore tech, a guided tour, etc. — that I wrote for our DPLA project. In this mode, I feel little connection to what I’ve written; I’m trying to edit it purely from the reader’s point of view, as if someone else had written it. Of course, I am oblivious to many of the drafts’ most important shortcomings because I’m reading them through the same glasses I had on when I wrote them. Things make sense to me that would not to readers who have the good fortune not to be me. Nevertheless, it’s just a carpentry job, trying to sand down edges and make the pieces fit. It’s the wood that matters, not whoever the carpenter happened to be.

In the second mode, I re-read something I wrote a long time ago. Someone on the Heidegger mailing list I audit asked for articles on Heidegger’s concept of the “world” in Being and Time and in The Origin of the Artwork. I remembered that I had written something about that a couple of careers ago. So, I did a search and found “Earth, World and the Fourfold” in the 1984 edition of Tulane Studies in Philosophy. (It’s locked up nice and tight so, no, you can’t read it even if you want to. Yeah, this is a completely optimal system of scholarship we’ve built for ourselves. [sarcasm]) I used my privileged access via my university and re-read it. It’s a fully weird experience. I remember so little of the content of the article and am so disassociated from the academic (or more exactly, the pathetic pretender to the same) I was that it was like reading a message from a former self. Actually, it wasn’t like that. It was that exactly.

I actually enjoyed reading the article. For one thing, unsurprisingly, I agreed with its general outlook and approach. It argues that Heidegger’s shifting use of “world,” especially with regard to that which he contrasts it with, expresses his struggle to deal with the danger that phenomenology will turn reality into a mere appearance. How can phenomenology account for that which shows itself to us as being beyond the mere showing? That is, how do we understand and acknowledge the fact that the earth shows itself to us as that which was here before us and will outlast us?

Since this was the topic of my doctoral dissertation and has remained a topic of great interest to me — it runs throughout all my books, including Too Big to Know — it’s not startling that I found Previous Me’s article interesting. And yet, Present Me persistently asked two sorts of distancing questions.

First, granting that the question itself is interesting, why was this guy (Previous Me) so wrapped up in Heidegger’s way of grappling with it? To get to Heidegger’s answers (such as they are) you have to wade through a thicket wrapped in profound scholarship wrapped in arrogantly awful writing. Now, Present Me remembers the personal history that led Previous Me to Heidegger: an identity crisis (as we used to call it) that manifested itself intellectually, that could not be addressed by pre-Heideggerian traditional philosophy (because that tradition of philosophy caused the intellectual conundrum in the first place). But outside of that personal history, why Heidegger? So, the article reads to Present Me as a wrestling match within a bubble invisible to Previous Me.

Second, my internal editor was present throughout: Damn, that was an inelegant phrase! Wait, this paragraph needs a transition! What the hell did that sentence mean? Jeez, this guy sounds pretentious here!

So, reading something of mine from the distant past was a tolerable and even interesting experience because Previous Me was distant enough.

Third, I am this weekend reading the page proofs of Too Big to Know. At this point in the manufacturing process known as “writing a book,” I am allowed only to make the most minor of edits. If a change causes lines on the page to shift to a new page, there can be consequences expensive to my publisher. So, I’m reading looking for bad commas and “its” instead of “it’s”, all of which should have been (and so far have been) picked up by Christine Arden, the superb copy-editor who went through my book during the previous pass. But, I also am reading it looking for infelicities I can fix — maybe change an “a” to “the” or some such. This requires reading not just for punctuation but also for rhythm and meaning. In other words, I simultaneously have to read the book as if I were a reader, not just an editor. And that is a disconcerting, embarrassing, frustrating process. There are things about the book that pleasantly surprise me — Where did I come up with that excellent example! — but fundamentally I am focused on it critically. Worse, this is Present Me seeing how Present Me presents himself to the world. I am in a narcissistic bubble of self-loathing.

Which is too bad since this is the taste being left by what is very likely to be the last time I read Too Big to Know.

(My publisher would probably like me to note that the book is possibly quite good, and the people who have read it so far seem enthusiastic. But, how the hell would I know before you tell me?)


The titles of philosophy

Philosophers Carnival at Philosophy, etc. points to philosophical posts from around the Web over the past three months. Many of them weigh in on age-old questions about the continuity of consciousness, whether objective morality is possible without Heaven and Hell, and whether time travel is possible.

But I was especially struck by the online sites and journals where the articles were posted:

This is a far cry from titles such as the Review of Metaphysics and Journal of Medieval and Early Modern Studies typical of the printed output of the old school. (Browse a list here.) The new titles seem to mock the academy, even though many of the papers would be at home in its traditional publications. But slowly the wrapped will take on the properties of the wrapper…
