Archive for May, 2012

[2b2k] Attribution isn’t just about credit. It’s about networking knowledge.

David Kay pointed out to me a piece by Arthur Brisbane, the NY Times Public Editor. In it Arthur deals with a criticism of a NYT article that failed to acknowledge the work of prior journalists and investigators (“uncredited foundational reporting”) that led to the NYT story. For example, Hella Winston at The Jewish Week told Arthur:

The lack of credit stings. “You get so much flak — these are difficult stories,” Ms. Winston told me, “People come down on you.” The Times couldn’t have found all its sources among victims and advocates by itself, she added: “You wouldn’t have known they existed, you wouldn’t have been able to talk to them, if we hadn’t written about them for years.”

But, as David Kay points out, this is not just about giving credit. As the piece says about the Poynter Institute’s Kelly McBride:

[McBride] struck another theme, echoed by other ethics experts: that providing such credit would have enabled readers to find other sources of information on the subject, especially through online links.

Right. It’s about making the place smarter by leaving traceable tracks of the ideas one’s post brings together. It’s about building networks of knowledge.


[2b2k][mesh] Setting the record straight: Overall, the networking of knowledge is awesome

Christine Dobby posted at the Financial Post about my session at the Mesh conference on Thursday in Toronto. She accurately captured two ideas, but missed the bigger point I was trying to make, which — given how well she captured the portion of my comments she blogs about — was undoubtedly my fault. Worse, the post gives incredibly short shrift to two powerful and important sessions that morning by Rebecca MacKinnon and Michael Geist about the threats to Internet freedom…way more important (in my view, natch) than what the FP post leads with.

To judge for yourself, you might want to check the live blogging I did of the sessions by Rebecca and Michael. These were great sessions by leaders in their fields, people who are full-time working on keeping the Internet free and open. They are fighting for us and our Internet. (Likewise true of Andy Carvin, of course, who gave an awesome afternoon session.) What they said seems to me clearly to be so much more important than my recapitulation of a decade-old argument that I think is valid but is not even half the story.

On to moi moi moi.

Christine does a nice job summarizing my summary of the echo chamber argument, and I’m pleased that she followed that up with my use of Reddit as an example of how an echo chamber — a group that shares a set of beliefs, values, and norms — can enable a sympathetic yet critical encounter with those who hold radically different views. But, here are the first two paragraphs of Christine’s post about the morning at Mesh:

With the vast sprawl of the web — and in spite of its power to fact check information — stupidity abounds, says David Weinberger.

“One of the bad things we get from networked knowledge is it’s easier than ever to be stupid because you can find other people who can reinforce your beliefs,” the U.S. academic, Internet commentator and author of the recent Too Big to Know told a Toronto audience Thursday.

True, and I did indeed say that. But I don’t want to leave the impression that I’m going around to conferences bashing the Net as a stupidity enabler. In fact, I spent the first half hour at Mesh being interviewed by the inestimable Mathew Ingram about the rise of networked knowledge, about which I am overall quite enthusiastic. The networking of knowledge is enabling knowledge to scale far beyond the limits within which it’s operated since it was born 2,500 years ago. It’s enabling knowledge to shed some of the blinkered limitations that it had embraced as a virtue. Overall, it’s an awesomely good thing, although I did try to point out some of the risks and dangers.

So, it’s weird for me to read in the FP that the take-away is that the Net is creating echo chambers that are making us stupider. Indeed, as my remarks on Reddit were intended to indicate, the echo chamber argument can lead us to underestimate the positive importance of groups sharing views and values: conversation and understanding themselves require a huge amount of agreement to be productive. As I wrote not too long ago, culture is an echo chamber.

So, put Christine’s post together with the post you’re currently reading and you’ll get a more accurate representation of what I intended to say and certainly what I believe. Sort of like how networked knowledge works, come to think of it :)

More important, go read what Rebecca, Michael, and Andy had to say. (And I also really liked Michael O’Connor Clarke’s session, but couldn’t live blog it.)


[2b2k][mesh] Andy Carvin

Andy Carvin (@acarvin) is being interviewed by Mathew Ingram at Mesh.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He’s cut back to hundreds of tweets now, rather than the 1,400 he did at the height of the Arab Spring. He says Twitter shut him down for a bit on Feb. 2, the day of the Battle of the Camel in Egypt, because his volume was so high that he looked like a spammer. He contacted Twitter and they turned him back on in 15 minutes.

He went to NPR in 2006 as “senior strategist,” an intentionally vague title. He says his real role is Guinea Pig in Residence. He gets to tinker with tools and methodologies. In Dec. 2010 he began to see tweets from Tunisia from people he knew, about the protest self-immolation of a street vendor. He had a loose network of bloggers he had known over the years, in part from participating in Global Voices. “When something important happens, the network comes back to life.” He had an intuition that the protests might expand into a genuine revolution. None of the mainstream media were covering it. [PS: Here's a very cool and useful interactive timeline of Arab Spring, from The Guardian.]

Mathew: The role of social media? Andy: Each country has been very different. “I don’t call this the Twitter Revolution or a Social Media Revolution, because I couldn’t then look in the eye of someone who lost a family member” engaged in the protests. Facebook was a way to get word out to lots of people. A researcher recently has said that simple information exchange at a place like Facebook was a public act, broadcasting to people, a declaration, helping to nudge the revolution forward. “Because it was not anonymous,” notes Andy.

Expats curated news and footage and spread it. “In the last couple of days of the Tunisian revolution, people were using Facebook and Twitter to identify sniper nests.” People would say don’t go to a particular intersection because of the snipers. It spread from country to country. He says that Libyans were tweeting about the day the revolution should begin. “It was literally like they were using Outlook appointments.”

How did he curate the twitterers? He kept lists in each country. It’s probably about 500 people. But a total of about 2,000 people are in a loose network of folks who will respond to his questions and requests (e.g., for translations).

Mathew: You weren’t just retweeting. Andy: Yes. I didn’t know anyone in Libya. I didn’t know who to trust. I started asking around if anyone knows Libyan expats. I picked up the phone/skype. I got leads. At the beginning I was following maybe 10 people. And then it scaled up. When the videos came out the mainstream media wouldn’t run them because they weren’t sure they were from Libya. But Andy tweeted them, noting that there’s uncertainty, and asking for help. A Libyan would tweet back that they recognize the east Libyan accent, or someone recognized a Libyan court house which Google images then confirmed.

Mathew: Were you inherently skeptical? Andy: I was inherently curious. I compiled source lists, seeing who’s following people I trusted, seeing how many followers and how long they’ve tweeted, and how they are talking to one another. Eventually I’d see that some are married, or are related. Their relationships made it more likely they were real.

Mathew: You were posting faster than news media do. They do more verification. Andy: Which is why I get uncomfortable when people refer to my twitter feed as a newswire. It’s not a newswire. It’s a newsroom. It’s where I’m trying to separate fact from fiction, interacting with people. That’s a newsroom.

Mathew: You were doing what I was talking with David W [that's me!] about: using a network of knowledge. Andy: A NYT reporter tweeted that he was trying to figure out what a piece of ordnance was. Andy tweeted it and one of his followers identified it precisely. It took 45 mins. The guy who figured it out was a 15-year-old kid in Georgia who likes watching the Military channel.

Sometimes Andy gives his followers exercises. E.g., his followers picked apart a photo that was clearly photoshopped. What else did they find in it? They found all sorts of errors, including that the president of Yemen had two ears on one side. It becomes an ad hoc worldwide social experiment in Internet literacy. It’s network journalism, collaborative journalism, process journalism. Andy says he doesn’t like the term “curator,” but there are times when that’s what he’s doing. There are other times when he’s focused on realtime verification. He likes the term “dj” better. He’s getting samples from all over, and his challenge is to “spin it out there in a way that makes people want to dance.”

[It opens up to audience questions]

Q: Are people more comfortable talking to you as a foreign journalist?

Yes. Early on, people vouched for me. But once I hit my stride, people knew me before their revolution began and I’d be in touch with them. It’s a mutually beneficial relationship.

Q: How should journalists use Twitter?

They have to carve out time to do it. It helps to be ADHD. And when you’re on Twitter, you are who you are. When I’m on Twitter, I’m the guy who has a family and works in his yard. And you have to be prepared to be accountable in real time. When I screw up, my followers tell me. E.g., in Libya there was a video of an injured girl being prepared for surgery. I wrote that and tweeted it. Within 30 seconds a number of people tweeted back to me that she was dead and was being prepared for burial. Andy retweeted what he had been told.

Q [me]: How do we scale you so that there are more of you? And tweeting 1400 times a day isn’t sustainable, so is there a way to scale in that direction as well?

Andy: Every time I felt “Poor me” I remembered that I was sitting on a nice NPR roof deck, tweeting. And when I go home, at 6pm I am offline. I fix dinner, I give the kids a bath, I read to them. If it’s a busy time, I’ll go back online for an hour.

He would tweet when he went offline for bits during Arab Spring, and would explain why, in part so people would understand that he’s not in the Middle East and not in danger.

About scaling: A lot of what I do is teachable. The divide between those in journalism who use social media and those who don’t is about how comfortable they are being transparent. Also, if you’re responsible for a beat, it’s understandable why you haven’t tried it. Editors should think about giving journalists 20% time to cultivate their sources online, building their network, etc. You get it into people’s job descriptions. You figure it out.

Q: [mathew] On the busy days, you needed someone to curate you.

That began to happen when I brought on a well-known Saudi blogger [couldn't get his name]. He was curating user-generated content from Syria. He was curating me, but his sources were better than mine, so I began retweeting him.

Q: How do you address the potential for traumatic stress, which you can suffer from even if you’re not in the field?

Andy: In Feb. I learned the term “vicarious PTSD.” The signs were beginning. The first videos from Libya and Bahrain were very graphic. But I have a strong support network online and with my family. I feel it’s important to talk about this on Twitter. If I see a video of a dead child, I’ll vent about it, sometimes in ways that as a journalist are not appropriate, but are appropriate as a human. If I ever become like a TV detective who can look at a body on the ground and crack jokes, then I shouldn’t be in this business.

Andy: I do retweet disturbing stuff. The old media were brought into homes, families. So they had to be child-friendly. But with social media, the reader has to decide to be on Twitter and then to follow me. I’ll sometimes retweet something and say that people shouldn’t open it, but it needs to be out there. There have been some videos I haven’t shared: sexual assaults, executions, things involving kids. I draw the line.

[Loved it loved it loved it. Andy perfectly modeled a committed journalist who remains personal, situated, transparent, and himself.]


[2b2k] Peter Galison on The Collective Author

Harvard professor Peter Galison (he’s actually one of only 24 University Professors, a special honor) is opening a conference on author attribution in the digital age.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He points to the vast increase in the number of physicists involved in an experiment, some of which have 3,000 people working on them. This transforms the role of experiments and how physicists relate to one another. “When CERN says in a couple of months that ‘We’ve found the Higgs particle,’ who is the we?”

He says that there has been a “pseudo-I”: a group that functions under the name of a single author. A generation or two ago this was common: the “Alvarez Group,” the “Thorndike Group,” etc. This is like when the works of a Rembrandt would in fact come from his studio. But there’s also “The Collective Group”: a group that functions without that name — often without even a single lead institution. This requires “complex internal regulation, governance, collective responsibility, and novel ways of attributing credit.” So, over the past decades physicists have been asked very fundamental questions about how they want to govern. Those 3,000 people have never all met one another; they’re not even in the same country. So, do they stop the accelerator because of the results from one group? Or, when CERN scientists found data suggesting faster-than-light neutrinos, the team was not unanimous about publishing those results. When the results were reversed, the entire team suffered some reputational damage. “So, the stakes are very high about how these governance, decision-making, and attribution questions get decided.”

He looks back to the 1960s. There were large bubble chambers kept above their boiling point but under pressure. You’d get beautiful images of particles, and these were the iconic images of physics. But these experiments were at a new, industrial scale for physics. After an explosion in 1965, the labs were put under industrial rules and processes. In 1967 Alan Thorndike at Brookhaven responded to these changes in the ethos of being an experimenter. Rarely is the experimenter a single individual, he said. He is a composite. “He might be 3, 5 or 8, possibly as many as 10, 20, or more.” He “may be spread around geographically…He may be ephemeral…He is a social phenomenon, varied in form and impossible to define precisely.” But he certainly is not (said Thorndike) a “cloistered scientist working in isolation at his laboratory bench.” The thing that is thinking is a “composite entity.” The tasks are not partitioned in simple ways, the way contractors working on a house partition their tasks. Thorndike is talking about tasks in which “the cognition itself does not occur in one skull.”

By 1983, physicists were colliding beams that moved particles out in all directions. Bigger equipment. More particles. More complexity. Now instead of a dozen or two participants, you have 150 or so. Questions arose about what an author is. In July 1988 one of the Stanford collaborators wrote an internal memo saying that all collaborators ought to be listed as authors alphabetically since “our first priority should be the coherence of the group and the de facto recognition that contributions to a piece of physics are made by all collaborators in different ways.” They decided on a rule that avoided the nightmare of trying to give primacy to some. The memo continues: “For physics papers, all physicist members of the collaboration are authors. In addition, the first published paper should also include the engineers.” [Wolowitz! :)]

In the 1990s, rules of authorship got more specific. He points to a particular list of seven very specific rules. “It was a big battle.”

In 1997, when you get to projects as large as ATLAS at CERN, the author count goes up to 2,500. This makes it “harder to evaluate the individual contribution when comparing with other fields in science,” according to a report at the time. With experiments of this size, says Peter, the experimenters are the best source of the review of the results.

Conundrums of Authorship: It’s a community and you’re trying to keep it coherent. “You have to keep things from falling apart” along institutional or disciplinary grounds. E.g., the weak neutral current experiment. The collaborators were divided about whether there were such things. They were mockingly accused of proposing “alternating weak neutral currents,” and this cost them reputationally. But, trying to make these experiments speak in one voice can come at a cost. E.g., suppose 1,900 collaborators want to publish, but 600 don’t. If they speak in one voice, that suppresses dissent.

Then there’s also the question of the “identity of physicists while crediting mechanical, cryogenic, electrical engineers, and how to balance with builders and analysts.” E.g., analysts have sometimes claimed credit because they were the first ones to perceive the truth in the data, while others say that the analysts were just dealing with the “icing.”

Peter ends by saying: These questions go down to our understanding of the very nature of science.

Q: What’s the answer?
A: It’s different in different sciences, each of which has its own culture. Some of these cultures are still emerging. It will not be solved once and for all. We should use those cultures to see which parts of evaluation are done inside the culture, and which depend on external review. As I said, in many cases the most serious review is done inside, where you have access to all the data, the backups, etc. Figuring out how to leverage those sorts of reviews could help to provide credit when it’s time to promote people. The question of credit between scientists and engineers/technicians has been debated for hundreds of years. I think we’ve begun to shed some of our class anxiety, i.e., the assumption that hand work is not equivalent to head work, etc. A few years ago, some physicists would say that nanotech is engineering, not science; you don’t hear that so much any more. When a Nobel prize in 1983 went to an engineer, it was a harbinger.

Q: Have other scientists learned from the high energy physicists about this?
A: Yes. There are different models. Some big science gets assimilated to a culture that is more like a big engineering process. E.g., there’s no public awareness of the lead designers of the 747 we’ve been flying for 50 years, whereas we know the directors of Hollywood films. Authorship is something we decide. That the 747 has no author but Hunger Games does was not decreed by Heaven. Big plasma physics is treated more like industry, in part because it’s conducted within a secure facility. The astronomers have done many admirable things. I was on a prize committee that gave the award to a group because it was a collective activity. Astronomers have been great about distributing data. There’s Galaxy Zoo, and some “zookeepers” have been credited as authors on some papers.

Q: The credits are getting longer on movies as the specializations grow. It’s a similar problem. They tell you who did what in each category. In high energy physics, scientists see becoming too specialized as a bad thing.
A: In the movies many different roles are recognized. And there are questions of distribution of profits, which is not so analogous to physics experiments. Physicists want to think of themselves as physicists, not as sub-specialists. If you are identified as, for example, the person who wrote the Monte Carlo, people may think that you’re “just a coder” and write you off. The first Ph.D. in physics submitted at Harvard was on the Bohr model; the student was told that it was fine but he had to do an experiment because theoretical physics might be great for Europe but not for the US. It’s naive to think that physicists are da Vincis who do everything; the idea of what counts as being a physicist is changing, and that’s a good thing.

[I wanted to ask whether, if (assuming what may not be true) the Internet leads to more of the internal work being done visibly in public, this might change some of the governance, since it will be clearer that there is diversity and disagreement within a healthy network of experimenters. Anyway, that was a great talk.]


[2b2k] The Net as paradigm

Edward Burman recently sent me a very interesting email in response to my article about the 50th anniversary of Thomas Kuhn’s The Structure of Scientific Revolutions. So I bought his 2003 book Shift!: The Unfolding Internet – Hype, Hope and History (hint: If you buy it from Amazon, check the non-Amazon sellers listed there) which arrived while I was away this week. The book is not very long — 50,000 words or so — but it’s dense with ideas. For example, Edward argues in passing that the Net exploits already-existing trends toward globalization, rather than leading the way to it; he even has a couple of pages on Heidegger’s thinking about the nature of communication. It’s a rich book.

Shift! applies The Structure of Scientific Revolutions to the Internet revolution, wondering what the Internet paradigm will be. The chapters that go through the history of failed attempts to understand the Net — the “pre-paradigms” — are fascinating. Much of Edward’s analysis of business’ inability to grasp the Net mirrors cluetrain‘s themes. (In fact, I had the authorial d-bag reaction of wishing he had referenced Cluetrain…until I realized that Edward probably had the same reaction to my later books which mirror ideas in Shift!) The book is strong in its presentation of Kuhn’s ideas, and has a deep sense of our cultural and philosophical history.

All that would be enough to bring me to recommend the book. But Edward admirably jumps in with a prediction about what the Internet paradigm will be:

This…brings us to the new paradigm, which will condition our private and business lives as the twenty-first century evolves. It is a simple paradigm, and may be expressed in synthetic form in three simple words: ubiquitous invisible connectivity. That is to say, when the technologies, software and devices which enable global connectivity in real time become so ubiquitous that we are completely unaware of their presence…We are simply connected. [p. 170]

It’s unfair to leave it there since the book then elaborates on this idea in very useful ways. For example, he talks about the concept of “e-business” as being a pre-paradigm, and the actual paradigm being “The network itself becomes the company,” which includes an erosion of hierarchy by networks. But because I’ve just written about Kuhn, I found myself particularly interested in the book’s overall argument that Kuhn gives us a way to understand the Internet. Is there an Internet paradigm shift?

There are two ways to take this.

First, is there a paradigm by which we will come to understand the Internet? Edward argues yes, we are rapidly settling into the paradigmatic understanding of the Net. In fact, he guesses that “the present revolution [will] be completed and the new paradigm of being [will] be in force” in “roughly five to eight years” [p. 175]. He sagely points to three main areas where he thinks there will be sufficient development to enable the new paradigm to take root: the rise of the mobile Internet, the development of productivity tools that “facilitate improvements in the supply chain” and marketing, and “the increased deployment of what have been termed social applications, involving education and the political sphere of national and local government.” [pp. 175-176] Not bad for 2003!

But I’d point to two ways, important to his argument, in which things have not turned out as Edward thought. First, the 5-8 years after the book came out were marked by a continuing series of disruptive Internet developments, including general purpose social networks, Wikipedia, e-books, crowdsourcing, YouTube, open access, open courseware, Khan Academy, etc. etc. I hope it’s obvious that I’m not criticizing Edward for not being prescient enough. The book is pretty much as smart as you can get about these things. My point is that the disruptions just keep coming. The Net is not yet settling down. So we have to ask: Is the Net going to enable continuous disruption and self-transformation? If so, will it be captured by a paradigm? (Or, as M. Night Shyamalan might put it, is disruption the paradigm?)

Second, after listing the three areas of development over the next 5-8 years, the book makes a claim central to the basic formulation of the new paradigm Edward sees emerging: “And, vitally, for thorough implementation [of the paradigm] the three strands must be invisible to the user: ubiquitous and invisible connectivity.” [p. 176] If the invisibility of the paradigm is required for its acceptance, then we are no closer to that event, for the Internet remains perhaps the single most evident aspect of our culture. No other cultural object is mentioned as many times in a single day’s newspaper. The Internet, and the three components the book points to, are more evident to us than ever. (The exception might be innovations in logistics and supply chain management; I’d say Internet marketing remains highly conspicuous.) We’ve never had a technology that so enabled innovation and creativity, but there may well come a time when we stop focusing so much cultural attention on the Internet. We are not close yet.

Even then, we may not end up with a single paradigm of the Internet. It’s really not clear to me that the attendees at ROFLcon have the same Net paradigm as less Internet-besotted youths. Maybe over time we will all settle into a single Internet paradigm, but maybe we won’t. And we might not because the forces that bring about Kuhnian paradigms are not at play when it comes to the Internet. Kuhnian paradigms triumph because disciplines come to us through institutions that accept some practices and ideas as good science; through textbooks that codify those ideas and practices; and through communities of professionals who train and certify the new scientists. The Net lacks all of that. Our understanding of the Net may thus be as diverse as our cultures and sub-cultures, rather than being as uniform and enforced as, say, genetics’ understanding of DNA is.

Second, is the Internet affecting what we might call the general paradigm of our age? Personally, I think the answer is yes, but I wouldn’t use Kuhn to explain this. I think what’s happening — and Edward agrees — is that we are reinterpreting our world through the lens of the Internet. We did this when clocks were invented and the world started to look like a mechanical clockwork. We did this when steam engines made society and then human motivation look like the action of pressures, governors, and ventings. We did this when telegraphs and then telephones made communication look like the encoding of messages passed through a medium. We understand our world through our technologies. I find (for example) Lewis Mumford more helpful here than Kuhn.

Now, it is certainly the case that reinterpreting our world in light of the Net requires us to interpret the Net in the first place. But I’m not convinced we need a Kuhnian paradigm for this. We just need a set of properties we think are central, and I think Edward and I agree that these properties include the abundant and loose connections, the lack of centralized control, the global reach, the ability of everyone (just about) to contribute, the messiness, the scale. That’s why you don’t have to agree about what constitutes a Kuhnian paradigm to find Shift! fascinating, for it helps illuminate the key question: How are the properties of the Internet becoming the properties we see in — or notice as missing from — the world outside the Internet?

Good book.


[everythingismisc] Scaling Japan

MetaFilter popped up a three-year-old post from Derek Sivers about how street addresses work in Japan. The system does a background-foreground duck-rabbit Gestalt flip on Western addressing schemes. I’d already heard about it — book-larnin’ because I’ve never been to Japan — but the post got me thinking about how things scale up.

What we would identify by street address, the Japanese identify by house number within a block name. Within a block, the addresses are non-sequential, reflecting instead the order of construction.

I can’t remember where I first read about this (I’m pretty sure I wrote about it in Everything Is Miscellaneous), but it pointed out some of the assumptions and advantages of this system: it assumes local knowledge, confuses invaders, etc. But my reaction then was the same as when I read Derek’s post this morning: Yeah, but it doesn’t scale. Confusing invaders is a positive outcome of a failure to scale, but getting tourists lost is not. The math just doesn’t work: 4 streets intersected by 4 avenues creates 9 blocks, but add just 2 more streets and 2 more avenues and you’ve enclosed another 16 blocks. So, to navigate a large western city you have to know many, many fewer streets and avenues than the number of existing blocks.
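
If it helps to see the scaling in one place, here is a minimal Python sketch of the arithmetic above (square grids assumed for simplicity): the street and avenue names you have to know grow linearly, while the blocks they enclose grow roughly quadratically.

```python
# Toy sketch of the grid arithmetic above (square grids assumed):
# with s streets and a avenues you memorize s + a names, but they
# enclose (s - 1) * (a - 1) blocks -- linear names, quadratic blocks.

def names_vs_blocks(streets, avenues):
    names_to_know = streets + avenues
    blocks_enclosed = (streets - 1) * (avenues - 1)
    return names_to_know, blocks_enclosed

for n in (4, 6, 10, 20):
    names, blocks = names_vs_blocks(n, n)
    print(f"{n}x{n} grid: {names} names to know, {blocks} blocks enclosed")

# 4x4 grid: 8 names to know, 9 blocks enclosed
# 6x6 grid: 12 names to know, 25 blocks enclosed (the extra 16 above)
# 20x20 grid: 40 names to know, 361 blocks enclosed
```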

But of course I’m wrong. Tokyo hasn’t fallen apart because there are too many blocks to memorize. Clearly the Japanese system does scale.

In part that’s because, according to the Wikipedia article on it, blocks are themselves located within a nested set of named regions. So you can pop up the geographic hierarchy to a level where there are fewer entities in order to get a more general location, just as we do with towns, counties, states, countries, solar system, galaxy, the universe.

But even without that, the Japanese system scales in ways that peculiarly mirror how the Net scales. Computers have scaled information in the Western city way: bits are tucked into chunks of memory that have sequential addresses. (At least they did the last time I looked in 1987.) But the Internet moves packets to their destinations much the way a Japanese city’s inhabitants might move inquiring visitors along: You ask someone (who we will call Ms. Router) how to get to a particular place, and Ms. Router sends you in a general direction. After a while you ask another person. Bit by bit you get closer, without anyone having a map of the whole.
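
To spell the analogy out in code, here is a small, hypothetical Python sketch of hop-by-hop forwarding; the router names and table entries are invented for illustration and are not meant to model any real routing protocol.

```python
# Hypothetical sketch of hop-by-hop forwarding: each router knows only the
# next hop toward a destination, never the whole route. Topology is invented.

next_hop = {
    "router-A": {"block-7-house-3": "router-B"},
    "router-B": {"block-7-house-3": "router-C"},
    "router-C": {"block-7-house-3": "block-7-house-3"},  # final hop delivers it
}

def forward(destination, start):
    """Hand the packet along, one 'ask Ms. Router' step at a time."""
    here, path = start, [start]
    while here != destination:
        here = next_hop[here][destination]  # each node knows only the next step
        path.append(here)
    return path

print(forward("block-7-house-3", "router-A"))
# ['router-A', 'router-B', 'router-C', 'block-7-house-3']
```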

At the other end of the stack of abstraction, computers have access to such absurdly large amounts of information either locally or in the cloud — and here namespaces are helpful — that storing the block names and house numbers for all of Tokyo isn’t such a big deal. Point your mobile phone to Google Maps’ Tokyo map if you need proof. With enough memory, we do not need to scale physical addresses by using schemes that reduce them to streets and avenues. We can keep the arrangement random and just look stuff up. In the same way, we can stock our warehouses in a seemingly random order and rely on our computers to tell us where each item is; this has the advantage of letting us put the most requested items up front, or on the shelves that require humans to do the least bending or stretching.
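
That “random shelves plus an index” idea is easy to sketch. Here is a hypothetical Python fragment (the shelf count and item names are made up) showing why the physical arrangement can be arbitrary as long as the lookup table is kept current:

```python
import random

# Rough sketch of chaotic storage: items land wherever there's room, and a
# lookup table, not the shelf layout, is what makes retrieval fast.

shelves = [f"shelf-{n}" for n in range(100)]
location_of = {}  # item -> shelf; the index does all the navigational work

def stock(item):
    location_of[item] = random.choice(shelves)  # placement can be arbitrary

def find(item):
    return location_of[item]  # constant-time lookup, no geography required

for thing in ("umbrella", "teapot", "router"):
    stock(thing)

print(find("teapot"))  # e.g. 'shelf-42' -- wherever it happened to land
```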

So, I’m obviously wrong. The Japanese system does scale. It just doesn’t scale in the ways we used when memory spaces were relatively small.
