Katie Fehrenbacher at GigaOm lists ten ways Big Data is reshaping energy.
Archive for January, 2012
HuffingtonPost has done a very nice job turning a piece I wrote for them (“13 ways the Net is making us smarter”) into a photo-illustrated slide show.
Skip Walter’s post about his growing acceptance and understanding of the need for digital humanities hits on so many of my intellectual pleasure spots, starting with Russ Ackoff’s knowledge network, and including Kate Hayles and Cathy Davidson, and more and more. (Yes, he mentions “Too Big to Know” in passing, but that’s irrelevant to my reaction.)
Panagiotis Takis Metaxas (at the Berkman Center) and Eni Mustafaraj have written a paper called “Trails of Trustworthiness in Real-Time Streams” [pdf] about how to support critical thinking about social networking conversations while maintaining privacy. From the abstract:
When confronted with information that requires fast action, our system will enable its educated users to evaluate its provenance, its credibility and the independence of the multiple sources that may provide this information.
They say the only real hope is to solve the problem within closed streams that provide membership functions because there “it is possible to determine the a priori trustworthiness of a message received,” by evaluating the credibility of users on particular topics. They believe this can be done by watching the actions of users. For example, “In general, the more often a user re-posts messages from a sender, the more trusted the sender becomes.” And: “A message that has been sent by different, independent users has more trustworthiness than one that has been initiated by a single user.”
There’s much more in their paper…
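The two heuristics quoted above can be sketched in a few lines of code. This is just my illustration of the idea, not the authors' implementation; the message records and function names are made up.

```python
# A minimal sketch of the two trust heuristics described above.
# Hypothetical data shapes; not the paper's actual system.
from collections import Counter

def sender_trust(reposted_senders):
    """Count how often each sender's messages get re-posted:
    the more re-posts, the more trusted the sender."""
    return Counter(reposted_senders)

def message_trust(posters):
    """A message posted by more distinct, independent users scores
    higher than one initiated by a single user."""
    return len(set(posters))

# Example: Alice's messages were re-posted three times, Bob's once.
trust = sender_trust(["alice", "alice", "bob", "alice"])
# trust["alice"] == 3, trust["bob"] == 1

# A message seen from three independent users scores 3.
score = message_trust(["carol", "dave", "erin"])
```

The real system would of course weight these signals by topic and over time; this just shows the counting logic.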
From a post by Derrick Harris at GigaOm:
A fully sequenced human genome results in about 100GB of raw data, although DNAnexus Founder and CEO Andreas Sundquist told me that volume increases to about 1TB by the time the genome has been analyzed. He also says we’re on pace to have 1 million genomes sequenced within the next two years. If that holds true, there will be approximately 1 million terabytes (or 1,000 petabytes, or 1 exabyte) of genome data floating around by 2014.
Why, that’s more than the number of books in the Library of Congress times miles to the moon plus the length of all football fields laid end to end!
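Harris's arithmetic checks out, as a quick back-of-the-envelope calculation shows (using decimal units, 1,000 TB per PB, as the quote does):

```python
# Back-of-the-envelope check of the figures quoted above.
genomes = 1_000_000      # projected genomes sequenced
tb_per_genome = 1        # ~1 TB per analyzed genome

total_tb = genomes * tb_per_genome
total_pb = total_tb / 1_000        # 1,000 petabytes
total_eb = total_tb / 1_000_000    # 1 exabyte

print(total_tb, total_pb, total_eb)
```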
Here’s a 20-minute interview on KUOW in Seattle from last week. We talk about networked knowledge, science, echo chambers, long-form thinking, and the irresoluteness of experts.
Google has announced that it is retiring Needlebase, a service it acquired with its ITA purchase. That’s too bad! Needlebase is a very cool tool. (It’s staying up until June 1 so you can download any work you’ve done there.)
Needlebase is a browser-based tool that creates a single merged, cleaned, de-duped database from multiple databases. Then you can create a variety of user-happy outputs. There are some examples here.
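To make the merge-and-dedupe idea concrete, here is a toy sketch of that kind of cleanup. The field names and merge rule are mine, purely for illustration; Needlebase's actual matching was far more sophisticated.

```python
# Toy merge-and-dedupe: collapse rows from several tables that share a
# key, letting later rows fill in fields the record is still missing.
# (Illustrative only; not how Needlebase worked internally.)
def merge_dedupe(*tables, key="email"):
    merged = {}
    for table in tables:
        for row in table:
            record = merged.setdefault(row[key], {})
            for field, value in row.items():
                record.setdefault(field, value)  # first value wins
    return list(merged.values())

a = [{"email": "kim@example.com", "name": "Kim"}]
b = [{"email": "kim@example.com", "city": "Seattle"},
     {"email": "lee@example.com", "name": "Lee"}]

rows = merge_dedupe(a, b)
# Two records: Kim's rows are merged into one, Lee's stands alone.
```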
Google says it’s evaluating whether Needlebase can be threaded into its other offerings.
I am doing my dangdest to overcome my reluctance to self-promote directly (although I seem to be fine with indirect self-promotion), so here’s my list of public stops over the next few days on the West Coast:
Today: Seattle Town Hall, 7:30pm
Thursday: 1pm: Corte Madera, Book Passage, 51 Tamal Vista Blvd
7pm: Mountain View, Books Inc., 301 Castro Street
I also have some media and corporate stops.
See you there, I hope.