Idea Filter

Thinking about the attention and information overload problem, I realized that what I want is an idea filter. Richard MacManus seconded the notion (and pointed out that, not surprisingly, he was there long before me).

I thought it would be helpful to demonstrate what I mean by an idea filter. I read a lot of bloggers, but only a handful consistently have ideas — it’s taken a while for me to figure out who they are. I’ve written before about Umair Haque, who is an idea machine. For now, I’d like to point to three other examples of bloggers with ideas and insight:

Alex Barnett on why real meme-trackers have not yet been invented:

The word ‘memetracking’ (or ‘meme tracker’) has been used to describe services such as Memeorandum, Megite, Tailrank and Chuquet.

I can’t call them ‘memetrackers’. I like them, they’re useful sites and all that, but they aren’t ‘meme’ trackers.

A meme, as defined by Richard Dawkins in his 1976 book The Selfish Gene, is

“a unit of cultural transmission, or a unit of imitation.

…Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches.

… If a scientist hears, or reads about, a good idea, he passes it on to his colleagues and students. He mentions it in his articles and his lectures. If the idea catches on, it can be said to propagate itself, spreading from brain to brain.”

(There are other definitions floating around but the originator’s will do to make my point.)

There is imitation going on at these sites: ‘oh, she’s reporting this news, it’s interesting, so I will’. These sites track the act of passing on units of news. In this sense these sites are tracking the memetic quality of blogs. However, generally speaking, these sites are not tracking the spreading of ‘ideas’ or ‘memes’. They are tracking bits of news being passed on from one blogger or site to another.

A bit of news is not a meme, nor an idea, it is a bit of news.

Noah Brier on how digital technology transformed the economics of attention:

Because for the first time attention is a measurable commodity. Before the internet there was no good way to measure the attention people paid to things. Of course there were some general measures, but beyond paying the entry fee to a movie or the cover price for a magazine, the whole measurement thing was pretty fuzzy. Nielsen tried to do it for years and their numbers have been exposed on multiple occasions. Then all of a sudden digital technology comes along, and with it all the wonderful recordability (my word). Suddenly there’s a shift from a world where you struggle to measure where attention is being paid to one where you’re buried in data. Cell phone records, email inboxes and internet cookies all contain the pieces that can eventually make up the whole.

When we look back on the early internet, we might very well say that the biggest shift it brought on was forcing the world to rethink advertising. After all, why buy a spot in a magazine where you’ll hope that people will pay attention to your ad on page 63 when you can buy an advertisement on a topic-specific website that guarantees 10,000 people who have made a conscious decision to visit the site will see your ad each day. Or further, why pay for that ad at all if it doesn’t get clicked on? All of a sudden, the happy magazine publishers and television network producers are sweating about the fact that they can’t guarantee people are going to pay attention. It’s almost as if someone pulled away the curtain and revealed the big secret: When you buy an advertisement in traditional media all you can do is hope that people pay attention. When you add in the fact that people are increasingly fragmenting their attention amongst multiple media at once, you’ve got companies like NBC and CBS trying to convince advertisers that people really do care (and this isn’t even to mention the disruptive technologies like Tivo and BitTorrent).

Nick Carr on the Nature article about Wikipedia’s accuracy:

If you were to state the conclusion of the Nature survey accurately, then, the most you could say is something like this: “If you only look at scientific topics, if you ignore the structure and clarity of the writing, and if you treat all inaccuracies as equivalent, then you would still find that Wikipedia has about 32% more errors and omissions than Encyclopedia Britannica.” That’s hardly a ringing endorsement.

The problem with those who would like to use “open source” as a metaphor, stretching it to cover the production of encyclopedias, media, and other sorts of information, is that they tend to focus solely on the “community” aspect of the open-source-software model. They ignore the fact that above the programmer community is a carefully structured hierarchy, a group of talented individuals who play a critical oversight role in filtering the contributions of the community and ensuring the quality of the resulting code. Someone is in charge, and experts do count.

The open source model is not a democratic model. It is the combination of community and hierarchy that makes it work. Community without hierarchy means mediocrity.

Before you respond to my choices, think about this: I chose Alex, Noah, and Nick because they are doing more than echoing; they are doing original thinking with a high degree of analytic discipline. You need not agree with their conclusions to appreciate the quality of their thinking.

I want an idea filter that can direct me to original thinking that will stretch my mind and challenge my assumptions. As I said before, I think such a filter is only possible through the combined power of technology and human intelligence.

More on the idea filter to come — it may be an idea, but it still needs a lot of work.