A recent diary proposed introducing peer review at Daily Kos apparently largely to counter (supposed) substantial and pernicious cliques and cabals corrupting Daily Kos ...
... "cliques" and subgroups that band together to get their particular groups' members elevated to the recommended list regardless of the worthiness or utility of their particular work, idea, critique, or message
This 10-megaton flyswatter approach seemed wrong on a number of grounds, but the most serious was the introduction of the idea that dKos is subject to the untoward sway of secretive groups of unscrupulous users.
In the extended body I'll relate what data I can muster on the subject, based on who recommends whom. I hope others with real information to share will chime in.
This is the dark side of meta: not so amusing, and surely a distraction from the real reason we're all here, but I hope you'll agree it is important to address it forthrightly.
This kind of accusation is perhaps the most destructive possible in any community (this is experience talking, y'all), and I think it requires nothing less than the antiseptic action of prolonged full direct sunlight to either expose and eliminate a real problem or demonstrate its absence.
I publish statistical summaries of diary recommendation and comment activity, mostly in the form of lists of high impact diaries (daily and weekly) but occasionally covering longer periods of time and delving into other patterns.
As a result of these activities, I have a data set which can be used to look into the impact of a possible cabal on impact at Daily Kos.
What to look for? The simple answer is multiple recommendations. If a reader repeatedly recommends the same author, it could mean ...
- The author is just really, really good, and though the reader never looks at authors' names, this author is so good that almost everything they write gets this reader's recommendation
- The reader is a subscriber, knows the author from previous diaries, and expects good things - maybe not everything is as good as the best, but ... the recommendation is a continuing show of support
- The reader is a member of an insidious and evil cabal of conspirators, bent on propagating their ideas and subverting a slumbering populace.
From counting multiple recommendations alone, it is impossible to conclude which of the three situations applies. However, we can at least put a limit on the extent of the damage by knowing how prevalent multiple recommendations are, and perhaps who receives more of their total support from multiple recommendations.
How can I use counts of multiple recommendations to indicate the cliquishness of support for each diary author? I've thought of two ways. The first is to calculate for each author the fraction of total recommendations over a given time interval that derive from readers who have repeatedly recommended that author. The second is to calculate for each author the level of multiple recommendations that divides the total recommendations in half.
First method: to do this in a compact way, we have to set the interval, and the cutoff for what number of recommendations should be counted as "repeatedly". I've chosen an interval of 13 weeks (1/4 year), ending last Friday, Nov 11, and set the cutoff at 10 recommendations. In addition, I've limited the inquiry to those diarists that received 500 or more recommendations over that time period; there are 153 of these.
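In code, the first method amounts to something like the following. This is a minimal sketch, not the actual analysis code: the `(reader, author)` log format and all names here are my assumptions, standing in for whatever form the real data set takes.

```python
from collections import Counter

def adherent_fraction(recs, cutoff=10):
    """Fraction of each author's recommendations that come from
    'adherents': readers who recommended that author `cutoff` or
    more times during the interval.

    `recs` is a list of (reader, author) pairs, one per
    recommendation, already restricted to the 13-week window.
    Returns {author: fraction}.
    """
    per_pair = Counter(recs)                        # (reader, author) -> n recs
    totals = Counter(author for _, author in recs)  # author -> total recs
    from_adherents = Counter()
    for (reader, author), n in per_pair.items():
        if n >= cutoff:
            from_adherents[author] += n
    return {a: from_adherents[a] / t for a, t in totals.items()}

# Toy example: one devoted reader (12 recs) plus eight casual ones.
log = [("fan", "authorA")] * 12 + [(f"r{i}", "authorA") for i in range(8)]
print(adherent_fraction(log))  # authorA: 12 of 20 recs from the adherent -> 0.6
```

In the real runs, the 500-recommendation floor would be applied to `totals` before reporting, which is just a filter on the returned dictionary.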
Among those 153 authors, only 63 got any recommendations from adherents (users who recommended 10 or more diaries by that author), and of these only 15 received over 10% of their recommendations from adherents. Of those 15, only five appeared to have increased rank based on the efforts of those adherents: RubDMC, Dood_abides, ePluribus_Media, JR_Monsterfodder, and me (jotter).
I determined the increased rank by comparing rank based on total recommendations vs. rank based only on recommendations from readers who recommended exactly three times ("what I tell you three times is true"; singleton or even duplicate recommendations could be just random, or biased by "gee, it's on the recommend list, it must be good", while more than three gets back into "axe to grind" territory).
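The two rankings can be sketched in the same style as before. Again, the `(reader, author)` pair format and the `ranks` helper are my hypothetical stand-ins, not code from the actual analysis:

```python
from collections import Counter

def ranks(recs):
    """Rank authors two ways: by total recommendations, and by
    recommendations from readers who recommended them exactly
    three times.  `recs` is a list of (reader, author) pairs.
    Returns {author: (rank_total, rank_3x)}, with rank 1 = best;
    rank_3x is None for authors with no exactly-three readers.
    """
    per_pair = Counter(recs)
    totals = Counter(author for _, author in recs)
    threes = Counter()
    for (reader, author), n in per_pair.items():
        if n == 3:
            threes[author] += n
    def to_rank(counter):
        ordered = sorted(counter, key=counter.get, reverse=True)
        return {a: i + 1 for i, a in enumerate(ordered)}
    rank_total, rank_3x = to_rank(totals), to_rank(threes)
    return {a: (rank_total[a], rank_3x.get(a)) for a in totals}

# Author A: one reader recs 3 times plus one casual rec (total 4).
# Author B: five distinct casual readers (total 5).
log = [("x", "A")] * 3 + [("y", "A")] + [(f"r{i}", "B") for i in range(5)]
print(ranks(log))  # B leads by total; A leads by exactly-3 recs
```

An author whose rank by total recommendations is much better than their rank by exactly-three recommendations is one whose standing depends on repeat supporters.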
For examples of cases where adherents, while present, don't make a substantial difference, look no further than the top three diarists for those 13 weeks. By total number of recommendations, the top three are Jerome a Paris, CindySheehan, and DarkSyde. By "true" recommendations (as above), the top three becomes DarkSyde, CindySheehan, Jerome a Paris. No substantial difference (unless you're Jerome or DarkSyde, I suppose, but ... come on guys! You both rock, OK?).
Here's the table of the five diarists who get a substantial portion of their total recommendations from adherents, and whose overall ranking is greatly improved by that portion.

| user | total recs | rank (by total) | 3x recs | rank (by 3x) |
|------|-----------:|----------------:|--------:|-------------:|
| RubDMC | 1649 | 27 | 45 | 95 |
| Dood Abides | 1613 | 29 | 147 | 39 |
| ePluribus_Media | 1372 | 37 | 108 | 56 |
| JR Monsterfodder | 1242 | 43 | 102 | 59 |
| jotter | 611 | 117 | 12 | 134 |
Of these five, two (RubDMC, jotter) publish daily series of articles that have attracted a steady, if small, following. I doubt this group of adherents meets anyone's definition of a cabal: at least, I've never requested recommendations from anyone in any form other than comments on Daily Kos itself.
Of the three remaining, one (Dood abides) publishes a semi-regular satire feature built around photoshopped art, another (JR Monsterfodder) regularly posts diaries on Wal-Mart, and the last (ePluribus Media) is of course the public media group that formed around SusanG's Gannon/Plame diaries in Jan '05. It is completely unsurprising that each of these has developed a legitimate following.
What's wrong with this approach? Some authors earn their recommendations the hard way, with lots of diaries that many people like, while others get theirs with just a few "home run" diaries; using the same cutoff for every author therefore makes this method a bit flawed. So, turning (at last) to the second method.
For each author, there is a distribution of recommendations that derive from readers who recommended them once, twice, three ... etc times during the interval in question. For each level of recommendation, you can calculate what fraction of the total came from this level or below. By reporting just the level of recommendation that divides the total in half, we get a sense of the distribution. The higher this number, the more cliquish the support for that author. For simplicity let's call this number the R50, for recommendation level that accounts for 50% of the total.
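A minimal sketch of the R50 calculation, assuming the per-author recommendation log is simply a list of reader names, one entry per recommendation (the data layout is my assumption):

```python
from collections import Counter

def r50(recs_for_author):
    """R50: the smallest recommendation level L such that readers
    who recommended this author L or fewer times account for at
    least half of the author's total recommendations.

    `recs_for_author` is a list of reader names, one entry per
    recommendation of this author's diaries in the interval.
    """
    per_reader = Counter(recs_for_author)     # reader -> n recs
    total = sum(per_reader.values())
    by_level = Counter(per_reader.values())   # level -> n readers at that level
    cumulative = 0
    for level in sorted(by_level):
        cumulative += level * by_level[level]  # recs contributed at this level
        if cumulative >= total / 2:
            return level

# A diarist with a few devoted repeat readers and some casual ones:
readers = ["fan1"] * 8 + ["fan2"] * 6 + ["a", "b", "c", "d"]
print(r50(readers))  # half of the 18 recs isn't reached until level 6
```

When nearly all support is from one-time readers, R50 is 1; the more the total is concentrated in repeat recommenders, the higher it climbs.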
This method allows us to examine all 5492 diarists, and we see the following.
| R50 | number of authors |
|----:|------------------:|
| 1 | 5420 |
| 2 | 49 |
| 3 | 11 |
| 4+ | 9 |
This shows that cliquishness develops relatively rarely.
Here are the authors with R50 >= 4.
| author | nrec | nreaders | R50 |
|--------|-----:|---------:|----:|
| jotter | 611 | 100 | 41 |
| RubDMC | 1649 | 297 | 24 |
| Jerome_a_Paris | 8321 | 2130 | 9 |
| CindySheehan | 7841 | 2004 | 8 |
| bonddad | 5057 | 1638 | 6 |
| Frankenoid | 618 | 216 | 5 |
| DarkSyde | 6302 | 2329 | 4 |
| Stirling_Newberry | 4278 | 1795 | 4 |
Unsurprisingly, every author on this list publishes many diaries in a narrow subject area: for example, energy, the Iraq war, economics, gardening, or science. The first two have already been discussed. For the others, it is already known that multiple recommendations don't change their ranking much.
While it remains possible that an occasional diary may have been "played" into prominence, the most visible diarists, those with the most recommendations over a substantial period of time, don't appear to have benefited untowardly from a number of persons united in close design to promote their private views and interests in a community by intrigue.