For some reason this morning I felt like doing a bit of writing. Little did I know, it would be in response to a rather scandalous-sounding article over at Ars Technica. The discussion was partially about how Google has supposedly messed up comments on YouTube. (Personally, I see them mostly as the cesspool of the internet, so I tend to stay away.) To make a long story short, it seems that a lot of people aren't too happy with Google's decision to require a Google+ account to comment (rather than just a YouTube account). Part of the reason for doing this was to prevent or reduce inappropriate comments.
This is the overly-long comment that I made regarding the subject:
As someone who is very interested in launching an online system, I'd say I've got a vested interest in how online communities work.
Obviously, there is potential value in allowing comments (and other types of feedback: upvotes/downvotes) in an online community. It helps the good content float to the surface. Unfortunately, if it isn’t implemented correctly, the poo floats to the surface instead. Some people (recently: Popular Science) have just dropped comments altogether. Maybe it’s just me, but this seems to be a really quick way of killing a community. Instead of a nice feedback loop, you’ve dropped half the party out of the conversation, and are instead stuck with a monologue, where the content creators are the only ones doing the talking, and everyone else is forced to listen.
Dropping an online commenting system seems like a really bad idea on two fronts:
- It no longer allows community feedback, which can really be helpful in developing content that is catered toward the users. This also acts as a mechanism for letting the content creators know about upcoming trends. Ignoring these trends can end up being catastrophic, in some cases.
- It restricts the social interactions between members of the community. You really can’t create much of a community without those sorts of social interactions. Half of the reason I frequent the sites that I do is to hear the feedback from the people, as often it is as good as, if not better than the original material. (Ars is included here!)
So, how do you manage an online feedback (comments/upvotes/downvotes) system? Well, there are a few ways of doing that:
- Decouple popularity and rating. Just because something is “popular” (meaning that it gets a lot of replies or a lot of votes) doesn’t mean it is good. Rather than trying to use a single value to determine whether or not content should float to the top, it really takes two values. Looking at how Ars does this, it allows comments to be both up-voted and down-voted. The total number of votes would be the popularity, and the ratio of up-votes to down-votes could be the rating. This means that it isn’t just controversial topics that float to the top. It can even be set up in a way that lets the users decide what kind of content they want to see – the highly rated items, or the controversial items. Sites like Reddit get around this by using a proper ranking algorithm, which brings me to point #2…
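To make the decoupling idea concrete, here is a minimal sketch (the comment data and field names are invented for illustration): popularity counts total engagement, while rating measures approval, and users could sort by either one.

```python
# Sketch of decoupling popularity from rating.
# "popularity" = total votes cast; "rating" = fraction of votes that are positive.

def popularity(upvotes: int, downvotes: int) -> int:
    """Total votes, regardless of direction."""
    return upvotes + downvotes

def rating(upvotes: int, downvotes: int) -> float:
    """Fraction of votes that were positive (0.0 to 1.0)."""
    total = upvotes + downvotes
    return upvotes / total if total else 0.0

# Hypothetical comments: one well liked, one popular but divisive.
comments = [
    {"text": "insightful",    "up": 40, "down": 2},
    {"text": "controversial", "up": 55, "down": 50},
]

# Let the users pick the sort order they want to see:
by_rating     = sorted(comments, key=lambda c: rating(c["up"], c["down"]), reverse=True)
by_popularity = sorted(comments, key=lambda c: popularity(c["up"], c["down"]), reverse=True)
```

Sorting by rating surfaces the well-liked comment first; sorting by popularity surfaces the divisive one, since it drew more total votes.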
- Use a proper ranking algorithm. Supposing that you are running a service that relies on user content, you obviously want the best content to float to the top. There are different approaches to this; some work better in certain situations than others. Take, for example, Amazon. With product ratings, you have the option of giving a product a rating out of 5 stars. The problem with Amazon is that it uses a very simple average. So, supposing that I want to sort products by rating, and I've got two of them: one with 499 five-star ratings and a single one-star rating, and the other product has a single five-star rating. With Amazon's system, the one with one rating gets ranked higher. This is an example of a system that doesn't work so well. The Reddit folks get around this by using the lower bound of the Wilson score confidence interval. Essentially, it treats the existing votes as a statistical sample and estimates a conservative lower bound on the true fraction of positive ratings, so an item with only a handful of votes can't leapfrog one with hundreds. (See: http://amix.dk/blog/post/19588)
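The Wilson lower bound mentioned above can be written in a few lines. This is a sketch, mapping the Amazon star example onto binary up/down votes (five stars as an up-vote, one star as a down-vote) purely for illustration:

```python
import math

def wilson_lower_bound(up: int, down: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for the true fraction
    of positive votes, at roughly 95% confidence (z = 1.96)."""
    n = up + down
    if n == 0:
        return 0.0
    p = up / n  # observed fraction of positive votes
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

# The Amazon example from above, recast as binary votes:
many_votes = wilson_lower_bound(499, 1)  # 499 positive, 1 negative
one_vote   = wilson_lower_bound(1, 0)    # a single positive vote
```

Unlike a plain average (0.998 vs. 1.0), the Wilson bound ranks the product with 500 ratings well above the one with a single rating, because one vote is far too small a sample to support a high estimate.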
- Moderators and a clear line in the sand. At one point or another, there needs to be some way of removing offending content. This almost always comes down to having some sort of human deciding what should go, and what should stay. By allowing people to report comments (or other user generated content), this makes it fairly easy to bring offensive content to a moderator’s attention. There also needs to be a strict guideline of what is considered offensive and what isn’t. Apple’s app store was the focus of this a while ago: people were quite upset that certain apps with questionable content were getting through while others were not. Wherever you draw your line in the sand, it needs to be consistent and visible to the users of the community.
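The reporting mechanism above could be sketched as a simple queue that only bothers a human once enough independent reports accumulate. The threshold value and data structures here are assumptions, not a description of any real site's system:

```python
from collections import defaultdict

REPORT_THRESHOLD = 3  # assumed value; tune for community size

reports = defaultdict(int)   # comment id -> number of reports
moderation_queue = []        # comment ids awaiting human review

def report(comment_id):
    """Record a user report; queue the comment for a moderator
    once the report count reaches the threshold."""
    reports[comment_id] += 1
    if reports[comment_id] == REPORT_THRESHOLD:
        moderation_queue.append(comment_id)

# Three users flag the same comment; it lands in the queue once.
for _ in range(3):
    report("comment-42")
```

The point of the threshold is to keep a single bad-faith report from burying a comment, while still making genuinely offensive content hard to miss.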
- If you don’t have the option of removing offending content using a manual process, at least give the users a way of hiding offensive content, and let them set their own threshold. The particulars on this depend on how you have implemented your rating system. For example, if you are using just up-votes/down-votes, you could set a threshold such that anything receiving more than 20 down-votes and having a negative overall rating (up-votes minus down-votes) doesn’t get shown by default.
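That hide-by-default rule is just a two-part predicate. A minimal sketch, using the 20-down-vote figure from the example above:

```python
def hidden_by_default(up: int, down: int, downvote_threshold: int = 20) -> bool:
    """Hide a comment when it has more than `downvote_threshold`
    down-votes AND a negative net score (up-votes minus down-votes)."""
    return down > downvote_threshold and (up - down) < 0

hidden_by_default(5, 25)   # heavily down-voted and net negative: hidden
hidden_by_default(30, 25)  # many down-votes, but net positive: shown
hidden_by_default(2, 10)   # net negative, but too few down-votes: shown
```

Requiring both conditions keeps a handful of drive-by down-votes from hiding an otherwise fine comment, while still suppressing content the community has clearly rejected.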
- If possible, use the community to guide itself. The majority of sites that I go to tend to be self-regulating. In that sense, the community polices itself. I’m more than happy to report a post as being spam if I know that something will eventually be done about it.
Anyway, that’s more than enough rambling for now. As mentioned by another commenter, with a ship the size of Google, it is hard to change directions quickly. That being said, you’d think someone that large would also have a pretty good idea of how to do things.
Just my $0.02.