I learned about two interesting, if mostly unrelated, concepts lately. I’m sure that they’ll be useful to me at some point in the future.

The first one is called a Wilson score. Wilson scores are useful to sort a set of reviews or ratings in a meaningful way. Let’s say you run some sort of e-commerce site (we’ll call it Omazan) which lets buyers leave either thumbs up or thumbs down ratings for any given product. There are two common ways of sorting these that are totally wrong.

1. # thumbs up – # thumbs down: Consider the case where you have an item that has 20 thumbs up and 10 thumbs down. This means that 2/3 of people like it. Suppose there is another product that has 500 thumbs up and 490 thumbs down. This means that only about half of people like it. However, both of these products are rated equally given this heuristic.
2. # thumbs up / # total ratings: A simple average works well in many cases, but not for small numbers of ratings. Say you have a product which has received no thumbs down but 1 thumb up, and a product which has received 500 thumbs up and 1 thumb down. The first product, which has far fewer ratings, will be rated higher than the second (1.0 versus roughly 0.998), and it doesn’t make sense to order them this way.
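To see both failures concretely, here’s a quick sketch of the two naive heuristics applied to the example products (the function names are just mine, for illustration):

```python
def net_score(up, down):
    # Heuristic 1: thumbs up minus thumbs down.
    return up - down

def average_score(up, down):
    # Heuristic 2: fraction of ratings that are thumbs up.
    return up / (up + down)

# Heuristic 1 ties the two products, even though 2/3 like one
# and only about half like the other.
print(net_score(20, 10), net_score(500, 490))      # 10 10

# Heuristic 2 ranks a single thumbs up above 500-vs-1.
print(average_score(1, 0), average_score(500, 1))  # 1.0 ~0.998
```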
How does the Wilson score work? Essentially, you plug the ratings you have into the formula, along with a confidence level. Out comes a confidence interval: in layman’s terms, a range that (at, say, 95% confidence) contains the actual fraction of positive ratings. Sorting by the lower bound of this interval is a pretty good way of ordering things, because it balances the average rating against how many ratings there are. Here’s some pseudocode to calculate it (assuming a magic function that can look up a z value for a confidence level):
def wilson_score(num_positive, num_negative, conf):
    num_total = num_positive + num_negative
    if num_total == 0:
        return 0
    z = lookup_z(conf)  # e.g. ~1.96 for conf = 0.95
    p_hat = num_positive / num_total
    # Lower bound of the Wilson score interval.
    return (p_hat + z*z/(2*num_total)
            - z*sqrt((p_hat*(1 - p_hat) + z*z/(4*num_total))/num_total)) \
           / (1 + z*z/num_total)
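Plugging the earlier examples into a runnable, self-contained version (with Python’s `statistics.NormalDist` standing in for the magic z lookup) shows that the lower bound straightens out both orderings:

```python
from math import sqrt
from statistics import NormalDist

def wilson_score(num_positive, num_negative, conf):
    num_total = num_positive + num_negative
    if num_total == 0:
        return 0.0
    # Two-sided z value for the confidence level, e.g. ~1.96 for conf = 0.95.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p_hat = num_positive / num_total
    return (p_hat + z*z/(2*num_total)
            - z*sqrt((p_hat*(1 - p_hat) + z*z/(4*num_total))/num_total)) \
           / (1 + z*z/num_total)

# 20-up/10-down now outranks 500-up/490-down...
print(wilson_score(20, 10, 0.95))    # ~0.488
print(wilson_score(500, 490, 0.95))  # ~0.474
# ...and 500-up/1-down outranks a lone thumbs up.
print(wilson_score(500, 1, 0.95))    # ~0.989
print(wilson_score(1, 0, 0.95))      # ~0.207
```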
Useful trick.
The second thing is something called an H-index. H-indices are used to calculate how awesome a scholar is. An H-index is the largest number H such that H or more of the scholar’s publications have each received at least H citations. So for example, if I have published 8 papers, and one of them had been cited 20 times, but the other 7 had been cited only 7 times each, I would have an H score of 7. If each of those seven had instead been cited 6 times, I would have an H score of 6. This serves as a valuable measure of widespread impact, rather than blockbuster ability; sort of a way to weed out the one hit wonders.
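The definition translates directly into a few lines of code (a sketch; `h_index` is my own helper name, taking a list of per-paper citation counts):

```python
def h_index(citations):
    # Sort citation counts descending; h is the largest rank i (1-based)
    # at which the i-th paper still has at least i citations.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# The examples from above: one paper cited 20 times, seven cited 7 times each...
print(h_index([20, 7, 7, 7, 7, 7, 7, 7]))  # 7
# ...and the same seven with only 6 citations each.
print(h_index([20, 6, 6, 6, 6, 6, 6, 6]))  # 6
```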