

A few points about my Vino Value Algorithm.

The purpose of the proprietary Vino Value algorithm is to identify value, which relates overall quality (from my personal ratings after tasting a wine) to retail price (the price at which a bottle sells at the winery or in local stores, as reported by winemakers). The algorithm combines subjective and objective data, including tasting quality scores, price per bottle, and other factors such as distribution and aging potential. I developed it on the basis of methods learned while earning two engineering degrees and an MBA, as well as decades of strategic management consulting around the world. Methods embodied in the algorithm include linear interpolation, regression analysis, price elasticity comparison, and game theory. It also draws on my own decades of critically sampling diverse wines.
The idea for the Vino Value algorithm came to me in rural Bordeaux four years ago, while I was recuperating from a severe bout of flu. Bedridden, I spent days assembling the algorithm before field testing it in the U.S., France, and other countries.

A few notes about the algorithm are below.

ONE. We expect that as the price of a bottle increases, so should the quality of the wine inside. This is not true. Take any issue of a renowned wine publication, such as Wine Spectator, and for a specific region (say, the Anderson Valley of California) plot the ‘experts’’ numerical quality ratings (generally a number between 80 and 100) against the price per bottle. You would expect to see some semblance of a straight line: higher price should be associated with better quality. Instead, the points on an X-Y graph resemble shotgun pellets sprayed against the side of a barn door. Correlation between price and quality? Forget it. The Vino Value algorithm takes a group of wines (usually from a specific region) and ranks them all according to price (low price means a high score, high price means a low score). It then takes subjective wine ratings (my own scores, based on tastings, usually between 80 and 100 points) and combines these two numbers. Based on this, the algorithm ranks wines as having a price value that is ‘out of range’ (these are never published), ‘good ♫,’ ‘excellent ♫♫,’ or ‘superlative ♫♫♫.’
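To make the idea concrete, here is a minimal sketch of that two-step combination in Python. The actual Vino Value algorithm is proprietary and uses more inputs (and methods such as regression and price elasticity comparison); the equal weighting and the category thresholds below are purely illustrative assumptions, not the published method.

```python
# Illustrative sketch only: the real Vino Value algorithm is proprietary.
# Weights and thresholds here are hypothetical assumptions.

def vino_value(wines):
    """wines: list of (name, price, quality) tuples,
    quality on the familiar 80-100 tasting scale."""
    n = len(wines)
    # Step 1: rank by price. The cheapest bottle earns the highest
    # price score (low price means high score, high price means low score).
    by_price = sorted(wines, key=lambda w: w[1])
    price_score = {w[0]: 100 * (n - i) / n for i, w in enumerate(by_price)}

    results = {}
    for name, price, quality in wines:
        # Step 2: combine the price rank with the subjective tasting
        # score (equal weights assumed for this sketch).
        combined = 0.5 * price_score[name] + 0.5 * quality
        if combined >= 90:
            results[name] = 'superlative ♫♫♫'
        elif combined >= 80:
            results[name] = 'excellent ♫♫'
        elif combined >= 70:
            results[name] = 'good ♫'
        else:
            results[name] = 'out of range'  # never published
    return results
```

With this toy weighting, a cheap, highly rated bottle lands in the top band, while an expensive bottle with a middling rating falls out of range, which is the essence of the value idea described above.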

TWO. Regarding wine scoring: Take a group of individuals who have been tasting and comparing wines for a few years, and provide them with three wines to taste. Let’s say these wines have already been scored by professionals and tasting experts. If the experts considered one wine as amazing, one wine as okay, and one wine as a dog, it’s very likely that this group of avid tasters will classify the wines in the same categories. Or take ten wines. Although there will be subjective differences (I know, based on having held many such wine tastings) most of the overall ‘scores’ contributed by ‘experts’ and avid enthusiasts will be hierarchically similar, from poor to excellent.

As an avid taster, I have confidence in my subjective evaluation of the overall quality of wines. Although I am not a sommelier or a professional, I wrote a book about wine based on tastings in a dozen countries. I have also written a wine blog for nine years and am a contributor to Forbes, Wine Enthusiast magazine, and Food52, among other publications. My subjective evaluation of the overall quality of wines is generally solid.

THREE. With results from this algorithm, a list of wines is prepared in which there are no losers. If I list a wine that has been ‘value evaluated,’ it offers, at the very least, good quality and good price value. (If a stunning wine carries an outrageous price compared to similar wines from the same region, it will not be included in the list.)
