Hats off to WINE magazine for publishing the judges’ scores for Thursday’s Chenin Challenge, as it shines a precise statistical light on the problem raised publicly by Remington Norman MW in November: that tasting panels don’t know what they’re doing.
Two judges, one (like Rem) a Master of Wine and the other the Proprietor of a Wine Judging Academy, were clearly singing from different hymn sheets: on the 14 wines whose scores were published, they achieved a most remarkable negative correlation of -0.77. In plain English, the wines the Master loved, the Proprietor hated, and vice versa. Such consistent divergence of opinion makes including their scores in an averaging process with the other three judges worthless, and indeed throws question marks over the whole rating methodology.
The Challenge winner (wine #2 above, the Rex Equus 2008) shows the worst discrepancy between Master and Proprietor. To demonstrate how futile the process is, we can reconstruct one judge’s scores from the other with the simple formulas:
Master = 29.32 − 0.80 × Proprietor
Proprietor = 28.25 − 0.74 × Master
with the negative weights confirming the foolishness of the enterprise. If the two people in question were not powerful opinion formers it would be hilarious. As it is, it is simply sad and another small tragedy for Chenin and SA wine.
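For the statistically curious, here is how such numbers fall out of the raw scores. This is a minimal Python sketch using made-up score vectors (the published Chenin Challenge scores are not reproduced here): it computes the Pearson correlation between two judges and the two least-squares lines predicting each judge from the other, and checks the textbook identity that the product of the two regression slopes equals r squared. Pleasingly, the published figures pass that check: 0.80 × 0.74 ≈ 0.59 ≈ (−0.77)².

```python
# Hypothetical data for illustration only; NOT the published Challenge scores.
master = [17.5, 14.0, 15.5, 13.0, 16.5, 14.5, 16.0]
proprietor = [13.0, 18.0, 15.0, 17.5, 13.5, 16.0, 14.0]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ols(xs, ys):
    """Least-squares fit ys ~ intercept + slope * xs; returns (intercept, slope)."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

r = pearson(master, proprietor)
_, slope_mp = ols(proprietor, master)   # predict Master from Proprietor
_, slope_pm = ols(master, proprietor)   # predict Proprietor from Master

# Regression identity: the product of the two slopes equals r squared,
# so two negative slopes imply a negative correlation, as in the article.
assert abs(slope_mp * slope_pm - r ** 2) < 1e-9
```

Run against the real published scores, `ols` would recover intercepts and slopes of the same form as the Master/Proprietor formulas above.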
The “best” judge (by the criterion of highest correlation with the panel mean) was the only Cape Wine Master in the line-up, which should please the Cape Wine Academy no end. Interestingly, the Proprietor (who is also a wine importer and retailer) correlates with published retail prices at 0.4, while the Master has a negative correlation of -0.4 with price. A glorious situation to be in: preferring cheaper wines to more expensive ones.
The “best judge” correlated with price at an impressive 0.6, so in future the Chenin Challenge would be well advised to let Ginette de Fleuriot do the judging all on her own.
*** abusive comment deleted ***
Points well made, Neil, especially since your observations question the value of blind tastings (vs your old bugbear, sighted tastings) by panels.
The good news is that tastes vary hugely. If that were not the case, we would not have such a selection of wines available.
Wonder why ‘art experts’ don’t attempt to rate paintings, music or architecture on a 20-point or 100-point scale? Maybe they tried and gave it up as a bad experiment a long time ago!
It was interesting to see Mendoza, Argentina’s equivalent of the ‘Platter guide’ recently. No attempt at wine ratings, just useful information about the areas, wines and wineries concerned, along with the necessary contact details.
What the experts think is only somewhat interesting; it is what the consumer thinks that really matters!
I still believe blind tastings are the only honest way to judge wines. The problem here is that the “experts” are negatively correlated, so averaging their scores is meaningless.
Far better to have said “the Master’s top wines were Fort Simon, Leopard’s Leap, Boschendal and Graham Beck”, “the Proprietor’s best wine was Rex Equus”, and “Ginette’s best wines were Rijk’s and Rex Equus”.
Consumers are being seriously misled by the current system.
Thank you, Neil; we are indeed very proud of all our Diploma and Cape Wine Master graduates. I feel that they are all more than capable of judging any wine show, anywhere in the world.