Welcome to That Useful Wine Site!
The history of wine ratings is morbidly fascinating. In the days of yore, wine was “rated” in a purely descriptive manner: there were no such things as “scores”. Reviewers could, and did, gush on in arbitrary and often confounding paeans of praise. Moreover, it was widely felt that many reviewers were essentially in the pocket of the wine-making industry, accepting not merely free sample bottles but often lavish meals, or even all-expenses-paid trips abroad to the wineries (plus those lavish meals). Many were themselves overtly involved in making or selling wine.
In 1959, Dr. Maynard A. Amerine, Professor of Enology at the University of California, Davis (an institution famous for its wine expertise), essayed a more exact approach to wine rating, based on a 20-point schema. The idea was to create a standardized scale for evaluations, so that this reviewer’s perceptions could be compared with that reviewer’s perceptions on a meaningful basis. That scale, and modest variations of it, are still in use today by tasters from talented amateurs to top professionals.
In modern times, many came to feel that the 20-point scale had two limitations: it was too narrow a scale, not allowing finer distinctions than 5% at a time; and it was too anchored in older times, when wines were more commonly found with really serious defects. For example, it awards 1 point—fully 5% of every wine’s score—for “clarity”, though you would be exceedingly hard put to it today to find a wine anywhere in the civilized world that is cloudy or hazy. In consequence, a move developed toward 100-point scales.
(Some reviewers just use a 5-point scale, awarding “stars” and “half-stars” to wines as a very rough, quick indicator of approximate value or interest as that reviewer sees it. The argument for this approach is that the other scales artificially try to make finer distinctions than are realistic, as if an 88-point wine and an 89-point wine are perceptibly different, which not everyone agrees is so.)
What many people do not realize is that most “100-point” scales do not, in fact, have a 100-point range. The most famous, that of Robert Parker, developed around 1975, is actually a 50-to-100 scale. It was supposedly intended to mimic school grading systems, with their “A-B-C-D-F” scales, except with a decimal gradation for each “grading”. But it is, in the real world, narrower yet: nowadays virtually no wine receives a score below 80, further squeezing the scale down to 20 “real-world” points—essentially no real improvement over the older U.C. Davis scale.
Robert Parker was famously the first wine critic to hit the big time, largely because he used such a finely graded scale to rank the wines he reported on. He was also among the very first to clearly establish a separation between those who review wine and those who sell wine, so that consumers could feel they were receiving unbiased advice. Parker founded a publication, the Wine Advocate, that remains today arguably the most influential source of wine reviews. It was a commonplace, and a truth, that Parker’s opinion alone could largely make (or break) any given wine.
(As the years wear on, that seems less and less true: many important critics are now as or more widely respected and heeded, Jancis Robinson—now described by some as the world’s most respected wine critic—being one. Wine writers are beginning to refer to “the bad old days” of the late 80s, the 90s, and even the early 2000s, when hugeness in a wine sometimes seemed the sole criterion for critical adulation.)
The problem that many saw with that was this: the entire wine-drinking world was being forced to conform to the tastes of one man. No one doubts Parker’s (or any leading critic’s) honest dispassion, or even their tasting skills. But ability to discern is not the same as preference in style. And the chief problem is sheer quantity.
In December of 2000, in a long profile of Parker published in The Atlantic Monthly, Parker told the interviewer that he tastes 10,000 wines a year; that works out to about 27 wines a day each and every day of the year. But elsewhere, Parker has said that he tastes 40 to 200 wines a day (so either the 10,000 was way low, or he doesn’t taste every day of the year—if you plug through the numbers, they are farcical). To be clear, the issue is whether anyone, supertaster or not, can discern meaningful subtleties in wines at that pace, owing to sensory fatigue. As one writer (C. S. Miller) has put it, “tasting so many wines in one sitting is really questionable and has actually been proven as pretty much impossible due to something called aroma reset. Apparently our palates have a 5-second reset after each wine tasted and that reset time doubles every couple of wines tasted. So after 30 or so you would need like 30 minutes for your palate to reset. I like to refer to it as the Staten Island dump phenomenon. Drive past the dump and whew, stay there a couple of hours and you don’t notice it anymore.” If we assume Parker’s typical work day of actual tasting time is, say, six hours and the quota is, say, 120 wines that day (both reasonable assumptions), that’s another wine every three minutes, hour after hour. Absurd.
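The arithmetic above is easy enough to check for oneself. A minimal sketch, using only the figures quoted in the text (none of them measured data of ours):

```python
# Back-of-the-envelope check of the tasting-volume claims quoted above.
# All input figures come from the article text, not from any dataset.

wines_per_year = 10_000
wines_per_day = wines_per_year / 365
print(f"{wines_per_day:.1f} wines per day, every day of the year")  # 27.4

# The illustrative workday from the text: six hours of tasting, 120 wines.
hours_tasting = 6
wines_that_day = 120
minutes_per_wine = hours_tasting * 60 / wines_that_day
print(f"one wine every {minutes_per_wine:.0f} minutes")  # 3
```

Either way one slices the numbers, the pace leaves no room for the palate-reset delays Miller describes.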
So what is the inevitable consequence of all that? Obviously, it is that the wines that will stand out, and thus receive the higher ratings, are going to be the boldest ones: the ones with high alcohol, with heavy, dominant fruit and other flavors, with largeness. Wines with delicacy and subtlety will simply get lost in the sensory depletion. It’s like a largely deaf music reviewer trying to rate compositions on a minute’s exposure—delicate string quartets are going to lose, and Mahlerian symphonies are going to win. Remember, that doesn’t mean there is anything wrong with either the music of Mahler or the gigantic sort of wine; but there is something severely wrong when 12½% alcohol wines with delicate but sensual qualities simply cannot get a fair hearing in such a court.
This is not a cranky view that we alone espouse. Alice Feiring, who both blogs and writes about wine for Time magazine and has contributed to many other notable publications, is a leader in the “anti-Parker” movement. The subtitle of her book The Battle for Wine and Love is “How I Saved the World from Parkerization”. Whether Ms. Feiring gets all the credit may be quite debatable, but there is a very definite opposition movement.
Mind, it isn’t just that more subtle, “small” wines get lost. There is also the open question of how meaningful the actual ratings can be. This passage from the Wikipedia article on Parker is of note:
A lengthy profile of Parker entitled “The Million Dollar Nose” ran in The Atlantic Monthly in December 2000. Among other claims, Parker told the author that he tastes 10,000 wines a year and “remembers every wine he has tasted over the past thirty-two years and, within a few points, every score he has given as well.” Yet, in a public blind tasting of fifteen top wines from Bordeaux 2005—which he has called “the greatest vintage of my lifetime”—Parker could not correctly identify any of the wines, confusing left bank wines for right several times.
Indeed, tasters’ (and especially published critics’) abilities are usually taken as a given, but should they be? A Jonah Lehrer article titled “Does All Wine Taste the Same?” in The New Yorker is pretty much required reading for anyone who wants some insight into critics’ abilities to do the things they purport to do. It is owing to occasional cold-blooded examinations of reality like that article that we firmly believe that most reviews are verbal cotton candy: big and fluffy, but squeeze them a bit and what’s left is a rather small ball of ick.
If you want to entertain yourself with some more scathingly critical, but thoroughly evidence-based, critiques of the plausibility of serious wine tasting, check the many results of this Google search on the matter. Or, if you’re in a hurry, just follow this link to a statistical evaluation of the judges’ ratings.
Moreover, as we noted above, the “100-point” scale is nothing of the sort, and even the nominal 50-point range is a fiction. The reality is that virtually no modern wine ever gets a rating below 80. That means that in effect we are back to the 20-point scale that Parker’s system was supposed to supplant. Moreover, with the huge attention paid to “ratings” by many unsure wine purchasers, almost no wine scoring under 90 points (especially if it is not quite inexpensive) has much of a chance at big sales; even wines garnering high-80s scores struggle somewhat.
We do not mean to seem to be picking on Parker. He is the large target on the horizon, but there are plenty of others who purport to rate innumerable wines on a “100-point” scale. The idea of precise numeric scales was doubtless a quite useful one three decades ago, when wine writing was elliptical and of dubious honesty. Today, wine is big business in America and the world, and consumers are generally a good deal more schooled in the subject. Honest and intelligent reviewers with blogs abound (as do, to be sure, hopeless amateurs who seem actually pleased and even perversely proud that they have never before even heard of, say, Albariño or Assyrtiko).
Yet another complication for the poor consumer is that ratings vary by region. Reviewers in Oceania (Australia and New Zealand) tend to score all wines some points higher than is normal in the U.S., while not a few European ones are a couple or three points lower on average. Knowing who is rating the wine is as important as the points number itself. And, as suggested above, even in the U.S., some reviewers are known for higher-than-average point awards (James Suckling comes to mind).
(Also, not a few amateur reviewers seem unacquainted with various wine faults that are, regrettably, scarcely rare; such reviewers not infrequently pan widely praised wines with descriptions that make it highly likely that they had encountered a faulty bottle that they interpreted, and wrote up, as a poor wine. They have other, um, quirks too; we loved a review of a modest Montepulciano that warned readers “pales in comparison to more full-bodied wines such as Amarone”—well we should hope so, inasmuch as he was reviewing a wine costing $5 a bottle.)
Insightful descriptions of wines that do not at all depend on numeric scales abound, and in many ways are better than the terse, almost cryptic few lines that accompany a number that is supposed to really be all you need to know. Numeric ratings are not meaningless—when we know them for wines, we show them—but they require careful and informed interpretation.
We base our selection of wines to display on critic ratings as amalgamated by Wine Searcher. We generally use 89 points as a minimum; but we will use 90 points as a minimum if there are enough generally available under-$20 bottlings of 90 or more for a particular variety.
In light of what we have said above, why those numbers? Because it is our feeling that critics make a much bigger distinction between 89-point scores and 90-point scores than, say, between 88-point scores and 89-point scores. A score beginning with a “9” puts the reviewed wine into a whole other category than any score beginning with an “8”; and critics are choosy about 90+ ratings. That one-point difference between 89 and 90 is, we believe, a real hurdle for most reviewers.
We would always use 90 as a minimum were it not that so many interesting grapes have few 90-point bottlings reasonably available at $20 or less. Indeed, in a few cases, we had to include 88-point wines just to have any samples at all to show.
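The selection rule described above can be stated compactly. A minimal sketch, in which the `Wine` records, the prices and scores, and the `plenty_at_90` switch are all hypothetical illustrations (real data would come from amalgamated critic scores such as Wine Searcher’s):

```python
# Hypothetical sketch of the site's stated selection rule:
# a score floor of 89, raised to 90 when a variety offers enough
# generally available under-$20 bottlings scoring 90 or better.
from dataclasses import dataclass

@dataclass
class Wine:
    name: str
    price: float  # US dollars
    score: int    # amalgamated critic rating

def select(wines: list, plenty_at_90: bool) -> list:
    """Keep under-$20 wines at or above the applicable score floor."""
    floor = 90 if plenty_at_90 else 89
    return [w for w in wines if w.score >= floor and w.price <= 20]

# Illustrative (made-up) bottlings:
wines = [Wine("A", 14.99, 91), Wine("B", 18.50, 89), Wine("C", 12.00, 88)]
print([w.name for w in select(wines, plenty_at_90=False)])  # ['A', 'B']
print([w.name for w in select(wines, plenty_at_90=True)])   # ['A']
```

The point of the two-tier floor, as the text explains, is that the 89-to-90 boundary is where critics themselves draw their sharpest line.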
For long, the “Big Three” in published wine ratings were, roughly in this order, Parker’s Wine Advocate (and its online spinoff eRobertParker), James Suckling at the Wine Spectator, and Stephen Tanzer’s International Wine Cellar. Following a little way back from those was the Wine Enthusiast, in turn followed by several distinctly lesser lights.
About a decade ago, there was a major shakeup in the world of wine criticism. The short of it is that Antonio Galloni—who at that time was the heir apparent at Wine Advocate to the aging Mr. Parker but who fell out with Parker when Parker sold the enterprise to some investors in Singapore—left the Advocate and founded Vinous. Vinous then bought out Tanzer’s International Wine Cellar; Tanzer is now an editor at Vinous. For the long of it, we strenuously recommend a reading of W. Blake Gray’s 2014 article “The Changing Face of Wine Criticism”, which explores in much more detail what happened and, more important, what may ensue.
One thing that clearly emerges from the article is the (well-known) reality that (as we suggested above) the major rating houses have very different standards. As one brief passage puts it, “Galloni might not give as many 100-point scores as his former master, but he does pass out 96s like Halloween candy. From Tanzer, an 88 is still a wine he likes, not a veiled admonition, and a 93 is something he adores.” We here always felt Tanzer’s ratings were the most useful and reliable; we have not kept up with Vinous, but hope that he remains a highly useful source.
While the wine search engine Wine Searcher shows some consumer ratings for each wine, those are only numbers, with no descriptions. That is still helpful, but for a lot more, one can turn to the remarkable site CellarTracker. It is a place (free access) where you can read the opinions of serious, and usually knowledgeable, wine consumers on almost any wine in the world, each separated by vintage year. This is a really, really helpful resource for getting behind the sheer numbers, and also for seeing how real wine drinkers rate wines (which is often rather different—sometimes rather higher, sometimes rather lower—than the paid pro reviewers do).
That is the site’s main use, but it also has numerous articles on and about wine, all good reading. And you can not only look up a particular wine: you can also browse the listed wines by any of several criteria (the site front page covers all this). You can see a little of the site’s history at the link in this sentence.
The one drawback to using CellarTracker is that it is sometimes difficult to find the wine you are interested in owing to the confusing variety of listing methods users employ when leaving notes. To quote the site:
“It is most efficient to type as little as possible, just the first few characters of the critical words (e.g. the name of a winery, the variety, the appellation, the vineyard), so that you are most likely to find and match an existing wine. Generally speaking, the less information you type, the more likely you are to find a match if one exists, as you are less likely to throw off the search with typographical errors or extraneous words. Please remember that the wine database is populated and shared by ALL users of the site. That means that we try to settle on consistent ways of representing specific wines across all vintages, even though there are often dozens of varying conventions used inconsistently throughout the industry. While we are always open to corrections, it is not always possible to represent a given wine label the way a specific individual might wish to.”
That’s the problem: database entries are made solely by the users. It often takes some careful searching to find what you want. Usually it is not a problem, but it arises often enough to be an annoyance.
We said the site is fully accessible at no cost. That is true, but you can subscribe to receive “premium” features, as described at the page linked. The fee is not mandatory: their business model is what is sometimes called “freemium”. They do suggest some contribution levels, based on the number of bottles in your online “cellar”.
We heartily recommend the site. We add just this one crucial note: when you look up a wine on CellarTracker, you really must read the user comments, not just look at the numbers. Quite often, reviewers who leave glowing descriptions attach weirdly low rating numbers: one must suppose they feel that 85 is a really high grade…