Frequently Asked Questions

How come a wine featured in one tasting (wine X) received three stars or was not recommended, while in another tasting in the same issue the same wine received a high rating, e.g. five stars?

The most frequent question we are asked is why results differ between tastings.

It is not surprising for judging panels to differ in their ratings of the same wines.

In the 1970s and ’80s a Lindemans Port, for example, would show on the back label “two trophies, four gold medals, ten silver and eighteen bronze”. Clearly, different expert panels gave different results. There are endless reasons for differing results. According to Adelaide University Lecturer in Olfactory Science, Richard Gawel, it often depends on the context. For example, in a regional tasting all varietal wines are tasted within their category regardless of price, so if two wines are judged with the same star rating then, yes, they are rated equally within that star-rating band by the panel. If it is a “style” tasting, say a national cabernet tasting, then the wines are judged within price brackets or vintages, and the judges may well make allowances for a very good wine at a low price and expect more from wines at a higher price, or rate wines from “better” vintages more highly. Theoretically, however, they should not do this.

A regional panel may judge differently from a national panel because it may be looking for local “typicity”, which would not be a factor nationally. For example, in a Hunter wine show the judges might look at 100 very young, light-flavoured and acidic semillons which would get short shrift elsewhere but might achieve high ratings, because the winemaker/judges know that some of these wines will be outstanding with time. On the other hand, it is not uncommon for a reserve wine from a stable to achieve a lower rating than its lesser sibling. This may be because the lower-priced wine is currently more balanced and drinkable, whereas the reserve wine might still have “edges” as it comes together, the winemaker having made it for longer maturing, not for drinking now.

Some wines are bottled with high dissolved-oxygen levels and can change rapidly in bottle. Others can show batch variation if bottled at different times. And of course there is the human factor: some expert judges are “hot” on secondary cork taint or on reduced (sulphide) characters in screwcap wines, while others are brett (brettanomyces spoilage) police, and so on. Then you can have “summer results” versus “winter results”, where lighter wines tend to do better in summer and more alcoholic versions do better in winter. Just look at the results for the same wines at the Brisbane Wine Show compared with the Melbourne Wine Show. I suspect that in summer our blood is thinner than in winter and we prefer wines accordingly.

Interestingly, in our own Wine of the Year awards taste-off we have seen a wine go from a gold medal to a bronze in only nine months. Sometimes it is the same expert judge tasting the same wine again. It makes a mockery of the 100-point system, which implies an accuracy that simply does not exist. Just because we want a scientifically “correct” answer every time does not make it so!

With a judging panel of three, it only takes one judge to change the result. For example, in a New Releases tasting one judge might give a wine 18 points, another 17 and another 15.5. By straight averaging the wine would normally receive four stars, but with our majority-rules system (the closest two scores) we like to acknowledge the better wines and knock out the lesser ones. So where the scores differ widely, the judge in the middle gets the call to either leave his score where it is or go up half a point. In this example, going up half a point would take the wine to four and a half stars. In the more competitive national cabernet tasting, one judge might give the wine 18 points, another 16 and another 15.5, hence three stars (the two closest scores). Had the judge in the middle given a higher score, the result might have been closer.
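For the curious, the “closest two scores” rule described above can be sketched as a small calculation. This is an illustrative sketch only, assuming the final score is simply the average of the two closest judges’ scores; the function name and the mapping from points to stars are hypothetical, not Winestate’s actual procedure:

```python
def closest_pair_score(scores):
    """Average the two closest of three judges' scores (majority-rules sketch)."""
    a, b, c = sorted(scores)
    # The middle score always belongs to the closest pair, so compare the gaps
    # on either side of it and average the tighter pair.
    return (a + b) / 2 if (b - a) <= (c - b) else (b + c) / 2

# Example from the text: 18, 17 and 15.5 -> the closest pair is 18 and 17.
print(closest_pair_score([18, 17, 15.5]))   # 17.5
# If the middle judge goes up half a point, the pair average rises too.
print(closest_pair_score([18, 17.5, 15.5])) # 17.75
# National cabernet example: 16 and 15.5 are the closest pair.
print(closest_pair_score([18, 16, 15.5]))   # 15.75
```

Note how a single judge’s half-point move shifts the pair average, which is exactly why one judge can change the result.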

For every 100 wines tasted (even with the best expert judges) we usually find only one or two wines where all three judges give exactly the same score. Where does this leave the reader? My advice is that if a wine is recommended on a number of occasions, with both a very good rating and a good rating, it is worth trying. The rest is up to your taste.

What does 'blind' tasting mean?

This involves a panel of three judges assessing anonymous (“blind”) glasses of wine placed before them in varietal or style categories. Unlike some shows, where glasses are poured from bottles wrapped in paper bags (and you can often see the top or outline of the bottle), Winestate presents only a line-up of anonymous glasses, allowing for a truly unbiased setting.

Why haven't you tasted my favourite wine - X?

Whilst Winestate judges over 10,000 Australian and New Zealand wines each year – more than any other wine show or magazine in the world – we obviously cannot judge every wine commercially available. Wineries and wine companies have various reasons for submitting to different competitions. We do buy the occasional icon wine (like Grange, or Paul Jaboulet La Chapelle) as a yardstick for particular style tastings.

What is the process for submitting wines?

Winestate judges New Releases for every issue, and these can be submitted by producers at any time. Other style or varietal tastings and their deadlines are outlined in the editorial calendar available from the website. Entry forms are also available from the website. If all else fails, contact tasting@winestate.com.au for further information and we will be happy to guide you through the process.