Are Reviews Objective?
The quick and obvious answer is NO. As much as I try, I am still not objective. I am not even internally consistent (for example, the Viator got a higher score than the Neutron despite the Neutron being the better knife...I've got to fix that). But simply because gear reviews aren't objective doesn't mean they are worthless. I think there is a lot of value to be derived from criticism even when it is not objective.
Gear reviews, as a very small slice of criticism, often run into three objections vis-à-vis a lack of objectivity: bias, relativism, and bandwagoning. Almost as if a microcosm of our current society, if you have an opinion on gear you have to expect criticism from those three directions. I am going to make the case that criticism in general and gear reviews in particular are worthwhile if they are done correctly, and by correctly I mean done in a way that takes these problems into account. The solution to bias is transparency. The solution to relativism is consensus (or, to use the Rawls phrase, overlapping consensus—because you know you can't have scholarly jargon without a bit of redundancy). And the solution to bandwagoning is time and structure.
The Problem of Bias
The problem of bias is the easiest issue in reviews to understand but the most difficult to spot. Simply put, it comes down to this question: is the review improperly influenced by a concern unrelated to the merits of the item being reviewed?
In this regard Consumer Reports operates as the paragon of unbiased reviewing. They do not accept advertising. They do not disclose the identities of their purchasers or reviewers. They do not take review samples. Obviously, they have a much larger revenue stream than I do, so I can't go that far. If I could, I certainly would, but alas this is a hobby and not a job.
My tack to combat bias, one a lot of reviewers use, is to state my possible sources of bias up front. I know I have a preference for small knives, so I say that. I also try to do a thorough job disclosing the source of each review sample—a halfway measure between not taking review samples at all and approaching makers with your hand out. Finally, I take great pains to rid myself of review samples in a way that does not provide me with remuneration or benefits (more on this soon).
If you don't know where or how a gear reviewer acquired the piece they are reviewing, you shouldn't trust the review. As someone who has reviewed gear for years now, I have been approached with all sorts of offers, ranging from "Here is our product, do what you want" to "I don't have a review sample, but can you say nice things about us?" The lack of sourcing in a review means you don't know how to evaluate the evaluation. Is it the reviewer's true opinion or something bought and sold? Bias is a huge problem, and while there are simple steps to highlight it, you can never get rid of it entirely because a reviewer, like everyone, is a creature of habit.
The Problem of Relativism
Often the rejoinder to any form of criticism is simple: "That's your opinion." This is, in many ways, a useless response, as it is both always true and never insightful. At its core, though, there is a more pernicious argument lurking in this attack: the simple notion that all opinions are equally valid. If true, this makes any opposing viewpoint meritless because it's all just shouting into the wind. Yet we all understand, in an intuitive sense, that some opinions are more useful than others. More to the point—we BEHAVE as if some opinions are more useful than others.
For example, over time an opinion about the quality of certain works of art has coalesced—people generally agree that Beethoven's 9th Symphony is good, that Miles Davis's Kind of Blue is worth listening to, and that Picasso's Guernica is a moving work of art. Not everyone agrees, but lots of people do. More importantly, lots of people with lots of experience and insight agree. We are currently in a world where expertise is looked on with skepticism, but this has happened many times throughout history. Eventually we move away from that position and back to understanding that expertise is valuable. After all, I think most people prefer a dentist to do their fillings as opposed to a random guy with a drill.
These three things together—time, consensus, and expertise—are a strong pushback against the relativistic attack on reviews. But it needs to be all three; two out of the three is probably not enough. For example, in their own time most people, even those with experience, believed that Georg Philipp Telemann was superior to Bach (the two were contemporaries). It is easy to see why someone, at first blush, could think that way—Telemann was self-taught, his output was enormous (he is often called the most prolific composer of all time), and he was famous in his lifetime. But Bach's music was so complex that it took a while for even the experts of the day to catch up, and when they did, the consensus opinion slowly changed. Now, centuries later, everyone knows Bach and only a comparative few have heard of Telemann.
If, over time, there is a strong consensus among those with deep experience and insight that something is good, it is probably good. Reviews aren't objective, but that particular three-legged stool is as close as we can get and is the best bulwark against relativism.
The Problem of Bandwagoning
The Telemann example raises another problem—bandwagoning. This is especially bad in the gear world. Many of the YouTubers and Internet writers know each other, and so opinions and influences bleed together. Combine this with the efficiency of social media at spreading opinions, and bandwagoning becomes very hard to combat in reviews. The logic goes something like this: "Nick likes something, and I respect Nick, so I would probably like that thing too."
One way to break the thrall of bandwagoning is simple—review things when others aren't. I try to stay current and grab new and interesting stuff to review, but sometimes, when something is scorching hot, I will pass on it, knowing I will come back to it later. This lets the fever subside. Still, this solution to bandwagoning is not particularly satisfying.
My other solution to combat bandwagoning is a highly structured review system. Oftentimes people like stuff but can't explain exactly why. By having a review system like mine, I force myself to think about something on a very granular level. This, in turn, compels me to really consider whether something is good or merely hyped. For example, it seems pretty clear to me that the Para 3 is just not as good a knife as the PM2 and is probably not that good a knife compared to the competition. The review scoring system let me focus on why—it leaves the PM2's best attribute on the table, it has a substandard handle, and it carries steel that doesn't match its price point. All of that came to the front of my mind because of my reviewing scale. Maybe I would have gotten to these insights without the scoring system, but having it forces me to walk through the steps of justifying my opinion, and that helps me think about things more critically. It's the best defense I can muster against bandwagoning.
Reviews aren't objective. But simply because something isn't objective doesn't mean it lacks value. Good reviews, whether of gear or operas, should have some way of accounting for and combating the problems I laid out above. When looking at new reviewers, check whether they have acknowledged the dangers of bias, relativism, and bandwagoning, and you will be better equipped to judge the value of their opinions.