Evaluating Quality Control
I received an email recently asking how I evaluate quality control. This is a very, very hard thing to do, but here is how I go about it when it is possible.
First, everything produced by man, whether it is something as complex as a state-of-the-art luxury car or as simple as a brown paper bag from the grocery store, is subject to manufacturing errors. A single lemon does not equal bad quality control. People on forums seem to disagree with this sometimes, using a difficult-to-refute performative proof logic--if I got a defect, they must not have good QC. But the truth of the matter is that no matter the skill or the scale of the producer, errors will occur, and a single error or a few errors (depending on the scale of production) do not indicate poor quality control.

In industrial production, Lego is often cited in books on management and business as having the best QC around. They produce literally billions of small items, all of which must be precisely made in order to work, and many of which have to be bundled together in specific ways to make the final product. They have to do this quickly and efficiently to make sure they can turn a profit selling these tiny things (relatively) cheaply. Despite this, Lego's error rate is regularly cited as 13 manufacturing errors per 1 million bricks made. Think about that for a second. 13 out of 1,000,000. That is below the error accepted for convictions in criminal cases ("beyond a reasonable doubt" is often explained as 99% sure, or 1 error in 100; Lego's rate works out to 0.0013 errors in 100, a much smaller rate of error). An error rate of 0 is not possible when you have an endeavor performed by humans or machines made by humans, so the idea that one bad knife or light equals poor QC is absurd, despite the seeming power of the logic used.
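If you want to see that comparison worked out, here is a quick back-of-the-envelope sketch (plain arithmetic in Python; the rates are the ones cited above, and the comparison is illustrative, not a formal statistical claim):

```python
# Back-of-the-envelope comparison of the error rates discussed above.
lego_errors_per_million = 13              # the commonly cited Lego figure
lego_rate = lego_errors_per_million / 1_000_000

reasonable_doubt_rate = 1 / 100           # "99% sure" rule of thumb

print(f"Lego error rate:       {lego_rate:.4%}")              # 0.0013%
print(f"Reasonable doubt rate: {reasonable_doubt_rate:.0%}")  # 1%
print(f"The legal standard tolerates roughly "
      f"{reasonable_doubt_rate / lego_rate:,.0f}x more error.")
```

Run it and the ratio comes out to about 769: the "beyond a reasonable doubt" rule of thumb tolerates nearly 770 times the error rate Lego is cited as achieving.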
So how do I evaluate QC problems or design flaws? It's not easy, and I have to do it indirectly most of the time, but here is how I do it.
In some instances, it's clear from the number of reported problems and the source of reported problems that there is a QC issue. The most recent example I can think of is the Elmax steel controversy. Whatever you think about Cliff Stamp, it is pretty obvious that he is really methodical when it comes to his blades. The man keeps journals about sharpening angles for given knives. He hunts down and consolidates CATRA numbers for steel. He is a polarizing figure, but he is a good source of information. He initially pointed out that ZT's heat treat on the first run of Elmax blades left them prone to rolling and dulling. I noted this in my review of the ZT0560. He put it out there, and then not just one or two people agreed (you can find agreement between one or two people on the internet regarding just about anything), but dozens of people agreed and showed pictures of problems. This is the first form of QC evaluation--many good sources complaining about the same problem.
The second way I evaluate QC is by tracing design improvements and changes. I noted in my review of the Strider PT CC that the lock face geometry changed and that the pivot design changed. Both of these things indicated a problem with previous designs. This sort of iterative upgrading is common in the knife and light world. When changes occur that aren't "materials upgrades," like better steel or a new emitter, it can (though not always) point to problems with the original production models. Spyderco does this all of the time--the molded clips on the Delica, Endura, and Dragonfly all gave way to steel clips in their iterative upgrade process. They even have a name for it: Constant Quality Improvement. This behavior, displayed by both Strider and Spyderco, is a sign there were problems with the original, but it is also a sign of a superior maker. Everything can be made better, and the fact that these two companies are always making things better tells you a good deal about why they are so well respected in the gear community (their knives, that is).
The third way I evaluate QC is probably the easiest--recalls. Few companies that make the gear we are interested in have had products subject to recall, but some have. Gerber, for instance, has had many product recalls. The Instant was recalled within a year of its very high-profile launch because the button lock failed at inopportune times. The blade on their parang would break off. And there are others. The reality is that this many recalls spread out over many different designs indicates a problem with QC, and given the scope, it indicates a company-wide problem with QC. Fixed blades snapping in two is not like "my clip broke off" or "my frame lock has blade play." This indicates a serious lapse in QC and is one of the reasons I don't really bother to review Gerber gear and regularly bash the company.
Fourth, and rarest of all, is direct company input. I have been fortunate enough to know lots of folks who know way more than I do about gear production, and every once in a while I will learn about problems with OEMs or other things of the sort. It hasn't happened with a piece of gear I have reviewed, but if it does, you'll know.
One flawed version of something is not a QC issue, but it might be indicative of one. It's hard to evaluate because of my distinct lack of sample size (usually only a single piece). But in some instances, when I have had multiple pieces (like I have for a review I am working on right now), I feel comfortable saying my experience is indicative of poor QC. That is VERY rare. Lemons occur everywhere, even in custom lights and knives. Evaluating QC requires you to focus not on a single piece but on the production run as a whole, and generally that's difficult.
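To see why a sample of one tells you almost nothing, here is a rough sketch using the standard binomial formula for the chance of drawing at least one lemon; the defect rates and sample sizes are made-up numbers for illustration, not data on any real maker:

```python
# Chance of seeing at least one lemon in a small review sample,
# at different hypothetical defect rates.
# P(at least one defect in n pieces) = 1 - (1 - p)^n

def chance_of_a_lemon(defect_rate: float, sample_size: int) -> float:
    """Probability that a sample of `sample_size` pieces contains a defect."""
    return 1 - (1 - defect_rate) ** sample_size

# Hypothetical defect rates: Lego-grade, mediocre, and genuinely bad QC.
for rate in (13 / 1_000_000, 0.01, 0.05):
    for n in (1, 3, 10):  # sample sizes a reviewer might plausibly see
        print(f"defect rate {rate:.4%}, sample of {n}: "
              f"{chance_of_a_lemon(rate, n):.2%} chance of a lemon")
```

The takeaway: with one piece in hand, a lemon is consistent with almost any underlying defect rate, which is exactly why the indirect methods above (many good sources, design changes, recalls) do most of the work.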