A reader wrote in to the automation list (http://www.control.com) and asked why some vendors specify accuracy in percent of full scale when percent of reading is so obviously better. Another reader wrote back and explained that it is done to make the first instrument look better, since many people don't look past the percent sign. Thus, 1% is 1%. Of course, we know that is not true.

As David W. Spitzer and I have repeatedly pointed out in our "Consumer Guide to..." series, accuracy is one of the most often misused specifications in the process automation market. In our latest book, "The Consumer Guide to Non-Contact Level Gauges," we point out how hard it is to compare apples to apples when determining what's real in level gauge performance. There are at least seven different ways to specify the accuracy of a non-contact level gauge, and we saw them all.

Of course, this kind of specsmanship only sows more FUD (Fear, Uncertainty, Doubt) in the minds of the users of these devices. I've railed for years against specsmanship...everybody is probably tired of hearing me rant about it. But the fact is, it isn't changing, and it produces a cognitive dissonance that is not good for the profession. But you know, if we users don't step up, hold the vendors accountable for spreading FUD and vaporware, and make it cost them to do it, we surely deserve what we're getting.
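Why "1% is 1%" is not true can be shown with a quick calculation. This sketch assumes a hypothetical gauge with a 100-inch span; the numbers are mine, not from the article or the book:

```python
# Compare a 1% full-scale accuracy spec against a 1% of-reading spec.
# A full-scale spec is a fixed error band; an of-reading spec shrinks
# with the measurement, so the two diverge badly at low readings.

FULL_SCALE = 100.0  # hypothetical gauge span, 0-100 inches (my assumption)

def error_full_scale(reading, spec_pct=1.0):
    """Absolute error band for a percent-of-full-scale spec (constant)."""
    return FULL_SCALE * spec_pct / 100.0

def error_of_reading(reading, spec_pct=1.0):
    """Absolute error band for a percent-of-reading spec (scales with reading)."""
    return reading * spec_pct / 100.0

for reading in (100.0, 50.0, 10.0):
    fs = error_full_scale(reading)
    rdg = error_of_reading(reading)
    # Express the full-scale band as a percentage of the actual reading
    fs_as_pct_of_reading = fs / reading * 100.0
    print(f"reading {reading:5.1f}: +/-{fs:.2f} (FS spec, i.e. "
          f"{fs_as_pct_of_reading:.1f}% of reading) vs +/-{rdg:.2f} (reading spec)")
```

At full scale the two specs look identical, but at a 10-inch reading the "1% full scale" instrument is really only accurate to 10% of reading, while the "1% of reading" instrument still delivers 1%. That is the trick hiding behind the percent sign.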