Probably because most individuals, myself included, haven't done the engineering, analysis, and testing necessary to understand enough to make the right kinds of changes. Most of the work I have seen is "seat of the pants". That's not to say there hasn't been legitimate hard work involved, and many flashes of insight. A lot of members have tried to make better-performing moderators. Sometimes you get lucky, but most of the time this approach is incremental at best, and a step backwards at worst.
There has to be a better way to figure this out. It's a tough, multi-faceted problem that depends on a lot of variables. To sort it all out would require a large time investment, a lot of knowledge across multiple disciplines, and quite a bit of fabrication and testing.
But, yes, I agree, there's a lot of hucksterism on forums, especially around moderators, and it gets tiresome. It would be good to do reasonable tests on them. There have been many attempts at this, some far better than others. To do it correctly is hard work.
At the moment, short of a full MIL-STD test, the best attempt (that I know of) has been by @OldSpook. He's improved his testing with some suggestions from this subforum. One could argue that his test and the MIL-STD test have different objectives, but at least his test is documented, methodical, and repeatable, with equipment that is accessible to many. I applaud his efforts in this area to get meaningful, relative, quantitative measurement results. This, in my opinion, is a reasonable way to cut through the marketing BS and make fair comparisons of moderators on a common platform. Then we AG owners can make more informed decisions on how to part with our hard-earned cash.
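To illustrate what a relative comparison on a common platform might look like in practice (this is only a minimal sketch, not @OldSpook's actual procedure, and all numbers, names, and the simple dB averaging are made-up assumptions for illustration):

```python
# Hypothetical sketch: rank moderators by average peak SPL reduction relative
# to a bare-muzzle baseline measured on the same airgun, same setup.
# NOTE: all values below are placeholders, and arithmetic averaging of dB
# readings is a rough simplification used here only for relative comparison.

from statistics import mean, stdev

# Peak sound pressure levels (dB) per shot string.
baseline_db = [112.4, 112.9, 112.1, 112.6, 112.3]        # bare muzzle (placeholder)
moderators = {
    "Moderator A": [98.2, 98.7, 97.9, 98.4, 98.1],        # placeholder data
    "Moderator B": [101.3, 101.0, 101.6, 101.2, 101.4],   # placeholder data
}

baseline_avg = mean(baseline_db)

for name, shots in moderators.items():
    reduction = baseline_avg - mean(shots)   # average reduction vs. bare muzzle
    spread = stdev(shots)                    # shot-to-shot consistency
    print(f"{name}: {reduction:.1f} dB average reduction "
          f"(shot-to-shot stdev {spread:.1f} dB)")
```

The point isn't the exact math; it's that measuring every moderator against the same baseline, on the same gun, with the spread reported alongside the average, is what makes the numbers comparable at all.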
Sure, different moderators perform differently on different AGs. Why is that? It would be useful to know why, because then maybe we can make better and more robust designs. If one doesn't fundamentally understand the problem, it's likely the solution will be elusive... My two cents. Peace.