Posted by DiogenesII on 27/02/2022 10:24:38:
…
Since I wrote the last post, it occurs to me that blades were probably slightly better, in the better days of yore at least – having considered why I like to put tension on modern blades, it's probably because some of them are, well, less than perfect.
I have some 'own brand' all-hards from a major supplier; they're okay, but the cut is chattery and rippled unless the blade is stretched to the max. I wonder whether the material is as hard and stiff as old stock, or more stretchy, and whether the teeth are as accurately ground or set..
Maybe I'll try an old stock Eclipse or Starrett blade in an 'old school' frame and compare..
Old chaps like to believe that tools and materials were somehow 'better' in the past, but there's precious little evidence to support the idea! Rather the opposite: if modern tools and materials were no good, industry wouldn't be making smart phones by the billion, landing probes on Mars, erecting buildings 1km high, replacing coal with green energy, building nano-machines, or perfecting autonomous motor vehicles! Yesteryear industry did not have the techniques or understanding needed to produce many of today's high-tech products, and R&D had to work hard to get to where we are now.
Comparing old tools with new is undoubtedly interesting though. Unfortunately, individuals collecting anecdotal evidence in their workshops are wasting their time, because it's essential to eliminate observer bias. If Silly Old Duffer believes one tool is 'better' than another, he will subconsciously favour it. Observer bias is difficult to eliminate, even when the observer is determined to prevent it!
The best way to compare blades is with a machine that measures over a long period of time how much work the motor does driving a saw through a given material under controlled conditions.
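As a rough sketch of what such a machine would log: if it samples motor power at fixed intervals, the work done per cut is just power integrated over time. Everything here (sample values, function name) is invented for illustration; a real rig would subtract the no-load power of the motor first.

```python
def work_done(power_samples, dt):
    """Approximate work (joules) from motor power samples (watts)
    taken every dt seconds, using the trapezoidal rule."""
    total = 0.0
    for p0, p1 in zip(power_samples, power_samples[1:]):
        total += 0.5 * (p0 + p1) * dt
    return total

# Example: one cut where power ramps up, holds, then drops off,
# sampled once per second (figures made up).
samples = [0.0, 40.0, 60.0, 60.0, 60.0, 30.0, 0.0]
print(work_done(samples, 1.0))  # → 250.0 joules for this cut
```

A blade that consistently needs fewer joules to get through the same test bar is doing its job better, and the machine doesn't care who made it.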
Ideally 4 people who only know what they need to know are involved:
- Number 1 selects a large number of blades and relabels them. Only he knows who made which blade and how old they are. There must be several examples of each type of blade, which are sent to Number 2..
- Number 2 tests the blades with a machine. He has no knowledge of which blade is which, and the job should be done by someone with no interest in or understanding of hacksaw blades. The machine measures and records the amount of work done by each blade when cutting through a large sample of the same material.
- Number 3 receives the machine's anonymised results and ranks them. He does not know which blade is which and can only use the evidence provided by the machine – he scores tables of numbers comparing hundreds of blades.
- The ranking is sent back to Number 1 who reattaches the original identities. Only then can performance patterns be identified. And whatever wins, the evidence is good.
- Number 4 checks the methodology and process I've outlined. Just as people are biased, so are badly designed experiments. Everything about the experiment, method, results and conclusions should be made public so that mistakes can be spotted by others.
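The bookkeeping for Numbers 1 and 3 can be sketched in a few lines. All the names and figures below are invented for illustration; the point is that the ranking step never sees an identity, only a code.

```python
import random

def relabel(blades):
    """Number 1: give each blade an anonymous code, keeping the
    secret key that maps codes back to real identities."""
    codes = list(range(1, len(blades) + 1))
    random.shuffle(codes)
    key = {f"BLADE-{c:03d}": b for c, b in zip(codes, blades)}
    return list(key), key

def rank(results):
    """Number 3: order anonymised codes by work done per cut
    (fewer joules = less effort = better), identities unknown."""
    return sorted(results, key=results.get)

def reveal(ranking, key):
    """Number 1 again: reattach the original identities,
    but only after the ranking is fixed."""
    return [key[code] for code in ranking]

blades = ["Old-stock A", "Old-stock B", "Modern C", "Modern D"]
codes, key = relabel(blades)

# Number 2's machine would fill this in; figures invented here.
results = {code: random.uniform(200, 400) for code in codes}

print(reveal(rank(results), key))  # best to worst, identities restored
```

Nothing clever in the code itself, but it makes the point: bias can't creep into the ranking if the ranker literally has nothing to be biased about.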
Human assessments are often wildly inaccurate. Studies of wine tasters have shown them to be highly biased and inaccurate when the taster believes he knows what he's tasting – such as scoring exactly the same wine, served in two different glasses, both very highly and very poorly! Same problem comparing HiFi equipment, violins, shares, brand-name products and much else. In medicine, it's been shown that a percentage of test subjects report a positive effect from a placebo even when told it was a placebo.
Basically, comparative tests have to be carefully designed because everyone is daft! People who believe in Snake Oil will find ways of proving it's good stuff. Even clever chaps get things wrong. It's hard to get this stuff right! Nonetheless, engineers should work from careful evidence and avoid boosting unverified personal preferences.
Dave