By MMyers
Date 02-23-2011 14:16
Edited 02-23-2011 14:23
Since the two methods aren't apples to apples, you would need to take several samples, measure all of them using each method, and compare. If you did a statistical analysis on the results, I bet they'd both be within the error band. They may be violating the spec, but I doubt the difference in technique really makes any significant difference. They only have a little more than 10% fewer points but 100% more fields. Now if this were 100x and 50 points, or something similarly drastic, I'd be more inclined to say it would make a significant difference.
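For what it's worth, here's a rough sketch of the kind of comparison I mean, assuming you measure the same set of samples with both techniques. The numbers and sample count below are made up purely for illustration; the point is just that a paired statistical test tells you whether the difference between methods is real or just measurement noise.

import numpy as np
from scipy import stats

# Hypothetical readings from the same 10 samples, one per sample per technique.
# These values are invented for illustration only.
spec_method  = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7, 5.0, 5.2])
as_performed = np.array([5.0, 4.9, 5.2, 5.1, 4.8, 5.2, 5.0, 4.8, 5.1, 5.1])

# Paired t-test: is the mean difference between the two techniques significant?
t_stat, p_value = stats.ttest_rel(spec_method, as_performed)
print("t = %.3f, p = %.3f" % (t_stat, p_value))

# A p-value well above 0.05 means the two techniques agree within the noise,
# which is the "both within the error band" outcome I'd expect here.

If the test showed a real difference, that would change my answer; if not, you at least have documentation to back up whichever call you make.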
As for accuracy or confidence interval, consider the Fahrenheit and Celsius temperature scales. Fahrenheit has finer graduations of temperature, so by that logic it must be more accurate; yet the standard in everything medical (and in nearly every country outside the U.S.), a generally agreed critical field, is Celsius. Cue the "law of diminishing returns" and the phrase "sometimes enough is all you need".
All that said, I'd either hold them to the spec or run this one up the flagpole and do comparative tests of the two techniques. There is no way I'd make this call on the fly.
They are demonstrably out of compliance with your specification. If I were in charge, I would reject it. If I were bound to apply some sort of "value engineering judgment", perhaps it could be found acceptable, but only after comprehensive metallurgical, mechanical engineering, corrosion engineering, and fitness-for-purpose analysis. Even then, what is the cost, and who pays?
If it is in a Nuke, what are the legal / NRC ramifications?