Mmm, just have to wade in here on the whole topic of error reporting...
Quote:
EDIT - redirecting output to a file, while a wonderful feature, defeats the purpose of the warning system.
Any diagnostic chatter should not be seen unless requested.
Yikes! A silent failure seems pretty darn useless. If the tool doesn't offer a quiet option, it may still be possible to redirect its output to a null device, but why would you want to?
Peter Brown, in "Writing Interactive Compilers and Interpreters" (highly recommended, even if it is >30 years old), points out that "the error case is the normal case" - by which he means that during program development it's normal to run into errors. Once a program is completed there should be no more errors, but once it's complete, work on it stops. So for the majority of the time a programmer spends creating a new program, errors are happening.
So what HXA tries to do is provide enough information to answer the question "why didn't what I expect to happen actually happen?" To the extent that it reduces the time necessary to figure that out, it is successful.
On sending error output to a file, I'm a big believer. Partly because one error can cause another - a cascade error, one that occurs only because the first did and not because it's a real error in itself. I want to be able to see the one that set off the cascade. Sometimes I see a way to prevent the cascade from happening at all.
But also because HXA has a lot - hundreds - of test programs, some supposed to succeed and some designed to fail. Every so often I want to run all of them, to see if any changes I've made recently cause some test result to change as well. This has been very helpful in pointing out where I've created a test for some obscure possibility and then forgotten that it can happen - but the test is still there, waiting to remind me.
The point is I'm not about to wade through the output of hundreds of tests to see what happened. Instead I have a collection of batch files and AWK scripts that go through them, trying to match up what should have happened against what actually happened. The output of those programs is what I look at. If they say something changed, I have to fix either the source (oops!) or the test (the definition of "what ought to happen" changed).
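The comparison step itself is nothing fancy. Here's a minimal sketch of the kind of thing I mean (the .exp/.out naming and the script itself are mine for illustration, not the actual harness):

Code:
# compare.awk - flag any line where a test's actual output differs
# from what was expected; exits nonzero so a batch file can notice
# usage: awk -f compare.awk test01.exp test01.out
FNR == NR { expect[FNR] = $0; nexpect = FNR; next }  # slurp expected output
$0 != expect[FNR] {                                  # now reading actual output
    printf "%s line %d: expected [%s] got [%s]\n", FILENAME, FNR, expect[FNR], $0
    fail = 1
}
END {
    if (FNR != nexpect) { printf "%s: line count changed\n", FILENAME; fail = 1 }
    exit fail
}

A batch file just loops over the tests, captures each one's output, and calls something like that; the only thing I read is what it prints.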
It's not hard to create "filters" that scan through voluminous output, picking out only the "important" stuff. It's almost exactly what AWK was designed for in the first place. Perl is probably a good pick for the job as well. By all means go ahead and ignore the unimportant stuff, but don't assume it's all unimportant.
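As a concrete (if hypothetical) example, a filter that keeps only the diagnostic lines out of a big log can be a couple of lines - the "error"/"warning" patterns here are just guesses at what counts as important for a given tool:

Code:
# errfilter.awk - pass through only lines that look like diagnostics,
# discarding the routine chatter around them
# usage: awk -f errfilter.awk bigrun.log
tolower($0) ~ /error|warning/ { print; hits++ }
END { printf "%d diagnostic line(s) found\n", hits + 0 }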