I don't see how your suggestion solves the problem at hand.
Which problem? The OP asked about an automated WBPP workflow, and in particular about automated alternatives to manual Blink rejection; this explicitly excludes manual image examination.
I fully agree that when automated alignment / integration fails, the correct option is manual examination and analysis of the images - but that was not the question.
If images fail to align with the selected registration reference they are rejected; this rejection process is straightforward and requires no further explanation. WBPP also supports rejection on the basis of image weighting: its basic mechanism for managing varying image quality is the image weighting generated by the SFS process, using parameters set by the user in the "Subframe Weighting" section of the "Lights" tab.
WBPP also includes a "Minimum weight" parameter in the "Integration parameters". If this parameter is set to zero, all images are integrated with their calculated weight;
this is the setting I strongly recommend. My argument is that the design of this optional setting is
really bad; I'm not saying that making it better will solve all problems, but it would certainly solve some. The operation of this parameter is not obvious; it works like this:
- the frame weights are calculated normally;
- they are divided by the maximum (best) weight, to give a "normalised weight";
- frames with a normalised weight less than the "Minimum weight" parameter are rejected;
- the default value for "Minimum weight" is 0.05, so this mechanism is enabled by default.
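The steps above can be sketched as follows. This is illustrative Python only, not WBPP's actual code, and the frame weights in the example are invented to show what a single anomalous high-weight frame does:

```python
def reject_by_min_weight(weights, min_weight=0.05):
    """WBPP-style rejection sketch: normalise by the maximum weight,
    then reject frames whose normalised weight falls below min_weight."""
    w_max = max(weights)              # non-robust: one outlier sets the scale
    kept, rejected = [], []
    for w in weights:
        if w / w_max < min_weight:    # normalised weight below the threshold
            rejected.append(w)
        else:
            kept.append(w)
    return kept, rejected

# Four comparable frames plus one anomalous frame with a huge weight:
kept, rejected = reject_by_min_weight([1.0, 1.1, 0.9, 1.05, 100.0])
# Every normal frame has normalised weight ~0.01 < 0.05, so only the
# outlier survives: kept == [100.0], the other four are rejected.
```

This is exactly the failure mode described below: the outlier drags the normalisation scale up until every sound frame falls under the default limit.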
Mystified users have quite often come to the forum asking why all their frames have been rejected by WBPP; it has turned out that they have a single anomalous frame
with a much higher weight than the others, so all the other frames are rejected because their normalised weight falls below the default limit.
In general, PixInsight adopts robust design principles wherever possible: behaviour should not be significantly influenced by a small number of anomalous ("outlier") values, which is achieved by using
robust statistical estimators. The rejection method defined above depends on the value of the single maximum weight. The "maximum" operator is about as non-robust as an operator can be - it is distorted by a single outlier measurement - which is exactly the cause of the problem described above. Replacing this design with one based on robust estimates of the weight distribution (e.g. the median and the MAD) would prevent this type of problem; for example:
- the frame weights are calculated normally;
- the median weight, m, is calculated (a location estimate);
- the MAD (median absolute deviation from the median), s, is calculated (a scale estimate);
- the "Minimum weight" parameter is replaces with a "likelihood" parameter, p (i.e. the estimated probablity that a frame will have a weight as low or lower than the measured weight);
- a frame with measured weight w is rejected if P(w) < p, where P() is the cumulative distribution function of an appropriately selected function (probably not normal / gaussian - since weights cannot be less than zero), with location parameter m and scale parameter s.
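As an illustration only, here is one possible instantiation of this scheme. The choice of a log-normal model (normal in log-weight, which guarantees non-negative weights), the 1.4826 MAD-to-sigma scaling, and the p = 0.01 default are my assumptions; the proposal itself only requires some appropriately selected non-negative distribution:

```python
import math
from statistics import median

def robust_reject(weights, p=0.01):
    """Sketch of the proposed robust rejection: reject a frame only if
    its weight is implausibly low under a robust model of the weight
    distribution. Model choices here are illustrative assumptions."""
    logs = [math.log(w) for w in weights]
    m = median(logs)                                  # robust location estimate
    s = 1.4826 * median(abs(x - m) for x in logs)     # MAD, scaled to sigma
    s = max(s, 1e-12)                                 # guard against zero scale
    def cdf(x):                                       # normal CDF in log-weight
        return 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
    return [w for w in weights if cdf(math.log(w)) >= p]

# The same data that defeats the current design: the outlier no longer
# sets the scale, so all four normal frames survive (as does the outlier,
# since only implausibly LOW weights are rejected).
kept = robust_reject([1.0, 1.1, 0.9, 1.05, 100.0])   # keeps all five frames
```

Note that a robust scheme deliberately does not reject the high-weight outlier itself; whether anomalously high weights should also be flagged is a separate design question.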
I'm not saying it fixes all problems with automatic rejection; I'm not saying it replaces proper manual evaluation of data. I am saying it would be a significant improvement to the highly non-robust current design of the automated workflow.