Is there a way to automate WBPP so it integrates after an imaging session?

rugbyrene

Hi all,

Not sure if this is even possible, but I was wondering if there is a way for WBPP to be automated so that, at the end of an imaging session, the images I've taken can be picked up and WBPP called to integrate these lights with the pre-made master calibration files (darks, flats, etc.). I waste so much time after an imaging session with the WBPP process that it would be good if it could be automated so that an integrated image is waiting for me in the morning.

Obviously it would need to automate the Blink process (via some parameters) and then launch WBPP, calling on the files from a directory (again using pre-set integration parameters). I just feel this would save so much time if a fully integrated file was waiting for us in the morning. And if you don't like what it has done, well, you can always redo it yourself the next day.

Cheers
 
I think you have to be more specific about automation.
I can feed all my files into WBPP with the required settings and let it run; the end result is a drizzled, autocropped image.
I don't need to watch it, so I can go to bed, eat a pizza, watch a TV show, etc...

Cheers
Tom
 
Obviously it would need to automate the Blink process
Blink just displays images. There would be no purpose to running Blink in an automated workflow. WBPP includes image measurement, weighting and bad-frame rejection in its standard pipeline, which replace the functionality of a manual Blink / reject process.
 
That being said, in a recent post I concluded that rejection isn't doing so well for me. So are there some settings in WBPP I have missed?
 
So are there some settings in WBPP I have missed?
Probably not. Once you have selected and set the parameters for your subframe weighting in the appropriate section of the "lights" tab, your only control over frame rejection (as far as I am aware) is the quite awful "Minimum weight" setting in the "image integration" settings. This is so unpredictable that you might as well toss a coin. I have proposed a better design elsewhere, but we will have to see if anyone pays any attention to it.
 
I have proposed a better design elsewhere, but we will have to see if anyone pays any attention to it.
I don't see how your suggestion could solve the present problem.

There are (at least) two conditions which none of PixInsight's metrics would detect:
1. certain guiding errors ( https://pixinsight.com/forum/index.php?threads/best-reference-frame-for-registration.20059/ ) and
2. halos around bright stars due to thin clouds or haze.
In my experience, sorting out frames that are affected by one of these flaws is the only way to avoid a negative impact on the integration result.

If you know a different method, please tell us.

Bernd
 
I don't see how your suggestion could solve the present problem.
Which present problem? The OP asked about an automated WBPP workflow, and in particular about automated alternatives to manual Blink rejection; this explicitly excludes manual image examination.
I fully agree that when automated alignment / integration fails, the correct option is manual examination and analysis of the images - but this was not the question.
If images fail to align with the selected registration reference they are rejected; this rejection process is straightforward and requires no further explanation. WBPP also supports rejection on the basis of image weighting. The basic mechanism WBPP uses for managing varying image quality is image weighting generated by the SFS (SubframeSelector) process, using parameters set by the user in the "subframe weighting" section of the "lights" tab.

WBPP also includes a "Minimum weight" parameter in the "Integration parameters". If this parameter is set to zero, all images are integrated with their calculated weight; this is the setting I strongly recommend. My argument is that the design of this optional setting is really bad; I'm not saying that making it better will solve all problems, but it will certainly solve some. The operation of this parameter is not obvious; it works like this:
  • the frame weights are calculated normally;
  • they are divided by the maximum (best) weight, to give a "normalised weight";
  • frames with a normalised weight less than the "Minimum weight" parameter are rejected;
  • the default value for "Minimum weight" is 0.05, so this mechanism is enabled by default.
Mystified users have quite often come to the forum asking why all their frames have been rejected by WBPP; it has turned out that they have a single anomalous frame with a much higher weight than the others, so all the other frames fail because their normalised weight falls below the default limit.
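
To make this failure mode concrete, here is a rough Python sketch of the mechanism as described above (my illustration only, not the actual WBPP source):

def min_weight_reject(weights, minimum_weight=0.05):
    top = max(weights)                       # the single best (maximum) weight
    # keep only frames whose normalised weight reaches the threshold
    return [w for w in weights if w / top >= minimum_weight]

# Typical frames plus one frame with an anomalously high weight:
weights = [0.9, 1.0, 1.1, 0.95, 1.05, 30.0]
print(min_weight_reject(weights))   # -> [30.0]; e.g. 1.1 / 30.0 = 0.037 < 0.05

Every normal frame is rejected, and only the anomalous frame survives, which is exactly the behaviour those users were reporting.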

In general, PixInsight adopts robust design principles wherever possible, in which behaviour is not significantly influenced by a small number of anomalous ("outlier") values; this is achieved by using robust statistical estimators wherever possible. The rejection method described above depends on the value of the single maximum weight. The "maximum" operator is about as non-robust as an operator can get - it is distorted by a single outlier measurement - which is the cause of the problem described above. Replacing this design with one based on robust estimates of the weights distribution (e.g. the median; the MAD) would prevent this type of problem; for example (a code sketch follows the list):
  • the frame weights are calculated normally;
  • the median weight, m, is calculated (a location estimate);
  • the MAD (median absolute deviation from the median), s, is calculated (a scale estimate);
  • the "Minimum weight" parameter is replaces with a "likelihood" parameter, p (i.e. the estimated probablity that a frame will have a weight as low or lower than the measured weight);
  • a frame with measured weight w is rejected if P(w) < p, where P() is the cumulative distribution function of an appropriately selected distribution (probably not normal / gaussian, since weights cannot be less than zero), with location parameter m and scale parameter s.
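
A minimal Python sketch of this proposal, using a normal CDF as a stand-in for the "appropriately selected" distribution (with the caveat above that a model with non-negative support would be better):

import math
from statistics import median

def robust_reject(weights, p=0.01):
    m = median(weights)                                 # robust location estimate
    s = 1.4826 * median([abs(w - m) for w in weights])  # MAD, scaled to sigma
    # P(W <= w) under a normal model with location m and scale s:
    def cdf(w):
        return 0.5 * (1.0 + math.erf((w - m) / (s * math.sqrt(2.0))))
    return [w for w in weights if cdf(w) >= p]

# The same data as before: the single high outlier no longer drags the
# normal frames below the cut, so all six frames are kept.
weights = [0.9, 1.0, 1.1, 0.95, 1.05, 30.0]
print(robust_reject(weights))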

I'm not saying it fixes all problems with automatic rejection; I'm not saying it replaces proper manual evaluation of data. I am saying it would be a significant improvement to the highly non-robust current design of the automated workflow.
 
Which present problem?
The OP wrote:
Obviously it would need to automate the Blink process (via some parameters) and then launch WBPP, calling on the files from a directory (again using pre-set integration parameters).
but automating the Blink process is impossible. The Blink process will not assess frames; that is the task of the person who visually checks the images using the Blink tool.

This visual assessment should be done before the selection of reference frames, in order to avoid automatic selection of a bad registration reference and bad local normalization reference frames. One example is a frame with doubled stars due to a guiding issue. Such a frame is an outlier and is detectable by metrics, so this kind of bad frame would be identified if your suggestion were realized. I have not yet encountered this issue myself, and I regard it as an exceptional case, caused by bumping against the telescope or tripod.

Another example is frames with halos around bright stars due to thin clouds or haze. These bad frames are not detectable as outliers in any metric, so your suggestion will not help to reject them at all. For me, thin clouds passing through during a capture session is a situation that happens more often. My experience with this issue is: if you don't visually sort out such frames, the halos will be present in the integration result.

So the present problem is: we have no metrics that identify the above-mentioned types of bad frames.

Bernd
 
but automating the Blink process is impossible.
Indeed. The OP wants an automated solution, so Blink is not part of it. Telling him he needs to blink first is simply not addressing his request. The WBPP solution, relying on SFS weighting, will often work. The WBPP solution using "Minimum weight" is broken and needs to be repaired. As far as I know these are the best automated options. You do not need to keep repeating that manual inspection is better - we all know that.
 
That is what I am saying: because there is currently no metric that can characterize halos around bright stars, weighting within WBPP will not identify affected frames. This has nothing to do with "Minimum weight" being broken, and fixing "Minimum weight" will not improve the detection of frames with halos at all.

I didn't claim that I have a solution for the OP. I am asking whether a suitable metric could be found if one makes an effort to find one.

Bernd
 
It seems to me the OP is asking a simpler question:

Can they script PI & WBPP to run automatically after an imaging capture session, perhaps overnight so they have a (draft) integrated image to see in the morning?

I bet the answer is yes, but I've never personally investigated this.

There's even a NINA "PI live-stacking" plug-in which pretty much promises to do exactly what the OP is asking for.
 
Can they script PI & WBPP to run automatically after an imaging capture session, perhaps overnight so they have a (draft) integrated image to see in the morning?

I bet the answer is yes, but I've never personally investigated this.
I agree that this is probably what the OP is asking about.
It is worth understanding that "live stacking" and running WBPP are quite different things.
Live stacking (by its nature) is constrained to incremental operation - each new frame must be calibrated, aligned and (somehow) appropriately scaled to be added into an incremental integration. This process inevitably involves compromises.
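For illustration, a bare-bones Python sketch of what "incremental" means here - a running average updated as each calibrated, registered frame arrives. Real live stackers also handle normalisation and rejection, which is where the compromises come in, since per-frame decisions must be made before later frames exist:

import numpy as np

class LiveStack:
    """Running mean of frames, updated one frame at a time."""
    def __init__(self):
        self.total = None
        self.count = 0

    def add(self, frame):
        # frame: one calibrated, registered light as a numpy float array
        self.total = frame.astype(float) if self.total is None else self.total + frame
        self.count += 1

    def result(self):
        return self.total / self.count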
WBPP is a batch process that progresses all frames through a sequence of processing steps. To automate WBPP (so you could leave it and come back in the morning to an integrated image), you would need to make sure that you either captured and identified to WBPP all the required calibration frames before leaving the system unattended, or had a sufficiently sophisticated automated observatory system and capture application to configure and capture calibration frames unattended. The capture software would then have to capture all frames before initiating WBPP processing, and find a way to pass the captured frames into the WBPP settings. I'm fairly sure no such interface exists; the usual "live stack" solution of monitoring a folder for new files and processing them as they appear won't work, since WBPP needs to know when the last file has been written before it can continue (WBPP is not an incremental workflow).
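If someone wanted to experiment anyway, the least fragile approach I can think of is a wrapper that waits for an explicit end-of-session signal from the capture software and only then starts PixInsight. A hypothetical Python sketch follows; the sentinel file, the paths and the saved WBPP driver script are all assumptions, and the command-line options for running a script unattended should be verified against your installation's documentation:

import subprocess, time
from pathlib import Path

SESSION_DIR = Path("/data/tonight")          # hypothetical capture folder
SENTINEL = SESSION_DIR / "capture_done.txt"  # hypothetical marker written by the
                                             # capture software after the last frame
WBPP_DRIVER = Path("/scripts/run_wbpp.js")   # hypothetical script that configures
                                             # WBPP for SESSION_DIR and executes it

# WBPP must not start before the last frame is written, so poll for the marker:
while not SENTINEL.exists():
    time.sleep(60)

# Launch PixInsight to run the driver script; check that these options
# exist in your version before relying on them.
subprocess.run(["PixInsight", f"-r={WBPP_DRIVER}", "--automation-mode"], check=True)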
There's even a NINA "PI live-stacking" plug-in which pretty much promises to do exactly what the OP is asking for.
Is there? I have heard of users configuring NINA to communicate with the EZ Processing "EZ Live Stack" plug-in, but I wasn't aware of any NINA-native live stack option.
I am asking whether a suitable metric could be found if one makes an effort to find one.
This is a perfectly reasonable (and very interesting) subject for another thread.
 
I run NINA with the PI plugin. It will apply darks and flats, stack and integrate each frame as it is collected, and then display the result. It has an option to save the calibrated frames, but I never do that.

Kurt
 
I run NINA with the PI plugin. It will apply darks and flats, stack and integrate each frame as it is collected, and then display the result. It has an option to save the calibrated frames, but I never do that.

Kurt
Then in the morning you already have a decent preview of your result, right? Calibrated, aligned, selected, integrated, color combined. So you can tell whether to continue the next night or move on to another target? Seems like a nice feature.
 
I can't find any documentation / description of this NINA function anywhere. Exactly which PI plugin are you talking about?
 
Download NINA version 3 here https://nighttime-imaging.eu/download/

When you have it installed, go to the plugin icon on the bottom left and click on it to find the plugin.

You can watch the stacking in real time, image by image. NINA has really turned into a fantastic product. I was forced into using it because my mount does not run under Linux. Now, with the great plugins, I will never give it up.

Kurt
 
The NINA PI Live Stack plug-in is a great way of previewing data collection on the fly, but it in no way takes the place of culling poor frames and running WBPP. If that plug-in eventually interacts with Target Scheduler and its built-in image grading, it may get a little closer, but …
 
Yes, that is for sure, Chris. I suppose you could use it to do just the calibration; however, I have a rotator, and the plugin doesn't seem to handle the fact that I take flats for every rotation angle.

The image grading can be helpful, but I have been pulling the rejects into WBPP and using WBPP and NSG to cull the bad frames. Blinking them is a pain because you have to go filter by filter. My many thanks to Tom Palmer for his outstanding sequencer plug-in. It is the only way I do collections now.
 