Resume failed integration?

Giddy

Active member
Hi. I'm on the trial version. I was trying to stack with PI for the first time yesterday. 298 lights plus flats and bias, no darks. After 14 hours I got an out of memory error during the last step before autocrop. Is it possible to resume at that step or do I have to run the full integration all over again?

Also, my understanding is there are limited solutions to this memory issue. I only have 16GB of RAM, so I increased my Windows swap file to 150GB (on an NVMe drive). I'm not sure that will be enough... is there a way I can calculate how much memory will be required, so I don't waste another 14 hours only to have it fail again?

EDIT: Thought I should mention I'm using the "Faster with good quality" settings.
 
If you simply rerun the WBPP session, all the intermediate files should be in cache, so it should very rapidly get back to the same point. Of course, it may then fail again, but it should only take minutes, not hours. A 150GB Windows paging file should be big enough.
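If you want a rough sanity check before committing to another long run, here's a back-of-envelope sketch (based on my own assumptions of 32-bit float pixels and roughly the whole registered stack plus working buffers held at once, not on PixInsight's actual memory model):

```python
# Back-of-envelope estimate of memory needed for integration.
# Assumptions (mine, not PixInsight's documented behavior): 32-bit float pixels,
# and roughly the full stack of registered frames plus ~50% overhead for
# rejection/normalization buffers held at once.
def estimate_stack_gb(frames, width, height, channels=3, bytes_per_pixel=4, overhead=1.5):
    raw = frames * width * height * channels * bytes_per_pixel
    return raw * overhead / 1024**3

# Example: 298 debayered subs from a ~24 MP camera (6000 x 4000, RGB)
print(f"~{estimate_stack_gb(298, 6000, 4000):.0f} GB")   # on the order of 120 GB
```

By that crude estimate, 298 debayered ~24 MP subs come out somewhere around 120GB, which is why a 150GB paging file on top of 16GB of RAM should squeak through.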
 
I think it depends on where the failure occurred. If it was in the integration phase, I don't think anything is recoverable. Calibrated files will be cached, but calibration is pretty fast. Registered files should still be accessible, but registration is usually pretty fast, too. My guess is most of that 14 hours was integration, not the previous, cached steps. (Or maybe normalization... which I think a lot of people disable.)
 
Brilliant! I was hoping it worked that way but I was afraid to have it purge everything and then I'd find out later there was a trick to it. Thanks!
 
Sorry, I'm still new so I used the wrong terminology. I meant resuming the stacking process at integration, not resuming the integration step itself.

Normalization was the longest single step at almost 5 hours, and then there was almost 6 hours of integration before the memory error. The rest was 2-3 hours combined. I think registration and the LN reference were over 30 minutes each, but I don't recall the exact times.

It jumped all the way to integration, which at least means I can hope for a result by the time I get home from work.

I really hope this works. The primary reason I'm trying to use PI to stack is that I have not been able to get a good stack out of DSS or Siril without dark frames. I was told I don't need them for my camera (mirrorless), so I was trying to compare results. I got pretty good results with full calibration frames in Siril, but when I tried without darks I got a horrible result. Same in DSS. Not sure why. So I'm trying to test it out in PixInsight. I just wish it went a little faster.

I'm wondering if it would be better for me to use the faster settings without normalization just so I can get comparable results to what Siril or DSS would do. Is the integration step faster if I skip local normalization?
 
"Stacking" and "integration" are the same things. But at that stage (the last, unless you have autocropping) no intermediate files are being created, so there's no caching. Meaning if it crashes here, you'll need to start over. But you don't need to completely start over, since the calibrated images created earlier should all be intact. And yes, at least for testing purposes, I think you should probably disable local normalization and save yourself a few hours. Lots of people don't use it at all and don't seem to notice any visual difference.
 
"Stacking" and "integration" are the same things. But at that stage (the last, unless you have autocropping) no intermediate files are being created, so there's no caching. Meaning if it crashes here, you'll need to start over. But you don't need to completely start over, since the calibrated images created earlier should all be intact. And yes, at least for testing purposes, I think you should probably disable local normalization and save yourself a few hours. Lots of people don't use it at all and don't seem to notice any visual difference.
Got it, thanks for clarifying. I guess I was sort of right.

I'm at work so if it fails again I will try running without normalization. Do you know if that will shorten the integration step at all, or just reduce my time by the local normalization time?
 
The files still need to be normalized one way or another for integration. No idea if LN takes longer during integration or not (vs. ImageIntegration's built-in normalization methods). It's probably not significant...
 
Just confirmed it finished, but I won't be able to check results until tonight. It only took 3 hours for the integration step, so I think I read the time wrong on the crashed run... it must have been almost 6 minutes, not 6 hours (I had gone to bed).

Very eager to see the results!
 
Very disappointing. I have no idea what I'm doing wrong. When I stack using all calibration frames in other programs, I get a very nice file to work with. No matter what program I use, if I exclude dark frames I get a really strong circular gradient and the image is unprocessable. It's all the same data, just using or excluding dark frames.

Left is a screen-stretched PI stack with no darks, all default settings. Right is a stack from another program with darks. I want to try PI with darks, but it takes so long to stack compared to other software. It seems I can't get away from taking darks, if only to make the processing easier in the end.

bad_dark_stack.png

The results in other programs excluding darks were much, much worse. But in PI, after applying SPCC, I end up with this:

after_spcc.png


GradientCorrection (with automatic convergence) leaves a vignette (hard to see in the screenshot, but clear in PI):
gradient_corrected.png


And then EZ Soft Stretch offers me this:
soft_stretch.png


I'm stumped. :(
 
Getting a circular gradient when you don't use darks makes no sense. That circular gradient points to a problem with your flats. The color cast looks perfectly normal for a linked STF. GradientCorrection has a bit of a learning curve to get it right, but it isn't intended to fix vignetting or the similar optical problem you're seeing here. Your actual gradient is very slight, and linear, as they usually are. And I'd avoid EZ Soft Stretch at this point and simply stretch using HT. A simpler system with fewer things to go wrong.

Can you post links to your master calibration files and master light? What software are you using to acquire the data (or are you simply taking it off the camera card)? There are a lot of opportunities with a non-astronomical camera like yours to end up with poorly matched calibration frames.
 
Ok, thanks for the tips... I think I'm going to reset and start over. I must have mixed up files or screwed something up along the way. Maybe I copied the wrong flats or something. I don't want to waste anyone's time so I'll make sure I didn't make a dumb mistake and then come back here if I'm still stuck.

Tonight is a good night for imaging, so I'm going to collect brand new data but skip the darks so I can get more light subs. That was the advice I've gotten on CN. I thought I could test this with existing data, but I clearly did something wrong along the way and I'm starting to get confused, so I think starting fresh will work better for me. So many files to deal with!

I'm capturing using an ASIAIR, so the files are all FITS except the bias frames, which I captured before getting the ASIAIR a couple of weeks ago. I've been planning to take new bias frames with the ASIAIR to get FITS files. So my plan is to take 300 or so lights and 40 flats. Tomorrow I will reshoot the bias frames and then stack and hope for the best.

I will also try stacking some of my other data in PI. Something with fewer subs.

Just bought a commercial license so at least I can stack on my laptop and play games on my PC now. :)
 
I'm capturing using an ASIAIR, so the files are all FITS except the bias frames, which I captured before getting the ASIAIR a couple of weeks ago.

This could be your problem: using different capture software across lights and calibration frames can sometimes lead to big issues.
 
I didn't know that... thanks! I'll make sure not to stack mixed data from now on.
 
Getting a circular gradient when you don't use darks makes no sense.
Oh yes it does - if you don't do anything to subtract bias. Here are some synthetic images:
  • SynF: a 50% ADU synthetic flat with 1/r^2 fall-off
  • SynL0: SynF/100 - equivalent to a bias-corrected featureless light sub
  • SynL1000: SynL0 + 1000 ADU - equivalent to a featureless light sub without bias / dark subtraction
  • SynL0F: SynL0 flat-corrected with SynF; the result is perfectly flat.
  • SynL1000F: SynL1000 flat corrected with SynF; the result has radial "anti-vignetting", just like the image in post #12 above.
Bottom line: you must either do dark subtraction, or at least bias subtraction, or you can expect exactly this sort of effect.
1714736553425.png
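If you want to reproduce the effect outside PixInsight, here's a minimal numpy sketch of the same idea (the fall-off profile and pedestal are stand-in values I chose, not the exact synthetic images above):

```python
# Minimal sketch of the SynF / SynL0 / SynL1000 experiment described above,
# using numpy. Fall-off and pedestal are stand-ins; only the qualitative
# behavior matters.
import numpy as np

h, w = 512, 512
y, x = np.mgrid[0:h, 0:w]
r2 = (x - w / 2) ** 2 + (y - h / 2) ** 2
r2 = r2 / r2.max()

syn_f = 0.5 / (1.0 + r2)            # "SynF": ~50% ADU flat with radial fall-off
syn_l0 = syn_f / 100.0              # "SynL0": bias-corrected featureless light sub
pedestal = 1000.0 / 65535.0         # ~1000 ADU left in, assuming 16-bit data scaled to [0,1]
syn_l1000 = syn_l0 + pedestal       # "SynL1000": no bias/dark subtraction

flat_norm = syn_f / syn_f.mean()    # normalized master flat
syn_l0f = syn_l0 / flat_norm        # "SynL0F": perfectly flat after correction
syn_l1000f = syn_l1000 / flat_norm  # "SynL1000F": corners come out brighter than the center

center = syn_l1000f[h // 2, w // 2]
corner = syn_l1000f[0, 0]
print(f"corner/center = {corner / center:.2f}")  # > 1: radial "anti-vignetting"
```

The uncorrected pedestal gets divided by a flat that is smaller toward the corners, so the corners end up brighter than the center: the same inverse-vignetting gradient as in the screenshots above.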
 
This has been fun and frustrating at the same time. I started integrating a smaller set of data last night. Things seemed to be going swimmingly on the new install on my laptop. I had cleared my meager 500GB hard drive of all superfluous data and freed up 150GB. I changed my swap file to 50GB. It wasn't enough. I went to bed and woke up to an out of memory error. :( Upped the swap to 100GB and trying again. All the processing data is going to an external USB 3 SSD.

I am using flats and biases. For all my shots I did take darks at the end of the session. This whole experiment started with what I thought was an innocuous question on CN... is it better to take my darks at the start of the session when the ambient temp is higher, or at the end when it is lower? The answers could be summed up as "stop taking darks, with your camera they're doing more harm than good". I've just been trying to prove that out, one way or the other. That has also been tougher than I expected, probably due to user error.

PI question: I love how it will reuse the cached files after an error like this, but I've noticed that if any files failed to register, it repeats all those time-consuming steps when you rerun. For example, my data has 66 lights and 2 were rejected. When I ran it the second time, it re-registered everything (failed the same 2 again) and local normalization ran again, etc. Is there a way to have it accept the previous registration and just use the 64 it registered on the previous run?
 