Looking for improvement

Hi there,
I have been using PixInsight for roughly a month now, and it is incredible! But I have now reached a point where I am asking myself how I can improve my workflow. I am using various techniques, but I feel like I could get so much more out of my data. Below you will find previews of my two most recent images and a link to the full-size files.

Caldwell3Preview.jpg
Messier82Preview.jpg


Full Size: https://hidrive.ionos.com/share/7kycu16vus

The workflow differs between these two images, since I tried to stretch the stars separately for Caldwell 3, while I stretched everything together for Messier 82. But it usually looks like this:
- WBPP
- Light denoising, gradient removal of L, R, G and B (monochrome camera, so separate channels)
- Combine RGB + SPCC
- StarNet (Caldwell 3 only), reduce background saturation and increase galaxy saturation on the RGB (RangeMask + MedianTransform + CloneStamp to get a good mask, then Curves)
- Hyperbolic stretch (GHS) for both RGB and luminance (PixelMath with op_screen to put them back together; see the sketch right after this list)
- LocalNormalization + BlurXTerminator for the Clear (luminance) channel, create HDR if needed, denoising if needed
- ChannelCombination to put L on top of the RGB (LRGB)
- Cleaning up of imperfections
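
To make the op_screen step concrete (if I understand the operator correctly, it is just the standard screen blend written out), with placeholder view names "starless" and "stars" for the two stretched images, the PixelMath expression would be

~(~starless * ~stars)

i.e. 1 - (1 - starless)*(1 - stars), so the result approaches but never exceeds 1, unlike a plain starless + stars.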

I forgot to mention that for Messier 82 I also had H-alpha data, which I blended in using the Toolbox/CombineHAwithRGB script (with a mask applied) before creating the LRGB.

Stretching the stars separately helped a lot, but I am still not 100% happy with them. They look a lot better now for Caldwell 3 (maybe a bit too colorful ':D), but I feel they somehow lack contrast. That is what I get when using screen blending. If I just add the star image to the starless image, I blow out the highlights. So do you have any tips on how to stretch the stars so that I still keep the diffraction spikes and the smaller stars, but don't blow out the highlights in the end? Should I try a negative local factor in GeneralizedHyperbolicStretch for the stars? Any general recommendations on what I should include in the workflow? Are there any advanced techniques worth spending time on?
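
Just to illustrate what I mean by blowing out the highlights, take a pixel where the stretched starless image sits at 0.30 and the stretched star image at 0.85 (made-up numbers):

addition: 0.30 + 0.85 = 1.15 -> clipped to 1.00 (blown out)
screen: 1 - (1 - 0.30)*(1 - 0.85) = 0.895 (no clipping)

So screen avoids the clipping, but it also compresses the bright star pixels, which is probably where the lack of contrast comes from.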

Critical feedback is also appreciated: what do you like about the images, and what is too much?

Thank you very much!
 
I've had the exact same problem! I find that an initial stretch with ArcsinhStretch (with "Protect highlights" checked), combined with finer GHS stretches, yields much better star colors while also showing faint stars.

Good luck :)
 
There are a lot of ways to skin a cat. I'd never denoise or gradient correct individual RGB channels. And I rarely separate stars and treat them separately. So my workflow here would be:

-WBPP
-ChannelCombination to create RGB
-gradient correction (GC or MGC) on the RGB image and the L image
-BlurX the RGB and the L
-HT/GHS/Curves to stretch the RGB and the L
-LRGBCombine to build the LRGB
-Curves to tune the saturation
-NoiseX
-HT to reset the black point
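
To be clear, "reset the black point" here just means a linear rescale of the shadows after all the stretching. A rough PixelMath equivalent (the 0.05 is only a placeholder for whatever value you actually read off the histogram) is

max(0, ($T - 0.05) / (1 - 0.05))

though in practice I simply move the shadows slider in HistogramTransformation until the background sits where I want it.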

Where the stars are a LOT brighter than the target object, I sometimes use star removal. In that case, I first stretch the data until I'm happy with the stars, then I remove them (de-screen with StarX), then I finish stretching the starless image, then I screen the stars back in. I don't get very good results removing stars from a linear image, stretching them, and adding them back in.
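
For completeness, the de-screen and re-screen steps written out in PixelMath (the view names "withstars", "starless" and "stars" are placeholders; as far as I know StarX does the de-screen for you when you enable its unscreen option):

Extract the stars by unscreening: ~(~withstars / max(~starless, 0.00001))
Screen them back over the finished starless image: ~(~starless * ~stars)

The max() term is only there to avoid dividing by zero where the starless image is fully saturated.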
 
I am mainly doing widefield imaging with 2x Samyang 135mm lenses on DSLRs.
So I know the problems with crowded star fields and overly bright stars pretty well, but I am not sure how well my solution transfers to your images.

Basically, the only option I found that works for most of the cases I have encountered is to completely separate the stars from the object and treat them as individual images.
My workflow is as follows:
WBPP -> BXT Correct only -> MGC -> SPCC -> BlurXTerminator with minimal sharpening to help StarXTerminator -> StarXTerminator, yielding two images called "object" and "stars"
For the object: NoiseXTerminator if necessary -> MaskedStretch -> CurvesTransformation -> saturation
For the stars: HistogramTransformation -> saturation

Combine using PixelMath, maybe some slight HT in the end to get the black point right.

With the HT for the stars image, I have complete control over how many stars I want. With widefield images, I have really not found any other good solution for this problem.
At least for me, this also allows for a relatively accurate image in terms of color and object representation, with a (probably unavoidable) less accurate representation of star brightness. For reference, here are most of my current images: https://app.astrobin.com/u/Astrogerdt#gallery

Interestingly, I have had much more success removing stars from a linear image than from a stretched one, in contrast to Chris's experience.

However, one thing I have found: optimizing the preprocessing has produced much bigger quality gains for me, both in terms of S/N and in how well SXT can handle the stars in the image, than any post-processing workflow. Whenever I can minimize an issue, I do it as early as it can reasonably be done, which is usually in the preprocessing stage.

CS Gerrit
 