Removing Canon Banding with PJSR

Simon,

store the script somewhere in the file system (in my case, I store them separately from the PI scripts, in C:/PCLGeorg/scripts, so they don't get deleted with the next update). Then use the menu entry Script/Feature Scripts... to add this directory. The script is then available in the menu entry Script/Utilities.

Georg
 
Hi Georg,

Thanks for the pointer. I copied the script into Notepad and then just saved it with a .js extension and it seemed to work just fine. I tried it on one of my Canon images and it cleared up the banding very effectively.

Many thanks Georg.

Regards
          Simon
 
Georg, it's fantastic! Look at the bias. :surprised:
Thank you. I must recalibrate my whole archive. So, how can I apply the script to a group of images (to an ImageContainer)?
Best regards,
Nikolay.
 


Hi Georg, (and others)

If you apply your 'banding removal' to ALL of your subs (lights, darks, flats, flatdarks, biasoffsets etc.) is there not a possibility that you end up with 'cumulative noise' ?

In other words, each time you apply the 'de-banding' algorithm, you are introducing a signal value that has been 'interpolated' as opposed to having been 'acquired', and interpolation is always a 'best guess' process, and the definition of 'noise' can be taken to be 'a signal value that was obtained OTHER than by measurement'.

I am curious simply because, as is the case with my deBayering trials, I only want to deBayer 'after' I have calibrated. If I deBayered my full sub-frame dataset, I feel I would have introduced 'noise' too early in the post-processing stage. In fact, if I could align and stack my calibrated light frames, and then only deBayer my 'final' Light, that would be ideal. I would only have deBayered 'once' - but I know that I cannot do that because, by then, I would have lost the association between the deBayer CFA grid and the actual image.

But, in your case, it looks as if you could wait until you have the final calibrated light - and only remove the banding at that stage. Would this be the desired workflow?

Cheers,
 
Hi Sander,

But, in your case, it looks as if you could wait until you have the final calibrated light - and only remove the banding at that stage. Would this be the desired workflow?

In fact, so far I have applied the script only to images already calibrated and stacked with DSS (complete with flats, darks, bias frames). The peculiar thing about this Canon banding issue is that it cannot be completely eliminated by calibration, and the dark bands appear in different regions of each image. Some attribute it to read noise, some believe it is electromagnetic interference, some have yet other ideas. That's why I need this script: to remove the banding that remains after calibration and stacking.
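
For readers who have not seen the script: the core idea can be sketched in a few lines of PJSR. This is not the actual script, just a simplified illustration (it assumes Image.median() and the same selection/apply pattern that appears later in this thread) that shifts every pixel row to the median of its channel:

Code:
#include <pjsr/ImageOp.jsh>

// Simplified banding removal: add a per-row offset so that the median
// of every pixel row matches the median of the whole channel.
function removeBandingSketch( image )
{
   var lineRect = new Rect( image.width, 1 );
   for ( var chan = 0; chan < image.numberOfChannels; ++chan )
   {
      image.resetSelections();
      image.selectedChannel = chan;
      var channelMedian = image.median(); // median over the whole channel
      for ( var row = 0; row < image.height; ++row )
      {
         lineRect.moveTo( 0, row );
         image.selectedRect = lineRect;
         var rowMedian = image.median(); // median over this row only
         image.apply( channelMedian - rowMedian, ImageOp_Add );
      }
   }
   image.resetSelections();
}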

I was surprised by Nikolay's finding that it works nicely on bias frames as well, but, like you, I am not sure that it should be applied to non-calibrated frames.

Georg

 
Hi Niall and Georg,

I agree... it is not at all clear that you should remove the banding from the uncalibrated images, because by doing so you are effectively adding noise to a measured signal by modifying it.

However....I pose the following as a question...because I don't know the answer:

The banding seems to be completely random, i.e. if you look at two frames (lights, darks, flats, whatever) the banding is always there and it is always in different positions. The point of taking a dark or a flat or a bias frame is to take a measurement of the dark signal, or the flat signal or the bias signal. We try to take as many as possible to reduce the noise whilst maintaining the signal. And that obviously works just fine for most of the signals and noise sources that we encounter.

However, the banding seems to be quite structured (on a line by line basis) and completely random in its location. Obviously, taking lots of lights, darks, bias frames etc. should eventually blur the banding out. But because of the banding's structure and dominant size it takes a huge number of frames to do that... and we all end up with banding still visible (albeit reduced) in our stacked, calibrated images.
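
As a back-of-the-envelope check (assuming, and this is my assumption, that the row offsets are independent from frame to frame), averaging $N$ frames only beats the banding down as

$$\sigma_{\mathrm{stack}} = \frac{\sigma_{\mathrm{band}}}{\sqrt{N}},$$

so reducing the visible banding tenfold already takes on the order of $N \approx 100$ frames.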

So I wonder if there is a balance to be struck here between two methods. The first is to apply the banding removal to each sub, each flat, bias and dark frame. This will add noise by virtue of very subtle modifications to the signal of each frame...but reduces the noise due to the structured but random banding. The second route is to leave all the subs, darks, bias and flats in their pure pristine form and remove the banding after calibration and stacking.

I don't know mathematically which one would give the best results, and I guess it would actually be quite a big problem to work it out rigorously.

Anyone know the answer?

Cheers
        Simon
 
Simon,

I am not fluent in the mathematics of noise calculation. But some time ago I did an experiment with a ridiculous number of shots: 100 each of bias, flats, darks and lights (which clearly is not feasible for normal sessions). To my surprise, the banding was still there in the calibrated images. There is something in there that is just not fixed by calibration. Maybe the temperature fluctuations that unavoidably happen over the night? Maybe some variation in the movement of the shutter?

That's when I started to think about an algorithmic approach.

Georg
 
Georg,

I am wondering why you need to involve BiasOffset frames in your data collection?

Is it not the case with a DSLR (un-modified) that, because you cannot control the CCD temperature, you really have to take Darks at, more or less, the same time as you take Lights, and that - because of this temperature limitation - you have to take Darks using the same exposure time as used for your Lights?

And, in the previous statement, you can obviously substitute Flats for Lights, and FlatDarks for Darks.

Which, in my mind, leaves no requirement for BiasOffset frames at all. (I have always considered BiasOffset frames as only being needed to 're-scale' longer-exposure Darks to match shorter-exposure Lights. Am I missing something ?)

Is it not the case that the BiasOffset component is present, in statistically 'equal' amounts, in both a Light and a Dark, and is therefore 'eliminated' when a Dark is subtracted from a Light (given that temp and exposure time are the same for both Light and Dark)?

But that then makes me wonder: if the Canon 'banding' is present in the BiasOffset subs, is the 'noise' purely a function of the 'readout' process? And if that were the case, and there is no repeatable 'pattern' to the noise - that is, the noise IS truly random - should it then be eliminated as 'early' in the process as possible?

Either that, or move up here to Scotland, where our clouds will successfully filter out all sorts of CCD noise ^-^

Cheers,
 
Hi Niall,

the clouds should be usable as filters here in Munich as well, especially in winter. Can you give me details of this Scottish filter technique ?  ;)

Have a look at http://deepskystacker.free.fr/english/theory.htm (especially the bottom). You need some type of bias to get rid of the pedestal in the flats that are divided into the end result. There are different ways to arrive at this bias. Also, I found that my images are often slightly better when I allow DSS to do dark optimization (even if the darks were taken on the same evening). And dark optimization requires bias again.
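
For anyone wondering why dark optimization needs a bias, here is my loose reading of the idea (the notation is mine, not DSS's): the dark is modelled as a bias pedestal $B$ plus a thermal term, and only the thermal term should be scaled by the optimization factor $k$:

$$D_{\mathrm{opt}} = B + k\,(D - B)$$

Without a separate bias there is no way to isolate and scale the thermal part alone.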

Georg
 
OK,

As I understood things, when you apply your MasterDark to each of your Lights, using 'SUBTRACT' then you eliminate the 'common mode' BiasOffset signal. The same goes for the Flats and FlatDarks.

This gives you a new set of part-calibrated Lights, and a fully calibrated MasterFlat, neither of which contain any 'bias' component, thus allowing you to 'divide' the MasterFlat into every part-calibrated Light - giving you a final set of FULLY calibrated Lights, ready for alignment and stacking.
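
In symbols, the arithmetic I have in mind (writing $B$ for the bias, $D_t$ for the thermal dark signal and $S$ for the sky signal, and assuming matched temperature and exposure time):

$$L - D = (B + D_t + S) - (B + D_t) = S$$

The common-mode bias cancels in the subtraction, and the same holds for Flat minus FlatDark.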

So, I am still not clear why there is a need for BiasOffset frames. Obviously, when I get home I will have to re-read the DSS help file, and that from Nebulosity as well - and I will have to go through the appropriate chapter of my 'Bible', the HAIP.

As for using the 'Scottish Cloud Filter' method - it is very easy. Irrespective of day or night, and even ignoring your ownership of a telescope, Google for an image of your desired target, apply a random 0 to 5 degree rotation, a Y-flip, crop to suit, apply Gaussian Noise to suit your actual imager, then rescale to an appropriate image size. If a guilty conscience occurs, consider buying property in New Mexico - et voilà !!

Cheers,
 
Hi,

last night I went to bed, and when I was almost asleep I came up with a possible solution to this problem. :)

My idea is to calculate the average value of each pixel row. Then you subtract, from this one-dimensional function, the smaller wavelet layers, which will be representative of the banding. :)

As we cannot make a one-dimensional wavelet transform, we can generate an image from this calculated column of row averages. This image would be one with "cloned" columns. Thus, we can make a two-dimensional wavelet transform to extract the banding.

My intuition tells me that this method will have some problems. I can see some solutions a priori... but I need to see how it works. I think it can be a good starting point for the moment.
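
To make the first steps concrete, here is a rough sketch (hypothetical code; the name clonedRowAverageImage is mine, and the wavelet separation itself is left to an existing tool):

Code:
#include <pjsr/SampleType.jsh>
#include <pjsr/ImageOp.jsh>

// Build an image with "cloned" columns: every pixel of row y holds the
// average of row y of the source. A 2D wavelet transform of this image
// should then isolate the banding in its small-scale layers.
function clonedRowAverageImage( sourceImage )
{
   var result = new Image( sourceImage.width, sourceImage.height,
                           sourceImage.numberOfChannels, sourceImage.colorSpace,
                           32, SampleType_Real );
   var lineRect = new Rect( sourceImage.width, 1 );
   for ( var chan = 0; chan < sourceImage.numberOfChannels; ++chan )
   {
      sourceImage.resetSelections();
      result.resetSelections();
      sourceImage.selectedChannel = chan;
      result.selectedChannel = chan;
      for ( var row = 0; row < sourceImage.height; ++row )
      {
         lineRect.moveTo( 0, row );
         sourceImage.selectedRect = lineRect;
         var rowAverage = sourceImage.mean(); // average of this row
         result.selectedRect = lineRect;
         result.apply( rowAverage, ImageOp_Mov ); // fill the row with it
      }
   }
   sourceImage.resetSelections();
   result.resetSelections();
   return result;
}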


What do you think?
Regards,
Vicent.
 
Vicent,

sounds interesting. But I am not sure I understand what the advantage over the currently implemented method would be. Can you explain a bit?

Georg
 
Juan,

can you share the sources for the Scrollbox script that you used in http://pixinsight.com/forum/index.php?topic=1159.msg5786#msg5786 ? I did not find an example in the PI distribution.

Georg
 
Sure thing:

Code:
#include <pjsr/Sizer.jsh>

function MyTabPageControl( parent )
{
   this.__base__ = ScrollBox;
   if ( parent )
      this.__base__( parent );
   else
      this.__base__();

   this.bmp = TranslucentPlanets( planetsData );

   this.autoScroll = true;
   this.tracking = true;

   this.initScrollBars = function()
   {
      this.pageWidth = this.bmp.width;
      this.pageHeight = this.bmp.height;
      this.setHorizontalScrollRange( 0, Math.max( 0, this.bmp.width - this.viewport.width ) );
      this.setVerticalScrollRange( 0, Math.max( 0, this.bmp.height - this.viewport.height ) );
      this.viewport.update();
   };

   this.viewport.onResize = function()
   {
      this.parent.initScrollBars();
   };

   this.onHorizontalScrollPosUpdated = function( x )
   {
      this.viewport.update();
   };

   this.onVerticalScrollPosUpdated = function( y )
   {
      this.viewport.update();
   };

   this.viewport.onPaint = function( x0, y0, x1, y1 )
   {
      var g = new Graphics( this );
      g.fillRect( x0, y0, x1, y1, new Brush( 0xff000000 ) );
      g.drawBitmap( this.parent.scrollPosition.symmetric(), this.parent.bmp );
      g.end();
   };

   this.initScrollBars();
}

MyTabPageControl.prototype = new ScrollBox;

function MyTabbedDialog()
{
   this.__base__ = Dialog;
   this.__base__();

   this.pages = new Array;
   this.pages.push( new MyTabPageControl( this ) );
   this.pages.push( new MyTabPageControl( this ) );
   this.pages.push( new MyTabPageControl( this ) );
   this.pages.push( new MyTabPageControl( this ) );

   this.tabs = new TabBox( this );
   this.tabs.setMinSize( 400, 400 );
   this.tabs.addPage( this.pages[0], "First" );
   this.tabs.addPage( this.pages[1], "Second" );
   this.tabs.addPage( this.pages[2], "Third" );
   this.tabs.addPage( this.pages[3], "Fourth" );

   this.okButton = new PushButton( this );
   this.okButton.text = "OK";
   this.okButton.onClick = function()
   {
      this.dialog.ok();
   }

   // Put the OK button into a horizontal row, right-aligned via addStretch()
   this.buttons = new HorizontalSizer;
   this.buttons.addStretch();
   this.buttons.add( this.okButton );

   // Setup the dialog layout.
   this.sizer = new VerticalSizer;
   this.sizer.margin = 6;
   this.sizer.spacing = 6;
   this.sizer.add( this.tabs );
   this.sizer.add( this.buttons );

   // Set the dialog title and geometry
   this.windowTitle = "ScrollBox Test Script";
   this.adjustToContents();
   //this.setFixedSize(); // don't allow the user to resize this dialog
}

MyTabbedDialog.prototype = new Dialog;

/**
 * The TranslucentPlanetsData object defines functional parameters for the
 * TranslucentPlanets routine.
 */
function TranslucentPlanetsData()
{
   this.size = 800;                   // Size in pixels of the generated image
   this.maxRadius = 60;               // Maximum planet radius
   this.numberOfPlanets = 120;        // Number of translucent planets
   this.networkFrequency = 25;        // Frequency of network lines
   this.skyTopColor = 0xff000000;     // Top background color (solid black by default)
   this.skyBottomColor = 0xff000050;  // Bottom background color (dark blue by default)
   this.networkColor = 0xffff8000;    // Network color (solid orange by default)
   this.networkBkgColor = 0xff000000; // Network background color (solid black by default)
   this.planetTransparency = 0x80;    // Alpha value of all random planet colors
}

// Global TranslucentPlanets parameters.
var planetsData = new TranslucentPlanetsData;

/**
 * Renders a TranslucentPlanets scene as a newly created bitmap.
 */
function TranslucentPlanets( data )
{
   function ARGBColor( r, g, b )
   {
      return (data.planetTransparency << 24) | (r << 16) | (g << 8) | b;
   }

   // Working bitmap
   var bmp = new Bitmap( data.size, data.size );

   // Create a graphics context to draw on our working bitmap
   var g = new Graphics( bmp );

   // We want high-quality antialiased graphics
   g.antialiasing = true;

   // Fill the background with a linear gradient
   var lg = new LinearGradientBrush( new Point( 0 ), new Point( bmp.height ),
                                     [[0, data.skyTopColor], [1, data.skyBottomColor]] );
   g.fillRect( bmp.bounds, lg );

   // Draw random circles
   for ( var i = 0; i < data.numberOfPlanets; ++i )
   {
      // Random colors in the range [0,255]
      var red = Math.round( 255*Math.random() );
      var green = Math.round( 255*Math.random() );
      var blue = Math.round( 255*Math.random() );

      // Avoid too dark circles
      if ( red < 24 && green < 24 && blue < 24 )
      {
         --i;
         continue;
      }

      // 32-bit AARRGGBB color values
      var color1 = ARGBColor( red, green, blue );
      var color2 = ARGBColor( red >> 1, green >> 1, blue >> 1 );

      // Random center and radius
      var center = new Point( data.size*Math.random(), data.size*Math.random() );
      var radius = data.maxRadius*Math.random();

      // Define working objects
      g.pen = new Pen( color2 );
      g.brush = new RadialGradientBrush( center, radius, center, [[0, color1], [1, color2]] );

      // Draw this planet
      g.drawCircle( center, radius );
   }

   // Erase the network region by drawing a dense network
   g.antialiasing = false;
   g.pen = new Pen( data.networkBkgColor );
   for ( var i = 0; i < data.size; ++i )
      g.drawLine( i-1, data.size, -1, i+1 );

   // Generate the network
   g.antialiasing = true;
   g.pen = new Pen( data.networkColor );
   for ( var i = 0; i < data.size; i += data.networkFrequency )
      g.drawLine( i, data.size-1, 0, i );
   g.drawLine( data.size-1, data.size-1, 0, data.size-1 );

   // End painting
   g.end();

   return bmp;
}

var dlg = new MyTabbedDialog;
dlg.execute();

Happy coding! ;)
 
georg.viehoever said:
Vicent,

sounds interesting. But I am not sure I understand what the advantage over the currently implemented method would be. Can you explain a bit?

Georg

Hi,

it's not a matter of advantages... it's simply, perhaps, a different way to solve the same problem.


Vicent.
 
Juan,

I am working on converting the script to work with ScrollBox based previews. However, I cannot get the script working properly. Something fails in applying corrective factors to an image. The following code has been reduced to the essentials, just adding 1.0 to all channels. However, as can be seen in the attached screenshot, only the red channel is changed, and by no means all red pixels, as I would expect from the code. Can you have a look to make sure this is not a PI bug, and confirm that it is a bug in my head ;) ?

Thanks,
Georg

Code:
function main(){
      var window = ImageWindow.activeWindow;
      var targetView=window.currentView;
      var targetImage=targetView.image;
      targetView.beginProcess();
      var resultImage=new Image( targetImage.width, targetImage.height,
                                 targetImage.numberOfChannels, targetImage.colorSpace,
                                 (targetImage.bitsPerSample < 32) ? 32 : 64, SampleType_Real );
      targetImage.resetSelections();
      resultImage.resetSelections();
      resultImage.assign(targetImage);
      var iResultHeight=resultImage.height;
      var lineRect=new Rect(resultImage.width,1);
      resultImage.resetSelections();
      // for each channel
      for (var chan=0; chan<resultImage.numberOfChannels;++chan){
            resultImage.selectedChannel=chan;
            // and each row
            for (var row=0; row<iResultHeight;++row) {
               lineRect.moveTo(0,row);
               resultImage.selectedRect=lineRect;
               resultImage.apply(1.0,ImageOp_Add);
            }  //for row
         }  //for channel
      resultImage.resetSelections();
      targetImage.resetSelections();
      targetImage.assign( resultImage );
      // end transaction
      targetView.endProcess();
}

main();
 

Attachment: pichangepixelssmall.jpg
Hi Georg,

Your code snippet works fine (provided you #include <pjsr/SampleType.jsh> and <pjsr/ImageOp.jsh>).

Bear in mind that all real images are naturally bounded to the [0,1] range in PixInsight. A real image can indeed have values outside [0,1], but then it won't be correctly represented on the screen. The screen rendering engine expects all real images bounded to [0,1], and translates [0,1] to [0,255] to generate screen bitmaps. In other words, real values outside [0,1] are accepted but cause overflow and hence invalid screen renditions.

In addition, the PJSR uses the [0,1] virtual range for all pixel sample types. This allows you to work in a homogeneous way that is independent of data types. The PCL/C++ framework uses templates to manage different data types uniformly, but this is not possible in JavaScript, hence the need for a virtual representation.

Since you are adding 1.0, you are surely exceeding the [0,1] range. You just need to call Image.rescale() at the end of your process (before representing your image into a Bitmap) to rescale everything to [0,1].
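
In your snippet that would be a single extra line just before the final assign (same variable names as in your code; where exactly you rescale is up to you, as long as it happens before the screen rendition):

Code:
resultImage.rescale(); // map the current [min,max] back into [0,1]
targetImage.assign( resultImage );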

Does this make sense? :)
 
Juan,

I am afraid it does not make sense to me. I zoomed into the lower left of the image. If you compare the left, unprocessed image to the right one (see the new attachment), you see that most pixels did not change their value at all. In many cases, the R channel did not change at all. And I could not find a case where G or B changed at all. I don't see how this matches what you expect. It appears to me that if R=1.0, then operations on G or B don't happen.

Georg
 

Attachment: pichangepixelssmall2.jpg
Hi Georg,

I've just made a test and Image.apply() seems to work correctly. This is the test script (a slightly modified version of your initial snippet):

Code:
#include <pjsr/SampleType.jsh>
#include <pjsr/ImageOp.jsh>

var valuesToAdd = [ -0.1, +0.1, +0.2 ];

function main(){
      var window = ImageWindow.activeWindow;
      var targetView=window.currentView;
      var targetImage=targetView.image;
      targetView.beginProcess();
      var resultImage=new Image( targetImage.width, targetImage.height,
                                 targetImage.numberOfChannels, targetImage.colorSpace,
                                 (targetImage.bitsPerSample < 32) ? 32 : 64, SampleType_Real );
      targetImage.resetSelections();
      resultImage.resetSelections();
      resultImage.assign(targetImage);
      var iResultHeight=resultImage.height;
      var lineRect=new Rect(resultImage.width,1);
      resultImage.resetSelections();
      // for each channel
      for (var chan=0; chan<resultImage.numberOfChannels;++chan){
            resultImage.selectedChannel=chan;
            // and each row
            for (var row=0; row<iResultHeight;++row) {
               lineRect.moveTo(0,row);
               resultImage.selectedRect=lineRect;
               resultImage.apply( valuesToAdd[chan], ImageOp_Add );  /* ### */
            }  //for row
         }  //for channel
      resultImage.resetSelections();
      targetImage.resetSelections();
      targetImage.assign( resultImage );
      // end transaction
      targetView.endProcess();
}

main();

See the attached screenshot. The image to the left (test1) has been generated with NewImage as a RGB image with initial values R=0.25, G=0.5, B=0.75. The test2 image is a copy of test1 after applying the above script. The final values are R=0.15, G=0.6, B=0.95, as expected (see the valuesToAdd array).

(To further help you, I'd need to see your code.)
 

Attachment: tests-image-apply.jpg
Juan,

thanks for the help. The code appears to work properly after I added a
Code:
 resultImage.normalize();

The problem seems to be that when there are RGB values outside of [0,1], the screen rendering behaves in unexpected ways (or is it assign() that clips values? I did not investigate). When hovering over pixels with the mouse pointer, the RGB values displayed in the PI status line never exceeded 1.0. I guess this is why I never suspected this was the problem.

Thanks again!

Georg
 