Thursday, May 24, 2012

First light with new solar imaging setup!

This is a report from the 'first light' session with my new high resolution solar imaging setup!

During the bright summer months here in Denmark I stop work on deep sky or planetary imaging and switch to solar. I do H-alpha imaging with a Skynyx 2-2M camera, i.e. the 'lucky cam' technique with thousands of images.

Over the past few years I have been pushing towards ever higher resolution. It started with increasing my focal length to get better sampling on a Coronado SM60 setup. Next, I increased the aperture to 100mm by acquiring a Daystar rear-mounted filter (a back-to-back comparison with the Coronado is here). This year I purchased a second-hand 152mm achromatic refractor with 900mm focal length to use with the Daystar. Normally (and most safely!) a Daystar is used with a front-mounted energy rejection filter (ERF), but on the Daystar Yahoo forum I learned that this can be replaced by a high-quality UV/IR filter mounted internally. ERFs are VERY expensive for large apertures, so this trick was really what inspired me to try working at six inches.

The scope is a Chinese-made achromat mounted on a Takahashi EM-200 equatorial mount. The mechanical quality is very good for the price, and I have heard that the optics are good too - especially when the light is filtered and a tele-extender is used with a small-chip camera. Super-duper APOs with super-duper H-alpha filters are really overkill for this application!

I had special adapters made so that a Baader 2" UV/IR rejection filter could be placed just after the focuser. This is the highest-quality filter available, and nothing less should be used here, for safety reasons. The filter is mounted in a way that it carries no weight. This is important since the very long imaging train exerts a lot of torque. After some T2 spacer tubes comes a Baader TZ4 tele-extender, followed by the Daystar Quantum SE H-alpha filter. The spacer tubes place the TZ4 at the optimal position relative to the telescope focal plane while ensuring that the 3" Crayford focuser is not extended at all - thus minimizing problems with a sagging drawtube.

After the Daystar comes a lightweight helical focuser from Borg and then the Skynyx 2-2M camera. The helical focuser lets me move only the camera instead of the entire long, huge and heavy imaging assembly. Everything is screwed securely together (no draw tubes or clamping rings) to minimize sagging. Achieving this is a true nightmare in adapters! Check out the photos of the setup below.

6" solar imaging setup on the Takahashi EM-200 mount. Note that the dew cap is fully retracted to avoid possible tube currents.

Details of the assembly for high resolution solar imaging
After fiddling with all the spacer tubes and adapters in the imaging assembly I had time to try a few shots. I took two 90-second sequences of ~1500 images each of active region 11484. First the Daystar was set to 6563.8Å, and then to the H-alpha wavelength of 6562.8Å. The images below are stacks of the 60 best frames followed by some wavelet sharpening:
AR11484 @ 6563.8Å. Total length of this complex is ~120 arcsec.

AR11484 @ 6562.8Å a few minutes later.

I did not bother to remove traces of Newton's rings. The images were taken two hours past midday over a low black roof, so the seeing should not have been too good. Finally, I have not checked the scope's collimation since it arrived. Still, I estimate the resolution to be around 1 arcsecond - not bad for a first try. I can't wait to play more with this setup over the summer. Let's hope for clear weather during the Venus transit on June 6th!!
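For readers curious how the 'lucky cam' selection step works in principle: below is a minimal Python/NumPy sketch. It uses a Laplacian-variance sharpness metric to rank frames - my actual stacking software has its own quality ranking, and the function names here are purely illustrative.

```python
import numpy as np

def sharpness(frame):
    """Crude sharpness metric: variance of a discrete Laplacian.
    Sharper (less seeing-blurred) frames score higher."""
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
           np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4.0 * frame)
    return float(lap.var())

def lucky_stack(frames, keep=60):
    """Rank frames by sharpness and average only the best `keep`."""
    scores = np.array([sharpness(f.astype(np.float64)) for f in frames])
    best = np.argsort(scores)[-keep:]   # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

Applied to a 90-second sequence, this is the idea behind keeping the 60 best of ~1500 frames: moments of good seeing are selected, the rest discarded.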

Wednesday, May 23, 2012

M100 processing - Part 5

I have now calibrated the data and rejected the bad frames, so the original 1064 image files have been reduced to 96 LRGB light frames. In this post I will reduce this further to just four - one for each filter. I use MaxIm for stacking, with the 'Auto - Star Matching' alignment method. I have previously found that bi-linear interpolation produces slightly sharper - but also noisier - images than bi-cubic interpolation. For the RGB data I'll go with bi-cubic, since resolution matters less there. For the luminance data, let's see what can be gained from bi-linear vs. bi-cubic and all-frames vs. best-half stacking:

FWHM of stacked luminance frames:

                Bi-cubic    Bi-linear
All frames:     3.53"       3.45"
Best half:      3.41"       3.31"

By 'best half' I mean using only the sharpest 50%, as ranked by each image's FWHM value. The FWHM can be reduced by ~6% using bi-linear and 'best half' compared to bi-cubic and all frames. However, the SNR in faint regions of M100 is cut in half by doing so, and I think that is too high a price to pay. I'd rather have high SNR and then later try my luck with deconvolution, which really demands low-noise data. See for yourself below - there isn't much visible difference between the two results!
Enlarged sections of stacked luminance frames. There is no significant difference visible, but the computer tells me that SNR is better on the bi-cubic/all image.
For aligning images from the various filters I use a common reference image from the luminance stack. This reference is of course not included in the stacking itself, but it ensures that the resulting stacked LRGB images are aligned to each other.
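The bi-linear vs. bi-cubic choice above comes down to the resampling order used when shifting each frame onto the reference grid. MaxIm's internal resampler is not public, but the idea can be sketched with SciPy's `ndimage.shift`:

```python
import numpy as np
from scipy import ndimage

def resample(frame, dy, dx, method="bicubic"):
    """Shift a frame by a (possibly sub-pixel) offset onto the
    reference grid. order=1 is bi-linear, order=3 is bi-cubic."""
    order = {"bilinear": 1, "bicubic": 3}[method]
    return ndimage.shift(frame, (dy, dx), order=order, mode="nearest")
```

Bi-linear interpolates each output pixel from its 2x2 nearest neighbours, while bi-cubic uses a 4x4 neighbourhood, which is why the two choices can trade off differently between noise and resolution.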

Next step will be to experiment with deconvolution on the luminance image, so stay tuned!


I should mention a problem with MaxIm I encountered while combining the luminance data. I have two sets of sub-exposures - one from March 25th and another from March 27th. By mistake I used a different guide star on these two nights and as a consequence the two image series are pretty severely misaligned:
Two sub-exposures that are severely misaligned.
Still, the data from both nights is good, so I'll go ahead with alignment and combining. Note that the two nights produced different background levels - on the first I had ADU=1700 while on the second I got ADU=2300. In MaxIm I use the 'Auto - star matching' alignment mode, which works very well, and then combine using the 'Sigma-clip' combine method with 'delta-level' normalization. The result is shown below.
Problem: combined image has a large offset where the sub-exposures fail to overlap.
I spent several days pondering this problem without success. Only when writing this blog post did the correct line of thought fall into place - and with that, a solution! I think the problem arises in two steps. First, during alignment, MaxIm sets pixel values outside the original field of view to zero (other programs often choose to use a median edge value). On my images this results in a lot of zeroes, due to the large misalignment between the two nights. Next, during image combination, these zero values create an offset in the combined result, as shown above. The solution is very simple: just activate the 'Ignore black pixels' option on the 'Combine' tab. As shown below, this fixes the problem. Of course, the background still exhibits a discontinuity in noise level where the two data series fail to overlap, but this is quite natural and easy to handle later on.
Problem solved: use 'ignore black pixels' option!
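The effect is easy to reproduce numerically. Here is a minimal Python/NumPy sketch using a plain average instead of MaxIm's sigma-clip (the principle is the same), with two toy 'frames' at my two measured background levels:

```python
import numpy as np

def combine_ignore_black(frames):
    """Average aligned frames, treating zero-valued pixels (the padding
    added outside each frame's original field of view) as missing data."""
    stack = np.asarray(frames, dtype=np.float64)
    mask = stack != 0                        # True where real data exists
    counts = mask.sum(axis=0)
    total = np.where(mask, stack, 0.0).sum(axis=0)
    return np.where(counts > 0, total / np.maximum(counts, 1), 0.0)

# Backgrounds of 1700 and 2300 ADU; the second frame is zero-padded
# where it does not overlap the first, mimicking the misalignment.
a = np.full((1, 6), 1700.0)
b = np.full((1, 6), 2300.0)
b[0, :2] = 0.0                               # alignment padding

naive = (a + b) / 2                          # zeros drag the mean down
fixed = combine_ignore_black([a, b])
print(naive[0, 0], fixed[0, 0])              # 850.0 vs 1700.0
```

The 850 ADU pixels in the naive result are exactly the offset step visible in the combined image; ignoring the black pixels restores the single-frame background where only one night's data exists.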

Tuesday, May 1, 2012

M100 processing part 4 - FWHM evaluation

I have now finished the rejection of blue and luminance data. In total I acquired 180 light sub-exposures through LRGB filters, representing 36 hours of integration time. Of these I wound up rejecting roughly half, due to a fading deep sky signal or a high background level. I knew this would happen, and that is partly why I took so much data - even after rejection I wanted enough left for a low-noise image.

So, from a total of 1064 individual files prior to any processing (most of which were calibration data), we are now down to 96 light frames of reasonable quality! The next step is to look at the average stellar FWHM to see if more rejection is needed. For this I use CCDInspector:
FWHM on remaining sub-exposures (click for larger version)

The average FWHM with the RGB filters is around 3.0", while it is 3.8" for the luminance data (UV/IR rejection filter). Is this just due to poor seeing on those nights, or is it typical that a broader bandwidth results in fuzzier images? I suspect the latter, but I am not sure. I use an RC astrograph with no refractive elements except for the deflection plate in my AO-8 tip-tilt guider.

For the RGB data I will not reject any images based on FWHM, since none of them differ significantly from the mean. For the luminance data I want to pursue maximum sharpness, and here I might explore whether something can be gained by stacking only the best half. Stay tuned!
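The 'best half' selection itself is trivial once each sub-exposure has an FWHM number attached. A small Python sketch - the filenames and the (filename, FWHM) list format are hypothetical, e.g. values copied out of a CCDInspector report:

```python
def best_half_by_fwhm(subs):
    """Keep the sharpest 50% of sub-exposures, ranked by per-frame
    FWHM in arcseconds (lower is sharper). `subs` is a list of
    (filename, fwhm) pairs."""
    ranked = sorted(subs, key=lambda s: s[1])
    return ranked[: max(1, len(ranked) // 2)]

subs = [("L_001.fit", 3.9), ("L_002.fit", 3.4),
        ("L_003.fit", 4.2), ("L_004.fit", 3.6)]
print(best_half_by_fwhm(subs))   # keeps the 3.4" and 3.6" frames
```

The selected files would then be fed to the stacking step while the rest are set aside.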