Stark Labs: Affordable, Powerful, and Easy to Use Astrophotography Software




Nebulosity

Fine Focus in Nebulosity

We’ve covered focusing here a few times before, but I thought it would be worthwhile hitting it one more time with one more video. Previously, I’ve talked about fine-focusing in Nebulosity with a Bahtinov Mask, and there is also a movie showing an older version of the tool up in the Tutorials section. So, the question is:

Q: How do I get critical focus in Nebulosity?
If you’ve not read the section on fine-focusing in Nebulosity with a Bahtinov Mask, you may want to head on over there for a longer treatment, but the upshot is that I firmly believe you don’t need an auto-focus system to easily reach critical focus. Auto-focus is great if you’re running unattended (be it a remote observatory or having the camera change filters, etc. while you’re sleeping). But, you can hit crisp focus without it and without tearing your hair out. The Bahtinov mask is one way, but even without one you can get quick, clear, numeric and graphical feedback on your focus.

I run routinely at f/4 and I don’t even have a motor on my focuser these days. I’ve done this on an f/4 Newt (where the motor really did help) and I currently do this on my Borg 101 f/4 completely manually. It only takes a minute and it’s not something I fret about.

How? Rough focus is obtained with the Frame / Focus command. Click on it and you’ll loop through images. Don’t obsess here and just get the stars to be fairly small. Then, click on Abort, Fine Focus, and then some star in the field. The video below will show the Fine Focus in action. Personally, I pay attention to the HFR (Half Flux Radius) and make small adjustments while watching the graph (allowing for the scope to settle between adjustments). Keep in mind, with a 1s exposure, you’ll always have a bit of variation from frame to frame. As you go towards focus, the HFR will get smaller (graph goes down). Once you go past it, the graph will go up. You can then back off, knowing the sharpest focus you obtained, and using that value as your target. Despite being a fast touch-typist, it took me longer to write this paragraph than it often takes to focus.
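If you're curious what that HFR number is actually tracking, here is a minimal Python sketch of one common way to approximate the Half Flux Radius of a star cutout. This is purely illustrative and is not Nebulosity's actual code; the function name and the median background estimate are just choices made for the example.

```python
import numpy as np

def half_flux_radius(star):
    """Approximate the Half Flux Radius (HFR) of a small star cutout.

    Uses the common flux-weighted-distance approximation; the real
    implementation in any given program may differ.
    """
    img = star.astype(float)
    background = np.median(img)                 # crude sky estimate
    flux = np.clip(img - background, 0, None)   # background-subtracted flux

    total = flux.sum()
    if total == 0:
        return np.nan
    ys, xs = np.indices(flux.shape)
    cy = (ys * flux).sum() / total              # flux-weighted centroid
    cx = (xs * flux).sum() / total
    r = np.hypot(ys - cy, xs - cx)              # distance of each pixel from centroid
    return (r * flux).sum() / total             # flux-weighted mean radius ~ HFR
```

Smaller HFR means a tighter star, which is why the graph heads down as you approach focus and back up once you pass it.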


Fine Focus and Bahtinov Masks

Q: How can I get focus easily and why don’t you write an autofocus routine?

Focus is something amateur astrophotographers worry about a lot. Some get so concerned about the challenge of reaching focus that they push hard for an autofocus setup, thinking this will make their lives a lot easier and that they can be assured of sharp images as a result.

Don’t get me wrong -- I like the concept of autofocus. But, there are two things to know before going that route. First, it’s going to cost you. To do autofocus well and have it work smoothly, you really want not only a motor on the focuser but also the ability to know just where the focuser is. You can do this with a micrometer-style readout (like the Televue setup) or with encoders on the motor. If you go with encoders on the motor, you need to profile the system well to know just how much backlash there is, as the encoders will turn without the focuser moving. If you don’t have encoders, life is more challenging, and if your focuser has image shift, you’ll be looking at one direction of movement only. So, better to have something with encoders and a solid enough setup that you don’t have shift.
Second, it’ll take time. It’ll either take time to run the full “V-curves” each time you’re out or it’ll take time to profile the focuser / scope’s V-curve so that you can hit a point on each side of focus and know, given the star’s size on those two points where focus is. Toss in parameters to derive and you’re not looking at something quick and easy that can work on any of the various bits of hardware people have out there.
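To make that "hit a point on each side of focus" idea concrete, here is a rough sketch of the arithmetic a V-curve approach involves once the branch slopes have been profiled. It illustrates the general idea only, not how any particular autofocus package implements it, and the numbers are made up.

```python
def focus_from_vcurve(p_left, hfr_left, p_right, hfr_right,
                      slope_left, slope_right):
    """Estimate best-focus position from one HFR sample on each side of focus.

    Assumes the scope/focuser V-curve (HFR vs. focuser position) was
    profiled earlier, so the slopes of the two branches are already known.
    Each branch is modeled as a line: hfr = hfr_i + slope_i * (x - p_i).
    Best focus is where the two lines intersect.
    """
    return (hfr_right - hfr_left + slope_left * p_left
            - slope_right * p_right) / (slope_left - slope_right)

# Made-up example: slopes of -0.02 and +0.02 HFR per focuser step.
best = focus_from_vcurve(11800, 9.0, 12600, 7.0, -0.02, 0.02)   # -> 12250
```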
Again, don’t get me wrong. For some setups, it’s essential. If you’re running a remote observatory, for example, auto-focus is going to be a huge win. But, for your typical user, if focus can be done quickly and easily without this, it may not be worth the hassle.

I’m here to suggest it can be done quickly and easily without any of this.

In one of my earlier tutorials, I posted a video of both rough focusing and fine focusing using an earlier version of Nebulosity’s fine-focus tool. Here, I’ll show a video of me using both the current version of the Fine Focus tool and augmenting this with the use of a Bahtinov mask.

What the heck is a Bahtinov mask you ask? It’s an elaboration on the idea of a Hartmann mask -- something you stick over the front end of your scope to induce a diffraction pattern that makes focusing easier. David Polivka over at Astrojargon.net has a great webpage on the mask and how to make one. I made one myself out of a piece of thin cardboard by printing out the pattern from David’s site, taping it onto the cardboard, and using a razor blade and straight edge to cut out the pattern. (I’ll post a picture up here soon.) With some creative folding of the cardboard, it snugly holds itself onto the front of the tube just fine.

The concept is that you adjust the focus until the middle spike is nicely centered. Here, I have a video of me going through the focus process. I’m working on an 8” f/5 Newt (I’ve done this at f/4 as well without issues) that has a heavy camera setup on a stock one-speed GSO Crayford. No nice Feathertouch here. No, I’m using a simple focuser on a mount pushed to its capacity. What we see is the image of the star in focus with the mask in the upper left, the profile in the upper right (orient the diffraction right and that profile could be very useful!), and the running log and current values for the max intensity and the half flux radius. Play the video and you’ll see (and hear) me go from this well-focused spot to taking it out of focus with the mask and bringing it back. I’ll then pull off the mask and show the star is in focus, nudge the focus out a bit and bring it back showing we get to the same focus spot. Note, the HFR will be different with the mask on and off, of course, but the point is that the focuser position is the same in each. When one is in focus, the other is as well. In the minute I’m actually doing anything here, you’ll see me hit focus with each method. So, that’s focusing the system twice in a minute.



Personally, I think this is pretty easy and straightforward. Watching this video should give you a good feel for using the Bahtinov mask with Nebulosity. Watching the other video should give you a good feel for using Nebulosity without this (and on a very unstable night). Either way, you can be confident that you’re hitting accurate focus on modest hardware with no investment and in a short time.

Stacking accuracy

Q: How can I get the sharpest images in my stack using Nebulosity? How does Nebulosity compare to other stacking tools?

Nebulosity has several means of aligning the images prior to actually stacking them. We can use simple translation, translation + rotation, translation + rotation + scaling, and Drizzle. I've covered Drizzle in an article for Astrophoto Insight, so I'll focus on the more traditional methods here.

The big difference between "translation" and "translation + rotation (+ scaling)" is that when doing a translation-only alignment, Nebulosity does not resample the image. It does "whole pixel" registration. This sounds worse than "sub-pixel" registration. Isn't it better to shift by small fractions of a pixel? Well, it would be, except for the fact that when you do so, you need to know what the image looks like shifted a fraction of a pixel. That means, you must interpolate the image and interpolation does cause a loss of sharpness. So, you're faced with a trade-off. Keep the image exactly as-is and shift it by whole pixels or resample it and shift it by fractional pixels.
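Here is a small illustration (in Python, not Nebulosity's code) of that difference. Whole-pixel registration just moves the array; sub-pixel registration has to resample the image, which is where the slight softening comes from.

```python
import numpy as np
from scipy import ndimage

# A stand-in "frame"; in practice this would be one of your light frames.
frame = np.random.default_rng(0).normal(1000, 50, (512, 512))

dy, dx = 3.0, 2.4   # measured offset between this frame and the reference

# Whole-pixel registration: round the offset and move the array as-is.
# (np.roll wraps at the edges; real code would pad instead, but the point
# is that no resampling happens.)
whole_pixel = np.roll(frame, (round(dy), round(dx)), axis=(0, 1))

# Sub-pixel registration: shift by the exact offset, which requires
# interpolating the image (cubic spline here) and slightly softens it.
sub_pixel = ndimage.shift(frame, (dy, dx), order=3)
```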

Now, toss into this the fact that our long-exposure shots are already blurred by the atmosphere (and to a varying degree from frame to frame) and you've got a mess if you try to determine which is better from just thinking about it. So, we have what we call an "empirical problem." Let's get some data and test it.

I took some data I had from M57 shot with an Atik 16IC at 1800 mm of focal length and some wider-field data of M101 shot on a QHY 2Pro at 800 mm. I ran the M57 data through a number of alignments and Michael Garvin ran the M101 data through several as well.

Here are the images from M57 (click here for full-res PNG file). All were processed identically, save for the alignment / stacking technique.


Here are the images from M101 (click here for full-res PNG version). Again, all were processed identically. Here, the image has been enlarged by 2x and a high-pass filter overlay was used to sharpen each (all images were on the same layer in Photoshop, so the exact same sharpening was applied).


So what do we take from all this? Well, first, there's not a whole lot of difference among the methods. All seem to do about the same thing. To my eye, adding the "starfield fine tune" flag in Nebulosity helps a touch and using the resampling (adding a rotation component) hurts a touch, but these aren't huge effects. Someday, I'll beef up the resampling algorithm used in the rotation + (scale) version. Comparing Nebulosity's results with those of other programs again seems pretty much a tie. I can't pick out anything in their stacks that I don't see as well in Nebulosity's. Overall, these images seem to be limited more by the actual sharpness of the original data than by the stacking method.


Gain, Offset, and Bit Depth

Q: What should I set my "gain" and "offset" to?

Before answering this, a bit of background is useful. Specifically, just what the heck do gain and offset do? Before we cover that, we need a brief primer on how those photons you capture become the intensities you see on the screen. If you wish, skip down to "OK, so what should I set my gain and offset to?" below.

How do signals off my CCD become intensity values?
When each CCD pixel is read out, there is a certain amount of voltage corresponding to how many photons were collected and converted into electrons. This is an analog signal that needs to be converted into a digital signal so that we have a number corresponding to the intensity. This conversion happens in the analog-to-digital converter (ADC). In so doing, we have a specification often seen on cameras, the overall system gain, typically specified as some number of electrons per ADU (analog-to-digital unit, aka the raw intensities you see in your image in a program like Nebulosity). A camera may have an overall system gain of something like 0.7 e-/ADU or 1.3 e-/ADU, etc. This means that it takes 0.7 or 1.3 electrons to move the raw intensity up by one unit (put the other way, each electron corresponds to roughly 1.4 or 0.8 ADU).

There are four key limitations to keep in mind when thinking about the ADC process:

1) There are no fractional ADU outputs. So, one electron in both the systems above would probably end up recording 1 ADU. You can't have half an ADU (and you can't have half an electron).

2) Your ADC has a minimum value of 0 and a total number of intensity steps of 2 ^ (# bits in your ADC). For a 16-bit ADC, this is 0-65,535. For an 8-bit ADC, this is 0-255, etc.

3) Zero is evil and 65,535 is bad but not evil. When your signal hits either, you lose information. If the sky is at zero and your faint galaxy is at zero, no amount of stretching will bring it back. 0*1 = 0*100 = 0.

4) Your CCD has a limited number of electrons it can hold, called the well depth. This may be 20,000 e-, 40,000 e-, etc. Note that for all the cameras I know of that let you adjust the gain and offset (Orion Starshoot, Meade DSIs, QHY cameras, etc.), the well depth is < 65,535. This will be key for my argument below.

What do gain and offset do?
With all this in your head, we can now describe what gain and offset controls on cameras do. After coming off the CCD and before hitting the actual ADC there is typically a small pre-amplifier (this may be inside the ADC chip itself). What this preamp does is allow you to boost the signal by some variable amount and to shift the signal up by some variable amount. The boosting is called gain and the shift is called offset.

So, let's say that you have pixels that would correspond to 0.1, 0.2, 1.1, and 1.0 ADU were the ADC able to deal with fractional numbers. Now, given that it's not, this would turn into 0, 0, 1, and 1 ADU. Two bad things have happened. First, the 0.1 and 0.2 have become the same number and the 1.1 and 1.0 have become the same number. We've distorted the truth and failed to accurately represent subtle changes in intensity. This failure is called quantization error. Second, the first two have become 0 and, as noted above, 0 is an evil black hole of information.

Well, what if we scaled these up by 10x before converting them into numbers (i.e., we introduce some gain)? We'd get 1, 2, 11, and 10. Hey, now we're getting somewhere! With gain alone, we've actually fixed both problems. In reality, the situation is often different and the ADC's threshold for moving from 0 to 1 might be high enough so that it takes a good number of electrons to move from 0 to 1. This is where injecting an offset (a DC voltage) into the signal comes in to make sure that all signals you could possibly have coming off the CCD turn into a number other than zero.
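Here is a toy version of that digitization step in Python, just to make the numbers above concrete. It is not any real camera's firmware: gain is expressed in e-/ADU, the truncation to whole ADU is the quantization, and the offset is the DC shift.

```python
import numpy as np

def adc(electrons, e_per_adu, offset_adu=0, bits=16):
    """Toy model of digitization: electrons -> ADU with gain, offset,
    truncation to whole numbers, and clipping to the ADC's range."""
    adu = np.asarray(electrons, dtype=float) / e_per_adu + offset_adu
    return np.clip(np.floor(adu), 0, 2**bits - 1).astype(int)

signal = [0.1, 0.2, 1.1, 1.0]             # the "electron" values from the example
print(adc(signal, e_per_adu=1.0))          # [0 0 1 1]   quantized, two pixels stuck at evil 0
print(adc(signal, e_per_adu=0.1))          # [1 2 11 10] 10x gain preserves the differences
print(adc(signal, e_per_adu=1.0, offset_adu=20))   # [20 20 21 21] offset keeps everything off zero
```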

Gain's downside: Bit depth and dynamic range
From the above example, it would seem like we should all run with lots of gain. The more the better! Heck, it makes the picture brighter too! I often get questions about this with the assumption that gain is making the camera more sensitive. It's not. Gain does not make your camera more sensitive. It boosts the noise as well as the signal and does not help the signal to noise ratio (SNR) in and of itself. Gain trades off dynamic range and quantization error.

We saw above how it reduces quantization error. By boosting the signal we can have fractional differences become whole-number differences. What's this about dynamic range?

Let's come up with another example. Let's have one camera with a gain of 1. So, 1 e-/ADU. Let's have another run at 0.5 e-/ADU. Now, let's have a pixel with 1k e-, another with 10k e-, another at 30k e-, and another at 50k e-. In our 1 e-/ADU cam, we of course have intensities of 1000, 10000, 30000, and 50000. In our 0.5 e-/ADU cam, we have intensities of 2000, 20000, 60000, and 65535. What? Why not 100000? Well, our 16-bit camera has a fixed limit of 65535. Anything above that gets clipped off. So while the 1 e-/ADU camera can faithfully preserve this whole range, the 0.5 e-/ADU camera can't. Its dynamic range is limited now.
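The same numbers, run through a couple of lines of Python (illustrative only):

```python
import numpy as np

electrons = np.array([1_000, 10_000, 30_000, 50_000])

# 1.0 e-/ADU: everything fits in the 16-bit range.
print(np.clip(electrons / 1.0, 0, 65535).astype(int))   # [ 1000 10000 30000 50000]

# 0.5 e-/ADU: the brightest pixel wants to be 100,000 ADU and gets clipped.
print(np.clip(electrons / 0.5, 0, 65535).astype(int))   # [ 2000 20000 60000 65535]
```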

How do manufacturers determine gain and offset for cameras that don't allow the user to adjust them?
Let's pretend we're making a real-world camera now, put in some real numbers, and see how these play out. Let's look at a Kodak KAI-2020 sensor, for example. The chip has a well depth specified at 45k e-. So, if we want to fit a 45,000 e- range into 0-65,535, one easy way to do it is to set the gain at 45,000 / 65,535, or about 0.69 e-/ADU. Guess where the SBIG ST-2000 (which uses this chip) has the gain fixed... 0.6 e-/ADU. How about the QSI 520ci? 0.8 e-/ADU. As 45k e- is a target value with actual chips varying a bit, the two makers have chosen to set things up a bit differently to deal with this variation (SBIG's will clip the top end off as it's going non-linear a bit more readily), but both are in the same range and both fix the value.

Why? There's no real point in letting users adjust this. Let's say we let users control the gain and they set it to 5 e-/ADU. Well, with 45k e- for a maximum electron count at 5 e-/ADU, we end up with a max of 9,000 ADU and we induce strong quantization error. 10, 11, 12, 13, and 14 e- would all become the same value of 2 ADU in the image, losing the detail you so desperately want. What if the user set it the other way, to 0.1 e-/ADU? Well, you'd turn those electron counts into 100, 110, 120, 130, and 140 ADU and wonder just what's the point of skipping 10 ADU per electron. You'd also make 6,553 e- the effective full-well capacity of the chip. So, 6,553:1 would be the maximum dynamic range rather than 45,000:1. Oops. That nice detail in the core of the galaxy will have been blown out and saturated. You could have kept it preserved and not lost a darn thing (since each electron counts for > 1 ADU) if you'd left the gain at ~0.7 e-/ADU.

What about offset? Well, it's easy enough to figure out the minimum value a chip is going to produce and add enough offset in the ADC process to keep it such that this is never going to hit 0.

OK, so what should I set my gain and offset to?
The best value for your camera may not be the best value for other cameras. In particular, different makers set things up differently. For example, on a Meade DSI III that I recently tested, running the gain full-out at 100% let it just hit full well at 65,535 ADU. Running below 100%, it hit full well at 40,000, 30,000, or 10,000 ADU. There's no point in running this camera at anything less than 100% gain. On a CCD Labs Q8-HR I have, even at gains of 0 and 1 (on its 0-63 scale), the camera would hit 65,535 on bright objects (like the ceiling above my desk). There's no point in running this camera at gains higher than 0 or 1.

Why is there no point? The camera only holds 25k e-. If a gain of 0 or 1 gets me to 0.38 e-/ADU (so that those 25k e- become 65535), running at 0.1 e-/ADU will only serve to limit my dynamic range. Each single electron already comes out to more than 2 ADU.

So, how do I set it? (man, you ramble a lot when you get going!)
1) Take a bias frame and look for the minimum value in it. Is it at least, say, 100 and less than a thousand or a few thousand? If so, your offset is fine. If it's too low, boost the offset. If it's too high, drop it. Repeat until you have a bias frame whose minimum is in, roughly, 100 - 1000. Don't worry about precision here, it won't matter at all in the end. You now know your offset. Set it and forget it. Never change it.

2) Aim the camera at something bright or just put it on your desk with no lens or lens cap on and take a picture. Look at the max value in the image. Is it well below 65k? If so, boost the gain. Is it at 65k? If so, drop the gain. Now, if you're on a real target (daylight ones are great for this) you can look at the histogram and see the bunching up at the top end as the camera is hitting full well. Having that bunch-up roughly at 65,535, plus or minus a bit, is where you want to be. If you pull up just shy, you'll get the "most out of your chip" but you'll also have non-linearity up there. You've got more of a chance of having odd color casts on saturated areas, for example, as a result. If you let that just clip off, you've lost a touch, but what you've lost is very non-linear data anyway (all this assumes, BTW, an ABG chip, which all of these cams in question are). Record that gain and set it and forget it. Never change it.
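Nebulosity shows you the min, max, and histogram directly, so you don't actually need to script any of this. But if you'd rather pull the numbers out yourself, here's a rough sketch of the same two checks (it assumes your test frames are saved as FITS; the filenames are hypothetical):

```python
import numpy as np
from astropy.io import fits   # any image loader works; FITS is just an assumption

# Step 1: the darkest pixel in a bias frame tells you if the offset is sane.
bias = fits.getdata("bias_test.fit").astype(int)       # hypothetical file name
print("bias minimum:", bias.min())      # want roughly 100-1000, never 0

# Step 2: a deliberately saturated daylight frame shows where full well lands.
light = fits.getdata("bright_test.fit").astype(int)    # hypothetical file name
print("light maximum:", light.max())    # want the saturated bunch near 65,535
hist, edges = np.histogram(light, bins=256, range=(0, 65536))
print("pixels in top bin:", hist[-1])   # a big pile-up here means you're clipping
```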

By doing this simple, daytime, two-step process you've set things up perfectly. You'll be sure to never hit the evil of zero and you'll be making your chip's dynamic range fit best into the 16 bits of your ADC. Again, all the cameras in question have full-well capacities below 65,535, so you are sure to have enough ADUs to fit every electron you record into its own intensity value.

Caveat
The above assumes you have more ADUs available than electrons. This is true as noted for the cameras in question here but isn't universally true. For example, if you have an 8-bit ADC, variable gain is quite important, as you may want to trade off quantization error against dynamic range. You may be fine blowing out the core of a galaxy to get those faint arms and want to run at 1 or 2 e-/ADU instead of 10 or 50 or 200 e-/ADU. This happens in 12-bit DSLRs as well, with their 4,096 shades of intensity, but not so much with 14-bit DSLRs and their 16,384 shades.

Please note that none of this has considered noise at all. The situation is even "worse" when we factor in the actual noise we have. If the noise in the frame is 8 ADU that means the bottom 3 bits are basically worthless. That 45,000:1 dynamic range is really 45,000:8 or 5,625:1 and you're not even able to really pull out every electron. But, that's a topic for another day. (Google "Shannon Information" if interested).

Combining images: means, medians, and standard deviations

Q: I hear medians are a good way to stack images as they can remove things like hot pixels, cosmic rays, or streaks from satellites. Does Nebulosity support this?

The short answer is no ... but... When combining images we want something that helps reduce the noise. We'd also like something that is tolerant of "outliers". The mean (average) is great at the first part but lousy at the second part. Medians are not so hot at the first part but great on the second part. What we'd like is something that is good at both parts. Nebulosity supports standard-deviation based filtering of your images to let you keep most of the frames and pitch just the bad ones.

OK, so what is it and why is it better? What are these 1.5, 1.75, etc. thresholds I'm being asked about?

If you were to take a perfect image of a target, each pixel would have its "ideal" or "true" value - how much intensity there is from that part of the target. The trouble is, each time we sample the target (aka each image) we get that true value for a pixel but we also get some noise on top of it. We want, of course, the true value. How do we get rid of that noise?

In statistics, we have several typical ways of describing our data. Think right now just about a single pixel (after alignment). So, we have the same spot in the arm of M51 or something. The most common way is the mean (aka average) of all of our samples (aka images, light frames, etc.). It should tell us the central tendency and therefore estimate the truth. The more samples we have, the better the estimate is since we don't have to rely on just one sample (which has truth plus or minus some noise value) or a few samples. With more samples, the noise tends to cancel and we are left with a closer estimate of the truth (the noise, BTW, tends to follow a 1/sqrt(# samples) rule). We can quantify how much noise there is in our samples with a second way of describing our data. The variance (and its square root, the standard deviation) are the typical ways we do this, telling us how much "spread" there is in our samples.
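Here's a quick simulation of that 1/sqrt(# samples) rule, purely for illustration (the "frames" are just random numbers around a made-up true value):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 500.0                        # the "true" value of one pixel
for n in (1, 4, 16, 64):
    # 10,000 trials of stacking n frames, each with 10 ADU of noise
    stacks = truth + rng.normal(0, 10, size=(10_000, n))
    print(n, stacks.mean(axis=1).std())   # noise falls roughly as 10 / sqrt(n)
```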

If we assume the data are "normal", about 70% of all samples will lie within one standard deviation (SD) of the mean (that is, 70% are less than one standard deviation above or one standard deviation below the average). About 95% lie within 2 SD of the mean. Below, I show the histogram of 5000 normally-distributed random numbers (pretend you had 5000 light frames!). Samples in green lie within 1 SD of the mean. Yellow (and green) lie within 1.5 SD. Orange (and yellow and green) are within 2 SD and red are outside of 2 SD. Now, these are all real samples (nothing like an errant satellite) but we could safely pitch those samples in red or orange and still have a good estimate of the mean. We'd not lose too many samples and we'd take out those that are more likely to be outliers. If a sample comes in that is > 2 SD, odds are pretty good it's an errant value (like a hot pixel or satellite). Even if it's not, since we don't have 5000 samples - we have far fewer - filtering these out will help keep our estimate of the mean centered where it should be and not skewed by that outlier. Thinking about this diagram will help us a lot in the next step - understanding what happens during stacking. Just remember that with the standard deviation, we know what kind of values we might expect to find and what type of values are really abnormal (e.g., something 5 SD from the mean is very abnormal as there is only a 0.000057% chance this is a real sample and not the result of something else going on).
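As a quick sanity check on those percentages, here is a small sketch (it's not the histogram figure referenced above, just the same idea in a few lines):

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(size=5_000)    # pretend: 5000 light frames' values for one pixel
for k in (1.0, 1.5, 2.0, 5.0):
    frac = np.mean(np.abs(samples) <= k)
    print(f"within {k} SD: {frac:.1%}")
# roughly 68%, 87%, 95%, and essentially 100%
```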


OK, given that background, here is what happens during the stack. For each (aligned) pixel, we calculate the mean and standard deviation across all of the images in the stack. If your SD threshold is at 1.5, any samples of that pixel that have an intensity beyond 1.5 SD from the mean are removed and a new average, excluding these samples, is calculated. This, BTW, is why hot pixels are often eliminated using SD stacking - those hot pixel values are very abnormal and lie far away from the mean.

With the filter set at 1.75, it takes a more extreme or "outlying" intensity value to be counted as "bad" than at 1.5. At 2.0, it takes even more abnormal a value to be excluded. Thus, more samples go into the final image using a higher threshold (and more things like semi-hot pixels as well). Typically, filtering values at 1.5 or 1.75 will yield the best results.
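If you want to see the mechanics spelled out, here is a compact Python sketch of per-pixel standard-deviation filtering. It mirrors the description above, not Nebulosity's exact implementation:

```python
import numpy as np

def sd_filtered_stack(frames, threshold=1.5):
    """Per-pixel standard-deviation filtered average of aligned frames.

    `frames` has shape (n_frames, height, width). Samples more than
    `threshold` standard deviations from that pixel's mean are dropped,
    and the remaining samples are averaged.
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    sd = frames.std(axis=0)
    keep = np.abs(frames - mean) <= threshold * sd     # per-pixel inlier mask
    # Average only the kept samples at each pixel (guard against empty pixels).
    return (frames * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)

# e.g. stacked = sd_filtered_stack(aligned_frames, threshold=1.75)
```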

Standard-deviation based stacking therefore lets in more good samples than a median and takes out more (>0) bad samples than the mean (average). That's what makes it such a nice technique for filtering out bad samples. Note, you're not filtering out whole frames. This math is done on each pixel. So, frame #1 may have a bad value at pixel 109,231 but be great everywhere else. For all other pixels, this frame's data will be used but for pixel 109,231 it won't.

The technique isn't perfect. With a lot of outliers, the estimate of the standard deviation goes way up. So, we have a bigger "spread" that is considered normal and it takes something more aberrant to get filtered out. There are techniques to get around this as well, of course, but that's a topic for another day.