Stark Labs Affordable, Powerful, and Easy to Use Astrophotography Software

Ask Craig

Fine Focus in Nebulosity

We’ve covered focusing here a few times before, but I thought it would be worthwhile to hit it one more time with one more video. Previously, I’ve talked about fine-focusing in Nebulosity with a Bahtinov Mask, and there is also a movie showing an older version of the tool up in the Tutorials section. So, the question is:

Q: How do I get critical focus in Nebulosity?
If you’ve not read the section on fine-focusing in Nebulosity with a Bahtinov Mask you may want to head on over there for a longer treatment, but the upshot is that I firmly believe you don’t need an auto-focus system to easily reach critical focus. Auto-focus is great if you’re running unattended (be it a remote observatory or having the camera change filters, etc. while you’re sleeping). But, you can hit crisp focus without it and without tearing your hair out. The Bahtinov mask is one way, but without this you can get quick, clear, numeric and graphical feedback on your focus.

I run routinely at f/4 and I don’t even have a motor on my focuser these days. I’ve done this on an f/4 Newt (where the motor really did help) and I currently do this on my Borg 101 f/4 completely manually. It only takes a minute and it’s not something I fret about.

How? Rough focus is obtained with the Frame / Focus command. Click on it and you’ll loop through images. Don’t obsess here and just get the stars to be fairly small. Then, click on Abort, Fine Focus, and then some star in the field. The video below will show the Fine Focus in action. Personally, I pay attention to the HFR (Half Flux Radius) and make small adjustments while watching the graph (allowing for the scope to settle between adjustments). Keep in mind, with a 1s exposure, you’ll always have a bit of variation from frame to frame. As you go towards focus, the HFR will get smaller (graph goes down). Once you go past it, the graph will go up. You can then back off, knowing the sharpest focus you obtained, and using that value as your target. Despite being a fast touch-typist, it took me longer to write this paragraph than it often takes to focus.
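If you're curious what a metric like HFR actually measures, here's a rough sketch in Python. This is my own illustration of the idea (the function names and details are invented for the example), not Nebulosity's actual code:

```python
import numpy as np

def half_flux_radius(img, cx, cy):
    """Radius of the circle, centered on (cx, cy), that encloses half of the
    star's total flux. img should be a background-subtracted 2-D array."""
    ys, xs = np.indices(img.shape)
    r = np.hypot(xs - cx, ys - cy)            # distance of each pixel from center
    order = np.argsort(r.ravel())             # sort pixels by distance
    cumflux = np.cumsum(img.ravel()[order])   # flux enclosed as radius grows
    k = np.searchsorted(cumflux, cumflux[-1] / 2.0)
    return r.ravel()[order][k]

def gaussian_star(sigma, size=31):
    ys, xs = np.indices((size, size)) - size // 2
    return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))

# As focus improves, the star tightens and the HFR (and the graph) goes down.
near_focus = half_flux_radius(gaussian_star(1.5), 15, 15)
defocused = half_flux_radius(gaussian_star(4.0), 15, 15)
```

Watching a number like this shrink frame to frame is exactly the "graph goes down" behavior described above.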

Aperture, f-ratios, myths, etc.

Q: F-ratio myth - Myth, reality, or what? Does aperture always rule?

A lot has been written and discussed on the web about what f-ratio really means for us as astrophotographers. F-ratio is simply the ratio of the focal length to the size of the aperture. So, if you have a 200 mm telescope that has a focal length of 1000 mm, it is an f/5. This is true for your telescope and for your camera lens. Clearly, the f-ratio can be varied by changing either the aperture or the focal length. If we stick a focal reducer or a barlow on the scope, we’re changing the f-ratio of the system (Note: not of the objective itself, so sticking a barlow on your f/4 Newt to make it f/8 won’t make the coma of an f/4 go away!). Likewise, when the iris inside your camera lens cuts down the light (reduces the aperture) you’re changing the f-ratio.
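The arithmetic is simple enough to write down (the 0.63x reducer and 2x barlow here are just illustrative values):

```python
def f_ratio(focal_length_mm, aperture_mm):
    # f-ratio = focal length / aperture, for telescopes and camera lenses alike
    return focal_length_mm / aperture_mm

native = f_ratio(1000, 200)               # the 200 mm, 1000 mm scope: f/5
with_reducer = f_ratio(1000 * 0.63, 200)  # 0.63x focal reducer: ~f/3.2
with_barlow = f_ratio(1000 * 2, 200)      # 2x barlow: f/10
```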

If we keep the aperture constant and change the f-ratio by somehow scaling the focal length (reducing or extending it), we’re not changing the total number of photons hitting our detector from a given DSO. As Stan Moore and others have pointed out on pages like the one dedicated to the “f-ratio Myth”, it is the aperture alone that determines how many photons we gather from a DSO. If you imagine your scope to be a bucket, catching photons streaming across space, it should be obvious that the bigger a bucket you have, the more photons you get. Period. No ifs, ands, or buts as it were.

I feel Stan does a good job on his page explaining how film differs from CCDs and where the noise comes from. But, in seeing how this is often depicted in online discussions, I feel one key caveat he makes is often lost. Note that at the end of his page, he says that, “There is an actual relationship between S/N and f-ratio, but it is not the simple characterization of the ‘f-ratio myth’.” What is missed by many when they conclude that “f-ratio doesn’t matter” is that this is true ONLY when you are well above the read noise.

Running at a larger f-ratio for a given aperture means that you are spreading the light over more pixels. Thus, each pixel is getting less light and so the signal hitting that pixel is less. Some aspects of the noise (e.g., read noise) are constant (they do not scale with the intensity of the signal the way shot noise does). Thus, as the signal gets very faint, it gets closer and closer to the read noise. When we hit the read noise, the signal is lost. Doubling the focal length (a two-stop change in f-ratio) leaves each CCD well with 25% as much light, putting us that much closer to the read noise. If the exposure length is long enough that the edges of our galaxy or nebula are still well above this noise, it matters little if at all. But, if we are pushing this and just barely above the noise (or if our camera has a good bit of noise), this will come into play much more rapidly.
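To put rough numbers on that caveat (the photon counts and the 10 e- read noise below are invented for illustration), compare the per-pixel SNR when the same aperture's light is spread over 4x as many pixels:

```python
import math

def snr(signal_e, read_noise_e):
    # Shot noise grows as sqrt(signal); read noise is a fixed floor per read.
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

read_noise = 10.0    # hypothetical camera: 10 e- RMS read noise
fast = 400.0         # e- landing on one pixel at the shorter focal length
slow = fast / 2**2   # double the focal length: 1/4 the light per pixel

ratio = snr(fast, read_noise) / snr(slow, read_noise)
```

With shot noise alone, the faster system would win by exactly 2x (the square root of 4x the signal); near the read-noise floor it wins by about 2.5x, which is the whole point of the caveat.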

Please note, that none of what I am saying here contradicts Stan’s message. He makes this same point and if you look closely at the images on his site, the lower f-ratio shot does appear to have less noise. As noted, it’s not “10x better”, but it’s not the same either.

Here, I’ve taken some data from Mark Keitel’s site. Mark was kind enough to post FITS files of M1 taken through an FRC 300 at f/7.8 and f/5.9. I ran a DDP on the data and used Photoshop to match black and white points and to crop the two frames. Click on the thumbnail for a bigger view and/or just look at the crop.

Here is a crop around the red and yellow circled areas. In each of these, the left image is the one at f/7.8 and the right at f/5.9 (as you might guess from the difference in scale). Now, look carefully at the circled areas. You can see there is more detail recorded at the lower f-ratio. We can see the noise here in the image and that these bits are closer to the noise floor. Again, the point is that it’s incorrect to say that the f-ratio rules all and that a 1” scope at f/5 is equal to a 100” scope at f/5, but it’s also wrong to say that under real-world conditions, f-ratio is entirely irrelevant.

Q: Does aperture always rule?
The most often quoted phrase in our community is that aperture rules, and it’s true: if all else is equal, bigger scopes will do better. It’s false, though, to say that big scopes are bad on planets or bad in the city and that smaller is better under those circumstances.

However, there are times when we don’t fully realize what’s not equal. This point was made clearly to me recently when I went out to do some testing on a camera and brought along an 8” f/5 Antares Newtonian (with Paracorr) and a 4” f/4 Borg 101 ED APO. I aimed, among other things, at the Horsehead Nebula and thought I had a good handle on what the images would show me. I thought, as most of you probably would, that the 8” Newt (1000 mm focal length) would spank the 4” APO (400 mm focal length). After all, the Newt gathers 4x as many photons as the APO. I went in thinking less than I should have about the caveat to the f-ratio myth, though, and given this, the results were quite surprising.

These are taken right after each other and are both 5 minute frames, with no application of any post-processing other than simple stretching (data from QSI 540). You can click on the image for a 100% version of the shot here.

I don’t know about you, but I’m seeing more detail in the 4” scope than in the 8” scope. The bit of emission nebula by the horse’s mane is one area where you can pick this out. Whether you think the 4” is better than the 8” here might be debated (I don’t think so, but some might) but what we can certainly say is that the 4” wasn’t put to shame in this comparison. Despite giving up “4x the number of photons”, it is doing very well.


Is it some “APO magic”? Hardly. The answer, IMHO, comes down to two factors:

1) Much of the signal we’re pulling out here is close to the noise floor and, by having a lower f-ratio (and shorter focal length), the 4” APO is getting more photons onto the CCD wells as a result, getting us into the “caveat” range of the f-ratio myth.

2) There is more light loss in the Newt than we might expect.

Let’s put some quick and dirty numbers onto these images and pretend we’re imaging a flat field (e.g., the emission nebula around the Horsehead). What do we have here? Well, if the aperture were the same and we’re running one at 1000 mm and one at 400 mm of focal length, we can figure the difference in photon number hitting each well by (1000 / 400) ^ 2. This is a factor of 6.25. So, if we took a 1000 mm scope and reduced the focal length to 400 mm, each CCD well would be hit with 6.25x as much light. Again, this won’t matter a lot for brighter areas, but when you’re right near the noise, this will certainly come into play.

The aperture isn’t the same, of course, and the 4” scope is collecting 25% as many photons as the 8” scope, owing to the difference in aperture. So, if we’re counting the number of photons from a diffuse source hitting the CCD, that 6.25x factor goes down to 1.56x. But, that is still in favor of the 4” APO. Note, if this were a 3” APO, the light gathered would be down to 14% of the 8” scope’s, which would put the photon count hitting each well at a factor of 0.88, now tipped in the Newt’s favor.

All this is still pretty close and I don’t think enough to account for the images. Here’s something we’ve not considered yet, however. The Newt uses two Al-coated mirrors and several lenses in the Paracorr. The APO here is a doublet with several more lenses in its reducer / corrector. If we suppose that the two correctors lose similar amounts of light, we’re left with two mirrors vs. a doublet. That doublet is passing on the order of 97% of the light, but each surface of the Newt is only passing about 86% of the light. With two mirrors, we’re down to about 76% of the light. We’re also not at an effective 8” of aperture in terms of photon gathering owing to the central obstruction (about 2.5” here). The central obstruction alone puts us down to a 7.6” scope and if we factor in the mirrors’ light loss it’s down to a 6.6” scope. So, rather than an 8” vs. 4” scope with a 4x total photon boost for the 8”, it’s more like a 6.6” scope which is only a 2.75x total photon boost.

Now, if the total photon boost is only 2.75x instead of 4x (i.e., the light throughput of the 4” scope is 36% of the bigger scope’s instead of 25%), we can update the numbers from above. Ignoring the aperture (keeping it constant), the shorter focal length put 6.25x as many photons onto each CCD well, getting us above the noise. With perfect optics, the APO’s per-pixel advantage was 1.56x (6.25 * 0.25), but with these more real-world numbers it’s 2.27x. That means that each pixel recording the nebula is getting 2.27x as many photons when the 4” scope is attached as when the 8” scope is attached.
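The back-of-envelope chain above can be checked in a few lines, using the same rough inputs (2.5" obstruction, ~76% throughput after two mirrors):

```python
import math

# Effective aperture of the 8" Newt after obstruction and mirror losses.
newt_area = 8.0**2 - 2.5**2              # central obstruction subtracted (pi/4 cancels in ratios)
newt_eff = math.sqrt(newt_area * 0.76)   # two Al mirrors pass ~76% of the light: ~6.6"

photon_boost = newt_eff**2 / 4.0**2      # Newt vs. 4" APO, total photons: ~2.75x
focal_spread = (1000.0 / 400.0)**2       # Newt spreads light over 6.25x more pixels
per_pixel_advantage = focal_spread / photon_boost   # ~2.27x, in the APO's favor
```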

Math, math, math... does this really happen? My camera’s bias signal is about 209 in this area. I measured the mean intensity in a 10x10 region using Nebulosity’s Pixel Info tool for three areas right around and in the Horsehead. On the Borg, they measured 425, 302, and 400. On the Newt, they measured 287, 254, and 278. Now, if we pull out the 209 for the bias signal, we have 216, 93, and 191 vs. 78, 45, and 69. If we calculate the ratios, we have 2.77x, 2.07x, and 2.77x. Average these and we’re at 2.5x.
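Those measurements are easy to re-run:

```python
bias = 209
borg_4in = [425, 302, 400]   # mean ADU in three 10x10 regions near the Horsehead
newt_8in = [287, 254, 278]   # the same three regions, same camera, on the Newt

# Bias-subtracted per-region ratios, 4" APO over 8" Newt
ratios = [(b - bias) / (n - bias) for b, n in zip(borg_4in, newt_8in)]
average = sum(ratios) / len(ratios)
```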

The back-of-envelope math said I should have 2.27x as much light hitting each CCD well with the 4” scope all things considered, and the practical measurement came up with 2.5x. In my book, that’s close enough for jazz and a clear verification of the basic idea. Aperture did not win here. When all else is equal, it wins, but all else is not always equal. To my eye, the image with the 4” looks better and we find that despite seeming to be a bit light in the photon department when all one considers is aperture, it’s actually pulling in more photons onto each CCD well. While it has fewer photons on the whole target (36% of the 8” scope’s amount), per CCD well it’s doing better. If aperture were all that mattered and focal length didn’t matter at all, the 8” would have soundly trounced the 4”.

Must this be the case? Will a 4” APO always beat out an 8” Newt? Hardly. If we run them both at the same focal length, the APO won’t have a chance. Only 36% of the photons are now spread out to the same image scale and so each CCD well has only 36% as many photons hitting it. The Newt will clearly win here. Now, keep in mind, that if aperture were all that mattered, the 4” would have handily lost the competition above. It didn’t. Put them on par in image scale, though, and it will.

This last bit is really the key for people to understand. What aperture buys you is more photons. You can trade these photons off in various ways. If you keep the image scale the same, your SNR will go up relative to a scope with a smaller aperture. If you like, you can instead buy magnification (aka resolution) with that aperture and keep the same SNR. By varying the focal length (and therefore image scale and f-ratio) we control this trade-off.

And yes, it is true that once we’re well above the read noise, the effects I’ve mentioned here become weaker. But, a lot of the things we amateurs try to do aren’t always well above the read noise. I know I’m often plumbing the depths to see just what I can pull out. Just as we shouldn’t look only at the f-ratio when making our decisions and think that we can shoot for a quarter as long given a 2-stop difference (e.g., f/8 to f/4), we shouldn’t entirely ignore f-ratio either. In addition to easing guiding constraints and getting a wider FOV, running that f/10 SCT at f/6.3 or so will let you get above that read noise faster.

Fine Focus and Bahtinov Masks

Q: How can I get focus easily and why don’t you write an autofocus routine?

Focus is something amateur astrophotographers worry about a lot. Some get so concerned about the challenge reaching focus that they push hard for having an autofocus setup, thinking this will make their lives a lot easier and that they can be assured of sharp images as a result.

Don’t get me wrong -- I like the concept of autofocus. But, there are two things to know before going that route. First, it’s going to cost you. To do autofocus well and to have it work smoothly, you really want not only a motor on the focuser but you want the ability to know just where the focuser is. You can do this with a micrometer style readout (like the Televue setup) or with encoders on the motor. If you go with encoders on the motor, you need to profile the system well to know just how much backlash there is as the encoders will turn without the focuser moving. If you don’t have encoders, life is more challenging and if your focuser has image shift, you’ll be looking at one direction of movement only. So, better have something with encoders and a solid enough setup that you don’t have shift.
Second, it’ll take time. It’ll either take time to run the full “V-curves” each time you’re out or it’ll take time to profile the focuser / scope’s V-curve so that you can hit a point on each side of focus and know, given the star’s size on those two points where focus is. Toss in parameters to derive and you’re not looking at something quick and easy that can work on any of the various bits of hardware people have out there.
Again, don’t get me wrong. For some setups, it’s essential. If you’re running a remote observatory, for example, auto-focus is going to be a huge win. But, for your typical user, if focus can be done quickly and easily without this, it may not be worth the hassle.

I’m here to suggest it can be done quickly and easily without any of this.

In one of my earlier tutorials, I posted a video of both rough focusing and fine focusing using an earlier version of Nebulosity’s fine-focus tool. Here, I’ll show a video of me using both the current version of the Fine Focus tool and augmenting this with the use of a Bahtinov mask.

What the heck is a Bahtinov mask, you ask? It’s an elaboration on the idea of a Hartmann mask -- something you stick over the front end of your scope to induce a diffraction pattern that makes focusing easier. David Polivka has a great webpage on the mask and how to make one. I made one myself out of a piece of thin cardboard by printing out the pattern from David’s site, taping it onto the cardboard, and using a razor blade and straight edge to cut out the pattern. (I’ll post a picture up here soon.) With some creative folding of the cardboard, it snugly holds itself onto the front of the tube just fine.

The concept is that you adjust the focus until the middle spike is nicely centered. Here, I have a video of me going through the focus process. I’m working on an 8” f/5 Newt (I’ve done this at f/4 as well without issues) that has a heavy camera setup on a stock one-speed GSO Crayford. No nice Feathertouch here. No, I’m using a simple focuser on a mount pushed to its capacity. What we see is the image of the star in focus with the mask in the upper left, the profile in the upper right (orient the diffraction right and that profile could be very useful!), and the running log and current values for the max intensity and the half flux radius. Play the video and you’ll see (and hear) me go from this well-focused spot to taking it out of focus with the mask and bringing it back. I’ll then pull off the mask and show the star is in focus, nudge the focus out a bit and bring it back showing we get to the same focus spot. Note, the HFR will be different with the mask on and off, of course, but the point is that the focuser position is the same in each. When one is in focus, the other is as well. In the minute I’m actually doing anything here, you’ll see me hit focus with each method. So, that’s focusing the system twice in a minute.

Personally, I think this is pretty easy and straightforward. Watching this video should give you a good feel for using the Bahtinov mask with Nebulosity. Watching the other video should give you a good feel for using Nebulosity without this (and on a very unstable night). Either way, you can be confident that you’re hitting accurate focus on modest hardware with no investment and in a short time.

PHD, subpixel guiding, and star SNR

Q: I've seen some quote the accuracy of the guiding available in their software with numbers like 1/20th of a pixel accuracy. a) How is this possible, and b) How accurate is PHD?

There are a whole bunch of things that will go into how accurate your guiding is. Your mount, the wind, flex, etc. all go into this. Here, we'll go over finding the star's position to an accuracy well below a pixel and how the star's signal to noise ratio (SNR) affects this accuracy. Since the SNR of the star (which is determined by the amount of light hitting the CCD and by the camera's noise) affects the accuracy, I won't quote a single hard and fast number as to how accurate it can be. I could quote an extreme (1/200th of a pixel) but I doubt it would mean much and would sound more like marketing hype. Since PHD is free, there tends to be little incentive for marketing hype.

In an article I wrote for Astronomy Technology Today on PHD Guiding, I went over the basics of how you find the middle of a star to accuracy well below a pixel. Here's an image that may help. Below we have stars hitting three different 2x2 areas of a CCD. In the first case, the star is hitting the exact intersection of these four pixels, so the star's light is evenly distributed among all four. In the next, we have the star still centered vertically but shifted to the right just a bit. More of the star's energy is now on the right two pixels than the left, so these two are brighter than the left two. For both the left and the right, the top and bottom pixels have gotten the same amount of energy, so there is no top/bottom difference. In the last panel, the star is now centered a bit down and to the right of the intersection of the four pixels. Most of its energy hits the lower-right pixel with equal amounts hitting the lower-left and upper-right and the least hitting the upper-left. Thus, so long as the star lights up more than one pixel, we can estimate small shifts in its position.
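A minimal version of this idea is an intensity-weighted center of mass. Real guiding code does more than this sketch, but the principle is the same:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass of a (background-subtracted) image.
    As long as the star's light spans more than one pixel, this moves
    smoothly by fractions of a pixel as the star does."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Star dead on the corner shared by a 2x2 block: all four pixels equal.
star_centered = np.array([[1.0, 1.0],
                          [1.0, 1.0]])
# Star nudged right: the right column catches more of its energy.
star_right = np.array([[0.8, 1.2],
                       [0.8, 1.2]])
```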

Here's another way to look at this. In this surface plot, I'm showing the energy of a simulated star that is perfectly centered.

Here is that same star shifted just a touch -- 0.2 pixels one way and 0.7 pixels in the other:

Note now how the star's profile is no longer symmetric. We can use this shift to estimate just how much the star has moved. When using perfect images such as these, it's easy to pick up small shifts. The real question is how well it does on real-world stars. I can't exactly go and take pictures of stars that have moved 0.001 pixels or anything of the sort because I'd have no way of knowing how much the star really moved. (I don't even know if the Hubble could hit that...). What I can do is simulate real stars with real noise and see how well it works.

Here are profiles of the same star from two different cameras (there are actually two stars and one camera is rotated 180 degrees relative to the other so the second star you see clearly to the right on the image on the right is to the left in the noisy one on the left). Clearly, the one on the left has more noise and a lower SNR than the one on the right. These are real profiles from real stars with real cameras (mag 11.7 star, 2 s exposure, Celestron CPC 1100 XLT at f/6.3-ish).

Here, for reference, is an image of the star on the left. This is certainly not a very clean star to guide on, but PHD will lock onto it and guide off of it.

To simulate this, I can get something pretty close by modeling a star with variable amounts of Gaussian noise:

I apologize for the fact that the size of the X and Y axes differ between the simulations and the actual stars. The simulations show a more "zoomed out" view than the actual star profiles, but the SNRs of the two setups are comparable. The big question is how accurately can PHD locate the center of the star in each kind of image?

To get at this, I moved the simulated star by 0.05 pixel steps in both directions so I would know the true distance the star moved (using good old Pythagoras) and I would have a bunch of samples to get a good estimate of PHD's error with no noise, low noise (right image), and high noise (left image). Shown here is the average error in location for the three noise conditions (error bars show the standard error of the mean).

Without any noise, PHD is accurate down to (on average) 0.004 pixels or 1/250th of a pixel. With a low amount of noise, the accuracy goes to 0.018 pixels or 1/56th of a pixel and with high amounts of noise it goes to 0.18 pixels or 1/5.5th of a pixel. Better stars and/or better guide cameras will get you more accuracy, but even with this very noisy star, we're still at 1/5th of a pixel accuracy.
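You can reproduce the flavor of this experiment with a simulated Gaussian star and the simple center-of-mass locator from above. PHD's actual star-finding is more careful than this sketch, and the noise levels here are arbitrary, but the trend is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE = 15

def make_star(cx, cy, sigma=1.5, peak=100.0):
    ys, xs = np.indices((SIZE, SIZE))
    return peak * np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2))

def centroid(img):
    ys, xs = np.indices(img.shape)
    t = img.sum()
    return (xs * img).sum() / t, (ys * img).sum() / t

def mean_error(noise_sigma, trials=200):
    """Average localization error for stars placed at known sub-pixel positions."""
    errs = []
    for _ in range(trials):
        tx = 7 + rng.uniform(-0.5, 0.5)      # true sub-pixel position
        ty = 7 + rng.uniform(-0.5, 0.5)
        img = make_star(tx, ty) + rng.normal(0.0, noise_sigma, (SIZE, SIZE))
        img = np.clip(img, 0.0, None)        # crude: negative pixels to zero
        ex, ey = centroid(img)
        errs.append(np.hypot(ex - tx, ey - ty))
    return float(np.mean(errs))

no_noise = mean_error(0.0)
low_noise = mean_error(1.0)
high_noise = mean_error(8.0)
```

Even this naive locator lands well under a pixel with no noise, and the error climbs as the star's SNR drops -- the same pattern as the plot.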

What about noise reduction? Does smoothing the image help? Well, it helps in terms of increasing the odds that PHD will lock onto the star, but it doesn't help the accuracy of localization at all. At best, it does nothing and at worst, it hurts your accuracy a bit. Again, it will help PHD find the star in the first place, so if this plot included "lost stars" as errors (and they are errors), it would have a nice effect. But once found, smoothing does nothing to help the localization.

So there you have it. Can we get to sub-pixel levels? Sure. Under perfect conditions, PHD Guiding gets you down to an insane level of 0.004 pixels worth of error showing that the basic math in sub-pixel error calculation works. Done well (and yes, it can be done poorly - I know from first-hand experience a few ways to do it poorly and/or to introduce systematic biases in the star-finding routine), it can get you more accurate star localization than you'd ever really need. The SNR of your guide star will affect your accuracy, however. Under more real-world conditions, accuracy drops, but still can be very good. I have seen many stars in my guide frame with SNRs like the Low-noise SNR star used here that led to a 1/50th pixel accuracy. One notable implication of this is that with higher SNR stars, you can afford to use wider FOV guide setups since the higher SNR leads to increased precision in localization.

In practice, the High Noise star here is about as noisy a star as you can successfully guide on with PHD. Much worse than this and you're going to be suffering lost locks occasionally. With even this level of noise, we're still below a quarter of a pixel worth of error. Odds are, your mount, the wind, flex, etc. will be causing more trouble than even this error unless you're using a very wide FOV guide setup.

Stacking accuracy

Q: How can I get the sharpest images in my stack using Nebulosity? How does Nebulosity compare to other stacking tools?

Nebulosity has several means of aligning the images prior to actually stacking them. We can use simple translation, translation + rotation, translation + rotation + scaling, and Drizzle. I've covered Drizzle in an article for Astrophoto Insight, so I'll focus on the more traditional methods here.

The big difference between "translation" and "translation + rotation (+ scaling)" is that when doing a translation-only alignment, Nebulosity does not resample the image. It does "whole pixel" registration. This sounds worse than "sub-pixel" registration. Isn't it better to shift by small fractions of a pixel? Well, it would be, except for the fact that when you do so, you need to know what the image looks like shifted a fraction of a pixel. That means you must interpolate the image, and interpolation does cause a loss of sharpness. So, you're faced with a trade-off. Keep the image exactly as-is and shift it by whole pixels, or resample it and shift it by fractional pixels.
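The sharpness cost of fractional shifts is easy to see in one dimension. Shift a hard edge by half a pixel with linear interpolation and then shift it back (a toy example, not Nebulosity's actual resampler):

```python
import numpy as np

def shift_half_pixel(signal):
    # Linear interpolation for a 0.5-pixel shift: every output sample is
    # the average of two neighbors. This averaging is where sharpness goes.
    return (signal[:-1] + signal[1:]) / 2.0

edge = np.array([0.0, 0.0, 0.0, 10.0, 10.0, 10.0])
shifted = shift_half_pixel(edge)        # [0, 0, 5, 10, 10]
round_trip = shift_half_pixel(shifted)  # [0, 2.5, 7.5, 10]

# The original one-sample edge now ramps over two samples: steepest step halved.
```

A whole-pixel shift, by contrast, just relabels samples and loses nothing.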

Now, toss into this the fact that our long-exposure shots are already blurred by the atmosphere (and to a varying degree from frame to frame) and you've got a mess if you try to determine which is better from just thinking about it. So, we have what we call an "empirical problem." Let's get some data and test it.

I took some data I had from M57 shot with an Atik 16IC at 1800 mm of focal length and some wider-field data of M101 shot on a QHY 2Pro at 800 mm. I ran the M57 data through a number of alignments and Michael Garvin ran the M101 data through several as well.

Here are the images from M57 (click here for full-res PNG file). All were processed identically, save for the alignment / stacking technique.

Here are the images from M101 (click here for full-res PNG version). Again, all were processed identically. Here, the image has been enlarged by 2x and a high-pass filter overlay used to sharpen each (all images were on the same layer in Photoshop so the same exact sharpening was applied).

So what do we take from all this? Well, first, there's not a whole lot of difference among the methods. All seem to do about the same thing. To my eye, adding the "starfield fine tune" flag in Nebulosity helps a touch and using the resampling (adding a rotation component) hurts a touch, but these aren't huge effects. Someday, I'll beef up the resampling algorithm used in the rotation + (scale) version. Comparing Nebulosity's results with those of other programs again seems pretty much a tie. I can't pick out anything in their stacks that I don't see as well in Nebulosity's. Overall, these images seem to be limited more by the actual sharpness of the original data than by the stacking method.

Gain, Offset, and Bit Depth

Q: What should I set my "gain" and "offset" to?

Before answering this, a bit of background is useful. Specifically, just what the heck do gain and offset do? Before we cover this, a brief primer on how those photons you capture become intensities you see on the screen is needed. If you wish, skip down to "OK, so what should I set my gain and offset to?" below.

How do signals off my CCD become intensity values?
When each CCD pixel is read out, there is a certain amount of voltage corresponding to how many photons were collected and converted into electrons. This is an analog signal that needs to be converted into a digital signal so that we have a number corresponding to the intensity. This conversion happens in the analog to digital converter (ADC). In so doing, we have a specification often seen on cameras, the overall system gain, typically specified as some number of electrons per ADU (analog-digital unit, aka the raw intensities you see in your image in a program like Nebulosity). A camera may have an overall system gain of something like 0.7 e-/ADU or 1.3 e-/ADU, etc. This means it takes 0.7 or 1.3 electrons, respectively, to move the raw intensity up by one unit.

There are four key limitations to keep in mind when thinking about the ADC process:

1) There are no fractional ADU outputs. So, one electron in both the systems above would probably end up recording 1 ADU. You can't have half an ADU (and you can't have half an electron).

2) Your ADC has a minimum value of 0 and a total number of intensity steps of 2 ^ (# bits in your ADC). For a 16-bit ADC, this is 0-65,535. For an 8-bit ADC, this is 0-255, etc.

3) Zero is evil and 65,535 is bad but not evil. When your signal hits either, you lose information. If the sky is at zero and your faint galaxy is at zero, no amount of stretching will bring it back. 0*1 = 0*100 = 0.

4) Your CCD has a limited number of electrons it can hold called the well depth. This may be 20,000 e-, 40,000 e-, etc. Note, that for all the cameras I know of that let you adjust the gain and offset (Orion Starshoot, Meade DSIs, QHY cameras, etc.), the well depth is < 65,535. This will be key for my argument below.

What do gain and offset do?
With all this in your head, we can now describe what gain and offset controls on cameras do. After coming off the CCD and before hitting the actual ADC there is typically a small pre-amplifier (this may be inside the ADC chip itself). What this preamp does is allow you to boost the signal by some variable amount and to shift the signal up by some variable amount. The boosting is called gain and the shift is called offset.

So, let's say that you have pixels that would correspond to 0.1, 0.2, 1.1, and 1.0 ADU were the ADC able to deal with fractional numbers. Now, given that it's not, this would turn into 0, 0, 1, and 1 ADU. Two bad things have happened. First, the 0.1 and 0.2 have become the same number and the 1.1 and 1.0 have become the same number. We've distorted the truth and failed to accurately represent subtle changes in intensity. This failure is called quantization error. Second, the first two have become 0 and, as noted above, 0 is an evil black hole of information.

Well, what if we scaled these up by 10x before converting them into numbers (i.e., we introduce some gain)? We'd get 1, 2, 11, and 10. Hey, now we're getting somewhere! With gain alone, we've actually fixed both problems. In reality, the situation is often different and the ADC's threshold for moving from 0 to 1 might be high enough so that it takes a good number of electrons to move from 0 to 1. This is where injecting an offset (a DC voltage) into the signal comes in to make sure that all signals you could possibly have coming off the CCD turn into a number other than zero.
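The example works out like this in code (a toy ADC model of my own: here "gain" multiplies the signal, "offset" is in ADU, and truncation to a whole number stands in for the converter):

```python
def adc(signal, gain=1.0, offset=0.0, bits=16):
    raw = int(signal * gain + offset)     # no fractional ADUs
    return max(0, min(raw, 2**bits - 1))  # clamp to the converter's range

signals = (0.1, 0.2, 1.1, 1.0)
no_gain = [adc(s) for s in signals]             # quantization error, plus evil zeros
gain_10 = [adc(s, gain=10.0) for s in signals]  # subtle differences preserved
lifted = adc(0.0, gain=10.0, offset=25.0)       # offset keeps even "nothing" above zero
```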

Gain's downside: Bit depth and dynamic range
From the above example, it would seem like we should all run with lots of gain. The more the better! Heck, it makes the picture brighter too! I often get questions about this with the assumption that gain is making the camera more sensitive. It's not. Gain does not make your camera more sensitive. It boosts the noise as well as the signal and does not help the signal to noise ratio (SNR) in and of itself. Gain trades off dynamic range and quantization error.

We saw above how it reduces quantization error. By boosting the signal we can have fractional differences become whole-number differences. What's this about dynamic range?

Let's come up with another example. Let's have one camera with a gain of 1. So, 1 e-/ADU. Let's have another run at 0.5 e-/ADU. Now, let's have a pixel with 1k e-, another with 10k e-, another at 30k e-, and another at 50k e-. In our 1 e-/ADU cam, we of course have intensities of 1000, 10000, 30000, and 50000. In our 0.5 e-/ADU cam, we have intensities of 2000, 20000, 60000, and 65535. What? Why not 100000? Well, our 16-bit camera has a fixed limit of 65535. Anything above that gets clipped off. So while the 1 e-/ADU camera can faithfully preserve this whole range, the 0.5 e-/ADU camera can't. Its dynamic range is limited now.
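The dynamic-range trade-off in that example can be written out directly (same electron counts and gains as above):

```python
def to_adu(electrons, e_per_adu, max_adu=65535):
    """Convert an electron count to ADU, clipping at the 16-bit ADC ceiling."""
    return min(round(electrons / e_per_adu), max_adu)

pixels = [1_000, 10_000, 30_000, 50_000]  # electrons

cam_1  = [to_adu(e, 1.0) for e in pixels]  # [1000, 10000, 30000, 50000]: whole range preserved
cam_05 = [to_adu(e, 0.5) for e in pixels]  # [2000, 20000, 60000, 65535]: brightest pixel clipped
```

The 0.5 e-/ADU camera runs out of numbers at 32,767 electrons or so; everything above that piles up at 65,535.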

How do manufacturers determine gain and offset for cameras that don't allow the user to adjust them?
Let's pretend we're making a real-world camera now, put in some real numbers, and see how these play out. Let's look at a Kodak KAI-2020 sensor, for example. The chip has a well-depth specified at 45k e-. So, if we want to fit 45,000 intensity values into a range of 0-65,535, one easy way to do it is to set the gain at 45,000 / 65,535, or about 0.69 e-/ADU. Guess where the SBIG ST-2000 (which uses this chip) has the gain fixed... 0.6 e-/ADU. How about the QSI 520ci? 0.8 e-/ADU. As 45k e- is a target value with actual chips varying a bit, the two makers have chosen to set things up slightly differently to deal with this variation (SBIG's will clip the top end off as it's going non-linear a bit more readily), but both are in the same range and both fix the value.

Why? There's no real point in letting users adjust this. Let's say we let users control the gain and they set it to 5 e-/ADU. Well, with 45k e- for a maximum electron count at 5 e-/ADU, we end up with a max of 9,000 ADU and we induce strong quantization error. 10, 11, 12, 13, and 14 e- would all become the same value of 2 ADU in the image, losing the detail you so desperately want. What if the user set it the other way, to 0.1 e-/ADU? Well, you'd turn those electron counts into 100, 110, 120, 130, and 140 ADU and wonder just what's the point of skipping 10 ADU per electron. You'd also make 6,553 e- the effective full-well capacity of the chip. So, 6,553:1 would be the maximum dynamic range rather than 45,000:1. Oops. That nice detail in the core of the galaxy will have been blown out and saturated. You could have kept it preserved and not lost a darn thing (since each electron counts for > 1 ADU) if you'd left the gain at ~0.7 e-/ADU.

What about offset? Well, it's easy enough to figure out the minimum value a chip is going to produce and add enough offset in the ADC process to keep it such that this is never going to hit 0.

OK, so what should I set my gain and offset to?
The best value for your camera may not be the best value for other cameras. In particular, different makers set things up differently. For example, on a Meade DSI III that I recently tested, running the gain full-out at 100% let it just hit full well at 65,535 ADU. Run it below 100% and it hit full-well at 40,000, 30,000, or 10,000 ADU. There's no point in running this camera at anything less than 100% gain. On a CCD Labs Q8-HR I have, even at gains of 0 and 1 (on its 0-63 scale), the camera would hit 65,535 on bright objects (like the ceiling above my desk). There's no point in running this camera at gains higher than 0 or 1.

Why is there no point? The camera only holds 25k e-. If a gain of 0 or 1 gets me to 0.38 e-/ADU (so that those 25k e- become 65535), running at 0.1 e-/ADU will only serve to limit my dynamic range. Each single electron already comes out to more than 2 ADU.

So, how do I set it? (man, you ramble a lot when you get going!)
1) Take a bias frame and look for the minimum value in it. Is it at least, say, 100 and less than a thousand or a few thousand? If so, your offset is fine. If it's too low, boost the offset. If it's too high, drop it. Repeat until you have a bias frame with a minimum in roughly the 100 - 1000 range. Don't worry about precision here; it won't matter at all in the end. You now know your offset. Set it and forget it. Never change it.

2) Aim the camera at something bright or just put it on your desk with no lens or lenscap on and take a picture. Look at the max value in the image. Is it well below 65k? If so, boost the gain. Is it at 65k? If so, drop the gain. Now, if you're on a real target (daylight ones are great for this) you can look at the histogram and see the bunching up at the top end as the camera hits full-well. Having that bunch-up roughly at 65,535, plus or minus a bit, is where you want to be. If you pull up just shy, you'll get the most out of your chip, but you'll also have non-linearity up there. You've got more of a chance of odd color casts on saturated areas, for example, as a result. If you let that just clip off, you've lost a touch, but what you've lost is very non-linear data anyway (all this assumes, BTW, an ABG chip, which all of the cameras in question are). Record that gain and set it and forget it. Never change it.
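The two-step check boils down to a pair of simple rules. A sketch of the decision logic, where `bias_min` and `saturated_max` stand for the minimum of your bias frame and the maximum of your bright test frame (names are mine, not Nebulosity's):

```python
def check_offset(bias_min):
    """Step 1: the minimum of a bias frame should land roughly in 100-1000 ADU."""
    if bias_min < 100:
        return "boost offset"
    if bias_min > 1000:
        return "drop offset"
    return "offset OK - set it and forget it"

def check_gain(saturated_max, ceiling=65535):
    """Step 2: a fully saturated frame should just reach the ADC's ceiling."""
    if saturated_max < ceiling:
        return "boost gain"
    return "gain OK - set it and forget it"
```

Loop each step until you get the "OK", then leave both settings alone for good.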

By doing this simple, daytime, two-step process you've set things up perfectly. You'll be sure to never hit the evil of zero and you'll be making your chip's dynamic range fit best into the 16-bits of your ADC. Again, all the cameras in question have full-well capacities below 65,535 so you are sure to have enough ADUs to fit every electron you record into its own intensity value.

The above assumes you have more ADUs available than electrons. This is true as noted for the cameras in question here but isn't universally true. For example, if you have an 8-bit ADC, variable gain is quite important as you may want yourself to trade-off quantization error and dynamic range. You may be fine blowing out the core of a galaxy to get those faint arms and want to run at 1 or 2 e-/ADU instead of 10 or 50 or 200 e-/ADU. This happens in 12-bit DSLRs as well with their 4096 shades of intensity but not so much with 14-bit DSLRs and their 16,384 shades.

Please note that none of this has considered noise at all. The situation is even "worse" when we factor in the actual noise we have. If the noise in the frame is 8 ADU that means the bottom 3 bits are basically worthless. That 45,000:1 dynamic range is really 45,000:8 or 5,625:1 and you're not even able to really pull out every electron. But, that's a topic for another day. (Google "Shannon Information" if interested).

Combining images: means, medians, and standard deviations

Q: I hear medians are a good way to stack images as they can remove things like hot pixels, cosmic rays, or streaks from satellites. Does Nebulosity support this?

The short answer is no ... but... When combining images we want something that helps reduce the noise. We'd also like something that is tolerant of "outliers". The mean (average) is great at the first part but lousy at the second part. Medians are not so hot at the first part but great on the second part. What we'd like is something that is good at both parts. Nebulosity supports standard-deviation based filtering of your images to let you keep most of the frames and pitch just the bad ones.

OK, so what is it and why is it better? What are these 1.5, 1.75, etc. thresholds I'm being asked about?

If you were to take a perfect image of a target, each pixel would have its "ideal" or "true" value - how much intensity there is from that part of the target. The trouble is, each time we sample the target (aka each image) we get that true value for a pixel but we also get some noise on top of it. We want, of course, the true value. How do we get rid of that noise?

In statistics, we have several typical ways of describing our data. Think right now just about a single pixel (after alignment). So, we have the same spot in the arm of M51 or something. The most common way is the mean (aka average) of all of our samples (aka images, light frames, etc.). It should tell us the central tendency and therefore estimate the truth. The more samples we have, the better the estimate is since we don't have to rely on just one sample (which has truth plus or minus some noise value) or a few samples. With more samples, the noise tends to cancel and we are left with a closer estimate of the truth (the noise, BTW, tends to follow a 1/sqrt(# samples) rule). We can quantify how much noise there is in our samples with a second way of describing our data. The variance (and its square root, the standard deviation) are the typical ways we do this, telling us how much "spread" there is in our samples.

If we assume the data are "normal", about 68% of all samples will lie within one standard deviation (SD) of the mean (that is, 68% are less than one standard deviation above or one standard deviation below the average). About 95% lie within 2 SD of the mean. Below, I show the histogram of 5000 normally-distributed random numbers (pretend you had 5000 light frames!). Samples in green lie within 1 SD of the mean. Yellow (and green) lie within 1.5 SD. Orange (and yellow and green) are within 2 SD, and red are outside of 2 SD. Now, these are all real samples (nothing like an errant satellite), but we could safely pitch those samples in red or orange and still have a good estimate of the mean. We'd not lose too many samples and we'd take out those that are more likely to be outliers. If a sample comes in that is > 2 SD, odds are pretty good it's an errant value (like a hot pixel or satellite). Even if it's not, since we don't have 5000 samples - we have far fewer - filtering these out will help keep our estimate of the mean centered where it should be and not skewed by that outlier. Thinking about this diagram will help us a lot in the next step - understanding what happens during stacking. Just remember that with the standard deviation, we know what kind of values we might expect to find and what type of values are really abnormal (e.g., something 5 SD from the mean is very abnormal, as there is only a 0.000057% chance it is a real sample and not the result of something else going on).
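You can verify those percentages by simulating the 5000 "light frames" yourself. A pure-Python sketch using the standard library (random.gauss draws normally-distributed samples):

```python
import random
import statistics

random.seed(42)
samples = [random.gauss(0, 1) for _ in range(5000)]  # 5000 simulated values of one pixel

mean = statistics.mean(samples)
sd = statistics.stdev(samples)

# Fraction of samples within 1 SD and within 2 SD of the mean
within_1sd = sum(abs(s - mean) <= 1 * sd for s in samples) / len(samples)
within_2sd = sum(abs(s - mean) <= 2 * sd for s in samples) / len(samples)
```

Run it and within_1sd lands near 0.68 and within_2sd near 0.95, just as the normal distribution predicts.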

OK, given that background, here is what happens during the stack. For each (aligned) pixel, we calculate the mean and standard deviation across all of the images in the stack. If your SD threshold is at 1.5, any samples of that pixel that have an intensity beyond 1.5 SD from the mean are removed and a new average, excluding these samples, is calculated. This, BTW, is why hot pixels are often eliminated using SD stacking - those hot pixel values are very abnormal and lie far away from the mean.

With the filter set at 1.75, it takes a more extreme or "outlying" intensity value to be counted as "bad" than at 1.5. At 2.0, it takes even more abnormal a value to be excluded. Thus, more samples go into the final image using a higher threshold (and more things like semi-hot pixels as well). Typically, filtering values at 1.5 or 1.75 will yield the best results.

Standard-deviation based stacking therefore lets in more good samples than a median and takes out more (>0) bad samples than the mean (average). That's what makes it such a nice technique for filtering out bad samples. Note, you're not filtering out whole frames. This math is done on each pixel. So, frame #1 may have a bad value at pixel 109,231 but be great everywhere else. For all other pixels, this frame's data will be used but for pixel 109,231 it won't.
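For a single (aligned) pixel, the whole procedure fits in a few lines. This is a sketch of standard-deviation filtering in general, not Nebulosity's actual code:

```python
import statistics

def sd_filtered_mean(values, threshold=1.5):
    """Mean of the samples after discarding any beyond threshold*SD of the raw mean."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return mean  # all samples identical; nothing to filter
    kept = [v for v in values if abs(v - mean) <= threshold * sd]
    return statistics.mean(kept)

# Ten frames' worth of one pixel: nine good samples plus one hot-pixel hit.
pixel_stack = [100, 102, 98, 101, 99, 100, 103, 97, 101, 4000]
# The plain mean is dragged up to 490.1; the filtered mean stays near 100.
```

The hot sample sits far more than 1.5 SD from the mean, so it alone gets pitched while all nine good samples still contribute; a median would have used only one of them.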

The technique isn't perfect. With a lot of outliers, the estimate of the standard deviation goes way up. So, we have a bigger "spread" that is considered normal and it takes something more aberrant to get filtered out. There are techniques to get around this, of course as well, but that's a topic for another day.

Will DSLR Shutter work with my camera?

Q: I have a WhizBang Foo-Matic Model XP-Ultra-Mega camera. Will it work with DSLR Shutter?

In general, if your camera has a “bulb” port that allows it to be triggered by a simple external device, it should work just fine. DSLR Shutter is really moronically simple. It sends "go" and "don't go" signals to simple parallel port data lines, the RTS/DTR lines of a serial port, or to the Shoestring DSUSB. The former two are very simple, binary signals. The DSUSB is “semi-intelligent” in that I need a software library from Shoestring and need to code up support for it. Doug's libraries are trivial to use, and it's still sending very simple commands. It is still just sending "go" and "don't go" signals to this bulb port on the camera.

There is another type of command that can be sent to a camera directly over its USB port. These "intelligent" commands differ from camera maker to camera maker, and within a given maker can differ across models. To send these commands, one needs:

1) SDK (software development kit) from the manufacturer
2) A camera to work on for development
3) A lot more code

The "bulb" port is generic. Heck, you could use DSLR Shutter to turn the lights on and off in your house with a touch of hardware. The simplicity here comes from using this generic style of interface. Were it to bypass this interface and use the USB link to the camera, DSLR Shutter would grow to several times its size just to support the Canon DIGIC II / III cameras (that use the same SDK). So, if you can use this basic kind of trigger signal (i.e., if you have a "bulb" trigger on your camera), you're in luck. If not, you're not in luck and won't be as I have no intention of expanding this right now.

Is my poor guiding the result of flex?

Q: My guiding doesn't seem great and I get stars that aren't round. How can I tell if this is flex in my guide setup?

Unless you're using an SBIG camera with two sensors that both get the light from the same OTA, you're bound to have some differential flex between your main imaging camera and your guide camera. Let's say the main OTA is solidly locked onto the mount and the guide scope is attached by rubber bands to the top of the main scope. Since gravity tends to always pull down, as the whole rig rotates, the deflection of the guide scope relative to the main scope changes. With the guide scope atop the main scope, the guide scope will aim a bit too far down. Rotate the whole rig so that the guide scope is now to one side but keep the cameras fixed and the flex in the guide rig makes it aim "left" or "right" on the image (i.e., gravity is now "camera-left" or "camera-right" rather than "camera-down").

All rigs will have some flex. The question is, how much? Is it a big enough factor to hurt your imaging?

Here's a simple way to measure and hopefully rule out flex. Let's assume that PHD (or whatever package you use) isn't losing the star and that it's guiding somewhere between perfectly and off by, oh, 3-5 arcsec. So, your mount has some error it just can't keep up with, it overshoots, etc. (settings being off, mount being crud, etc.). If that is the case, over time, the star should wobble back and forth but on average be in the same place. We overshoot, we undershoot, we oscillate back and forth past the star - whatever. On average, the star is in the right place, but we have line segments instead of points.

Go out some nice, still night (please don't attempt this with 40 MPH gusts...) and shoot, say, an hour of anything using exposures of less than about 1 minute. We want something short enough that your mount isn't coughing up furballs during the exposure. Be guiding, of course, during this.

Now, take those images and do two stacks of them:

1) Do an Align and Combine without any aligning (i.e., fixed alignment). Do this for, say, 1 min worth of shots, 5 min worth of shots, and for the whole shebang. Does the 1 min stack look pretty clean? How much worse is the 5 min? Now, the big question: how much worse is the whole shebang? If the big stack is a lot worse than the 1-5 min stacks, you've got flex. Why? The guide system kept the guide star on target, plus or minus a few arcsec. That error may show up in the 1-5 min stacks (which can show flex too), but if an hour gives a much longer trail, the only explanation is flex (assuming PHD kept a good lock). A 50 arcsec trail there isn't PHD wobbling.

Note, you can use the Measure Distance tool in Nebulosity to see just how many pixels long your trail is. See how wide a star is in a single shot and see how long the trail is, subtract the two (e.g. 117 pixels - 5 pixels = 112 pixels per hour = 1.8 pixels per minute = you'll not be exposing for 10 minutes with clean stars).
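That back-of-the-envelope math, written as a pair of tiny helpers (the function names are mine; the numbers are the ones from the example):

```python
def drift_rate(trail_px, star_width_px, duration_min):
    """Flex drift in pixels per minute from a fixed-alignment stack's trail length."""
    return (trail_px - star_width_px) / duration_min

def max_exposure(rate_px_per_min, allowed_elongation_px):
    """Longest exposure (minutes) before flex adds more elongation than allowed."""
    return allowed_elongation_px / rate_px_per_min

rate = drift_rate(117, 5, 60)   # ~1.87 px/min, matching the 117 - 5 example
limit = max_exposure(rate, 2)   # ~1.1 min before stars stretch by 2 pixels
```

With a drift that bad, even two-minute subs would show visibly elongated stars; hence the "you'll not be exposing for 10 minutes" verdict.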

2) Do an Align and Combine with Translation (or Translation + Rotation) in Nebulosity. You'll find in your directory an align.txt file with the dx and dy (shifts in x and y) needed to bring each frame into alignment. You can open this up in something like Excel and plot the total distance the stars moved. Ideally, dx and dy would always be 0. If you're having issues, they won't be. Use good old Pythagoras to determine the total distance the stars moved: sqrt(dx*dx + dy*dy). If this is, on average, a horizontal line with some bumps up and down / some noise, you've got no flex. If there is a real drift component, you've got flex.

Now, how bad is it? The easiest way to check is to fit a straight line to your plot. If you're in Excel, you can just have it add a "trend line". Make sure to have it put the equation on the graph if you do this. That equation will tell you just how bad things are. If you're not in Excel and can't automatically add a trend line, print out your plot and just put a ruler on there and draw a line. Your eye is amazingly good at this fit.

The key number that you need is the slope. In Excel, you'll see an equation like "y = 0.4683x - 1.0786" or some such thing. That 0.4683 is what you need. If doing by hand, the slope is the "rise over the run". That is, how much do you move up on the y axis (pixels) over some amount on the x-axis (time). You may find that your line goes up 10 pixels in 15 minutes, making your slope 10/15 or 0.667 pixels per minute.
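With the dx/dy columns from align.txt loaded into two lists, the whole analysis is a distance calculation plus a least-squares line fit. A self-contained sketch (the file parsing is left out, and the synthetic data at the bottom is purely illustrative):

```python
import math

def flex_slope(times_min, dx, dy):
    """Least-squares slope (pixels per minute) of total star displacement over time."""
    dist = [math.hypot(x, y) for x, y in zip(dx, dy)]  # sqrt(dx^2 + dy^2) per frame
    n = len(times_min)
    t_mean = sum(times_min) / n
    d_mean = sum(dist) / n
    num = sum((t - t_mean) * (d - d_mean) for t, d in zip(times_min, dist))
    den = sum((t - t_mean) ** 2 for t in times_min)
    return num / den

# Synthetic data: a steady drift of 0.5 px/min in x, none in y.
times = list(range(10))
dx = [0.5 * t for t in times]
dy = [0.0] * len(times)
slope = flex_slope(times, dx, dy)  # comes out at 0.5 - a clear flex signature
```

A slope near zero means the wiggles are just guiding noise; a sustained positive slope is the flex you're hunting for.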

Here is a sample from two guide rigs I used:

The first slope of 0.4683 pixels per minute means that if I want a max of 1 pixel worth of extra elongation, I can go for about 2.1 minutes in the first setup and 17 minutes in the second. 1 pixel is pretty harsh, so if we double that to 2 pixels (which will still seem pretty round), I'm talking ~4 minutes and over 30 minutes, respectively, before differential flex has become an issue for me.