Stark Labs: Affordable, Powerful, and Easy to Use Astrophotography Software




IP4AP - Save yourself $$$$$

IP4AP - Image Processing for AstroPhotography

Why do my images suck? OK, let me rephrase that a bit. Why do my images suck less than they used to but still just don't have the same bang that those really good images have? Is it my gear? Is it me?

Well, like most people, I turn first to the gear. My scope, mount, or camera must be the culprit. Or, I don't have some cool new software someone else has. That's it... yeah... If I had a new camera and got that new software that does the Zambizi Triple Datum-Redux method, I'd be all set. Break out the wallet, this won't be cheap. But, I'll get those better shots!

Nope. I've been in this now long enough to know that that's just not going to do it. Sure, it may help some, but it won't really do it. The problem is me, not the gear.

I was first introduced to this concept (that I'm the one that sucks, not the gear) in another context - car racing. Like every American male, I thought I knew a good bit about how to drive and that out on a track I could keep up with most anyone, so long as the car was capable. How laughably wrong I was. A number of years ago, I first tried autocrossing (aka Solo Racing) in an old Porsche 911 I had (as old as I was). I came in last. Dead last. It must be the car - these guys all had their cars prepped a lot more than I did. Yeah. New tires, rework the engine... Break out the wallet, this won't be cheap.

Then I signed up for a course run by the local club (the San Diego PCA - great group) and found out I was by far the limiting factor. Others could make my car fly around the track while I was using habits I built up on the street that were just plain wrong. Was I stupid? No, just ignorant. Nobody had taught me how to really drive and how to really control a car - how to use weight balance, friction, and your power to really go fast. I picked up a number of tricks and techniques there that got me going a lot faster and made me a better driver - on the track and off. The hobby became a lot more fun (steering with the gas pedal is still a serious blast), I wasn't so frustrated, and, well, I didn't suck quite so badly. All for a nominal fee for the course.

Why this tangent? Well, it's the same thing here. I've been my worst enemy when it comes to data processing. I've had bad habits that have led to results that didn't do justice to the underlying raw data. The gear's not the rate-limiting factor. I am. Or, I was.

Enter Warren Keller and his IP4AP series of tutorials. IP4AP may get confused on the tongue with AIP4WIN, but the two products couldn't be further apart. Jim Burnell and Richard Berry are technical wizards with a great product in AIP4WIN and The Handbook of Astronomical Image Processing. If you want the geek / weenie stuff, they're your guys (and as a geek, I mean that in a flattering way). But, if you want to easily learn how to use Photoshop (or AstroArt) by looking over an expert's shoulder, Warren's your man.

A few weeks ago, I got an early copy of his newly done / redone "Intermediate" set of tutorials. This is aimed at using Photoshop to handle things like gradients and vignetting issues, powerful use of the Curves tool, dealing with LRGB data, and use of DDP (this last one in AstroArt). Warren spends a lot of time himself with Photoshop, a lot of time learning techniques from others, a lot of time devising his own techniques, and a lot of time teaching people these techniques. This all shows in the videos. The idea behind IP4AP is to let Warren separate the wheat from the chaff in techniques out there, distill them down, and then show you how they work and why. In the video format, you get to look over Warren's shoulder as he processes images. It's a really effective technique. While you don't get to ask questions right then and there (you can, actually - you can e-mail him and even set up one-on-one sessions), you get to watch the videos whenever you like and replay sections as often as you like to see just what he did to achieve a certain effect.

Warren can get a bit goofy at times, but it's part of his wry sense of humor and it serves a real purpose. While watching the videos, you're there to learn, but you shouldn't lose sight of the fact that this is a hobby and meant to be fun. The semi-random appearance of a picture of a cow will help keep you in this mindset.

So, if you're reading this blog and aren't a contender for APOD (and by reading this blog, odds are pretty much 100% you're not) and if you're not trying to image by lying on your back with a point-and-shoot, holding the shutter down and manually tracking the stars (so that it's not the equipment's fault entirely), I'll make the following wager. Spending $40 on IP4AP will do more for your images than any purchase you could make for 10 times this cost. So, wanna break out the wallet for that new scope, camera, or mount? Or want to part with a mere $40 (direct) and quite possibly (if not probably) get an even bigger effect on the images you're making? The choice is yours. I know what I'll be doing when he comes out with the next DVD in the series...


NEAIC and NEAF 2008

I'm writing this on the plane, heading back from my first trip to NEAF and to NEAIC. Many of you may know about NEAF as it's the largest astronomical vendor show in the country. For two days, the town of Suffern, NY is invaded by more amateur astronomers than you knew existed. In fact, most of the amateur astronomers and vendors that you know exist show up at NEAF. I had a lot of fun catching up with people I met last year at MWAIC, meeting in person people I've known over the Internet for years, and making new friends. It's really an amazing event. You owe it to yourself to get there some year.

NEAIC is an imaging satellite conference that brings in phenomenal speakers and vendors dedicated to astrophotography. At NEAIC, I got to spend time talking with Al Nagler (whom I'd met at MWAIC last year and spent hours talking with about astronomy and audio gear) and David Nagler of Televue, Rui Trippa of Atik, John Cordiale of Adirondack, Tim Puckett of Apogee, Kevin Nelson of QSI, Alan Holmes of SBIG, the Bisque brothers of Software Bisque, Don Goldman of Astrodon filters, Bob Denny of DC-3 Dreams (and ASCOM fame), and Al Degutis of Astrophoto Insight. Many others were there as well. NEAIC was run by Jim Burnell (AIP4WIN) and Bob Moore, who did an amazing job getting speakers together and organizing things. Giving talks were some of the world's best imagers, sharing some of their techniques. Seeing folks like Ken Crawford, Neil Flemming, and Jay GaBany, whose work is simply stunning, present many of their techniques was a real education. Getting to spend some good time with them was also a lot of fun - all really great guys in addition to being incredible imagers. I know a number of the speakers are putting their talks up on their websites. Check out the links above to see what you missed. My modest contribution to the conference was a talk entitled "Guiding on the Cheap". I've placed a QuickTime movie of the talk up in the Tutorials section for people to watch.

NEAIC was set up to have talks that ran the full range, from beginner to expert. We had Dave Snay doing a session on what to do and expect in your first night of imaging, taking people through the gear needed and how to go about actually getting and starting to process your first shots. We also had a talk by Robert Reeves on what's new in webcam astrophotography. Talks then went all the way to the design of ultra-high-end optical systems tailored for monster CCDs by Peter Ceravolo and on building a 9" TMB folded APO by Dietmar Hager, with many in between. There were workshops on topics like using AIP4WIN by Richard Berry, CCDNavigator and CCDAutoPilot by Steve Walters and John Smith, and on getting the most out of Photoshop by Warren Keller. By having three talks going at once, NEAIC managed over 20 talks in two days. All in all, a lot learned and a lot of fun had. If you missed it, consider it for next year or consider MWAIC in Chicago in a few months.

All this, and NEAF hadn't even begun! While I grew up not far from Suffern, I moved away before the hobby really took hold as an adult, and I'd never managed to make it to NEAF. Well, I sense I'll be going back. If there is gear you want to see in person or people you want to talk to about gear, the gear and people can be found at NEAF. It's really impressive. You can find a list of vendors on the main NEAF page, but that may still not give you a real idea of what it's like. NEAF was kicked off for me at the Sky and Telescope party, which was followed by the OPT party. The number of people in the room there who have helped move our hobby to where we are now was amazing. The room was packed, and I certainly didn't get to talk to the vast majority. But if you've not been to NEAF, I can give you an idea by letting you know who I got to really talk to. There was Dennis DiCicco of Sky and Telescope, Craig Weatherwax of OPT (we seem to share the same barber), Alan Traino (who runs NEAF and can sure tell a story), Doug George of Diffraction Limited, and John Smith of CCDWare. Of course, there was more time talking with the likes of Jim Burnell, Warren Keller, Al and David Nagler, Kevin Nelson, Ken Crawford, the Bisque brothers, Al Degutis, and Bob Denny.

Then at NEAF, there was a mix of getting to see all the newly introduced stuff, getting to kick the tires on existing gear, and getting to scrounge around for nice bargains in the clearance bins. I went in wanting to meet the crew from Lunt Solar Systems and see what their new solar scopes were about. Sadly, things were cloudy and so I couldn't look through them, but I did get to spend some time with them at the table. Boy, they sure are tempting! Over in the Astronomics booth, I saw the 8" RC from Astro-Tech. 6", 8", 10", 12", and 16" scopes are in the works - real RCs at low prices. The 6" comes in at $1295 and the 8" at $2995. The 10" was something around $5k, I believe. In fact, there were a number of RC scopes, modified DK scopes, etc. there at far lower prices than we've seen before. Mike Siniscalchi and I spent a lot of time talking with Mike Bieler of Astronomics about their plans and about Cloudy Nights. I also got to meet Russ and crew from Denkmeier and hear about a really neat image-intensifier system they're working on, to chat with Ted Ishikawa from Hutech (another Borg may be in my future), and with Gary and Stuart Parkerson of Astronomy Technology Today.

Many of the above I'd expected to do while there. No, I didn't know about the RC scopes, but I'd assumed we'd have something new from Astro-Tech. I also assumed we'd have something new from Televue. That was their 8 mm Ethos. I got to take a look through a pair of those in a binoviewer. Holy cow! You could hardly tell you were looking through anything! But, most of this I'd expected to some extent. Exciting, sure, but expected in some way. What was also great to see was the new stuff from new companies (or at least those I'd never heard of). One that caught my eye was a slick polymer solution for cleaning optics. It's not inexpensive stuff, but it sure did do the job and I'll probably pick some up for the next time I'm trying to clean my CCD off. It bonds to the dust particles, solidifies, and you just peel it (and the dust) off. Very slick. Another was the StableMax from Telescope Stability Systems, a new company. While there are some great portable piers out there - some of which are just breathtakingly cool - these aren't inexpensive bits of gear. The StableMax was seriously sturdy and has a very trick setup for mating to the mount. A removable, indexed adapter plate attaches to your mount and then slides into a spot on the tripod. This makes it so you can change mount heads easily while using the same tripod (adapter plates range from $50-$100) - either if you have more than one mount or if you end up swapping mounts down the road. The machining was wonderfully precise and the thing wouldn't budge. I'm going to be taking some measurements of my EM-10 and talking to Tim Ray soon, as this seems to be a great, well-engineered solution.

Add to this, I met a ton of users - far too many to list here (and far too taxing on my memory to recall all the names!). I'd like to thank all those that came up to say "Hi" and tell me they were big fans of PHD Guiding. Getting to talk with you, hear how much it's helped, and see how many people it's helped is really wonderful.

If there's one thing to get out of this blog entry, it's not that Craig got to meet a bunch of folks. It's that NEAIC and NEAF let me meet a bunch of folks. If you were there, you could have asked Al Nagler about the new Ethos, Russ Lederman about the new coatings on his binoviewers, Gary and Stuart about what's coming up in ATT, etc. They're all there, they're all amateur astronomers like you, and they're all ready and willing to talk. You'll get to do that and you'll meet people you may have known in the ether for ages, but never actually met. You may come home a bit lighter in the wallet (others seem to have managed to spend nothing - Craig from OPT made sure that wasn't my fate within minutes of opening, selling me a Baader Hyperion zoom), but I doubt you'll mind. I can't imagine you wouldn't have a good time.

Combining images: means, medians, and standard deviations

Q: I hear medians are a good way to stack images as they can remove things like hot pixels, cosmic rays, or streaks from satellites. Does Nebulosity support this?

The short answer is no ... but... When combining images, we want something that helps reduce the noise. We'd also like something that is tolerant of "outliers". The mean (average) is great at the first part but lousy at the second. Medians are not so hot at the first part but great at the second. What we'd like is something that is good at both. Nebulosity supports standard-deviation based filtering of your images to let you keep most of the frames and pitch just the bad ones.

OK, so what is it and why is it better? What are these 1.5, 1.75, etc. thresholds I'm being asked about?

If you were to take a perfect image of a target, each pixel would have its "ideal" or "true" value - how much intensity there is from that part of the target. The trouble is, each time we sample the target (aka each image) we get that true value for a pixel but we also get some noise on top of it. We want, of course, the true value. How do we get rid of that noise?

In statistics, we have several typical ways of describing our data. Think right now just about a single pixel (after alignment). So, we have the same spot in the arm of M51 or something. The most common way is the mean (aka average) of all of our samples (aka images, light frames, etc.). It should tell us the central tendency and therefore estimate the truth. The more samples we have, the better the estimate is since we don't have to rely on just one sample (which has truth plus or minus some noise value) or a few samples. With more samples, the noise tends to cancel and we are left with a closer estimate of the truth (the noise, BTW, tends to follow a 1/sqrt(# samples) rule). We can quantify how much noise there is in our samples with a second way of describing our data. The variance (and its square root, the standard deviation) are the typical ways we do this, telling us how much "spread" there is in our samples.
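
If you want to see that 1/sqrt(# samples) rule for yourself, here's a quick numpy sketch (simulated numbers of my own choosing, not anything from Nebulosity) that measures how the noise in the mean shrinks as you average bigger and bigger stacks of one noisy pixel:

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 1000.0   # the "ideal" pixel intensity
noise_sd = 50.0       # noise added to each individual frame

for n_frames in (1, 4, 16, 64):
    # Simulate 10,000 stacks of n_frames each and average each stack
    stacks = true_value + rng.normal(0.0, noise_sd, size=(10_000, n_frames))
    means = stacks.mean(axis=1)
    # The spread of the stack means shrinks like noise_sd / sqrt(n_frames)
    print(f"{n_frames:2d} frames: measured noise {means.std():5.1f}, "
          f"predicted {noise_sd / np.sqrt(n_frames):5.1f}")
```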

If we assume the data are "normal", about 68% of all samples will lie within one standard deviation (SD) of the mean (that is, they fall less than one standard deviation above or below the average). About 95% lie within 2 SD of the mean. Below, I show the histogram of 5000 normally-distributed random numbers (pretend you had 5000 light frames!). Samples in green lie within 1 SD of the mean. Yellow (and green) lie within 1.5 SD. Orange (and yellow and green) are within 2 SD and red are outside of 2 SD. Now, these are all real samples (nothing like an errant satellite) but we could safely pitch those samples in red or orange and still have a good estimate of the mean. We'd not lose too many samples and we'd take out those that are more likely to be outliers. If a sample comes in that is > 2 SD, odds are pretty good it's an errant value (like a hot pixel or satellite). Even if it's not, since we don't have 5000 samples - we have far fewer - filtering these out will help keep our estimate of the mean centered where it should be and not skewed by that outlier. Thinking about this diagram will help us a lot in the next step - understanding what happens during stacking. Just remember that with the standard deviation, we know what kind of values we might expect to find and what type of values are really abnormal (e.g., something 5 SD from the mean is very abnormal as there is only a 0.000057% chance it's a real sample and not the result of something else going on).

[Figure: histogram of the 5000 normally-distributed samples, colored green within 1 SD of the mean, yellow within 1.5 SD, orange within 2 SD, and red beyond 2 SD.]
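
You can double-check those percentages with a tiny numpy sketch (again, my own illustration, not Nebulosity code) that draws 5000 normal samples - the pretend light frames above - and counts how many land within each threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=5000)  # 5000 pretend light frames of one pixel
mean, sd = samples.mean(), samples.std()

for k in (1.0, 1.5, 2.0):
    # Fraction of samples within k standard deviations of the mean
    frac = np.mean(np.abs(samples - mean) <= k * sd)
    print(f"within {k} SD of the mean: {frac:.1%}")
# Expect roughly 68%, 87%, and 95%
```
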
OK, given that background, here is what happens during the stack. For each (aligned) pixel, we calculate the mean and standard deviation across all of the images in the stack. If your SD threshold is at 1.5, any samples of that pixel that have an intensity beyond 1.5 SD from the mean are removed and a new average, excluding these samples, is calculated. This, BTW, is why hot pixels are often eliminated using SD stacking - those hot pixel values are very abnormal and lie far away from the mean.

With the filter set at 1.75, it takes a more extreme or "outlying" intensity value to be counted as "bad" than at 1.5. At 2.0, it takes even more abnormal a value to be excluded. Thus, more samples go into the final image using a higher threshold (and more things like semi-hot pixels as well). Typically, filtering values at 1.5 or 1.75 will yield the best results.

Standard-deviation based stacking therefore lets in more good samples than a median and takes out bad samples that the mean (average) would happily keep. That's what makes it such a nice technique for filtering out bad samples. Note, you're not filtering out whole frames. This math is done on each pixel. So, frame #1 may have a bad value at pixel 109,231 but be great everywhere else. For all other pixels, this frame's data will be used, but for pixel 109,231 it won't.
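
Nebulosity does this in its own C++ code, but the per-pixel math is simple enough to sketch in a few lines of numpy. This is a minimal illustration, assuming your frames are already aligned and loaded into a 3-D array:

```python
import numpy as np

def sd_filtered_stack(frames, threshold=1.5):
    """Mean-combine aligned frames, rejecting per-pixel samples more than
    `threshold` standard deviations from that pixel's mean across the stack."""
    stack = np.asarray(frames, dtype=np.float64)  # shape: (n_frames, height, width)
    mean = stack.mean(axis=0)
    sd = stack.std(axis=0)
    keep = np.abs(stack - mean) <= threshold * sd  # True for "normal" samples
    kept_sum = np.where(keep, stack, 0.0).sum(axis=0)
    kept_count = keep.sum(axis=0)
    # Average the survivors; fall back to the plain mean if a pixel lost everything
    return np.where(kept_count > 0, kept_sum / np.maximum(kept_count, 1), mean)

# e.g., result = sd_filtered_stack(np.stack(list_of_aligned_frames), threshold=1.75)
```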

The technique isn't perfect. With a lot of outliers, the estimate of the standard deviation goes way up. So, we have a bigger "spread" that is considered normal and it takes something more aberrant to get filtered out. There are, of course, techniques to get around this as well, but that's a topic for another day.

Will DSLR Shutter work with my camera?

Q: I have a WhizBang Foo-Matic Model XP-Ultra-Mega camera. Will it work with DSLR Shutter?

In general, if your camera has a "bulb" port that allows it to be triggered by a simple external device, it should work just fine. DSLR Shutter is really moronically simple. It sends "go" and "don't go" signals to simple parallel port data lines, the RTS/DTR lines of a serial port, or to the Shoestring DSUSB. The former two are very simple, binary signals. The Shoestring is "semi-intelligent" in that I need a software library from Shoestring and need to code up support for it. Doug's libraries are trivial to use and it's still sending very simple commands. It is still just sending "go" and "don't go" signals to this bulb port on the camera.

There is another type of command that can be sent to a camera directly over its USB port. These "intelligent" commands differ from camera maker to camera maker and, within camera makers, can differ across models. To send these commands, one needs:

1) An SDK (software development kit) from the manufacturer
2) A camera to work on for development
3) A lot more code

The "bulb" port is generic. Heck, you could use DSLR Shutter to turn the lights on and off in your house with a touch of hardware. The simplicity here comes from using this generic style of interface. Were it to bypass this interface and use the USB link to the camera, DSLR Shutter would grow to several times its size just to support the Canon DIGIC II / III cameras (that use the same SDK). So, if you can use this basic kind of trigger signal (i.e., if you have a "bulb" trigger on your camera), you're in luck. If not, you're not in luck and won't be as I have no intention of expanding this right now.
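
To give a flavor of just how "dumb" this trigger really is, here's a hypothetical Python sketch using pyserial. This is not DSLR Shutter's actual code, and the port name and the assumption that your trigger cable fires on RTS are mine - adjust both for your own cable:

```python
import time
import serial  # pyserial

def bulb_expose(port, seconds):
    """Hold the RTS line high for `seconds` to fire one bulb exposure
    through a serial-port trigger cable wired to RTS."""
    ser = serial.Serial(port)  # e.g., "COM3" or "/dev/ttyUSB0"
    try:
        ser.rts = False        # make sure we start at "don't go"
        time.sleep(0.5)
        ser.rts = True         # "go": open the shutter
        time.sleep(seconds)
    finally:
        ser.rts = False        # "don't go": close the shutter
        ser.close()

bulb_expose("COM3", 120)       # one 2-minute exposure
```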

Is my poor guiding the result of flex?

Q: My guiding doesn't seem great and I get stars that aren't round. How can I tell if this is flex in my guide setup?

Unless you're using an SBIG camera with two sensors that both get the light from the same OTA, you're bound to have some differential flex between your main imaging camera and your guide camera. Let's say the main OTA is solidly locked onto the mount and the guide scope is attached by rubber bands to the top of the main scope. Since gravity tends to always pull down, as the whole rig rotates, the deflection of the guide scope relative to the main scope changes. With the guide scope atop the main scope, the guide scope will aim a bit too far down. Rotate the whole rig so that the guide scope is now to one side but keep the cameras fixed and the flex in the guide rig makes it aim "left" or "right" on the image (i.e., gravity is now "camera-left" or "camera-right" rather than "camera-down").

All rigs will have some flex. The question is, how much? Is it a big enough factor to hurt your imaging?

Here's a simple way to measure and hopefully rule out flex. Let's assume that PHD (or whatever package you use) isn't losing the star and that it's guiding somewhere between perfectly and off by, oh, 3-5 arcsec. So, your mount has some error it just can't keep up with, it overshoots, etc. (settings being off, mount being crud, etc.). If that is the case, over time, the star should wobble back and forth but on average be in the same place. We overshoot, we undershoot, we oscillate back and forth past the star - whatever. On average, the star is in the right place, but we have line segments instead of points.

Go out some nice, still night (please don't attempt this with 40 MPH gusts...) and shoot, say, an hour of anything at exposures of less than about 1 min. We want something short enough that your mount isn't coughing up furballs during the exposure. Be guiding, of course, during all of this.

Now, take those images and do two stacks of them:

1) Do an Align and Combine without any aligning (i.e., fixed alignment). Do this for, say, 1 min worth of shots, for 5 min worth of shots, and for the whole shebang. Does the 1 min stack look pretty clean? How much worse is the 5 min? Now, the big question - how much worse is the whole shebang? If the big stack is a lot worse than the 1-5 min stacks, you've got flex. Why? The guide scope kept the guide star on target, plus or minus a few arcsec. That error may show up in the 1-5 min stacks (which can show flex too), but if an hour gives a much longer trail, the only explanation is flex (assuming PHD kept a good lock). A 50 arcsec trail there isn't PHD wobbling.

Note, you can use the Measure Distance tool in Nebulosity to see just how many pixels long your trail is. See how wide a star is in a single shot and how long the trail is, then subtract the two (e.g., 117 pixels - 5 pixels = 112 pixels per hour = about 1.9 pixels per minute = you'll not be exposing for 10 minutes with clean stars).

2) Do an Align and Combine with Translation (or Translation + Rotation) in Nebulosity. You'll find in your directory an align.txt file with the dx and dy (shifts in x and y) needed to bring each frame into alignment. You can open this up in something like Excel and plot the total distance the stars moved. Ideally, dx and dy would always be 0. If you're having issues, they won't be. Use good old Pythagoras to determine the total distance the stars moved: sqrt(dx*dx + dy*dy). If this is a horizontal line on average with some bumps up and down / some noise, you've got no flex. If there is a real drift component, you've got flex.

Now, how bad is it? The easiest way to check is to fit a straight line to your plot. If you're in Excel, you can just have it add a "trend line". Make sure to have it put the equation on the graph if you do this. That equation will tell you just how bad things are. If you're not in Excel and can't automatically add a trend line, print out your plot and just put a ruler on there and draw a line. Your eye is amazingly good at this fit.

The key number that you need is the slope. In Excel, you'll see an equation like "y = 0.4683x - 1.0786" or some such thing. That 0.4683 is what you need. If doing it by hand, the slope is the "rise over the run". That is, how much do you move up on the y-axis (pixels) over some amount on the x-axis (time)? You may find that your line goes up 10 pixels in 15 minutes, making your slope 10/15 or 0.667 pixels per minute.
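
If you'd rather not fire up Excel, here's a small numpy sketch that does the fit. I'm assuming here that align.txt parses as plain numeric columns with dx and dy last, and that your frames came one per minute - adjust both to match your actual file and cadence:

```python
import numpy as np

data = np.loadtxt("align.txt")     # assumes purely numeric columns
dx, dy = data[:, -2], data[:, -1]  # assumes dx, dy are the last two columns

# Good old Pythagoras: total distance the stars moved in each frame
dist = np.sqrt(dx**2 + dy**2)

# Frame times in minutes, assuming one frame per minute
t = np.arange(len(dist), dtype=float)

# Fit the trend line; the slope is your drift rate in pixels per minute
slope, intercept = np.polyfit(t, dist, 1)
print(f"drift: {slope:.4f} pixels per minute")
```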

Here is a sample from two guide rigs I used:

[Figure: drift distance vs. time for the two guide rigs, each with a fitted trend line and its equation.]

The first slope of 0.4683 pixels per minute means that if I want a max of 1 pixel worth of extra elongation, I can go for about 2.1 minutes with the first setup; the second rig's much shallower slope allows about 17 minutes. 1 pixel is pretty harsh, so if we double that to 2 pixels (which will still seem pretty round), I'm talking ~4 minutes and over 30 minutes before differential flex has become an issue for me.
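
That exposure-budget arithmetic is just one division. In the sketch below, the 0.0588 slope for the second rig is my back-calculation from the 17-minute figure above, not a number read off the plots:

```python
def max_exposure_minutes(slope_px_per_min, elongation_budget_px=1.0):
    """Longest exposure before drift exceeds the elongation budget."""
    return elongation_budget_px / slope_px_per_min

for slope in (0.4683, 0.0588):  # rig 1 (measured), rig 2 (back-calculated)
    print(f"slope {slope}: {max_exposure_minutes(slope, 1):.1f} min at 1 px, "
          f"{max_exposure_minutes(slope, 2):.1f} min at 2 px")
# rig 1: ~2.1 min at 1 px, ~4.3 min at 2 px
# rig 2: ~17 min at 1 px, ~34 min at 2 px
```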



Welcome to Craig's Astro Blog

Welcome to my astrophotography blog. As the author of programs like PHD Guiding and Nebulosity, I get a lot of questions on user groups like the Stark Labs Yahoo group. While some of these cover things specific to the software, others are more general. One goal of this blog is to bring together a number of questions and answers that come up often or are of broad appeal.

I also get a lot of cameras and other gear here for testing (see reviews in Astrophoto Insight and Astronomy Technology Today) and for integration into the software. A second goal of the blog is to give short-form reviews / thoughts on these.

This is my first blog. I have no idea how well this will work out, but it costs nothing but time to try. Hmmm, I seem to recall saying something like that when I started to write Nebulosity.

While starting things off, I'd like to give a big THANK YOU to Michael Garvin for helping me get this off the ground.

Craig