Stark Labs: Affordable, Powerful, and Easy to Use Astrophotography Software





Equinox 6

When I moved over to the Mac a few years ago, I kept Windows running in a virtual machine a lot of the time. There were two programs that I just had to have going whenever I was thinking about a night's imaging or about new scopes or cameras. One was Ron Wodaski's CCD Calc and the other was Cartes du Ciel. Being a "switcher", I wanted something native to the Mac, but at the same time I was so used to both of these, and they did just what I needed them to do. Still, it was time to make a switch.

I looked at a lot of options on the Mac. AstroImageBrowser provided a decent stand-in for CCD Calc. It does some things a lot better than CCD Calc, but there are a few things it doesn't do, or doesn't do as well. Neither program lets you see how things will look with an FOV of more than 1 degree, and with today's DSLRs and other decent-sized chips, it doesn't take much to get larger than 1 degree. (These days, I find myself using something else, however, thanks to some recent updates to Equinox 6 - read on.)

When looking at "planetarium" software, I had a much harder time. I've tried Starry Night Pro on both the Mac and Windows and there was something about the interface I just couldn't ever get my head around. The sky never seemed to move the way I wanted it to and, despite giving it a solid try, it just didn't work for me. In addition, their support of my Takahashi Temma mount was limited to Windows. I'd been a fan of TheSky for a long time, but their Mac version was very out of date. (TheSkyX is out now, but still only in the Student Edition.) I tried Stellarium and Celestia and both are beautiful, but neither would let me really plan for an evening of imaging, much less control the Tak mount. I looked at Voyager 4, but the price tag was a bit steep, especially since it wouldn't control the odd Tak mount (sense a theme here?). AstroPlanner could do a lot, but I really did want more of a planetarium-style interface. It's a neat package that I encourage folks to look at, but it's not exactly a replacement for CdC. Oh, and yes, I did even work with CdC's code, getting things sort-of compiled and going on OS X, but this isn't ready for prime time (or wasn't then). As a side note, the mount can be run via cocoaTemma if you know where you want to go by name or coordinate, and I do have things going nicely on it with a PDA and TheSky Pocket Edition.

Enter Equinox 6, by Darryl Robertson. Equinox 6 has been around for a while and I won't pretend to know its history, but I've now used it for almost two years and can say I like what I see. At first, I must admit, it took a little getting used to. For starters, there is a separate "main view" that shows the whole sky (or whatever portion of the sky you're zoomed to) and a "scope view" that shows what should be in your telescope. It's the "scope view" that can show the fainter stars, show camera or eyepiece overlays, etc., and it took a bit of time to get used to this split setup. Now, it doesn't bother me and can even be a nice feature at times. In truth, it didn't take very long to adjust (and, as always, reading the manual actually helped. It's a very nice manual.)

It's got all the features you'd expect from a nice planetarium package (and then some!) and I encourage you to take it for a test drive, even if just for these (and yes, it does even control my Tak Temma mount!). What I'd really like to point out today are one long-standing thing and two new things that I think make Equinox 6 exceptionally cool and help show what kind of product it is. The latter two are actually evidence of the first, which is that Darryl is just the sort of guy you'd love to have writing a program you use. In my time using it, I've gotten to watch the program grow and watch how quickly bugs are fixed and patches put out. I've spotted a few bugs myself and let him know, either via e-mail or via the Yahoo group, and patches arrive promptly. Any time you write software, and certainly any time you don't have a large development team and a substantial beta-tester crew, you're going to have bugs. (As we all know, even when you do have huge development teams, bugs happen.) Bugs are part of life with software, and what matters is how well and how promptly they get fixed. Darryl gets an "A" in my book here.

I've pitched a few ideas to him for the program. Some he's gently answered with "See page X in the manual -- it's already there" and others he's taken to heart and thought about. Some of these (and I'm certainly not the only one giving suggestions) have appeared in the program. One I particularly like is the ability to grab shots from the DSS at any FOV you want and see how your scope + camera combination will frame things. True, you may need to spend a few minutes figuring things out if you don't have a pre-defined camera (which is really just a pre-defined sensor size), but once done you can do things like this:
That's a nice wide view of the Veil from DSS data as it would look through my Borg 101 f/4 and QSI 540. I can swap cameras or scopes, rotate the FOV, nudge things around, etc., and still see just how much of the target will fit, without being stuck at 1 degree of FOV. When thinking about scopes or cameras, I can grab any number of targets and see just how each will fit. I can also see just how faint something really is, since the DSS shots are all standardized. This has been a really nice feature for me and one that makes Equinox stand out.
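For the curious, the math behind these FOV overlays is nothing exotic. Here's a quick sketch of the calculation; the scope and sensor numbers below are my own rough approximations for the Borg at f/4 and the QSI 540's chip, not figures from Equinox:

    import math

    def fov_arcmin(sensor_mm, focal_length_mm):
        """Field of view along one sensor dimension, in arcminutes."""
        return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm))) * 60

    # Rough, illustrative numbers: ~400 mm focal length for the Borg 101
    # at f/4 and a ~15.2 mm square sensor for the QSI 540.
    print(fov_arcmin(15.2, 400))  # ~130 arcmin -- over 2 degrees of sky

As the toy numbers show, even a small chip on a short refractor blows well past the 1-degree limit mentioned above.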

Another recent addition is the ability to superimpose information from the NOMAD star database in the "scope view" (I had nothing to do with this one and was just pleasantly surprised when it arrived in an update!). One potential limitation of Equinox has been that the main star database is limited to 12th magnitude stars, even in the "scope view". 99% of the time, that's not a limitation, but at times I've needed to see and/or know the magnitudes of something fainter. NOMAD is a "simple merge of data from the Hipparcos, Tycho-2, UCAC-2 and USNO-B1 catalogues, supplemented by photometric information from the 2MASS final release point source catalogue." With it, you can get things like detailed magnitudes for stars down to 18th magnitude. Here is a shot of Equinox's "scope view" with the filter set at 15th magnitude:



I don't think I'm going to have trouble finding star magnitudes anymore! Of course, you can turn aspects of this display on or off, showing just the stars if you like, etc.

As noted above, there is a lot more to Equinox 6 than just these two features, as it is a mature package. What these new features help show is that Equinox 6 continues to evolve, with slick new features continually being added. Registered users get free updates, so they get all the bug fixes and new features. I like that approach (it's what I use in my own commercial software). If you're a Mac user and haven't given Equinox 6 a try, or haven't looked at it for some time, head on over to its site and give it a shot.


PHD, subpixel guiding, and star SNR

Q: I've seen some quote the accuracy of the guiding available in their software with numbers like 1/20th of a pixel accuracy. a) How is this possible, and b) How accurate is PHD?

A whole bunch of things go into how accurate your guiding is: your mount, the wind, flexure, etc. Here, we'll go over finding the star's position to an accuracy well below a pixel, and how the star's signal-to-noise ratio (SNR) affects this accuracy. Since the SNR of the star (which is determined by the amount of light hitting the CCD and by the camera's noise) affects the accuracy, I won't quote a single hard-and-fast number as to how accurate it can be. I could quote an extreme (1/200th of a pixel), but I doubt it would mean much, and it would sound more like marketing hype. Since PHD is free, there tends to be little incentive for marketing hype.
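To make "SNR" a bit more concrete, here is a common back-of-the-envelope CCD estimate. This is my own simplified sketch, not code from PHD, and it ignores dark current; the numbers in the example are purely illustrative:

    import math

    def star_snr(star_electrons, sky_electrons_per_px, read_noise_e, n_pixels):
        """Simplified CCD SNR: star signal over the combined shot,
        sky, and read noise (dark current ignored)."""
        noise = math.sqrt(star_electrons
                          + n_pixels * (sky_electrons_per_px + read_noise_e**2))
        return star_electrons / noise

    # Illustrative values for a faint guide star in a short exposure:
    print(star_snr(star_electrons=2000, sky_electrons_per_px=50,
                   read_noise_e=10, n_pixels=25))  # ~26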

In an article I wrote for Astronomy Technology Today on PHD Guiding, I went over the basics of how you find the middle of a star to an accuracy well below a pixel. Here's an image that may help. Below we have stars hitting three different 2x2 areas of a CCD. In the first case, the star is hitting the exact intersection of these four pixels, so the star's light is evenly distributed among all four. In the next, we have the star still centered vertically but shifted to the right just a bit. More of the star's energy is now on the right two pixels than the left, so these two are brighter than the left two. For both the left and the right, the top and bottom pixels have received the same amount of energy, so there is no top/bottom difference. In the last panel, the star is now centered a bit down and to the right of the intersection of the four pixels. Most of its energy hits the lower-right pixel, with equal amounts hitting the lower-left and upper-right and the least hitting the upper-left. Thus, so long as the star lights up more than one pixel, we can estimate small shifts in its position.
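In code, the simplest version of this idea is an intensity-weighted centroid: weight each pixel's position by its brightness, and the average lands between pixels. PHD's actual routine is more involved (and, as noted below, the details matter), but a minimal sketch looks like this; the threshold argument is a crude stand-in for real background estimation:

    import numpy as np

    def centroid(patch, threshold=0.0):
        """Estimate a star's center to sub-pixel accuracy as the
        brightness-weighted mean pixel position."""
        w = np.clip(patch - threshold, 0.0, None)  # crude background removal
        ys, xs = np.indices(patch.shape)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

    # The 2x2 case from the figure: more light on the right two pixels
    # pushes the x estimate to the right of the pixel boundary.
    print(centroid(np.array([[1.0, 2.0],
                             [1.0, 2.0]])))  # -> (0.667, 0.5)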


Here's another way to look at this. In this surface plot, I'm showing the energy of a simulated star that is perfectly centered.


Here is that same star shifted just a touch -- 0.2 pixels one way and 0.7 pixels in the other:

Note how the star's profile is no longer symmetric. We can use this shift to estimate just how much the star has moved. When using perfect images such as these, it's easy to pick up small shifts. The real question is how well it does on real-world stars. I can't exactly go take pictures of stars that have moved 0.001 pixels, or anything of the sort, because I'd have no way of knowing how much the star really moved. (I don't even know if Hubble could hit that...) What I can do is simulate real stars with real noise and see how well it works.
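If you'd like to build test images like these yourself, a Gaussian profile evaluated at a fractional center does the job. Continuing the sketch above, something along these lines works; the size, FWHM, and flux are arbitrary choices, not the exact parameters behind my figures:

    def make_star(size=32, cx=16.0, cy=16.0, fwhm=3.0, flux=1000.0):
        """Render a Gaussian 'star' whose center (cx, cy) may fall
        between pixel boundaries."""
        sigma = fwhm / 2.3548  # convert FWHM to Gaussian sigma
        ys, xs = np.indices((size, size))
        psf = np.exp(-((xs - cx)**2 + (ys - cy)**2) / (2 * sigma**2))
        return flux * psf / psf.sum()

    star_centered = make_star()                  # like the first surface plot
    star_shifted  = make_star(cx=16.2, cy=16.7)  # shifted 0.2 and 0.7 pixels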

Here are profiles of the same star from two different cameras. (There are actually two stars, and one camera is rotated 180 degrees relative to the other, so the second star that you see clearly to the right in the image on the right is to the left in the noisy image on the left.) Clearly, the one on the left has more noise and a lower SNR than the one on the right. These are real profiles from real stars with real cameras (mag 11.7 star, 2 s exposure, Celestron CPC 1100 XLT at f/6.3-ish).


Here, for reference, is an image of the star on the left. This is certainly not a very clean star to guide on, but PHD will lock onto it and guide off of it.


To simulate this, I can get something pretty close by modeling a star with variable amounts of Gaussian noise:
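Continuing the sketch, the noise itself is one line with NumPy. The two sigma values below are arbitrary stand-ins for the low-noise and high-noise cameras; I didn't fit them to the real profiles:

    rng = np.random.default_rng(seed=1)

    def add_noise(img, sigma):
        """Add zero-mean Gaussian noise, a stand-in for camera noise."""
        return img + rng.normal(0.0, sigma, img.shape)

    low_noise  = add_noise(star_shifted, sigma=0.5)  # like the cleaner camera
    high_noise = add_noise(star_shifted, sigma=5.0)  # like the noisy camera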

I apologize for the fact that the scales of the X and Y axes differ between the simulations and the actual stars. The simulations show a more "zoomed out" view than the actual star profiles, but the SNRs of the two setups are comparable. The big question is: how accurately can PHD locate the center of the star in each kind of image?

To get at this, I moved the simulated star in 0.05-pixel steps in both directions, so I would know the true distance the star moved (using good old Pythagoras) and would have a bunch of samples to get a good estimate of PHD's error with no noise, low noise (right image), and high noise (left image). Shown here is the average error in location for the three noise conditions (error bars show the standard error of the mean).
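Putting the pieces of the sketch together, the whole experiment is just a pair of nested loops: move the star a known amount, add noise, localize it, and measure the miss. This is the shape of the procedure, not the exact code behind the plot:

    errors = []
    for dx in np.arange(0.0, 1.0001, 0.05):
        for dy in np.arange(0.0, 1.0001, 0.05):
            img = add_noise(make_star(cx=16.0 + dx, cy=16.0 + dy), sigma=0.5)
            ex, ey = centroid(img, threshold=img.mean())
            # Localization error: distance between estimated and true center
            errors.append(np.hypot(ex - (16.0 + dx), ey - (16.0 + dy)))

    print(f"mean localization error: {np.mean(errors):.4f} px")

Run it with sigma=0.0 and sigma=5.0 as well and you have the three noise conditions.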



Without any noise, PHD is accurate down to (on average) 0.004 pixels or 1/250th of a pixel. With a low amount of noise, the accuracy goes to 0.018 pixels or 1/56th of a pixel and with high amounts of noise it goes to 0.18 pixels or 1/5.5th of a pixel. Better stars and/or better guide cameras will get you more accuracy, but even with this very noisy star, we're still at 1/5th of a pixel accuracy.

What about noise reduction? Does smoothing the image help? Well, it helps in terms of increasing the odds that PHD will lock onto the star, but it doesn't help the accuracy of localization at all. At best it does nothing, and at worst it hurts your accuracy a bit. Again, it will help PHD find the star in the first place, so if this plot included "lost stars" as errors (and they are errors), smoothing would show a nice effect. But once the star is found, smoothing does nothing to help the localization.
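You can check this for yourself with the same toy setup: blur the noisy image and re-run the centroid. I'm using SciPy's Gaussian filter as a generic stand-in for noise reduction here; it is not PHD's own noise-reduction routine:

    from scipy.ndimage import gaussian_filter

    noisy = add_noise(make_star(cx=16.2, cy=16.7), sigma=5.0)
    smoothed = gaussian_filter(noisy, sigma=1.0)

    for name, img in [("raw", noisy), ("smoothed", smoothed)]:
        ex, ey = centroid(img, threshold=img.mean())
        # Error relative to the known true center (16.2, 16.7)
        print(f"{name}: error = {np.hypot(ex - 16.2, ey - 16.7):.3f} px")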



So there you have it. Can we get to sub-pixel levels? Sure. Under perfect conditions, PHD Guiding gets you down to an insane 0.004 pixels worth of error, showing that the basic math of sub-pixel position calculation works. Done well (and yes, it can be done poorly - I know from first-hand experience a few ways to do it poorly and/or to introduce systematic biases into the star-finding routine), it can get you more accurate star localization than you'd ever really need. The SNR of your guide star will affect your accuracy, however. Under more real-world conditions, accuracy drops, but it can still be very good. I have seen many stars in my guide frames with SNRs like the low-noise star used here, which gave about 1/50th of a pixel accuracy. One notable implication is that with higher-SNR stars, you can afford to use a wider-FOV guide setup, since the higher SNR leads to increased precision in localization.

In practice, the high-noise star here is about as noisy a star as you can successfully guide on with PHD. Much worse than this and you're going to suffer lost locks occasionally. Yet even with this level of noise, we're still below a quarter of a pixel worth of error. Odds are, your mount, the wind, flexure, etc. will cause more trouble than this error, unless you're using a very wide-FOV guide setup.





Stacking accuracy

Q: How can I get the sharpest images in my stack using Nebulosity? How does Nebulosity compare to other stacking tools?

Nebulosity has several means of aligning the images prior to actually stacking them. We can use simple translation, translation + rotation, translation + rotation + scaling, and Drizzle. I've covered Drizzle in an article for Astrophoto Insight, so I'll focus on the more traditional methods here.

The big difference between "translation" and "translation + rotation (+ scaling)" is that when doing a translation-only alignment, Nebulosity does not resample the image. It does "whole pixel" registration. This sounds worse than "sub-pixel" registration. Isn't it better to shift by small fractions of a pixel? Well, it would be, except for the fact that when you do so, you need to know what the image looks like shifted by a fraction of a pixel. That means you must interpolate the image, and interpolation does cause a loss of sharpness. So, you're faced with a trade-off: keep the image exactly as-is and shift it by whole pixels, or resample it and shift it by fractional pixels.
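A toy example makes the trade-off concrete. Below, a whole-pixel shift leaves pixel values untouched, while a fractional shift with bilinear interpolation (SciPy's ndimage.shift here, used purely for illustration; it is not Nebulosity's resampler) smears the star's flux into its neighbors:

    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    # A toy "star field": one bright pixel on a dark background.
    img = np.zeros((64, 64))
    img[32, 32] = 1.0

    # Whole-pixel registration: pixels move, values are untouched.
    whole = np.roll(img, (3, -2), axis=(0, 1))

    # Sub-pixel registration: bilinear interpolation (order=1) resamples
    # the image, spreading the star's flux across four pixels.
    fractional = subpixel_shift(img, (3.4, -2.3), order=1)

    print(whole.max(), fractional.max())  # 1.0 vs ~0.42: interpolation blurs

A single-pixel star is the extreme case, but the same softening happens, to a lesser degree, to real star profiles.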

Now, toss into this the fact that our long-exposure shots are already blurred by the atmosphere (and to a varying degree from frame to frame) and you've got a mess if you try to determine which is better from just thinking about it. So, we have what we call an "empirical problem." Let's get some data and test it.

I took some data I had from M57 shot with an Atik 16IC at 1800 mm of focal length and some wider-field data of M101 shot on a QHY 2Pro at 800 mm. I ran the M57 data through a number of alignments and Michael Garvin ran the M101 data through several as well.

Here are the images from M57 (click here for full-res PNG file). All were processed identically, save for the alignment / stacking technique.


Here are the images from M101 (click here for full-res PNG version). Again, all were processed identically. Here, the image has been enlarged by 2x and a high-pass filter overlay used to sharpen each (all images were on the same layer in Photoshop so the same exact sharpening was applied).


So what do we take from all this? Well, first, there's not a whole lot of difference among the methods. All seem to do about the same thing. To my eye, adding the "starfield fine tune" flag in Nebulosity helps a touch and using the resampling (adding a rotation component) hurts a touch, but these aren't huge effects. Someday, I'll beef up the resampling algorithm used in the rotation + (scale) version. Comparing Nebulosity's results with those of other programs again seems pretty much a tie. I can't pick out anything in their stacks that I don't see as well in Nebulosity's. Overall, these images seem to be limited more by the actual sharpness of the original data than by the stacking method.


IP4AP - Save yourself $$$$$

IP4AP - Image Processing for AstroPhotography

Why do my images suck? OK, let me rephrase that a bit. Why do my images suck less than they used to, yet still not have the same bang that those really good images have? Is it my gear? Is it me?

Well, like most people, I turn first to the gear. My scope, mount, or camera must be the culprit. Or, I don't have some cool new software someone else has. That's it... yea... If I had a new camera and got that new software that does the Zambizi Triple Datum-Redux method, I'd be all set. Break out the wallet, this won't be cheap. But, I'll get those better shots!

Nope. I've been in this now long enough to know that that's just not going to do it. Sure, it may help some, but it won't really do it. The problem is me, not the gear.

I was first introduced to this concept (that I'm the one that sucks, not the gear) in another context - car racing. Like every American male, I thought I knew a good bit about how to drive and that out on a track I could keep up with most anyone, so long as the car was capable. How laughably wrong I was. A number of years ago, I first tried autocrossing (aka Solo Racing) in an old Porsche 911 I had (as old as I was). I came in last. Dead last. It must be the car - these guys all had their cars prepped a lot more than I did. Yea. New tires, rework the engine... Break out the wallet, this won't be cheap.

Then I signed up for a course run by the local club (the San Diego PCA - great group) and found out I was by far the limiting factor. Others could make my car fly around the track, and I was using habits I'd built up on the street that were just plain wrong. Was I stupid? No, just ignorant. Nobody had taught me how to really drive and how to really control a car -- how to use weight balance, friction, and your power to really go fast. I picked up a number of tricks and techniques there that got me going a lot faster and made me a better driver - on the track and off. The hobby became a lot more fun (steering with the gas pedal is still a serious blast), I wasn't so frustrated, and, well, I didn't suck quite so badly. All for a nominal fee for the course.

Why this tangent? Well, it's the same thing here. I've been my own worst enemy when it comes to data processing. I've had bad habits that have led to results that didn't do justice to the underlying raw data. The gear's not the rate-limiting factor. I am. Or, I was.

Enter Warren Keller and his IP4AP series of tutorials. IP4AP may get confused on the tongue with AIP4WIN, but the two products couldn't be further apart. Jim Burnell and Richard Berry are technical wizards with a great product in AIP4WIN and The Handbook of Astronomical Image Processing. If you want the geek / weenie stuff, they're your guys (and as a geek, I mean that in a flattering way). But if you want to easily learn how to use Photoshop (or AstroArt) by looking over an expert's shoulder, Warren's your man.

A few weeks ago, I got an early copy of his newly done / redone "Intermediate" set of tutorials. This one is aimed at using Photoshop to handle things like gradients and vignetting issues, powerful use of the Curves tool, dealing with LRGB data, and the use of DDP (this last one in AstroArt). Warren spends a lot of time himself with Photoshop, a lot of time learning techniques from others, a lot of time devising his own techniques, and a lot of time teaching people these techniques. This all shows in the videos. The idea behind IP4AP is to let Warren separate the wheat from the chaff among the techniques out there, distill them down, and then show you how they work and why. In the video format, you get to look over Warren's shoulder as he processes images. It's a really effective technique. While you don't get to ask questions right then and there (you can e-mail him and even set up one-on-one sessions), you get to watch the videos whenever you like and replay sections as often as you like to see just what he did to achieve a certain effect.

Warren can get a bit goofy at times, but it's part of his wry sense of humor and it serves a real purpose. While watching the videos, you're there to learn, but you shouldn't lose sight of the fact that this is a hobby and meant to be fun. The semi-random appearance of a picture of a cow helps keep you in this mindset.

So, if you're reading this blog and aren't a contender for APOD (and if you're reading this blog, odds are pretty much 100% you're not), and if you're not trying to image by lying on your back with a point-and-shoot, holding the shutter down and manually tracking the stars (so that it's not entirely the equipment's fault), I'll make the following wager: spending $40 on IP4AP will do more for your images than any purchase you could make at 10 times that cost. So, wanna break out the wallet for that new scope, camera, or mount? Or want to part with a mere $40 (direct) and quite possibly (if not probably) get an even bigger improvement in the images you're making? The choice is yours. I know what I'll be doing when he comes out with the next DVD in the series...