Stark Labs: Affordable, Powerful, and Easy to Use Astrophotography Software




Telescope Stability Systems

I’ve got a nice Takahashi EM-10 mount. While its GOTO system is a bit quirky (an understatement), it sure is a nice and smooth mount. Its polar scope is also a real joy, as in about 5 minutes I’m aligned well enough to image all night long. In fact, here’s a shot spanning 100 minutes of exposure, taken using PHD and a QSI 540wsg (integrated off-axis guider) after under 5 minutes of polar alignment and no further adjustment. Overall, a very fine mount.


Its tripod is designed to be nicely portable and, while sturdy, still isn’t a rock. Tak intends the mount for lighter payloads than I often put on it, which makes the tripod start to show signs of weakness. Recently, I reviewed a Telescope Stability Systems tripod, the “Stable Max,” for Astrophoto Insight magazine. The tripod is such a wonderful addition to my arsenal that I bought it. From the beefy legs and center column, to the precision-machined bushings everywhere, to the nicely modular approach that lets you use the tripod with any mount (by means of an adapter plate), this thing is a real joy.

One thing I can’t really convey in the format of the review is just how this stacked up against the stock tripod in real use. When out testing it, I prepared a series of videos doing things like tapping on the OTA and dropping (padded) objects onto the mount while recording the resulting bounces with a camera. PDF just doesn’t capture video all that well. So, you get to see them here. This is me tapping the OTA with the rig on the TSS tripod and on the stock Tak tripod.


Here I am dropping a heavy weight onto the tripod’s countershaft (rope attached to scope and to padded weight, dropped from the same position each time) with both setups:


If a picture is worth 1,000 words, this video was worth buying the tripod. I don’t fear the wind nearly so much as I used to. Sure, you hit the countershaft with a weight and the image moves. But it settles down very quickly, getting you right back on track, and that helps show just how solid the setup is overall (a quick blip in a 5-minute exposure isn’t nearly as damaging to your image as a long stretch of bouncing).

Equinox 6

When I moved over to the Mac a few years ago, I kept Windows running in a virtual machine a lot of the time. There were two programs I just had to have going whenever I was thinking about a night’s imaging or about new scopes or cameras. One was Ron Wodaski’s CCD Calc and the other was Cartes du Ciel. Being a “switcher,” I wanted something native to the Mac, but at the same time I was so used to both of these, and they did just what I needed them to do. Still, it was time to make a switch.

I looked at a lot of options on the Mac. AstroImageBrowser provided a decent stand-in for CCD Calc. It does some things a lot better than CCD Calc, but there are a few things it doesn’t do, or doesn’t do as well. Neither program lets you see how things will look with an FOV of more than 1 degree, and with today’s DSLRs and other decent-sized chips, it doesn’t take much to get larger than 1 degree. (These days, I find myself using something else, however, thanks to some recent updates to Equinox 6 - read on.)

When looking at “planetarium” software, I had a much harder time. I’ve tried Starry Night Pro on both the Mac and Windows and there was something about the interface I just couldn’t ever get my head around. The sky never seemed to move the way I wanted it to and, despite giving it a solid try, it just didn’t work for me. In addition, their support of my Takahashi Temma mount was limited to Windows. I’d been a fan of TheSky for a long time, but their Mac version was very out of date. (TheSkyX is out now, but still only in the Student Edition.) I tried Stellarium and Celestia and both are beautiful, but neither would let me really plan an evening of imaging, much less control the Tak mount. I looked at Voyager 4, but the price tag was a bit steep, especially since it wouldn’t control the odd Tak mount (sense a theme here?). AstroPlanner could do a lot, but I really did want more of a planetarium-style interface. It’s a neat package that I encourage folks to look at, but it’s not exactly a replacement for CdC. Oh, and yes, I did even work with CdC’s code, getting things sort-of compiled and going on OS X, but this isn’t ready for prime time (or wasn’t then). (As a side note, the mount can be run via cocoaTemma if you know where you want to go by name or coordinate, and I do have things going nicely with a PDA and TheSky Pocket Edition.)

Enter Equinox 6, by Darryl Robertson. Equinox 6 has been around for a while and I won’t pretend to know its history, but I’ve now used it for almost two years and can say I like what I see. At first, I must admit, it took a little getting used to. For starters, there is a separate “main view” that shows the whole sky (or whatever portion of the sky you’re zoomed to) and a “scope view” that shows what should be in your telescope. It’s the “scope view” that can show the fainter stars, show camera or eyepiece overlays, etc., and it took a bit of time to get used to this split setup. Now, it doesn’t bother me and can even be a nice feature at times. In truth, it didn’t take very long to adjust (and, as always, reading the manual actually helped. It’s a very nice manual.)

It’s got all the features you’d expect from a nice planetarium package (and then some!) and I encourage you to take it for a test drive even if just for these (and yes, it does even control my Tak Temma mount!). What I’d like to point out today are one long-standing thing and two new things that I think make Equinox 6 exceptionally cool and help show what kind of product it is. The latter two are really evidence of the first: Darryl is just the sort of guy you’d love to have writing a program you use. In my time using it, I’ve gotten to watch it grow and watch how quickly bugs are fixed and patches put out. I’ve spotted a few bugs myself and let him know, either via e-mail or via the Yahoo group, and patches arrive promptly. Any time you write software, and certainly any time you don’t have a large development team and a substantial beta-tester crew, you’re going to have bugs. (As we all know, even when you do have huge development teams, bugs happen.) Bugs are part of life with software, and what matters is how well and how promptly they get fixed. Darryl gets an “A” in my book here.

I’ve pitched a few ideas to him for the program and some he’s gently said “See page X in the manual -- it’s already there” and others he’s taken to heart and thought about. Some of these (and I’m certainly not the only one giving suggestions) have appeared in the program. One I particularly like is the ability to grab shots from the DSS of any FOV you want and see how your scope + camera combination will frame things. True, you may need to spend a few minutes figuring things out if you don’t have a pre-defined camera (which is really just a pre-defined sensor size), but once done you can do things like this:
That’s a nice wide view of the Veil from DSS data as it would look on my Borg 101 f/4 and QSI 540. I can swap around cameras or scopes, rotate the FOV, nudge things around, etc. and still see just how much of this target will fit without having to deal with limitations of 1 degree of FOV. When thinking about scopes or cameras, I can grab any number of targets, see just how it will fit, etc. I can also see just how faint something really is since the DSS shots are all standardized. This has been a really nice feature for me and one that makes Equinox stand out for me.

Another recent addition is the ability to superimpose information from the NOMAD star database in the “scope view” (I had nothing to do with this one and was just pleasantly surprised when it arrived in an update!). One potential limitation of Equinox has been that the main star database is limited to 12th magnitude stars, even in the “scope view”. 99% of the time, that’s not a limitation, but at times I’ve needed to see and/or know the magnitudes of something fainter. NOMAD is a “simple merge of data from the Hipparcos, Tycho-2, UCAC-2 and USNO-B1 catalogues, supplemented by photometric information from the 2MASS final release point source catalogue.” With it, you can get things like detailed magnitudes for stars down to 18th magnitude. Here is a shot of Equinox’s “scope view” with the filter set at 15th magnitude:



I don’t think I’m going to have trouble finding star magnitudes anymore! Of course, you can turn aspects of this display on or off, showing just the stars if you like, etc.

As noted above, there is a lot more to Equinox 6 than just these two features; it is a mature package. What these new features help show is that Equinox 6 continues to evolve, with slick new features continually being added. Registered users get free updates, so they get all the bug fixes and new features. I like that approach (it’s what I use in my commercial software). If you’re a Mac user and haven’t given Equinox 6 a try, or haven’t looked at it for some time, head on over to its site and give it a shot.


Astrophoto Insight & Astronomy Technology Today

Some of you may have seen articles and reviews I have done in Astrophoto Insight, Astronomy Technology Today, and Cloudy Nights (you can find these in the Articles and Reviews section of my personal page). I consider these three of my favorite astro-resources. Toss in the various Yahoo Groups and you’re set as far as I’m concerned. While the Yahoo groups and Cloudy Nights are free websites, Astrophoto Insight and Astronomy Technology Today both involve subscriptions for full access. Now, these aren’t break-the-bank kinds of prices. Astrophoto Insight will let you download the current issue for free and wants $24.95 for a “Platinum” level membership that gives you full access. Astronomy Technology Today wants $18 (for US print + online, or for international online access). My advice: subscribe to both.

I subscribe to both, and I do so not just because I’ve published in them or met the guys who run them. Sure, Al from Astrophoto Insight and Stuart and Gary from Astronomy Technology Today are all stand-up guys. But that and $1.69 will get you a cup of coffee; it’s not reason enough to open my wallet for a subscription. I subscribe because they publish solid articles on things I want to read about. From real tips and techniques to solid reviews, both do a bang-up job. And no, I’m not talking about my own reviews and articles in here. I certainly skip those; I can read them for free. When the latest issue of either comes out, I devour it, the way I used to devour S&T years ago.

“Oh, but magazines are driven by ads” one might say. Sure, that’s a part of it. I’ve got a very long history with magazines and reviews as I grew up in the business (my father was a magazine editor). Ads give the magazines a lot of the money they need to do what they do but this can present a conflict of interest. So far, I’ve not detected biases in the reviews that would suggest the reviews are being slanted based on ad money. As someone who’s written for both, I can also state that I’ve been able to freely talk about the downsides of gear in my reviews. To me, that’s huge. Any product will have its good sides and bad. Some have more good and some more bad. To trust a source, you’ve got to know that when there are bad sides, they’ll be covered and not swept under the rug. Seeing both from the inside has made me feel I can certainly trust both. (FWIW, the more common thing to have happen is that when a product is really bad, it just won’t get reviewed. No, I’ve not hit that yet with either, but I did see it a bit growing up.)

Ads also do things for readers (apart from helping the magazine exist). They let us see neat new toys and find out new things going on in our hobby. Just a few days ago, after seeing an ad in one of them, I said, “Hey, that’s a cool new gizmo!” and contacted the company for more info. Despite spending a lot of time with this hobby (far too much, my wife would say), I’d missed this new gizmo. (Just so you don’t think I’m making this up, it was the Moonlight Telescope’s SCT focuser that lets you screw the focal reducer into the drawtube.)

There’s another thing these two magazines do for readers when it comes to ads: they show ads for products that can’t make it into the bigger magazines. I certainly know this from first-hand experience. Our hobby has big companies and small companies, and the small ones have done a lot for our hobby too. Small ones often can’t afford to advertise in the bigger magazines but can potentially afford to advertise in API and ATT. Or, even when they can, the ad gets crammed into a small space and can’t carry as much information.

If you’re not a subscriber / haven’t checked them out, do so. Heck, if somehow you’re reading this and don’t know about Cloudy Nights, stop reading this and get over there now. We’ve got some fantastic sources of information and communities available to us. Use them. Support them.

Craig

PHD, subpixel guiding, and star SNR

Q: I've seen some quote the accuracy of the guiding available in their software with numbers like 1/20th of a pixel accuracy. a) How is this possible, and b) How accurate is PHD?

There are a whole bunch of things that will go into how accurate your guiding is. Your mount, the wind, flex, etc. all go into this. Here, we'll go over finding the star's position to an accuracy well below a pixel and how the star's signal to noise ratio (SNR) affects this accuracy. Since the SNR of the star (which is determined by the amount of light hitting the CCD and by the camera's noise) affects the accuracy, I won't quote a single hard and fast number as to how accurate it can be. I could quote an extreme (1/200th of a pixel) but I doubt it would mean much and would sound more like marketing hype. Since PHD is free, there tends to be little incentive for marketing hype.

In an article I wrote for Astronomy Technology Today on PHD Guiding, I went over the basics of how you find the middle of a star to accuracy well below a pixel. Here's an image that may help. Below we have stars hitting three different 2x2 areas of a CCD. In the first case, the star is hitting the exact intersection of these four pixels, so the star's light is evenly distributed among all four. In the next, we have the star still centered vertically but shifted to the right just a bit. More of the star's energy is now on the right two pixels than the left, so these two are brighter than the left two. For both the left and the right, the top and bottom pixels have gotten the same amount of energy, so there is no top/bottom difference. In the last panel, the star is now centered a bit down and to the right of the intersection of the four pixels. Most of its energy hits the lower-right pixel with equal amounts hitting the lower-left and upper-right and the least hitting the upper-left. Thus, so long as the star lights up more than one pixel, we can estimate small shifts in its position.
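The pixel-brightness differences described above can be turned into a sub-pixel position with an intensity-weighted center of mass (a centroid). This is just a minimal sketch of the idea, not PHD's actual star-finding code:

```python
import numpy as np

def centroid(img):
    """Intensity-weighted center of mass of an image patch.
    Returns (x, y), potentially to well below a pixel."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Star light split evenly over four pixels: the centroid lands
# exactly on their shared corner (x = y = 1.5 here).
even = np.zeros((4, 4))
even[1:3, 1:3] = 25.0
print(centroid(even))

# Brighten the right-hand pair (star nudged right): the centroid
# shifts right by a fraction of a pixel.
nudged = even.copy()
nudged[1:3, 2] = 35.0
print(centroid(nudged))
```

As long as the star's light spreads over more than one pixel, the relative brightness of the pixels encodes the fractional position, just as in the three panels above.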


Here's another way to look at this. In this surface plot, I'm showing the energy of a simulated star that is perfectly centered.


Here is that same star shifted just a touch -- 0.2 pixels one way and 0.7 pixels in the other:

Note how the star's profile is no longer symmetric. We can use this shift to estimate just how much the star has moved. With perfect images such as these, it's easy to pick up small shifts. The real question is how well it does on real-world stars. I can't exactly go and take pictures of stars that have moved 0.001 pixels or anything of the sort, because I'd have no way of knowing how much the star really moved. (I don't even know if the Hubble could hit that...) What I can do is simulate real stars with real noise and see how well it works.

Here are profiles of the same star from two different cameras. (There are actually two stars, and one camera is rotated 180 degrees relative to the other, so the second star you see clearly to the right in the image on the right is to the left in the noisy one on the left.) Clearly, the one on the left has more noise and a lower SNR than the one on the right. These are real profiles from real stars with real cameras (mag 11.7 star, 2 s exposure, Celestron CPC 1100 XLT at f/6.3-ish).


Here, for reference, is an image of the star on the left. This is certainly not a very clean star to guide on, but PHD will lock onto it and guide off of it.


To simulate this, I can get something pretty close by modeling a star with variable amounts of Gaussian noise:

I apologize for the fact that the size of the X and Y axes differ between the simulations and the actual stars. The simulations show a more "zoomed out" view than the actual star profiles, but the SNRs of the two setups are comparable. The big question is how accurately can PHD locate the center of the star in each kind of image?

To get at this, I moved the simulated star in 0.05 pixel steps in both directions so I would know the true distance the star moved (using good old Pythagoras) and would have a bunch of samples to get a good estimate of PHD's error with no noise, low noise (right image), and high noise (left image). Shown here is the average error in location for the three noise conditions (error bars show the standard error of the mean).
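A toy version of this experiment can be sketched in a few lines. The Gaussian star model, the noise levels, and the plain centroid estimator here are my own assumptions for illustration, not PHD's internals:

```python
import numpy as np

def centroid(img):
    # Intensity-weighted center of mass (simple sub-pixel estimator)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def star_at(cx, cy, size=21, peak=2000.0, sigma=1.6):
    # Gaussian-profile star rendered at a fractional pixel position
    ys, xs = np.indices((size, size))
    return peak * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
noise_levels = {"none": 0.0, "low": 5.0, "high": 40.0}   # noise SDs (assumed)
errors = {name: [] for name in noise_levels}

for step in range(10):                    # move the star in 0.05 pixel steps
    tx = ty = 10.0 + 0.05 * step
    clean = star_at(tx, ty)
    for name, sd in noise_levels.items():
        frame = clean + rng.normal(0.0, sd, clean.shape)
        cx, cy = centroid(frame)
        # Measured position vs. true position, good old Pythagoras
        errors[name].append(np.hypot(cx - tx, cy - ty))

for name in noise_levels:
    print(f"{name:>4} noise: mean error {np.mean(errors[name]):.4f} px")
```

With these assumed numbers, the noiseless case recovers the position almost perfectly, and the error grows as the noise level rises, in line with the pattern described in the text.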



Without any noise, PHD is accurate down to (on average) 0.004 pixels or 1/250th of a pixel. With a low amount of noise, the accuracy goes to 0.018 pixels or 1/56th of a pixel and with high amounts of noise it goes to 0.18 pixels or 1/5.5th of a pixel. Better stars and/or better guide cameras will get you more accuracy, but even with this very noisy star, we're still at 1/5th of a pixel accuracy.

What about noise reduction? Does smoothing the image help? Well, it helps in terms of increasing the odds that PHD will lock onto the star, but it doesn't help the accuracy of localization at all. At best, it does nothing and at worst, it hurts your accuracy a bit. Again, it will help PHD find the star in the first place, so if this plot included "lost stars" as errors (and they are errors), it would have a nice effect. But once found, smoothing does nothing to help the localization.
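You can convince yourself of this with a small test. Here I add noise to a synthetic star and estimate its position with and without a simple 3x3 box blur; the star model, noise level, and centroid estimator are assumptions for illustration. Because a symmetric blur barely moves a star's center of mass, the localization error for this kind of estimator comes out essentially unchanged:

```python
import numpy as np

def centroid(img):
    # Intensity-weighted center of mass
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def box_blur3(img):
    """3x3 box smoothing of the interior (edges left as-is for brevity)."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

rng = np.random.default_rng(1)
ys, xs = np.indices((21, 21))
true_x, true_y = 10.3, 9.8
star = 500.0 * np.exp(-((xs - true_x) ** 2 + (ys - true_y) ** 2) / (2 * 1.6 ** 2))

raw_err, smooth_err = [], []
for _ in range(200):
    frame = star + rng.normal(0.0, 15.0, star.shape)
    for img, errs in ((frame, raw_err), (box_blur3(frame), smooth_err)):
        cx, cy = centroid(img)
        errs.append(np.hypot(cx - true_x, cy - true_y))

print(f"raw:      {np.mean(raw_err):.4f} px")
print(f"smoothed: {np.mean(smooth_err):.4f} px")
```

In runs of this toy model, the two mean errors come out nearly identical: smoothing makes a faint star easier to detect, but it doesn't buy you localization accuracy.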



So there you have it. Can we get to sub-pixel levels? Sure. Under perfect conditions, PHD Guiding gets you down to an insane level of 0.004 pixels worth of error, showing that the basic math in sub-pixel position calculation works. Done well (and yes, it can be done poorly - I know from first-hand experience a few ways to do it poorly and/or to introduce systematic biases into the star-finding routine), it can get you more accurate star localization than you'd ever really need. The SNR of your guide star will affect your accuracy, however. Under more real-world conditions, accuracy drops, but it can still be very good. I have seen many stars in my guide frames with SNRs like the low-noise star used here, which gets you 1/50th pixel accuracy. One notable implication is that with higher-SNR stars you can afford to use wider-FOV guide setups, since the higher SNR leads to increased precision in localization.

In practice, the High Noise star here is about as noisy a star as you can successfully guide on with PHD. Much worse than this and you're going to be suffering lost locks occasionally. With even this level of noise, we're still below a quarter of a pixel worth of error. Odds are, your mount, the wind, flex, etc. will be causing more trouble than even this error unless you're using a very wide FOV guide setup.





Stacking accuracy

Q: How can I get the sharpest images in my stack using Nebulosity? How does Nebulosity compare to other stacking tools?

Nebulosity has several means of aligning the images prior to actually stacking them. We can use simple translation, translation + rotation, translation + rotation + scaling, and Drizzle. I've covered Drizzle in an article for Astrophoto Insight, so I'll focus on the more traditional methods here.

The big difference between "translation" and "translation + rotation (+ scaling)" is that when doing a translation-only alignment, Nebulosity does not resample the image. It does "whole pixel" registration. This sounds worse than "sub-pixel" registration. Isn't it better to shift by small fractions of a pixel? Well, it would be, except that when you do so, you need to know what the image looks like shifted by a fraction of a pixel. That means you must interpolate the image, and interpolation causes a loss of sharpness. So, you're faced with a trade-off: keep the image exactly as-is and shift it by whole pixels, or resample it and shift it by fractional pixels.
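The interpolation penalty is easy to see in one dimension. This is just a sketch (not Nebulosity's actual resampling code) of shifting a star profile by half a pixel with linear interpolation; the interpolated peak comes out lower, which is the softening in question:

```python
import numpy as np

def shift_half_pixel(profile):
    """Shift a 1-D profile by 0.5 pixels via linear interpolation:
    each output sample is the average of two neighboring inputs."""
    return 0.5 * (profile[:-1] + profile[1:])

x = np.arange(15, dtype=float)
star = np.exp(-((x - 7.0) ** 2) / (2 * 1.2 ** 2))   # 1-D Gaussian star profile

shifted = shift_half_pixel(star)
print(star.max(), shifted.max())   # the shifted peak is noticeably lower
```

A whole-pixel shift, by contrast, is just a re-indexing: every pixel value is preserved exactly, which is why translation-only alignment avoids this blur entirely.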

Now, toss into this the fact that our long-exposure shots are already blurred by the atmosphere (and to a varying degree from frame to frame) and you've got a mess if you try to determine which is better from just thinking about it. So, we have what we call an "empirical problem." Let's get some data and test it.

I took some data I had from M57 shot with an Atik 16IC at 1800 mm of focal length and some wider-field data of M101 shot on a QHY 2Pro at 800 mm. I ran the M57 data through a number of alignments and Michael Garvin ran the M101 data through several as well.

Here are the images from M57 (click here for full-res PNG file). All were processed identically, save for the alignment / stacking technique.


Here are the images from M101 (click here for full-res PNG version). Again, all were processed identically. Here, the image has been enlarged by 2x and a high-pass filter overlay used to sharpen each (all images were on the same layer in Photoshop so the same exact sharpening was applied).


So what do we take from all this? Well, first, there's not a whole lot of difference among the methods. All seem to do about the same thing. To my eye, adding the "starfield fine tune" flag in Nebulosity helps a touch and using the resampling (adding a rotation component) hurts a touch, but these aren't huge effects. Someday, I'll beef up the resampling algorithm used in the rotation + (scale) version. Comparing Nebulosity's results with those of other programs again seems pretty much a tie. I can't pick out anything in their stacks that I don't see as well in Nebulosity's. Overall, these images seem to be limited more by the actual sharpness of the original data than by the stacking method.