Results of a two-camera test

NOTE #1: This thread is here just for information for those with two camera setups - not for discussion as to possible changes to SGP. That is already in another thread.

NOTE #2: This was an unguided system for these tests since my setup allows this to work for as long as 20 minute exposures. None of the below is likely to work with a system that requires guiding.

My setup consists of:

  1. Paramount ME, permanently mounted, with very precise alignment, good PEC, a very large pointing model, and ProTrack in use.
  2. Three systems on the mount
    a) FSQ 106 and Moravian 16200 (unguided)
    b) TOA-130 and self-guiding SBIG STT-8300
    c) Canon 200 mm lens at F5 and SBIG STT-8300 (unguided)

The guided system (b) was not used for this test.

The “master” system was the FSQ/Moravian. It took 10-minute RGB images all night with the usual focusing and meridian flip, plus “dither by mount” (aka the poorly named “direct mount guiding”).

The “slave” system (really just a separate instance of SGP) was the Canon 200 mm lens with the STT-8300. It took 5-minute RGB images all night in a totally “dumb” configuration: all it did was shoot and focus as required.

Bottom line: as expected, all the FSQ frames were great (no surprise there), and 40 of the 51 frames from the 200 mm lens showed no significant star elongation; the remaining 11 showed elongation ranging from mild to severe. Better than I would have expected, really.

Please do not interpret this as a reason not to do a “real dual software” since this use case is very limited (especially since it was unguided) but I figured the data would be helpful to folks with good, well aligned and modeled, permanent mounts who want to supplement, especially with a camera lens system.



I would add that the reason this would probably not work very well with a longer, guided system is that dithers with this unguided “dither by mount” approach are very fast (a couple of seconds), whereas with guided systems they can take many seconds, or even a minute or two, to stabilize in the new location. That is more likely to create elongation in the secondary system if there is no software (like that being discussed in the other thread) to avoid it.

Very cool system. My use case is that of a guy with much lesser equipment… I have seen some really cool shots where folks use the main/master imaging system to shoot a target while the 2nd/slave camera is set for a wide field of the same area of the sky. The images can then be manipulated together for video zoom effects etc.

Anyway, that said, I started out in AP using my Canon 70D. My ES127’s handle has a slit where I could bolt the Canon on top of the scope.

My setup for reference:

ES127 / 60 mm Stellarvue guidescope / ZWO ASI290 Mini guide cam / MoonLite focuser / ES 3” reducer/flattener / ZWO 8-slot filter wheel / ZWO ASI1600MM-Cool “master” / hypertuned Orion Atlas.

It’s on a tripod and I do travel with it, using a PoleMaster to help with polar alignment.

The main has roughly 666mm focal length with the reducer on. I figure most of the time I would probably be shooting the Canon at 200-350mm depending on which lens / target etc.

Since I get such limited sky time, maximizing the number of subs I can take per time interval is very, very appealing to me.

Thanks for posting this CCDMan. I hope others chime in with ideas etc.

I’ve been dual imaging for about 9 months now with a 10" RC/Moravian G3-16200 (0.62"/px) and an 80mm/QHY163M (1.63"/px). I dither the RC every 3-4 images, and at those image scales I see hardly any star elongation in the 80mm images.
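The image-scale argument here can be made concrete: a dither expressed in main-camera pixels shrinks by the ratio of the two image scales when projected onto the wider-field camera. A minimal sketch using the two scales quoted above (the 5-pixel dither amount is a hypothetical example, not a figure from the post):

```python
# Image scales from the post (arcsec/pixel)
MAIN_SCALE = 0.62   # 10" RC + Moravian G3-16200
WIDE_SCALE = 1.63   # 80mm + QHY163M

def dither_in_wide_pixels(dither_main_px: float) -> float:
    """Convert a dither measured in main-camera pixels into the
    displacement it produces on the wide-field camera's sensor."""
    dither_arcsec = dither_main_px * MAIN_SCALE
    return dither_arcsec / WIDE_SCALE

# A hypothetical 5-pixel dither on the RC shifts the 80mm image
# by only about 1.9 pixels - small enough to hide in the stars.
print(round(dither_in_wide_pixels(5), 1))  # 1.9
```

This is why elongation becomes a bigger concern once the two image scales are similar, as described in the next paragraph.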

But now I’m setting up a system where the image scales will be much more similar, and thus dithering will affect the slave system much more. However, I’m looking at this in a slightly different way than you are. With the new CMOS cameras, short exposures are the norm. At most I will only have to throw away one 2-minute CMOS exposure every 30-40 minutes. I won’t be dithering every CMOS exposure, but experience has shown me that with hundreds of CMOS subs, dithering every so often is good enough.

Nice systems! I mainly built the 200 mm system for winter use. We have very rare clear nights and very poor seeing in the winter (the total opposite of summer), so undersampled (5.57"/px) and quick is what’s needed. It is so easy and fun to use that I built a second one that will be piggybacked on the New Mexico Deep Sky West system I share with a buddy (although that one is better sampled, with a QSI 6120).
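For reference, the 5.57" figure above follows from the standard pixel-scale formula: 206.265 × pixel size (µm) ÷ focal length (mm). The 5.4 µm pixel size assumed here is the published spec of the STT-8300's KAF-8300 sensor, not a number stated in the post:

```python
def pixel_scale(pixel_um: float, focal_mm: float) -> float:
    """Image scale in arcsec/pixel from pixel size (um) and focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# STT-8300 (assumed 5.4 um pixels) behind the 200 mm lens:
print(round(pixel_scale(5.4, 200.0), 2))  # 5.57
```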

I remember you had the Moravian 16200. I had tons of trouble with mine - chip mounted not square and filter wheel also not mounted right. Could not get decent images at the edges. Finally sent it back for rebuilding and got a Nightcrawler focuser for more rigidity and now it is working well. Took 6 months, though. Moravian and Deep Sky Products were both very good about it and it did not cost me anything but time.

Even more interesting is that I had a chance to test the 16200 with a new FSQ106EDX IV. It was actually poorer than my old FSQ 106N in terms of edge stars (focuser and camera were identical). Go figure.

I don’t have anything to add other than I wish I could get 20-minute subs unguided.


Sorry to hear that you had trouble with your Moravian, but glad to hear that Moravian took care of you. My camera has been rock solid for over 2 years now. I know of one other person whose main board developed a problem. Moravian sent him a new board that he was able to install himself. Again, good customer service.

The biggest problem I’ve experienced with dual scope imaging is simply aligning the two scopes to be close to parallel. With my current system that’s not super critical since the smaller scope has twice the image scale. But my new setup will require more precision. I got it pretty close last night (first light) but I may need to invest in an Alt/Az pointing system.

Yes, pretty much needs a mount system that is designed for that and mostly a permanent setup. There may be others but the main ones I know of are 10 Micron and Software Bisque mounts.

Did a further test last night (testing is all I can do because of the smoke haze :man_firefighter:)

This time it was 15-minute main-cam shots and 7.5-minute 200 mm shots, and only 5 of the 33 200 mm images were elongated.

Good info all round, there are clearly some clever people at work here :slight_smile:

I’ve also been using a dual scope setup for a while now:

  • Avalon Linear Mount
  • Takahashi FSQ 85 with reducer / Moravian 8300
  • WO 71 / Moravian 8300 / Avalon XGuider for FOV alignment
  • Takahashi FS60 / Lodestar2 for guiding

The fields of view of both image trains are very similar and the main aim is to capture L with one and RGB with the other (or for narrow band a 2/1 split that suits the target - can’t afford a duplicate Astrodon :slight_smile: ).

I’ve also been successful in running a master/slave setup with two instances of SGP, which runs fully automated with the exception of connecting the cameras (Moravian have a great driver which allows connecting the camera by serial number, but unfortunately the numbers entered in the respective instances get muddled up, so manual connection is needed). The slave instance is dumb: it only acquires images and runs the focus routine; everything else is done by the master instance. Because of the separate guide scope the slave only really loses images during the meridian flip - as long as I don’t dither :frowning:

So onto dithering! Unfortunately I don’t have an unguided mount, so the first solution is out (in fact the Avalon must be guided due to the way it’s constructed). Because of the CCDs I won’t be able to do short exposures, and because exposure times for me are generally similar between the two scopes, any overlap with a dither will lose a fair amount of integration time. So really I would need some synchronisation to happen.

Luckily I was thinking a lot about this yesterday (after commenting in the other thread), and nothing beats a good challenge :slight_smile:! So I’ve had a look at all the options to interface with SGP. Unfortunately, at the moment there is not enough there to implement a ‘clever solution’, as there is just not enough information and access available. Having said that, I did come across another idea which might work well with my particular setup (but maybe not for other configurations). It’s also not ideal, as it still wastes time, but better than nothing I guess.

The basic idea is that (via the SGP API) I am monitoring the camera status of the master system. Here I am looking for the camera status to change from anything to ‘INTEGRATING’. Once this happens I know that an exposure has started and (assuming I know the exposure time that is used beforehand) I know how much time I have available to expose on the slave system before ‘other stuff’ such as dithering or focusing might occur.

Now, if I select a slightly shorter exposure time on the slave system (say 5s less, assuming that download times are equal) I can guarantee that the slave system will have finished its exposure before the master system. I could of course also do three exposures if the slave system has a much shorter exposure time and will be finished by the time the master has finished its exposure. After finishing an exposure the slave system will wait again until the master camera status switches to ‘INTEGRATING’ on the next image and the game begins again.

Practically there are obvious drawbacks, for example when the master system focusses the slave has to wait. Also when the slave system focusses it might miss the master system starting an exposure and it will have to wait until the next exposure. But that doesn’t worry me too much as the focus runs don’t happen that often.
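The exposure-budget arithmetic above can be sketched as a small helper. The 5 s safety margin and the equal-download-time assumption are from the post; the function name and the 600 s / 295 s example values are mine:

```python
def slave_exposures_per_master(master_exposure_s: float,
                               slave_exposure_s: float,
                               margin_s: float = 5.0) -> int:
    """How many slave exposures fit inside one master exposure,
    leaving a safety margin so the slave always finishes first
    (assumes comparable download times, as described above)."""
    budget = master_exposure_s - margin_s
    return max(0, int(budget // slave_exposure_s))

# Hypothetical example: master shooting 600 s subs, slave shooting 295 s subs.
print(slave_exposures_per_master(600, 295))  # 2
```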

As I’m lying in bed with the flu, I’ve fired up my computer and created a rough-and-ready .NET app which does exactly that. When started, it will poll the API until the master camera starts integrating, then close. In combination with SGP this can be used with scripts (which also allow calling an EXE file), i.e. when the script function in the slave instance calls this ‘mini app’ it will start and run until the master camera starts integrating, then close. This causes the slave instance to pause until that time, then proceed with integrating.
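A minimal sketch of such a “wait until the master integrates” helper. The post’s app is .NET; this Python version is only illustrative, and `get_status` is a stand-in for whatever SGP API call returns the camera state - I haven’t verified the real endpoint or status strings beyond the ‘INTEGRATING’ value mentioned above:

```python
import time

def wait_for_status(get_status, target="INTEGRATING",
                    poll_s=1.0, timeout_s=3600.0):
    """Poll get_status() until it returns `target`, then return True.
    Returns False on timeout. Mirrors the mini app described above:
    the slave instance runs this as a script and only proceeds once
    the master camera has started integrating."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status() == target:
            return True
        time.sleep(poll_s)
    return False
```

In practice `get_status` would be a thin wrapper around the SGP API's camera-status query; here it is just an injected callable so the waiting logic can be tested in isolation.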

There is a catch, though. Unfortunately SGP only allows us to run a script before/after each event and not before/after each image. So this spoils the fun a bit, but fortunately (at least for me) there is a workaround. This is possible because my slave system usually only uses one filter (L, or one of the narrowband filters). So instead of creating just one event and taking all images there, I create two events with the exact same details. If I then set SGP to ‘rotate through events’, the slave system will alternate between those two identical events but, crucially, allow me to run the script every time an image is taken.

I know this is clunky at best, but I think it’s all that is possible just now, as there is no way to get more detailed information or hook into SGP. I’ve not used this on the real setup yet, but I ran a simulation and it worked well. There is still a lot more testing to do, but at least it could be something that works for my scenario.


Very creative idea! This illustrates exactly what many of us have been saying despite the naysayers: a few small changes to SGP would allow creative users to deal with the dual-camera problem with very little work on the part of the devs!