Based on the disappointingly low number of good-quality imaging nights I have experienced in the last year, I have upgraded my imaging rigs to a dual telescope/camera setup on each mount. Initial attempts to run the dual rig on two separate (SGP) PCs have been hit and miss, owing to the inability to synchronise the primary and slave systems when dithering.
Greater availability and falling hardware costs have made multiple imaging setups more affordable; coupled with uncertain weather patterns, it is likely this demand will increase across the SGP user base.
May I enquire whether there is a scheduled programme to implement multi-camera operation in SGP? If so, what is the delivery timeline?
Lots of people run dual SGPro rigs off the same computer using two separate instances of SGPro 3. It works fine, but there are a few intricacies that SGPro 4 will make easier for folks who want to do this… coordinated dither, competing recovery, etc.
To be honest, I couldn't care less about recovery. I would rather have it shut down and notify (or, to be more accurate, start calibration frames and notify, then shut down if the calibration frames also fail). My experience is that if you have a well-tweaked system, failures are almost always due to something recovery cannot fix (weather or some hardware issue). In years of using SGP, I do not recall a single recovery that worked without intervention, and most often that intervention was to quit for the night anyway!
So I would not hold up development so that recovery can be perfected for dual systems.
I have had some success with two instances, but inevitably there is a reject rate on the secondary system that varies from 10 to 30 percent or so. I would not move to anything else besides SGP just to get two-camera support, but having said that, we have been waiting a very long time.
I don’t think this opinion will be shared by many. It seems to me that recovery should work as advertised on the primary (master) instance. Also, remember that when we release a feature, we have to support it. If we provide a feature for general use that has a thousand pitfalls and “gotchas”, we will never get anything else done. As it stands right now, almost ANY inadvertent movement of the mount by a slave instance will produce a bad session (and likely logs for us to pore over to figure out what happened). At a minimum, all mount-driven commands (both automatic and manual) need to be disabled in slaves… and, as a result, we get “coordinated” recovery for free.
I guess I do not understand why a slave instance would ever produce ANY movement (barring a badly built or configured system). It should not even have access to the mount or guider, and should just take exposure cues from the main system, which does have that access.
At a minimum, all mount-driven commands (both automatic and manual) need to be disabled in slaves
Which is a pretty non-trivial amount of UI and logic to disable when an instance is in “Slave Mode”, and one of the major reasons this feature hasn’t seen the light of day. “Coordinated dithering” is fairly simple between almost any number of instances and is mostly complete. The difficult part is basically “everything else” that needs to happen to make sure you can’t accidentally put yourself into a bad spot.
I have been running a dual-scope rig with SGP for years. It works perfectly for me. The wasted images on the slave are insignificant: one or two images at the end of each target. And I can choose any exposure length on each camera; they do not need to be coordinated in any way for efficiency. The current rig is a 12" Orion Optics UK Newtonian with a TeleVue NP127is piggybacked, running at 0.7 and 0.75 arcsec/pixel on a Paramount MEII with absolute encoders. So my images are lined up to within a pixel.
How do I do it? I don’t dither. Coordinated dithering is the only thing that this approach does not accommodate.
So you say, how can you get good final stacked images without dithering? Simple. Use the Cosmetic Correction feature of the PixInsight Pre-processing script. Cosmetic correction automatically corrects instances of hot or cold pixels in the final stacked image. If you don’t believe me that this works incredibly well, just check out my images on Astrobin.
It is quite possible that I have not understood any of this, but to me there seems to be a “simple” solution:
Why not add another camera, focuser and filter wheel to the equipment setup?
So, from the top, you would have the two cameras, focusers and filter wheels, then all the rest. I doubt there are many setups with two rotators and flat boxes, but that could be added at a later stage if needed.
Pressing “run sequence” would start the sequence for both cameras, including autofocus if both cameras have focusers.
For dithering, it could be arranged so that camera 1 waits for camera 2 to finish its frame before dithering.
It just seems simpler if all is kept within one instance. But maybe this is what has been planned all along?
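The waiting scheme proposed above could be sketched roughly as follows. This is a minimal Python sketch under my own assumptions, not anything SGP actually exposes: the two camera loops rendezvous at a barrier before each dither, so the mount only moves while both cameras are idle. `run_camera` and `DITHER_EVERY` are hypothetical names, and the exposures are simulated with sleeps.

```python
import threading
import time

# Hypothetical sketch: both camera threads rendezvous at a barrier
# before each dither, so the mount would only move while both are idle.
# Assumes both cameras share the same frame count and dither cadence.
DITHER_EVERY = 2                # dither after every 2 frames (illustrative)
barrier = threading.Barrier(2)  # two cameras must arrive before anyone continues

def run_camera(name, exposure_s, frames, results):
    for frame in range(1, frames + 1):
        time.sleep(exposure_s)          # simulated exposure
        results.append((name, frame))   # list.append is thread-safe in CPython
        if frame % DITHER_EVERY == 0:
            i = barrier.wait()          # both cameras are now idle
            if i == 0:                  # barrier elects exactly one thread
                pass                    # a dither_mount() call would go here

results = []
t1 = threading.Thread(target=run_camera, args=("cam1", 0.01, 4, results))
t2 = threading.Thread(target=run_camera, args=("cam2", 0.01, 4, results))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(results))  # 8 frames captured in total
```

Note that with mismatched exposure lengths one camera would stall at the barrier, which is exactly the coordination problem discussed later in this thread.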
May I enquire what settings are configured for your primary and secondary (slave) setup? I have just configured two instances of SGP on a single multi-core PC and hope to try them out once the skies clear. Previously I have run primary and slave on separate PCs.
My primary setup is configured for: camera, filters, focuser, plate solving, PHD2 guiding, Mesu mount and observatory.
My secondary setup is configured for: camera, filters and focuser only.
Do you run the frames on the two systems for the same exposure time?
Do you autofocus on a change of filter or a change in temperature?
I understand you don’t dither. I have just acquired PixInsight and am still learning; is there a tutorial for the hot/cold pixel process?
Do you know of a good guide or tutorial that highlights the pitfalls and consequences of running two instances of SGP in a dual imaging setup?
Sorry to put this on you; I don’t want to waste the limited clear skies we get here in the U.K.
Glad to help Martin. Over the years I have mentioned how effective SGP has been with the current feature set in running a dual camera/scope setup. And many folks are doing it. Here are some links to past discussions that cover most of the issues you may want to consider.
The setup you have outlined is right on: one fully configured scope/camera that controls PHD2, mount and dome. The other just takes images, controlling only its own camera, filter wheel and focuser. Very simple.
One potential problem when running this on one PC arises if you have two of the same focuser or camera: the driver must be able to support two on the same PC. I have two ZWO cameras, and the ZWO driver does support two cameras, even of the same model.
In the Batch PreProcessing script in PI, on the Lights tab, the top box is labeled “Cosmetic Correction”. Check the box and choose the Cosmetic Correction instance that you have created. Creating it requires you to point to your Master Dark, then choose a “Hot Pixels Threshold” and a “Cold Pixels Threshold”. I select values that produce a fixed pixel count of about 9000.

In practice, most stacked images have no noticeable hot or cold pixels: a rather perfect result. Sometimes there are a few, and I just use the Clone tool to fix them, an easy and quick process.

There is one feature of the Cosmetic Correction dialog that I don’t think many people know about. I have found that with a particular camera or Master Dark, there would be a few very noticeable hot pixels that the routine did not fix. These were always where two or three hot pixels were neighbours, and the routine was not sophisticated enough to detect them. For those, there is a manual feature that lets you provide a list of specific coordinates for these bad pixels. And of course you only do this once. But I have not needed to do this for a couple of years.
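To make the idea concrete, here is a minimal NumPy sketch of dark-frame-based cosmetic correction. This is emphatically not PixInsight's actual algorithm, just an illustration of the concept described above: pixels that the master dark flags as hot or cold are replaced by the median of their 3x3 neighbourhood in the light frame. The function name and sigma parameters are my own invention.

```python
import numpy as np

def cosmetic_correct(light, master_dark, hot_sigma=3.0, cold_sigma=3.0):
    """Illustrative hot/cold pixel repair (NOT PixInsight's implementation).

    Pixels whose master-dark value deviates from the dark's mean by more
    than the chosen sigma thresholds are replaced by the median of their
    3x3 neighbourhood in the light frame.
    """
    mean, std = master_dark.mean(), master_dark.std()
    bad = (master_dark > mean + hot_sigma * std) | \
          (master_dark < mean - cold_sigma * std)
    fixed = light.copy()
    for y, x in zip(*np.nonzero(bad)):
        patch = light[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        fixed[y, x] = np.median(patch)  # median is robust to the bad pixel itself
    return fixed

# Tiny demo: a single hot pixel flagged by the dark is smoothed away.
dark = np.zeros((5, 5)); dark[2, 2] = 100.0   # hot pixel visible in the dark
light = np.full((5, 5), 10.0); light[2, 2] = 200.0
print(cosmetic_correct(light, dark)[2, 2])    # -> 10.0
```

The manual-coordinate feature mentioned above would correspond to simply adding extra `(y, x)` entries to the bad-pixel set before the repair loop.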
Dithering has the following negative aspects:
It wastes imaging time, because it can only be done while both cameras are idle: time that could be productively spent taking more images.
To avoid wasting a large amount of time on one of the two cameras, any implementation of dual dither support will require that the exposure times on the two cameras be coordinated such that one is an exact multiple of the other. For example, 3 minutes on the main camera and 2 minutes on the secondary camera will waste one third of the time on the slave camera. There is no such restriction if you don’t dither: I choose any exposure length I want on each camera.
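The arithmetic behind that 1/3 figure can be checked with a tiny helper. This is purely illustrative (the function and its simplifying assumptions are mine, not SGP's): it assumes a dither after every main frame and that the slave only starts frames it can finish before the dither.

```python
# Hypothetical back-of-envelope helper, not SGP code.
def slave_idle_fraction(main_s, slave_s):
    """Fraction of each main-camera cycle the slave sits idle, assuming a
    dither after every main frame and the slave only starting frames it
    can complete before the dither."""
    usable = (main_s // slave_s) * slave_s   # whole slave frames that fit
    return (main_s - usable) / main_s

print(slave_idle_fraction(180, 120))  # 3 min vs 2 min -> 0.333... wasted
print(slave_idle_fraction(180, 90))   # exact multiple  -> 0.0 wasted
```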
If you still feel the need to dither, there are several approaches that allow you to do this. They are covered in the topics linked to above. To summarize:
The main camera/scope has a small FOV and dithers every 5 frames. The slave is wide field, so either the small-FOV dither does not affect it much, or you only need to throw out 1 in 5 frames. If your wide-field exposure is half that of the narrow FOV, you only lose 1 in 10 images.
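The 1-in-5 and 1-in-10 figures in that scheme follow from simple arithmetic; a hypothetical helper (my own naming, assuming one wide-field frame is ruined per dither) illustrates:

```python
# Hypothetical back-of-envelope for the dither-every-N scheme above.
def wide_frames_lost(dither_every_narrow, narrow_s, wide_s):
    """Assuming one wide-field frame is ruined per dither, return the
    fraction of wide frames lost per dither cycle."""
    wide_per_cycle = dither_every_narrow * narrow_s / wide_s
    return 1 / wide_per_cycle

print(wide_frames_lost(5, 300, 300))  # equal exposures   -> 0.2 (1 in 5)
print(wide_frames_lost(5, 300, 150))  # half-length wide  -> 0.1 (1 in 10)
```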
Imaging a target over several nights will produce a natural dither if your centering differs by more than 2 or 3 pixels from night to night. And you can simulate this in a single night by creating multiple targets that are all the same target.
Glad you find it useful Martin.
Here is an example from my Astrobin collection. Most of the images I have posted over the past two years were produced from simultaneous exposures taken on the AG12" Newtonian and NP127is refractor piggybacked on the MEII mount, never dithered. I combine the images from both cameras. The resolution of the AG12 is 0.70 arcsec/pixel, and of the NP127is 0.75 arcsec/pixel. They are all registered to the best image from the AG12, which is the master scope/camera.
This is an image of a faint planetary nebula, SH-174 in Ha and OIII, with RGB for star colors.
Between the two cameras there are 484 images totaling 36.5 hours.
Of course I chose one of my IOTD images, but all of my images are basically hot pixel free. And none of them have ever been dithered. https://www.astrobin.com/full/384567/0/?image_list_page=5&nc=
I am certainly benefiting from your wisdom, and last night I managed to open two simultaneous SGP instances, then checked the operation of all components; all OK.
During this exercise I was surprised by the following. I set up my master and slave profiles with the requisite COM ports and camera serial numbers established under settings, saved the profiles, assigned them to the primary and slave sequences, and saved those too. Yet on closing and reopening the two saved sequences, the first (primary) sequence had selected the wrong camera (although they are individually serialised Atik 383 units), the wrong focuser (COM 3, although COM 4 was saved in the profile) and the wrong filter wheel (again, serialised Atik units). While this caught me out at first, it was quickly resolved by manually selecting the correct equipment in the Equipment tab before connecting each unit.
It seemed the first sequence selected the hardware at random, although the right units were designated and saved within the profile. Is this normal, or is there some step I have missed?
For now I have a crib sheet and check each piece of equipment before connecting; it works, but it could be a point of failure in the future.
All of this is controlled by your Atik drivers. It sounds like you are doing the right things.
However, you may be experiencing a problem which has shown up frequently for users of SGP. The problem is confusing the real function of the profile. Changing a profile does NOT change anything in any of your Sequences that were ORIGINALLY based on that profile.
Just make your driver selection changes directly to each Sequence. Or, change the profile, then create a BRAND NEW sequence from that profile.