Dual Camera working

I guess I do not understand why a slave instance would ever produce ANY movement (barring a badly built or misconfigured system). It should not even have access to the mount or guider and should just take exposure cues from the main system, which does have that access.

They shouldn’t. Thus:

At a minimum, all mount-driven commands (both automatic and manual) need to be disabled in slaves

Which is a pretty non-trivial amount of UI and logic to disable when an instance is in “Slave Mode”, and one of the major reasons this feature hasn’t seen the light of day. “Coordinated dithering” is rather simple between almost any number of instances and is mostly complete. The difficult part is basically “everything else” that needs to happen to make sure you can’t accidentally put yourself into a bad spot.

Thanks,
Jared

I have been running a dual scope rig with SGP for years. It works perfectly for me. The wasted images on the slave are insignificant, 1 or 2 images at the end of each target. And I can choose any exposure length on each camera; they do not need to be coordinated in any way for efficiency. The current rig is a 12" Orion Optics UK Newtonian with a TeleVue NP127is piggybacked, running at .7 and .75 arcsec/pixel on a Paramount MEII with absolute encoders. So my images are lined up to within a pixel.

How do I do it? I don’t dither. Coordinated dithering is the only thing that this approach does not accommodate.

So you say, how can you get good final stacked images without dithering? Simple. Use the Cosmetic Correction feature of the PixInsight Pre-processing script. Cosmetic correction automatically corrects instances of hot or cold pixels in the final stacked image. If you don’t believe me that this works incredibly well, just check out my images on Astrobin.

It is quite possible that I have not understood any of this, but to me there seems to be a “simple” solution to this:
Why not add another camera, focuser and filter wheel to the equipment setup?
Equipment
So you will have from the top
Camera1
Focuser1
Filterwheel1
Camera2
Focuser2
Filterwheel 2
then all the rest. I doubt there are many setups with two rotators and flat boxes, but that could be added at a later stage if needed.
Pressing Run Sequence would start the sequence for both cameras, including auto focus if both cameras have an autofocuser.
For dithering it could be arranged so that camera1 waits for camera2 to finish its frame before dithering.
It just seems simpler if all is kept within one instance. But maybe this is what has been planned all along?
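The waiting scheme proposed above (camera 1 holds its dither until camera 2 finishes its frame) can be sketched as a small synchronization primitive. This is purely a hypothetical illustration; the class and method names are invented and are not SGP internals:

```python
import threading

class DitherCoordinator:
    """Toy model of the proposal above: the dithering camera waits for
    the other camera to finish its frame before moving the mount.
    All names here are hypothetical, not SGP internals."""

    def __init__(self):
        self._slave_idle = threading.Event()
        self._slave_idle.set()  # no slave exposure in progress at start

    def slave_exposure(self, seconds, expose):
        """Run one slave-camera exposure; dithering is blocked meanwhile."""
        self._slave_idle.clear()
        try:
            expose(seconds)
        finally:
            self._slave_idle.set()

    def dither(self, move_mount):
        """Block until the slave camera is between frames, then dither."""
        self._slave_idle.wait()
        move_mount()
```

In a real implementation the slave would also have to hold off starting its next frame until the dither settles, which the same event-based pattern covers in the opposite direction.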

Hi Jmacon,

May I enquire what settings are configured for your primary and secondary slave setup? I have just configured 2 instances of SGP on a single multi-core PC and hope to try it out once skies clear. Previously I have run primary and slave on separate PCs.

My primary setup is configured for: camera, filters, focuser, plate solve, PHD2 guiding, Mesu mount, observatory.

My secondary setup is configured for: camera, filters and focuser only.

Do you run the frames on the 2 systems for the same time?
Do you auto focus on filter change or temperature change?

I understand you don’t dither. I have just acquired PixInsight and am still learning. Is there a tutorial for the hot/cold pixel process?

Do you know of a good guide or tutorial which highlights the pitfalls and consequences of running 2 instances of SGP in a dual imaging setup?

Sorry to put this on you; I don’t want to waste the limited clear skies we get here in the U.K.

Kind regards Martin

Glad to help Martin. Over the years I have mentioned how effective SGP has been with the current feature set in running a dual camera/scope setup. And many folks are doing it. Here are some links to past discussions that cover most of the issues you may want to consider.

The setup you have outlined is right on: one fully configured scope/camera that controls PHD2, mount, and dome. The other just takes images, controlling only its camera, filter wheel and focuser. Very simple.

One potential problem when running this on one PC arises if you have two of the same focuser or camera: the driver must be able to support two on the same PC. I have two ZWO cameras, and the ZWO driver does support two cameras, even of the same model.

In the Batch PreProcessing script in PI, on the Lights tab, the top box is labeled “Cosmetic Correction”. Check the box and choose the “Cosmetic Correction” template that you have created. Creating the template requires you to point to your Master Dark, then choose a “Hot Pixels Threshold” and a “Cold Pixels Threshold”. I select values that produce a fixed pixel count of about 9000.

In practice, most stacked images have no noticeable hot or cold pixels, a rather perfect result. Sometimes there are a few, and I just use the Clone tool to fix them, an easy and quick process.

There is one feature of the “Cosmetic Correction” dialog that I don’t think many people know about. I have found that with a particular camera or Master Dark, there would be a few very noticeable hot pixels that the routine did not fix. These were always places where 2 or 3 hot pixels were neighbors, and the routine was not sophisticated enough to detect them. For those, there is a manual feature that lets you provide a list of specific coordinates for these bad pixels. And of course you only do this once. But I have not needed to do this for a couple of years.
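For readers unfamiliar with the technique, the core idea of dark-frame-based cosmetic correction can be sketched in a few lines. This is only an illustration of the principle, not PixInsight’s actual algorithm: pixels flagged as hot or cold in the master dark are replaced by the median of their 3x3 neighbourhood in each light frame.

```python
import numpy as np

def cosmetic_correct(light, master_dark, hot_thresh, cold_thresh):
    """Replace pixels flagged by the master dark with the median of
    their 3x3 neighbourhood in the light frame (the median tolerates
    the bad pixel itself being in the patch). A sketch of the idea
    behind CosmeticCorrection, not PixInsight's implementation."""
    bad = (master_dark > hot_thresh) | (master_dark < cold_thresh)
    fixed = light.copy()
    h, w = light.shape
    for y, x in zip(*np.nonzero(bad)):
        patch = light[max(y - 1, 0):min(y + 2, h),
                      max(x - 1, 0):min(x + 2, w)]
        fixed[y, x] = np.median(patch)
    return fixed
```

Raising the thresholds flags fewer pixels, which is what the ~9000-pixel count above is tuning.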

Dithering has the following negative aspects:

  1. It wastes imaging time because it can only be done while both cameras are idle: time that could be spent productively taking more images.

  2. To avoid wasting a large amount of time on one of the two cameras, any implementation of dual dither support will require that the exposure times on the two cameras be coordinated so that one is an exact multiple of the other. For example, 3 minutes on the main camera and 2 minutes on the secondary camera will waste 1/3 of the time on the slave camera. There is no such restriction if you don’t dither: I choose any exposure length I want on each camera.
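The arithmetic in point 2 is easy to check with a small model (my own sketch, assuming a coordinated dither at the end of every main-camera frame, so only whole slave exposures fit inside one main exposure):

```python
import math

def slave_waste_fraction(main_s, slave_s):
    """Fraction of slave-camera time lost per main-camera frame when a
    dither follows every main frame, so only whole slave exposures fit
    inside one main exposure (a simplified model of point 2 above)."""
    whole_frames = math.floor(main_s / slave_s)
    return 1 - whole_frames * slave_s / main_s
```

With 180 s main frames and 120 s slave frames this gives 1/3, matching the example above; with exact multiples such as 180 s and 60 s, nothing is wasted.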

If you still feel the need to dither, there are several approaches that allow you to do this. They are covered in the topics linked to above. To summarize:

  1. Main camera/scope has a small FOV and does the dither every 5 frames. The slave is wide field, so either the small-FOV dither does not affect it much, or you only need to throw out 1 in 5 frames. If your wide-field exposure is half that of the narrow FOV, you only lose 1 in 10 images.
  2. Imaging a target over several nights will produce a natural dither if your centering process differs by more than 2 or 3 pixels from night to night. And you can simulate this on one night by creating multiple targets that are the same target.

Hi Jmacon,

Wow, a comprehensive reply to my queries; I could not have wished for more. Thank you.

I will start reading the threads you have so kindly referenced. And to think I was considering moving to APT; I will flail myself tonight. :pleading_face:

Glad you find it useful Martin.
Here is an example from my Astrobin collection. Most of the images I have posted over the past 2 years are produced from simultaneous images taken on the AG12" Newtonian and NP127is refractor piggy backed on the MEII mount, never dithered. I combine the images from both cameras. The resolution of the AG12 is .70 arcsec/pixel, and for the NP127is is .75 arcsec. They are all registered to the best image on the AG12 which is the Master scope/camera.

This is an image of a faint planetary nebula, SH-174 in Ha and OIII, with RGB for star colors.
Between the two cameras there are 484 images totaling 36.5 hours.
Of course I chose one of my IOTD images, but all of my images are basically hot pixel free. And none of them have ever been dithered.

(image: Sh2-174 Emission Nebula)


Re: no dithering. Have you had any problems with walking noise, and if so, how are you handling that when processing?

Being able to read helps.

Bernd

I have never heard the term before, but it is somewhat descriptive. Never seen anything resembling it in my images.

Hi Jmacon,

I am certainly benefiting from your wisdom, and last night I managed to open 2 simultaneous SGP instances, then checked operation of all components; all OK.

During this exercise I was surprised by the following. After setting up my master and slave profiles, with the requisite COM ports and camera serial numbers established under settings, the profiles were saved and assigned to the primary and slave sequences, which were then saved. On closing and reopening the 2 saved sequences, the first (primary) sequence selected the wrong camera (although they are individually serialised Atik 383 units), the wrong focuser (COM3, although COM4 was saved under the profile) and the wrong filter wheel (again serialised Atik units). While this caught me out at first, it was quickly resolved by manually selecting the correct equipment in the equipment tab before connecting each unit.

It seems the first sequence randomly selected the hardware although the right units were designated and saved within the profile. Is this normal, or is there some link I have missed?
For now I have a crib sheet and check each piece of equipment before connecting; it works but could be a point of failure in the future.

Being able to read helps.

Was this reply in relation to my question re walking noise?

All of this is controlled by your Atik drivers. It sounds like you are doing the right things.

However, you may be experiencing a problem which has shown up frequently for users of SGP. The problem is confusing the real function of the profile. Changing a profile does NOT change anything in any of your Sequences that were ORIGINALLY based on that profile.

Just make your driver selection changes directly to each Sequence. Or, change the profile, then create a BRAND NEW sequence from that profile.

Indeed. “Walking noise” is caused by warm pixels and drift over the whole capturing session. Jerry described how the warm pixels are removed from the individual calibrated light frames by CosmeticCorrection. This is especially useful when no dithering is applied during capture.

Bernd

Bernd, I think you’re misunderstanding what Jerry was recommending. He specifically mentions hot and cold pixels (aka salt-and-pepper noise), which usually take on the more extreme values, not “warm pixels”. Cosmetic correction is good for those, and for removing satellite and plane trails, because the resulting pixel values become statistical outliers. Fixed pattern noise (FPN, one of the causes of walking noise) is not hot and cold pixels, and as far as I can determine no one seriously recommends using cosmetic correction to remove it. It doesn’t respond well to CC because the pixels are generally not statistical outliers. Yes, it is possible to reduce the appearance of FPN by using CC, but it will also harm your image’s SNR because you begin rejecting vast numbers of signal pixels as noise. There are numerous threads on CN by AP vets that spell this out; here is one of many:

CN thread

The ASI1600MM is notorious for producing FPN and walking noise when dithering is not used (refer to CN again). Because Jerry uses a 1600 and doesn’t use dithering explicitly, I was wondering whether there was some other process he was using that he simply didn’t mention, thus my enquiry. I doubt that his use of CC is having much of an effect on reducing any FPN that may exist in his uncalibrated subs, so I assume his use of “implicit dithering” by taking subs over multiple sessions, recentring after meridian flips, etc., plus mixing the 1600 subs with those of the 183, is enough to keep it at bay.

So I owe you a thank you, Jerry; I would never have guessed that a “no dither” regime was possible with a 1600. The loss of imaging time due to dithering can be significant, especially if you’re using a long guiding cadence, and even more so if you dither every frame as some imagers do. And “no dithering” dual setups become considerably more efficient because there would be no sub corruption due to dithering. I might just rethink a dual imaging rig again.

Thanks for this thread Xsubmariner.


Thanks Ross for your elaboration on FPN. I was not aware that the 1600s are reputed to be notorious for this. I simply have never noticed any aberrations in my images that would indicate any walking noise issues. I do know that my ASI183mm has a severe case of amp glow at the right edge of the image. Fortunately, when I combine images from the 1600 and the 183 the amp glow is a non-issue, because the FOV of the AG12/ASI1600 is smaller than the FOV of the NP127is/ASI183, so I am not using the right edge where the amp glow is located. By design of course. (he he)

Back to the 1600, many of my images only use subs from the 1600, and I don’t see any walking noise there either. Certainly combining images could mask this, but I have never seen it in the pure images from one or the other, which I do always look at before Integrating the combined set of Registered images. Maybe I’m lucky that my 1600 does not exhibit FPN?

Also many of my images involve only 1 or 2 sessions which would not produce meaningful dither. Plus my centering is usually within 1 or 2 pixels of dead center from night to night. Having a fixed obs really helps.

I don’t think this can harm the SNR since the CC routine only lets you reject up to 10,000 hot pixels, which is a tiny fraction of the 20m total pixels.

Hello Jerry.

I should have used the term fixed pattern noise rather than walking noise. If your PA is accurate I doubt you’ll see walking noise, but my understanding is you’ll see the FPN in the 1600 when you integrate the subs into a master and then stretch the master. You won’t notice it in single subs as it’s among the background noise, but because it is similar in nature to signal (rather than noise) no amount of integration time will reduce it.

I don’t think this can harm the SNR since the CC routine only lets you reject up to 10,000 hot pixels, which is a tiny fraction of the 20m total pixels.

Try adjusting the sigma and level controls rather than the quantity control; you can reject as many pixels as you wish, including many millions. But yes, if you are only rejecting 10,000 pixels I doubt you are rejecting enough FPN pixels (if any at all) to affect your SNR.
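The scale of the numbers being discussed is easy to check (my own back-of-envelope arithmetic, using the ~20 M pixel figure quoted above):

```python
# Back-of-envelope check of the SNR argument above: the fraction of
# pixels CC can touch when capped at 10,000 replacements on a roughly
# 20-million-pixel image.
rejected, total = 10_000, 20_000_000
fraction = rejected / total
print(f"{fraction:.4%}")  # prints 0.0500%, a negligible share of pixels
```

Sigma/level-based rejection, by contrast, has no such cap, which is why it can start eating into real signal.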

Also many of my images involve only 1 or 2 sessions which would not produce meaningful dither. Plus my centering is usually within 1 or 2 pixels of dead center from night to night. Having a fixed obs really helps.

That’s intriguing, and good to know too. Perhaps FPN is only really a problem on faint targets? Anyway, thanks for info on your dual setup and processing.

I tried detecting an exoplanet using the transit method, and my first attempt used a piggybacked guide scope. Over the course of the 5 hours of 30-second images, the field drifted by about 30 pixels as a result of flexure and possibly mirror flop.

The resulting graphs of relative magnitude had some slow variation that made the detection uncertain, the experts called it ‘red noise’ and it was because of the movement. Is this what you mean by walking noise?

Next time I used an OAG, no drift and the transit was visible.

Guys, this thread is a great read for me. Picking up on the likely consequences of dual operation, and the requisite preemptive actions needed to minimise them, is great. I can see that I need to put more thought into the regime I follow for sequencing my sessions. Thank you for your input; it is important to me at this relatively early stage of learning.