I have been following the threads asking for dual camera control with dithering integration, hoping for progress, but it seems that after several years nothing has happened along these lines. Since I have recently gotten two wide-field systems tweaked and running (8300 and 200 mm lens plus FSQ and 16200), this is of more interest than ever to me. So two questions:
Is there any realistic hope of this becoming available anytime soon (next few months being “soon” in this context), even in a minimal function way?
Other users: Please do not suggest not integrating dithering - that is the core of good imaging as well as the core of any two camera setup. Not gonna do that.
Failing availability of SGP two camera, does anyone know of other SW that will handle this? I do know there was/is a third party MaxIm solution - I tried that and it is pretty poor (unless it has improved) so I would avoid that and MaxIm since escaping MaxIm is one of the reasons I went to SGP in the first place.
Because if you are imaging with a dual scope/camera at a similar image scale (without coordinated support), if you dither one camera the exposure from the other camera will be ruined and you’ll lose that time.
I certainly agree that dithering is very beneficial, but in your specific case I don’t think dithering is as beneficial due to the large difference in image scale. Even if you use extreme dither on the FSQ/16200 I doubt you will see that dither show up in the 200mm/8300 images.
Don’t get me wrong, dithering is the way to go, but I think it’s at least worth a shot to see how it works (or doesn’t) before investing in some other software.
Indeed - if you were running a mount with an array of two or more scopes at the same image scale, targeted at the same object, you would definitely require supported dithering. A friend runs an array of 4 Takahashi refractors and uses MaxIm to support that.
I run a mount with a galaxy imager (C14 and SBIG) and a canon DSLR with a 200mm lens and just dither the galaxy imager. It seems to have worked fine - but I am not a professional imager, I go for the highest standard I can within the constraints I have.
I have been out of things for a while due to building a dome, but I remember (I think) that Ken and Jared had a list where folks had voted for the features they wanted; I guess that drives the work programme for SGP.
I know this has been worked on and progress has been made, but this feature is considerably harder to plan and implement than the other feature requests in the poll. For that reason, it was moved down further in the to-do list. If you scroll down further in the poll thread you’ll see what I’m talking about.
I am not in the know, but I hope that this feature request eventually comes through…no ETA AFAIK.
Yeah, me too, but I am a bit pessimistic given the apparent lack of progress. This did get me thinking. MaxIm has a 3rd party app available that coordinates two instances. I did try that and it is pretty flaky, so I never really used it much. After using MaxIm for many years I grew to dislike it, so if the choice were between MaxIm with two-camera support and SGP without it, I would still choose SGP. It would just be sad to have to make the choice.
What I do wonder is if SGP has the hooks present for a 3rd party to create an app that would coordinate two instances? If the developers do not feel it to be a high priority, maybe one of our user/programmers that has two cameras on their mount does.
The problem with Hyperion Prism is that although it does support coordinated dual cameras, this is only available in a semi-automated way, i.e. you set everything up manually beforehand. It unfortunately is not available when you run an automated sequence.
Well, that eliminates that. So it is either SGP or someone maybe can write an app that coordinates two SGP instances. Sigh.
I think the demand for SW that does this is being seriously underestimated. It reminds me of years back when several MaxIm users (including me) were trying to get them to add graphing to their guiding. Years went by with no interest until an imager buddy wrote a plugin that did just that. It was extremely popular and not long afterward, graphing showed up as a native feature.
I did look at this, but the API doesn’t support that kind of functionality (yet). The only other way I could think of would have to be via events, but unfortunately events cannot be triggered before/after each image, only for lines in a sequence, which rules that idea out. I would be happy to spend some time on this, but at the moment I can’t see a way to make it work.
Thanks! Yes, I think that a few changes to the API would indeed let others do what most of us two-camera supporters are hoping for.
I also wonder if the developers may think that we are asking for more than we (or at least I) really are. Here is what I envision:
Master and slave systems/instances; there could be more than one slave system, although more than one would probably be rare.
Master system would work exactly as SGP/PhD presently do with regard to all functions - the only extra thing the master would need to do is tell the slave systems when it starts a main exposure (as opposed to a focus or plate solve exposure) and how long that exposure will be.
Slave system would only do three things:
a) Change filters as required
b) Take, download, and save its images as set
c) Take focus exposures and perform focus as set
Of the above, only slave exposures (main and focus) would need to avoid mount movement (dithers, plate solves/micro slews, target changes, and flips). Downloads and filter changes are unaffected by mount movements, of course. The simple way to do that is for the slave systems to use somewhat shorter exposures than the master system and only start those if the master system is currently taking its own (longer) main exposure (as opposed to a focus or plate solve exposure). It would be up to the user to ensure that the slave exposures were always a bit shorter (maybe 10-20 percent) than the master’s so that they would “fit” within the master exposure. I do not see this as a big deal since most of the time, the master and slave systems will need different exposures anyway due to filters, camera sensitivity, and f ratio differences.
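To make the timing rule concrete, here is a minimal sketch of it in Python. None of this is SGP code; the function names and the 15 percent default margin are my own invention, just illustrating the “slave exposure must fit inside the master exposure” idea:

```python
# Hypothetical sketch of the "slave exposure fits inside the master exposure"
# rule described above. Nothing here is SGP API; names and the default margin
# are illustrative only.

def max_slave_exposure(master_exposure_s, margin=0.15):
    """Longest slave exposure that fits within the master's main exposure,
    keeping the 10-20 percent safety margin suggested above."""
    return master_exposure_s * (1.0 - margin)

def slave_may_start(master_exposing_main, slave_exposure_s, master_exposure_s,
                    margin=0.15):
    """A slave starts only while the master is taking a *main* exposure
    (not a focus or plate solve exposure), and only if its own, shorter
    exposure will finish before the master's does."""
    return (master_exposing_main and
            slave_exposure_s <= max_slave_exposure(master_exposure_s, margin))

print(max_slave_exposure(300))            # 255.0, for a 300 s master sub
print(slave_may_start(True, 240, 300))    # True
print(slave_may_start(True, 280, 300))    # False: would outlast the master
```

Since master and slave usually need different exposure lengths anyway, the user just picks slave exposures under that ceiling and the rule takes care of itself.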
It is certainly not possible to double one’s output of images with two cameras since too many things have to happen. It is probably possible to get maybe a 150% or more increase in images compared to one camera (assuming two slaves, more if there are three or more). That is still a lot!
I, for one, would be quite happy with nothing more than the above - at least to begin with and quite possibly permanently - it is more than any other SW offers!
NOTE: One thing I would also suggest is that the available dither magnitudes be expanded to a larger range, since differing focal lengths between master and slave systems would mean that the available dither scales for the master system might not be enough for the slave, or vice-versa. I have suggested this already in another thread, and it would benefit more than just multi-camera systems.
I think the number of people wanting multi camera operation is very small. Probably no more than the half dozen who have posted about it.
Part of the problem is that the people asking for this are grossly underestimating the complexity of this. For example CCDMan’s description of what is needed dismisses the major functionality - synchronising the multiple image acquisition processes - with virtually no mention.
This synchronisation will be difficult to implement. It will need something like a state machine where there’s a central state and each process has to monitor the state, send requests for a state change, then wait for the state to be appropriate to what is needed. Adding all this will need a major rewrite of the application; its effect will be very pervasive. If it goes wrong - and it will - then it’s quite possible that everyone will be affected, not just those using multiple cameras.
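The shape of that central-state coordination could be sketched roughly like this (a toy Python illustration, not SGP code - each camera process would wait on a shared state and request changes through a single coordinator):

```python
# Toy illustration of a central shared state that multiple camera processes
# monitor and request changes to. Not SGP code; purely to show the shape of
# the state-machine approach described above.
import threading
from enum import Enum, auto

class State(Enum):
    EXPOSING = auto()   # main exposures running; mount must stay still
    DITHERING = auto()  # mount moving; no camera may start an exposure

class Coordinator:
    def __init__(self):
        self._state = State.EXPOSING
        self._cond = threading.Condition()

    def wait_for(self, state):
        # A process blocks here until the shared state is what it needs.
        with self._cond:
            self._cond.wait_for(lambda: self._state == state)

    def request(self, state):
        # A process requests a state change; all waiters are woken to recheck.
        with self._cond:
            self._state = state
            self._cond.notify_all()

coord = Coordinator()
coord.request(State.DITHERING)    # e.g. the guider asks to dither
coord.wait_for(State.DITHERING)   # returns at once: state already matches
```

Every exposure, download, focus run, and mount move would have to check in with something like this, which is why the effect on the codebase would be so pervasive.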
Not what I asked. There are only six people that asked for this feature according to the poll? When I look at the poll it is showing 25% of respondents asked for this. I don’t think I’ve asked for this feature more than others.
If you don’t plan on offering this feature for whatever reason, then just say so and be done with it.
That sounds right to me. From a strictly user POV, I just want to know if I can still look forward to this or whether it is a dead issue. This affects planning as well as equipment decisions so is important information.
Is the feature dead or is it alive? Just tell us.
As far as what the real world demand is, right now that is nothing more than opinion. One can argue that forever with no certainty. Much like the guide graph in MaxIm that I mentioned above - one never really knows what demand is until someone actually has it available.
As for me, I will stop beating what increasingly does appear to be a dead horse.
In general I fully support your ‘less is more’ approach; there is no point in chasing down lots of difficult features if stability and usability suffer. I know a lot of work has already gone into v3 to stabilise SGP and to put it onto a more maintainable and viable footing. I think though that there could come a time when bigger features might be a possibility, and it’s obviously up to the developers to judge what makes sense in terms of cost/benefit.

I think you are a bit harsh here, as there genuinely seems to be interest in this. I know the last feature poll might not be very scientific, but it did rank quite high on a list that included the votes of 140 people. Still not a massive number in real terms, but certainly more than you suggest. Also, if you look back at posts around this topic, there were some very encouraging messages from the developers themselves, which obviously got hopes up, and you can’t really fault people for following up on that.
I also don’t think that it necessarily would have to be very sophisticated or difficult to implement if we are only looking at coordinated dithering (no coordination for focus, mount moves, centering, flips, etc.) and a strict master/slave setup. The master just does its thing without regard to the slave instance, with the only exception that the dither command can be delayed (optionally, even) if the slave instance has an ongoing exposure. The slave instance would only have to be able to check whether the master instance is dithering (and maybe whether the master instance has stopped its sequence). Yes, there will be dropped subs, and you would need to think carefully about the slave instance’s exposure times to avoid wasting dead time, but those things could be improved on slowly. A post in the last poll seems to indicate that the developers are thinking along similar lines:
Having said all that (as you know from your own work), there would probably be some people (including myself) who would be happy to give up time for free to make this happen if it were possible to coordinate this externally. I haven’t looked at the ins and outs of it in detail, but a possible approach could be to implement ‘pre and post exposure’ and ‘pre and post dithering’ events, which would allow an external app to coordinate the dithering. This way there would be no development or support burden for the developers, and the few of us badly wanting this would stop moaning about it.
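As a rough sketch of what such an external app might look like - entirely hypothetical, since these event hooks do not exist in SGP’s API today and all the names are invented for illustration:

```python
# Entirely hypothetical sketch of an external coordinator built on the
# proposed 'pre/post exposure' and 'pre/post dither' events. None of these
# hooks exist in SGP's API today; every name here is invented.

class DitherCoordinator:
    """Defers the master instance's dither while the slave is exposing."""

    def __init__(self):
        self.slave_exposing = False
        self.dither_pending = False

    # Would be wired to the slave instance's pre/post exposure events.
    def on_slave_pre_exposure(self):
        self.slave_exposing = True

    def on_slave_post_exposure(self):
        self.slave_exposing = False
        # A dither that was held back could now be released to the master.
        self.dither_pending = False

    # Would be wired to the master instance's pre-dither event.
    # Returning True lets the dither proceed; False defers it.
    def on_master_pre_dither(self):
        if self.slave_exposing:
            self.dither_pending = True
            return False
        return True

coord = DitherCoordinator()
coord.on_slave_pre_exposure()
print(coord.on_master_pre_dither())   # False: slave is mid-exposure, defer
coord.on_slave_post_exposure()
print(coord.on_master_pre_dither())   # True: safe to dither now
```

All the external app really needs from SGP is the ability to see those events and to veto or delay a dither - everything else lives outside the application.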
@Jared & @Ken:
Would you consider implementing (or even just considering) some smaller changes and maybe some small cooperation (as and when time allows, entirely at your discretion) to allow me to create an external app to achieve coordinated dithering between two instances of SGP?
Add me, so 7 users.
I have a side-by-side setup and multiple cameras. The most I can hope for is to start a sequence and then, using another app, manually pull some wide-field/color shots for fun.
Don’t hold your breath for a requested feature. (they never happen).
Prism is superb, but I’ve not switched to it yet; I have some issues at this time and I’m too busy acquiring data to work on them. It’s amazing what it can do… it’s also amazing what it can’t do. I’ve found no way to add gain/exposure data to the filenames of subs (a pretty basic feature).