I do feel that SGPro is often quite inefficient in its use of non-imaging time, especially with modern CMOS cameras where “image download time” is often measured in fractions of a second. Steps like analysing each image to measure statistics (such as the number of stars and their HFR) have too much impact even on a fast PC and slow everything down.
In theory, with my ASI1600 in my environment, I can get away with individual exposures of 15-30s and use lots & lots of them. Sure, that’s heavy on the processing step afterwards, but that step isn’t really time-constrained: you can run it during the day, on cloudy nights, or just on another machine.
But the overhead SGPro introduces per individual image makes me choose longer sub-exposures.
To me it doesn’t really make sense that the image statistics calculations are blocking the imaging; they should be run on a separate thread, especially since they don’t actually trigger anything in the imaging logic (like refocussing if the HFR values deviate too much). But even if they did, it would be simple to abort the next exposure when something like that is detected.
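To illustrate what I mean, here is a minimal sketch (in Python, with a trivial stand-in for the real star-detection/HFR measurement) of handing each finished frame to a worker thread, so the capture loop never waits on the statistics:

```python
import threading
import queue

def compute_stats(frame):
    # Hypothetical stand-in for star detection / HFR measurement;
    # here it just sums pixel values to simulate the CPU work.
    return sum(frame)

def imaging_loop(frames, results):
    """Capture loop: queue each finished frame for a worker thread so
    statistics never block the start of the next exposure."""
    q = queue.Queue()

    def worker():
        while True:
            frame = q.get()
            if frame is None:        # sentinel: no more frames
                break
            results.append(compute_stats(frame))

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    for frame in frames:             # "exposures" arriving from the camera
        q.put(frame)                 # stats are queued; next exposure can start now
    q.put(None)
    t.join()

results = []
imaging_loop([[1, 2], [3, 4]], results)
print(results)  # [3, 7] - stats computed off the capture path
```

The point is just the structure: the capture loop only enqueues, and the (potentially slow) analysis happens concurrently with the next exposure.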
Of course, there will always be overhead (dithering, filter changes, refocussing), but when working with short exposures you don’t have to incur it on every single exposure: you can do 20x15s, then dither, and the result would in practice be similar to dithering once per 300s exposure.
I’m extremely worried that the statistics collection being “blocking” for the progress of exposures will make matters worse and worse. There are new cameras coming out with 60-megapixel sensors, and assuming this part of the process scales linearly with pixel count, what takes ~4 seconds on my current setup (16-megapixel ASI1600, relatively fast laptop) would take ~15 seconds with a 60-megapixel camera. At that point, even the “longer” exposures start to suffer from that extra overhead.
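The back-of-the-envelope scaling I’m assuming there, as a quick check:

```python
# Assuming statistics time scales linearly with pixel count:
mp_now, secs_now = 16, 4.0      # ASI1600, ~4 s on my laptop
mp_future = 60                  # upcoming 60-megapixel sensors
secs_future = mp_future / mp_now * secs_now
print(secs_future)  # 15.0
```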
I don’t know what the best solution would be, but (a) faster image statistics and (b) moving them off the main thread would eliminate this piece of overhead.
I’d also love to be able to manage the “event rotation” more finely. Right now, if I want to do LRGB with three L for every R, G and B, while still “looping through events” so I have consistent data for all channels by the end of the night (or by the time clouds roll in), I have to set it up with three separate “L” events. Worse still, thinking of the “short exposures” approach and all the overhead filter changes introduce, I’d love to be able to do 10xR, 10xG, 10xB, then loop back to 10xR, and so on. Right now I have to either accept doing all R first, then all G, then all B; set up 30 sequential events, 10 per colour; or accept the ridiculous overhead of doing R, G, B, R, G, B,… hundreds of times (at 15s per image, that would effectively drop imaging time below 50% of clear-sky time).
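To put rough numbers on that last claim, here is a duty-cycle calculation (the ~15 s per filter change is purely an illustrative guess, not a measured figure):

```python
def duty_cycle(exposure_s, overhead_s, frames_per_overhead):
    """Fraction of wall-clock time spent actually exposing, when one
    overhead event (e.g. a filter change) occurs per N frames."""
    imaging = exposure_s * frames_per_overhead
    return imaging / (imaging + overhead_s)

# Assuming 15 s subs and a ~15 s filter change:
print(round(duty_cycle(15, 15, 1), 2))   # change filter every frame  -> 0.5
print(round(duty_cycle(15, 15, 10), 2))  # change every 10 frames     -> 0.91
```

Batching ten frames per filter change takes you from roughly half of clear sky wasted to under 10% overhead.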
So a more flexibly configurable event rotation (perhaps a “weight” system, where a weight defined per event affects when the “rotation” triggers?), coupled with optimising away the “dead” time between actual exposures, would have a significant impact on reducing observation overhead.
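A sketch of how such a weight system might expand into a batch schedule (the function name, batch size, and weights are all just illustrative, not a proposed API):

```python
def rotation_schedule(events, batch=10, rotations=1):
    """Hypothetical weighted rotation: an event with weight w gets w
    batches of `batch` frames per rotation, interleaved rather than
    front-loaded, so all channels accumulate data evenly."""
    sched = []
    for _ in range(rotations):
        remaining = {name: weight for name, weight in events}
        while any(remaining.values()):
            for name, _ in events:
                if remaining[name] > 0:
                    sched.append((name, batch))
                    remaining[name] -= 1
    return sched

# Three L batches for every one R, G and B batch:
print(rotation_schedule([("L", 3), ("R", 1), ("G", 1), ("B", 1)]))
# [('L', 10), ('R', 10), ('G', 10), ('B', 10), ('L', 10), ('L', 10)]
```

Dithers and refocus runs could then be attached at batch boundaries rather than per frame, which is exactly where the overhead savings come from.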