Exposure Time vs. Total Sequence Time...wasting 50% of Imaging Time

I’m new to using SGP and really like the features and ease of use, but the ratio of actual exposure time to total imaging session time is horrible. Last time it took 8 hours to yield 4 hours of exposures (170 x 120s exposures).

Is there anything I can do to speed up the sequencing to get a better ratio of exposures to imaging time?

Could you tell a little more about your setup?
Camera type? CMOS/CCD?
If you dither, how often?
How often do you run autofocus?

Sorry, I should have included that.

Camera: ZWO ASI1600MM Pro w/ Filter Wheel.
Mount: Paramount MyT
Focuser: Pegasus Astro Focus Cube

Using a NUC at the mount via a Remote Desktop connection to the house. Dithering every exposure (will be changing this to every 3rd exposure). Autofocus is set to the LUM filter with offsets for the RGB and Ha filters: 9 images at 8 seconds each, binned 2x2. I’m going to try reducing the autofocus exposures to 4 seconds to save some time. I usually autofocus every 10-15 exposures, but last night I only autofocused at each filter change (4 filter changes).
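For reference, the exposure-time math on that autofocus change (a rough lower bound that ignores focuser travel, downloads, and HFR measurement):

```python
# Rough autofocus-run cost, counting exposure time only.
frames_per_run = 9
for exp_s in (8, 4):
    print(f"{exp_s}s AF frames: {frames_per_run * exp_s}s of exposure per run")
# 8s frames: 72s per run; 4s frames: 36s -- about half a minute saved per run
```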

Just to be clear on something: you said you only autofocused at filter changes. Were you rotating through events, or did you choose “Finish entire events first”? If you were rotating through events, then you would have been doing an autofocus after each exposure.

A log file might be helpful. You definitely shouldn’t be losing 4hrs over an 8hr night.

One cause is definitely your crappy NUC; they are not very good for astronomy use, definitely not with a fast CMOS camera and a 16 MP sensor.
SGP does a little processing after each image is downloaded and before a new exposure is started; this takes time, and the slower the computer, the more time it steals.
People have asked for SGP to be able to start an exposure as soon as the last one is done, like some other software can; I really hope this gets enabled, because I’m using a CMOS camera too.

Dithering between each exposure takes a lot of time; how much depends a lot on whether you are guiding or not. 3+ dithers per filter should be enough, and even every 3rd exposure might be too frequent.
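As a rough illustration of why cadence matters (the 30-second per-dither cost below is just an assumption covering the move plus guider settling; pull the real figure from your logs):

```python
# Hypothetical dither-overhead comparison for a 170 x 120s session.
frames = 170
settle_s = 30  # assumed cost per dither (move + settle); measure your own

for cadence in (1, 3):  # dither every frame vs. every 3rd frame
    dithers = frames // cadence
    print(f"every {cadence}: ~{dithers * settle_s / 60:.0f} min settling")
# every 1: ~85 min of settling; every 3: ~28 min of settling
```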


I finished entire events, did not rotate through events… only four (4) autofocus runs.

My math was wrong on the time: it was 5.7 hours of exposures (I forgot one event), so the SGP “overhead” was about 29% of the total imaging session.
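For anyone checking the numbers, the corrected arithmetic:

```python
# Overhead check: 170 x 120s subs over an 8-hour session.
total_h = 8.0
exposure_h = 170 * 120 / 3600           # = 5.67 h, the "5.7 hours" above
overhead = (total_h - exposure_h) / total_h
print(f"{exposure_h:.1f} h exposing, {overhead:.0%} overhead")  # 5.7 h, 29%
```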

Just trying to figure out if this is par for the course or if I can make some improvements.

A bit too blanket a statement. It depends on the unit and what processor, memory, and drive it is using. The problem with NUCs can be that people looking for a cheap option tend to buy low-end specs. For example, I kinda doubt this one would be all that slow…

i7 NUC with NVMe


I know there are OK-ish NUCs, but I’ve yet to find someone who bought one for their astrophotography setup; people usually buy Celeron CPUs, which are total crap. Single-core speeds at 1/3 to 1/2 of an i5 aren’t unusual.

I have an i5 NUC, 16 GB, and a 1 TB SSD that I use with my setup, and it works great. Very fast.

@Tkriz1027

I know this conversation has turned toward NUCs (which is fine). Just a note that the ratio you specify in the title is not normal. If you want to post logs that show what one of your sequences is doing, we can take a look.

You are correct: my 50% waste figure was wrong, as I forgot to add in a 2nd event in my session. I’m basically spending 30% of the sequence time on processing, not imaging.

The issue is not my NUC, as my image download time is no more than 4s and it has a quad-core i7 processor with a 1 TB drive. As others have mentioned, it is likely due to my settings, which I will continue to review and refine.

I appreciate everyone’s suggestions and advice.

That still seems high. Please feel free to post logs. Here are some time wasters that we see a bunch:

  • PHD2 settling threshold too low
  • Auto centering error threshold too low
  • Attempting to plate solve or auto focus with narrowband filters
  • Unnecessary auto focus triggers
  • Excessively short Light frames
  • Image history with a very slow machine

Interesting thread. I reckon my “overheads” are about 25% of total imaging time. That seems like quite a lot, but if I’m in bed it’s the only way to go. One feature I’d like to see to bite into that is an option to invoke autofocus only after a meridian flip, but not after every centring action. If I’m doing a mosaic it’s unnecessary, but the box has to be ticked if at some point in the night the scope is going to flip. I’m using a C14. The only other thing I’d mention is that it sometimes seems to take a while to acknowledge that the autoguider has resumed and settled.

Indeed. The best thing I did for my setup was to add a NUC at the telescope mount. Connection problems vanished, reliability increased dramatically, and session time became more efficient. Wasting 4 hrs every 8 is indeed a problem; I’m only seeing about a 10-15% overhead. I also added a 256 GB SSD to my NUC, so I download images directly to it. If you are sending them over WiFi to a remote machine, that could really slow things down.

I’ve made some improvements by adjusting the focusing, dithering, and guiding settings, which has lowered my overhead to about 20%. I’ll need to keep playing with things to get this dialed down as much as possible.

Don’t be scared of cheap NUCs; I’m running 2 rigs with J3455 Celeron-based NUCs and they have no problem handling SGP, PHD, EQMOD, and CdC. I wouldn’t try to run with less than 4 GB RAM, and an SSD is essential.
Re overhead vs. imaging time: I had the same concerns as you when I first started using SGP, but gradually, as I gained more experience and understanding, I tweaked the various parameters for autofocus, guiding, etc. and made big improvements. It’s an iterative process; I don’t think there’s a universal prescription, you just have to work out what’s best for your own setup.


I do feel that SGPro is often quite inefficient in its use of non-imaging time, especially with modern CMOS cameras, where “image download time” is often measured in fractions of a second. Things like analysing the image to measure statistics (such as the number of stars and their HFR) have too much impact even on a fast PC and slow everything down.

In theory, with my ASI1600 in my environment, I can get away with individual exposures of 15-30s and use lots & lots of them. Sure, that’s heavy on the processing step afterwards, but that isn’t really time-constrained: you can run it during the day, on cloudy nights, or just on another machine.

But the overhead SGPro introduces per individual image makes me choose longer sub-exposures.

To me it doesn’t really make sense that the image statistics calculations block the imaging; they should run on a separate thread, especially since they don’t actually trigger anything in the imaging logic (like refocussing if the HFR values deviate too much). And even if they did, it would be simple to abort the next exposure when something like that is detected.
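I don’t know SGPro’s internals, so the names below (capture_frame, measure_stats) are hypothetical placeholders, but conceptually I mean something like this: hand each downloaded image to a worker thread and start the next exposure immediately.

```python
# Minimal sketch: run image statistics off the capture loop so the
# next exposure starts right after download. capture_frame and
# measure_stats are placeholders, not real SGPro functions.
from concurrent.futures import ThreadPoolExecutor

def capture_frame(exposure_s):
    ...  # expose + download; returns the raw image

def measure_stats(image):
    ...  # star detection, HFR, etc. -- the slow part

stats_pool = ThreadPoolExecutor(max_workers=1)

for _ in range(170):
    image = capture_frame(120)
    stats_pool.submit(measure_stats, image)  # non-blocking hand-off
    # the loop continues straight into the next exposure

stats_pool.shutdown(wait=True)  # let pending stats finish at sequence end
```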

Of course, there will always be overhead (dithering, filter changes, refocussing), but when working with short exposures you don’t have to do those for every single exposure: you can do 20 x 15s, then dither, and the result would in practice be similar to dithering once per 300s exposure.

I’m extremely worried that the statistics collection being “blocking” for the progress of exposures will make matters worse and worse. There are new cameras coming out with 60-megapixel sensors, and I’m assuming this part of the process scales linearly with pixel count: whereas it takes ~4 seconds on my current setup (16-megapixel ASI1600, relatively fast laptop), it would take ~15 seconds with a 60-megapixel camera. At that point, even the “longer” exposures start to suffer from that extra overhead.

I don’t know what the best solution would be, but (a) faster image statistics and (b) moving it off the main thread would eliminate this piece of overhead.

I’d also love to be able to manage the “event rotation” more finely. Right now, if I want to do LRGB with three L for every RGB, and I want to “loop through events” so I have consistent data for all channels by the end of the night (or by the time clouds roll in), I have to set it up with three separate “L” events. Worse still, if you think of the “short exposures” approach and all the overhead filter changes introduce, I’d love to be able to do 10 x R, 10 x G, 10 x B, then loop back to 10 x R, and so on. Right now I have to either accept doing all R first, then all G, then all B; set up 30 sequential events, 10 per colour; or accept the ridiculous overhead of doing R, G, B, R, G, B… hundreds of times (at 15s per image, that would effectively drop imaging time below 50% of clear skies).

So a more flexibly configurable event rotation (perhaps a “weight” system, where you can define a weight per event that affects when the rotation triggers?), coupled with optimising away the “dead” time between actual exposures, would have a significant impact on reducing observation overhead.
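To make the weight idea concrete, here’s a minimal sketch (event names and frame counts are made up; “weight” here means frames captured per visit before rotating on):

```python
# Sketch of weighted event rotation: weight = frames per visit.
# With weights L=3, R=G=B=1 you get L,L,L,R,G,B repeating, so the
# channels stay balanced even if clouds end the night early.
events = [("L", 3), ("R", 1), ("G", 1), ("B", 1)]   # (filter, weight)
remaining = {"L": 90, "R": 30, "G": 30, "B": 30}    # frames still wanted

while any(remaining.values()):
    for name, weight in events:
        batch = min(weight, remaining[name])
        remaining[name] -= batch
        for _ in range(batch):
            pass  # capture one frame here
        # one filter change per visit instead of one per frame
```

Set the weights to 10/10/10 for RGB and you get the 10 x R, 10 x G, 10 x B loop with a tenth of the filter changes.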


A separate thread to measure statistics would be a good thing.

I propose a change to “rotate through events”.
My proposal is that you set how many exposures should be taken per rotation instead of in total; to control the total number of images, a value for the number of rotations could be set under “Delay and ordering options”.
Maybe remove/move the delay portion? Does anyone actually use the delay?

Circling back to this after spending a couple of months refining the SGP process and also fine-tuning my mount (Paramount MyT) and PEC: I have been able to reduce the SGP overhead to about 15% of my total imaging time. A large part of this is due to imaging unguided, but also to the improved (and faster) autofocus routine in the latest version of SGP.