I don’t think I showed this image history graph before - it’s real data showing changes in the focus criteria (HFR, number of stars recorded, temperature) over a 3-hour run where the target was rising in the East (already above 30 deg) up towards the meridian. Note that the scope was already reasonably well thermally equilibrated before this run began.
The temperature trigger is set to 1 deg, so no re-focus was performed until just before the last frame (which should be obvious from the Focus Position!), and that occurred just before the onset of dawn twilight. So you can see the HFR gradually reduces over time (improved seeing?) and the number of stars recorded increases. Although it’s fairly predictable in this example, there being a fairly linear drift, those changes would likely run in the opposite direction if the target started out at the meridian and slowly sank to the West. This sequence of events doesn’t even take into account the possibility of changing atmospheric conditions, which can also drastically affect the measured parameters.
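Just to make the trigger behaviour above concrete, here is a minimal sketch (not the actual sequencer code - the function and variable names are my own) of how a temperature-delta trigger works: a re-focus only fires once the temperature has drifted by more than the set threshold since the last autofocus run.

```python
TEMP_TRIGGER_DEG = 1.0  # hypothetical threshold, matching the 1 deg setting mentioned above

def should_refocus(last_focus_temp: float, current_temp: float,
                   trigger: float = TEMP_TRIGGER_DEG) -> bool:
    """Return True when temperature drift since the last autofocus exceeds the trigger."""
    return abs(current_temp - last_focus_temp) >= trigger

# Example: over a slow pre-dawn cool-down, the trigger only fires near the end of the run.
last_focus_temp = 12.4
for temp in (12.4, 12.1, 11.9, 11.6, 11.3):
    if should_refocus(last_focus_temp, temp):
        print(f"refocus at {temp} deg")  # fires once, at 11.3 deg (drift = 1.1 deg)
        last_focus_temp = temp
```

This matches what the graph shows: a well-equilibrated scope drifting slowly means hours can pass before the threshold is crossed.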
In some other threads I’ve noted comments wanting to short-cut the autofocus mechanism, using single-point comparators as a test of focus drift, for example. Personally I think that’s a bad idea - as you can see from the above graph, you are going to get changes anyway based purely on the altitude of the target, and those changes will be dependent on whether the object is rising or setting. No, I personally think the autofocus already does a fantastic job of maintaining focus, and I for one do not begrudge the time taken for each and every full autofocus run - but I would like to set my own triggers for additional focus runs where I think they would be beneficial.
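A rough back-of-envelope illustrates why a single-point HFR comparison drifts with altitude alone. Using the simple plane-parallel airmass approximation and the standard Kolmogorov scaling of seeing FWHM with airmass^0.6 (a textbook rule of thumb, not something measured in this run):

```python
import math

def airmass(altitude_deg: float) -> float:
    """Plane-parallel approximation: airmass = 1 / sin(altitude)."""
    return 1.0 / math.sin(math.radians(altitude_deg))

def relative_seeing(altitude_deg: float) -> float:
    """Seeing FWHM relative to zenith, using the Kolmogorov airmass**0.6 scaling."""
    return airmass(altitude_deg) ** 0.6

# At 30 deg altitude the airmass is 2.0, so the seeing-limited star size is
# roughly 2**0.6 ~ 1.5x worse than at the zenith - a ~50% change in measured
# HFR with no focus drift at all.
print(round(relative_seeing(30.0), 2))
```

So a rising target will show its single-point metric "improving" and a setting one "degrading" regardless of actual focus, which is exactly the trend visible in the graph above.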