Added ZWO settings dialog?

I do think it is probably confusing to use dynamic range here. DR would generally be the saturation point over the read noise, or perhaps over the total background sky noise. The former would model effective camera DR, as if the camera had read noise as low as in the stack. The latter would be more of a model of effective real-world bit depth in the stack. The latter, to calculate effective bit depth in the stack, is usually what I prefer to use when discussing “range of information within the stack”.

At a higher gain, the camera definitely has lower dynamic range. There is no getting away from that. That said…at a higher gain, you usually use shorter exposures. Shorter exposures usually mean you acquire and stack more subs, which means you recover more bit depth through stacking than if you used longer subs at a lower gain. In the end this helps to normalize the effective dynamic range of your stack. For a given camera, with reasonable stack sizes (which I would consider to be up to around 300 subs…you can certainly stack more, but you need to stack a LOT more for it to be useful, so it becomes a much greater challenge), a deep stack at a high gain will usually have similar DR to a shallow stack at a low gain.
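To put rough numbers on that, here is a minimal sketch of the trade-off using the simple stack-DR model (saturation over read noise, with read noise reduced by the square root of the sub count). The sensor figures are hypothetical, only roughly in the ballpark of a modern CMOS astro camera; substitute your own camera’s measured full well and read noise.

```python
import math

def stack_dr_stops(full_well_e, read_noise_e, sub_count):
    """Effective dynamic range of an averaged stack, in stops (log2).

    Models only read noise; sky and dark noise are ignored here, so this
    is an upper bound rather than a real-world figure.
    """
    return math.log2(full_well_e / (read_noise_e / math.sqrt(sub_count)))

# Hypothetical sensor figures (electrons) -- not any specific camera's spec.
low_gain  = {"fwc": 20000, "rn": 3.6}   # low gain: big well, more read noise
high_gain = {"fwc": 4000,  "rn": 1.7}   # high gain: small well, low read noise

print("Low gain,  30 subs:",  round(stack_dr_stops(low_gain["fwc"],  low_gain["rn"],  30), 2), "stops")
print("High gain, 300 subs:", round(stack_dr_stops(high_gain["fwc"], high_gain["rn"], 300), 2), "stops")
```

With these made-up numbers the deep high-gain stack and the shallow low-gain stack land within about half a stop of each other, which is the normalization effect I mean.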

In fact, the low gain may still have an advantage in DR even compared to a deep high gain stack…however, things may not be quite that simple. Lower gains, at least on CMOS cameras, often have more FPN, notably more banding and similar issues, and long exposures usually have brighter glows. This FPN can warrant even longer exposures to effectively bury the additional unwanted pattern noise, which can diminish DR and balance high and low gains even more.

I think what you are trying to describe is not the dynamic range of the signal, but its fidelity (kind of like high fidelity audio…which is about the quality of the audio). The higher gain reduces quantization error, allowing you to sample the signal with greater precision and accuracy, producing a higher quality signal. That signal may not have maximum hardware DR, but if you don’t actually need all that much DR…such as with shorter exposures and narrowband filters…then the higher quality signal is valuable and useful, as it separates finer, fainter details more readily and produces a more natural, normally distributed noise profile with fewer patterns.

The fidelity or the overall quality of the signal is high at higher gains, and often low at lower gains, with CMOS cameras. A low fidelity signal, one that has higher FPN, brighter glows, more hot pixels, etc. may require more stacking to average out all of those things as well…which can diminish the value of greater hardware dynamic range at lower gains.


That is not the case in my experience (with CCDs). My RGB exposures typically run 2-5 minutes before the brightest stars start to clip. With 5nm or 3nm narrowband it requires longer exposures to register anything. Most will agree Ha is the strongest signal, with OIII and SII being considerably weaker. There are a few nebulae where this is not so, but I would say they are the exception. When I’m imaging I will take 2-3x more 10 or 20-minute exposures of OIII and SII, and those stacks are considerably noisier than the Ha stack.

I don’t think that is the case. Any data I’ve seen on stacking tells me that it increases signal to noise by averaging out the noise, which does have the effect of making the lower bits more meaningful; however, there is no increase in bit depth. The only way to increase the bit depth with stacking would be to stack overlapping exposure values to get an HDR result. Here is a good analysis of the effects of stacking: Image Processing Stacking Methods Compared, Clarkvision.com

So it depends on what you mean by “dynamic range”. Stacking does reduce the noise thus improving the detail in the low bits but there is no increase in overall bit depth.

It seems a little strange to me that ZWO camera owners are having to figure this out on their own. ZWO sells these cameras specifically for astronomical imaging. Why has ZWO not published definitive guidelines for using their cameras for the various types of astronomical imaging being done with them? If stars, galaxies, nebulae, etc. require different gains and offsets, it seems to me that ZWO should tell its users what those are.

Charlie

Amen to that!

Yes I’ve been trying to boil all this down to some simple rules of thumb I can include in my lectures and videos (which up to now have been DSLR based).

It’s been quite a journey, and a frustrating one, as every time I think I have it nailed down it gets slippery again.

Astro CMOS cameras: to bin or not to bin, and proper exposure, are discussed endlessly (sometimes flogged on by me in search of simple rules and/or software aids) on forums, etc.

When I first got my current rig (12" RC truss, CCDT67, filters/filter wheel and ASI1600MM-Cooled), I was super stoked and happy with the initial results, just “exposing to the right” as on a DSLR and using the minimum read noise setting provided by ZWO in their driver.

From my previous location, which was much darker than where I am now, I quickly produced some (I thought) nice images:

Then the feedback started dampening my newfound joy: halos around bright stars, blown-out stars, etc.

I tried to fix the halos in PS and diagnose the problem. It turned out to be the ZWO (pre-2018) filters. Note: some people have a different problem with different-looking star halos, which are caused by the sensor glass as delivered by Panasonic to ZWO and others.

Then I started replacing stars and researching better exposure settings.

Moving to a new home (sadly in a white zone) then slowed me down, but basically I went from sort of “fat, dumb and happy” to stressing about exposure and not being able to get to a simple rule to do it right, and that somewhat continues to this day.

Now I image from a white zone, taking 11 hours to produce what you could do from a dark site in 45 minutes:

but then I don’t have to drive or haul my heavy gear, and I can comfortably sleep in my own bed while imaging.

Still, the hunt for “proper” exposure following simple rules for these new cooled CMOS astro cameras continues.

Going back to a post further up the thread, before the latest lively DR discussions, here is my latest attempt (that I wanted feedback on from multiple folks):

“Expose to the right” becomes more specifically:

Use this table of offsets and gains and median ADU (as shown in SGP):

Median ADU shown in SGP:
Gain 0, Offset 10: 400 ADU
Gain 75, Offset 12: 550 ADU
Gain 139, Offset 21: 850 ADU
Gain 200, Offset 50: 1690 ADU
Gain 300, Offset 50: 2650 ADU

Suggested settings for LRGB or OSC shots of galaxies, globs, clusters, etc. are Gain 75, Offset 12.

For NB (and perhaps LRGB or OSC of nebulae), try Gain 200, Offset 50.

Next, point your scope at the highest point in the sky your target will pass during imaging. If it will cross the meridian, point just to the west of where it will cross (so you don’t need to flip during the next steps).

Do trial exposures (each filter). Your goal is to get a median ADU (as shown in SGP) that is at or a little above what is shown in the above table.

This will ensure you are above the skyglow and other noise factors at the worst-case sky position for your target (the darkest sky due to LP).

Sanity-check how much star clipping you have. If it’s more than you want/can tolerate, lower the gain.
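For anyone scripting their trial exposures, here is a minimal sketch of that check against the table above. The target values come straight from the table; the function and the margin parameter are just illustrative, not anything SGP provides.

```python
# Minimum target median ADU (as read in SGP) for each gain/offset pair,
# taken from the table above.
TARGET_MEDIAN_ADU = {
    (0, 10): 400,
    (75, 12): 550,
    (139, 21): 850,
    (200, 50): 1690,
    (300, 50): 2650,
}

def exposure_ok(gain, offset, measured_median_adu, margin=1.0):
    """Return True if a trial sub's median ADU is at or a little above
    the target for this gain/offset setting."""
    target = TARGET_MEDIAN_ADU[(gain, offset)]
    return measured_median_adu >= target * margin

# Example: a trial sub at gain 200 / offset 50 measuring 1400 ADU is still
# short of the 1690 ADU target, so lengthen the exposure and try again.
print(exposure_ok(200, 50, 1400))   # False
print(exposure_ok(200, 50, 1750))   # True
```

The idea is simply to keep lengthening the trial exposure until the measured median clears the target for the gain/offset pair you chose.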

Now the other part of all of this that has frustrated me is the ability to predict what will happen to the exposure results if I change something.

I’ve posted before about teaching my DSLR students to use the highest ISO setting in their camera to determine the proper exposure at the ISO they will actually be imaging at, in order to save precious dark time.

I now understand the exact meaning of the gain numbers in these ZWO cameras, so I can somewhat relate them to ISO, or at least understand what the math for “equivalent exposures” at different gains should be. However, when it comes to “median ADU”, that doesn’t seem to be deterministic.

The same goes for doubling the exposure at the same gain and predicting how that affects the median ADU.
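For what it’s worth, here is a rough sketch of that “equivalent exposure” math, assuming the ZWO convention that the gain setting is in units of 0.1 dB (so 200 gain units is a factor of 10). The median ADU prediction is only a crude model that scales the part of the median above the offset pedestal, which is part of why measured medians never quite track it; the pedestal value in the example is hypothetical and should be measured from your own bias frames.

```python
import math

def gain_factor(gain_setting):
    """Relative amplification, assuming ZWO's gain setting is in 0.1 dB units
    (so 200 units = 20 dB = 10x)."""
    return 10 ** (gain_setting / 200)

def equivalent_exposure(t_ref, gain_ref, gain_new):
    """Exposure time at gain_new that gives roughly the same output level as
    t_ref seconds at gain_ref (same target, same sky)."""
    return t_ref * gain_factor(gain_ref) / gain_factor(gain_new)

def predict_median_adu(median_ref, offset_adu, t_ref, t_new, gain_ref, gain_new):
    """Very rough prediction: scale only the part of the median above the
    offset pedestal by the exposure time and gain ratios."""
    signal = median_ref - offset_adu
    scale = (t_new / t_ref) * gain_factor(gain_new) / gain_factor(gain_ref)
    return offset_adu + signal * scale

# Example: 30 s test frames at gain 300, planning to image at gain 139.
print(equivalent_exposure(30, 300, 139))                 # ~190 s for a similar level
# Doubling the exposure at a fixed gain (800 ADU pedestal is made up).
print(predict_median_adu(2650, 800, 30, 60, 300, 300))   # 4500 ADU under this model
```

In practice the sky and the offset pedestal move the measured median around, so treat the prediction as a starting point for the trial exposures, not a guarantee.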

Any input on that or my overall approach?

I want to be “fat, dumb and happy” again! ;0)


@Jon Rista,

Yeah, I was thinking something similar re: my use of “dynamic range”. I think “resolution” or “bit resolution” might be a better technical term. I still need to talk about a “range” of ADU values, however.

And yes, I’m not considering the whole stacking question. I’m watching with interest and testing the “smart histogram and brain” for exposures that SharpCap is developing, which does take stacking into account if I understand correctly, and I was hoping for something similar here in SGP.

Thanks

@dts350z I think you are right that there are really two different concepts of dynamic range in these discussions. One is defined by the physical bits of the camera (or resolution, as you say). This is the maximum possible dynamic range; no amount of stacking will get you beyond this limit. Then there is the useful or noise-limited dynamic range. Stacking will improve this limit. So when you choose a gain, you are setting the maximum dynamic range limit, which no amount of post-processing will undo (assuming equal exposure values).

I totally agree on that and it applies to CMOS sensors too. They are both linear devices, so their output is directly proportional to light flux, which in turn depends on the filter’s bandwidth and the relative abundance of hydrogen, oxygen and sulfur in the nebula (typically Ha >> OIII, SII).

There is a universally accepted definition of dynamic range in the scientific community, which is the one @jon.rista reported above: DR = FWC/RN.

That’s not likely to happen, because there are so many parameters involved in an exposure: the surface magnitude of your target (or better, the monochrome flux for each wavelength that will be part of your picture), the surface magnitude of your sky (light pollution), the bandwidth of any filter you might use, the diameter and focal length of your scope and the optical efficiency, the size of the sensor and its pixels, the quantum efficiency of the sensor, and the exposure time. Those are the physical quantities required to evaluate how many electrons you will get per pixel. Most of them may not be evaluated by the manufacturers, as they depend on your specific configuration.

On the other hand, as per my previous post, chances are that you’ll only need one or two different gains, depending on whether you’re shooting broadband and/or narrowband (you don’t need different gains for different types of objects, i.e. galaxies vs nebulae). Then you simply set the appropriate exposure time for your target and your sky. Offset can be established once and for all (for each gain value you will use) by simply choosing the lowest value that keeps the histogram detached from the left-hand side (no pixels clipped to zero). The good news is that some manufacturers provide an automatic offset setting.
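If you want to make that offset check objective rather than eyeballing the histogram, here is a minimal sketch, assuming you have a bias frame saved as FITS and numpy/astropy installed; the filename is hypothetical.

```python
import numpy as np
from astropy.io import fits  # assumes the bias frame is a FITS file on disk

def zero_clipped_fraction(path):
    """Fraction of pixels clipped to zero in a bias frame. If this is
    non-zero, raise the offset and shoot another bias."""
    data = fits.getdata(path)
    return np.count_nonzero(data == 0) / data.size

# Example: step the offset up until the bias shows no zero-valued pixels.
# 'bias_gain200_offset50.fits' is a made-up filename.
print(zero_clipped_fraction("bias_gain200_offset50.fits"))
```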

I believe you’ll have an easier time with a CMOS astrocamera if you think of it like a DSLR. On a DSLR you just choose an ISO setting and then an exposure time. With a CMOS camera you’re doing the same: choose a gain that fits your purposes and then the appropriate exposure time. Regarding the gain, you don’t need to be very precise. I mean that, with a camera offering a gain from 0 to 560, there will be no practical difference if you choose 60 instead of 59.

If you read my whole post, you will see that I said the increase in dynamic range could also be termed an increase in effective bit depth or an increase in tonal range.

Regardless, as you stack, averaging of the signal in each frame results in a reduction of noise in the final output. The official definition of dynamic range is FWC/RN. That would apply directly to the hardware DR of a camera. Dynamic range refers to the number of discrete tones that can be discerned in the data. Due to noise, a change of some specific amount, the dynamic range “step”, is necessary for one tone to be discretely discerned from another.

For an integrated stack, this concept can be extended. Read noise, along with all other forms of noise, is reduced as you stack (with an averaging model). The maximum number of discrete tones possible in an integrated image would be FWC/(RN/SQRT(SubCount)). Again, this is a number of discretely discernible steps. This would be the maximum. Since an image contains other information and thus other noise as well, plus additional offsets, a more accurate and representative formula of the discrete number of steps discernible in an integrated stack of subs would be:

(FWC - BiasOffset - DarkOffset - SkyOffset)/(SQRT(ReadNoise^2 + SkyNoise^2 + DarkNoise^2)/SQRT(SubCount))

Determining the number of discrete steps of useful information in an image need not be restricted to just hardware. It would apply to any image signal. Stacking increases SNR, yes, however the SNR is not quite the same as the dynamic range…or, if you prefer a different term, tonal range. Tonal range would describe the number of steps of information from the noise floor (subtracting any offsets) through the maximum signal, whereas SNR would be relative to any given measured signal peak (which could be very low for a background signal area, moderate for an object signal area, or very high for a star signal area). SNR is dynamic and different for each pixel of the image, whereas the tonal range would be consistent for the entire image.

As for bit depth: if you stack your data using high-precision accumulators (i.e. 32-bit or 64-bit float), then you can most assuredly increase the effective bit depth as well. If you stack with accumulators of the same bit depth as your camera, then no…you wouldn’t gain anything. In fact, stacking would be largely useless if you stacked with low-precision accumulators.

These days, pretty much all stacking is done with high precision accumulators, which means that as you stack, your effective bit depth follows the formula above. You can convert the above steps to effective bit depth with:

EffectiveBits = log2((FWC - BiasOffset - DarkOffset - SkyOffset)/(SQRT(ReadNoise^2 + SkyNoise^2 + DarkNoise^2)/SQRT(SubCount)))
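Here is that same formula as a small helper, with all inputs in electrons. The numbers in the example call are hypothetical, just to show the shape of the calculation.

```python
import math

def effective_bits(fwc, bias_offset, dark_offset, sky_offset,
                   read_noise, sky_noise, dark_noise, sub_count):
    """Effective bit depth of an averaged stack, per the formula above:
    usable signal range divided by the stack's total random noise."""
    usable_range = fwc - bias_offset - dark_offset - sky_offset
    per_sub_noise = math.sqrt(read_noise**2 + sky_noise**2 + dark_noise**2)
    stack_noise = per_sub_noise / math.sqrt(sub_count)
    return math.log2(usable_range / stack_noise)

# Hypothetical numbers (electrons) for illustration only:
print(round(effective_bits(fwc=4000, bias_offset=40, dark_offset=10, sky_offset=300,
                           read_noise=1.7, sky_noise=17.3, dark_noise=3.2,
                           sub_count=100), 2))
```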

Not sure I get all that you are saying. If you are saying you can stack and increase the bit depth beyond that of the camera, that is not physically possible. You can’t create information where none exists. If the camera is 12 bits, there is no way to create the 13th bit, because that information does not exist in the original data no matter how many you stack.

For each pixel in the camera the signal is T + Ni, where T is the true value and Ni is some random noise. When you stack n images you get (Σ(T + Ni))/n, which equals T + (ΣNi)/n. With stacking the last term trends to zero, so you are left with the original true (noiseless) value. That is, stacking does not increase the physical bit depth beyond what the camera provides. Stacking does reduce the noise and increase the signal to noise, but you can’t get more resolution than the camera started with unless you do something like HDR.

You can use high-resolution accumulators, but any bits beyond those of the camera are false information (artifacts of the math). If the camera is 12 bits, those 12 bits only represent those values; they contain no information that would allow you to determine more bits no matter how many you combine. In base 10, suppose I gave you the values 2, 5, 7. OK, let’s stack them by averaging. The result is 4.6666. Your premise is that the .6666 has added to the information content of the data, but that is false; it is an artifact of the math. The math operations produced extra digits but not valid information.

https://www.researchgate.net/publication/258813969_Temporal_image_stacking_for_noise_reduction_and_dynamic_range_improvement

and more readable:

http://keithwiley.com/astroPhotography/imageStacking.shtml

etc.

You can do precisely that. Consider a die that has a very slight imbalance. There are only six states. Throwing it once, you have roughly a 1/6 chance of getting any particular number. Throw it a thousand times and average the results, and you will reveal the slight anomaly.
I have never tried it, but take a 32-bit FITS image stacked from 30 16-bit downloads from a CMOS camera and create a 12-bit copy (the depth of the ADC). Now stretch both similarly… try it and smile 🙂

Noise ironically can improve resolution. I recall something similar happens in CD audio stages, where oversampling and a slight dither reduces quantisation noise and provides better resolution.
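The biased-die thought experiment above is easy to simulate; here is a quick numpy sketch (the amount of bias is made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# A slightly loaded die: face 6 comes up a bit more often than 1/6.
faces = np.arange(1, 7)
probs = np.array([0.16, 0.16, 0.16, 0.16, 0.16, 0.20])

few  = rng.choice(faces, size=10, p=probs)       # a handful of throws
many = rng.choice(faces, size=100_000, p=probs)  # a deep "stack" of throws

print(few.mean())   # noisy, tells you almost nothing
print(many.mean())  # converges near 3.6 (a fair die averages 3.5)
```

One throw can never show the bias; the long-run average does, which is the same mechanism at work when noisy subs are averaged.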


Chris, I only have a very basic knowledge about the drizzle integration algorithm and no clue about its implementation, but your reply made me think about it. Do you know if it’s based on the same principle? Thanks.

I can’t quite recall - doing a quick look:
“Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images. Dither is routinely used in processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD.”
I guess there is no need to add noise to our sensor signal; there is already plenty about before the ADC conversion.

Of course it’s possible.

Consider a very simple case. Let’s say that you have a 2-bit camera with zero read noise, unity gain, and no quantization error. Each pixel on the camera can have the value 0, 1, 2 or 3, and nothing else.

Now, let’s say that you take 16,000 exposures, taking care not to saturate any pixels (they might be really short exposures). Now do a simple sum of the frames using software that stores 16 bits per pixel. The dimmest pixel could be 0, the brightest could be 48,000, and any value in between could exist. You would have the performance of a 16-bit camera with a FWC of 48,000.

In the real world, the read noise and quantization error matter, so the math is not as simple. But the principle still applies.
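A quick numpy simulation of that thought experiment (the flux level is arbitrary, and photon shot noise stands in for the dither):

```python
import numpy as np

rng = np.random.default_rng(1)

true_flux = 0.8      # mean electrons per short exposure (arbitrary, between ADU steps)
n_subs = 16_000

# Each sub: Poisson photon noise, then a 2-bit ADC (values 0..3, no read noise).
subs = np.clip(rng.poisson(true_flux, size=n_subs), 0, 3).astype(np.uint16)

summed = subs.sum(dtype=np.uint32)   # high-precision accumulator
print(summed / n_subs)               # ~0.79, a level no single 2-bit frame can record
```

Each 2-bit frame can only hold 0, 1, 2 or 3, yet the high-precision sum resolves a mean that sits between those steps.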


@buzz If you have a 12-bit camera, I think you can agree that there are only 12 bits of information in each image. Right?

So now stack a bunch of them. All the images you combine have no information about the 13th bit. So, where does the information about the 13th bit come from?

Stacking does not create information that was not in the original images. This is basic information theory. Stacking does average out the noise - that information was in the original images. When you average 12 bit images, you get an average of 12 bits - no extra bits of information are created. The same is true if you stack by adding - you don’t create new information.

Don’t confuse data with information. Averaging a bunch of images can create more than 12 bits of data but that is not new information.

@wadeh237 You are confusing data with information. Each exposure has only 2 bits of information. Add as many as you want; it does not create new information, just more data. The original 2 bits of information do not predict any other bits. Combining them does not change that.

On the other hand, doing overlapping HDR exposures does create new information. But just repeating the same exposure does not predict any new bits - no new information is created.

Think of a camera without noise. If you take lots of exposures, every one will be the same value. So combining them adds no new information because they are all the same. Now consider a camera with noise. Taking lots of exposures can average out the noise but there is still no new bits of information being created because the basic underlying noiseless image is the same in each exposure.