QHY163M Bias and Dark frames

When taking bias frames with my QSI683 (KAF-8300 chip), I see ADU values like 250 for 1x1 bias frames and 600 for 2x2 bias frames (at -10C). However, when I took my initial bias frames with the QHY163M (CMOS chip), I got ADU values of 2400 at 1x1 binning and 7400 ADU at 2x2 binning. These were taken at -10C and gain = 0.

I am puzzled by the very large ADU values of the QHY bias frames. It gets even stranger for dark frames. Dark frames at 30 sec, 90 sec and 180 secs all show the same values and those values are the same as the bias frame values.

I really don’t understand these results.


From what I understand, bias frames aren’t a good idea for CMOS cameras; you are better off matching dark frames to your lights and subtracting without any attempt to optimise the dark frame subtraction (is that just a PI thing anyway?). But that’s not answering your question. The reason for the high values with your CMOS camera is the offset that has been set in the driver, and this is not something you can control in CCD cameras. The offset adds a value to each pixel reading to ensure that you don’t end up with negative values once the variation due to read noise is included. If you look at the histograms of your dark frames, the left-hand side of the histogram should be just a bit above zero. If it is too far to the right, you are losing dynamic range, so you can set the offset lower to move it left. But darks and lights need to be matched in terms of gain, offset, temperature and time.

Now as to why your dark frames are all about the same, it is a similar answer. The dark frame level is offset + dark current × time. So dark frames do increase as exposure time gets longer, but dark current is small at low temperatures, so the readings are dominated by your offset, which may be a bit too high. The whole issue of the “best” offset and gain values to use is a minefield; get three people together and you’ll probably get six different opinions!
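That “offset plus dark current times time” point can be put in numbers with a quick Python sketch. All three figures below are hypothetical placeholders for illustration, not measurements from any particular camera:

```python
# Toy model of a dark frame's mean level: offset + dark current x time.
# All three numbers below are hypothetical, for illustration only.
OFFSET_ADU = 2400        # pedestal added by the driver (ADU)
DARK_CURRENT = 0.005     # assumed dark current, e-/pixel/s at -10C
E_PER_ADU = 5.0          # assumed camera gain, electrons per ADU

def mean_dark_adu(exposure_s):
    """Expected mean ADU of a dark frame of the given exposure."""
    return OFFSET_ADU + DARK_CURRENT * exposure_s / E_PER_ADU

for t in (30, 90, 180):
    print(f"{t:>3} s dark: {mean_dark_adu(t):.2f} ADU")
```

With a cold sensor the dark-current term is a tiny fraction of an ADU per frame, so the 30 s, 90 s and 180 s darks all read essentially the same as the bias level.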
I am not claiming to be an expert by any means compared to others with more experience, but what has worked for me is to figure out the offset that works for the gain I use most often (in the case of my ASI294MC-Pro, that is a gain of 120 and an offset of 10) and stick with that. If I ever decided to use a higher gain (this has been suggested for narrowband imaging, though apart from the reduction in read noise I’m not sure I ever see the need to go beyond unity gain), I would need to use a higher offset.
The histogram is your friend in working through all of this, or if you use SharpCap, it will do quite a bit of this for you when you run a sensor analysis.

Hi Charlie

I am puzzled by the very large ADU values of the QHY bias frames.

The QHY163 is a 12-bit camera (possible ADU values from 0 to 4095). On the other hand, ASCOM specifies (more or less; that is another story) that the image must be 16 bit, with ADU values from 0 to 65535. To make that happen, the ASCOM driver simply rescales the ADU values coming out of the camera by multiplying them by 16.
If you (or the manufacturer) now set an offset of, say, 100 ADU for the camera, this will show up as 100 × 16 = 1600 ADU as delivered by the ASCOM driver.
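As a sanity check, the rescaling is just a multiplication by a power of two; a two-line Python sketch (the ×16 factor comes from padding 12-bit data out to 16 bits):

```python
# Native 12-bit ADU -> 16-bit ADU as delivered by the ASCOM driver.
def to_16bit(native_adu, adc_bits=12):
    return native_adu * 2 ** (16 - adc_bits)   # x16 for a 12-bit camera

print(to_16bit(100))   # offset of 100 -> 1600 ADU in the delivered image
print(to_16bit(153))   # the factory offset of 153 found later in this
                       # thread -> 2448, close to the ~2400 ADU bias level
```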

On a side note, the moment this signal is read into PixInsight, it will be once again rescaled to floating-point (values between 0 and 1 with a lot of decimal places). You can tell PI to present these values as 8, 12, 14 or 16 bit (and some other) values.

Regarding 2x2 binning: if life were perfect, after binning we would get exactly 4 times the unbinned offset value. It is not perfect, only close.

It gets even stranger for dark frames. Dark frames at 30 sec, 90 sec and 180 secs all show the same values and those values are the same as the bias frame values.

For all CMOS cameras, and almost all CCD cameras, there is an internal function that subtracts an average value of the accumulated dark current (based on some masked pixels that never receive light) from the image before it is output. You basically never see the accumulated dark current itself, only the noise of the dark signal (both fixed-pattern noise and random noise). The fixed-pattern noise will be removed (well… almost) during the calibration process by master dark subtraction.

Kind regards,


Thanks for the great explanations. Clearly, I need to get a better handle on setting the offsets correctly. While my QSI683 is currently doing fine, we will all be using CMOS sooner or later. I know from reading previous forum posts that there is a lot of confusion about gain and offsets in CMOS cameras. I chose a gain value of 0 simply because the QHY manual shows that setting provides the greatest well depth. And, yes, the whole 12 bit to 16 bit conversion issue can be confusing when you see that the QHY163M is documented to have a full well depth of 50,000 electrons.

Initially, it didn’t seem to me that a 12 bit camera could do that well with imaging because of the limited dynamic range but then you have to consider the fact that all these pretty pictures are being viewed on computer monitors, which are mostly 8 bit devices. Some high end monitors are 10 bit and a small number of high end gaming monitors are 12 bit. So, a 16 bit image and a 12 bit image viewed on an 8 bit monitor look the same.


I have an ASI183MC-Pro, which is 12 bit and has small pixels with a full well depth of 15k electrons. 12 bits means 4096 discrete values are possible from the ADC, so at a gain of 0 a full well gives an output of 4096, which works out to about 3.7 e/ADU (in native ADU, not scaled to 16 bit; I’ll come back to that). That means anywhere from 4 to 7 electrons registers as an output of 1, which for me represents significant quantization of the signal and could matter on dim objects. At the other end of the range, a gain of 300, the camera is down around 0.1 e/ADU, so a pixel that collects more than around 450 electrons outputs 4096; you end up potentially clipping brighter objects like stars quite badly and losing colour data. For me, the sweet spot is unity gain, where 1 electron equates to 1 ADU, so there are no quantization artefacts (though the usable well depth is then limited to the 4096-count ADC range). On this camera (and all ZWO cameras, I think), that is at a gain of 120.
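Those e-/ADU figures follow from dividing the full-well depth by the ADC range (and, for the high-gain case, from ZWO’s gain scale being in units of 0.1 dB); a quick sketch with the ASI183 numbers from the paragraph above:

```python
# e-/ADU at minimum gain: full well divided by ADC counts.
FULL_WELL_E = 15000
ADC_LEVELS = 2 ** 12          # 4096 counts from a 12-bit ADC

e_per_adu_gain0 = FULL_WELL_E / ADC_LEVELS
print(round(e_per_adu_gain0, 2))          # -> 3.66, i.e. roughly 3.7 e-/ADU

# ZWO's gain setting is in 0.1 dB steps, so a setting of 300 = 30 dB:
e_per_adu_gain300 = e_per_adu_gain0 / 10 ** (30 / 20)
print(round(e_per_adu_gain300, 2))        # -> 0.12 e-/ADU
print(round(e_per_adu_gain300 * ADC_LEVELS))   # -> 474 e- before clipping
```

which lines up with the “around 450 electrons to saturate” figure.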
As to the offset: at unity gain (and this works on both the ASI183 and the ASI294), an offset of 10 works just fine. To see whether your camera is set up OK, take a 10 s dark with your normal gain and offset settings and look at the histogram. If it is clipped on the left, the offset is too low. If there is a big gap between the left side of the bell curve and the y-axis, it is too high, so adjust the offset until the left-hand tail sits just above zero. If you don’t, your darks will be clipped, and that could adversely impact your final images.
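The “inspect a short dark’s histogram” check is easy to automate; here is a sketch using numpy on simulated frames (the pedestal and read-noise numbers are invented for the demo, and the 0.1% threshold is an arbitrary choice):

```python
import numpy as np

def offset_looks_ok(dark, clip_frac=0.001):
    """Return False if the histogram is clipped at zero, i.e. more
    than clip_frac of the pixels read exactly 0 ADU."""
    dark = np.asarray(dark)
    return np.count_nonzero(dark == 0) / dark.size <= clip_frac

rng = np.random.default_rng(0)

# Healthy dark: 160 ADU pedestal, ~15 ADU read noise -> tail clear of 0.
good = rng.normal(160, 15, (100, 100)).clip(0, 4095).round()
print(offset_looks_ok(good))    # -> True

# Offset too low: 5 ADU pedestal -> a big chunk of pixels clip at 0.
bad = rng.normal(5, 15, (100, 100)).clip(0, 4095).round()
print(offset_looks_ok(bad))     # -> False
```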
CCDs don’t have these complications, but then CCDs have other problems like high read noise compared to modern CMOS astro cameras.


Viewing my 300 second dark frame in PixInsight’s Histogram, I could see a narrow spike of ADU values well offset from 0. Using the camera setup function, I could see the factory offset was 153. Playing with the offset number yielded good results at an offset of 40 with the gain set at 0.

Doing more research, I found that other imagers using the QHY163M were also using an offset of 40 at gain 0. There were some recommendations for using gain 20 with an offset of 80. So, it seems like I will need to do some testing with actual targets to see what works best with my setup. To get 1 electron = 1 ADU, I would need to use a gain of about 80 with an offset of 200. This setting will certainly get tested.

Just an FYI for other QHY users – when I changed the offset using the camera setup dialog, the new offset value only went into effect after disconnecting from the camera and re-connecting.



Setting the offset of a CMOS camera in an ASCOM based imaging application is only possible if the imaging application supports ASCOM 6.5 AND the camera vendor has released v3 ASCOM drivers. So, has any camera vendor released v3 drivers?


ZWO has a v3 driver, but SGPro still doesn’t seem to handle offset properly yet. You seem to be able to set the offset in the event settings, but the value changes itself randomly, and the offset doesn’t get reported when used as part of the filename. It clearly is possible, since other packages manage it.

This is currently supported in the 4.0 beta

I’m using the 4.0 beta and I’m afraid I can’t get it to work properly. I set the offset and gain in the driver, as I trust that, and I include offset and gain in the file name. Gain appears correctly, but offset shows up as “NA”. Setting it on the Camera tab has no discernible effect, since offset is still not reported in the file name. Behaviour in the event settings is even odder: once the camera is connected, it isn’t possible to change the gain and offset numbers (they are greyed out); trying without the camera connected lets you set a gain and offset, but the offset value changes randomly.

I am fixing up some issues where the new offset implementation can misbehave due to inconsistencies between cameras (and just some general SGPro cleanup). All of these issues are pretty well cleaned up now, but I need to test a bit more with other cameras.

Don’t get me wrong, I’m not meaning to be over-critical here, what I’m pointing out are quite minor issues in terms of the way I use SGPro. I have looked at other software with a more ”modern” interface, but for me, nothing beats SGPro for ease of use and reliability. We get so few clear nights as it is, I’m looking for software that maximises my acquisition time, and this is it!

Some more testing with my QHY163M has provided clearer insight into the relationship between gain and offset on a CMOS camera. At a gain setting of 0, the full well depth is 20,000 electrons, which seems like a good choice when you want to avoid saturating bright stars. At a gain setting of 120, the well depth is 4096 electrons, so there is a one-to-one conversion with the 12-bit ADC. A gain setting of 120 would therefore seem to be the match for imaging faint nebulae while ignoring star saturation. The matching offset values vary with both gain setting and binning.
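Plugging the well depths quoted above into the same arithmetic makes the trade-off explicit:

```python
# Dynamic-range trade-off on the QHY163M's 12-bit ADC (figures from the post).
ADC_LEVELS = 2 ** 12                  # 4096 counts

# Gain 0: ~20,000 e- full well spread across 4096 counts.
print(round(20000 / ADC_LEVELS, 2))   # -> 4.88 e-/ADU: deep wells, coarse steps

# Gain 120: well depth matches the ADC range, so 1 e- = 1 ADU (unity gain).
print(4096 / ADC_LEVELS)              # -> 1.0 e-/ADU: fine steps, shallow wells
```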

The puzzling issue is the ADU values between bias frames and dark frames. I am getting higher ADU counts from a bias frame than I get from a 300 second dark frame. Repeated testing shows the same result. This is from doing frame analysis in PixInsight.

Anyone have some idea why a dark frame would have lower ADU values than a bias frame (both taken at -10C)? Equally strange: bias frames download in slightly over 2 seconds, but the dark frames download in less than 0.5 seconds.


Perhaps light leakage in case of the bias frames?

That doesn’t make sense. It sounds as if the bias frames were captured in 1x1 and the dark frames in 2x2 binning.



I have the camera on my desk for this testing, protected from light leaks, with binning set the same. I will run new tests to see if I can identify some setup issue. I know that CMOS cameras have very low dark current and some imagers don’t even shoot dark frames with them, but I don’t see how a dark frame could have lower ADU values than a bias frame. The bias frames look like what you expect to see in a bias frame, and the darks look OK as well; that is, some hot pixels showing.


I don’t have any experience of the QHY163; my CMOS cameras are the ASI294 and ASI183. But certainly with the ASI294, I was getting odd results and poorly calibrating flat frames when the exposure was too short. For my 294, I set my flat panel brightness so that exposures are 3 seconds, and that works fine. As I understand it, with very short exposures some cameras start producing odd gradients in the frames that render them useless. I think I’ve seen, specifically for the QHY163 and the ASI1600, that bias frames should be shot with at least 0.3 second exposures. One recommendation for CMOS cameras (from Jon Rista and others) is to aim for dark frames of the same exposure as your lights; then, in PixInsight for example, you don’t use bias frames and certainly don’t attempt to optimise the dark frames. So I have a library of 30s, 60s, 120s, 180s, 300s and 600s dark masters that I match up to my light frames, meaning my light exposures have to be one of these numbers.
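That “match darks to lights exactly, never optimise” workflow amounts to a simple lookup; a sketch (the library exposures are the ones listed above; the filename pattern is hypothetical):

```python
# Master-dark library keyed by exposure time, as described above.
DARK_LIBRARY_S = (30, 60, 120, 180, 300, 600)   # exposures shot in advance

def master_dark_for(light_exposure_s):
    """Return the matching master dark, refusing to 'optimise' a mismatch."""
    if light_exposure_s not in DARK_LIBRARY_S:
        raise ValueError(f"no {light_exposure_s} s master dark: "
                         "re-shoot darks or change the light exposure")
    return f"master_dark_{light_exposure_s}s.xisf"   # hypothetical filename

print(master_dark_for(180))   # -> master_dark_180s.xisf
```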