Increasing dynamic range by combining frames

When the camera's sensor finishes the exposure, the analog-to-digital (A/D) converter measures the amount of photons captured in each pixel and converts that measurement into an integer.
The range of values this integer can take depends on the number of bits of the A/D converter.
Webcams have 8-bit A/D converters, so the range is from 0 to 255. This makes the dynamic range of webcams quite low, so it is worth trying to improve it by combining many frames.
This raises the question of whether such a combination can actually reconstruct the original signal with a better dynamic range than each individual frame, or whether it merely improves the signal-to-noise ratio.
What follows is my analysis of this subject.

The input to the A/D converter is a signal shaped by the emission of the object, its passage through the atmosphere, sky tracking and optics quality, thermal noise in the sensor, and electronic noise.
As a result, the signal at a given pixel of the image is not a fixed value but one that changes every time you repeat the exposure, i.e. it is a random variable. The "true" value to be measured is the mean of that random variable.
When the A/D converter measures the signal, its value is rounded to the nearest integer, so the probability of obtaining the integer n, P(n), is equal to the probability of the signal having a value between n-0.5 and n+0.5.
By the law of large numbers, when many values are averaged, the resulting variable has the same expected value (or mean) as the averaged variable and a much smaller variance (the original variance divided by the number of averaged values). Thus, by averaging many values, there is a very high probability of obtaining a result very close to the mean of the rounded values.
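This rounding probability can be computed directly from the cumulative distribution function of the signal. A minimal sketch, assuming (as below) a Gaussian signal; the function names are my own, not from any particular library:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_of_n(n, mu, sigma):
    """Probability that a Gaussian signal with mean mu and standard
    deviation sigma is rounded by the A/D converter to the integer n,
    i.e. P(n - 0.5 <= X < n + 0.5)."""
    return phi((n + 0.5 - mu) / sigma) - phi((n - 0.5 - mu) / sigma)

# Example: a signal with "true value" 100.3 and sigma 0.5 is spread
# over several neighbouring integer output values
for n in range(98, 103):
    print(n, round(p_of_n(n, 100.3, 0.5), 4))
```

With a small sigma almost all the probability concentrates on the single nearest integer, which is the case examined below.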
Because we lack information about the shape of the probability density function of the original signal, I assumed that it is a normal (Gaussian) distribution centred on the "true value" with a certain variance. Based on this assumption I calculated the mean of the rounded variable (which is the sum over all possible n of n*P(n)) as a function of the "true value" of the original signal and its variance. That gave me the following graph:
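The quantity plotted in the graph can be reproduced numerically. This is a sketch under the same Gaussian assumption; the truncation window and function names are my own choices:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_of_rounded(mu, sigma, width=8):
    """E[round(X)] = sum over n of n * P(n), for a Gaussian X with
    mean mu and standard deviation sigma. The sum is truncated to
    integers within `width` units of mu, where the tails are negligible."""
    half = width * max(sigma, 1.0)
    lo = int(math.floor(mu - half))
    hi = int(math.ceil(mu + half))
    total = 0.0
    for n in range(lo, hi + 1):
        p = phi((n + 0.5 - mu) / sigma) - phi((n - 0.5 - mu) / sigma)
        total += n * p
    return total

# With sigma = 0.5 the mean of the rounded variable tracks the true mean:
print(mean_of_rounded(100.3, 0.5))   # close to 100.3
# With sigma = 0.05 it sticks to the nearest integer:
print(mean_of_rounded(100.3, 0.05))  # close to 100.0
```

Sweeping mu over a unit interval for several values of sigma reproduces the transition from staircase to identity described below.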

As the graph shows, for standard deviations of about 0.5 units or more the function is nearly the identity, i.e. the mean of the rounded variable equals the mean of the original variable, and thus by averaging many rounded values we can reproduce the original variable at any dynamic range, provided enough values are averaged.
At the other extreme, for standard deviations below 0.05 units the function is almost equal to the rounded value, so the dynamic range cannot be increased no matter how many values are averaged.
For standard deviations between 0.05 and 0.5, averaging many rounded values increases the dynamic range only partially.
Ultimately, everything depends on the variance of the original variable, which in turn depends on fluctuations in the amount of light emitted by the object, the atmospheric seeing, and the noise of the CCD.
What remains is to determine the typical size of this noise for a given combination of CCD, telescope, sky, and object.
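The three regimes can be checked with a toy simulation of frame stacking: draw many Gaussian samples, round each one (the A/D conversion), and average the results. This is an illustrative sketch with made-up numbers, not a model of any specific camera:

```python
import random

def stacked_estimate(mu, sigma, frames=100000, seed=1):
    """Average `frames` rounded samples of a Gaussian signal with
    mean mu and standard deviation sigma -- a toy model of stacking
    many 8-bit frames of the same pixel."""
    rng = random.Random(seed)
    total = 0
    for _ in range(frames):
        total += round(rng.gauss(mu, sigma))
    return total / frames

mu = 100.3  # a hypothetical "true value" between two integer levels
for sigma in (0.5, 0.2, 0.05):
    print(sigma, stacked_estimate(mu, sigma))
```

With sigma = 0.5 the average lands very close to 100.3; with sigma = 0.05 it stays pinned at 100.0 however many frames are stacked; sigma = 0.2 recovers only part of the fractional value.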
Note that there is another effect that also introduces "noise", in the sense that it increases the variance of the original signal being rounded. When we stack images, we average values that all correspond to one "point" of the object, but due to tracking errors that point falls on different sensor pixels in each frame; pixel-to-pixel variations in photon collection efficiency and in its measurement therefore increase the variance of the signal we are rounding and averaging. This probably also helps push the standard deviation above 0.5.

Monitor calibration

Adjust your monitor so that you can see all the grey boxes.