Re: AP155 4 OT


Roland Christen
 

I guess you can argue in all directions on this subject. My initial
thought remains the same, regardless of drizzling and deconvolution:
if you match the size of the pixel to the size of the Airy disc, you
will get 90% of the performance of your system. If you undersample,
you are throwing away resolution that you cannot get back, even with
the fanciest algorithms. Most deconvolution processes give you
pseudo-resolution, with telltale "stringing" of the fainter stars. I
find that objectionable, but then beauty is in the eye .....
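As a rough sketch of the matching rule of thumb above (my assumptions, not stated in the thread: "matching" means one pixel per Airy disc diameter, and a wavelength of 550 nm near the visual peak):

```python
# Sketch of the pixel-to-Airy-disc matching rule of thumb.
# Assumptions (mine, not from the thread): "matching" means one pixel
# per Airy disc diameter, and lambda = 550 nm (visual peak).

def airy_disc_diameter_um(focal_ratio, wavelength_nm=550.0):
    """Linear diameter of the Airy disc at the focal plane, in microns:
    d = 2.44 * lambda * N, with lambda in microns and N the focal ratio."""
    return 2.44 * (wavelength_nm / 1000.0) * focal_ratio

for n in (2.8, 5.0, 7.0):
    print(f"f/{n}: matched pixel ~ {airy_disc_diameter_um(n):.2f} um")
# f/2.8 gives ~3.76 um, consistent with the ~3.75 micron figure quoted
# below for the f/2.8 Epsilon.
```

Note the diameter scales only with focal ratio and wavelength, which is why faster systems want smaller pixels.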

Rolando

--- In ap-gto@..., "Joseph M Zawodny" <jmzawodny@...> wrote:
Well, it depends on what you are trying to optimize, as well as how
well the optical system performs. Take the Tak Epsilon 180 I
mentioned: the f/2.8 speed should match well with a ~3.75 micron
pixel size if you were going to take a small number of long exposures
and your goal was to maintain maximum resolution. But Tak advertises
a uniform 10 micron spot size across the full field, which (depending
on the details of the spot or MTF, really) is probably better matched
to a 9 micron pixel if the quantity you are optimizing is resolution.

Perhaps a better question is: who would buy that scope, and what are
they trying to optimize? My guess is that they are going for large
diffuse objects in a large field. There, the f/2.8 will efficiently
gather the diffuse glow relative to the point stellar sources.

Another aspect of all of this that has been alluded to but not really
focused on is some of the newer resolution-enhancing techniques
(drizzling, for example). With these techniques you oversample the
optical resolution at high SNR and employ a deconvolution algorithm
to regain the resolution otherwise lost to the inherent pixelization
of digital imagery. It is a lot of work and requires good knowledge
of the optical system to extract the maximum information, but it does
work.

There are a lot of variables to consider when optimizing the imaging
system, and they go beyond hardware to include approach and
technique. It ultimately boils down to how much data you have (or can
take), how good the data is (sampling and SNR), and how well you
understand it.
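A minimal sketch of the grid-resampling step behind drizzling (my assumptions: sub-pixel offsets are already known, and a point "drop" kernel is used, i.e. pixfrac approaching zero; real drizzle shrinks and footprints each input pixel, and any deconvolution is a separate step):

```python
# Minimal drizzle-style shift-and-add sketch (pure Python).
# Assumptions: frames share one size, offsets are known sub-pixel
# shifts in input-pixel units, and each input pixel is dropped as a
# point onto a finer output grid (pixfrac -> 0).

def drizzle(frames, offsets, scale=2):
    """frames: list of 2-D lists; offsets: per-frame (dy, dx) shifts.
    Returns a weight-normalized image on a grid `scale`x finer."""
    h, w = len(frames[0]), len(frames[0][0])
    H, W = h * scale, w * scale
    flux = [[0.0] * W for _ in range(H)]
    weight = [[0.0] * W for _ in range(H)]
    for frame, (dy, dx) in zip(frames, offsets):
        for y in range(h):
            for x in range(w):
                # nearest fine-grid cell to this input pixel's centre
                Y = int(round((y + dy) * scale))
                X = int(round((x + dx) * scale))
                if 0 <= Y < H and 0 <= X < W:
                    flux[Y][X] += frame[y][x]
                    weight[Y][X] += 1.0
    return [[flux[j][i] / weight[j][i] if weight[j][i] else 0.0
             for i in range(W)] for j in range(H)]
```

With two frames offset by half a pixel, alternating fine-grid cells get filled; many dithered frames with varied offsets fill in the rest, which is how the sub-pixel information is recovered.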

As for the SXVF-M8C, it had good geometrical efficiency (high fill
factor), and its QE was typical for front-side-illuminated detectors.
It had a small full well, but a high-speed interface that made it
suitable for stacking lots of images. I'm still waiting for SBIG to
get past the USB 1.1 interface (it looks like that is in the works),
but whether their older models will be upgraded to at least USB 2,
who knows.

Joe
