APPM data meaning


Sam
 

Hi,

 

I am trying to understand the meaning of the data for the pointing model - does Ray or anyone else have a document that explains the data from a pointing model?

 

I am fascinated by how the pointing model works and what it does, but would like to understand the results better.

 

I ran a 200-point model last weekend, and the RMS pointing error is 8 arc-seconds East and 7 arc-seconds West (see image). That sounds good to me, but I would like to learn more about the meaning of these numbers as well as the data in the other columns.
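
(In case it helps frame my question: my rough understanding is that the RMS and range numbers just summarize the individual plate-solve residuals, roughly like the little Python sketch below - that is only my own guess at the arithmetic, not APPM's actual code, and the residual values are made up.)

    import math

    # Made-up per-point pointing residuals (arc-seconds) for one pier side.
    residuals = [5.2, 9.1, 7.8, 6.4, 11.0, 8.3]

    rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
    spread = max(residuals) - min(residuals)

    print(f"RMS pointing error: {rms:.1f} arc-sec")
    print(f"Range (max - min):  {spread:.1f} arc-sec")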

 

I appreciate your help.


Sam


Sam
 

Hi

Someone pointed out that the APCC help files have some descriptions of the meaning of certain APPM model variables (see below).
However, it would be interesting to know what should be expected for a mount in the AP1100 class. For instance, is an RMS of 8 arc-sec with a range of 20 arc-sec a good result for such a mount?

I believe these questions would come to mind for anyone after performing a pointing model, so it would be appreciated if anyone can share their experience. It would also be valuable if anyone knows of any further explanations or documents on this topic.

Thanks
Sam 

In the upper left section is the Model Properties Group Box. These are the most important model terms. You can check/uncheck them to see how well the model fits the data by looking at how the average and maximum errors (and graph points) change. Usually you will want all terms selected.
NOTE: in the terms, the Index Dec Error, Non-perp (Dec. OTA), and Flex Cantilever Axis usually have inverted signs between the axes. Also, the quality of the Polar Alignment numbers can be judged by how well they match in each of the dual models. If they are close, you can be confident that the numbers are modeled well (the Azimuth is most important).
The last large section in the upper right, the table, simply shows the raw data points collected for the model. You can uncheck specific data points if they seem to be bad data points.
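
(My own illustration of the "how well they match in each of the dual models" check described above - a few lines of Python with made-up numbers, not anything taken from APCC:)

    # Made-up polar alignment terms (arc-seconds) reported by the two
    # (East/West) models; this is only my own illustration.
    east_model = {"azimuth": 42.0, "altitude": -15.0}
    west_model = {"azimuth": 45.0, "altitude": -11.0}

    for term in ("azimuth", "altitude"):
        diff = abs(east_model[term] - west_model[term])
        print(f"{term}: East={east_model[term]:.1f}  West={west_model[term]:.1f}  "
              f"difference={diff:.1f} arc-sec")
    # If the two sets of numbers agree closely (especially azimuth), the polar
    # alignment is probably modeled well.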


Ray Gralak
 

Hi Sam,

> However, it would be interesting to know what should be expected for a mount in the AP1100 class. For
> instance, is an RMS of 8 arc-sec with a range of 20 arc-sec a good result for such a mount?
Besides polar alignment, it's not the mount contributing to the pointing terms, but the telescope that your mount is carrying. So, you can't really state an expected RMS number for the mount alone, as two different telescopes can produce completely different pointing terms.

However, in general, refractors with solid connections and no dangling cables to get caught will produce the best results. Telescopes with mirrors that can randomly tilt/move, like SCTs, will usually have the worst performance.

-Ray



Sam
 

Thanks, Ray - that makes good sense.

Sam 


Andrew J
 

@Sam. I would like to better understand this as well. Can you share screenshots of the check boxes you are talking about? I don't recall seeing check boxes in APPM.


Andrew J
 

@Sam. Never mind. Your original screenshots didn't come through on the email; I see you did post them in the online groups.io posting. I have to admit I have no idea what these different check boxes do. I typically just leave them all checked.


Sam
 

Hi Andrew,

Yes, exactly - each checked box corresponds to a variable that has been solved by APPM and is specific to the mount/scope configuration.
There is limited information in the APCC help files (as per my earlier email).
However, most users will surely want to better understand the data (i.e., what each variable means, what a normal range is, etc.), so adding more complete descriptions would be useful.

Regards
Sam 


Ray Gralak
 

Hi Sam,

The pointing terms are the best-fit values for various mechanical factors. I said "best-fit" because they may or may not accurately represent the mechanical factor. So, trying to attach a detailed physical reason to each term may lead someone to believe that something needs fixing when nothing does. The model is there to compensate for those factors. A more important factor is repeatability. For example, if there is a lot of randomness in pointing because something is moving around (e.g., a primary mirror in a large scope), then the model will not be as effective.

The only exception may be the polar alignment terms, which depending on their magnitude, may introduce field rotation.
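
To illustrate the repeatability point with a toy example (just a simplified Python sketch, not how APPM actually computes anything): a repeatable offset can be compensated for, but random scatter remains no matter what correction is applied.

    import random
    import statistics

    random.seed(1)

    # Toy model: pointing errors (arc-sec) at one sky position made of a
    # repeatable 12 arc-sec offset plus random scatter of a given size.
    def observed_errors(scatter_arcsec, n=20):
        return [12.0 + random.gauss(0.0, scatter_arcsec) for _ in range(n)]

    for scatter in (1.0, 10.0):
        errors = observed_errors(scatter)
        correction = statistics.mean(errors)           # the repeatable part a model can absorb
        leftover = [e - correction for e in errors]    # what no correction can remove
        print(f"random scatter = {scatter:.0f} arc-sec -> "
              f"RMS left after correction = {statistics.pstdev(leftover):.1f} arc-sec")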

-Ray



Luca Marinelli
 

Hi Ray,

I think the naming of the correction terms is self-explanatory. What may be confusing are the units. Are the correction terms expressed in arcsec or different units?

Thanks,

Luca



Sam
 

Thank you for the explanation, Ray - it is understood that each scope/mount setup is unique and may be affected by a number of factors in different ways, so it is good to know that the model tries to account for these variables through a curve-fit approach and shows the results for each of these variables in the model (left of screen).

I have a question regarding your comment "A more important factor is repeatability".

Am I correct to understand that the model "bulls-eye" charts provide an indication of error and repeatability? (Since a large number of plate solves are done within close proximity.)
For instance, the charts I got show an avg error of 8 arc-sec and a range of 20 arc-sec - are these in the right ballpark, or should they be better?

If the above is incorrect, how would one get a measure of "repeatability"?

thanks
sam




ap@CaptivePhotons.com
 

On Wed, Jan 26, 2022 at 09:50 AM, Sam wrote:
> If the above is incorrect, how would one get a measure of "repeatability"?
There is a "verify model" run you can do, which runs a new model (does not need to be same point count) using the currently loaded model, so you can see how close the various slews come to the expected point.  That is probably a better indication of repeatability.  After all a telescope that (for example) has a pretty big but random mirror flop when it does the flip during the first run, may yield a good fit (if it doesn't move around after the flip), but if it flops differently on every flip it will not be repeatable. 

Linwood


Ray Gralak
 

Hi Sam,

Thank you for the explanation Ray - it is understood that each scope / mount setup is unique and may be
affected by a number of factors in different ways, so it is good to know that the model tries to account for
these variables through a curve-fit approach and shows the results for each of these variables in the model
(left of screen)
The all-sky model does not use a curve-fit approach. To do that it would have to know the type of curve being fitted, which can be quite complex and vary between setups. It uses another mathematical approach.

> If the above is incorrect, how would one get a measure of "repeatability"?
The best way is an empirical approach in APPM by using the "Model 5x and Park" option in the "After Complete" dropdown list. This will perform five mapping runs in a row, and from that you can compare the errors between the runs. If the setup is solid, the RA/Dec pointing error for each run will be similar.
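
For example, the comparison might look like this (the numbers below are made up for illustration, not actual APPM output):

    # Hypothetical per-run RMS pointing errors (arc-sec) from five mapping runs.
    runs = {
        "run 1": (7.9, 8.3),   # (RA RMS, Dec RMS)
        "run 2": (8.1, 8.0),
        "run 3": (7.7, 8.6),
        "run 4": (8.4, 8.1),
        "run 5": (8.0, 8.2),
    }

    ra_values = [ra for ra, _ in runs.values()]
    dec_values = [dec for _, dec in runs.values()]

    # A small spread from run to run suggests a solid, repeatable setup;
    # large swings suggest something is moving (mirror flop, cables, flexure).
    print(f"RA RMS spread across runs:  {max(ra_values) - min(ra_values):.1f} arc-sec")
    print(f"Dec RMS spread across runs: {max(dec_values) - min(dec_values):.1f} arc-sec")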

-Ray



Ray Gralak
 

Hi Luca,

 

> I think the naming of the correction terms is self-explanatory. What may be confusing are the units. Are the
> correction terms expressed in arcsec or different units?

 

Yes, the units are all in arc-seconds. It shows the units when you open the model, which I have identified in this screenshot:

 

 

-Ray