Re: AP1100AE needs exorcism (DEC turned all the way around like Linda Blair)

Ray Gralak

Hi Linwood,

> I am a bit puzzled by those delays, as the amount of time referenced (about 7-8 seconds) does not seem to be reflected in the time stamps on those lines, or even relative to the 'giving up' line.
Messages and responses are all queue-driven within APCC. The time values are measured from the time a response was returned from the mount until it is dequeued and processed by another thread in APCC. If another application is using significant CPU time, that thread may not get a chance to process the responses promptly.
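
To illustrate the mechanism (a minimal sketch in Python with made-up names, not APCC's actual code): if each response is timestamped when it arrives and a separate worker thread dequeues and processes it, the measured "delay" is queue wait time, so a CPU-starved worker can report multi-second delays even though the mount answered immediately.

import queue
import threading
import time

# Responses are timestamped on arrival (producer side), then handled by a
# separate worker thread. The reported delay is arrival-to-processing time.
responses = queue.Queue()

def on_response_from_mount(payload):
    responses.put((time.monotonic(), payload))

def worker():
    while True:
        arrived, payload = responses.get()
        delay = time.monotonic() - arrived
        if delay > 5.0:
            # A starved worker would log something like the 7-8 second delays
            # seen here, even though the mount itself responded right away.
            print(f"response processed {delay:.1f}s after arrival")
        responses.task_done()

threading.Thread(target=worker, daemon=True).start()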

> But regardless... I think this shows APCC did the runaway due to the model name, and not due to CPU competition?
Not really the model name, but APCC should have disabled the model when it could not load the PNT file. I'm still looking into the reason for the crazy coordinates that were used when this situation occurred. Can you post a screenshot of the pointing terms when APCC is in this state?
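
A minimal sketch (in Python, with illustrative names, not APCC's real API) of the fail-safe described above: if the .pnt file cannot be loaded, corrections are switched off rather than left enabled with empty terms.

from pathlib import Path

# Illustrative only: if the model file is missing (e.g. it was renamed),
# disable pointing/tracking corrections instead of keeping them enabled
# with an all-zero set of terms.
class PointingModel:
    def __init__(self):
        self.terms = None
        self.corrections_enabled = False

    def load(self, pnt_file: str) -> bool:
        path = Path(pnt_file)
        if not path.is_file():
            self.terms = None
            self.corrections_enabled = False
            return False
        self.terms = path.read_text()  # stand-in for real .pnt parsing
        self.corrections_enabled = True
        return True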

-Ray

-----Original Message-----
From: main@ap-gto.groups.io [mailto:main@ap-gto.groups.io] On Behalf Of ap@...
Sent: Tuesday, January 18, 2022 1:33 PM
To: main@ap-gto.groups.io
Subject: Re: [ap-gto] AP1100AE needs exorcism (DEC turned all the way around like Linda Blair)

Thanks, Ray. Several things to digest here. And I have a different slant on the theories below after more
research.

First, I was indeed testing a new star-analysis program in NINA. It is quite compute-intensive. I am not quite sure why it would have been running, though, since it needed to slew before it could take the image to analyze. But it is quite a coincidence that this was the first time I used it live. So it seems the most likely culprit, but maybe not...

The model:

On 1/14 I built a new model of roughly 200 points. I assumed it had been loaded by default each night since then; however, I think I screwed up.

I renamed the model (since "ApPointData-2022-01-14-183708.pnt" is not exactly meaningful and I have two OTAs). I did the rename that evening while APCC was still running, assuming the model was already loaded (indeed, I reviewed a bunch of target/corrected slews from that night, all good).

The 17th was the next night I imaged, and that is when the runaway happened. APCC tried to load the file name by which it knew the model and failed (because I had renamed it). I should have manually loaded and activated it, but it had been 3 days and my attention span is about 3 minutes, so I did not.

My further guess is that, having failed to load it, APCC somehow proceeded to build a garbage model instead of turning the model off altogether. I reproduced it today in daylight, and there is no indication of failure in APCC itself; the model comes up with no data (all zeros). HOWEVER, pointing and tracking corrections are turned on, and when I did the same slew from NINA, DEC ran away again, and in the APCC log I see this:



0042829 2022-01-18 16:24:29.881: Debug, Pointing Corrector, Slew - Target Slew: RA = 02:01:07.10, Dec = +85*12:39.2, East Model = True

0042830 2022-01-18 16:24:29.886: Debug, Pointing Corrector, Slew - Corrected Slew: RA = 18:56:30.47, Dec = +511*15:55.4

So it is pretty easy to reproduce just by renaming the model file and forgetting to load it the next time you start APCC. APCC does something internally that results in wacky calculations in that situation.
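
Whatever the internal cause, the corrected declination in the log above is far outside the physically valid range, so a simple range check would catch it. Here is a minimal sketch (in Python, a hypothetical function, not APCC code):

# Illustrative guard, not APCC code: refuse a corrected slew target whose
# coordinates are outside the valid ranges, like the Dec = +511*15:55.4
# entry in the log above.
def validate_corrected_target(ra_hours, dec_degrees):
    if not 0.0 <= ra_hours < 24.0:
        raise ValueError(f"corrected RA out of range: {ra_hours}h")
    if not -90.0 <= dec_degrees <= 90.0:
        raise ValueError(f"corrected Dec out of range: {dec_degrees} deg")

try:
    # Approximate decimal values of the corrected slew from the log.
    validate_corrected_target(18.9418, 511.2654)
except ValueError as e:
    print(e)  # corrected Dec out of range: 511.2654 deg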

Without closing APCC, I loaded the right model, had NINA re-issue the slew, and it worked perfectly.

So I am going to speculate that whatever happened with the delayed poll response was unrelated to the wacky slew (since in my daylight test there were no competing CPU processes).

I am a bit puzzled by those delays, as the amount of time referenced (about 7-8 seconds) does not seem to be reflected in the time stamps on those lines, or even relative to the 'giving up' line.

But regardless... I think this shows APCC did the runaway due to the model name, and not due to CPU competition?

Linwood


