Re: USB (or Serial) vs Ethernet: Which is better or more reliable?
On Sep 16, 2018, at 2:15 PM, email@example.com [ap-gto] <firstname.lastname@example.org> wrote:

Ah, I might have missed the "active" part of your description. Good to know you're using one and that it's working for you.
Maybe "trust" is not the right word. I was referring to reliability between USB2 and Ethernet (at the hardware and driver level), that is, which of the two is more prone to disconnects.

Part of my day job sees me working on ethernet drivers, and with that work comes experience with how they interact with the higher stacks in the OS, such as the IP layers and whatnot. Ergo, I get to see how the sausage is made, and have some hand in making it. So I'm not anti-ethernet or anti-IP by any means. I'm just for using the right tool for the situation, and if there's no technical or reliability-based need to change what you're doing now, why introduce new or additional cabling, protocols, and such into the mix? Maintaining a single data cable to your mount from inside your house sounds like what anyone would want.
From a simplicity standpoint, USB-Serial is always going to have fewer variables and moving parts to contend with than ethernet+TCP/IP, and certainly fewer than a setup with wifi/802.11 thrown into the mix. I think USB gets a bad rap sometimes because people are unaware of the particulars governing cable length and/or bus power budgets; when those limits are pushed too far, the resulting failures give the impression that USB itself is unreliable. There's also a propensity to grab the cheapest gear one finds, which leads to additional problems. I believe there's extra suspicion about USB-Serial in particular due to counterfeit versions of the USB-Serial chips from FTDI and Prolific finding their way into the market. But, as the saying goes, you tend to get what you pay for, and putting just a little time into researching powered hubs and converters pays off with better reliability in the end.
But look at it this way: On that USB bus, you have two high-bandwidth devices in the form of a main and guide camera, plus a gaggle of low-bandwidth devices, such as your mount, focuser, and any filter wheel or communications-enabled dew heater, switches, or the like. Because of their nature, the cameras are always going to be the most sensitive to USB issues, which is why some camera manufacturers let you adjust the speed at which they initiate their sessions on the bus. With the mount, I'm assuming that you're using PHD or the like with your guidecam and are thus sending periodic pulseguide data to the mount over its USB connection, and the A-P ASCOM driver is always talking to the mount. If all of this is already reliable for you - even in the face of data coming off your cameras in bulk every so often - then chances are it'll remain so.
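To put rough numbers on "high" versus "low" bandwidth, here's a back-of-the-envelope sketch. Every figure in it is an illustrative assumption (a hypothetical 26 MP mono camera, one sub per minute, a guessed size for a pulseguide exchange), not a spec for any particular camera or mount:

```python
# Back-of-the-envelope bus-load comparison. All numbers are assumptions;
# swap in your own camera's resolution and your actual guide cadence.
frame_bytes = 6248 * 4176 * 2          # hypothetical 26 MP mono sensor, 16 bits/pixel
downloads_per_hour = 60                # one 60 s sub per minute
camera_bytes_per_hour = frame_bytes * downloads_per_hour

guide_cmd_bytes = 64                   # generous guess for one pulseguide exchange
guide_cmds_per_hour = 3600             # one correction per second
mount_bytes_per_hour = guide_cmd_bytes * guide_cmds_per_hour

ratio = camera_bytes_per_hour / mount_bytes_per_hour
print(f"camera: {camera_bytes_per_hour / 1e9:.1f} GB/h, "
      f"mount: {mount_bytes_per_hour / 1e6:.2f} MB/h, "
      f"ratio ~{ratio:,.0f}:1")
```

Even with generous assumptions for the mount traffic, the main camera moves roughly four orders of magnitude more data, which is why the cameras are the canary for bus trouble long before the mount is.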
Yes, you could always run an ethernet cable to the mount and talk to it over TCP or UDP, but it's another cable to run for what is probably no measurable advantage over what's working for you now, and from the looks of it, the mount would be the only user of that cable. Additionally, ethernet interfaces on consumer hardware are sometimes fraught with as many gremlins as poorly-sourced USB devices (such as no-name USB-ethernet dongles), and on top of that, you have the added complexity of an IP stack which, while largely reliable, isn't immune to configuration snafus if you're running multiple interfaces (say, wifi as your main interface and a hardwired ethernet line directly to the mount). There are also Windows' firewall and any firewalls your anti-virus software plops onto your box to contend with. You just need to be aware that there are more moving parts in that scenario and be prepared to debug them.
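If you do go the ethernet route, the first moving part to check when things stop responding is basic reachability to the mount's command port. A minimal sketch of that probe (the mount's address and port are whatever your setup uses; a throwaway local listener stands in for the mount here so the example is self-contained):

```python
# Hedged sketch: probe whether a TCP service (e.g. a mount's command port)
# is reachable. Host/port values are placeholders, not A-P specifics.
import socket

def mount_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

if __name__ == "__main__":
    # Stand-in "mount": a throwaway listener on an ephemeral localhost port.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    print(mount_reachable("127.0.0.1", port))   # listener up
    server.close()
    print(mount_reachable("127.0.0.1", port))   # listener gone
```

If a probe like this succeeds but your mount software still can't talk, that points the finger at the firewall/anti-virus layer rather than the cable or the IP configuration.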
A friend at a star party once forgot a USB A-to-micro cable that he needed for a piece of gear, so he went to the local 7-11 and got one of those 99-cent USB cables you often see next to the cash register. Spoiler alert: it didn't work.