It is hard to tell exactly what happened with enough certainty to know what to make of all this. It depends on how many things got fried when the alleged circuit board was dislodged, how easy or hard it is to replace them, and how easy or hard it was to bypass them.
One would imagine that such a critical component would have redundancy of some sort to fall back on, but stories of poorly designed redundancy are all over the place, including the routing of all three redundant hydraulic lines through the same conduit, which figured in bringing the DC-10 down in Chicago and in the THY crash in Paris. Most such bad designs result from attempts to save money, unless plain incompetence is involved. It is all about risk mitigation anyway, since nothing can be made completely fail-safe. Mr. Murphy is always alive. You can only make the failure vectors, at least the known ones, very, very unlikely. Of course the unknown ones are still out there.
I doubt we will ever know in enough detail what exactly happened, which components were in which cabinet, and how their power and other external connections were routed, for us to do even credible Monday morning quarterbacking.
Having run part of an IT center at Bell Labs, where we introduced then-new technologies like distributed workstations over the first commercial twisted-pair Ethernet and very early fiber channels, using redundant servers and connectivity in an operational telephony environment, I have some idea of how these things go. Mostly quite well. Sometimes not so much.