BigDumbDinosaur wrote:
Of course it isn’t, as there actually is no such a thing.
Well, I'd certainly disagree with that. While ChatGPT isn't, to my mind, AI (rightly or wrongly, I associate "intelligence" with the ability to reason), SHRDLU clearly exhibits the basic characteristics of intelligence.
Quote:
Quote:
...it's just rule-based control systems that are very predictable.
Evidently, it’s not as predictable as one might think, since in at least two cases, things happened that were unexpected.... In Sullenberger’s situation, in which he wanted to intentionally pitch up the aircraft to the point of incipient stall to lessen the severity of the impact with the water, the flight control computer refused to raise the elevators to the required degree—it wouldn’t let him intentionally stall the aircraft.
Really? I'm not an Airbus pilot, but my impression of Normal Law is that such behaviour would be perfectly predictable. Maybe not what you want in that particular situation, but predictable nonetheless.
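To make the predictability point concrete, here's a toy sketch of an angle-of-attack protection rule in the general spirit of what Normal Law does. Everything here (names, thresholds, the one-line control law) is invented for illustration; the real Airbus control laws are vastly more involved. The point is just that a rule like this is completely deterministic, even when it refuses the pilot's demand:

```python
# Hypothetical alpha-protection sketch. Constants are made up for
# illustration; they are not real Airbus values.
ALPHA_MAX = 17.5  # deg: assumed hard angle-of-attack limit

def commanded_alpha(current_alpha: float, pilot_demand: float) -> float:
    """Return the angle of attack the controller will actually target.

    The pilot's pitch-up demand passes through unchanged until the
    target would exceed ALPHA_MAX, at which point it is clamped: no
    amount of back-stick produces a higher commanded alpha.
    """
    target = current_alpha + pilot_demand
    return min(target, ALPHA_MAX)
```

Run it twice with the same inputs and you get the same answer every time: `commanded_alpha(14.0, 1.0)` gives 15.0 (demand honoured), while `commanded_alpha(16.0, 5.0)` gives 17.5 (demand clamped). "The computer wouldn't let him stall it" is exactly this rule doing what it was specified to do.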
Quote:
Autopilots in themselves have usually not been a problem—at least, back when they were electromechanical and consistent in behavior. Where the trouble started was when someone got the idea that a computer could take over that and other aspects of flight management.
Except that autopilot has always been a computer, though not always a digital one.
Quote:
Increasingly, that led to more judgment being off-loaded from pilots to machine, leading to a problem that commercial aviation is now encountering all-too-often.
Well, you may consider it to be "too often," and many would agree, but are you looking at the overall incident and accident rate, rather than cherry-picking anecdotes? The stats tell us that commercial aviation is safer than it's ever been. We can't prevent every problem, but trading a few accidents that did happen for hundreds of accidents that didn't, but would have under earlier control schemes, doesn't seem to me such a bad thing.
Quote:
The current generation of transport-rated pilots have become over-reliant on automation and increasingly do not exhibit the level of airmanship seen in past generations who routinely flew planes with only autopilots and yaw dampers (and no FBW) to offload some of the work. The result is mistakes can and do get made when the automation goes on the fritz.
This is true; it's a known problem that we can see we need to take steps to address. But again, "mistakes are being made" is the wrong thing to look at. The right thing to look at is, "how many mistakes are injuring and killing people now, and how many were injuring and killing people under the old scheme."
Quote:
As he has noted, automation has bred complacency in some airmen, and that is some day going to get people killed when a real emergency develops and the strong airmanship needed to save the day isn’t there.
Yes, very true. On the other hand, people have already been killed by strong airmen who made mistakes that these days automation catches and corrects.
Humans make errors, and "be a better pilot!" is not going to fix them all. All the evidence points to the current trade-off we're making being better than the previous one.