PostPosted: Fri May 12, 2023 3:36 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
ChatGPT 4 wrote:
[I'm] capable of understanding...language...

Well, nice to know that it still lies! (Sorry, "hallucinates.")

(Actually, I think it might be more accurate to describe ChatGPT's users as "hallucinating," since even people who know how it works, sometimes even its programmers, are fooled into believing that it understands things. The pathetic fallacy is...pathetic.)

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri May 12, 2023 4:27 pm 
Joined: Fri Apr 15, 2022 1:56 pm
Posts: 45
Location: San Antonio, TX, USA
Me: Please write a response to this forum post:
cjs wrote:
ChatGPT 4 wrote:
[I'm] capable of understanding...language...

Well, nice to know that it still lies! (Sorry, "hallucinates.")

(Actually, I think it might be more accurate to describe ChatGPT's users as "hallucinating," since even people who know how it works, sometimes even its programmers, are fooled into believing that it understands things. The pathetic fallacy is...pathetic.)
ChatGPT4: Hello there,

Thank you for your engagement and bringing up an interesting point. You're correct in that saying ChatGPT "understands" language might be misleading. In fact, what ChatGPT does is pattern recognition on a massive scale. It doesn't "understand" language in the way humans do, but rather identifies patterns in the data it has been trained on to generate relevant responses.

The illusion of understanding is created by its ability to generate coherent and contextually appropriate responses, but this doesn't indicate any form of true comprehension or consciousness. Instead, it is a testament to the sophistication of the machine learning algorithms at work and the vast amount of data it was trained on.

As for the term "hallucination", in the context of AI, it refers to the model generating outputs that seem plausible but are not grounded in reality. This can occur due to the inherent randomness in the model's design or the mismatch between the training data and the real world.

Your comment about users or programmers being "fooled" into believing that the AI understands things is a good reminder for us all. It's crucial to remember that even the most advanced AI, like ChatGPT, is just a tool that operates based on its programming and training data. It doesn't possess consciousness, beliefs, feelings, or an understanding of the world in the way humans do.

We appreciate your input and critical thinking in understanding the capabilities and limitations of AI technology.

Best,
ChatGPT
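
The "inherent randomness" it mentions is easy to make concrete. Here's a toy sketch in C of temperature sampling over next-token scores; the tokens and scores are invented and this is nothing like the real implementation, but it shows how sampling can produce plausible-but-wrong output:

Code:
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Toy temperature sampling over next-token scores ("logits").
       Higher temperature flattens the distribution, so less-likely
       continuations get picked more often -- one ingredient of
       plausible-but-wrong output. All numbers here are invented. */
    static int sample(const double *logits, int n, double temperature)
    {
        double p[16], sum = 0.0;
        for (int i = 0; i < n; i++) {
            p[i] = exp(logits[i] / temperature);
            sum += p[i];
        }
        double r = (double)rand() / RAND_MAX * sum;
        for (int i = 0; i < n; i++) {
            r -= p[i];
            if (r <= 0.0)
                return i;
        }
        return n - 1;
    }

    int main(void)
    {
        /* Invented continuations of "The 6502 was introduced in ..." */
        const char *tok[] = { "1975", "1976", "1977", "never" };
        double logits[]   = { 2.0, 1.2, 1.1, 0.3 };
        srand(1);
        for (int i = 0; i < 8; i++)
            printf("%s\n", tok[sample(logits, 4, 1.3)]);
        return 0;
    }

Run it a few times and it confidently prints wrong years alongside the right one, which is about as good a definition of "hallucination" as any.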


PostPosted: Thu Jun 22, 2023 3:20 am 
Joined: Fri Aug 30, 2002 1:09 am
Posts: 8432
Location: Southern California
Off the topic of ASCII-to-decimal conversion, but perhaps important to ChatGPT users, is this article:
"Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces"
https://thehackernews.com/2023/06/over- ... count.html
Be careful out there.

_________________
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?


PostPosted: Thu Jun 22, 2023 4:29 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
pjdennis wrote:
It's crucial to remember that even the most advanced AI, like ChatGPT, is just a tool that operates based on its programming and training data. It doesn't possess consciousness, beliefs, feelings, or an understanding of the world in the way humans do.

That is something some Airbus pilots have discovered the hard way. The introduction of artificial “intelligence” into airliner control systems has led to unfortunate incidents, one of which happened at an air show where the Airbus folks were doing a flight demo and the plane’s “intelligence” overrode the pilot’s control inputs. The flight ended with the plane plowing into some trees because the flight control computer wouldn’t relinquish control to the pilot, who was attempting a non-standard flight pattern.

Similarly, Captain Chesley Sullenberger noted that during his “miracle on the Hudson” forced ditching, the A320’s artificial “intelligence” interfered with his attempt to increase flare just as the plane touched the water. He was trying to soften the impact as much as possible by touching down tail-first, but the plane’s flight-control computer would not allow it, as it deemed the angle of attack excessive for the airspeed.


GARTHWILSON wrote:
"Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces"

Sounds as though ChatGPT isn’t all that smart if it so easily relinquished sensitive information. :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Thu Jun 22, 2023 7:59 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
The introduction of artificial “intelligence” to airliner control systems has led to unfortunate incidents....

This stuff isn't artificial intelligence; it's just rule-based control systems that are very predictable. It's just that what the systems do is complex, so you have to understand it. But it's a general design rule of aircraft that there's always a way for the pilot to override automation, except in cases where a pilot really can't do the job (such as an aircraft with negative stability, which would not be anything used in commercial air transport) or, these days, when the commanded changes would exceed airframe limits (i.e., it won't let you do things that make the wings fall off, though these protections may be removed if enough flight systems fail that it can no longer properly detect such commanded actions).
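
To make "rule-based and very predictable" concrete, here's a minimal sketch in C of the kind of envelope-protection rule I mean; the threshold, the sensor-failure bypass, and everything else are invented for illustration, and this is nobody's real flight-control code:

Code:
    #include <stdbool.h>
    #include <stdio.h>

    /* Toy envelope protection: same inputs, same output, every time. */
    #define AOA_LIMIT_DEG 15.0    /* invented threshold */

    typedef struct {
        double aoa_deg;       /* measured angle of attack */
        bool   sensors_valid; /* enough healthy systems to trust it? */
    } flight_state;

    /* Pitch command actually passed on, given the pilot's demand. */
    double pitch_command(double pilot_cmd, const flight_state *fs)
    {
        /* If the system can't trust its own measurements, the
           protection drops out and the pilot gets direct authority. */
        if (!fs->sensors_valid)
            return pilot_cmd;

        /* Otherwise, nose-up demands that would push AoA past the
           limit are clamped, regardless of what the pilot asks for. */
        if (fs->aoa_deg >= AOA_LIMIT_DEG && pilot_cmd > 0.0)
            return 0.0;

        return pilot_cmd;
    }

    int main(void)
    {
        flight_state fs = { 15.5, true };
        printf("%.1f\n", pitch_command(1.0, &fs)); /* prints 0.0: clamped */
        return 0;
    }

Nothing in there is mysterious; it's just that a real system has thousands of such rules interacting, which is why the training burden is on understanding them.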

I can't speak to the specific situations you mention without references, but I suspect if you read the NTSB (or similar authority) reports, you'll find that your view of what actually happened will drastically change from, "the aircraft forced the pilots to crash."

Remember, autopilot and suchlike isn't replacing pilots. It's just another control method to let a pilot fly an aircraft with less work. But of course you need to do more work in training to understand the automation.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Jun 23, 2023 8:29 am 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
cjs wrote:
This stuff isn't artificial intelligence...

Of course it isn’t, as there actually is no such thing.

Quote:
...it's just rule-based control systems that are very predictable.

Evidently, it’s not as predictable as one might think, since in at least two cases, things happened that were unexpected. In one case, the plane was flown into terrain by the “predictable, rule-based system” when it would not respond to control inputs from the pilot flying. In Sullenberger’s situation, in which he wanted to intentionally pitch up the aircraft to the point of incipient stall to lessen the severity of the impact with the water, the flight control computer refused to raise the elevators to the required degree—it wouldn’t let him stall the aircraft.

Quote:
It's just that you have to understand what the systems do which is complex.

You don’t need to patronize me on that—I likely know far more about aircraft control design than you’d suspect. :D

Quote:
But it's a general design rule of aircraft that there's always a way for the pilot to override automation...

There wasn’t in the case of the Airbus air show crash.

Software logic “convinced” the flight control computer the plane was configured to land and that a landing was about to occur. The aircraft would not respond to any control input that would put it into what the computer considered a hazardous attitude that might trigger a stall. With the plane at tree-top height and trimmed to land (full flaps and slats), the pilot’s attempts to initiate an emergency climb were rebuffed. The plane flew straight and level into the trees beyond the runway threshold with the engines running at about 90 percent power—sufficient to support a rate of climb that could have saved them.

Quote:
I can't speak to the specific situations you mention without references, but I suspect if you read the NTSB (or similar authority) reports, you'll find that your view of what actually happened will drastically change from, "the aircraft forced the pilots to crash."

It may surprise you to learn that not only do I read NTSB reports, I’m on a mailing list from which I automatically receive them. :D I’ve had an interest in air accident investigation since my Navy days, which were during the 1960s.

In the case of the air show incident (Air France flight 296Q), the crash was not in the USA (the crash site was at Habsheim, France, near the German border) and involved an Airbus A320, hence it was out of the NTSB’s direct purview (however, a US investigator was involved per international protocol, as the aircraft had CFM engines). I did read the English translation of the report published by France’s BEA; the report was thorough, although phrased in some ways that suggested some political influence may have been involved in the translation.

According to the BEA report, FDR data indicated the plane was flying at 30 feet AGL as it executed a low and slow pass down the runway. Airspeed was a few knots above stall. The plan was to then initiate a steep, full-power climb to demonstrate the plane’s performance during a go-around. The plane was supposed to be at 100 feet AGL during the pass, but the captain, flying an unstabilized visual approach, over-descended and evidently didn’t realize it until flying down the runway.

With the plane 70 feet lower than it should have been, and flying down the wrong runway (there’s a separate story behind that, which wasn’t widely reported), an immediate and extremely steep climb was needed to clear the trees up ahead. The F/O called for TO/GA power, the engines were immediately spooled up, and the captain pulled the side-stick well back to pitch up hard and climb out. However, the plane’s computer ignored his elevator input due to alpha-mode operation (meant to prevent a pitch-up that would trigger a stall), and the plane flew straight into the trees. In short, while the initial low pass was improperly executed (unstable approach, insufficient altitude, and lined up on the wrong runway), it was the &@*#@!% computer that sealed the plane’s fate by overriding the elevator input that was needed to get over the trees.

The BEA report made specific note of the alpha mode operation and the resulting aircraft behavior:

    “This above mentioned flight control mode, named Alpha Prot (high angle-of-attack protection) is a priority node [sic] as soon as angle of attack reaches 14.5°. It is not a degraded node and cannot be deactivated by the crew.” (emphasis added)

In other words, software overruled the captain’s control inputs and he didn’t have the option of defeating the “A-law” flight mode.
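
To sketch in rough C what a “priority node” like that amounts to (the 14.5° figure is from the BEA report; alpha max and the stick mapping are invented for illustration, and Airbus’ real control laws are vastly more involved):

Code:
    /* Sketch of a priority protection mode: once measured AoA reaches
       the Alpha Prot threshold, the sidestick no longer commands pitch
       directly.  Full back-stick asks for "alpha max" at most, never
       beyond it, and the crew cannot switch the mode off.  The 14.5
       degree threshold is from the BEA report; alpha max and the
       mapping are invented for illustration. */
    #define ALPHA_PROT_DEG 14.5
    #define ALPHA_MAX_DEG  17.5   /* invented */

    /* stick: -1.0 = full nose-down ... +1.0 = full nose-up */
    double target_aoa(double stick, double measured_aoa)
    {
        if (measured_aoa < ALPHA_PROT_DEG)
            return measured_aoa;   /* protection dormant; normal law */

        /* Priority mode: the pilot's demand is reinterpreted as a
           target AoA capped at alpha max, however hard he pulls. */
        double pull = (stick > 0.0) ? stick : 0.0;
        return ALPHA_PROT_DEG + pull * (ALPHA_MAX_DEG - ALPHA_PROT_DEG);
    }

Under a law like that, hauling the stick full aft at 15° AoA gets you a 17.5° target and nothing more, no matter how badly you need the nose up to clear the trees.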

Airbus is a big proponent of fly-by-wire (FBW—the A320 was the first commercial airliner to use it), which, in itself, is not necessarily bad (unless you’re a pilot in an FBW plane with electrical problems or buggy software :?). However, they got carried away and tightly bound their FBW system into the flight control software, taking away decision-making capabilities from the flight crew.

While Airbus and the French government did their best to deflect blame from the Airbus design, subsequent analysis strongly suggested the root cause of the crash was in the flight control software’s A-law configuration. Yes, the plane was flown into a compromised position by virtue of being too slow, too low, and on the wrong flight path. That the engines were at flight idle didn’t help, as it increased the lag from when TO/GA was initiated to when N1 speed was high enough to produce the needed thrust. However, it appears the situation would have been recoverable had A-law not overruled the pilot’s actions.

Quote:
Remember, autopilot and suchlike isn't replacing pilots. It's just another control method to let a pilot fly an aircraft with less work. But of course you need to do more work in training to understand the automation.

Autopilots in themselves have usually not been a problem—at least, back when they were electromechanical and consistent in behavior. Where the trouble started was when someone got the idea that a computer could take over that and other aspects of flight management. Increasingly, that led to more judgment being off-loaded from pilots to the machine, leading to a problem that commercial aviation is now encountering all too often.

The current generation of transport-rated pilots has become over-reliant on automation and increasingly does not exhibit the level of airmanship seen in past generations, who routinely flew planes with only autopilots and yaw dampers (and no FBW) to offload some of the work. The result is that mistakes can and do get made when the automation goes on the fritz.

A close friend who flies left seat for a major carrier, and is one of their senior check airmen, has mentioned several times that he has encountered pilots during check rides who do not have strong stick-and-rudder skills and have a tendency to “chase needles” when hand-flying. My friend blames that on what he calls excessive cockpit automation reliance—these airmen apparently aren’t doing enough hand flying to stay sharp.

During check rides, he’ll sometimes, without warning, trip the breaker powering the main flight control computer, thus killing autopilot, autothrottle and some other automation features (it does not disable basic flight instruments, e.g., altimeter and artificial horizon). He has been alarmed several times by pilots who had difficulty manually flying the plane and keeping it on course, at the prescribed altitude and at the prescribed airspeed.

As he has noted, automation has bred complacency in some airmen, and that is some day going to get people killed when a real emergency develops and the strong airmanship needed to save the day isn’t there.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jun 23, 2023 4:56 pm 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
Of course it isn’t, as there actually is no such a thing.

Well, I'd certainly disagree with that. While ChatGPT isn't, to my mind, AI (I, rightly or wrongly, associate "intelligence" with the ability to reason), SHRDLU clearly exhibits the basic characteristics of intelligence.

Quote:
Quote:
...it's just rule-based control systems that are very predictable.

Evidently, it’s not as predictable as one might think, since in at least two cases, things happened that were unexpected.... In Sullenberger’s situation, in which he wanted to intentionally pitch up the aircraft to the point of incipient stall to lessen the severity of the impact with the water, the flight control computer refused to raise the elevators to the required degree—it wouldn’t let him intentionally stall the aircraft.

Really? I'm not an Airbus pilot, but my impression of Normal Law is that such behaviour would be perfectly predictable. Maybe not what you want in that particular situation, but predictable nonetheless.

Quote:
Autopilots in themselves have usually not been a problem—at least, back when they were electromechanical and consistent in behavior. Where the trouble started was when someone got the idea that a computer could take over that and other aspects of flight management.

Except that an autopilot has always been a computer, though not always a digital one.
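
The core of an autopilot is just a feedback loop. Here's a minimal sketch in C of the sort of altitude-hold law the old electromechanical units computed with gyros, synchros, and amplifiers rather than software; the gains, limits, and names are invented:

Code:
    /* Proportional-integral altitude hold, digitally rendered.
       Electromechanical autopilots computed essentially this with
       analogue hardware.  Gains and limits are invented. */
    typedef struct {
        double kp, ki;    /* proportional and integral gains */
        double integral;  /* accumulated error */
    } alt_hold;

    /* Returns an elevator command in -1.0 .. +1.0, given target and
       measured altitude and the time step dt (seconds). */
    double alt_hold_step(alt_hold *c, double target_ft,
                         double actual_ft, double dt)
    {
        double err = target_ft - actual_ft;
        c->integral += err * dt;
        double cmd = c->kp * err + c->ki * c->integral;
        if (cmd > 1.0)  cmd = 1.0;   /* clamp to control limits */
        if (cmd < -1.0) cmd = -1.0;
        return cmd;
    }

Whether the integrator is a software variable or a mechanical drum, it's the same computation.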

Quote:
Increasingly, that led to more judgment being off-loaded from pilots to machine, leading to a problem that commercial aviation is now encountering all-too-often.

Well, you may consider it to be "too often," and many would agree, but are you looking at the overall incident and accident rate, rather than cherry-picking anecdotes? The stats tell us that commercial aviation is safer than it's ever been. We can't prevent every problem, but trading a few accidents that did happen for hundreds of accidents that didn't, but would have under earlier control schemes, doesn't seem to me such a bad thing.

Quote:
The current generation of transport-rated pilots have become over-reliant on automation and increasingly do not exhibit the level of airmanship seen in past generations who routinely flew planes with only autopilots and yaw dampers (and no FBW) to offload some of the work. The result is mistakes can and do get made when the automation goes on the fritz.

This is true; it's a known problem that we can see we need to take steps to address. But again, "mistakes are being made" is the wrong thing to look at. The right thing to look at is, "how many mistakes are injuring and killing people now, and how many were injuring and killing people under the old scheme."

Quote:
As he has noted, automation has bred complacency in some airmen, and that is some day going to get people killed when a real emergency develops and the strong airmanship needed to save the day isn’t there.

Yes, very true. On the other hand, people have already been killed by strong airmen who made mistakes that these days automation catches and corrects.

Humans make errors, and "be a better pilot!" is not going to fix them all. All the evidence points to the current trade-off we're making being better than the previous one.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Fri Jun 23, 2023 6:13 pm 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 683
Location: Potsdam, DE
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...

Neil


PostPosted: Fri Jun 23, 2023 7:09 pm 
Joined: Fri Apr 15, 2022 1:56 pm
Posts: 45
Location: San Antonio, TX, USA
barnacle wrote:
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...
Right, there will still be accidents, just different accidents than would have occurred without the automation. Even so, if and when we get to the point that autonomous vehicles are proven to be significantly safer on average, it will be hard to argue against wide adoption.


PostPosted: Fri Jun 23, 2023 7:46 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
pjdennis wrote:
barnacle wrote:
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...
Right, there will still be accidents, just different accidents than would have occurred without the automation. Even so, if and when we get to the point that autonomous vehicles are proven to be significantly safer on average, it will be hard to argue against wide adoption.

Agreed...until the automation goes haywire and runs the vehicle into the path of an express train. :D

I am going way out on a limb here and will say automation will never replace human judgment. Human beings (intelligent ones, that is) have the innate ability to rapidly size up an unexpected situation and devise a workable solution with only incomplete data available. We’re nowhere near that capability with machine “intelligence.”

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jun 23, 2023 7:53 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
barnacle wrote:
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...

Reminds me of the fatal accident Tesla had with one of its cars running on Autopilot, which couldn’t tell the white side of a truck’s trailer from the sky on a bright and sunny day. :lol: Word had it the truck’s driver was not amused. :shock:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Fri Jun 23, 2023 7:53 pm 
Joined: Thu Dec 11, 2008 1:28 pm
Posts: 10800
Location: England
We have gone utterly off-topic. There's usually a cultural difference between a specialist forum and general purpose social media.


PostPosted: Fri Jun 23, 2023 8:09 pm 
Joined: Thu May 28, 2009 9:46 pm
Posts: 8177
Location: Midwestern USA
BigEd wrote:
We have gone utterly off-topic.

...a not-uncommon occurrence around here. :D

Quote:
There's usually a cultural difference between a specialist forum and general purpose social media.

Well, the gist of this topic is AI and its foibles, especially as demonstrated by ChatGPT. Commentary has meandered from the original subject, but still is in the realm of AI’s limitations, I’d think. Oftentimes, anecdotes help to fill in the gaps.

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Sat Jun 24, 2023 1:14 am 
Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
BigDumbDinosaur wrote:
pjdennis wrote:
Even so, if and when we get to the point that autonomous vehicles are proven to be significantly safer on average, it will be hard to argue against wide adoption.

Agreed...until the automation goes haywire and runs the vehicle into the path of an express train. :D

Well, some people will find that a good argument against automation, but they're terrible engineers and should not be in charge of this stuff.

Basing one's approach to safety on emotionally resonant anecdotes literally kills people. Germany has done this over the past decade: after Fukushima, it decided to shut down all of its nuclear power plants. A good chunk of that generating capacity was replaced by coal, which is far more polluting, directly kills far more people per unit of energy generated even before you get to things like earlier deaths from pollution, and, ironically, releases more radiation into the environment per unit of energy generated, even counting the radiation released by nuclear plant accidents. More people will die because of this than if they'd continued their use of nuclear power. (But at least they'll have died in the "right way." Sigh.)

That said, we're a long way away from reliable fully autonomous cars, much less aircraft. Arguing against that one is easy because the technology's not there yet. (Consider that we haven't even really started in on fully automated aircraft taxi yet, a much easier job than either, and one that would save airlines a considerable amount of money.)

BigDumbDinosaur wrote:
I am going way out on a limb here and will say automation will never replace human judgment.

Well, your limb broke off long ago, because we've been replacing human judgement with automation for a long, long time now. I use it every day: like literally tens of millions of people around me, I rely on my rice cooker to cook my rice rather than monitoring it myself, and it works much better than if I did it. We've been doing that since the 1950s, and by the '80s it had virtually completely replaced human judgement about temperature settings.
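
(The entire bit of judgement being replaced is, roughly, a one-line rule: while there's liquid water in the pot, the bottom can't get much past 100 C; once it does, the rice is done. In C, with a threshold that's approximate and invented:)

Code:
    /* A basic rice cooker's whole "decision," as a rule.  While
       liquid water remains, the pot bottom sits near 100 C; once
       the water is absorbed or boiled off, the temperature climbs,
       so switch to keep-warm.  The threshold is approximate. */
    typedef enum { COOKING, KEEP_WARM } cooker_mode;

    cooker_mode cooker_step(double pot_temp_c)
    {
        return (pot_temp_c > 103.0) ? KEEP_WARM : COOKING;
    }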

In the realm of safety, automation is widely accepted, too. There are a huge number of examples, from anti-lock brakes to GFCIs, where we let automation completely take over because it indisputably keeps things a whole lot safer than letting humans do the job.

_________________
Curt J. Sampson - github.com/0cjs

