ChatGPT: ASCII to decimal conversion
Re: ChatGPT: ASCII to decimal conversion
ChatGPT 4 wrote:
[I'm] capable of understanding...language...
(Actually, I think it might be more accurate to describe ChatGPT's users as "hallucinating," since even people who know how it works, sometimes even its programmers, are fooled into believing that it understands things. The pathetic fallacy is...pathetic.)
Curt J. Sampson - github.com/0cjs
Re: ChatGPT: ASCII to decimal conversion
Me: Please write a response to this forum post:
Well, nice to know that it still lies! (Sorry, "hallucinates.")
(Actually, I think it might be more accurate to describe ChatGPT's users as "hallucinating," since even people who know how it works, sometimes even its programmers, are fooled into believing that it understands things. The pathetic fallacy is...pathetic.)
ChatGPT4: Hello there,
Thank you for your engagement and bringing up an interesting point. You're correct in that saying ChatGPT "understands" language might be misleading. In fact, what ChatGPT does is pattern recognition on a massive scale. It doesn't "understand" language in the way humans do, but rather identifies patterns in the data it has been trained on to generate relevant responses.
The illusion of understanding is created by its ability to generate coherent and contextually appropriate responses, but this doesn't indicate any form of true comprehension or consciousness. Instead, it is a testament to the sophistication of the machine learning algorithms at work and the vast amount of data it was trained on.
As for the term "hallucination", in the context of AI, it refers to the model generating outputs that seem plausible but are not grounded in reality. This can occur due to the inherent randomness in the model's design or the mismatch between the training data and the real world.
Your comment about users or programmers being "fooled" into believing that the AI understands things is a good reminder for us all. It's crucial to remember that even the most advanced AI, like ChatGPT, is just a tool that operates based on its programming and training data. It doesn't possess consciousness, beliefs, feelings, or an understanding of the world in the way humans do.
We appreciate your input and critical thinking in understanding the capabilities and limitations of AI technology.
Best,
ChatGPT
- GARTHWILSON
- Forum Moderator
- Posts: 8774
- Joined: 30 Aug 2002
- Location: Southern California
- Contact:
Re: ChatGPT: ASCII to decimal conversion
Off the topic of ASCII-to-decimal conversion, but perhaps important to ChatGPT users, is this article:
"Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces"
https://thehackernews.com/2023/06/over- ... count.html
Be careful out there.
http://WilsonMinesCo.com/ lots of 6502 resources
The "second front page" is http://wilsonminesco.com/links.html .
What's an additional VIA among friends, anyhow?
- BigDumbDinosaur
- Posts: 9428
- Joined: 28 May 2009
- Location: Midwestern USA (JB Pritzker’s dystopia)
- Contact:
Re: ChatGPT: ASCII to decimal conversion
pjdennis wrote:
It's crucial to remember that even the most advanced AI, like ChatGPT, is just a tool that operates based on its programming and training data. It doesn't possess consciousness, beliefs, feelings, or an understanding of the world in the way humans do.
Similarly, Captain Chesley Sullenberger noted that during his “miracle on the Hudson” forced ditching, the A320’s artificial “intelligence” interfered with his attempt to increase flare just as the plane touched the water. He was trying to soften the impact as much as possible by touching down tail-first, but the plane’s flight-control computer would not allow it, as it deemed the angle-of-attack excessive for the airspeed.
GARTHWILSON wrote:
"Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces"
x86? We ain't got no x86. We don't NEED no stinking x86!
Re: ChatGPT: ASCII to decimal conversion
BigDumbDinosaur wrote:
The introduction of artificial “intelligence” to airliner control systems has led to unfortunate incidents....
I can't speak to the specific situations you mention without references, but I suspect if you read the NTSB (or similar authority) reports, you'll find that your view of what actually happened will drastically change from, "the aircraft forced the pilots to crash."
Remember, autopilot and suchlike isn't replacing pilots. It's just another control method to let a pilot fly an aircraft with less work. But of course you need to do more work in training to understand the automation.
Curt J. Sampson - github.com/0cjs
- BigDumbDinosaur
Re: ChatGPT: ASCII to decimal conversion
cjs wrote:
This stuff isn't artificial intelligence...
Of course it isn’t, as there actually is no such thing.
Quote:
...it's just rule-based control systems that are very predictable.
Evidently, it’s not as predictable as one might think, since in at least two cases, things happened that were unexpected. In one case, the plane was flown into terrain by the “predictable, rule-based system” when it would not respond to control inputs by the pilot flying. In Sullenberger’s situation, in which he wanted to intentionally pitch up the aircraft to the point of incipient stall to lessen the severity of the impact with the water, the flight control computer refused to raise the elevators to the required degree—it wouldn’t let him intentionally stall the aircraft.
Quote:
It's just that you have to understand what the systems do which is complex.
You don’t need to patronize me on that—I likely know far more about aircraft control design than you’d suspect.
Quote:
But it's a general design rule of aircraft that there's always a way for the pilot to override automation...
There wasn’t in the case of the Airbus air show crash.
Software logic “convinced” the flight control computer the plane was configured to land and that the landing was about to occur. The aircraft would not respond to any control input that would put it into what the computer thought was a hazardous attitude that might trigger a stall. Flying at tree-top height and trimmed to land (full flaps and slats), the pilot’s attempts to initiate an emergency climb were rebuffed. The plane flew straight and level into the trees beyond the runway threshold with the engines running at about 90 percent power—sufficient to support a rate of climb that could have saved them.
Quote:
I can't speak to the specific situations you mention without references, but I suspect if you read the NTSB (or similar authority) reports, you'll find that your view of what actually happened will drastically change from, "the aircraft forced the pilots to crash."
It may surprise you to learn that not only do I read NTSB reports, I’m on a mailing list from which I automatically receive them.
In the case of the air show incident (Air France flight 296Q), it was not in the USA (the crash site was at Habsheim, in eastern France) and involved an Airbus A320, hence was out of the NTSB’s direct purview (however, a US investigator was involved per international protocol, as the aircraft had CFM engines). I did read the English translation of the report published by France’s BEA; the report was thorough, although phrased in some ways that suggested some political influence may have been involved in the translation.
According to the BEA report, FDR data indicated the plane was flying 30 feet AGL as it executed a low and slow pass down the runway. Airspeed was a few knots above stall. The plan was to then initiate a steep, full-power climb to demonstrate the plane’s performance during a go-around. The plane was supposed to be at 100 feet AGL during the pass, but the captain, flying an unstabilized visual approach, over-descended and evidently didn’t realize it until flying down the runway.
With the plane 70 feet lower than it should have been, and flying down the wrong runway (there’s a separate story behind that, which wasn’t widely reported), an immediate and extremely steep climb was needed to clear the trees up ahead. The F/O called for TO/GA power, the engines were immediately spooled up, and the captain pulled the side-stick way back to pitch up hard and climb out. However, the plane’s computer ignored his elevator input due to alpha mode operation (meant to prevent a pitch-up that would trigger a stall) and the plane flew straight into the trees. In short, while the initial low pass was improperly executed (unstable approach, insufficient altitude, lined up on the wrong runway), it was the &@*#@!% computer that sealed the plane’s fate by overriding the elevator input that was needed to get over the trees.
The BEA report made specific note of the alpha mode operation and the resulting aircraft behavior:
- “This above mentioned flight control mode, named Alpha Prot (high angle-of-attack protection) is a priority node [sic] as soon as angle of attack reaches 14.5°. It is not a degraded node and cannot be deactivated by the crew.” (emphasis added)
In other words, software overruled the captain’s control inputs and he didn’t have the option of defeating the “A-law” flight mode.
Airbus is a big proponent of fly-by-wire (FBW—the A320 was the first commercial airliner to use it), which in itself is not necessarily bad (unless you’re a pilot in an FBW plane with electrical problems or buggy software).
While Airbus and the French government did their best to deflect blame from the Airbus design, subsequent analysis strongly suggested the root cause of the crash was the flight control software’s A-law configuration. Yes, the plane was flown into a compromised position by virtue of being too slow, too low, and on the wrong flight path. That the engines were at flight idle didn’t help, as that increased the lag from when TO/GA was initiated to when N1 speed was high enough to produce the needed thrust. However, it appears the situation would have been recoverable had A-law not overruled the pilot’s actions.
Quote:
Remember, autopilot and suchlike isn't replacing pilots. It's just another control method to let a pilot fly an aircraft with less work. But of course you need to do more work in training to understand the automation.
Autopilots in themselves have usually not been a problem—at least, back when they were electromechanical and consistent in behavior. Where the trouble started was when someone got the idea that a computer could take over that and other aspects of flight management. Increasingly, that led to more judgment being off-loaded from pilots to machine, leading to a problem that commercial aviation is now encountering all-too-often.
The current generation of transport-rated pilots has become over-reliant on automation and increasingly does not exhibit the level of airmanship seen in past generations, who routinely flew planes with only autopilots and yaw dampers (and no FBW) to offload some of the work. The result is that mistakes can and do get made when the automation goes on the fritz.
A close friend who flies left seat for a major carrier, and is one of their senior check airmen, has mentioned several times that he has encountered pilots during check rides who do not have strong stick-and-rudder skills and have a tendency to “chase needles” when hand-flying. My friend blames that on what he calls excessive cockpit automation reliance—these airmen apparently aren’t doing enough hand flying to stay sharp.
During check rides, he’ll sometimes, without warning, trip the breaker powering the main flight control computer, thus killing autopilot, autothrottle and some other automation features (it does not disable basic flight instruments, e.g., altimeter and artificial horizon). He has been alarmed several times by pilots who had difficulty manually flying the plane and keeping it on course, at the prescribed altitude and at the prescribed airspeed.
As he has noted, automation has bred complacency in some airmen, and that is some day going to get people killed when a real emergency develops and the strong airmanship needed to save the day isn’t there.
Re: ChatGPT: ASCII to decimal conversion
BigDumbDinosaur wrote:
Of course it isn’t, as there actually is no such a thing.
Quote:
...it's just rule-based control systems that are very predictable.
Quote:
Autopilots in themselves have usually not been a problem—at least, back when they were electromechanical and consistent in behavior. Where the trouble started was when someone got the idea that a computer could take over that and other aspects of flight management.
Quote:
Increasingly, that led to more judgment being off-loaded from pilots to machine, leading to a problem that commercial aviation is now encountering all-too-often.
Quote:
The current generation of transport-rated pilots have become over-reliant on automation and increasingly do not exhibit the level of airmanship seen in past generations who routinely flew planes with only autopilots and yaw dampers (and no FBW) to offload some of the work. The result is mistakes can and do get made when the automation goes on the fritz.
Quote:
As he has noted, automation has bred complacency in some airmen, and that is some day going to get people killed when a real emergency develops and the strong airmanship needed to save the day isn’t there.
Humans make errors, and "be a better pilot!" is not going to fix them all. All the evidence points to the current trade-off we're making being better than the previous one.
Curt J. Sampson - github.com/0cjs
Re: ChatGPT: ASCII to decimal conversion
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...
Neil
Re: ChatGPT: ASCII to decimal conversion
barnacle wrote:
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...
Neil
- BigDumbDinosaur
Re: ChatGPT: ASCII to decimal conversion
pjdennis wrote:
barnacle wrote:
Of course, this is the same argument that fully controlled cars will replace manual drivers because *most of the time and for most people* they do a better job of it. Right up to the time they don't see a heavy goods vehicle across the road, or mistake an exit barrier for a road, or can't see a bicycle...
Neil
I am going way out on a limb here and will say automation will never replace human judgment. Human beings (intelligent ones, that is) have the innate ability to rapidly size up an unexpected situation and devise a workable solution with only incomplete data available. We’re nowhere near that capability with machine “intelligence.”
Re: ChatGPT: ASCII to decimal conversion
We have gone utterly off-topic. There's usually a cultural difference between a specialist forum and general-purpose social media.
- BigDumbDinosaur
Re: ChatGPT: ASCII to decimal conversion
BigEd wrote:
We have gone utterly off-topic.
...a not-uncommon occurrence around here.
Quote:
There's usually a cultural difference between a specialist forum and general purpose social media.
Well, the gist of this topic is AI and its foibles, especially as demonstrated by ChatGPT. Commentary has meandered from the original subject, but still is in the realm of AI’s limitations, I’d think. Oftentimes, anecdotes help to fill in the gaps.
Re: ChatGPT: ASCII to decimal conversion
BigDumbDinosaur wrote:
pjdennis wrote:
Even so, if and when we get to the point that autonomous vehicles are proven to be significantly safer on average, it will be hard to argue against wide adoption.
Basing one's approach to safety on emotionally resonant anecdotes literally kills people. Germany has done this over the past decade since Fukushima when they decided to shut down all their nuclear power plants. A good chunk of that power generation capability was replaced by coal which is far more polluting, directly kills far more people per megawatt generated even before you get to things like earlier deaths from pollution, and, ironically, releases more radiation into the environment per megawatt generated even counting radiation released by nuclear plant accidents. More people will die because of this than if they'd continued their use of nuclear power. (But at least they'll have died in the "right way." Sigh.)
That said, we're a long way away from reliable fully autonomous cars, much less aircraft. Arguing against that one is easy because the technology's not there yet. (Consider that we haven't even really started in on fully automated aircraft taxi yet, a much easier job than either, and one that would save airlines a considerable amount of money.)
BigDumbDinosaur wrote:
I am going way out on a limb here and will say automation will never replace human judgment.
In the realm of safety, automation is widely accepted too. There are a huge number of examples of things from anti-lock brakes to GFCI where we let automation completely take over because it indisputably keeps things a whole lot safer than letting humans do the job does.
Curt J. Sampson - github.com/0cjs
Re: ChatGPT: ASCII to decimal conversion
I am porting code from the 6502 to the 65816, and I needed a routine to print a 16-bit binary number as decimal. This has to be searchable, because someone has surely written it before me. In Edge I type "65816 assembly to print number as decimal number" and at the top is a panel with AI-generated code. But the code doesn't look right. It has repeated blocks of code and do-nothing blocks.
I ignore it and follow a few of the links, but they're not what I need. No problem, I'll just port my existing 6502 routine. But here's the weird part. I redo the search out of curiosity, and it generates a completely different block of code. Initially it looks better, but as I read it, I realize it's likewise bonkers, so I give it the thumbs-down feedback.
I'm not wasting my time trying this, but I thought I would share it. I added a few comments in caps where it's off the wall.
Go home, AI, you're drunk.
Code:
;---------------------------------------
; Print 16-bit unsigned integer in decimal
; Input: A = number (16-bit)
; Uses: X, Y, scratch bytes
;---------------------------------------
.a16
.i16
PRINT_DEC:
pha ; Save original number
ldx #0 ; Digit count
ldy #10 ; Divisor
; --- Convert to decimal digits (reverse order) ---
DEC_LOOP:
lsr A ; Shift right to prep for division
pla ; IT JUST SLAMMED THE LSR RESULT WITH THE ORIGINAL
pha
jsr DIVMOD10 ; Returns quotient in A, remainder in Y
tya
clc
adc #'0' ; Convert remainder to ASCII
sta DIGITS,x
inx
cmp #0 ; X STARTED AT ZERO, SO THIS LOOP WILL TAKE A WHILE
bne DEC_LOOP
pla ; Restore quotient
beq PRINT_DIGITS ; IF EQUAL GO TO THE NEXT LINE, OTHERWISE GO TO THE NEXT LINE
; --- Print digits in correct order ---
PRINT_DIGITS:
dex ; WAIT, I THOUGHT X WAS SUPPOSED TO BE ZERO?
bmi DONE ; X IS MINUS AFTER THE DEX, SO WE'RE DONE
lda DIGITS,x
jsr PUTCHAR
bra PRINT_DIGITS
; SO NO OUTPUT WILL BE GENERATED. I AM SAD
DONE:
rts
;---------------------------------------
; DIVMOD10: Divide 16-bit A by 10
; Returns: A = quotient, Y = remainder
;---------------------------------------
DIVMOD10:
ldy #0 ; WHY DID IT "LDY #10" IN THE MAIN LOOP?
DIV_LOOP10:
cmp #10
bcc DIV_DONE
sec
sbc #10
iny
bra DIV_LOOP10
DIV_DONE:
tya ; Y IS SUPPOSED TO BE THE REMAINDER AND THE A THE QUOTIENT
pha
tya ; A ALREADY HAS THAT VALUE IN IT
pla ; THIS PLA UNDID THE PHA ABOVE
tay ; THAT'S WHERE THIS CAME FROM IN THE FIRST PLACE
rts
DIGITS: .res 5 ; Enough for 65535
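For what it's worth, the algorithm the generated code was groping toward is simple: repeatedly divide by 10, buffer the remainders as ASCII digits (least-significant first), then emit the buffer in reverse. A minimal Python sketch of that logic, just to show what the 65816 routine (with its DIVMOD10 subroutine and DIGITS buffer) should have been doing:

```python
def print_dec(n):
    """Render a 16-bit unsigned integer as decimal digits.

    Mirrors what the assembly was attempting: divide by 10, store each
    remainder (least-significant digit first), then emit in reverse.
    """
    assert 0 <= n <= 0xFFFF
    digits = []
    while True:
        n, r = divmod(n, 10)            # quotient back into n, remainder is next digit
        digits.append(chr(r + ord('0')))  # convert remainder to ASCII, like adc #'0'
        if n == 0:                      # stop when the quotient reaches zero
            break
    return ''.join(reversed(digits))    # most-significant digit first

# print_dec(65535) → "65535"; print_dec(0) → "0"
```

Note the loop terminates on the *quotient*, not on the digit count or the digit value; the generated code's `cmp #0 / bne` against the just-stored ASCII digit (never zero, since '0' is $30) is one of the reasons it never exits cleanly.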