6502.org Forum

All times are UTC




Post new topic Reply to topic  [ 44 posts ]  Go to page Previous  1, 2, 3  Next
Author Message
PostPosted: Mon May 01, 2023 3:01 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
Proxy wrote:
hmm, you just got unlucky it seems.


While I agree that much programming in industry relies on luck to be correct, I am far from convinced that "Put it into ChatGPT and hope you're lucky" is any kind of improvement over current methods of development.

Quote:
overall Language models like chatGPT can understand logic to a degree...


They do not. That is exactly the pathetic fallacy I mentioned earlier.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Mon May 01, 2023 3:33 pm 

Joined: Sun Jun 30, 2013 10:26 pm
Posts: 1929
Location: Sacramento, CA, USA
I'm certainly not an expert in the subject area, but the definition you linked talks about personification of non-humans, and humans certainly don't have the market cornered on logic, at least by my understanding of what that word means. I have managed to convince myself that the human population is approximately half-filled with idiots, and that realization has helped me navigate some tricky scenarios and emotions. :)

_________________
Got a kilobyte lying fallow in your 65xx's memory map? Sprinkle some VTL02C on it and see how it grows on you!

Mike B. (about me) (learning how to github)


PostPosted: Mon May 01, 2023 3:57 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
barrym95838 wrote:
I'm certainly not an expert in the subject area, but the definition you linked talks about personification of non-humans...

I think we can probably all agree that ChatGPT is not a human. Thus, attributing the text it produces to the same reasons that a human might have for producing that text is the very definition of the pathetic fallacy.

Quote:
...and humans certainly don't have the market cornered on logic...

I agree. But "humans often don't use logic" does not in any way reasonably lead one to "thus ChatGPT is using logic."

Quote:
I have managed to convince myself that the human population is approximately half-filled with idiots....

Which sounds to me like a good argument that "that text looks like it could come from a human" could well be coming from a logic-free idiot.

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Mon May 01, 2023 4:44 pm 

Joined: Thu Jan 21, 2016 7:33 pm
Posts: 269
Location: Placerville, CA
cjs wrote:
Which sounds to me like a good argument that "that text looks like it could come from a human" could well be coming from a logic-free idiot.

At long last, artificial stupidity!


PostPosted: Mon May 01, 2023 6:28 pm 

Joined: Tue Nov 10, 2015 5:46 am
Posts: 215
Location: Kent, UK
At least in the free ChatGPT 3.5, asking it to "make a 5-word sentence where each word begins and ends with the letter s" gives an incorrect response.

When told about its mistake, it apologizes and tries again. And fails again.

If you ask it what its goal is, and what constraints have been given, it will correctly answer that it has been asked to generate a 5-word sentence, etc. It will try again, and fail.

If you ask it to LIST 5 words that all start and end with the letter s, it will similarly fail.

If you ask it to evaluate whether the 5 words it came up with match the constraints (that each word start and end with s), it will correctly identify its mistake, and try again (and fail again).

If you ask it to repeatedly generate 5 word groups, evaluate the correctness of its answer, and repeat until all 5 words are correct, it will try, correctly recognize its errors, repeat a few times, and finally decide that an incorrect list of words is its final answer.
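The constraint in this test is easy to check mechanically, which is what makes the repeated failures so striking. Here is a minimal Python checker for it (an illustrative sketch, not anything from the thread; the function name and the punctuation handling are my own):

```python
def satisfies_constraint(sentence, letter="s", n_words=5):
    """Check the test constraint: exactly n_words words,
    each of which begins and ends with the given letter."""
    # Strip trailing punctuation and ignore case before testing each word.
    words = [w.strip(".,!?;:").lower() for w in sentence.split()]
    return len(words) == n_words and all(
        w.startswith(letter) and w.endswith(letter) for w in words
    )

print(satisfies_constraint("Sis says Silas sees swans"))      # True
print(satisfies_constraint("Sally sells sea shells slowly"))  # False
```

Running each of the model's attempts through a check like this is exactly the evaluate-and-retry loop described above, performed reliably.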

This is a test I did a month or so ago, so it may or may not still do this... and I haven't tried the pay version (which is supposed to be dramatically superior).

ChatGPT 3.5 is pretty good at generating fragments of code. I had it generate a number of different 'C' and python pieces. Some it got 100% right, others it got mostly right. As long as you don't blindly trust its results, it can save an awful lot of boilerplate typing.

When it fails, though, sometimes on the very simplest things, it's hilarious.


PostPosted: Mon May 01, 2023 6:41 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8182
Location: Midwestern USA
barrym95838 wrote:
I have managed to convince myself that the human population is approximately half-filled with idiots, and that realization has helped me navigate some tricky scenarios and emotions. :)

Hmmm...why does your remark make me think of that vast wasteland we call the “Internet?”

Attachment:
apes_on_computers.jpg

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon May 01, 2023 7:02 pm 

Joined: Mon Jan 19, 2004 12:49 pm
Posts: 684
Location: Potsdam, DE
HHGTTG...

"Ford, there's an infinity of monkeys out there, want to talk to us about this script for Shakespeare they've worked out."

For Pete's sake: they trained it on the Internet, a place notorious for people asking questions which are then answered by people with even less of a clue. What did they expect?

Neil


PostPosted: Mon May 01, 2023 7:30 pm 

Joined: Tue Nov 16, 2010 8:00 am
Posts: 2353
Location: Gouda, The Netherlands
sark02 wrote:
At least in the free ChatGPT 3.5, asking it to "make a 5-word sentence where each word begins and ends with the letter s", gives an incorrect response.


That's not really surprising. It's a problem that requires processing its own partial output. It's a small miracle that it even gets close.


PostPosted: Mon May 01, 2023 9:02 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8182
Location: Midwestern USA
Just keep in mind, gentlemen, the meaning of the adjective “artificial” that precedes the noun “intelligence.” It’s helpful to think in terms of artificial flavors in foods... :D

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Mon May 01, 2023 9:09 pm 

Joined: Thu May 28, 2009 9:46 pm
Posts: 8182
Location: Midwestern USA
barnacle wrote:
"Ford, there's an infinity of monkeys out there, want to talk to us about this script for Shakespeare they've worked out."

Put that same cadre of monkeys on synthesizers and they’ll likely compose a new version of Die Kunst der Fuge. :shock:

_________________
x86?  We ain't got no x86.  We don't NEED no stinking x86!


PostPosted: Wed May 10, 2023 3:48 pm 

Joined: Sat Oct 09, 2021 11:21 am
Posts: 704
Location: Texas
Quote:
Chad: Hello

ChatGPT: Hello, how may I assist you today?

Chad: Do you know about the Chatbot ELIZA?

ChatGPT: Yes, as an AI language model, I am aware of the Chatbot ELIZA. ELIZA was an early natural language processing program developed in the 1960s by Joseph Weizenbaum. ELIZA was designed to simulate a psychotherapist, using pattern recognition techniques to respond to users' inputs. ELIZA is widely considered to be one of the first chatbots and set the foundation for many future language models and conversational agents.

Chad: How are you any different?

ChatGPT: As an AI language model, I do not have the ability to differ in the sense of personal qualities or characteristics. However, I differ in my function as a language model, providing assistance in generating human-like responses and engaging in conversation with people.

Chad: So you are "without personality", yet use the same functionality?

ChatGPT: As an AI language model, I do not have a personal identity or self-awareness. However, I am programmed to respond to user inputs and generate appropriate outputs based on programmed algorithms and data processing. Therefore, I use the same functionalities that are inherent in my programming to understand and respond to user queries.

Chad: Just like ELIZA.

ChatGPT: I'm sorry, I'm not sure what you're referring to. Can you please provide more information or context?


I'm unimpressed. Besides, it's more *fun* for humans to solve their own problems, right? :)

Chad


PostPosted: Wed May 10, 2023 3:55 pm 

Joined: Sat Dec 01, 2018 1:53 pm
Posts: 727
Location: Tokyo, Japan
sburrow wrote:
I'm unimpressed.

Are you kidding? Isn't "I'm sorry, I'm not sure what you're referring to. Can you please provide more information or context?" so much better than "Can you elaborate on that?" What more could you ask for from over fifty years of chatbot research? :-P

_________________
Curt J. Sampson - github.com/0cjs


PostPosted: Wed May 10, 2023 9:36 pm 
Joined: Thu Jan 21, 2016 7:33 pm
Posts: 269
Location: Placerville, CA
Ahahahahahaha I'm stealing that.*

* (with attribution ;D)


PostPosted: Thu May 11, 2023 4:02 pm 
Joined: Mon Jan 19, 2004 12:49 pm
Posts: 684
Location: Potsdam, DE
+++OUT OF CHEESE ERROR+++


PostPosted: Fri May 12, 2023 2:40 pm 

Joined: Fri Apr 15, 2022 1:56 pm
Posts: 45
Location: San Antonio, TX, USA
sburrow wrote:
Quote:
Chad: Hello

ChatGPT: Hello, how may I assist you today?

For comparison, ChatGPT version 4:
Quote:
Me: Do you know about the Chatbot ELIZA?

ChatGPT4: Yes, I do. ELIZA is one of the earliest examples of a chatbot, developed in the mid-1960s by Joseph Weizenbaum at the Massachusetts Institute of Technology (MIT). ELIZA used pattern matching and substitution methodology to simulate conversation, which was quite advanced for its time.

ELIZA's most famous script, DOCTOR, was designed to simulate a Rogerian psychotherapist (named after psychologist Carl Rogers), who uses restatement and reflection of a patient's words instead of direct answers or advice. This made ELIZA appear to understand conversation, even though it didn't actually comprehend what it was saying.

For example, if a user said to ELIZA, "I am feeling sad," ELIZA might respond, "I'm sorry to hear you're feeling sad. Can you tell me more about that?" It's important to note that ELIZA didn't understand emotion or context but instead used pattern recognition and pre-determined scripts to generate responses.

While ELIZA was a pioneering system in artificial intelligence and natural language processing, it is very basic by today's standards. Modern chatbots and AI systems like GPT-4, the model on which I am based, use machine learning algorithms to generate more contextually accurate and coherent responses.

Me: How are you any different?

ChatGPT4: While ELIZA was groundbreaking in its time, it was essentially a sophisticated pattern-matching system. It didn't understand language in the way humans do, but instead looked for specific patterns in the input it received and then produced pre-programmed responses. It couldn't learn from previous interactions, handle unexpected inputs gracefully, or generate new and contextually appropriate responses.

On the other hand, I (ChatGPT) am based on the GPT-4 model developed by OpenAI, which uses machine learning to generate responses. GPT-4 is trained on a vast amount of text data and can generate unique responses to a wide range of inputs. It's capable of understanding and generating language in a more human-like way, making the conversation feel more natural and engaging.

Here are a few ways in which I differ from ELIZA:

  1. Understanding Context: Unlike ELIZA, I can understand and keep track of context to a certain extent. This allows me to provide responses that are more relevant to the ongoing conversation.
  2. Generating Unique Responses: I can generate unique and creative responses instead of relying on a fixed set of pre-programmed outputs.
  3. Learning from Data: My underlying model is trained on a large dataset, which allows me to understand a wide array of topics and generate responses accordingly.
  4. Versatility: While ELIZA was designed to mimic a psychotherapist, I can converse on a wide range of topics, provide information, generate creative text, and more.

However, it's important to note that like ELIZA, I do not have consciousness or feelings, nor do I have access to personal data about individuals unless it has been shared in the course of our conversation. I am designed to respect user privacy and confidentiality.

Me: So you are "without personality", yet use the same functionality?

ChatGPT4: As an AI language model, I don't possess a personality in the human sense. I don't have feelings, beliefs, or desires, and I don't have personal experiences. However, the responses I generate can give the impression of personality because they're based on patterns in the data I was trained on, which includes a wide range of human writings and communications.

Regarding functionality, while there might be some surface similarities between ELIZA and me (for example, we both generate text-based responses to user inputs), the underlying technologies are quite different. ELIZA was based on simple pattern matching and substitution, whereas I use a machine learning model (specifically, a transformer model) to understand and generate language in a more human-like way.

This allows me to handle a much wider range of inputs and generate more contextually appropriate and diverse outputs than ELIZA could. However, it's worth noting that both ELIZA and I are tools created by humans and don't possess consciousness or personal identities.

Me: Just like ELIZA.

ChatGPT4: Yes, in that respect, I am indeed like ELIZA. Both ELIZA and I, ChatGPT, are artificial intelligence programs that interact with users through text-based conversation. We don't possess consciousness, emotions, or personal identities. We both generate responses based on the input we receive, following the rules and methodologies programmed into us.

However, there are significant differences in those rules and methodologies, as well as the complexity and versatility of our responses. While ELIZA relies on pattern matching and pre-determined scripts, I use advanced machine learning algorithms to generate unique and contextually relevant responses. But at the end of the day, we are both tools developed by human engineers to facilitate human-computer interaction.
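The pattern-matching-and-substitution mechanism ChatGPT describes here is simple enough to sketch in a few lines of Python. The rules and responses below are illustrative stand-ins of my own, not Weizenbaum's actual DOCTOR script:

```python
import re

# Minimal ELIZA-style pattern matching and substitution: match a keyword
# pattern, reflect the user's pronouns, and fill the captured text into
# a canned response template.

REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I),
     "I'm sorry to hear you're {0}. Can you tell me more about that?"),
    (re.compile(r"i (?:want|need) (.*)", re.I),
     "Why do you want {0}?"),
]

def reflect(text):
    """Swap first-person words for second-person ones ("my" -> "your")."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(user_input):
    for pattern, template in RULES:
        m = pattern.match(user_input)
        if m:
            return template.format(reflect(m.group(1)))
    return "Can you elaborate on that?"  # the classic fallback

print(respond("I am feeling sad"))
# -> I'm sorry to hear you're feeling sad. Can you tell me more about that?
```

With enough rules, this produces the "I am feeling sad" exchange quoted above; anything unmatched falls through to the stock prompt, which is why ELIZA-style bots lean so heavily on "Can you elaborate on that?"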

