guile-devel

Re: GPT-4 knows Guile! :)


From: Damien Mattei
Subject: Re: GPT-4 knows Guile! :)
Date: Sat, 18 Mar 2023 10:23:11 +0100

ChatGPT uses the "static" free data available on the internet up to 2021 (free databases, etc.); as "they" say, it knows little about anything after 2021.

Without the human intelligence that created all that data, ChatGPT can do nothing.

Now people use ChatGPT to fill web pages with poor-quality data instead of having intelligent humans write the content. The next release of ChatGPT will then include this new "poor data" in its training set and deliver answers of even lesser quality.

On Sat, Mar 18, 2023 at 10:03 AM Mikael Djurfeldt <mikael@djurfeldt.com> wrote:
On Sat, Mar 18, 2023 at 9:58 AM Mikael Djurfeldt <mikael@djurfeldt.com> wrote:
On Sat, Mar 18, 2023 at 9:46 AM <tomas@tuxteam.de> wrote:
On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote:
> On Sat, Mar 18, 2023 at 9:36 AM <tomas@tuxteam.de> wrote:

[...]

> > Perhaps you didn't know, but you are training the model :-)
> >
>
> Unfortunately not. I'm prompting it within its 32000 token (GPT-4)
> attention span. Next conversation the model is back to exactly the same
> state again. Then, of course, it is possible that OpenAI chooses to filter
> out something from the dialogs it has had.

You don't think that those conversations end up as raw data for the
next model? I'd be surprised, but you definitely know more than me.

I know very little apart from knowing what deep learning is and having skimmed the "Attention is all you need"-paper. I only meant that you are not training the model during and between sessions. It is certainly possible that OpenAI filters out things from the dialogs to use as part of training for the next version. They warn you that they may take data from the dialogs. If and how they do that I don't know.

Or, as GPT-4 would phrase it: I apologize for the confusion in my previous answer. You may be right that I'm training the next version of the model. :)
