On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote:
> On Sat, Mar 18, 2023 at 9:36 AM <firstname.lastname@example.org> wrote:
> > Perhaps you didn't know, but you are training the model :-)
> Unfortunately not. I'm prompting it within its 32000 token (GPT-4)
> attention span. Next conversation the model is back to exactly the same
> state again. Then, of course, it is possible that OpenAI chooses to filter
> out something from the dialogs it has had.
You don't think that those conversations end up as raw data for the
next model? I'd be surprised, but you definitely know more than I do.
I know very little apart from knowing what deep learning is and having skimmed the "Attention Is All You Need" paper. I only meant that you are not training the model during or between sessions. It is certainly possible that OpenAI filters out parts of the dialogs to use as training data for the next version; they do warn you that they may take data from the dialogs. If and how they actually do that, I don't know.