On Sat, Mar 18, 2023 at 09:22:43AM +0100, Mikael Djurfeldt wrote:
> BTW, in the bouncing ball example, I find it amazing that I could get an
> improvement of the code by complaining:
> But all those SDL_ calls look like C bindings. Please use guile-sdl2
> (It was also quite entertaining that I had to ask it to write the code
> "according to the guile-sdl2 manual".)
Perhaps you didn't know, but you are training the model :-)
Unfortunately not. I'm prompting it within its 32,000-token (GPT-4) context window. In the next conversation the model is back in exactly the same state again. Then, of course, it is possible that OpenAI chooses to filter out something from the dialogues it has had.
So, a trick you can do is to start every session with a standard set of prompts (like "keep it short" or whatever), which then acts as a kind of configuration.
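As a minimal sketch of that trick: each new session is seeded with the same fixed preamble, since the model retains nothing between conversations. The message format mirrors the OpenAI chat API's role/content dicts, but no API call is made here, and the function names (`new_conversation`, `ask`) are illustrative, not part of any library.

```python
# Hypothetical sketch: a fixed "configuration" preamble prepended to
# every fresh conversation. Nothing persists in the model itself, so
# the preamble must be resent each session.

CONFIG_PREAMBLE = [
    {"role": "system", "content": "Keep answers short."},
    {"role": "system", "content": "Use guile-sdl2, not raw SDL_ C bindings."},
]

def new_conversation():
    # Each session starts as a fresh copy of the configured preamble,
    # so later appends don't mutate the shared template.
    return list(CONFIG_PREAMBLE)

def ask(conversation, question):
    # Append the user's prompt; a real client would now send the whole
    # message list to the model and append the reply as well.
    conversation.append({"role": "user", "content": question})
    return conversation

session = ask(new_conversation(), "Write a bouncing ball demo.")
```

The point is simply that the "configuration" lives on the client side: the full message list, preamble included, is what gets sent every time.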
This is called gamification; it dates back at least to the early 2000s. Luis von Ahn did quite a bit of pioneering work in that area (he called it "human computation", the title of his PhD thesis; he also coined the term "Games With a Purpose"). Google licensed a game from him to make people "out there" tag images for free.
So I'd say this is established "technology".
I think you're right that this will be implemented.