From: ELPA Syncer
Subject: [nongnu] elpa/gptel a202911009 148/273: gptel: Add post-stream hook, scroll commands
Date: Wed, 1 May 2024 10:02:17 -0400 (EDT)
branch: elpa/gptel
commit a202911009533fa216199d0546c39ed686f6044c
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
gptel: Add post-stream hook, scroll commands
* gptel.el (gptel-auto-scroll, gptel-end-of-response,
gptel-post-response-hook, gptel-post-stream-hook): Add
`gptel-post-stream-hook` that runs after each text insertion when
streaming responses. This can be used to, for instance,
auto-scroll the window as the response continues below the
viewport. The utility function `gptel-auto-scroll` does this.
Provide a utility command `gptel-end-of-response`, which moves the
cursor to the end of the response when it is in or before it.
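    As a sketch, both behaviors described in this entry can be enabled
    from an init file using the new hook and utility functions (this
    mirrors the snippets this commit adds to the README's FAQ):

    #+begin_src emacs-lisp
    ;; Scroll the window when streaming text continues below the viewport
    (add-hook 'gptel-post-stream-hook #'gptel-auto-scroll)

    ;; Move the cursor past the response once it is fully inserted
    (add-hook 'gptel-post-response-hook #'gptel-end-of-response)
    #+end_src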
* gptel-curl.el (gptel-curl--stream-insert-response): Run
`gptel-post-stream-hook` where required.
* README: Add FAQ, simplify structure, mention the new hooks and
scrolling/navigation options.
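    The README's new "gptel API" section points to =gptel-request=, which
    takes a prompt and a callback to act on the response. A minimal sketch
    (keyword arguments per the =gptel-request= docstring; the callback
    receives the response string, or nil on failure, and an info plist):

    #+begin_src emacs-lisp
    ;; Sketch only: echo a one-off query's response in the echo area.
    (gptel-request
     "Summarize this buffer in one sentence."
     :callback (lambda (response info)
                 (if response
                     (message "gptel: %s" response)
                   (message "gptel request failed: %s"
                            (plist-get info :status)))))
    #+end_src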
---
README.org | 97 ++++++++++++++++++++++++++++++++++++-----------------------
gptel-curl.el | 4 ++-
gptel.el | 40 ++++++++++++++++++++++--
3 files changed, 100 insertions(+), 41 deletions(-)
diff --git a/README.org b/README.org
index 72a0f9f1bf..7223fc4c7c 100644
--- a/README.org
+++ b/README.org
@@ -34,7 +34,6 @@
https://github-production-user-asset-6210df.s3.amazonaws.com/8607532/278854024-a
GPTel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
** Contents :toc:
- - [[#breaking-changes][Breaking Changes]]
- [[#installation][Installation]]
- [[#straight][Straight]]
- [[#manual][Manual]]
@@ -51,22 +50,18 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
- [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
- - [[#using-it-your-way][Using it your way]]
- - [[#extensions-using-gptel][Extensions using GPTel]]
+ - [[#faq][FAQ]]
+  - [[#i-want-the-window-to-scroll-automatically-as-the-response-is-inserted][I want the window to scroll automatically as the response is inserted]]
+  - [[#i-want-the-cursor-to-move-to-the-next-prompt-after-the-response-is-inserted][I want the cursor to move to the next prompt after the response is inserted]]
+  - [[#i-want-to-change-the-prefix-before-the-prompt-and-response][I want to change the prefix before the prompt and response]]
+ - [[#why-another-llm-client][Why another LLM client?]]
- [[#additional-configuration][Additional Configuration]]
- - [[#why-another-llm-client][Why another LLM client?]]
- - [[#will-you-add-feature-x][Will you add feature X?]]
+ - [[#the-gptel-api][The gptel API]]
+ - [[#extensions-using-gptel][Extensions using GPTel]]
- [[#alternatives][Alternatives]]
+ - [[#breaking-changes][Breaking Changes]]
- [[#acknowledgments][Acknowledgments]]
-** Breaking Changes
-
-- Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in =native-comp-eln-load-path=.
-
-- The user option =gptel-host= is deprecated. If the defaults don't work for you, use =gptel-make-openai= (which see) to customize server settings.
-
-- =gptel-api-key-from-auth-source= now searches for the API key using the host address for the active LLM backend, /i.e./ "api.openai.com" when using ChatGPT. You may need to update your =~/.authinfo=.
-
** Installation
GPTel is on MELPA. Ensure that MELPA is in your list of sources, then install gptel with =M-x package-install= =gptel=.
@@ -231,8 +226,8 @@ You can pick this backend from the transient menu when using gptel (see Usage),
|-------------------+-------------------------------------------------------------------------|
| *Command*          | Description                                                             |
|-------------------+-------------------------------------------------------------------------|
-| =gptel=            | Create a new dedicated chat buffer. (Not required, gptel works anywhere.) |
-| =gptel-send=       | Send selection, or conversation up to =(point)=. (Works anywhere in Emacs.) |
+| =gptel-send=       | Send conversation up to =(point)=, or selection if region is active. Works anywhere in Emacs. |
+| =gptel=            | Create a new dedicated chat buffer. Not required to use gptel. |
| =C-u= =gptel-send= | Transient menu for preferences, input/output redirection etc.           |
| =gptel-menu=       | /(Same)/                                                                |
|-------------------+-------------------------------------------------------------------------|
@@ -241,9 +236,9 @@ You can pick this backend from the transient menu when using gptel (see Usage),
*** In any buffer:
-1. Select a region of text and call =M-x gptel-send=. The response will be inserted below your region.
+1. Call =M-x gptel-send= to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
-2. You can select both the original prompt and the response and call =M-x gptel-send= again to continue the conversation.
+2. If a region is selected, the conversation will be limited to its contents.
3. Call =M-x gptel-send= with a prefix argument to
- set chat parameters (GPT model, directives etc) for this buffer,
@@ -280,23 +275,35 @@ The default mode is =markdown-mode= if available, else =text-mode=. You can set
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
-** Using it your way
+** FAQ
+*** I want the window to scroll automatically as the response is inserted
-GPTel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it.
+To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
-If you want custom behavior, such as
-- reading input from or output to the echo area,
-- or in pop-up windows,
-- sending the current line only, etc,
+#+begin_src emacs-lisp
+(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
+#+end_src
-GPTel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki][wiki]] for examples.
+*** I want the cursor to move to the next prompt after the response is inserted
-*** Extensions using GPTel
+To be minimally annoying, GPTel does not move the cursor by default. Add the following to your configuration to move the cursor:
-These are packages that depend on GPTel to provide additional functionality
+#+begin_src emacs-lisp
+(add-hook 'gptel-post-response-hook 'gptel-end-of-response)
+#+end_src
-- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for GPTel.
-- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo.
+You can also call =gptel-end-of-response= as a command at any time.
+
+*** I want to change the prefix before the prompt and response
+
+Customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=. You can set a different pair for each major-mode.
+
+*** Why another LLM client?
+
+Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
+
+1. Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
+2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
** Additional Configuration
:PROPERTIES:
@@ -335,18 +342,11 @@ These are packages that depend on GPTel to provide additional functionality
| *Chat UI options* | |
|-----------------------------+----------------------------------------|
| =gptel-default-mode= | Major mode for dedicated chat buffers. |
-| =gptel-prompt-prefix-alist= | Text inserted before queries. |
+| =gptel-prompt-prefix-alist= | Text inserted before queries. |
| =gptel-response-prefix-alist= | Text inserted before responses. |
|-----------------------------+----------------------------------------|
-** Why another LLM client?
-
-Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
-
-1. Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
-2. Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
-
-** Will you add feature X?
+** COMMENT Will you add feature X?
Maybe, I'd like to experiment a bit more first. Features added since the inception of this package include
- Curl support (=gptel-use-curl=)
@@ -365,6 +365,19 @@ Maybe, I'd like to experiment a bit more first. Features added since the incept
Features being considered or in the pipeline:
- Fully stateless design (#17)
+** The gptel API
+
+GPTel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).
+
+For more programmable usage, gptel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki][wiki]] for examples.
+
+*** Extensions using GPTel
+
+These are packages that depend on GPTel to provide additional functionality
+
+- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for GPTel.
+- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo.
+
** Alternatives
Other Emacs clients for LLMs include
@@ -374,13 +387,21 @@ Other Emacs clients for LLMs include
There are several more: [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]], [[https://github.com/iwahbe/chat.el][chat.el]]
+** Breaking Changes
+
+- Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in =native-comp-eln-load-path=.
+
+- The user option =gptel-host= is deprecated. If the defaults don't work for you, use =gptel-make-openai= (which see) to customize server settings.
+
+- =gptel-api-key-from-auth-source= now searches for the API key using the host address for the active LLM backend, /i.e./ "api.openai.com" when using ChatGPT. You may need to update your =~/.authinfo=.
+
** Acknowledgments
- [[https://github.com/algal][Alexis Gallagher]] and [[https://github.com/d1egoaz][Diego Alvarez]] for fixing a nasty multi-byte bug with =url-retrieve=.
- [[https://github.com/tarsius][Jonas Bernoulli]] for the Transient library.
-
# Local Variables:
# toc-org-max-depth: 4
+# eval: (and (fboundp 'toc-org-mode) (toc-org-mode 1))
# End:
diff --git a/gptel-curl.el b/gptel-curl.el
index f8aa63ab44..52295ba1a7 100644
--- a/gptel-curl.el
+++ b/gptel-curl.el
@@ -247,7 +247,9 @@ See `gptel--url-get-response' for details."
0 (length response) '(gptel response rear-nonsticky t)
response)
(goto-char tracking-marker)
- (insert response))))))
+ ;; (run-hooks 'gptel-pre-stream-hook)
+ (insert response)
+ (run-hooks 'gptel-post-stream-hook))))))
(defun gptel-curl--stream-filter (process output)
(let* ((proc-info (alist-get process gptel-curl--process-alist)))
diff --git a/gptel.el b/gptel.el
index 1a61e85b56..3ecb7eaa9a 100644
--- a/gptel.el
+++ b/gptel.el
@@ -211,10 +211,27 @@ to ChatGPT. Note: this hook only runs if the request succeeds."
:type 'hook)
(defcustom gptel-post-response-hook nil
- "Hook run after inserting ChatGPT's response into the current buffer.
+ "Hook run after inserting the LLM response into the current buffer.
This hook is called in the buffer from which the prompt was sent
-to ChatGPT. Note: this hook runs even if the request fails."
+to the LLM, and after the full response has been inserted. Note:
+this hook runs even if the request fails."
+ :group 'gptel
+ :type 'hook)
+
+;; (defcustom gptel-pre-stream-insert-hook nil
+;; "Hook run before each insertion of the LLM's streaming response.
+
+;; This hook is called in the buffer from which the prompt was sent
+;; to the LLM, immediately before text insertion."
+;; :group 'gptel
+;; :type 'hook)
+
+(defcustom gptel-post-stream-hook nil
+ "Hook run after each insertion of the LLM's streaming response.
+
+This hook is called in the buffer from which the prompt was sent
+to the LLM, and after a text insertion."
:group 'gptel
:type 'hook)
@@ -429,6 +446,25 @@ and \"apikey\" as USER."
"Ensure VAL is a number."
(if (stringp val) (string-to-number val) val))
+(defun gptel-auto-scroll ()
+ "Scroll window if LLM response continues below viewport.
+
+Note: This will move the cursor."
+ (when (and (window-live-p (get-buffer-window (current-buffer)))
+ (not (pos-visible-in-window-p)))
+ (scroll-up-command)))
+
+(defun gptel-end-of-response (&optional arg)
+ "Move point to the end of the LLM response ARG times."
+ (interactive "p")
+  (dotimes (_ (if arg (abs arg) 1))
+ (text-property-search-forward 'gptel 'response t)
+ (when (looking-at (concat "\n\\{1,2\\}"
+ (regexp-quote
+ (gptel-prompt-prefix-string))
+ "?"))
+ (goto-char (match-end 0)))))
+
(defmacro gptel--at-word-end (&rest body)
"Execute BODY at end of the current word or punctuation."
`(save-excursion