From: ELPA Syncer
Subject: [nongnu] elpa/gptel 190d1d20e2 121/273: gptel: Update header line and package info description
Date: Wed, 1 May 2024 10:02:11 -0400 (EDT)

branch: elpa/gptel
commit 190d1d20e200352b7adce363d2fc38895cec37c2
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>

    gptel: Update header line and package info description
---
 gptel.el | 54 ++++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 44 insertions(+), 10 deletions(-)

diff --git a/gptel.el b/gptel.el
index f29a7abbac..4b121d154c 100644
--- a/gptel.el
+++ b/gptel.el
@@ -1,4 +1,4 @@
-;;; gptel.el --- A simple ChatGPT client      -*- lexical-binding: t; -*-
+;;; gptel.el --- A simple multi-LLM client      -*- lexical-binding: t; -*-
 
 ;; Copyright (C) 2023  Karthik Chikmagalur
 
@@ -27,29 +27,63 @@
 
 ;;; Commentary:
 
-;; A simple ChatGPT client for Emacs.
+;; gptel is a simple Large Language Model chat client, with support for multiple models/backends.
 ;;
-;; Requirements:
-;; - You need an OpenAI API key. Set the variable `gptel-api-key' to the key or to
-;;   a function of no arguments that returns the key.
+;; gptel supports ChatGPT, Azure, and local models using Ollama and GPT4All.
 ;;
-;; - Not required but recommended: Install `markdown-mode'.
+;;  Features:
+;;  - It’s async and fast, streams responses.
+;;  - Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer,
+;;    wherever)
+;;  - LLM responses are in Markdown or Org markup.
+;;  - Supports conversations and multiple independent sessions.
+;;  - Save chats as regular Markdown/Org/Text files and resume them later.
+;;  - You can go back and edit your previous prompts or LLM responses when
+;;    continuing a conversation. These will be fed back to the model.
+;;
+;; Requirements for ChatGPT/Azure:
+;;
+;; - You need an OpenAI API key. Set the variable `gptel-api-key' to the key or
+;;   to a function of no arguments that returns the key. (It tries to use
+;;   `auth-source' by default)
+;;
+;; - For Azure: define a gptel-backend with `gptel-make-azure', which see.
+;;
+;; For local models using Ollama or GPT4All:
+;;
+;; - The model has to be running on an accessible address (or localhost)
+;; - Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all',
+;;   which see.
 ;;
 ;; Usage:
-;; gptel can be used in any buffer or in a dedicated chat buffer.
+;;
+;; gptel can be used in any buffer or in a dedicated chat buffer. The
+;; interaction model is simple: Type in a query and the response will be
+;; inserted below. You can continue the conversation by typing below the
+;; response.
 ;;
 ;; To use this in a dedicated buffer:
 ;; - M-x gptel: Start a ChatGPT session
 ;; - C-u M-x gptel: Start another session or multiple independent ChatGPT sessions
 ;;
-;; - In the chat session: Press `C-c RET' (`gptel-send') to send
-;;   your prompt. Use a prefix argument (`C-u C-c RET') to set chat parameters.
+;; - In the chat session: Press `C-c RET' (`gptel-send') to send your prompt.
+;;   Use a prefix argument (`C-u C-c RET') to access a menu. In this menu you
+;;   can set chat parameters like the system directives, active backend or
+;;   model, or choose to redirect the input or output elsewhere (such as to the
+;;   kill ring).
+;;
+;; - If using `org-mode': You can save this buffer to a file. When opening this
+;;   file, turning on `gptel-mode' will allow resuming the conversation.
 ;;
 ;; To use this in any buffer:
 ;;
 ;; - Select a region of text and call `gptel-send'. Call with a prefix argument
-;;   to set chat parameters.
+;;   to access the menu. The contents of the buffer up to (point) are used
+;;   if no region is selected.
 ;; - You can select previous prompts and responses to continue the conversation.
+;;
+;; Finally, gptel offers a general purpose API for writing LLM interactions
+;; that suit how you work, see `gptel-request'.
 
 ;;; Code:
 (declare-function markdown-mode "markdown-mode")

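The ChatGPT and Ollama setup described in the updated Commentary can be sketched in a user's init file roughly as follows. This is a hedged illustration, not part of the commit: the key placeholder, host, port, and model name are assumptions, and `gptel-make-ollama' is used as the Commentary describes.

```elisp
;; Sketch of the setup the Commentary describes (illustrative values).

;; ChatGPT: set the API key directly, or to a function of no arguments
;; returning it (gptel also consults `auth-source' by default).
(setq gptel-api-key "sk-...")  ; placeholder key, not a real value

;; Local model via Ollama: define a gptel-backend.  The host and model
;; name here are assumptions; use whatever your Ollama server serves.
(gptel-make-ollama "Ollama"
  :host "localhost:11434"
  :stream t
  :models '("mistral:latest"))
```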


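The general-purpose `gptel-request' API mentioned at the end of the Commentary can be used along these lines. The prompt text and callback body are hypothetical; the `:callback' function receives the response text (or nil on failure) and an info plist.

```elisp
;; Illustrative sketch: query an LLM programmatically and handle the
;; reply asynchronously in a callback.
(gptel-request
 "Summarize the selected region in one sentence."
 :callback (lambda (response info)
             (if response
                 (message "LLM reply: %s" response)
               (message "gptel-request failed: %s"
                        (plist-get info :status)))))
```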