From: ELPA Syncer
Subject: [nongnu] elpa/gptel c6a07043af 179/273: gptel-kagi: Add support for Kagi FastGPT
Date: Wed, 1 May 2024 10:02:20 -0400 (EDT)
branch: elpa/gptel
commit c6a07043af9c185bdeb068ef7660991588714ea2
Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
Commit: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
gptel-kagi: Add support for Kagi FastGPT
* gptel.el: Bump version and update package description.
* gptel-kagi.el (gptel--parse-response, gptel--request-data,
gptel--parse-buffer, gptel-make-kagi): Add new file and support
for the Kagi FastGPT LLM API. Streaming and setting model
parameters (temperature, max tokens) are not supported by the API.
A Kagi backend can be added with `gptel-make-kagi`.
* README.org: Update with instructions for Kagi.
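The registration flow this commit message describes can be sketched as follows; the forms mirror the README hunk in this commit, and the backend name and key are placeholders, not required values:

```emacs-lisp
;; Sketch based on this commit's README additions; "Kagi" and the
;; key string are placeholders.
(gptel-make-kagi "Kagi" :key "YOUR_KAGI_API_KEY")

;; Optionally make it the default backend.  "fastgpt" is the only
;; model the API supports, per the commit message.
(setq-default gptel-model "fastgpt"
              gptel-backend (gptel-make-kagi "Kagi" :key "YOUR_KAGI_API_KEY"))
```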
---
README.org | 51 ++++++++++++++-----
gptel-kagi.el | 154 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
gptel.el | 36 ++++++++------
3 files changed, 215 insertions(+), 26 deletions(-)
diff --git a/README.org b/README.org
index 11073798f2..67755213e6 100644
--- a/README.org
+++ b/README.org
@@ -2,18 +2,19 @@
[[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
-GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models/backends.
-
-| LLM Backend | Supports | Requires                  |
-|-------------+----------+---------------------------|
-| ChatGPT     | ✓        | [[https://platform.openai.com/account/api-keys][API key]] |
-| Azure       | ✓        | Deployment and API key    |
-| Ollama      | ✓        | [[https://ollama.ai/][Ollama running locally]] |
-| GPT4All     | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
-| Gemini      | ✓        | [[https://makersuite.google.com/app/apikey][API key]] |
-| Llama.cpp   | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
-| Llamafile   | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
-| PrivateGPT  | Planned  | -                         |
+GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends.
+
+| LLM Backend  | Supports | Requires                  |
+|--------------+----------+---------------------------|
+| ChatGPT      | ✓        | [[https://platform.openai.com/account/api-keys][API key]] |
+| Azure        | ✓        | Deployment and API key    |
+| Ollama       | ✓        | [[https://ollama.ai/][Ollama running locally]] |
+| GPT4All      | ✓        | [[https://gpt4all.io/index.html][GPT4All running locally]] |
+| Gemini       | ✓        | [[https://makersuite.google.com/app/apikey][API key]] |
+| Llama.cpp    | ✓        | [[https://github.com/ggerganov/llama.cpp/tree/master/examples/server#quick-start][Llama.cpp running locally]] |
+| Llamafile    | ✓        | [[https://github.com/Mozilla-Ocho/llamafile#quickstart][Local Llamafile server]] |
+| Kagi FastGPT | ✓        | [[https://kagi.com/settings?p=api][API key]] |
+| PrivateGPT   | Planned  | -                         |
*General usage*: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
@@ -48,6 +49,7 @@ GPTel uses Curl if available, but falls back to url-retrieve to work without ext
- [[#ollama][Ollama]]
- [[#gemini][Gemini]]
- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
+ - [[#kagi-fastgpt][Kagi FastGPT]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
@@ -249,6 +251,31 @@ You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
#+end_src
#+html: </details>
+#+html: <details><summary>
+**** Kagi FastGPT
+#+html: </summary>
+
+*NOTE*: Kagi's FastGPT model does not support multi-turn conversations; interactions are "one-shot". It also does not support streaming responses.
+
+Register a backend with
+#+begin_src emacs-lisp
+;; :key can be a function that returns the API key
+(gptel-make-kagi
+ "Kagi" ;Name of your choice
+ :key "YOUR_KAGI_API_KEY")
+#+end_src
+These are the required parameters; refer to the documentation of =gptel-make-kagi= for more.
+
+You can pick this backend from the transient menu when using gptel (see Usage), or set this as the default value of =gptel-backend=:
+
+#+begin_src emacs-lisp
+;; OPTIONAL configuration
+(setq-default gptel-model "fastgpt" ;only supported Kagi model
+ gptel-backend (gptel-make-kagi "Kagi" :key ...))
+#+end_src
+
+#+html: </details>
+
** Usage
(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)
diff --git a/gptel-kagi.el b/gptel-kagi.el
new file mode 100644
index 0000000000..70d8189be2
--- /dev/null
+++ b/gptel-kagi.el
@@ -0,0 +1,154 @@
+;;; gptel-kagi.el --- Kagi support for gptel -*- lexical-binding: t; -*-
+
+;; Copyright (C) 2023 Karthik Chikmagalur
+
+;; Author: Karthik Chikmagalur <karthikchikmagalur@gmail.com>
+;; Keywords: hypermedia
+
+;; This program is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation, either version 3 of the License, or
+;; (at your option) any later version.
+
+;; This program is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+;;; Commentary:
+
+;; This file adds support for the Kagi FastGPT LLM API to gptel
+
+;;; Code:
+(require 'gptel)
+(require 'cl-generic)
+(eval-when-compile
+ (require 'cl-lib))
+
+;;; Kagi
+(cl-defstruct (gptel-kagi (:constructor gptel--make-kagi)
+ (:copier nil)
+ (:include gptel-backend)))
+
+(cl-defmethod gptel--parse-response ((_backend gptel-kagi) response info)
+ (let* ((data (plist-get response :data))
+ (output (plist-get data :output))
+ (references (plist-get data :references)))
+ (when references
+ (setq references
+ (cl-loop with linker =
+ (pcase (buffer-local-value 'major-mode
+ (plist-get info :buffer))
+ ('org-mode
+ (lambda (text url)
+ (format "[[%s][%s]]" url text)))
+ ('markdown-mode
+ (lambda (text url)
+ (format "[%s](%s)" text url)))
+ (_ (lambda (text url)
+ (buttonize
+ text (lambda (data) (browse-url data))
+ url))))
+ for ref across references
+ for title = (plist-get ref :title)
+ for snippet = (plist-get ref :snippet)
+ for url = (plist-get ref :url)
+ for n upfrom 1
+ collect
+ (concat (format "[%d] " n)
+ (funcall linker title url) ": "
+ (replace-regexp-in-string
+ "</?b>" "*" snippet))
+ into ref-strings
+ finally return
+ (concat "\n\n" (mapconcat #'identity ref-strings "\n")))))
+ (concat output references)))
+
+(cl-defmethod gptel--request-data ((_backend gptel-kagi) prompts)
+  "JSON encode PROMPTS for sending to Kagi."
+ `(,@prompts :web_search t :cache t))
+
+(cl-defmethod gptel--parse-buffer ((_backend gptel-kagi) &optional _max-entries)
+  (let ((prompts)
+        (prop (text-property-search-backward
+               'gptel 'response
+               (when (get-char-property (max (point-min) (1- (point)))
+                                        'gptel)
+                 t))))
+    (if (and (prop-match-p prop)
+             (prop-match-value prop))
+        (user-error "No user prompt found!")
+      (setq prompts (list
+                     :query
+                     (if (prop-match-p prop)
+                         (concat
+                          ;; Fake a system message by including it in the prompt
+                          gptel--system-message "\n\n"
+                          (string-trim
+                           (buffer-substring-no-properties (prop-match-beginning prop)
+                                                           (prop-match-end prop))
+                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                                   (regexp-quote (gptel-prompt-prefix-string)))
+                           (format "[\t\r\n ]*\\(?:%s\\)?[\t\r\n ]*"
+                                   (regexp-quote (gptel-response-prefix-string)))))
+                       "")))
+      prompts)))
+
+;;;###autoload
+(cl-defun gptel-make-kagi
+ (name &key stream key
+ (host "kagi.com")
+            (header (lambda () `(("Authorization" . ,(concat "Bot " (gptel--get-api-key))))))
+ (models '("fastgpt"))
+ (protocol "https")
+ (endpoint "/api/v0/fastgpt"))
+ "Register a Kagi FastGPT backend for gptel with NAME.
+
+Keyword arguments:
+
+HOST is the Kagi host (with port), defaults to \"kagi.com\".
+
+MODELS is a list of available Kagi models: only fastgpt is supported.
+
+STREAM is a boolean to toggle streaming responses, defaults to
+false. Kagi does not support a streaming API yet.
+
+PROTOCOL (optional) specifies the protocol, https by default.
+
+ENDPOINT (optional) is the API endpoint for completions, defaults to
+\"/api/v0/fastgpt\".
+
+HEADER (optional) is for additional headers to send with each
+request.  It should be an alist or a function that returns an
+alist, like:
+((\"Content-Type\" . \"application/json\"))
+
+KEY (optional) is a variable whose value is the API key, or
+function that returns the key.
+
+Example:
+-------
+
+(gptel-make-kagi \"Kagi\" :key my-kagi-key)"
+ stream ;Silence byte-compiler
+ (let ((backend (gptel--make-kagi
+ :name name
+ :host host
+ :header header
+ :key key
+ :models models
+ :protocol protocol
+ :endpoint endpoint
+ :url (if protocol
+ (concat protocol "://" host endpoint)
+ (concat host endpoint)))))
+ (prog1 backend
+ (setf (alist-get name gptel--known-backends
+ nil nil #'equal)
+ backend))))
+
+(provide 'gptel-kagi)
+;;; gptel-kagi.el ends here
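As the README comment in this commit notes, the =:key= slot accepts a function as well as a string. A minimal sketch, assuming an auth-source entry of the form =machine kagi.com password <key>= exists in =~/.authinfo=:

```emacs-lisp
;; Assumption: the API key is stored in auth-source under host "kagi.com".
(require 'auth-source)
(gptel-make-kagi "Kagi"
  :key (lambda ()
         (auth-source-pick-first-password :host "kagi.com")))
```

The default header lambda above calls `gptel--get-api-key`, which handles both string and function keys, so a function like this keeps the key out of your init file.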
diff --git a/gptel.el b/gptel.el
index b97a523303..837616f4cd 100644
--- a/gptel.el
+++ b/gptel.el
@@ -3,7 +3,7 @@
;; Copyright (C) 2023 Karthik Chikmagalur
;; Author: Karthik Chikmagalur
-;; Version: 0.5.5
+;; Version: 0.6.0
;; Package-Requires: ((emacs "27.1") (transient "0.4.0") (compat "29.1.4.1"))
;; Keywords: convenience
;; URL: https://github.com/karthink/gptel
@@ -29,20 +29,24 @@
;; gptel is a simple Large Language Model chat client, with support for multiple models/backends.
;;
-;; gptel supports ChatGPT, Azure, Gemini and local models using Ollama and
-;; GPT4All.
+;; gptel supports
+;; - The services ChatGPT, Azure, Gemini, and Kagi (FastGPT)
+;; - Local models via Ollama, Llama.cpp, Llamafiles or GPT4All
;;
-;; Features:
-;; - It’s async and fast, streams responses.
-;; - Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer,
-;; wherever)
-;; - LLM responses are in Markdown or Org markup.
-;; - Supports conversations and multiple independent sessions.
-;; - Save chats as regular Markdown/Org/Text files and resume them later.
-;; - You can go back and edit your previous prompts or LLM responses when
-;; continuing a conversation. These will be fed back to the model.
+;; Additionally, any LLM service (local or remote) that provides an
+;; OpenAI-compatible API is supported.
;;
-;; Requirements for ChatGPT, Azure or Gemini:
+;; Features:
+;; - It’s async and fast, streams responses.
+;; - Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer,
+;; wherever)
+;; - LLM responses are in Markdown or Org markup.
+;; - Supports conversations and multiple independent sessions.
+;; - Save chats as regular Markdown/Org/Text files and resume them later.
+;; - You can go back and edit your previous prompts or LLM responses when
+;; continuing a conversation. These will be fed back to the model.
+;;
+;; Requirements for ChatGPT, Azure, Gemini or Kagi:
;;
;; - You need an appropriate API key. Set the variable `gptel-api-key' to the
;; key or to a function of no arguments that returns the key. (It tries to
@@ -50,13 +54,17 @@
;;
;; - For Azure: define a gptel-backend with `gptel-make-azure', which see.
;; - For Gemini: define a gptel-backend with `gptel-make-gemini', which see.
+;; - For Kagi: define a gptel-backend with `gptel-make-kagi', which see.
;;
-;; For local models using Ollama or GPT4All:
+;; For local models using Ollama, Llama.cpp or GPT4All:
;;
;; - The model has to be running on an accessible address (or localhost)
;; - Define a gptel-backend with `gptel-make-ollama' or `gptel-make-gpt4all',
;; which see.
;;
+;; Consult the package README for examples and more help with configuring
+;; backends.
+;;
;; Usage:
;;
;; gptel can be used in any buffer or in a dedicated chat buffer. The