[elpa] externals/llm 53b5ebcbdb 9/9: Add new provider GPT4All


From: ELPA Syncer
Subject: [elpa] externals/llm 53b5ebcbdb 9/9: Add new provider GPT4All
Date: Thu, 26 Oct 2023 00:58:44 -0400 (EDT)

branch: externals/llm
commit 53b5ebcbdbc8837c3401bf37fc53d37fa94c4230
Author: Andrew Hyatt <ahyatt@gmail.com>
Commit: Andrew Hyatt <ahyatt@gmail.com>

    Add new provider GPT4All
---
 NEWS.org       |  2 ++
 README.org     | 10 +++++--
 llm-gpt4all.el | 86 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 llm-openai.el  | 23 ++++++++--------
 llm-tester.el  |  2 +-
 5 files changed, 109 insertions(+), 14 deletions(-)

diff --git a/NEWS.org b/NEWS.org
index 096bbaa1ed..6ff8d1322f 100644
--- a/NEWS.org
+++ b/NEWS.org
@@ -1,7 +1,9 @@
 * Version 0.5
 - Fixes for conversation context storage, requiring clients to handle ongoing conversations slightly differently.
+- Fixes for proper handling of HTTP error codes in synchronous requests.
 - =llm-ollama= can now be configured with a different hostname.
 - Callbacks now always attempt to run in the client's original buffer.
+- Add provider =llm-gpt4all=.
 * Version 0.4
 - Add helper function ~llm-chat-streaming-to-point~.
 - Add provider =llm-ollama=.
diff --git a/README.org b/README.org
index 4ccfc7d3a1..674d0e0e13 100644
--- a/README.org
+++ b/README.org
@@ -35,9 +35,15 @@ In addition to the provider, which you may want multiple of (for example, to cha
 - ~llm-vertex-gcloud-region~: The gcloud region to use.  It's good to set this to a region near where you are for best latency.  Defaults to "us-central1".
 ** Ollama
 [[https://ollama.ai/][Ollama]] is a way to run large language models locally. There are [[https://ollama.ai/library][many different models]] you can use with it. You set it up with the following parameters:
-- ~:port~: The localhost port that ollama is run on.  This is optional and will default to the default ollama port.
-- ~:chat-mode~: The model name to use for chat.  This is not optional for chat use, since there is no default.
+- ~:host~: The host that ollama is run on.  This is optional and will default to localhost.
+- ~:port~: The port that ollama is run on.  This is optional and will default to the default ollama port.
+- ~:chat-model~: The model name to use for chat.  This is not optional for chat use, since there is no default.
 - ~:embedding-model~: The model name to use for embeddings.  This is not optional for embedding use, since there is no default.
+** GPT4All
+[[https://gpt4all.io/index.html][GPT4All]] is a way to run large language models locally.  To use it with the =llm= package, you must click "Enable API Server" in the settings.  It does not offer embeddings or streaming functionality, though, so Ollama might be a better fit for users who are not already set up with local models.  You can set it up with the following parameters:
+- ~:host~: The host that GPT4All is run on.  This is optional and will default to localhost.
+- ~:port~: The port that GPT4All is run on.  This is optional and will default to the default GPT4All port.
+- ~:chat-model~: The model name to use for chat.  This is not optional for chat use, since there is no default.
 ** Fake
 This is a client that makes no calls; it is just there for testing and debugging.  Mostly this is of use to programmatic clients of the llm package, but end users can also use it to understand what will be sent to the LLMs.  It has the following parameters:
 - ~:output-to-buffer~: if non-nil, the buffer or buffer name to append the request sent to the LLM to.
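
For reference, a minimal sketch of configuring the provider described in the
README section above.  =make-llm-gpt4all= is the default constructor generated
by the =cl-defstruct= in llm-gpt4all.el below; the model name here is only a
placeholder, not a recommendation.

;; Sketch: create a GPT4All provider.  The keyword arguments mirror the
;; README parameters; only :chat-model is required.
(require 'llm-gpt4all)
(defvar my-gpt4all-provider
  (make-llm-gpt4all :chat-model "some-local-model" ; placeholder model name
                    :host "localhost"              ; optional, this is the default
                    :port 4891))                   ; optional, GPT4All's default API port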
diff --git a/llm-gpt4all.el b/llm-gpt4all.el
new file mode 100644
index 0000000000..3b0f811085
--- /dev/null
+++ b/llm-gpt4all.el
@@ -0,0 +1,86 @@
+;;; llm-gpt4all.el --- llm module for integrating with GPT4All -*- lexical-binding: t; -*-
+
+;; Copyright (c) 2023  Free Software Foundation, Inc.
+
+;; Author: Andrew Hyatt <ahyatt@gmail.com>
+;; Homepage: https://github.com/ahyatt/llm
+;; SPDX-License-Identifier: GPL-3.0-or-later
+;;
+;; This program is free software; you can redistribute it and/or
+;; modify it under the terms of the GNU General Public License as
+;; published by the Free Software Foundation; either version 3 of the
+;; License, or (at your option) any later version.
+;;
+;; This program is distributed in the hope that it will be useful, but
+;; WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+;; General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GNU Emacs.  If not, see <http://www.gnu.org/licenses/>.
+
+;;; Commentary:
+;; This file implements the llm functionality defined in llm.el, for GPT4All.
+;; The GPT4All API is based on Open AI's, so we depend on the llm-openai module
+;; here.
+;;
+;; GPT4All does not support embeddings.
+;;
+;; Users using GPT4All must enable the API Server in their GPT4All settings for
+;; this to work.
+
+;;; Code:
+(require 'llm)
+(require 'llm-request)
+(require 'llm-openai)
+
+(cl-defstruct llm-gpt4all
+  "A structure for holding information needed by GPT4All.
+
+CHAT-MODEL is the model to use for chat queries. It must be set.
+
+HOST is the host to connect to.  If unset, it will default to localhost.
+
+PORT is the port to connect to (an integer). If unset, it will
+default to the default GPT4All port."
+  chat-model host port)
+
+(defun llm-gpt4all--url (provider path)
+  "Return the URL for PATH, given the settings in PROVIDER."
+  (format "http://%s:%d/v1/%s" (or (llm-gpt4all-host provider) "localhost")
+          (or (llm-gpt4all-port provider) 4891) path))
+
+(cl-defmethod llm-chat ((provider llm-gpt4all) prompt)
+  (let ((response (llm-openai--handle-response
+                   (llm-request-sync (llm-gpt4all--url provider "chat/completions")
+                                     :data (llm-openai--chat-request (llm-gpt4all-chat-model provider) prompt))
+                   #'llm-openai--extract-chat-response)))
+    (setf (llm-chat-prompt-interactions prompt)
+          (append (llm-chat-prompt-interactions prompt)
+                  (list (make-llm-chat-prompt-interaction :role 'assistant :content response))))
+    response))
+
+(cl-defmethod llm-chat-async ((provider llm-gpt4all) prompt response-callback error-callback)
+  (let ((buf (current-buffer)))
+    (llm-request-async (llm-gpt4all--url provider "chat/completions")
+                       :data (llm-openai--chat-request (llm-gpt4all-chat-model provider) prompt)
+      :on-success (lambda (data)
+                    (let ((response (llm-openai--extract-chat-response data)))
+                      (setf (llm-chat-prompt-interactions prompt)
+                            (append (llm-chat-prompt-interactions prompt)
+                                    (list (make-llm-chat-prompt-interaction :role 'assistant :content response))))
+                      (llm-request-callback-in-buffer buf response-callback response)))
+      :on-error (lambda (_ data)
+                  (let ((errdata (cdr (assoc 'error data))))
+                    (llm-request-callback-in-buffer buf error-callback 'error
+                             (format "Problem calling GPT4All: %s message: %s"
+                                     (cdr (assoc 'type errdata))
+                                     (cdr (assoc 'message errdata)))))))))
+
+(cl-defmethod llm-chat-streaming ((provider llm-gpt4all) prompt _partial-callback response-callback error-callback)
+  ;; GPT4All does not implement streaming, so instead we just use the async method.
+  (llm-chat-async provider prompt response-callback error-callback))
+
+(provide 'llm-gpt4all)
+
+;;; llm-gpt4all.el ends here
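
For completeness, a rough usage sketch of the methods defined above.  It
assumes the hypothetical =my-gpt4all-provider= from the earlier sketch and the
default constructor of the ~llm-chat-prompt~ struct from llm.el; treat it as
illustrative only.

;; Sketch: send a chat request to a locally running GPT4All API server.
(let ((prompt (make-llm-chat-prompt
               :interactions (list (make-llm-chat-prompt-interaction
                                    :role 'user
                                    :content "Say hello in one sentence.")))))
  ;; Synchronous call; blocks until the local server responds.  The assistant
  ;; reply is also appended to the prompt's interactions, as in llm-chat above.
  (message "GPT4All: %s" (llm-chat my-gpt4all-provider prompt))
  ;; Asynchronous variant; callbacks are run in the buffer current at call time.
  (llm-chat-async my-gpt4all-provider prompt
                  (lambda (response) (message "GPT4All (async): %s" response))
                  (lambda (_type msg) (message "GPT4All error: %s" msg))))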
diff --git a/llm-openai.el b/llm-openai.el
index 45b53e0e38..679f6e783e 100644
--- a/llm-openai.el
+++ b/llm-openai.el
@@ -55,11 +55,11 @@ will use a reasonable default."
   (ignore provider)
   (cons "Open AI" "https://openai.com/policies/terms-of-use"))
 
-(defun llm-openai--embedding-request (provider string)
+(defun llm-openai--embedding-request (model string)
   "Return the request to the server for the embedding of STRING.
-PROVIDER is the llm-openai provider."
+MODEL is the embedding model to use, or nil to use the default."
   `(("input" . ,string)
-    ("model" . ,(or (llm-openai-embedding-model provider) "text-embedding-ada-002"))))
+    ("model" . ,(or model "text-embedding-ada-002"))))
 
 (defun llm-openai--embedding-extract-response (response)
   "Return the embedding from the server RESPONSE."
@@ -84,7 +84,7 @@ PROVIDER is the llm-openai provider."
   (let ((buf (current-buffer)))
     (llm-request-async "https://api.openai.com/v1/embeddings"
                      :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
-                     :data (llm-openai--embedding-request provider string)
+                     :data (llm-openai--embedding-request (llm-openai-embedding-model provider) string)
                      :on-success (lambda (data)
                                    (llm-request-callback-in-buffer
                                     buf vector-callback (llm-openai--embedding-extract-response data)))
@@ -99,12 +99,12 @@ PROVIDER is the llm-openai provider."
   (llm-openai--handle-response
   (llm-request-sync "https://api.openai.com/v1/embeddings"
                :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
-               :data (llm-openai--embedding-request provider string))
+               :data (llm-openai--embedding-request (llm-openai-embedding-model provider) string))
    #'llm-openai--embedding-extract-response))
 
-(defun llm-openai--chat-request (provider prompt &optional return-json-spec streaming)
+(defun llm-openai--chat-request (model prompt &optional return-json-spec streaming)
   "From PROMPT, create the chat request data to send.
-PROVIDER is the llm-openai provider to use.
+MODEL is the model name to use.
 RETURN-JSON-SPEC is the optional specification for the JSON to return.
 STREAMING if non-nil, turn on response streaming."
   (let (request-alist system-prompt)
@@ -136,7 +136,7 @@ STREAMING if non-nil, turn on response streaming."
                                      ("content" . ,(string-trim (llm-chat-prompt-interaction-content p)))))
                                   (llm-chat-prompt-interactions prompt)))
           request-alist)
-    (push `("model" . ,(or (llm-openai-chat-model provider) "gpt-3.5-turbo-0613")) request-alist)
+    (push `("model" . ,(or model "gpt-3.5-turbo-0613")) request-alist)
     (when (llm-chat-prompt-temperature prompt)
      (push `("temperature" . ,(/ (llm-chat-prompt-temperature prompt) 2.0)) request-alist))
     (when (llm-chat-prompt-max-tokens prompt)
@@ -160,7 +160,7 @@ STREAMING if non-nil, turn on response streaming."
   (let ((buf (current-buffer)))
    (llm-request-async "https://api.openai.com/v1/chat/completions"
      :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
-      :data (llm-openai--chat-request provider prompt)
+      :data (llm-openai--chat-request (llm-openai-chat-model provider) prompt)
       :on-success (lambda (data)
                     (let ((response (llm-openai--extract-chat-response data)))
                       (setf (llm-chat-prompt-interactions prompt)
@@ -180,7 +180,8 @@ STREAMING if non-nil, turn on response streaming."
   (let ((response (llm-openai--handle-response
                    (llm-request-sync "https://api.openai.com/v1/chat/completions"
                                      :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
-                                     :data (llm-openai--chat-request provider prompt))
+                                     :data (llm-openai--chat-request (llm-openai-chat-model provider)
+                                                                     prompt))
                    #'llm-openai--extract-chat-response)))
     (setf (llm-chat-prompt-interactions prompt)
           (append (llm-chat-prompt-interactions prompt)
@@ -217,7 +218,7 @@ STREAMING if non-nil, turn on response streaming."
   (let ((buf (current-buffer)))
     (llm-request-async "https://api.openai.com/v1/chat/completions"
                        :headers `(("Authorization" . ,(format "Bearer %s" (llm-openai-key provider))))
-                       :data (llm-openai--chat-request provider prompt nil t)
+                       :data (llm-openai--chat-request (llm-openai-chat-model provider) prompt nil t)
                        :on-error (lambda (_ data)
                                    (let ((errdata (cdr (assoc 'error data))))
                                     (llm-request-callback-in-buffer buf error-callback 'error
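
The llm-openai.el changes above switch the request builders from taking a
provider struct to taking a bare model name, which is what lets llm-gpt4all.el
reuse them.  A small sketch of the resulting call pattern, using the default
model names from the code above:

;; Sketch: the request builders are now provider-agnostic and take a model
;; name directly; each returns the alist passed as :data to llm-request-*.
(require 'llm-openai)
(let ((prompt (make-llm-chat-prompt
               :interactions (list (make-llm-chat-prompt-interaction
                                    :role 'user :content "Hi")))))
  (llm-openai--chat-request "gpt-3.5-turbo-0613" prompt)
  (llm-openai--embedding-request "text-embedding-ada-002" "text to embed"))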
diff --git a/llm-tester.el b/llm-tester.el
index bae06426d1..d6e8d50e11 100644
--- a/llm-tester.el
+++ b/llm-tester.el
@@ -136,7 +136,7 @@
          (message "ERROR: Provider %s returned a response not in the original buffer" (type-of provider)))
        (message "SUCCESS: Provider %s provided a streamed response %s in %d parts, complete text is: %s" (type-of provider) streamed counter text)
        (if (= 0 counter)
-           (message "ERROR: Provider %s streaming request never happened!" (type-of provider))))
+           (message "WARNING: Provider %s returned no partial updates!" (type-of provider))))
      (lambda (type message)
        (unless (eq buf (current-buffer))
          (message "ERROR: Provider %s returned a response not in the original buffer" (type-of provider)))


