From: ELPA Syncer
Subject: [elpa] externals/llm 326d533cfb 1/2: Fix ollama mentioned instead of llama.cpp
Date: Mon, 26 Feb 2024 00:58:12 -0500 (EST)

branch: externals/llm
commit 326d533cfb3fedf395e50a39742e66a17b038472
Author: SmallAndSoft <45131567+SmallAndSoft@users.noreply.github.com>
Commit: GitHub <noreply@github.com>

    Fix ollama mentioned instead of llama.cpp
---
 README.org | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.org b/README.org
index ce4768f888..9f0a98a4e7 100644
--- a/README.org
+++ b/README.org
@@ -69,7 +69,7 @@ In addition to the provider, which you may want multiple of (for example, to cha
 Llama.cpp does not have native chat interfaces, so is not as good at multi-round conversations as other solutions such as Ollama.  It will perform better at single-responses.  However, it does support Open AI's request format for models that are good at conversation.  If you are using one of those models, you should probably use the Open AI Compatible provider instead to connect to Llama CPP.
 
 The parameters default to optional values, so mostly users should just be creating a model with ~(make-llm-llamacpp)~.  The parameters are:
-- ~:scheme~: The scheme (http/https) for the connection to ollama.  This default to "http".
+- ~:scheme~: The scheme (http/https) for the connection to llama.cpp.  This default to "http".
 - ~:host~: The host that llama.cpp server is run on.  This is optional and will default to localhost.
 - ~:port~: The port that llama.cpp server is run on.  This is optional and will default to 8080, the default llama.cpp port.
 ** Fake
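
For context, the parameters listed in the patched section can be combined as follows. This is a minimal sketch, assuming the llm-llamacpp library from the llm package is loaded; the values shown are just the defaults named in the README text above (scheme "http", host localhost, port 8080), and the variable name is purely illustrative:

  ;; Minimal sketch: create a llama.cpp provider using the defaults
  ;; described in the README section patched above.
  (require 'llm-llamacpp)                ; provider implementation from the llm package
  (defvar my-llamacpp-provider           ; illustrative name, not from the package
    (make-llm-llamacpp :scheme "http" :host "localhost" :port 8080))

Since all three parameters are optional, ~(make-llm-llamacpp)~ alone gives the same result when the server runs locally on the default port.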


