gnu-emacs-sources

[GNU ELPA] Llm version 0.23.0


From: ELPA update
Subject: [GNU ELPA] Llm version 0.23.0
Date: Sat, 01 Feb 2025 05:03:38 -0500

Version 0.23.0 of package Llm has just been released in GNU ELPA.
You can now find it in M-x list-packages RET.

Llm describes itself as:

  ===================================
  Interface to pluggable llm backends
  ===================================

More at https://elpa.gnu.org/packages/llm.html

## Summary:

                          ━━━━━━━━━━━━━━━━━━━━━━━
                           LLM PACKAGE FOR EMACS
                          ━━━━━━━━━━━━━━━━━━━━━━━





  1 Introduction
  ══════════════

    This library provides an interface for interacting with Large Language
    Models (LLMs). It allows Elisp code to use LLMs while giving end-users
    the choice of their preferred provider. This matters because many
    high-quality models exist: some require paid API access, while others
    run locally and free of charge, at somewhat lower quality.
    Applications built on this library work the same way regardless of
    whether the user runs a local LLM or pays for API access.
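
    As a rough sketch of how an application might use this
    provider-agnostic interface (here assuming a local Ollama model; any
    provider struct can be substituted without changing the calls):

    ┌────
    │ ;; Minimal provider-agnostic usage sketch.  The application asks the
    │ ;; user for a provider once; everything after that is identical for
    │ ;; any backend (Ollama, Open AI, Claude, Gemini, ...).
    │ (require 'llm)
    │ (require 'llm-ollama)
    │
    │ ;; A locally installed, free model -- swap in a paid provider such as
    │ ;; (make-llm-openai :key "sk-...") without changing the calls below.
    │ (defvar my-llm-provider (make-llm-ollama :chat-model "llama3"))
    │
    │ ;; Synchronous chat; `llm-chat-async' and `llm-chat-streaming'
    │ ;; variants also exist.
    │ (llm-chat my-llm-provider
    │           (llm-make-chat-prompt "What is the capital of France?"))
    └────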

## Recent NEWS:

1 Version 0.23.0
════════════════

  • Add support for GitHub's GitHub Models
  • Accept lists as values for non-standard params
  • Add Deepseek R1 model
  • Show the chat model as the name for Open AI compatible models (via
    [@whhone])


[@whhone] <https://github.com/whhone>


2 Version 0.22.0
════════════════

  • Rename `llm-tool-function' to `llm-tool'; change
    `make-llm-tool-function' to take any arguments.


3 Version 0.21.0
════════════════

  • Incompatible change to function calling, which is now tool use,
    affecting arguments and methods.
  • Support image understanding in Claude
  • Support streaming tool use in Claude
  • Add `llm-models-add' as a convenience method to add a model to the
    known list.
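
  A hypothetical use of `llm-models-add' might look like the following;
  the keyword names mirror the `llm-model' struct slots, but they are an
  assumption here and should be checked against llm-models.el in your
  installed version:

  ┌────
  │ ;; Register a model the library doesn't know about, so capability and
  │ ;; context-length lookups work for it (slot names are assumptions).
  │ (require 'llm-models)
  │ (llm-models-add :name "My Fine-tune"
  │                 :symbol 'my-fine-tune
  │                 :capabilities '(generation tool-use)
  │                 :context-length 8192
  │                 :regex "my-fine-tune")
  └────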


4 Version 0.20.0
════════════════

  • Add ability to output according to a JSON spec.
  • Add Gemini 2.0 Flash, Gemini 2.0 Flash Thinking, and Llama 3.3 and
    QwQ models.
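
  A sketch of output conforming to a JSON spec; the `:response-format'
  keyword and the plist schema shape shown here are assumptions, so
  consult the llm README for the exact spec syntax in your version:

  ┌────
  │ ;; Ask the provider to return JSON matching a declared structure
  │ ;; (keyword name and schema syntax assumed, not confirmed).
  │ (llm-chat my-llm-provider
  │           (llm-make-chat-prompt
  │            "Name three text editors."
  │            :response-format '(:type object
  │                               :properties (:editors (:type array
  │                                                      :items (:type string)))
  │                               :required (editors))))
  └────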


5 Version 0.19.1
════════════════

  • Fix Open AI context length sizes, which are mostly smaller than
    advertised.


6 Version 0.19.0
════════════════

  • Add JSON mode for most providers, with the exception of Claude.
  • Add ability for keys to be functions, thanks to Daniel Mendler.
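
  A key given as a zero-argument function is looked up lazily, which
  keeps secrets out of init files; for example, via the built-in
  auth-source library (a sketch, assuming the standard `:key' slot):

  ┌────
  │ ;; Resolve the API key at call time from ~/.authinfo.gpg rather than
  │ ;; hard-coding it.
  │ (require 'llm-openai)
  │ (require 'auth-source)
  │ (defvar my-openai-provider
  │   (make-llm-openai
  │    :key (lambda ()
  │           (auth-source-pick-first-password :host "api.openai.com"))))
  └────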


7 Version 0.18.1
════════════════

  • Fix extra argument in `llm-batch-embeddings-async'.


8 Version 0.18.0
════════════════

  • Add media handling, for images, videos, and audio.
  • Add batch embeddings capability (currently for just Open AI and
    Ollama).
  • Add Microsoft Azure's Open AI.
  • Remove testing and other development files from ELPA packaging.
  • Remove vendored `plz-event-source' and `plz-media-type', and add
    requirements.
  • Update list of Ollama models for function calling.
  • Centralize model list so things like Vertex and Open AI compatible
    libraries can have more accurate context lengths and capabilities.
  • Update default Gemini chat model to Gemini 1.5 Pro.
  • Update default Claude chat model to latest Sonnet version.
  • Fix issue in some Open AI compatible providers with empty function
    call arguments.
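
  Media handling lets a prompt mix text and media parts. The constructor
  names below follow the llm README but are assumptions; verify them
  against your installed version:

  ┌────
  │ ;; Send an image alongside a text question (names assumed).
  │ (llm-chat my-llm-provider
  │           (llm-make-chat-prompt
  │            (llm-make-multipart
  │             "What is in this image?"
  │             (make-llm-media
  │              :mime-type "image/png"
  │              :data (with-temp-buffer
  │                      (set-buffer-multibyte nil)
  │                      (insert-file-contents-literally "~/screenshot.png")
  │                      (buffer-string))))))
  └────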


9 Version 0.17.4
════════════════

  • Fix problem with Open AI's `llm-chat-token-limit'.
  • Fix Open AI and Gemini's parallel function calling.
  • Add variable `llm-prompt-default-max-tokens' to put a cap on number
    of tokens regardless of model size.
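
  The cap is a simple user option; a sketch of setting it so prompt
  filling never assumes more than a fixed budget, even for large-context
  models:

  ┌────
  │ ;; Limit prompt filling to 1000 tokens regardless of the model's
  │ ;; advertised context window.
  │ (setq llm-prompt-default-max-tokens 1000)
  └────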


10 Version 0.17.3
═════════════════

  • More fixes with Claude and Ollama function calling conversation,
    thanks to Paul Nelson.
  • Make `llm-chat-streaming-to-point' more efficient, just inserting
    new text, thanks to Paul Nelson.
  • Don't output streaming information when `llm-debug' is true, since
    it tended to be overwhelming.


11 Version 0.17.2
═════════════════

  • Fix compiled functions not being evaluated in `llm-prompt'.
  • Use Ollama's new `embed' API instead of the obsolete one.
  • Fix Claude function calling conversations.
  • Fix issue in Open AI streaming function calling.
  • Update Open AI and Claude default chat models to the latest models.


12 Version 0.17.1
═════════════════

  • Support Ollama function calling, for models which support it.
  • Make sure every model, even unknown ones, returns some value for
    `llm-chat-token-limit'.
  • Add token count for llama3.1 model.
  • Make `llm-capabilities' work model-by-model for embeddings and
    functions.


13 Version 0.17.0
═════════════════

  • Introduced `llm-prompt' for prompt management and creation from
    generators.
  • Removed Gemini and Vertex token counting, because `llm-prompt'
    counts tokens frequently, and a quick estimate is better than a more
    expensive, more accurate count.
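
  A sketch of defining and filling a template with `llm-prompt'; the
  {{variable}} syntax and the `llm-prompt-fill' calling convention
  follow the llm README, but treat the details as assumptions:

  ┌────
  │ ;; Define a reusable prompt template, then fill it with a variable
  │ ;; and send it to the provider.
  │ (require 'llm-prompt)
  │
  │ (llm-defprompt my-summarize-prompt
  │   "Summarize the following text in one sentence: {{text}}")
  │
  │ (llm-chat my-llm-provider
  │           (llm-make-chat-prompt
  │            (llm-prompt-fill 'my-summarize-prompt my-llm-provider
  │                             :text (buffer-string))))
  └────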


14 Version 0.16.2
═════════════════

  • Fix Open AI's gpt4-o context length, which is lower for most paying
    users than the max.


15 Version 0.16.1
═════════════════

  • Add support for HTTP / HTTPS proxies.


16 Version 0.16.0
═════════════════

  • Add "non-standard params" to set per-provider options.
  • Add default parameters for chat providers.
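
  Non-standard params carry provider-specific options that the portable
  interface doesn't cover; a sketch, assuming the `:non-standard-params'
  keyword of `llm-make-chat-prompt' (option names vary by provider):

  ┌────
  │ ;; Pass a provider-specific "temperature" option alongside a portable
  │ ;; prompt; unknown options are forwarded to the backend as-is.
  │ (llm-chat my-llm-provider
  │           (llm-make-chat-prompt
  │            "Write a haiku about Emacs."
  │            :non-standard-params '(("temperature" . 0.2))))
  └────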


17 Version 0.15.0
═════════════════

  • Move to `plz' backend, which uses `curl'.  This helps move this
    package to a stronger foundation backed by parsing to spec.  Thanks
    to Roman Scherer for contributing the `plz' extensions that enable
    this, which are currently bundled in this package but will
    eventually become their own separate package.
  • Add model context information for Open AI's GPT 4-o.
  • Add model context information for Gemini's 1.5 models.


18 Version 0.14.2
═════════════════

  • Fix mangled copyright line (needed to get ELPA version unstuck).
  • Fix Vertex response handling bug.
  …  …
