LLM

This filter sends the received Message to a Large Language Model (LLM) and propagates the response. It uses any-llm-go to support multiple LLM providers through a unified interface.

Both the prompt and system_prompt parameters support Golang templates, allowing you to dynamically compose prompts using the main field and any extra fields of the incoming Message.

Supported Providers

OpenAI, Anthropic, Ollama, DeepSeek, Groq, Mistral, Gemini, llama.cpp, Llamafile.

Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| model | STRING | | (required) The model name to use (e.g. "gpt-4", "claude-3-opus-20240229", "llama3") |
| prompt | STRING | | (required) The user prompt sent to the LLM. Supports Golang templates with Message fields |
| system_prompt | STRING | empty | An optional system prompt. Supports Golang templates with Message fields |
| provider | STRING | "openai" | The LLM provider to use: openai, anthropic, ollama, deepseek, groq, mistral, gemini, llamacpp, llamafile |
| api_key | STRING | empty | API key for the provider (required for cloud providers like OpenAI, Anthropic, etc.) |
| api_url | STRING | empty | Custom base URL for the API endpoint (useful for proxies or self-hosted instances) |
| target | STRING | "main" | The field of the Message where the LLM response will be stored. Use "main" to replace the message content, or any other name to set an extra field |
| temperature | FLOAT | 0.7 | Sampling temperature for the model (higher values produce more random output) |
| max_tokens | INT | 1024 | Maximum number of tokens to generate in the response |
Basic usage:

... | llm(model="gpt-4", prompt="Summarize: {{ .main }}", api_key="sk-...", provider="openai") | ...
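Generation can be tuned with the sampling parameters; for example, a lower temperature and a tighter token budget for more deterministic, shorter output:

... | llm(model="gpt-4", prompt="Summarize: {{ .main }}", api_key="sk-...", provider="openai", temperature=0.2, max_tokens=256) | ...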

Output

The LLM response text is placed in the field specified by target (default: main). The following extra fields are set on the output Message:

| Extra Field | Description |
|---|---|
| llm_model | The model name returned by the provider |
| llm_prompt_tokens | Number of tokens in the prompt |
| llm_completion_tokens | Number of tokens in the completion |
| llm_total_tokens | Total tokens used (prompt + completion) |
| llm_raw_response | The full JSON response from the provider |

The Message is dropped if the LLM request fails or returns no response choices.

Examples

Using OpenAI to summarize text:

... | llm(model="gpt-4", prompt="Summarize the following text:\n{{ .main }}", system_prompt="You are a helpful assistant.", api_key="sk-...", provider="openai") | ...

Using a local Ollama instance:

... | llm(model="llama3", prompt="{{ .main }}", provider="ollama", api_url="http://localhost:11434") | ...

Using templates with extra fields and a custom target:

... | llm(model="gpt-4", prompt="Translate '{{ .main }}' to {{ .language }}", target="translation", api_key="sk-...", provider="openai") | ...
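Because target can store the response in an extra field, llm filters can also be chained, with a later template reading an earlier response. For example, summarizing and then critiquing the summary in one pipeline:

... | llm(model="gpt-4", prompt="Summarize: {{ .main }}", target="summary", api_key="sk-...", provider="openai") | llm(model="gpt-4", prompt="Critique this summary: {{ .summary }}", api_key="sk-...", provider="openai") | ...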