# LLM
This filter sends the received Message to a Large Language Model (LLM) and propagates the response. It uses any-llm-go to support multiple LLM providers through a unified interface.
Both the prompt and system_prompt parameters support Go (text/template) templates, allowing you to compose prompts dynamically from the main field and any extra fields of the incoming Message.
## Supported Providers
OpenAI, Anthropic, Ollama, DeepSeek, Groq, Mistral, Gemini, llama.cpp, Llamafile.
## Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| model | STRING | (required) | The model name to use (e.g. "gpt-4", "claude-3-opus-20240229", "llama3") |
| prompt | STRING | (required) | The user prompt sent to the LLM. Supports Go templates with Message fields |
| system_prompt | STRING | empty | An optional system prompt. Supports Go templates with Message fields |
| provider | STRING | "openai" | The LLM provider to use: openai, anthropic, ollama, deepseek, groq, mistral, gemini, llamacpp, llamafile |
| api_key | STRING | empty | API key for the provider (required for cloud providers like OpenAI, Anthropic, etc.) |
| api_url | STRING | empty | Custom base URL for the API endpoint (useful for proxies or self-hosted instances) |
| target | STRING | "main" | The field of the Message where the LLM response will be stored. Use "main" to replace the message content, or any other name to set an extra field |
| temperature | FLOAT | 0.7 | Sampling temperature for the model (higher values produce more random output) |
| max_tokens | INT | 1024 | Maximum number of tokens to generate in the response |
```
... | llm(model="gpt-4", prompt="Summarize: {{ .main }}", api_key="sk-...", provider="openai") | ...
```

## Output
The LLM response text is placed in the field specified by target (default: main). The following extra fields are set on the output Message:
| Extra Field | Description |
|---|---|
| llm_model | The model name returned by the provider |
| llm_prompt_tokens | Number of tokens in the prompt |
| llm_completion_tokens | Number of tokens in the completion |
| llm_total_tokens | Total tokens used (prompt + completion) |
| llm_raw_response | The full JSON response from the provider |
The Message is dropped if the LLM request fails or returns no response choices.

## Examples
Using OpenAI to summarize text:
```
... | llm(model="gpt-4", prompt="Summarize the following text:\n{{ .main }}", system_prompt="You are a helpful assistant.", api_key="sk-...", provider="openai") | ...
```

Using a local Ollama instance:
```
... | llm(model="llama3", prompt="{{ .main }}", provider="ollama", api_url="http://localhost:11434") | ...
```

Using templates with extra fields and a custom target:
```
... | llm(model="gpt-4", prompt="Translate '{{ .main }}' to {{ .language }}", target="translation", api_key="sk-...", provider="openai") | ...
```