Apply an LLM over a vector of prompts
llmapply() is the lapply-style entry point for running a single prompt against an LLM
repeatedly over a vector of inputs. Pass either a model name (in which case an LLM is
built on the fly using backend, system_prompt, and output_schema) or a pre-built LLM
object.
Usage
llmapply(
  x,
  model_or_llm,
  backend = c("ollama", "openai", "anthropic"),
  system_prompt = SYSTEM_PROMPT_DEFAULT,
  output_schema = NULL,
  verbosity = 1L,
  extract_responses = TRUE,
  ...
)

Arguments
- x
  Character or list: Values to iterate over. Each element forms the user prompt for one call to the LLM.

- model_or_llm
  Character or LLM: Either the name of a model (a string) or a pre-built LLM object (for example from create_Ollama, create_OpenAI, or create_Anthropic).

- backend
  Character {"ollama", "openai", "anthropic"}: Backend to use when model_or_llm is a string. Ignored when model_or_llm is an LLM object.

- system_prompt
  Character: System prompt to use when building the LLM from a model name. Ignored when model_or_llm is an LLM object.

- output_schema
  Optional Schema: Output schema to enforce, created with schema. When model_or_llm is a string, this is baked into the built LLM. When model_or_llm is a pre-built LLM, supplying a schema here is a conflict and will error.

- verbosity
  Integer [0, Inf): Verbosity level. The per-call verbosity is verbosity - 1L.

- extract_responses
  Logical: If TRUE, return a character vector of assistant responses (with NA_character_ for missing assistant content). If FALSE, return the raw list of Message objects from each call.

- ...
  Additional per-call arguments forwarded to generate (e.g. temperature, top_p, max_tokens, stop, think, top_k, seed).
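A minimal sketch of the two calling styles described above. The model name "llama3.2" and the prompt strings are illustrative, and the sketch assumes create_Ollama accepts a model argument as suggested by its name:

```r
# Style 1: pass a model name; an LLM is built on the fly
# using backend, system_prompt, and output_schema.
res <- llmapply(
  c("Summarize: the cat sat on the mat.",
    "Summarize: it rained all day."),
  model_or_llm = "llama3.2",   # illustrative model name
  backend      = "ollama"
)

# Style 2: pass a pre-built LLM object; backend, system_prompt,
# and output_schema are ignored (output_schema would error).
llm <- create_Ollama(model = "llama3.2")
res <- llmapply(c("prompt one", "prompt two"), model_or_llm = llm)
```

Style 2 is the natural choice when the same LLM object is reused across several llmapply calls, since the backend configuration is set up once.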
Value
If extract_responses = TRUE, a character vector of the same length as x. Otherwise, a
list of Message objects, one per element of x.
Details
Per-call overrides such as temperature, top_p, max_tokens, stop, think, plus
backend-specific options like top_k or seed, are forwarded via ... to generate. Vectors
passed via ... are not yet recycled across x — they are forwarded as-is to each call.
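The forwarding behaviour can be sketched as follows; the option values and model name are illustrative:

```r
# Per-call options go through `...` unchanged to generate().
# Note: these values are NOT recycled across x -- every element
# of x is called with the same temperature and seed.
msgs <- llmapply(
  x                 = c("Name a color.", "Name a fruit."),
  model_or_llm      = "llama3.2",  # illustrative model name
  temperature       = 0,
  seed              = 42,
  extract_responses = FALSE        # return raw Message objects
)
```

With extract_responses = FALSE the result keeps the full Message objects, which is useful when the caller needs more than the assistant text (for example roles or structured content).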