Generate Method
Generic method for generating text or structured output from LLMs and Agents.
Usage
generate(
x,
prompt,
temperature = NULL,
top_p = NULL,
max_tokens = NULL,
stop = NULL,
think = NULL,
output_schema = NULL,
verbosity = 1L,
...
)
Arguments
- x
An object of class LLM or Agent.
- prompt
Character: The prompt to pass to the model or agent.
- temperature
Optional numeric [0, 2]: Per-call sampling temperature.
- top_p
Optional numeric [0, 1]: Nucleus sampling cutoff.
- max_tokens
Optional integer [1, Inf): Maximum tokens to generate. For Anthropic, this overrides the config-level value (which is required); for Ollama, this maps to options.num_predict; for OpenAI-compatible backends, this maps to max_tokens.
- stop
Optional character: Stop sequence(s). Mapped to stop_sequences on Anthropic and options.stop on Ollama.
- think
Optional logical or character: Whether to enable model thinking (reasoning trace) for this call. Character values target gpt-oss-style local models.
- output_schema
Optional Schema: Output schema to enforce on this call's response. If omitted, the object's default schema (if any) is used. See Examples for a per-call sketch.
- verbosity
Integer: Verbosity level.
- ...
Additional backend-specific per-call arguments. See Details.
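To make the per-call overrides above concrete, a minimal sketch (assumes an agent constructed as in the Examples below; all three arguments are documented parameters of generate()):

# Cap the response length, add a stop sequence, and enable a reasoning trace
generate(agent, "List three colors.", max_tokens = 64, stop = "\n\n", think = TRUE)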
Details
The system prompt is set once at agent (or LLM) construction time and is not overridable per call. Construct a new agent if you need a different system prompt.
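For example, a minimal sketch of constructing separate agents for separate system prompts (the system_prompt argument name here is an assumption; check create_agent() for the actual constructor interface):

# Hypothetical system_prompt argument; the real name may differ
formal <- create_agent(
  config_Ollama(model_name = "gemma4:e4b"),
  system_prompt = "Answer formally."
)
casual <- create_agent(
  config_Ollama(model_name = "gemma4:e4b"),
  system_prompt = "Answer casually."
)
# Pick the agent whose system prompt you need; generate() cannot swap it
generate(formal, "What is R?")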
Backend-specific extra arguments accepted via ...:
- Ollama: top_k (integer), seed (integer)
- OpenAI: seed (integer)
- Anthropic: top_k (integer)
Any argument set to NULL (the default) falls back to the value baked into the
underlying LLMConfig at construction time.
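For instance, a sketch reusing the agent from the Examples (top_k and seed are the documented Ollama passthrough arguments):

# Explicit values override the config; unset (NULL) arguments keep config defaults
generate(agent, "Summarize R in one line.", temperature = 0.1)
# Backend-specific extras travel through ... (here Ollama's top_k and seed)
generate(agent, "Summarize R in one line.", top_k = 40, seed = 42)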
Examples
# Requires a running Ollama server and the gemma4:e4b model
if (FALSE) { # \dontrun{
agent <- create_agent(
config_Ollama(
model_name = "gemma4:e4b",
temperature = 0.2
)
)
generate(agent, "What is your name?", temperature = 0.7)
} # }
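A per-call structured-output sketch (the Schema() constructor and its fields shown here are hypothetical; substitute the package's actual Schema builder):

if (FALSE) { # \dontrun{
# Hypothetical Schema constructor and fields
person <- Schema(name = "character", age = "integer")
generate(agent, "Extract the person's name and age.", output_schema = person)
} # }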