Generate a response for a given prompt
Arguments
- model
A character string of the model name such as "llama3".
- prompt
A character string of the prompt, such as "The sky is...".
- suffix
A character string to insert after the model response. Default is "".
- images
A path to an image file to include in the prompt. Default is "".
- system
A character string of the system prompt (overrides what is defined in the Modelfile). Default is "".
- template
A character string of the prompt template (overrides what is defined in the Modelfile). Default is "".
- context
A list of context values from a previous response, used to carry the previous conversation into the prompt (see the examples below). Default is an empty list.
- stream
Enable response streaming. Default is FALSE.
- raw
If TRUE, no formatting is applied to the prompt. Use the raw parameter when you are supplying a fully templated prompt in your request to the API. Default is FALSE.
- keep_alive
How long the model stays loaded in memory after the request. Default is "5m" (5 minutes).
- output
A character string specifying the output format. Default is "resp". Options are "resp", "jsonlist", "raw", "df", "text", and "req" (httr2_request object).
- endpoint
The endpoint to generate the completion. Default is "/api/generate".
- host
The base URL to use. Default is NULL, which uses Ollama's default base URL.
- ...
Additional model parameters to pass with the request, such as temperature.
Examples
if (FALSE) { # test_connection()$status_code == 200
# text prompt
generate("llama3", "The sky is...", stream = FALSE, output = "df")
# stream and increase temperature
generate("llama3", "The sky is...", stream = TRUE, output = "text", temperature = 2.0)
# image prompt
# something like "image1.png"
image_path <- file.path(system.file("extdata", package = "ollamar"), "image1.png")
# use vision or multimodal model such as https://ollama.com/benzie/llava-phi-3
generate("benzie/llava-phi-3:latest", "What is in the image?", images = image_path, output = "text")
}