Generate embeddings for a single prompt - deprecated in favor of embed()
Source: R/ollama.R
embeddings.Rd
This function will be deprecated over time and has been superseded by embed(). See embed() for more details.
Usage
embeddings(
  model,
  prompt,
  normalize = TRUE,
  keep_alive = "5m",
  endpoint = "/api/embeddings",
  host = NULL,
  ...
)
Arguments
- model
A character string of the model name such as "llama3".
- prompt
A character string of the prompt that you want to get the vector embedding for.
- normalize
Whether to normalize the embedding vector to unit length. Default is TRUE.
- keep_alive
The duration to keep the model loaded in memory after the request. Default is "5m" (5 minutes).
- endpoint
The endpoint to get the vector embedding. Default is "/api/embeddings".
- host
The base URL to use. Default is NULL, which uses Ollama's default base URL.
- ...
Additional options to pass to the model.
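The call below is a minimal usage sketch. It assumes the ollamar package is installed, a local Ollama server is running, and the "llama3" model has been pulled; the resulting object is a numeric vector of the embedding.

```r
library(ollamar)

# Get the embedding vector for a single prompt (deprecated; prefer embed())
vec <- embeddings(
  model = "llama3",
  prompt = "The quick brown fox jumps over the lazy dog.",
  normalize = TRUE  # scale the vector to unit length
)

# Inspect the result: a plain numeric vector
length(vec)  # embedding dimensionality, depends on the model
head(vec)
```

For new code, embed() accepts a vector of inputs in a single call, which is the main reason it supersedes this function.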