This function has been superseded by embed() and will be deprecated in a future release. See embed() for more details.

Usage

embeddings(
  model,
  prompt,
  normalize = TRUE,
  keep_alive = "5m",
  endpoint = "/api/embeddings",
  host = NULL,
  ...
)

Arguments

model

A character string of the model name such as "llama3".

prompt

A character string of the prompt that you want to get the vector embedding for.

normalize

Whether to normalize the embedding to unit length (Euclidean norm 1). Default is TRUE.

keep_alive

How long the model stays loaded in memory after the request. Default is "5m" (5 minutes).

endpoint

The endpoint to get the vector embedding. Default is "/api/embeddings".

host

The base URL to use. Default is NULL, which uses Ollama's default base URL.

...

Additional options to pass to the model.

Value

A numeric vector containing the embedding of the prompt.
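When normalize = TRUE, the returned vector has unit Euclidean length, so the dot product of two embeddings gives their cosine similarity directly. A minimal sketch of what unit-length normalization means, in base R (an illustration only, not the package's internal code; no Ollama server required):

```r
# Illustration of unit-length normalization, as applied when normalize = TRUE.
# normalize_vec() is a hypothetical helper for this sketch, not an ollamar function.
normalize_vec <- function(v) v / sqrt(sum(v^2))

v <- c(3, 4)
u <- normalize_vec(v)
sqrt(sum(u^2))  # 1: the normalized vector has Euclidean length 1
```

With two unit-length embeddings u1 and u2, sum(u1 * u2) is their cosine similarity, which is why normalized embeddings are convenient for semantic comparison.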

References

API documentation

Examples

if (FALSE) { # test_connection()$status_code == 200
embeddings("nomic-embed-text:latest", "The quick brown fox jumps over the lazy dog.")
# pass model options to the model
embeddings("nomic-embed-text:latest", "Hello!", temperature = 0.1, num_predict = 3)
}