
Supersedes the embeddings() function.

Usage

embed(
  model,
  input,
  truncate = TRUE,
  normalize = TRUE,
  keep_alive = "5m",
  endpoint = "/api/embed",
  host = NULL,
  ...
)

Arguments

model

A character string of the model name such as "llama3".

input

A character vector of one or more inputs to get embeddings for.

truncate

If TRUE, truncates the end of each input to fit within the model's context length. If FALSE, an error is returned when the context length is exceeded. Defaults to TRUE.

normalize

Whether to normalize each embedding vector to length 1. Default is TRUE.

keep_alive

The time to keep the connection alive. Default is "5m" (5 minutes).

endpoint

The endpoint to get the vector embedding. Default is "/api/embed".

host

The base URL to use. Default is NULL, which uses Ollama's default base URL.

...

Additional options to pass to the model.

Value

A numeric matrix of embeddings, with one column per input.
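
Because normalize is TRUE by default, each column of the returned matrix is a unit vector, so pairwise cosine similarities can be read directly from a cross product. A minimal sketch, assuming ollamar is attached, a local Ollama server is reachable, and the nomic-embed-text model has been pulled:

library(ollamar)
emb <- embed("nomic-embed-text:latest", c("Good bye", "Bye", "See you."))
dim(emb)        # embedding dimension x number of inputs (one column per input)
crossprod(emb)  # 3 x 3 matrix of cosine similarities, since columns have length 1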

References

API documentation

Examples

if (FALSE) { # test_connection()$status_code == 200
embed("nomic-embed-text:latest", "The quick brown fox jumps over the lazy dog.")
# pass multiple inputs
embed("nomic-embed-text:latest", c("Good bye", "Bye", "See you."))
# pass model options to the model
embed("nomic-embed-text:latest", "Hello!", temperature = 0.1, num_predict = 3)
}
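
If the Ollama server does not run at the default base URL, the host argument documented above can point embed() at it, and keep_alive can be adjusted in the same call. A hedged sketch; the URL below is a placeholder, not a value from this page:

embed(
  "nomic-embed-text:latest",
  "The quick brown fox jumps over the lazy dog.",
  host = "http://127.0.0.1:11434",  # placeholder; replace with your server's base URL
  keep_alive = "10m"                # keep the model loaded for 10 minutes
)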