Experimental feature

Embeddings Index API

Embeddings allow you to search for what your documents are about. Use the Embeddings Index API to build LLM agents or to enable semantic search.

With Sanity, you can retrieve documents from a dataset and present them how you want. Documents in the search results can match a literal string or a regular expression.

But what if you need to search for documents based on what they are about?

Embeddings Index API lets you do just that.

After you create an index based on a dataset, we feed your documents to a small embeddings model from OpenAI every time you publish content. We then store the resulting embeddings in a vector database.

This way, you can search the embeddings to retrieve corresponding documents by sending requests to the Embeddings Index API. Behind the scenes, this embeds your search string and returns the most similar documents in your dataset.
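
As an illustrative sketch, a query request to the HTTP API can be assembled like this. The endpoint path, header, and body fields below are assumptions based on common patterns for this kind of API; check the Embeddings Index HTTP API reference for the authoritative shapes:

```python
import json

def build_query_request(project_id, dataset, index_name, query,
                        max_results=10, token="<api-token>"):
    # Hypothetical endpoint shape -- confirm against the
    # Embeddings Index HTTP API documentation before use.
    url = f"https://{project_id}.api.sanity.io/vX/embeddings-index/query/{dataset}/{index_name}"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"query": query, "maxResults": max_results})
    return url, headers, body

url, headers, body = build_query_request(
    "abc123", "production", "my-embeddings-index",
    "articles about sourdough baking",
)
print(url)
```

The response would contain the documents in the dataset whose stored embeddings are most similar to the embedding of the query string.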

Embeddings Index API functionality is available through the Embeddings Index CLI, the Embeddings Index UI for Sanity Studio, and the Embeddings Index HTTP API.

Gotcha

Embeddings Index API is currently in beta. Features and behavior may change without notice.

Embeddings Index API is available to projects on the Growth plan and above.

Using this feature requires Sanity to send data to OpenAI to generate embeddings, and to Pinecone to store the resulting vector representations of your documents.

What can you use embeddings for?

Embeddings are simplified representations of more complex data. While they simplify the original content, they keep contextual information. Therefore, embeddings can serve use cases that leverage machine learning, prediction, and search.
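
To make "keeping contextual information" concrete: embeddings are vectors, and documents about similar topics end up with vectors pointing in similar directions, typically measured with cosine similarity. The vectors below are hand-made toys for illustration; real models produce vectors with hundreds or thousands of dimensions:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" for three documents.
doc_about_cooking = [0.9, 0.1, 0.0]
doc_about_baking  = [0.8, 0.2, 0.1]
doc_about_sports  = [0.0, 0.1, 0.9]

print(cosine_similarity(doc_about_cooking, doc_about_baking))  # high: related topics
print(cosine_similarity(doc_about_cooking, doc_about_sports))  # low: unrelated topics
```

A similarity search over an embeddings index is, conceptually, this comparison performed efficiently across every indexed document.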

For example, you can use embeddings to:

  • Implement semantic search: you can make semantic search available to your editors or customers so that they can use it to find similar documents. The embeddings index offers a fast lookup that you can use for document similarity searches. In fact, the embeddings index ships with a Studio UI component that demonstrates finding similar documents or documents similar to a phrase.
  • Enable related content instructions with AI Assist: you can enable AI Assist to work with reference fields for documents as long as they are included in an embeddings index.
  • Build Large Language Model (LLM) agents: you could fine-tune a model to generate different inflections of output; however, this isn't a good way to teach it about new concepts or facts. A more effective approach is to use embeddings to represent domain knowledge and then feed whole documents or summaries into new prompts. This is a great way to give LLMs long-term memory.
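
The long-term-memory pattern from the last bullet can be sketched as retrieve-then-prompt. Everything here is a toy stand-in: in a real setup, the stored vectors and the question vector would come from an embeddings model, and retrieval would go through the embeddings index rather than an in-memory list:

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Pretend these embeddings came from an embeddings index;
# in practice you would query the index, not keep vectors in memory.
knowledge_base = [
    {"title": "Deploy guide", "text": "How to deploy the studio.", "vector": [0.9, 0.1]},
    {"title": "Billing FAQ",  "text": "How invoices work.",        "vector": [0.1, 0.9]},
]

def build_prompt(question, question_vector, top_k=1):
    # Rank stored documents by similarity to the question embedding,
    # then place the best matches into the prompt as context.
    ranked = sorted(knowledge_base,
                    key=lambda d: cosine(d["vector"], question_vector),
                    reverse=True)
    context = "\n".join(f"- {d['title']}: {d['text']}" for d in ranked[:top_k])
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How do I deploy?", [0.85, 0.15])
print(prompt)
```

The LLM never needs to be retrained: its "memory" is whatever the retrieval step puts into the prompt.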

Getting better comparisons

If you don't include a projection in your index configuration, we process the entire document JSON into a less verbose format and embed all of it, breaking it into chunks as we go to stay within the input limit of the embedding model.

If you compare your documents with excerpts from other documents, this may work fine for you out of the box. Occasionally, you might need to reshape your documents into something that looks more like your query string.

For example, suppose you want to build a great document search based on short strings from users. To optimize this, you could generate a summary of every document in your collection using an LLM and embed only the summaries. Conversely, when searching, you could have the LLM doing the querying imagine what a summary of the document represented by the search string would look like.

In this example, you would be comparing apples to apples—summaries of actual documents and the summary of a document that could represent the search string. Just using entire documents and search strings will still produce results, but the quality will probably be lower.
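
Here is a toy sketch of the apples-to-apples idea, using a bag-of-words counter as a stand-in for a real embeddings model (the texts and vocabulary are invented for illustration):

```python
from math import sqrt

VOCAB = ["bake", "sourdough", "bread", "oven", "flour", "history", "village", "trade"]

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would
    # call an embeddings model here instead.
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a)) or 1.0
    nb = sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

full_document = (
    "history of bread trade in the village oven flour flour trade history "
    "village village trade sourdough bake bread oven"
)
llm_summary = "bake sourdough bread"   # pretend an LLM wrote this summary
query = "how to bake sourdough bread"  # short user search string

print(cosine(embed(query), embed(llm_summary)))    # summary lines up with the short query
print(cosine(embed(query), embed(full_document)))  # whole document is noisier
```

The summary-to-query comparison scores higher because both texts have the same shape: short and topical, rather than long and diluted.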

Setting up an embeddings index

You can create an embeddings index with the Embeddings Index CLI, the Embeddings Index UI for Sanity Studio, or the Embeddings Index HTTP API.

The following practical example guides you through configuring an embeddings index for a Sanity project using the Embeddings Index CLI.

Creating an embeddings index

To create an embeddings index, open a terminal session, and then run:

# Create an embeddings index by passing arguments
embeddings-index create --indexName "<name-of-the-index>" --dataset "<name-of-the-dataset>" --filter "<GROQ-filter>" --projection "<GROQ-projection>"

# Alternatively, create an embeddings index by passing a JSON manifest
embeddings-index create --manifest <manifest-file-name>.json

Creating an index can take time, depending on the number of existing documents and the indexer load.

Defining an embeddings index

You can define the configuration of an embeddings index in one of the following ways:

  • By passing configuration arguments when you create the index in the CLI.
  • By storing the configuration details in a JSON manifest file.

Defining the index in the CLI

To define a new embeddings index in the root directory of a Sanity project, pass the following required arguments with the embeddings-index create command:

  • --indexName: assign a descriptive name to the index.
  • --dataset: specify the name of an existing dataset. This is the target dataset to index.
  • --filter: specify the filtering criteria to include in the index only the selected subset of documents from the database.
    The filter must be a valid GROQ filter without the square brackets that wrap the value assigned to _type.
    Example: _type=='tutorial'
  • --projection: specify the projection criteria to include in the index only the selected subset of properties from the filtered documents.
    The projection must be a valid GROQ projection, including curly brackets.
    Example: {title, author}

Alternatively, you can create an embeddings index by passing a JSON manifest file with the --manifest argument:

  • --manifest <manifest-file-name>.json

Example

# Create embeddings index with arguments
# 'filter' has no '[]' square brackets
# 'projection' keeps '{}' curly brackets
embeddings-index create --indexName "my-embeddings-index" --dataset "production" --filter "_type=='myDocumentType'" --projection "{...}"

# Create embeddings index with JSON manifest
# The JSON manifest is in the project root directory
embeddings-index create --manifest embeddings-index-manifest.json

Defining the index in a JSON manifest

To store, reuse, and manage embeddings indexes with source control and versioning, define their configuration in a JSON manifest file. Save the manifest file to the root directory of a Sanity project.

A JSON manifest file defining an embeddings index must contain the following required fields:

{
  indexName: string,
  dataset: string,
  filter: string,
  projection: string
}

Example

{
  "indexName": "my-embeddings-index",
  "dataset": "production",
  "filter": "_type=='myType'", // No '[]' square brackets
  "projection": "{...}" // Keeps '{}' curly brackets
}
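
Since a malformed manifest only fails once the CLI reads it, a quick pre-flight check of the required fields from the schema above can help. This validator is a local convenience for illustration, not part of the CLI:

```python
import json

REQUIRED = ("indexName", "dataset", "filter", "projection")

def validate_manifest(raw_json):
    """Parse a manifest and verify all required fields are present."""
    manifest = json.loads(raw_json)
    missing = [key for key in REQUIRED if key not in manifest]
    if missing:
        raise ValueError(f"manifest missing required fields: {missing}")
    return manifest

manifest = validate_manifest(
    '{"indexName": "my-embeddings-index", "dataset": "production", '
    '"filter": "_type==\'myType\'", "projection": "{...}"}'
)
print(manifest["indexName"])
```

Note that the annotated example above uses `//` comments for explanation; strip them before saving, since plain JSON does not allow comments.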

To create a JSON manifest file, invoke the manifest command:

embeddings-index manifest --out manifest.json --indexName "<name-of-the-index>" --dataset "<name-of-the-dataset>" --filter "<GROQ-filter>" --projection "<GROQ-projection>"

Checking an embeddings index status

You can check the status of your embeddings indexes to monitor the creation progress or the completeness of the indexes.

To check the status of all embeddings indexes in a Sanity project, run:

embeddings-index list

To check the status of a specific embeddings index in a Sanity project, run:

embeddings-index get --indexName "<name-of-the-index>"

Further reading

Create and manage embeddings indexes with the Embeddings Index CLI and HTTP API
