LangChain is a framework designed to simplify the creation of applications using large language models. To get started with LangChain, follow its installation instructions.

Document loader

Cube’s integration with LangChain comes as a document loader intended to populate a vector database with embeddings derived from the data model. Later, this vector database can be queried to find the best-matching entities of the semantic layer. This is useful for matching free-form input, e.g., natural-language queries, with the views and their members in the data model.
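To illustrate the matching idea, here is a toy, stdlib-only sketch of similarity search over embeddings. The member names (orders.total_amount, users.signup_date) and the vectors are made up for illustration; a real application would generate embeddings with an embedding model and store them in a vector database rather than hardcoding them.

```python
import math

# Hypothetical data-model members mapped to toy embedding vectors.
# In practice, these vectors come from an embedding model applied to
# the descriptions of views and their members.
members = {
    "orders.total_amount": [0.9, 0.1, 0.0],
    "users.signup_date": [0.1, 0.8, 0.2],
}

def cosine(a, b):
    # Cosine similarity: higher means the vectors point in closer directions.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query_vec):
    # Return the data-model member whose embedding is closest to the query.
    return max(members, key=lambda m: cosine(members[m], query_vec))

# A natural-language query like "total revenue", once embedded, would land
# near the first vector and resolve to that member.
print(best_match([0.85, 0.15, 0.05]))  # orders.total_amount
```

A vector database performs the same nearest-neighbor lookup at scale, which is what makes free-form questions resolvable to concrete semantic-layer entities.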
We also provide a chat-based demo application (see source code on GitHub) with example OpenAI prompts for constructing queries to Cube’s SQL API. If you wish to create an AI-powered conversational interface for the semantic layer, these prompts can be a good starting point.

Configuring the connection to Cube

The document loader connects to Cube using the REST API and needs a JWT (JSON Web Token) to authenticate. If you’re using Cube Cloud, you can retrieve both the REST API endpoint and the token from a deployment’s Overview page.
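For illustration, here is a stdlib-only sketch of minting such a JWT, assuming the default HS256 signing configuration; the secret and the claims are placeholders, and in practice you would typically use a library such as PyJWT instead of assembling the token by hand.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64 for each segment.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_cube_jwt(api_secret: str, claims: dict) -> str:
    # Sketch, assuming the default setup: the token is HS256-signed
    # with the deployment's API secret.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(
        hmac.new(api_secret.encode(), signing_input, hashlib.sha256).digest()
    )
    return f"{header}.{payload}.{signature}"

# Placeholder secret and a one-hour expiry claim.
token = make_cube_jwt("YOUR_API_SECRET", {"exp": int(time.time()) + 3600})
```

The resulting token is what the document loader sends with each REST API request to authenticate.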

Querying Cube

Please refer to the blog post for details on querying Cube and building a complete AI-based application. Also, feel free to review the source code of the chat-based demo application on GitHub.
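As a starting point before diving into the blog post, here is a sketch of the shape of a query payload for Cube’s REST API; the member names (orders.count, orders.created_at) are placeholders for measures and dimensions from your own data model, and the endpoint URL is hypothetical.

```python
import json

# Hypothetical query for Cube's REST API /v1/load endpoint:
# monthly order counts. Substitute members from your own data model.
query = {
    "measures": ["orders.count"],
    "timeDimensions": [
        {"dimension": "orders.created_at", "granularity": "month"}
    ],
}

# The query is sent as JSON, authenticated with the JWT described above,
# e.g. POST https://example.cubecloud.dev/cubejs-api/v1/load (placeholder URL).
payload = json.dumps({"query": query})
print(payload)
```

An AI-powered application would typically have the language model produce such a query (or an equivalent SQL API statement) from a user’s natural-language question, then execute it against Cube.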