- Shared — designed for development use cases. Runs on compute
shared with other deployments within the selected region.
- Dedicated — designed for production workloads and
high availability. Runs on compute dedicated to your deployment.
- Multi-cluster — designed for demanding production workloads,
high scalability, high availability, and advanced multi-tenancy configurations.
Runs on multiple Dedicated deployments.
Shared
Available for free, no credit card required. Your free trial is limited to 2
Shared deployments and only 1,000 queries per day. Upgrade to
any paid plan to unlock all features.
Shared deployments run on compute shared with other deployments within the
selected region, which keeps the cost low but means resources aren’t reserved
for you exclusively.
If your account uses single-tenant infrastructure,
Shared deployments are only shared with your other deployments on that
infrastructure — never with other customers. Your environment remains fully
isolated at the infrastructure level.
Shared deployments are designed for development use cases only. This makes
it easy to get started with Cube quickly, and also allows you to build and
query pre-aggregations on-demand.
Shared deployments don’t have dedicated refresh workers
and, consequently, they do not refresh pre-aggregations on schedule.
Shared deployments do not provide high availability, nor do they guarantee
fast response times. Shared deployments also auto-suspend
after 30 minutes of inactivity, which can cause the first request after the
deployment wakes up to take additional time to process. They also have
limits on the maximum number of queries per day and the maximum
number of Cube Store Workers. We strongly advise against using a Shared
deployment in a production environment: it is for testing and learning about
Cube only and will not deliver a production-level experience for your users.
You can try a Shared deployment by
signing up for Cube for free
(no credit card required).
Dedicated
Dedicated deployments run on compute dedicated exclusively to your deployment,
giving you predictable performance and full control over capacity.
Dedicated deployments are designed to support high-availability production
workloads. Each Dedicated deployment consists of several key components,
starting with 2 Cube API instances, 1 Cube Refresh Worker, and 2 Cube Store
Routers, all of which run on compute dedicated to your deployment. The
deployment can automatically scale to meet the needs of your workload by
adding more components as necessary;
check the page on scalability to learn more.
Multi-cluster
Multi-cluster deployments are designed for demanding production workloads,
high scalability, high availability, and large multi-tenancy
configurations, e.g., with more than 100 tenants.
A Multi-cluster deployment provides two options:
- Scale the number of Dedicated deployments serving your
workload, allowing you to route requests across up to 10 Dedicated deployments
and up to 100 API instances.
- Optionally, scale the number of Cube Store routers, allowing for increased
Cube Store querying performance.
Each Dedicated deployment is billed separately, and all Dedicated deployments
can use auto-scaling to match demand.
Configuring Multi-cluster
To switch your deployment to Multi-cluster, navigate to
Settings → General, select Multi-cluster under Type, and confirm
with ✓.
To set the number of Dedicated deployments within your Multi-cluster
deployment, navigate to Settings → Configuration and edit
Number of clusters.
Routing traffic between Dedicated deployments
Cube routes requests between multiple Dedicated deployments within a
Multi-cluster deployment based on context_to_app_id.
In most cases, it should return an identifier that does not change over time
for each tenant.
The following implementation makes sure that all requests from a
particular tenant are always routed to the same Dedicated deployment. With
this approach, only one Dedicated deployment keeps the compiled data model
cache for each tenant and serves its requests, which reduces the footprint
of the compiled data model cache on individual Dedicated deployments.
from cube import config

@config('context_to_app_id')
def context_to_app_id(ctx: dict) -> str:
    # Derive a stable app id from the tenant id in the security context
    return f"CUBE_APP_{ctx['securityContext']['tenant_id']}"
If your implementation of context_to_app_id returns identifiers that change
over time for each tenant, requests from one tenant will likely hit multiple
Dedicated deployments, and you will not get the benefit of a reduced memory
footprint. You might also see 502 or timeout errors if different
deployment nodes return different context_to_app_id results for the
same request.
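As a minimal sketch of the contrast above (the helper function names are hypothetical and the functions are plain Python, not registered with Cube), a stable identifier returns the same value on every call, while a time-varying one does not:

```python
import itertools

# Stable app id (mirrors the recommended implementation): derived only
# from the tenant id, so it is identical on every call.
def stable_app_id(ctx: dict) -> str:
    return f"CUBE_APP_{ctx['securityContext']['tenant_id']}"

# Anti-pattern: an app id that changes over time. Requests from the same
# tenant would hit multiple Dedicated deployments, and different nodes
# may disagree on the result for the same request.
_calls = itertools.count()

def unstable_app_id(ctx: dict) -> str:
    # The counter stands in for any time-varying input
    # (timestamps, randomness, per-process state).
    return f"CUBE_APP_{ctx['securityContext']['tenant_id']}_{next(_calls)}"

ctx = {"securityContext": {"tenant_id": "acme"}}
print(stable_app_id(ctx) == stable_app_id(ctx))      # True
print(unstable_app_id(ctx) == unstable_app_id(ctx))  # False
```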
Switching between deployment types
To switch a deployment’s type, go to the deployment’s Settings screen
and select from the available options.