Available on Enterprise and above plans.
You can also choose a Monitoring Integrations tier.
Guides
Monitoring integrations work with various popular monitoring tools. Check the following guides and configuration examples to get tool-specific instructions:
- Amazon CloudWatch
- Amazon S3
- Datadog
- Grafana Cloud
- New Relic
Configuration
To enable monitoring integrations, navigate to Settings → Monitoring Integrations and click Enable Vector to add a Vector agent to your deployment. You can use the dropdown to select a Monitoring Integrations tier. You will also see the endpoint of the prometheus_exporter sink, in case you’d like to set up metrics
export.
Additionally, create a vector.toml configuration file next to your cube.js file. This file is used to keep the configuration of sinks. You have to commit this file to the main branch of your deployment for the Vector configuration to take effect.
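As a starting point, a minimal vector.toml might look like the following sketch. The sink name (console_logs) is arbitrary, and the console sink simply writes events to the Vector agent's standard output:

```toml
# vector.toml — a minimal sketch with a single sink.
# The sink name "console_logs" is arbitrary.
[sinks.console_logs]
type = "console"                                # write events to stdout
inputs = ["cubejs-server", "refresh-scheduler"] # components to collect logs from
encoding.codec = "json"                         # one JSON object per event
```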
Environment variables
You can use environment variables prefixed with CUBE_CLOUD_MONITORING_ to
reference configuration parameters securely in the vector.toml file.
Example configuration for exporting logs to
Datadog:
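A sketch of such a configuration, assuming the API key is stored in a CUBE_CLOUD_MONITORING_DATADOG_API_KEY environment variable (the variable name and the sink name are assumptions):

```toml
# A sketch of a Datadog logs sink; the sink name and the
# CUBE_CLOUD_MONITORING_DATADOG_API_KEY variable name are assumptions.
[sinks.datadog]
type = "datadog_logs"
inputs = ["cubejs-server", "refresh-scheduler"]
default_api_key = "${CUBE_CLOUD_MONITORING_DATADOG_API_KEY}"
compression = "gzip"
```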
Inputs for logs
Sinks accept the inputs option that allows you to specify which components of a
Cube Cloud deployment should export their logs:
| Input name | Description |
|---|---|
| cubejs-server | Logs of API instances |
| refresh-scheduler | Logs of the refresh worker |
| warmup-job | Logs of the pre-aggregation warm-up |
| cubestore | Logs of Cube Store |
| query-history | Query History export |
For the cubestore input, you can filter logs by providing an array of severity
levels via the levels option. If not specified, only error and info logs will
be exported.
| Level | Exported by default? |
|---|---|
| error | ✅ Yes |
| info | ✅ Yes |
| debug | ❌ No |
| trace | ❌ No |
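For instance, a sink that also exports debug logs from Cube Store could be sketched as follows, assuming the levels option is set at the sink level (the sink name is arbitrary):

```toml
# A sketch of a sink that exports Cube Store logs,
# including debug logs, to the Vector agent's stdout.
[sinks.cubestore_logs]
type = "console"
inputs = ["cubestore"]
levels = ["error", "info", "debug"] # export debug logs in addition to the defaults
encoding.codec = "json"
```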
If you’d like to adjust the severity levels of logs from API instances and the
refresh scheduler, use the CUBEJS_LOG_LEVEL environment variable.
Sinks for logs
You can use a wide range of destinations for logs. For example, you can export all logs, including all Cube Store logs, to Azure Blob Storage using the azure_blob sink.
Inputs for metrics
Metrics are exported using the metrics input. Metrics will have their respective
metric names and types: gauge or counter.
All metrics of the counter type reset to zero at midnight (UTC) and increment
during the next 24 hours.
You can filter metrics by providing an array of input names via the list option.
| Input name | Metric name, type | Description |
|---|---|---|
| cpu | cube_cpu_usage_ratio, gauge | CPU usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| memory | cube_memory_usage_ratio, gauge | Memory usage of a particular node in the deployment. Usually, a number in the 0–100 range. May exceed 100 if the node is under load |
| requests-count | cube_requests_total, counter | Number of API requests to the deployment |
| requests-success-count | cube_requests_success_total, counter | Number of successful API requests to the deployment |
| requests-errors-count | cube_requests_errors_total, counter | Number of erroneous API requests to the deployment |
| requests-duration | cube_requests_duration_ms_total, counter | Total time taken to process API requests, milliseconds |
| requests-success-duration | cube_requests_duration_ms_success, counter | Total time taken to process successful API requests, milliseconds |
| requests-errors-duration | cube_requests_duration_ms_errors, counter | Total time taken to process erroneous API requests, milliseconds |
Note that the list option works in addition to the inputs option and applies to
metrics only.
Example configuration for exporting all metrics from cubejs-server to
Prometheus using the prometheus_remote_write
sink:
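A sketch of such a configuration, assuming basic authentication and a placeholder endpoint URL (the sink name, endpoint, and credential variable names are assumptions):

```toml
# A sketch of a prometheus_remote_write sink; the endpoint URL is a
# placeholder, and the basic-auth credential variables are assumptions.
[sinks.prometheus]
type = "prometheus_remote_write"
inputs = ["metrics"]
endpoint = "https://prometheus.example.com/api/v1/write"
auth.strategy = "basic"
auth.user = "${CUBE_CLOUD_MONITORING_PROMETHEUS_USER}"
auth.password = "${CUBE_CLOUD_MONITORING_PROMETHEUS_PASSWORD}"
```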
Sinks for metrics
Metrics are exported in the Prometheus format, which is compatible with the following sinks:
- prometheus_exporter (native to Prometheus, compatible with Mimir)
- prometheus_remote_write (compatible with Grafana Cloud)
Example configuration for exporting all metrics from cubejs-server to
Prometheus using the prometheus_exporter sink:
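A minimal sketch (the sink name is arbitrary; the endpoint URL to scrape is exposed by Cube Cloud):

```toml
# A sketch of a prometheus_exporter sink; the sink name is arbitrary.
# The endpoint URL to scrape is shown in Cube Cloud under Metrics export.
[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["metrics"]
```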
You can find the endpoint URL exposed by prometheus_exporter under Metrics export.
You can set a user name and password for basic authentication of requests to
prometheus_exporter by setting the CUBE_CLOUD_MONITORING_METRICS_USER and
CUBE_CLOUD_MONITORING_METRICS_PASSWORD environment variables, respectively.
Query History export
With Query History export, you can bring Query History data to an external monitoring solution for further analysis, for example:
- Detect queries that do not hit pre-aggregations.
- Set up alerts for queries that exceed a certain duration.
- Attribute usage to specific users and implement chargebacks.
Requires the M tier
of Monitoring Integrations.
To configure Query History export, add the
query-history input to the inputs
option of the sink configuration. Example configuration for exporting Query History data
to the standard output of the Vector agent:
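A sketch of such a configuration, using Vector's console sink (the sink name is arbitrary):

```toml
# A sketch of a sink that writes Query History events to stdout.
[sinks.query_history]
type = "console"
inputs = ["query-history"]
encoding.codec = "json"
```

Each exported event contains the fields listed in the table below.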
| Field | Description |
|---|---|
| trace_id | Unique identifier of the API request. |
| account_name | Name of the Cube Cloud account. |
| deployment_id | Identifier of the deployment. |
| environment_name | Name of the environment, NULL for production. |
| api_type | Type of data API used (rest, sql, etc.), NULL for errors. |
| api_query | Query executed by the API, represented as a string. |
| security_context | Security context of the request, represented as a string. |
| status | Status of the request: success or error. |
| error_message | Error message, if any. |
| start_time_unix_ms | Start time of the execution, Unix timestamp in milliseconds. |
| end_time_unix_ms | End time of the execution, Unix timestamp in milliseconds. |
| api_response_duration_ms | Duration of the execution in milliseconds. |
| cache_type | Cache type: no_cache, pre_aggregations_in_cube_store, etc. |
See this recipe for an example of analyzing data from
Query History export.