Unverified commit 62d483eb authored by wu-sheng, committed by GitHub

Merge branch 'master' into banyandb-property

......@@ -41,7 +41,7 @@ jobs:
with:
submodules: true
- name: Check license header
uses: apache/skywalking-eyes@df70871af1a8109c9a5b1dc824faaf65246c5236
uses: apache/skywalking-eyes@775fe1ffda59b7e100aa144d0ef8d7beae17f97d
code-style:
if: (github.event_name == 'schedule' && github.repository == 'apache/skywalking') || (github.event_name != 'schedule')
......@@ -78,7 +78,7 @@ jobs:
go-version: "1.16"
- name: Check Dependencies Licenses
run: |
go install github.com/apache/skywalking-eyes/cmd/license-eye@df70871af1a8109c9a5b1dc824faaf65246c5236
go install github.com/apache/skywalking-eyes/cmd/license-eye@775fe1ffda59b7e100aa144d0ef8d7beae17f97d
license-eye dependency resolve --summary ./dist-material/release-docs/LICENSE.tpl || exit 1
if [ ! -z "$(git diff -U0 ./dist-material/release-docs/LICENSE)" ]; then
echo "LICENSE file is not updated correctly"
......
Subproject commit b8d5a5c27c271303ff1d3911fb85a554352f4f23
Subproject commit 9c7a0bf8e0c2195280adf65f50cbd3161dd588ff
# PromQL Service
PromQL ([Prometheus Query Language](https://prometheus.io/docs/prometheus/latest/querying/basics/)) Service
exposes the Prometheus querying HTTP APIs, including the bundled PromQL expression system.
Third-party systems or visualization platforms that already support PromQL (such as Grafana)
can obtain metrics through the PromQL Service.
As SkyWalking and Prometheus have fundamental differences in metrics classification, format, storage, etc.,
the PromQL Service supports a subset of the complete PromQL.
## Details Of Supported Protocol
The following doc describes the details of the supported protocol and compares it to the official PromQL documentation.
Any feature not mentioned below is unsupported by default.
### Time series Selectors
#### [Instant Vector Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#instant-vector-selectors)
For example: select the metric `service_cpm` whose service is `$service` and layer is `$layer`.
```text
service_cpm{service='$service', layer='$layer'}
```
**Note: The label matching operators only support `=` instead of regular expressions.**
#### [Range Vector Selectors](https://prometheus.io/docs/prometheus/latest/querying/basics/#range-vector-selectors)
For example: select the metric `service_cpm` whose service is `$service` and layer is `$layer`, within the last 5 minutes.
```text
service_cpm{service='$service', layer='$layer'}[5m]
```
#### [Time Durations](https://prometheus.io/docs/prometheus/latest/querying/basics/#time-durations)
| Unit | Definition | Support |
|------|--------------|---------|
| ms | milliseconds | yes |
| s | seconds | yes |
| m | minutes | yes |
| h | hours | yes |
| d | days | yes |
| w | weeks | yes |
| y | years | **no** |
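As a rough illustration (not SkyWalking's actual code), the supported units above can be parsed with a small helper; `duration_to_millis` is a hypothetical name:

```python
import re

# Milliseconds per supported unit; note 'y' (years) is intentionally absent,
# matching the table above.
UNIT_MILLIS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000,
               "d": 86_400_000, "w": 604_800_000}

def duration_to_millis(text):
    """Parse a PromQL-style duration such as '5m' or '1w5d' into milliseconds."""
    total, pos = 0, 0
    for match in re.finditer(r"(\d+)(ms|s|m|h|d|w)", text):
        if match.start() != pos:
            raise ValueError(f"invalid duration: {text!r}")
        total += int(match.group(1)) * UNIT_MILLIS[match.group(2)]
        pos = match.end()
    if pos != len(text):
        raise ValueError(f"invalid duration: {text!r}")
    return total
```

For example, `duration_to_millis("5m")` yields `300000`, while `duration_to_millis("1y")` raises, mirroring the unsupported `y` unit.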
### Binary operators
#### [Arithmetic binary operators](https://prometheus.io/docs/prometheus/latest/querying/operators/#arithmetic-binary-operators)
| Operator | Definition | Support |
|----------|----------------------|---------|
| + | addition | yes |
| - | subtraction | yes |
| * | multiplication | yes |
| / | division | yes |
| % | modulo | yes |
| ^ | power/exponentiation | **no** |
##### Between two scalars
For example:
```text
1 + 2
```
##### Between an instant vector and a scalar
For example:
```text
service_cpm{service='$service', layer='$layer'} / 100
```
##### Between two instant vectors
For example:
```text
service_cpm{service='$service', layer='$layer'} + service_cpm{service='$service', layer='$layer'}
```
**Note: The operations between vectors require the same metric and labels, and don't support [Vector matching](https://prometheus.io/docs/prometheus/latest/querying/operators/#vector-matching).**
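To make this restriction concrete, here is a rough Python sketch (not SkyWalking's implementation) of adding two instant vectors only when their label sets match exactly:

```python
def add_vectors(left, right):
    """Add two instant-vector results element-wise.

    Each vector is a list of (labels, value) pairs, where labels is a dict.
    Mirrors the note above: both sides must carry the same metric and labels;
    unmatched series raise instead of being vector-matched.
    """
    index = {tuple(sorted(labels.items())): value for labels, value in right}
    out = []
    for labels, value in left:
        key = tuple(sorted(labels.items()))
        if key not in index:
            raise ValueError(f"no matching series for labels {labels}")
        out.append((labels, value + index[key]))
    return out
```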
#### [Comparison binary operators](https://prometheus.io/docs/prometheus/latest/querying/operators/#comparison-binary-operators)
| Operator | Definition | Support |
|----------|------------------|---------|
| == | equal | yes |
| != | not-equal | yes |
| \> | greater-than | yes |
| < | less-than | yes |
| \>= | greater-or-equal | yes |
| <=       | less-or-equal    | yes     |
##### Between two scalars
For example:
```text
1 > bool 2
```
##### Between an instant vector and a scalar
For example:
```text
service_cpm{service='$service', layer='$layer'} > 1
```
##### Between two instant vectors
For example:
```text
service_cpm{service='service_A', layer='$layer'} > service_cpm{service='service_B', layer='$layer'}
```
### HTTP API
#### Expression queries
##### [Instant queries](https://prometheus.io/docs/prometheus/latest/querying/api/#instant-queries)
```text
GET|POST /api/v1/query
```
| Parameter | Definition | Support | Optional |
|-----------|-------------------------------------------------------------------------------------------------------------------------------------|---------|------------|
| query | prometheus expression | yes | no |
| time      | **The latest metric value up to this time is returned. If `time` is empty, the default look-back time is 2 minutes.** | yes | yes |
| timeout | evaluation timeout | **no** | **ignore** |
For example:
```text
/api/v1/query?query=service_cpm{service='agent::songs', layer='GENERAL'}
```
Result:
```json
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"__name__": "service_cpm",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"value": [
1677548400,
"6"
]
}
]
}
}
```
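A client can pull the single sample out of a vector result like the one above. The following is a minimal sketch; `extract_instant` is a hypothetical helper, not part of SkyWalking:

```python
import json

# Payload matching the instant-query result shown above.
payload = json.loads("""{
  "status": "success",
  "data": {"resultType": "vector", "result": [
    {"metric": {"__name__": "service_cpm", "layer": "GENERAL",
                "scope": "Service", "service": "agent::songs"},
     "value": [1677548400, "6"]}
  ]}
}""")

def extract_instant(payload):
    """Return (metric_name, unix_seconds, float_value) tuples from a vector result."""
    assert payload["status"] == "success"
    assert payload["data"]["resultType"] == "vector"
    return [(s["metric"]["__name__"], s["value"][0], float(s["value"][1]))
            for s in payload["data"]["result"]]
```

Note that, as in Prometheus, the sample value arrives as a string and must be converted to a number by the client.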
##### [Range queries](https://prometheus.io/docs/prometheus/latest/querying/api/#range-queries)
```text
GET|POST /api/v1/query_range
```
| Parameter | Definition | Support | Optional |
|-----------|--------------------------------------------------------------------------------------|---------|------------|
| query | prometheus expression | yes | no |
| start | start timestamp, **seconds** | yes | no |
| end | end timestamp, **seconds** | yes | no |
| step      | **SkyWalking automatically fits the step (DAY, HOUR, MINUTE) based on `start` and `end`.** | **no** | **ignore** |
| timeout | evaluation timeout | **no** | **ignore** |
For example:
```text
/api/v1/query_range?query=service_cpm{service='agent::songs', layer='GENERAL'}&start=1677479336&end=1677479636
```
Result:
```json
{
"status": "success",
"data": {
"resultType": "matrix",
"result": [
{
"metric": {
"__name__": "service_cpm",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"values": [
[
1677479280,
"18"
],
[
1677479340,
"18"
],
[
1677479400,
"18"
],
[
1677479460,
"18"
],
[
1677479520,
"18"
],
[
1677479580,
"18"
]
]
}
]
}
}
```
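The step auto-fit mentioned in the parameter table can be sketched as follows. The thresholds are assumed from the `PromOpUtils` change included later in this commit (span of at most 1 hour selects minute steps, at most 1 day selects hour steps, otherwise day steps); `fit_step` is a hypothetical name:

```python
def fit_step(start_ts_ms, end_ts_ms):
    """Pick a query step from the time range, sketching SkyWalking's auto-fit."""
    span = end_ts_ms - start_ts_ms
    if span <= 3_600_000:       # up to 1 hour
        return "MINUTE"
    if span <= 86_400_000:      # up to 1 day
        return "HOUR"
    return "DAY"
```

In the range-query example above, `end - start` is 300 seconds, so the returned matrix uses minute steps, as the 60-second spacing of the timestamps shows.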
#### Querying metadata
##### [Finding series by label matchers](https://prometheus.io/docs/prometheus/latest/querying/api/#finding-series-by-label-matchers)
```text
GET|POST /api/v1/series
```
| Parameter | Definition | Support | Optional |
|-----------|------------------------------|---------|----------|
| match[] | series selector | yes | no |
| start | start timestamp, **seconds** | yes | no |
| end | end timestamp, **seconds** | yes | no |
For example:
```text
/api/v1/series?match[]=service_traffic{layer='GENERAL'}&start=1677479336&end=1677479636
```
Result:
```json
{
"status": "success",
"data": [
{
"__name__": "service_traffic",
"service": "agent::songs",
"scope": "Service",
"layer": "GENERAL"
},
{
"__name__": "service_traffic",
"service": "agent::recommendation",
"scope": "Service",
"layer": "GENERAL"
},
{
"__name__": "service_traffic",
"service": "agent::app",
"scope": "Service",
"layer": "GENERAL"
},
{
"__name__": "service_traffic",
"service": "agent::gateway",
"scope": "Service",
"layer": "GENERAL"
},
{
"__name__": "service_traffic",
"service": "agent::frontend",
"scope": "Service",
"layer": "GENERAL"
}
]
}
```
**Note: SkyWalking's metadata exists in the following metrics (traffics):**
- service_traffic
- instance_traffic
- endpoint_traffic
#### [Getting label names](https://prometheus.io/docs/prometheus/latest/querying/api/#getting-label-names)
```text
GET|POST /api/v1/labels
```
| Parameter | Definition | Support | Optional |
|-----------|-----------------|---------|----------|
| match[] | series selector | yes | yes |
| start | start timestamp | **no** | yes |
| end | end timestamp | **no** | yes |
For example:
```text
/api/v1/labels?match[]=instance_jvm_cpu
```
Result:
```json
{
"status": "success",
"data": [
"layer",
"scope",
"top_n",
"order",
"service_instance",
"parent_service"
]
}
```
#### [Querying label values](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-label-values)
```text
GET /api/v1/label/<label_name>/values
```
| Parameter | Definition | Support | Optional |
|-----------|-----------------|---------|----------|
| match[] | series selector | yes | no |
| start | start timestamp | **no** | yes |
| end | end timestamp | **no** | yes |
For example:
```text
/api/v1/label/__name__/values
```
Result:
```json
{
"status": "success",
"data": [
"meter_mysql_instance_qps",
"service_cpm",
"envoy_cluster_up_rq_active",
"instance_jvm_class_loaded_class_count",
"k8s_cluster_memory_requests",
"meter_vm_memory_used",
"meter_apisix_sv_bandwidth_unmatched",
"meter_vm_memory_total",
"instance_jvm_thread_live_count",
"instance_jvm_thread_timed_waiting_state_thread_count",
"browser_app_page_first_pack_percentile",
"instance_clr_max_worker_threads",
...
]
}
```
#### [Querying metric metadata](https://prometheus.io/docs/prometheus/latest/querying/api/#querying-metric-metadata)
```text
GET /api/v1/metadata
```
| Parameter | Definition | Support | Optional |
|-----------|---------------------------------------------|---------|----------|
| limit | maximum number of metrics to return | yes | **yes** |
| metric    | **metric name; supports regular expressions**   | yes     | **yes**  |
For example:
```text
/api/v1/metadata?limit=10
```
Result:
```json
{
"status": "success",
"data": {
"meter_mysql_instance_qps": [
{
"type": "gauge",
"help": "",
"unit": ""
}
],
"meter_apisix_sv_bandwidth_unmatched": [
{
"type": "gauge",
"help": "",
"unit": ""
}
],
"service_cpm": [
{
"type": "gauge",
"help": "",
"unit": ""
}
],
...
}
}
```
## Metrics Type For Query
### Supported Metrics [Scope](../../../oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/query/enumeration/Scope.java)(Catalog)
Not all scopes are supported; please check the following table:
| Scope | Support |
|-------------------------|---------|
| Service | yes |
| ServiceInstance | yes |
| Endpoint | yes |
| ServiceRelation | no |
| ServiceInstanceRelation | no |
| Process | no |
| ProcessRelation | no |
### General labels
Each metric contains general labels: [layer](../../../oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/analysis/Layer.java).
Different metrics will have different labels depending on their Scope and metric value type.
| Query Labels | Scope | Expression Example |
|----------------------------------|-----------------|------------------------------------------------------------------------------------------------|
| layer, service | Service | service_cpm{service='$service', layer='$layer'} |
| layer, service, service_instance | ServiceInstance | service_instance_cpm{service='$service', service_instance='$service_instance', layer='$layer'} |
| layer, service, endpoint | Endpoint | endpoint_cpm{service='$service', endpoint='$endpoint', layer='$layer'} |
### Common Value Metrics
- Query Labels:
```text
{General labels}
```
- Expression Example:
```text
service_cpm{service='agent::songs', layer='GENERAL'}
```
- Result (Instant Query):
```json
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"__name__": "service_cpm",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"value": [
1677490740,
"3"
]
}
]
}
}
```
### Labeled Value Metrics
- Query Labels:
```text
--{General labels}
--labels: Used to filter the value labels to be returned
--relabels: Used to rename the returned value labels
note: The number and order of labels must match the number and order of relabels.
```
- Expression Example:
```text
service_percentile{service='agent::songs', layer='GENERAL', labels='0,1,2', relabels='P50,P75,P90'}
```
- Result (Instant Query):
```json
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"__name__": "service_percentile",
"label": "P50",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"value": [
1677493380,
"0"
]
},
{
"metric": {
"__name__": "service_percentile",
"label": "P75",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"value": [
1677493380,
"0"
]
},
{
"metric": {
"__name__": "service_percentile",
"label": "P90",
"layer": "GENERAL",
"scope": "Service",
"service": "agent::songs"
},
"value": [
1677493380,
"0"
]
}
]
}
}
```
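The `labels`/`relabels` pairing described above can be sketched as a tiny mapping helper (`relabel` is a hypothetical name, not part of SkyWalking):

```python
def relabel(labels, relabels):
    """Map raw value-label indexes to display names, enforcing the rule that
    labels and relabels must match in number and order."""
    src = labels.split(",")
    dst = relabels.split(",")
    if len(src) != len(dst):
        raise ValueError("labels and relabels must match in number and order")
    return dict(zip(src, dst))
```

With the expression example above, index `0` is renamed to `P50`, `1` to `P75`, and `2` to `P90`, which is exactly what the `label` field in the result shows.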
### Sort Metrics
- Query Labels:
```text
--parent_service: <optional> Name of the parent service.
--top_n: The max number of the selected metric value
--order: ASC/DES
```
- Expression Example:
```text
service_instance_cpm{parent_service='agent::songs', layer='GENERAL', top_n='10', order='DES'}
```
- Result (Instant Query):
```json
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"__name__": "service_instance_cpm",
"layer": "GENERAL",
"scope": "ServiceInstance",
"service_instance": "651db53c0e3843d8b9c4c53a90b4992a@10.4.0.28"
},
"value": [
1677494280,
"14"
]
},
{
"metric": {
"__name__": "service_instance_cpm",
"layer": "GENERAL",
"scope": "ServiceInstance",
"service_instance": "4c04cf44d6bd408880556aa3c2cfb620@10.4.0.232"
},
"value": [
1677494280,
"6"
]
},
{
"metric": {
"__name__": "service_instance_cpm",
"layer": "GENERAL",
"scope": "ServiceInstance",
"service_instance": "f5ac8ead31af4e6795cae761729a2742@10.4.0.236"
},
"value": [
1677494280,
"5"
]
}
]
}
}
```
### Sampled Records
- Query Labels:
```text
--parent_service: Name of the parent service
--top_n: The max number of the selected records value
--order: ASC/DES
```
- Expression Example:
```text
top_n_database_statement{parent_service='localhost:-1', layer='VIRTUAL_DATABASE', top_n='10', order='DES'}
```
- Result (Instant Query):
```json
{
"status": "success",
"data": {
"resultType": "vector",
"result": [
{
"metric": {
"__name__": "top_n_database_statement",
"layer": "VIRTUAL_DATABASE",
"scope": "Service",
"record": "select song0_.id as id1_0_, song0_.artist as artist2_0_, song0_.genre as genre3_0_, song0_.liked as liked4_0_, song0_.name as name5_0_ from song song0_ where song0_.liked>?"
},
"value": [
1677501360,
"1"
]
},
{
"metric": {
"__name__": "top_n_database_statement",
"layer": "VIRTUAL_DATABASE",
"scope": "Service",
"record": "select song0_.id as id1_0_, song0_.artist as artist2_0_, song0_.genre as genre3_0_, song0_.liked as liked4_0_, song0_.name as name5_0_ from song song0_ where song0_.liked>?"
},
"value": [
1677501360,
"1"
]
},
{
"metric": {
"__name__": "top_n_database_statement",
"layer": "VIRTUAL_DATABASE",
"scope": "Service",
"record": "select song0_.id as id1_0_, song0_.artist as artist2_0_, song0_.genre as genre3_0_, song0_.liked as liked4_0_, song0_.name as name5_0_ from song song0_ where song0_.liked>?"
},
"value": [
1677501360,
"1"
]
}
]
}
}
```
......@@ -103,6 +103,8 @@
* Support Prometheus HTTP API and PromQL.
* `Scope` in the Entity of the Metrics query v1 protocol is not required and is automatically corrected. The scope is determined based on the metric itself.
* Add explicit `ReadTimeout` for ConsulConfigurationWatcher to avoid `IllegalArgumentException: Cache watchInterval=10sec >= networkClientReadTimeout=10000ms`.
* Fix `DurationUtils.getDurationPoints` exceeding the range when `startTimeBucket` equals `endTimeBucket`.
* Support processing OpenTelemetry ExponentialHistogram metrics.
#### UI
......@@ -136,7 +138,10 @@
* Update BanyanDB client to 0.3.0.
* Add AWS DynamoDB menu.
* Fix: add auto period to the independent mode for widgets.
* optimize menus and add Windows monitoring menu.
* Optimize menus and add Windows monitoring menu.
* Add a calculation for the cpm5dAvg.
* Add a cpm5d calculation.
* Fix data processing error in the eBPF profiling widget.
#### Documentation
......
......@@ -26,3 +26,4 @@ CloudWatch metrics with S3 --> CloudWatch Metric Stream (OpenTelemetry formart)
1. Only OpenTelemetry format is supported (refer to [Metric streams output formats](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-formats.html))
2. A proxy (e.g., Nginx, Envoy) is required in front of OAP's Firehose receiver to accept HTTPS requests from AWS Firehose through port `443` (refer to [Amazon Kinesis Data Firehose Delivery Stream HTTP Endpoint Delivery Specifications](https://docs.aws.amazon.com/firehose/latest/dev/httpdeliveryrequestresponse.html)).
3. The AWS Firehose receiver supports setting an accessKey for Kinesis Data Firehose; please refer to the [configuration vocabulary](./configuration-vocabulary.md).
......@@ -68,6 +68,6 @@ K8s Service as a `Service` in OAP and land on the `Layer: K8S_SERVICE`.
## Customizations
You can customize your own metrics/expression/dashboard panel.
The metrics definition and expression rules are found in `/config/otel-rules/k8s-cluster.yaml,/config/otel-rules/k8s-node.yaml, /config/otel-rules/k8s-service.yaml`.
The metrics definition and expression rules are found in `/config/otel-rules/k8s/k8s-cluster.yaml,/config/otel-rules/k8s/k8s-node.yaml, /config/otel-rules/k8s/k8s-service.yaml`.
The K8s Cluster dashboard panel configurations are found in `/config/ui-initialized-templates/k8s`.
The K8s Service dashboard panel configurations are found in `/config/ui-initialized-templates/k8s_service`.
......@@ -33,7 +33,7 @@ Each MySQL/MariaDB server is cataloged as an `Instance` in OAP.
### Customizations
You can customize your own metrics/expression/dashboard panel.
The metrics definition and expression rules are found in `/config/otel-rules/mysql.yaml`.
The metrics definition and expression rules are found in `/config/otel-rules/mysql`.
The MySQL dashboard panel configurations are found in `/config/ui-initialized-templates/mysql`.
## Collect sampled slow SQLs
......
......@@ -47,7 +47,7 @@ PostgreSQL monitoring provides monitoring of the status and resources of the Pos
### Customizations
You can customize your own metrics/expression/dashboard panel.
The metrics definition and expression rules are found in `/config/otel-rules/postgresql.yaml`.
The metrics definition and expression rules are found in `/config/otel-rules/postgresql`.
The PostgreSQL dashboard panel configurations are found in `/config/ui-initialized-templates/postgresql`.
## Collect sampled slow SQLs
......
......@@ -38,7 +38,7 @@ Supported configurations are as follows:
|core.default.endpoint-name-grouping| The endpoint name grouping setting. Overrides `endpoint-name-grouping.yml`. | Same as [`endpoint-name-grouping.yml`](endpoint-grouping-rules.md). |
|core.default.log4j-xml| The log4j xml configuration. Overrides `log4j2.xml`. | Same as [`log4j2.xml`](dynamical-logging.md). |
|agent-analyzer.default.traceSamplingPolicy| The sampling policy for default and service dimension, override `trace-sampling-policy-settings.yml`. | same as [`trace-sampling-policy-settings.yml`](trace-sampling.md) |
|configuration-discovery.default.agentConfigurations| The ConfigurationDiscovery settings. | See [`configuration-discovery.md`](https://github.com/apache/skywalking-java/blob/20fb8c81b3da76ba6628d34c12d23d3d45c973ef/docs/en/setup/service-agent/java-agent/configuration-discovery.md). |
|configuration-discovery.default.agentConfigurations| The ConfigurationDiscovery settings. | See [`configuration-discovery.md`](https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/configuration-discovery/). |
## Group Configuration
Group Configuration is a config key corresponding to a group sub config item. A sub config item is a key-value pair. The logic structure is:
......
......@@ -29,19 +29,21 @@ and its value is from `Node.identifier.host_name` defined in OpenCensus Agent Pr
or `net.host.name` (or `host.name` for some OTLP versions) resource attributes defined in OpenTelemetry proto,
for identification of the metric data.
| Description | Configuration File | Data Source |
|----|-----|----|
| Metrics of Istio Control Plane | otel-rules/istio-controlplane.yaml | Istio Control Plane -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of SkyWalking OAP server itself | otel-rules/oap.yaml | SkyWalking OAP Server(SelfObservability) -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of VMs | otel-rules/vm.yaml | Prometheus node-exporter(VMs) -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s-cluster.yaml | K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s-node.yaml | cAdvisor & K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s-service.yaml | cAdvisor & K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of MYSQL| otel-rules/mysql.yaml | prometheus/mysqld_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of PostgreSQL| otel-rules/postgresql.yaml | postgres_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of Apache APISIX| otel-rules/apisix.yaml | apisix prometheus plugin -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS| otel-rules/aws-eks/eks-cluster.yaml |AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS| otel-rules/aws-eks/eks-service.yaml |AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS| otel-rules/aws-eks/eks-node.yaml |AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Description | Configuration File | Data Source |
|-----------------------------------------|------------------------------------------------|-------------------------------------------------------------------------------------------------------------------|
| Metrics of Istio Control Plane | otel-rules/istio-controlplane.yaml | Istio Control Plane -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of SkyWalking OAP server itself | otel-rules/oap.yaml | SkyWalking OAP Server(SelfObservability) -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of VMs | otel-rules/vm.yaml | Prometheus node-exporter(VMs) -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s/k8s-cluster.yaml | K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s/k8s-node.yaml | cAdvisor & K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of K8s cluster | otel-rules/k8s/k8s-service.yaml | cAdvisor & K8s kube-state-metrics -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of MYSQL | otel-rules/mysql/mysql-instance.yaml | prometheus/mysqld_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of MYSQL | otel-rules/mysql/mysql-service.yaml | prometheus/mysqld_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of PostgreSQL | otel-rules/postgresql/postgresql-instance.yaml | postgres_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of PostgreSQL | otel-rules/postgresql/postgresql-service.yaml | postgres_exporter -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of Apache APISIX | otel-rules/apisix.yaml | apisix prometheus plugin -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS | otel-rules/aws-eks/eks-cluster.yaml | AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS | otel-rules/aws-eks/eks-service.yaml | AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
| Metrics of AWS Cloud EKS | otel-rules/aws-eks/eks-node.yaml | AWS Container Insights Receiver -> OpenTelemetry Collector -- OC/OTLP exporter --> SkyWalking OAP Server |
**Note**: You can also use OpenTelemetry exporter to transport the metrics to SkyWalking OAP directly. See [OpenTelemetry Exporter](./backend-meter.md#opentelemetry-exporter).
# Use Grafana As The UI
Since 9.4.0, SkyWalking provides the [PromQL Service](../../api/promql-service.md). You can choose [Grafana](https://grafana.com/)
as the SkyWalking UI. For installation and usage, please refer to the [official document](https://grafana.com/docs/grafana/v9.3/).
Notice <1>, Grafana is under the [AGPL-3.0 license](https://github.com/grafana/grafana/blob/main/LICENSE), which is very different from Apache 2.0.
Please follow the AGPL-3.0 license requirements.
Notice <2>, SkyWalking always treats its native UI as first class. All visualization features are only available on the native UI.
The Grafana UI is an extension of our support for PromQL APIs. We don't maintain or promise a complete Grafana UI dashboard setup.
## Configure Data Source
In the data source config panel, choose `Prometheus` and set the URL to the OAP server address; the default port is `9090`.
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-datasource.jpg"/>
## Configure Dashboards
### Dashboards Settings
The following steps show an example of configuring a `General Service` dashboard:
1. Create a dashboard named `General Service`. One dashboard per [layer](../../../../oap-server/server-core/src/main/java/org/apache/skywalking/oap/server/core/analysis/Layer.java) is recommended.
2. Configure variables for the dashboard:
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-variables.jpg"/>
After configuring, you can select the service/instance/endpoint at the top of the dashboard:
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-variables2.jpg"/>
### Add Panels
The following contents show how to add several typical metrics panels.
General settings:
1. Choose the metrics and chart type.
2. Set `Query options --> Min interval = 1m`, because the minimum metric time bucket in SkyWalking is 1m.
3. Add PromQL expressions, using the variables configured above for the labels, so you can select label values from the top of the dashboard.
**Note: Some metric values may require calculations to match units.**
4. Select the returned labels you want to show on the panel.
5. Test the query and save the panel.
#### Common Value Metrics
1. For example, `service_apdex` with a `Time series` chart.
2. Add a PromQL expression; the metric scope is `Service`, so add the labels `service` and `layer` to match.
3. Set `Connect null values --> Always` and `Show points --> Always`, because when the query interval is greater than 1 hour or 1 day, SkyWalking returns
the hour/day-step metric values.
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-panels.jpg"/>
#### Labeled Value Metrics
1. For example, `service_percentile` with a `Time series` chart.
2. Add PromQL expressions; the metric scope is `Service`, so add the labels `service` and `layer` to match.
Since it's a labeled value metric, add `labels='0,1,2,3,4'` to filter the result labels, and add `relabels='P50,P75,P90,P95,P99'` to rename them.
3. Set `Connect null values --> Always` and `Show points --> Always`, because when the query interval is greater than 1 hour or 1 day, SkyWalking returns
the hour/day-step metric values.
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-panels2.jpg"/>
#### Sort Metrics
1. For example, `service_instance_cpm` with a `Bar gauge` chart.
2. Add PromQL expressions; add the labels `parent_service` and `layer` to match, and add `top_n='10'` and `order='DES'` to filter the result.
3. Set `Calculation --> Latest*`.
<img src="https://skywalking.apache.org/doc-graph/promql/grafana-panels3.jpg"/>
#### Sampled Records
Same as the Sort Metrics.
......@@ -144,11 +144,17 @@ catalog:
- name: "Dynamic Configuration"
path: "/en/setup/backend/dynamic-config"
- name: "UI Setup"
path: "/en/setup/backend/ui-setup"
catalog:
- name: "Native UI"
catalog:
- name: "Setup"
path: "/en/setup/backend/ui-setup"
- name: "Customization"
path: "/en/ui/readme"
- name: "Grafana UI"
path: "/en/setup/backend/ui-grafana"
- name: "Official Dashboards"
catalog:
- name: "Overview"
path: "/en/ui/readme"
- name: "General Service"
catalog:
- name: "Server Agents"
......@@ -259,8 +265,10 @@ catalog:
path: "/en/api/profiling-protocol"
- name: "Query APIs"
catalog:
- name: "GraphQL APIs"
- name: "GraphQL APIs"
path: "/en/api/query-protocol"
- name: "PromQL APIs"
path: "/en/api/promql-service"
- name: "Security Notice"
path: "/en/security/readme"
- name: "Academy"
......
......@@ -88,6 +88,9 @@ public enum DurationUtils {
List<PointOfTime> durations = new LinkedList<>();
durations.add(new PointOfTime(startTimeBucket));
if (startTimeBucket == endTimeBucket) {
return durations;
}
int i = 0;
do {
......
......@@ -47,7 +47,7 @@ GT: '>';
// Literals
NUMBER: Digit+ (DOT Digit+)?;
DURATION: Digit+ ('s' | 'm' | 'h' | 'd' | 'w' | 'y');
DURATION: Digit+ ('ms' | 's' | 'm' | 'h' | 'd' | 'w');
NAME_STRING: NameLetter+;
VALUE_STRING: '\'' .*? '\'' | '"' .*? '"';
......
......@@ -103,7 +103,6 @@ public class PromQLApiHandler {
}
@Get
@Post
@Path("/api/v1/metadata")
public HttpResponse metadata(
@Param("limit") Optional<Integer> limit,
......@@ -174,7 +173,6 @@ public class PromQLApiHandler {
* reserve these param to keep consistent with API protocol.
*/
@Get
@Post
@Path("/api/v1/label/{label_name}/values")
public HttpResponse labelValues(
@Param("label_name") String labelName,
......@@ -290,7 +288,7 @@ public class PromQLApiHandler {
if (time.isPresent()) {
endTS = formatTimestamp2Millis(time.get());
}
long startTS = endTS - 900000; //look back 15m by default
long startTS = endTS - 120000; //look back 2m by default
Duration duration = timestamp2Duration(startTS, endTS);
ExprQueryRsp response = new ExprQueryRsp();
......@@ -349,7 +347,7 @@ public class PromQLApiHandler {
@Param("query") String query,
@Param("start") String start,
@Param("end") String end,
@Param("step") String step,
@Param("step") Optional<String> step,
@Param("timeout") Optional<String> timeout) throws IOException {
long startTS = formatTimestamp2Millis(start);
long endTS = formatTimestamp2Millis(end);
......
......@@ -35,6 +35,8 @@ import org.apache.skywalking.oap.server.core.query.input.Duration;
import org.apache.skywalking.oap.server.core.query.type.MetricsValues;
import org.apache.skywalking.promql.rt.grammar.PromQLParser;
import org.joda.time.DateTime;
import org.joda.time.format.PeriodFormatter;
import org.joda.time.format.PeriodFormatterBuilder;
public class PromOpUtils {
//Adopt skywalking time step.
......@@ -48,11 +50,11 @@ public class PromOpUtils {
long durationValue = endTS - startTS;
if (durationValue < 3600000) {
if (durationValue <= 3600000) {
duration.setStep(Step.MINUTE);
duration.setStart(startDT.toString(DurationUtils.YYYY_MM_DD_HHMM));
duration.setEnd(endDT.toString(DurationUtils.YYYY_MM_DD_HHMM));
} else if (durationValue < 86400000) {
} else if (durationValue <= 86400000) {
duration.setStep(Step.HOUR);
duration.setStart(startDT.toString(DurationUtils.YYYY_MM_DD_HH));
duration.setEnd(endDT.toString(DurationUtils.YYYY_MM_DD_HH));
......@@ -252,4 +254,22 @@ public class PromOpUtils {
DecimalFormat format = new DecimalFormat("#.##");
return format.format(v);
}
/**
* Format a duration string to an org.joda.time.Duration.
* Year and month are not supported because their lengths vary.
* @param duration such as "5d", "30m", "5d30m", "1w", "1w5d"
* @return org.joda.time.Duration
*/
public static org.joda.time.Duration formatDuration(String duration) {
PeriodFormatter f = new PeriodFormatterBuilder()
.appendWeeks().appendSuffix("w")
.appendDays().appendSuffix("d")
.appendHours().appendSuffix("h")
.appendMinutes().appendSuffix("m")
.appendSeconds().appendSuffix("s")
.appendMillis().appendSuffix("ms")
.toFormatter();
return f.parsePeriod(duration).toStandardDuration();
}
}
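The joda-time formatter above accepts concatenated w/d/h/m/s/ms components. A dependency-free sketch of the same parsing rule (class and method names are illustrative; the real implementation is the PeriodFormatter shown above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PromDurationDemo {
    // Optional w/d/h/m/s/ms components in order, e.g. "30m", "1w5d", "500ms".
    // The (?!s) lookahead keeps "m" (minutes) from consuming the "m" of "ms".
    static long parseMillis(String duration) {
        Pattern p = Pattern.compile(
            "(?:(\\d+)w)?(?:(\\d+)d)?(?:(\\d+)h)?(?:(\\d+)m(?!s))?(?:(\\d+)s)?(?:(\\d+)ms)?");
        Matcher m = p.matcher(duration);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unsupported duration: " + duration);
        }
        long[] unitMillis = {604_800_000L, 86_400_000L, 3_600_000L, 60_000L, 1_000L, 1L};
        long millis = 0;
        for (int g = 1; g <= 6; g++) {
            if (m.group(g) != null) {
                millis += Long.parseLong(m.group(g)) * unitMillis[g - 1];
            }
        }
        return millis;
    }

    public static void main(String[] args) {
        System.out.println(parseMillis("30m"));  // 1800000
        System.out.println(parseMillis("1w5d")); // 1036800000
    }
}
```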
@@ -58,6 +58,7 @@ import org.apache.skywalking.promql.rt.grammar.PromQLParser;
import org.apache.skywalking.promql.rt.grammar.PromQLParserBaseVisitor;
import static org.apache.skywalking.oap.query.promql.rt.PromOpUtils.buildMatrixValues;
import static org.apache.skywalking.oap.query.promql.rt.PromOpUtils.formatDuration;
import static org.apache.skywalking.oap.query.promql.rt.PromOpUtils.matrixBinaryOp;
import static org.apache.skywalking.oap.query.promql.rt.PromOpUtils.matrixCompareOp;
import static org.apache.skywalking.oap.query.promql.rt.PromOpUtils.matrixScalarBinaryOp;
@@ -247,9 +248,9 @@ public class PromQLExprQueryVisitor extends PromQLParserBaseVisitor<ParseResult>
return result;
}
-String timeRange = "PT" + ctx.DURATION().getText().toUpperCase();
+String timeRange = ctx.DURATION().getText().toUpperCase();
long endTS = System.currentTimeMillis();
-long startTS = endTS - java.time.Duration.parse(timeRange).toMillis();
+long startTS = endTS - formatDuration(timeRange).getMillis();
duration = timestamp2Duration(startTS, endTS);
ParseResult result = visit(ctx.metricInstant());
result.setRangeExpression(true);
@@ -259,14 +260,11 @@ public class PromQLExprQueryVisitor extends PromQLParserBaseVisitor<ParseResult>
private void checkLabels(Map<LabelName, String> labelMap,
LabelName... labelNames) throws IllegalExpressionException {
StringBuilder missLabels = new StringBuilder();
+int j = 0;
for (int i = 0; i < labelNames.length; i++) {
String labelName = labelNames[i].toString();
if (labelMap.get(labelNames[i]) == null) {
-if (i == 0) {
-missLabels.append(labelName);
-} else {
-missLabels.append(",").append(labelName);
-}
+missLabels.append(j++ > 0 ? "," : "").append(labelName);
}
}
String result = missLabels.toString();
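The rewritten loop counts appended names with a separate `j`, which fixes the old index-based check: when the first required label was present but a later one was missing, the old code produced a leading comma. A standalone sketch of the join behavior (class and method names are hypothetical):

```java
public class MissingLabelsDemo {
    // j counts how many missing names were appended so far; the comma is
    // prepended only from the second appended name onward.
    static String joinMissing(java.util.List<String> required, java.util.List<String> present) {
        StringBuilder missLabels = new StringBuilder();
        int j = 0;
        for (String name : required) {
            if (!present.contains(name)) {
                missLabels.append(j++ > 0 ? "," : "").append(name);
            }
        }
        return missLabels.toString();
    }

    public static void main(String[] args) {
        // With the old index-based check this printed ",parent_service,order".
        System.out.println(joinMissing(
            java.util.List.of("top_n", "parent_service", "order"),
            java.util.List.of("top_n")));  // parent_service,order
    }
}
```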
@@ -425,7 +423,8 @@ public class PromQLExprQueryVisitor extends PromQLParserBaseVisitor<ParseResult>
Layer layer,
Scope scope,
Map<LabelName, String> labelMap) throws IllegalExpressionException {
-checkLabels(labelMap, LabelName.TOP_N, LabelName.PARENT_SERVICE, LabelName.ORDER);
+//sortMetrics query ParentService could be null.
+checkLabels(labelMap, LabelName.TOP_N, LabelName.ORDER);
TopNCondition topNCondition = new TopNCondition();
topNCondition.setName(metricName);
topNCondition.setParentService(labelMap.get(LabelName.PARENT_SERVICE));
......
@@ -102,7 +102,7 @@ public class MetricsQuery implements GraphQLQueryResolver {
* Metrics definition metadata query. Responds with the metrics type, which determines the suitable query methods.
*/
public MetricsType typeOfMetrics(String name) throws IOException {
-return getMetricsMetadataQueryService().typeOfMetrics(name);
+return MetricsMetadataQueryService.typeOfMetrics(name);
}
/**
......
@@ -29,4 +29,5 @@ public class AWSFirehoseReceiverModuleConfig extends ModuleConfig {
private long idleTimeOut = 30000;
private int acceptQueueSize = 0;
private int maxRequestHeaderSize = 8192;
private String firehoseAccessKey;
}
@@ -82,7 +82,7 @@ public class AWSFirehoseReceiverModuleProvider extends ModuleProvider {
.getService(
OpenTelemetryMetricRequestProcessor.class);
httpServer.addHandler(
-new FirehoseHTTPHandler(processor),
+new FirehoseHTTPHandler(processor, moduleConfig.getFirehoseAccessKey()),
Collections.singletonList(HttpMethod.POST)
);
}
......
@@ -21,6 +21,8 @@ import com.google.protobuf.InvalidProtocolBufferException;
import com.linecorp.armeria.common.HttpResponse;
import com.linecorp.armeria.common.HttpStatus;
import com.linecorp.armeria.server.annotation.ConsumesJson;
import com.linecorp.armeria.server.annotation.Default;
import com.linecorp.armeria.server.annotation.Header;
import com.linecorp.armeria.server.annotation.Post;
import com.linecorp.armeria.server.annotation.ProducesJson;
import io.opentelemetry.proto.collector.metrics.firehose.v0_7.ExportMetricsServiceRequest;
@@ -28,17 +30,30 @@ import java.io.ByteArrayInputStream;
import java.util.Base64;
import lombok.AllArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.skywalking.oap.server.library.util.StringUtil;
import org.apache.skywalking.oap.server.receiver.otel.otlp.OpenTelemetryMetricRequestProcessor;
@Slf4j
@AllArgsConstructor
public class FirehoseHTTPHandler {
private final OpenTelemetryMetricRequestProcessor openTelemetryMetricRequestProcessor;
private final String firehoseAccessKey;
@Post("/aws/firehose/metrics")
@ConsumesJson
@ProducesJson
-public HttpResponse collectMetrics(final FirehoseReq firehoseReq) {
+public HttpResponse collectMetrics(final FirehoseReq firehoseReq,
+                                   @Default @Header(value = "X-Amz-Firehose-Access-Key") String accessKey) {
if (StringUtil.isNotBlank(firehoseAccessKey) && !firehoseAccessKey.equals(accessKey)) {
return HttpResponse.ofJson(
HttpStatus.UNAUTHORIZED,
new FirehoseRes(firehoseReq.getRequestId(), System.currentTimeMillis(),
"AccessKey incorrect, please check your config"
)
);
}
try {
for (RequestData record : firehoseReq.getRecords()) {
final ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(
......
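The guard above only rejects a request when an access key is configured on the OAP side and the `X-Amz-Firehose-Access-Key` header does not match it; with no key configured, every request passes. A minimal sketch of that decision (class and method names are hypothetical):

```java
public class FirehoseAuthCheckDemo {
    // Mirrors the handler's check:
    // StringUtil.isNotBlank(firehoseAccessKey) && !firehoseAccessKey.equals(accessKey)
    static boolean isRejected(String configuredKey, String headerValue) {
        boolean keyConfigured = configuredKey != null && !configuredKey.trim().isEmpty();
        return keyConfigured && !configuredKey.equals(headerValue);
    }

    public static void main(String[] args) {
        System.out.println(isRejected("", "anything"));      // false: no key configured
        System.out.println(isRejected("secret", "secret"));  // false: key matches
        System.out.println(isRejected("secret", "wrong"));   // true: answered with 401
    }
}
```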
@@ -24,12 +24,14 @@ import io.opentelemetry.proto.common.v1.KeyValue;
import io.opentelemetry.proto.metrics.v1.Sum;
import io.opentelemetry.proto.metrics.v1.SummaryDataPoint;
import io.vavr.Function1;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Stream;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.skywalking.oap.meter.analyzer.MetricConvert;
@@ -161,6 +163,57 @@ public class OpenTelemetryMetricRequestProcessor implements Service {
return result;
}
/**
* ExponentialHistogram data points are an alternate representation to the Histogram data point in the OpenTelemetry
* metric format (https://opentelemetry.io/docs/reference/specification/metrics/data-model/#exponentialhistogram).
* It uses scale, offset and bucket index to calculate the bound. First, derive the base from the scale by the
* formula: base = 2**(2**(-scale)). Then the upperBound of a specific bucket can be calculated by the formula:
* base**(offset+index+1). The calculation above applies to positive buckets. For the negative case, values are
* mapped by their absolute value into the negative range using the same scale as the positive range, so the
* upperBound should be calculated as -base**(offset+index).
*
* The zero_count field is ignored temporarily,
* because the zero_threshold could even overlap the existing bucket scopes.
*
* @param positiveOffset corresponding to positive Buckets' offset in ExponentialHistogramDataPoint
* @param positiveBucketCounts corresponding to positive Buckets' bucket_counts in ExponentialHistogramDataPoint
* @param negativeOffset corresponding to negative Buckets' offset in ExponentialHistogramDataPoint
* @param negativeBucketCounts corresponding to negative Buckets' bucket_counts in ExponentialHistogramDataPoint
* @param scale corresponding to scale in ExponentialHistogramDataPoint
* @return A bucket set for the histogram; the key is a specific bucket's upperBound, and the value is the count of
* items in this bucket that are lower than or equal to the key (upperBound)
*/
private static Map<Double, Long> buildBucketsFromExponentialHistogram(
int positiveOffset, final List<Long> positiveBucketCounts,
int negativeOffset, final List<Long> negativeBucketCounts, int scale) {
final Map<Double, Long> result = new HashMap<>();
double base = Math.pow(2.0, Math.pow(2.0, -scale));
if (base == Double.POSITIVE_INFINITY) {
log.warn("Receive and reject out-of-range ExponentialHistogram data");
return result;
}
double upperBound;
for (int i = 0; i < negativeBucketCounts.size(); i++) {
upperBound = -Math.pow(base, negativeOffset + i);
if (upperBound == Double.NEGATIVE_INFINITY) {
log.warn("Receive and reject out-of-range ExponentialHistogram data");
return new HashMap<>();
}
result.put(upperBound, negativeBucketCounts.get(i));
}
for (int i = 0; i < positiveBucketCounts.size() - 1; i++) {
upperBound = Math.pow(base, positiveOffset + i + 1);
if (upperBound == Double.POSITIVE_INFINITY) {
log.warn("Receive and reject out-of-range ExponentialHistogram data");
return new HashMap<>();
}
result.put(upperBound, positiveBucketCounts.get(i));
}
result.put(Double.POSITIVE_INFINITY, positiveBucketCounts.get(positiveBucketCounts.size() - 1));
return result;
}
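Plugging the unit-test data from this change (scale = 2, positive offset = 10, negative offset = 15) into the formulas from the javadoc reproduces the expected bucket bounds; this standalone sketch only evaluates the math:

```java
public class ExpHistogramBoundsDemo {
    public static void main(String[] args) {
        int scale = 2;
        // base = 2^(2^-scale) = 2^0.25 ≈ 1.1892
        double base = Math.pow(2.0, Math.pow(2.0, -scale));
        // positive buckets, offset 10: upperBound = base^(offset + index + 1)
        System.out.println(Math.pow(base, 10 + 0 + 1)); // ≈ 6.73
        System.out.println(Math.pow(base, 10 + 1 + 1)); // ≈ 8.0
        // negative buckets, offset 15: upperBound = -base^(offset + index)
        System.out.println(-Math.pow(base, 15 + 0));    // ≈ -13.45
        System.out.println(-Math.pow(base, 15 + 1));    // ≈ -16.0
    }
}
```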
// Adapt the OpenTelemetry metrics to SkyWalking metrics
private Stream<? extends Metric> adaptMetrics(
final Map<String, String> nodeLabels,
@@ -187,16 +240,16 @@ public class OpenTelemetryMetricRequestProcessor implements Service {
if (sum
.getAggregationTemporality() == AGGREGATION_TEMPORALITY_DELTA) {
return sum.getDataPointsList().stream()
.map(point -> new Gauge(
metric.getName(),
mergeLabels(
nodeLabels,
buildLabels(point.getAttributesList())
),
point.hasAsDouble() ? point.getAsDouble()
: point.getAsInt(),
point.getTimeUnixNano() / 1000000
));
}
if (sum.getIsMonotonic()) {
return sum.getDataPointsList().stream()
@@ -241,6 +294,26 @@ public class OpenTelemetryMetricRequestProcessor implements Service {
point.getTimeUnixNano() / 1000000
));
}
if (metric.hasExponentialHistogram()) {
return metric.getExponentialHistogram().getDataPointsList().stream()
.map(point -> new Histogram(
metric.getName(),
mergeLabels(
nodeLabels,
buildLabels(point.getAttributesList())
),
point.getCount(),
point.getSum(),
buildBucketsFromExponentialHistogram(
point.getPositive().getOffset(),
point.getPositive().getBucketCountsList(),
point.getNegative().getOffset(),
point.getNegative().getBucketCountsList(),
point.getScale()
),
point.getTimeUnixNano() / 1000000
));
}
if (metric.hasSummary()) {
return metric.getSummary().getDataPointsList().stream()
.map(point -> new Summary(
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.skywalking.oap.server.receiver.otel.otlp;
import io.opentelemetry.proto.metrics.v1.ExponentialHistogram;
import io.opentelemetry.proto.metrics.v1.ExponentialHistogramDataPoint;
import io.opentelemetry.proto.metrics.v1.Metric;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.apache.skywalking.oap.server.library.module.ModuleManager;
import org.apache.skywalking.oap.server.library.util.prometheus.metrics.Histogram;
import org.apache.skywalking.oap.server.receiver.otel.OtelMetricReceiverConfig;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;
public class OpenTelemetryMetricRequestProcessorTest {
private OtelMetricReceiverConfig config;
private ModuleManager manager;
private OpenTelemetryMetricRequestProcessor metricRequestProcessor;
private Map<String, String> nodeLabels;
@BeforeEach
public void setUp() {
manager = new ModuleManager();
config = new OtelMetricReceiverConfig();
metricRequestProcessor = new OpenTelemetryMetricRequestProcessor(manager, config);
nodeLabels = new HashMap<>();
}
@Test
public void testAdaptExponentialHistogram() throws NoSuchMethodException, InvocationTargetException, IllegalAccessException {
Class<OpenTelemetryMetricRequestProcessor> clazz = OpenTelemetryMetricRequestProcessor.class;
Method adaptMetricsMethod = clazz.getDeclaredMethod("adaptMetrics", Map.class, Metric.class);
adaptMetricsMethod.setAccessible(true);
// number is 4; 7, 7.5; 8.5, 8.7, 9.4
var positiveBuckets = ExponentialHistogramDataPoint.Buckets.newBuilder()
    .setOffset(10)
    .addBucketCounts(1) // (0, 6.72]
    .addBucketCounts(2) // (6.72, 8]
    .addBucketCounts(3) // (8, 9.51]
    .build();
// number is -14, -14.5, -15; -18; -21, -26
var negativeBuckets = ExponentialHistogramDataPoint.Buckets.newBuilder()
    .setOffset(15)
    .addBucketCounts(3) // (-16, -13.45]
    .addBucketCounts(1) // (-19.02, -16]
    .addBucketCounts(2) // (-INFINITY, -19.02]
    .build();
var dataPoint = ExponentialHistogramDataPoint.newBuilder()
.setCount(12)
.setSum(-63.4)
.setScale(2)
.setPositive(positiveBuckets)
.setNegative(negativeBuckets)
.setTimeUnixNano(1000000)
.build();
ExponentialHistogram exponentialHistogram = ExponentialHistogram.newBuilder()
.addDataPoints(dataPoint)
.build();
Metric metric = Metric.newBuilder()
.setName("test_metric")
.setExponentialHistogram(exponentialHistogram)
.build();
Stream<Histogram> stream = (Stream<Histogram>) adaptMetricsMethod.invoke(
metricRequestProcessor, nodeLabels, metric);
List<Histogram> list = stream.collect(Collectors.toList());
Histogram histogramMetric = list.get(0);
assertEquals("test_metric", histogramMetric.getName());
assertEquals(1, histogramMetric.getTimestamp());
assertEquals(12, histogramMetric.getSampleCount());
assertEquals(-63.4, histogramMetric.getSampleSum());
// validate the key and value of bucket
double base = Math.pow(2, Math.pow(2, -2));
assertTrue(histogramMetric.getBuckets().containsKey(Math.pow(base, 11)));
assertEquals(1, histogramMetric.getBuckets().get(Math.pow(base, 11)));
assertTrue(histogramMetric.getBuckets().containsKey(Math.pow(base, 12)));
assertEquals(2, histogramMetric.getBuckets().get(Math.pow(base, 12)));
assertTrue(histogramMetric.getBuckets().containsKey(Double.POSITIVE_INFINITY));
assertEquals(3, histogramMetric.getBuckets().get(Double.POSITIVE_INFINITY));
assertTrue(histogramMetric.getBuckets().containsKey(-Math.pow(base, 15)));
assertEquals(3, histogramMetric.getBuckets().get(-Math.pow(base, 15)));
assertTrue(histogramMetric.getBuckets().containsKey(-Math.pow(base, 16)));
assertEquals(1, histogramMetric.getBuckets().get(-Math.pow(base, 16)));
assertTrue(histogramMetric.getBuckets().containsKey(-Math.pow(base, 17)));
assertEquals(2, histogramMetric.getBuckets().get(-Math.pow(base, 17)));
}
}
@@ -435,12 +435,13 @@ query-zipkin:
# Default look back on the UI for search traces, 15 minutes in millis
uiDefaultLookback: ${SW_QUERY_ZIPKIN_UI_DEFAULT_LOOKBACK:900000}
# This module is for the PromQL API.
promql:
selector: ${SW_PROMQL:default}
default:
# For HTTP server
restHost: ${SW_PROMQL_REST_HOST:0.0.0.0}
-restPort: ${SW_PROMQL_REST_PORT:9099}
+restPort: ${SW_PROMQL_REST_PORT:9090}
restContextPath: ${SW_PROMQL_REST_CONTEXT_PATH:/}
restMaxThreads: ${SW_PROMQL_REST_MAX_THREADS:200}
restIdleTimeOut: ${SW_PROMQL_REST_IDLE_TIMEOUT:30000}
@@ -566,3 +567,4 @@ aws-firehose:
idleTimeOut: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_IDLE_TIME_OUT:30000}
acceptQueueSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_ACCEPT_QUEUE_SIZE:0}
maxRequestHeaderSize: ${SW_RECEIVER_AWS_FIREHOSE_HTTP_MAX_REQUEST_HEADER_SIZE:8192}
firehoseAccessKey: ${SW_RECEIVER_AWS_FIREHOSE_ACCESS_KEY:}
-Subproject commit 72060f8227ee3db4a8c12d31ece46404be760c69
+Subproject commit 1be572a95f864bffc4cab3943657187fd650428b