Commit 9149de2a authored by wu-sheng, committed by kezhenxu94

Support multiple linear values and merging p50/75/90/95/99 into percentile (#4214)

* Support new percentile func with new alarm and exporter for this new func.

* Fix e2e and OAL script

* Fix wrong column.

* Fix percentile bug and oal engine bug.

* Update query protocol and add percentile test case

* Support new query

* Adopt GraphQL requirement

* Fix wrong type cast.

* Fix query in H2 and ES.

* Fix docs and comments.

* Fix an e2e compile issue

* Fix javadoc issue and e2e test issue.

* Change CPM to Apdex in TTL test.

* Fix OAL for TTL e2e

* Add metrics query for service percentile.

* Fix OAL engine bug. Method deserialize is not working when more than two field types are IntKeyLongValueHashMap

* Support multiple IntKeyLongValueHashMap fields in remote. About serialize/deserialize methods.

* Fix graphql statement error in e2e.

* Fix serialize not working and add generated serialize/deserialize of percentile into test cases.

* Fix test case format

* Remove generated code test.

* Fix failed e2e test

* Use avg resp time to apdex in the TTL test.

* ADD multiple linear metrics check for endpoint in e2e cluster.

* Support `-` to represent no threshold and doc of alarm about this.

* Move break to right place.

* Fix wrong break(s)

* Fix break and add a test case for multiple values alarm.

* Fix format.

* Add more doc for this new feature and GraphQL query protocol.
Co-authored-by: Jared Tan <jian.tan@daocloud.io>
Co-authored-by: kezhenxu94 <kezhenxu94@163.com>
Parent dcb71cde
......@@ -64,7 +64,7 @@ jobs:
restore-keys: |
${{ runner.os }}-maven-
- name: Set environment
run: export MAVEN_OPTS='-Dmaven.repo.local=~/.m2/repository -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx3g'
run: export MAVEN_OPTS='-XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx3g'
- name: Compile & Install Test Codes
run: |
./mvnw checkstyle:check apache-rat:check
......
......@@ -37,15 +37,15 @@ rules:
# How many times of checks, the alarm keeps silence after alarm triggered, default as same as period.
silence-period: 3
message: Successful rate of service {name} is lower than 80% in 2 minutes of last 10 minutes
service_p90_sla_rule:
service_resp_time_percentile_rule:
# Metrics value need to be long, double or int
metrics-name: service_p90
metrics-name: service_percentile
op: ">"
threshold: 1000
threshold: 1000,1000,1000,1000,1000
period: 10
count: 3
silence-period: 5
message: 90% response time of service {name} is more than 1000ms in 3 minutes of last 10 minutes
message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000
service_instance_resp_time_rule:
metrics-name: service_instance_resp_time
op: ">"
......
......@@ -57,11 +57,8 @@ In this case, all input are requests of each endpoint, condition is `endpoint.st
> Service_Calls_Sum = from(Service.*).sum();
In this case, calls of each service.
- `p99`, `p95`, `p90`, `p75`, `p50`. Read [p99 in WIKI](https://en.wikipedia.org/wiki/Percentile)
> All_p99 = from(All.latency).p99(10);
In this case, p99 value of all incoming requests. The parameter is the precision of p99 latency calculation, such as in above case, 120ms and 124 are considered same.
- `thermodynamic`. Read [Heatmap in WIKI](https://en.wikipedia.org/wiki/Heat_map))
- `thermodynamic`. Read [Heatmap in WIKI](https://en.wikipedia.org/wiki/Heat_map)
> All_heatmap = from(All.latency).thermodynamic(100, 20);
In this case, thermodynamic heatmap of all incoming requests.
......@@ -75,6 +72,16 @@ In this case, apdex score of each service.
The parameter (1) is the service name, which affects the Apdex threshold value loaded from service-apdex-threshold.yml in the config folder.
The parameter (2) is the status of this request. The status (success/failure) affects the Apdex calculation.
- `p99`, `p95`, `p90`, `p75`, `p50`. Read [percentile in WIKI](https://en.wikipedia.org/wiki/Percentile)
> all_percentile = from(All.latency).percentile(10);
**percentile** is the first multiple-value metrics function, introduced in 7.0.0. As it has multiple values, it can be queried through the `getMultipleLinearIntValues` GraphQL query.
In this case, the `p99`, `p95`, `p90`, `p75` and `p50` of all incoming requests. The parameter is the precision of the latency calculation; in the above case, 120ms and 124ms are considered the same (see the bucketing sketch below).
Before 7.0.0, the `p99`, `p95`, `p90`, `p75` and `p50` func(s) were used to calculate these metrics separately. They are still supported in 7.x, but are no longer recommended and are not included in the official OAL script.
> All_p99 = from(All.latency).p99(10);
In this case, the p99 value of all incoming requests. The parameter is the precision of the p99 latency calculation; in the above case, 120ms and 124ms are considered the same.
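To make the precision parameter concrete, here is a minimal sketch (an assumption based on the bucketing used by the new `PercentileMetrics` function introduced in this change, not an exact excerpt): latencies are grouped into buckets of `precision` milliseconds before the percentiles are derived.

```java
// Hedged sketch: percentile(10) groups latencies into 10ms buckets (value / precision).
int precision = 10;              // the argument of percentile(10)
int bucket120 = 120 / precision; // -> 12
int bucket124 = 124 / precision; // -> 12, same bucket, so 120ms and 124ms count as the same latency
int bucket131 = 131 / precision; // -> 13, a different bucket
```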
## Metrics name
The metrics name is used by the storage implementor, alarm and query modules. The type inference is supported by the core.
......@@ -101,9 +108,8 @@ serv_Endpoint_p99 = from(Endpoint.latency).filter(name like ("serv%")).summary(0
// Calculate the avg response time of each Endpoint
Endpoint_avg = from(Endpoint.latency).avg()
// Calculate the histogram of each Endpoint by 50 ms steps.
// Always thermodynamic diagram in UI matches this metrics.
Endpoint_histogram = from(Endpoint.latency).histogram(50)
// Calculate the p50, p75, p90, p95 and p99 of each Endpoint, with 10ms precision.
Endpoint_percentile = from(Endpoint.latency).percentile(10)
// Calculate the percentage of responses whose status is true, for each endpoint.
Endpoint_success = from(Endpoint.*).filter(status = "true").percent()
......
......@@ -61,79 +61,4 @@ Backend is based on modularization principle, so very easy to extend a new recei
## Query Protocol
Query protocol follows GraphQL grammar and provides data query capabilities, which depend on your analysis metrics.
There are 5 dimensionality data is provided.
1. Metadata. Metadata includes the brief info of the whole under monitoring services and their instances, endpoints, etc.
Use multiple ways to query this meta data.
1. Topology. Show the topology and dependency graph of services or endpoints. Including direct relationship or global map.
1. Metrics. Metrics query targets all the objects defined in [OAL script](../concepts-and-designs/oal.md). You could get the
metrics data in linear or thermodynamic matrix formats based on the aggregation functions in script.
1. Aggregation. Aggregation query means the metrics data need a secondary aggregation in query stage, which makes the query
interfaces have some different arguments. Such as, `TopN` list of services is a very typical aggregation query,
metrics stream aggregation just calculates the metrics values of each service, but the expected list needs ordering metrics data
by the values.
1. Trace. Query distributed traces by this.
1. Alarm. Through alarm query, you can have alarm trend and details.
The actual query GraphQL scrips could be found inside `query-protocol` folder in [here](../../../oap-server/server-query-plugin/query-graphql-plugin/src/main/resources).
Here is the list of all existing metrics names, based on [official_analysis.oal](../../../oap-server/server-bootstrap/src/main/resources/official_analysis.oal)
**Global metrics**
- all_p99, p99 response time of all services
- all_p95
- all_p90
- all_p75
- all_p70
- all_heatmap, the response time heatmap of all services
**Service metrics**
- service_resp_time, avg response time of service
- service_sla, successful rate of service
- service_cpm, calls per minute of service
- service_p99, p99 response time of service
- service_p95
- service_p90
- service_p75
- service_p50
**Service instance metrics**
- service_instance_sla, successful rate of service instance
- service_instance_resp_time, avg response time of service instance
- service_instance_cpm, calls per minute of service instance
**Endpoint metrics**
- endpoint_cpm, calls per minute of endpoint
- endpoint_avg, avg response time of endpoint
- endpoint_sla, successful rate of endpoint
- endpoint_p99, p99 response time of endpoint
- endpoint_p95
- endpoint_p90
- endpoint_p75
- endpoint_p50
**JVM metrics**, JVM related metrics, only work when javaagent is active
- instance_jvm_cpu
- instance_jvm_memory_heap
- instance_jvm_memory_noheap
- instance_jvm_memory_heap_max
- instance_jvm_memory_noheap_max
- instance_jvm_young_gc_time
- instance_jvm_old_gc_time
- instance_jvm_young_gc_count
- instance_jvm_old_gc_count
**Service relation metrics**, represents the metrics of calls between service.
The metrics ID could be
got in topology query only.
- service_relation_client_cpm, calls per minute detected at client side
- service_relation_server_cpm, calls per minute detected at server side
- service_relation_client_call_sla, successful rate detected at client side
- service_relation_server_call_sla, successful rate detected at server side
- service_relation_client_resp_time, avg response time detected at client side
- service_relation_server_resp_time, avg response time detected at server side
**Endpoint relation metrics**, represents the metrics between dependency endpoints. Only work when tracing agent.
The metrics ID could be got in topology query only.
- endpoint_relation_cpm
- endpoint_relation_resp_time
Read [query protocol doc](query-protocol.md) for more details.
# Query Protocol
Query Protocol defines a set of APIs in GraphQL grammar to provide data query and interactive capabilities with the SkyWalking
native visualization tool or 3rd party systems, including Web UI, CLI or private systems.
The official query protocol repository is https://github.com/apache/skywalking-query-protocol.
### Metadata
Metadata includes brief info of all monitored services and their instances, endpoints, etc.
There are multiple ways to query this metadata.
```graphql
extend type Query {
getGlobalBrief(duration: Duration!): ClusterBrief
# Normal service related metainfo
getAllServices(duration: Duration!): [Service!]!
searchServices(duration: Duration!, keyword: String!): [Service!]!
searchService(serviceCode: String!): Service
# Fetch all services of Browser type
getAllBrowserServices(duration: Duration!): [Service!]!
# Service instance query
getServiceInstances(duration: Duration!, serviceId: ID!): [ServiceInstance!]!
# Endpoint query
# Considering there are huge numbers of endpoints,
# the endpoint owner's service id, keyword and limit must be used to filter the query.
searchEndpoint(keyword: String!, serviceId: ID!, limit: Int!): [Endpoint!]!
getEndpointInfo(endpointId: ID!): EndpointInfo
# Database related meta info.
getAllDatabases(duration: Duration!): [Database!]!
getTimeInfo: TimeInfo
}
```
### Topology
Show the topology and dependency graph of services or endpoints, including direct relationships or the global map.
```graphql
extend type Query {
# Query the global topology
getGlobalTopology(duration: Duration!): Topology
# Query the topology, based on the given service
getServiceTopology(serviceId: ID!, duration: Duration!): Topology
# Query the instance topology, based on the given clientServiceId and serverServiceId
getServiceInstanceTopology(clientServiceId: ID!, serverServiceId: ID!, duration: Duration!): ServiceInstanceTopology
# Query the topology, based on the given endpoint
getEndpointTopology(endpointId: ID!, duration: Duration!): Topology
}
```
### Metrics
Metrics query targets all the objects defined in [OAL script](../concepts-and-designs/oal.md). You could get the
metrics data in linear or thermodynamic matrix formats based on the aggregation functions in the script.
Three types of metrics can be queried:
1. Single value. Most default metrics are single value; consider this the default. `getValues` and `getLinearIntValues` are suitable for these.
1. Multiple value. One metric defined in OAL includes multiple value calculations. Use `getMultipleLinearIntValues` to get all values. `percentile` is a typical multiple-value func in OAL.
1. Heatmap value. Read [Heatmap in WIKI](https://en.wikipedia.org/wiki/Heat_map) for details. `thermodynamic` is the only such OAL func. Use `getThermodynamic` to get the values.
```graphql
extend type Query {
getValues(metric: BatchMetricConditions!, duration: Duration!): IntValues
getLinearIntValues(metric: MetricCondition!, duration: Duration!): IntValues
# Query metrics that include multiple values, and format them as multiple linear lines.
# The sequence of these lines is based on the calculation func in OAL.
# For example, use this to query the result of the percentile func (p50/p75/p90/p95/p99) in OAL,
# then five lines will be returned, and p50 is the first element of the return value.
getMultipleLinearIntValues(metric: MetricCondition!, numOfLinear: Int!, duration: Duration!): [IntValues!]!
getThermodynamic(metric: MetricCondition!, duration: Duration!): Thermodynamic
}
```
Metrics are defined in the [official_analysis.oal](../../../oap-server/server-bootstrap/src/main/resources/official_analysis.oal).
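As a rough illustration of how a multi-value result is consumed, the sketch below calls the `MetricQueryService#getMultipleLinearIntValues` method added in this change; the variable names, `Downsampling.Minute` and the surrounding context are assumptions for the example, and the index-to-percentile mapping follows the protocol comment above (p50 is the first line).

```java
// Hypothetical caller-side sketch (not part of this commit): one IntValues per percentile line, ordered p50..p99.
String[] labels = {"p50", "p75", "p90", "p95", "p99"};
List<IntValues> lines = metricQueryService.getMultipleLinearIntValues(
    "service_percentile",   // metrics name produced by the OAL percentile func
    serviceId,               // entity id, e.g. a service id
    labels.length,           // numOfLinear: five percentile lines
    Downsampling.Minute, startTB, endTB);
for (int i = 0; i < labels.length; i++) {
    // lines.get(i) is the time series for labels[i]; p50 comes first, as noted above.
}
```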
### Aggregation
Aggregation query means the metrics data needs a secondary aggregation at query stage, which gives the query
interfaces some different arguments. For example, a `TopN` list of services is a typical aggregation query:
metrics stream aggregation just calculates the metrics values of each service, but the expected list needs the metrics data
ordered by value.
Aggregation query is for single value metrics only.
```graphql
# The aggregation query is different from the metric query.
# All aggregation queries require the backend and/or storage to do aggregation at query time.
extend type Query {
# TopN is an aggregation query.
getServiceTopN(name: String!, topN: Int!, duration: Duration!, order: Order!): [TopNEntity!]!
getAllServiceInstanceTopN(name: String!, topN: Int!, duration: Duration!, order: Order!): [TopNEntity!]!
getServiceInstanceTopN(serviceId: ID!, name: String!, topN: Int!, duration: Duration!, order: Order!): [TopNEntity!]!
getAllEndpointTopN(name: String!, topN: Int!, duration: Duration!, order: Order!): [TopNEntity!]!
getEndpointTopN(serviceId: ID!, name: String!, topN: Int!, duration: Duration!, order: Order!): [TopNEntity!]!
}
```
### Others
The following queries are for specific features, including trace, alarm and profile.
1. Trace. Query distributed traces by this.
1. Alarm. Through alarm query, you can have alarm trend and details.
The actual GraphQL query scripts can be found inside the `query-protocol` folder [here](../../../oap-server/server-query-plugin/query-graphql-plugin/src/main/resources).
......@@ -13,7 +13,10 @@ Alarm rule is constituted by following keys
endpoint name.
- **Exclude names**. The following entity names are excluded in this rule. Such as Service name,
endpoint name.
- **Threshold**. The target value.
- **Threshold**. The target value.
For multiple-value metrics, such as **percentile**, the threshold is an array, described like `value1, value2, value3, value4, value5`.
Each value is the threshold for the corresponding value of the metrics. Set a value to `-` if you don't want to trigger the alarm on that value.
For example, in **percentile**, `value1` is the threshold of P50, and `-, -, value3, value4, value5` means there is no threshold for P50 and P75 in the percentile alarm rule.
- **OP**. Operator, supports `>`, `<`, `=`. Contributions of more operators are welcome.
- **Period**. How long the alarm rule should be checked for. This is a time window, which goes with the
backend deployment env time.
......@@ -38,7 +41,6 @@ rules:
count: 3
# How many times of checks, the alarm keeps silence after alarm triggered, default as same as period.
silence-period: 10
service_percent_rule:
metrics-name: service_percent
# [Optional] Default, match all services in this metrics
......@@ -47,17 +49,28 @@ rules:
- service_b
exclude-names:
- service_c
# Single value metrics threshold.
threshold: 85
op: <
period: 10
count: 4
service_resp_time_percentile_rule:
# Metrics value need to be long, double or int
metrics-name: service_percentile
op: ">"
# Multiple value metrics threshold. Thresholds for P50, P75, P90, P95, P99.
threshold: 1000,1000,1000,1000,1000
period: 10
count: 3
silence-period: 5
message: Percentile response time of service {name} alarm in 3 minutes of last 10 minutes, due to more than one condition of p50 > 1000, p75 > 1000, p90 > 1000, p95 > 1000, p99 > 1000
```
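For readers who want to see the `-` handling concretely, here is a minimal sketch of how a multiple-value threshold string is interpreted; it mirrors the `Threshold` class changed in this commit, but it is an illustration rather than the exact implementation.

```java
// Sketch: "-" means "no threshold for this position" and is kept as null, so the check skips it.
String threshold = "1000,1000,-,1000,1000";   // thresholds for P50, P75, P90, P95, P99
String[] items = threshold.split(",");
Integer[] parsed = new Integer[items.length];
for (int i = 0; i < items.length; i++) {
    String item = items[i].trim();
    parsed[i] = "-".equals(item) ? null : Integer.parseInt(item);
}
// parsed == {1000, 1000, null, 1000, 1000}: the P90 value can never trigger this rule.
```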
### Default alarm rules
We provide a default `alarm-setting.yml` in our distribution for convenience, which includes the following rules:
1. Service average response time over 1s in last 3 minutes.
1. Service success rate lower than 80% in last 2 minutes.
1. Service 90% response time is over 1s in last 3 minutes
1. Percentile of service response time is over 1s in last 3 minutes
1. Service Instance average response time over 1s in last 2 minutes.
1. Endpoint average response time over 1s in last 2 minutes.
......
......@@ -27,6 +27,7 @@ message ExportMetricValue {
int64 timeBucket = 5;
int64 longValue = 6;
double doubleValue = 7;
repeated int64 longValues = 8;
}
message SubscriptionsResp {
......@@ -36,6 +37,7 @@ message SubscriptionsResp {
enum ValueType {
LONG = 0;
DOUBLE = 1;
MULTI_LONG = 2;
}
message SubscriptionReq {
......
# TTL
In SkyWalking, there are two types of observability data, besides metadata.
1. Record, including trace and alarm. Maybe log in the future.
1. Metric, including such as p99/p95/p90/p75/p50, heatmap, success rate, cpm(rpm) etc.
1. Metric, including percentile, heatmap, success rate, cpm (rpm), etc.
Metrics are separated into minute/hour/day/month dimensions in storage, as different indexes or tables.
You have the following settings for the different types.
......
......@@ -117,6 +117,12 @@ public class GRPCExporter extends MetricFormatter implements MetricValuesExportS
double value = ((DoubleValueHolder)metrics).getValue();
builder.setDoubleValue(value);
builder.setType(ValueType.DOUBLE);
} else if (metrics instanceof MultiIntValuesHolder) {
int[] values = ((MultiIntValuesHolder)metrics).getValues();
for (int value : values) {
builder.addLongValues(value);
}
builder.setType(ValueType.MULTI_LONG);
} else {
return;
}
......
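On the receiving side of the exporter, a subscriber could distinguish the new multi-value type roughly as below; this is a hedged sketch that assumes the standard protobuf-generated accessors for the `longValues` and `type` fields of `ExportMetricValue`, and it is not code from this commit.

```java
// Hypothetical exporter-client sketch for the new MULTI_LONG value type.
if (exportMetricValue.getType() == ValueType.MULTI_LONG) {
    java.util.List<Long> values = exportMetricValue.getLongValuesList(); // p50..p99 for percentile metrics
    // handle the multi-value metric here
} else if (exportMetricValue.getType() == ValueType.LONG) {
    long value = exportMetricValue.getLongValue();
    // handle the single-value metric here
}
```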
......@@ -38,6 +38,7 @@ message ExportMetricValue {
int64 timeBucket = 5;
int64 longValue = 6;
double doubleValue = 7;
repeated int64 longValues = 8;
}
message SubscriptionsResp {
......@@ -47,6 +48,7 @@ message SubscriptionsResp {
enum ValueType {
LONG = 0;
DOUBLE = 1;
MULTI_LONG = 2;
}
message SubscriptionReq {
......
......@@ -15,13 +15,12 @@ public void deserialize(org.apache.skywalking.oap.server.core.remote.grpc.proto.
${field.setter}(remoteData.getDataIntegers(${field?index}));
</#list>
java.util.Iterator iterator;
<#list serializeFields.intKeyLongValueHashMapFields as field>
setDetailGroup(new org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValueHashMap(30));
java.util.Iterator iterator = remoteData.getDataIntLongPairListList().iterator();
iterator = remoteData.getDataLists(${field?index}).getValueList().iterator();
while (iterator.hasNext()) {
org.apache.skywalking.oap.server.core.remote.grpc.proto.IntKeyLongValuePair element = (org.apache.skywalking.oap.server.core.remote.grpc.proto.IntKeyLongValuePair)(iterator.next());
super.getDetailGroup().put(new Integer(element.getKey()), new org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue(element.getKey(), element.getValue()));
super.${field.getter}().put(new Integer(element.getKey()), new org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue(element.getKey(), element.getValue()));
}
</#list>
}
\ No newline at end of file
......@@ -15,11 +15,15 @@ public org.apache.skywalking.oap.server.core.remote.grpc.proto.RemoteData.Builde
<#list serializeFields.intFields as field>
remoteBuilder.addDataIntegers(${field.getter}());
</#list>
java.util.Iterator iterator;
org.apache.skywalking.oap.server.core.remote.grpc.proto.DataIntLongPairList.Builder pairListBuilder;
<#list serializeFields.intKeyLongValueHashMapFields as field>
java.util.Iterator iterator = super.getDetailGroup().values().iterator();
iterator = super.${field.getter}().values().iterator();
pairListBuilder = org.apache.skywalking.oap.server.core.remote.grpc.proto.DataIntLongPairList.newBuilder();
while (iterator.hasNext()) {
remoteBuilder.addDataIntLongPairList(((org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue)(iterator.next())).serialize());
pairListBuilder.addValue(((org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue)(iterator.next())).serialize());
}
remoteBuilder.addDataLists(pairListBuilder);
</#list>
return remoteBuilder;
......
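To show what the two templates above expand to, here is a rough sketch of the generated serialize/deserialize code for a metrics class with two `IntKeyLongValueHashMap` fields (for example percentile's `dataset` and `percentileValues`); the getter names follow `PercentileMetrics`, but the expansion itself is an illustration, not the literal generated output.

```java
// Serialize: each IntKeyLongValueHashMap field becomes one DataIntLongPairList entry in RemoteData.dataLists.
iterator = super.getDataset().values().iterator();
pairListBuilder = DataIntLongPairList.newBuilder();
while (iterator.hasNext()) {
    pairListBuilder.addValue(((IntKeyLongValue) iterator.next()).serialize());
}
remoteBuilder.addDataLists(pairListBuilder);

iterator = super.getPercentileValues().values().iterator();
pairListBuilder = DataIntLongPairList.newBuilder();
while (iterator.hasNext()) {
    pairListBuilder.addValue(((IntKeyLongValue) iterator.next()).serialize());
}
remoteBuilder.addDataLists(pairListBuilder);

// Deserialize: the list at the same field index is read back into the matching map.
iterator = remoteData.getDataLists(0).getValueList().iterator();
while (iterator.hasNext()) {
    IntKeyLongValuePair element = (IntKeyLongValuePair) iterator.next();
    super.getDataset().put(element.getKey(), new IntKeyLongValue(element.getKey(), element.getValue()));
}
```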
......@@ -19,5 +19,5 @@
package org.apache.skywalking.oap.server.core.alarm.provider;
public enum MetricsValueType {
LONG, INT, DOUBLE
LONG, INT, DOUBLE, MULTI_INTS
}
......@@ -18,12 +18,19 @@
package org.apache.skywalking.oap.server.core.alarm.provider;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import org.apache.skywalking.oap.server.core.alarm.AlarmMessage;
import org.apache.skywalking.oap.server.core.alarm.MetaInAlarm;
import org.apache.skywalking.oap.server.core.analysis.metrics.DoubleValueHolder;
import org.apache.skywalking.oap.server.core.analysis.metrics.IntValueHolder;
import org.apache.skywalking.oap.server.core.analysis.metrics.LongValueHolder;
import org.apache.skywalking.oap.server.core.analysis.metrics.Metrics;
import org.apache.skywalking.oap.server.core.analysis.metrics.MultiIntValuesHolder;
import org.apache.skywalking.oap.server.library.util.CollectionUtils;
import org.joda.time.LocalDateTime;
import org.joda.time.Minutes;
......@@ -32,13 +39,6 @@ import org.joda.time.format.DateTimeFormatter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
/**
* RunningRule represents each rule in running status. Based on the {@link AlarmRule} definition,
*
......@@ -86,7 +86,8 @@ public class RunningRule {
* Receive metrics result from persistence, after it is saved into storage. In alarm, only minute dimensionality
* metrics are expected to be processed.
*
* @param metrics
* @param meta of input metrics
* @param metrics includes the values.
*/
public void in(MetaInAlarm meta, Metrics metrics) {
if (!meta.getMetricsName().equals(metricsName)) {
......@@ -116,6 +117,9 @@ public class RunningRule {
} else if (metrics instanceof DoubleValueHolder) {
valueType = MetricsValueType.DOUBLE;
threshold.setType(MetricsValueType.DOUBLE);
} else if (metrics instanceof MultiIntValuesHolder) {
valueType = MetricsValueType.MULTI_INTS;
threshold.setType(MetricsValueType.MULTI_INTS);
} else {
return;
}
......@@ -126,8 +130,8 @@ public class RunningRule {
Window window = windows.get(meta);
if (window == null) {
window = new Window(period);
LocalDateTime timebucket = TIME_BUCKET_FORMATTER.parseLocalDateTime(metrics.getTimeBucket() + "");
window.moveTo(timebucket);
LocalDateTime timeBucket = TIME_BUCKET_FORMATTER.parseLocalDateTime(metrics.getTimeBucket() + "");
window.moveTo(timeBucket);
windows.put(meta, window);
}
......@@ -170,11 +174,9 @@ public class RunningRule {
return alarmMessageList;
}
/**
* A metrics window, based on {@link AlarmRule#period}. This window slides with time, just keeps the recent
* N(period) buckets.
* A metrics window, based on AlarmRule#period. This window slides with time, just keeps the recent N(period)
* buckets.
*
* @author wusheng
*/
......@@ -325,7 +327,7 @@ public class RunningRule {
break;
case DOUBLE:
double dvalue = ((DoubleValueHolder)metrics).getValue();
double dexpected = RunningRule.this.threshold.getDoubleThreadhold();
double dexpected = RunningRule.this.threshold.getDoubleThreshold();
switch (op) {
case EQUAL:
// NOTICE: double equal is not reliable in Java,
......@@ -343,6 +345,41 @@ public class RunningRule {
break;
}
break;
case MULTI_INTS:
int[] ivalueArray = ((MultiIntValuesHolder)metrics).getValues();
Integer[] iaexpected = RunningRule.this.threshold.getIntValuesThreshold();
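// Any single value crossing its own threshold counts this time bucket as one match; positions whose threshold is `-` (parsed as null) are skipped.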
MULTI_VALUE_CHECK:
for (int i = 0; i < ivalueArray.length; i++) {
ivalue = ivalueArray[i];
Integer iNullableExpected = 0;
if (iaexpected.length > i) {
iNullableExpected = iaexpected[i];
if (iNullableExpected == null) {
continue;
}
}
switch (op) {
case LESS:
if (ivalue < iNullableExpected) {
matchCount++;
break MULTI_VALUE_CHECK;
}
break;
case GREATER:
if (ivalue > iNullableExpected) {
matchCount++;
break MULTI_VALUE_CHECK;
}
break;
case EQUAL:
if (ivalue == iNullableExpected) {
matchCount++;
break MULTI_VALUE_CHECK;
}
break;
}
}
break;
}
}
......
......@@ -26,12 +26,14 @@ import org.slf4j.LoggerFactory;
*/
public class Threshold {
private static final Logger logger = LoggerFactory.getLogger(Threshold.class);
private static final String NONE_THRESHOLD = "-";
private String alarmRuleName;
private final String threshold;
private int intThreshold;
private double doubleThreadhold;
private double doubleThreshold;
private long longThreshold;
private Integer[] intValuesThreshold;
public Threshold(String alarmRuleName, String threshold) {
this.alarmRuleName = alarmRuleName;
......@@ -42,14 +44,18 @@ public class Threshold {
return intThreshold;
}
public double getDoubleThreadhold() {
return doubleThreadhold;
public double getDoubleThreshold() {
return doubleThreshold;
}
public long getLongThreshold() {
return longThreshold;
}
public Integer[] getIntValuesThreshold() {
return intValuesThreshold;
}
public void setType(MetricsValueType type) {
try {
switch (type) {
......@@ -60,8 +66,19 @@ public class Threshold {
longThreshold = Long.parseLong(threshold);
break;
case DOUBLE:
doubleThreadhold = Double.parseDouble(threshold);
doubleThreshold = Double.parseDouble(threshold);
break;
case MULTI_INTS:
String[] strings = threshold.split(",");
intValuesThreshold = new Integer[strings.length];
for (int i = 0; i < strings.length; i++) {
String thresholdItem = strings[i].trim();
if (NONE_THRESHOLD.equals(thresholdItem)) {
intValuesThreshold[i] = null;
} else {
intValuesThreshold[i] = Integer.parseInt(thresholdItem);
}
}
}
} catch (NumberFormatException e) {
logger.warn("Alarm rule {} threshold doesn't match the metrics type, expected type: {}", alarmRuleName, type);
......
......@@ -102,6 +102,42 @@ public class RunningRuleTest {
Assert.assertEquals("Successful rate of endpoint Service_123 is lower than 75%", alarmMessages.get(0).getAlarmMessage());
}
@Test
public void testMultipleValuesAlarm() {
AlarmRule alarmRule = new AlarmRule();
alarmRule.setAlarmRuleName("endpoint_multiple_values_rule");
alarmRule.setMetricsName("endpoint_percent");
alarmRule.setOp(">");
alarmRule.setThreshold("50,60,70,-, 100");
alarmRule.setCount(3);
alarmRule.setPeriod(15);
alarmRule.setMessage("response percentile of endpoint {name} is lower than expected values");
RunningRule runningRule = new RunningRule(alarmRule);
LocalDateTime startTime = TIME_BUCKET_FORMATTER.parseLocalDateTime("201808301440");
long timeInPeriod1 = 201808301434L;
long timeInPeriod2 = 201808301436L;
long timeInPeriod3 = 201808301438L;
runningRule.in(getMetaInAlarm(123), getMultipleValueMetrics(timeInPeriod1, 70, 60, 40, 40, 40));
runningRule.in(getMetaInAlarm(123), getMultipleValueMetrics(timeInPeriod2, 60, 60, 40, 40, 40));
runningRule.in(getMetaInAlarm(123), getMultipleValueMetrics(timeInPeriod3, 74, 60, 40, 40, 40));
// check at 201808301440
List<AlarmMessage> alarmMessages = runningRule.check();
Assert.assertEquals(0, alarmMessages.size());
runningRule.moveTo(TIME_BUCKET_FORMATTER.parseLocalDateTime("201808301441"));
// check at 201808301441
alarmMessages = runningRule.check();
Assert.assertEquals(0, alarmMessages.size());
runningRule.moveTo(TIME_BUCKET_FORMATTER.parseLocalDateTime("201808301442"));
// check at 201808301442
alarmMessages = runningRule.check();
Assert.assertEquals(1, alarmMessages.size());
Assert.assertEquals("response percentile of endpoint Service_123 is lower than expected values", alarmMessages.get(0).getAlarmMessage());
}
@Test
public void testNoAlarm() {
AlarmRule alarmRule = new AlarmRule();
......@@ -258,6 +294,14 @@ public class RunningRuleTest {
return mockMetrics;
}
private Metrics getMultipleValueMetrics(long timeBucket, int... values) {
MockMultipleValueMetrics mockMultipleValueMetrics = new MockMultipleValueMetrics();
mockMultipleValueMetrics.setValues(values);
mockMultipleValueMetrics.setTimeBucket(timeBucket);
return mockMultipleValueMetrics;
}
private class MockMetrics extends Metrics implements IntValueHolder {
private int value;
......@@ -305,4 +349,52 @@ public class RunningRuleTest {
return 0;
}
}
private class MockMultipleValueMetrics extends Metrics implements MultiIntValuesHolder {
private int[] values;
public void setValues(int[] values) {
this.values = values;
}
@Override public String id() {
return null;
}
@Override public void combine(Metrics metrics) {
}
@Override public void calculate() {
}
@Override public Metrics toHour() {
return null;
}
@Override public Metrics toDay() {
return null;
}
@Override public Metrics toMonth() {
return null;
}
@Override public int[] getValues() {
return values;
}
@Override public int remoteHashCode() {
return 0;
}
@Override public void deserialize(RemoteData remoteData) {
}
@Override public RemoteData.Builder serialize() {
return null;
}
}
}
......@@ -20,6 +20,7 @@ package org.apache.skywalking.oap.server.core.alarm.provider;
import org.junit.Test;
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertEquals;
/**
......@@ -31,7 +32,7 @@ public class ThresholdTest {
public void setType() {
Threshold threshold = new Threshold("my-rule", "75");
threshold.setType(MetricsValueType.DOUBLE);
assertEquals(0, Double.compare(75, threshold.getDoubleThreadhold()));
assertEquals(0, Double.compare(75, threshold.getDoubleThreshold()));
threshold.setType(MetricsValueType.INT);
assertEquals(75, threshold.getIntThreshold());
......@@ -40,6 +41,14 @@ public class ThresholdTest {
assertEquals(75L, threshold.getLongThreshold());
}
@Test
public void setTypeMultipleValues() {
Threshold threshold = new Threshold("my-rule", "75,80, 90, -");
threshold.setType(MetricsValueType.MULTI_INTS);
assertArrayEquals(new Object[] {75, 80, 90, null}, threshold.getIntValuesThreshold());
}
@Test
public void setTypeWithWrong() {
Threshold threshold = new Threshold("my-rule", "wrong");
......
......@@ -17,22 +17,14 @@
*/
// All scope metrics
all_p99 = from(All.latency).p99(10);
all_p95 = from(All.latency).p95(10);
all_p90 = from(All.latency).p90(10);
all_p75 = from(All.latency).p75(10);
all_p50 = from(All.latency).p50(10);
all_percentile = from(All.latency).percentile(10); // Multiple values including p50, p75, p90, p95, p99
all_heatmap = from(All.latency).thermodynamic(100, 20);
// Service scope metrics
service_resp_time = from(Service.latency).longAvg();
service_sla = from(Service.*).percent(status == true);
service_cpm = from(Service.*).cpm();
service_p99 = from(Service.latency).p99(10);
service_p95 = from(Service.latency).p95(10);
service_p90 = from(Service.latency).p90(10);
service_p75 = from(Service.latency).p75(10);
service_p50 = from(Service.latency).p50(10);
service_percentile = from(Service.latency).percentile(10); // Multiple values including p50, p75, p90, p95, p99
service_apdex = from(Service.latency).apdex(name, status);
// Service relation scope metrics for topology
......@@ -42,16 +34,9 @@ service_relation_client_call_sla = from(ServiceRelation.*).filter(detectPoint ==
service_relation_server_call_sla = from(ServiceRelation.*).filter(detectPoint == DetectPoint.SERVER).percent(status == true);
service_relation_client_resp_time = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).longAvg();
service_relation_server_resp_time = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).longAvg();
service_relation_client_p99 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p99(10);
service_relation_server_p99 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p99(10);
service_relation_client_p95 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p95(10);
service_relation_server_p95 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p95(10);
service_relation_client_p90 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p90(10);
service_relation_server_p90 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p90(10);
service_relation_client_p75 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p75(10);
service_relation_server_p75 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p75(10);
service_relation_client_p50 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p50(10);
service_relation_server_p50 = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p50(10);
service_relation_client_percentile = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).percentile(10); // Multiple values including p50, p75, p90, p95, p99
service_relation_server_percentile = from(ServiceRelation.latency).filter(detectPoint == DetectPoint.SERVER).percentile(10); // Multiple values including p50, p75, p90, p95, p99
// Service Instance relation scope metrics for topology
service_instance_relation_client_cpm = from(ServiceInstanceRelation.*).filter(detectPoint == DetectPoint.CLIENT).cpm();
......@@ -60,16 +45,8 @@ service_instance_relation_client_call_sla = from(ServiceInstanceRelation.*).filt
service_instance_relation_server_call_sla = from(ServiceInstanceRelation.*).filter(detectPoint == DetectPoint.SERVER).percent(status == true);
service_instance_relation_client_resp_time = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).longAvg();
service_instance_relation_server_resp_time = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).longAvg();
service_instance_relation_client_p99 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p99(10);
service_instance_relation_server_p99 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p99(10);
service_instance_relation_client_p95 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p95(10);
service_instance_relation_server_p95 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p95(10);
service_instance_relation_client_p90 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p90(10);
service_instance_relation_server_p90 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p90(10);
service_instance_relation_client_p75 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p75(10);
service_instance_relation_server_p75 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p75(10);
service_instance_relation_client_p50 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).p50(10);
service_instance_relation_server_p50 = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).p50(10);
service_instance_relation_client_percentile = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.CLIENT).percentile(10); // Multiple values including p50, p75, p90, p95, p99
service_instance_relation_server_percentile = from(ServiceInstanceRelation.latency).filter(detectPoint == DetectPoint.SERVER).percentile(10); // Multiple values including p50, p75, p90, p95, p99
// Service Instance Scope metrics
service_instance_sla = from(ServiceInstance.*).percent(status == true);
......@@ -80,11 +57,7 @@ service_instance_cpm = from(ServiceInstance.*).cpm();
endpoint_cpm = from(Endpoint.*).cpm();
endpoint_avg = from(Endpoint.latency).longAvg();
endpoint_sla = from(Endpoint.*).percent(status == true);
endpoint_p99 = from(Endpoint.latency).p99(10);
endpoint_p95 = from(Endpoint.latency).p95(10);
endpoint_p90 = from(Endpoint.latency).p90(10);
endpoint_p75 = from(Endpoint.latency).p75(10);
endpoint_p50 = from(Endpoint.latency).p50(10);
endpoint_percentile = from(Endpoint.latency).percentile(10); // Multiple values including p50, p75, p90, p95, p99
// Endpoint relation scope metrics
endpoint_relation_cpm = from(EndpointRelation.*).filter(detectPoint == DetectPoint.SERVER).cpm();
......@@ -104,11 +77,7 @@ instance_jvm_old_gc_count = from(ServiceInstanceJVMGC.count).filter(phrase == GC
database_access_resp_time = from(DatabaseAccess.latency).longAvg();
database_access_sla = from(DatabaseAccess.*).percent(status == true);
database_access_cpm = from(DatabaseAccess.*).cpm();
database_access_p99 = from(DatabaseAccess.latency).p99(10);
database_access_p95 = from(DatabaseAccess.latency).p95(10);
database_access_p90 = from(DatabaseAccess.latency).p90(10);
database_access_p75 = from(DatabaseAccess.latency).p75(10);
database_access_p50 = from(DatabaseAccess.latency).p50(10);
database_access_percentile = from(DatabaseAccess.latency).percentile(10);
// CLR instance metrics
instance_clr_cpu = from(ServiceInstanceCLRCPU.usePercent).doubleAvg();
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.apache.skywalking.oap.server.core.analysis.metrics;
/**
* MultiIntValuesHolder always holds a set of int(s).
*
* @author wusheng
*/
public interface MultiIntValuesHolder {
int[] getValues();
}
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.apache.skywalking.oap.server.core.analysis.metrics;
import java.util.Comparator;
import lombok.Getter;
import lombok.Setter;
import org.apache.skywalking.oap.server.core.analysis.metrics.annotation.Arg;
import org.apache.skywalking.oap.server.core.analysis.metrics.annotation.Entrance;
import org.apache.skywalking.oap.server.core.analysis.metrics.annotation.MetricsFunction;
import org.apache.skywalking.oap.server.core.analysis.metrics.annotation.SourceFrom;
import org.apache.skywalking.oap.server.core.storage.annotation.Column;
/**
* Percentile is a better implementation than {@link PxxMetrics}. It was introduced in 7.0.0, and it calculates the
* multiple P50/75/90/95/99 values all at once.
*
* @author wusheng
*/
@MetricsFunction(functionName = "percentile")
public abstract class PercentileMetrics extends GroupMetrics implements MultiIntValuesHolder {
protected static final String DATASET = "dataset";
protected static final String VALUE = "value";
protected static final String PRECISION = "precision";
private static final int[] RANKS = {50, 75, 90, 95, 99};
@Getter @Setter @Column(columnName = VALUE, isValue = true) private IntKeyLongValueHashMap percentileValues;
@Getter @Setter @Column(columnName = PRECISION) private int precision;
@Getter @Setter @Column(columnName = DATASET) private IntKeyLongValueHashMap dataset;
private boolean isCalculated;
public PercentileMetrics() {
percentileValues = new IntKeyLongValueHashMap(RANKS.length);
dataset = new IntKeyLongValueHashMap(30);
}
@Entrance
public final void combine(@SourceFrom int value, @Arg int precision) {
this.isCalculated = false;
this.precision = precision;
int index = value / precision;
IntKeyLongValue element = dataset.get(index);
if (element == null) {
element = new IntKeyLongValue(index, 1);
dataset.put(element.getKey(), element);
} else {
element.addValue(1);
}
}
@Override
public void combine(Metrics metrics) {
this.isCalculated = false;
PercentileMetrics percentileMetrics = (PercentileMetrics)metrics;
combine(percentileMetrics.getDataset(), this.dataset);
}
@Override
public final void calculate() {
if (!isCalculated) {
int total = dataset.values().stream().mapToInt(element -> (int)element.getValue()).sum();
int index = 0;
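// roofs[i] is the cumulative request count at which the i-th rank (50/75/90/95/99) is reached.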
int[] roofs = new int[RANKS.length];
for (int i = 0; i < RANKS.length; i++) {
roofs[i] = Math.round(total * RANKS[i] * 1.0f / 100);
}
int count = 0;
IntKeyLongValue[] sortedData = dataset.values().stream().sorted(new Comparator<IntKeyLongValue>() {
@Override public int compare(IntKeyLongValue o1, IntKeyLongValue o2) {
return o1.getKey() - o2.getKey();
}
}).toArray(IntKeyLongValue[]::new);
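// Walk the buckets in ascending latency order; once the running count reaches roofs[index], the bucket's lower bound (key * precision) becomes that percentile value.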
for (IntKeyLongValue element : sortedData) {
count += element.getValue();
for (int i = index; i < roofs.length; i++) {
int roof = roofs[i];
if (count >= roof) {
percentileValues.put(index, new IntKeyLongValue(index, element.getKey() * precision));
index++;
} else {
break;
}
}
}
}
}
public int[] getValues() {
int[] values = new int[percentileValues.size()];
for (int i = 0; i < values.length; i++) {
values[i] = (int)percentileValues.get(i).getValue();
}
return values;
}
}
......@@ -60,7 +60,7 @@ public class MetricQueryService implements Service {
return metricQueryDAO;
}
public IntValues getValues(final String indName, final List<String> ids, final Downsampling downsampling,
public IntValues getValues(final String metricsName, final List<String> ids, final Downsampling downsampling,
final long startTB,
final long endTB) throws IOException {
if (CollectionUtils.isEmpty(ids)) {
......@@ -69,7 +69,7 @@ public class MetricQueryService implements Service {
* we return an empty list, and a debug level log,
* rather than an exception, which always being considered as a serious error from new users.
*/
logger.debug("query metrics[{}] w/o IDs", indName);
logger.debug("query metrics[{}] w/o IDs", metricsName);
return new IntValues();
}
......@@ -79,7 +79,7 @@ public class MetricQueryService implements Service {
where.getKeyValues().add(intKeyValues);
ids.forEach(intKeyValues.getValues()::add);
return getMetricQueryDAO().getValues(indName, downsampling, startTB, endTB, where, ValueColumnIds.INSTANCE.getValueCName(indName), ValueColumnIds.INSTANCE.getValueFunction(indName));
return getMetricQueryDAO().getValues(metricsName, downsampling, startTB, endTB, where, ValueColumnIds.INSTANCE.getValueCName(metricsName), ValueColumnIds.INSTANCE.getValueFunction(metricsName));
}
public IntValues getLinearIntValues(final String indName, final String id, final Downsampling downsampling,
......@@ -96,6 +96,27 @@ public class MetricQueryService implements Service {
return getMetricQueryDAO().getLinearIntValues(indName, downsampling, ids, ValueColumnIds.INSTANCE.getValueCName(indName));
}
public List<IntValues> getMultipleLinearIntValues(final String indName, final String id, final int numOfLinear,
final Downsampling downsampling,
final long startTB,
final long endTB) throws IOException, ParseException {
List<DurationPoint> durationPoints = DurationUtils.INSTANCE.getDurationPoints(downsampling, startTB, endTB);
List<String> ids = new ArrayList<>();
if (StringUtil.isEmpty(id)) {
durationPoints.forEach(durationPoint -> ids.add(String.valueOf(durationPoint.getPoint())));
} else {
durationPoints.forEach(durationPoint -> ids.add(durationPoint.getPoint() + Const.ID_SPLIT + id));
}
IntValues[] multipleLinearIntValues = getMetricQueryDAO().getMultipleLinearIntValues(indName, downsampling, ids, numOfLinear, ValueColumnIds.INSTANCE.getValueCName(indName));
ArrayList<IntValues> response = new ArrayList<IntValues>(numOfLinear);
for (IntValues value : multipleLinearIntValues) {
response.add(value);
}
return response;
}
public Thermodynamic getThermodynamic(final String indName, final String id, final Downsampling downsampling,
final long startTB,
final long endTB) throws IOException, ParseException {
......
......@@ -19,14 +19,13 @@
package org.apache.skywalking.oap.server.core.query.entity;
import java.util.LinkedList;
import java.util.List;
/**
* @author peng-yongsheng
* @author peng-yongsheng, wusheng
*/
public class IntValues {
private List<KVInt> values = new LinkedList<>();
private LinkedList<KVInt> values = new LinkedList<>();
public void addKVInt(KVInt e) {
values.add(e);
......@@ -40,4 +39,8 @@ public class IntValues {
}
return defaultValue;
}
public KVInt getLast() {
return values.getLast();
}
}
......@@ -33,12 +33,20 @@ public enum ValueColumnIds {
mapping.putIfAbsent(indName, new ValueColumn(valueCName, function));
}
public String getValueCName(String indName) {
return mapping.get(indName).valueCName;
public String getValueCName(String metricsName) {
return findColumn(metricsName).valueCName;
}
public Function getValueFunction(String indName) {
return mapping.get(indName).function;
public Function getValueFunction(String metricsName) {
return findColumn(metricsName).function;
}
private ValueColumn findColumn(String metricsName) {
ValueColumn column = mapping.get(metricsName);
if (column == null) {
throw new RuntimeException("Metrics:" + metricsName + " doesn't have value column definition");
}
return column;
}
class ValueColumn {
......
......@@ -34,5 +34,7 @@ public interface IMetricsQueryDAO extends DAO {
IntValues getLinearIntValues(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException;
IntValues[] getMultipleLinearIntValues(String indName, Downsampling downsampling, List<String> ids, int numOfLinear, String valueCName) throws IOException;
Thermodynamic getThermodynamic(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException;
}
......@@ -36,7 +36,11 @@ message RemoteData {
repeated int64 dataLongs = 2;
repeated double dataDoubles = 3;
repeated int32 dataIntegers = 4;
repeated IntKeyLongValuePair dataIntLongPairList = 5;
repeated DataIntLongPairList dataLists = 5;
}
message DataIntLongPairList {
repeated IntKeyLongValuePair value = 1;
}
message IntKeyLongValuePair {
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.apache.skywalking.oap.server.core.analysis.metrics;
import org.apache.skywalking.oap.server.core.remote.grpc.proto.RemoteData;
import org.junit.Assert;
import org.junit.Test;
/**
* @author wusheng
*/
public class PercentileMetricsTest {
private int precision = 10;//ms
@Test
public void percentileTest() {
PercentileMetricsTest.PercentileMetricsMocker metricsMocker = new PercentileMetricsTest.PercentileMetricsMocker();
metricsMocker.combine(110, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(95, precision);
metricsMocker.combine(99, precision);
metricsMocker.combine(50, precision);
metricsMocker.combine(50, precision);
metricsMocker.combine(50, precision);
metricsMocker.combine(50, precision);
metricsMocker.combine(50, precision);
metricsMocker.combine(75, precision);
metricsMocker.combine(75, precision);
metricsMocker.calculate();
Assert.assertArrayEquals(new int[] {70, 90, 90, 90, 110}, metricsMocker.getValues());
}
@Test
public void percentileTest2() {
PercentileMetricsTest.PercentileMetricsMocker metricsMocker = new PercentileMetricsTest.PercentileMetricsMocker();
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.combine(90, precision);
metricsMocker.calculate();
Assert.assertArrayEquals(new int[] {90, 90, 90, 90, 90}, metricsMocker.getValues());
}
@Test
public void percentileTest3() {
PercentileMetricsTest.PercentileMetricsMocker metricsMocker = new PercentileMetricsTest.PercentileMetricsMocker();
metricsMocker.combine(90, precision);
metricsMocker.combine(110, precision);
metricsMocker.calculate();
Assert.assertArrayEquals(new int[] {90, 110, 110, 110, 110}, metricsMocker.getValues());
}
@Test
public void percentileTest4() {
PercentileMetricsTest.PercentileMetricsMocker metricsMocker = new PercentileMetricsTest.PercentileMetricsMocker();
metricsMocker.combine(0, precision);
metricsMocker.combine(0, precision);
metricsMocker.calculate();
Assert.assertArrayEquals(new int[] {0, 0, 0, 0, 0}, metricsMocker.getValues());
}
public class PercentileMetricsMocker extends PercentileMetrics {
@Override public String id() {
return null;
}
@Override public Metrics toHour() {
return null;
}
@Override public Metrics toDay() {
return null;
}
@Override public Metrics toMonth() {
return null;
}
@Override public int remoteHashCode() {
return 0;
}
@Override public void deserialize(RemoteData remoteData) {
}
@Override public RemoteData.Builder serialize() {
return null;
}
}
}
......@@ -21,6 +21,7 @@ package org.apache.skywalking.oap.query.graphql.resolver;
import com.coxautodev.graphql.tools.GraphQLQueryResolver;
import java.io.IOException;
import java.text.ParseException;
import java.util.List;
import org.apache.skywalking.oap.query.graphql.type.*;
import org.apache.skywalking.oap.server.core.CoreModule;
import org.apache.skywalking.oap.server.core.query.*;
......@@ -60,6 +61,13 @@ public class MetricQuery implements GraphQLQueryResolver {
return getMetricQueryService().getLinearIntValues(metrics.getName(), metrics.getId(), StepToDownsampling.transform(duration.getStep()), startTimeBucket, endTimeBucket);
}
public List<IntValues> getMultipleLinearIntValues(final MetricCondition metrics, final int numOfLinear, final Duration duration) throws IOException, ParseException {
long startTimeBucket = DurationUtils.INSTANCE.exchangeToTimeBucket(duration.getStart());
long endTimeBucket = DurationUtils.INSTANCE.exchangeToTimeBucket(duration.getEnd());
return getMetricQueryService().getMultipleLinearIntValues(metrics.getName(), metrics.getId(), numOfLinear, StepToDownsampling.transform(duration.getStep()), startTimeBucket, endTimeBucket);
}
public Thermodynamic getThermodynamic(final MetricCondition metrics, final Duration duration) throws IOException, ParseException {
long startTimeBucket = DurationUtils.INSTANCE.exchangeToTimeBucket(duration.getStart());
long endTimeBucket = DurationUtils.INSTANCE.exchangeToTimeBucket(duration.getEnd());
......
Subproject commit dde9a0dad56617ccbf4226f5f71e667fd9620222
Subproject commit 249addeaaf524c0dd990444e5f4bcaf355ce8e01
......@@ -19,11 +19,20 @@
package org.apache.skywalking.oap.server.storage.plugin.elasticsearch.query;
import java.io.IOException;
import java.util.*;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.skywalking.oap.server.core.analysis.Downsampling;
import org.apache.skywalking.oap.server.core.analysis.metrics.*;
import org.apache.skywalking.oap.server.core.query.entity.*;
import org.apache.skywalking.oap.server.core.query.sql.*;
import org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue;
import org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValueHashMap;
import org.apache.skywalking.oap.server.core.analysis.metrics.Metrics;
import org.apache.skywalking.oap.server.core.analysis.metrics.ThermodynamicMetrics;
import org.apache.skywalking.oap.server.core.query.entity.IntValues;
import org.apache.skywalking.oap.server.core.query.entity.KVInt;
import org.apache.skywalking.oap.server.core.query.entity.Thermodynamic;
import org.apache.skywalking.oap.server.core.query.sql.Function;
import org.apache.skywalking.oap.server.core.query.sql.Where;
import org.apache.skywalking.oap.server.core.storage.model.ModelName;
import org.apache.skywalking.oap.server.core.storage.query.IMetricsQueryDAO;
import org.apache.skywalking.oap.server.library.client.elasticsearch.ElasticSearchClient;
......@@ -31,7 +40,8 @@ import org.apache.skywalking.oap.server.storage.plugin.elasticsearch.base.EsDAO;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.*;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder;
import org.elasticsearch.search.aggregations.metrics.avg.Avg;
import org.elasticsearch.search.aggregations.metrics.sum.Sum;
import org.elasticsearch.search.builder.SearchSourceBuilder;
......@@ -45,7 +55,9 @@ public class MetricsQueryEsDAO extends EsDAO implements IMetricsQueryDAO {
super(client);
}
@Override public IntValues getValues(String indName, Downsampling downsampling, long startTB, long endTB, Where where, String valueCName,
@Override
public IntValues getValues(String indName, Downsampling downsampling, long startTB, long endTB, Where where,
String valueCName,
Function function) throws IOException {
String indexName = ModelName.build(downsampling, indName);
......@@ -100,7 +112,8 @@ public class MetricsQueryEsDAO extends EsDAO implements IMetricsQueryDAO {
}
}
@Override public IntValues getLinearIntValues(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException {
@Override public IntValues getLinearIntValues(String indName, Downsampling downsampling, List<String> ids,
String valueCName) throws IOException {
String indexName = ModelName.build(downsampling, indName);
SearchResponse response = getClient().ids(indexName, ids.toArray(new String[0]));
......@@ -121,7 +134,43 @@ public class MetricsQueryEsDAO extends EsDAO implements IMetricsQueryDAO {
return intValues;
}
@Override public Thermodynamic getThermodynamic(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException {
@Override public IntValues[] getMultipleLinearIntValues(String indName, Downsampling downsampling,
List<String> ids, int numOfLinear, String valueCName) throws IOException {
String indexName = ModelName.build(downsampling, indName);
SearchResponse response = getClient().ids(indexName, ids.toArray(new String[0]));
Map<String, Map<String, Object>> idMap = toMap(response);
IntValues[] intValuesArray = new IntValues[numOfLinear];
for (int i = 0; i < intValuesArray.length; i++) {
intValuesArray[i] = new IntValues();
}
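// Pre-fill a zero point for every id on every line, then overwrite the just-added (last) point when the stored document exists.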
for (String id : ids) {
for (int i = 0; i < intValuesArray.length; i++) {
KVInt kvInt = new KVInt();
kvInt.setId(id);
kvInt.setValue(0);
intValuesArray[i].addKVInt(kvInt);
}
if (idMap.containsKey(id)) {
Map<String, Object> source = idMap.get(id);
IntKeyLongValueHashMap multipleValues = new IntKeyLongValueHashMap(5);
multipleValues.toObject((String)source.getOrDefault(valueCName, ""));
for (int i = 0; i < intValuesArray.length; i++) {
intValuesArray[i].getLast().setValue(multipleValues.get(i).getValue());
}
}
}
return intValuesArray;
}
@Override public Thermodynamic getThermodynamic(String indName, Downsampling downsampling, List<String> ids,
String valueCName) throws IOException {
String indexName = ModelName.build(downsampling, indName);
Thermodynamic thermodynamic = new Thermodynamic();
......
......@@ -19,12 +19,24 @@
package org.apache.skywalking.oap.server.storage.plugin.jdbc.h2.dao;
import java.io.IOException;
import java.sql.*;
import java.util.*;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.skywalking.oap.server.core.analysis.Downsampling;
import org.apache.skywalking.oap.server.core.analysis.metrics.*;
import org.apache.skywalking.oap.server.core.query.entity.*;
import org.apache.skywalking.oap.server.core.query.sql.*;
import org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValue;
import org.apache.skywalking.oap.server.core.analysis.metrics.IntKeyLongValueHashMap;
import org.apache.skywalking.oap.server.core.analysis.metrics.Metrics;
import org.apache.skywalking.oap.server.core.analysis.metrics.ThermodynamicMetrics;
import org.apache.skywalking.oap.server.core.query.entity.IntValues;
import org.apache.skywalking.oap.server.core.query.entity.KVInt;
import org.apache.skywalking.oap.server.core.query.entity.Thermodynamic;
import org.apache.skywalking.oap.server.core.query.sql.Function;
import org.apache.skywalking.oap.server.core.query.sql.KeyValues;
import org.apache.skywalking.oap.server.core.query.sql.Where;
import org.apache.skywalking.oap.server.core.storage.model.ModelName;
import org.apache.skywalking.oap.server.core.storage.query.IMetricsQueryDAO;
import org.apache.skywalking.oap.server.library.client.jdbc.hikaricp.JDBCHikariCPClient;
......@@ -41,7 +53,8 @@ public class H2MetricsQueryDAO extends H2SQLExecutor implements IMetricsQueryDAO
}
@Override
public IntValues getValues(String indName, Downsampling downsampling, long startTB, long endTB, Where where, String valueCName,
public IntValues getValues(String indName, Downsampling downsampling, long startTB, long endTB, Where where,
String valueCName,
Function function) throws IOException {
String tableName = ModelName.build(downsampling, indName);
......@@ -100,7 +113,8 @@ public class H2MetricsQueryDAO extends H2SQLExecutor implements IMetricsQueryDAO
return orderWithDefault0(intValues, ids);
}
@Override public IntValues getLinearIntValues(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException {
@Override public IntValues getLinearIntValues(String indName, Downsampling downsampling, List<String> ids,
String valueCName) throws IOException {
String tableName = ModelName.build(downsampling, indName);
StringBuilder idValues = new StringBuilder();
......@@ -129,6 +143,48 @@ public class H2MetricsQueryDAO extends H2SQLExecutor implements IMetricsQueryDAO
return orderWithDefault0(intValues, ids);
}
@Override public IntValues[] getMultipleLinearIntValues(String indName, Downsampling downsampling,
List<String> ids,
int numOfLinear,
String valueCName) throws IOException {
String tableName = ModelName.build(downsampling, indName);
StringBuilder idValues = new StringBuilder();
for (int valueIdx = 0; valueIdx < ids.size(); valueIdx++) {
if (valueIdx != 0) {
idValues.append(",");
}
idValues.append("'").append(ids.get(valueIdx)).append("'");
}
IntValues[] intValuesArray = new IntValues[numOfLinear];
for (int i = 0; i < intValuesArray.length; i++) {
intValuesArray[i] = new IntValues();
}
try (Connection connection = h2Client.getConnection()) {
try (ResultSet resultSet = h2Client.executeQuery(connection, "select id, " + valueCName + " from " + tableName + " where id in (" + idValues.toString() + ")")) {
while (resultSet.next()) {
String id = resultSet.getString("id");
IntKeyLongValueHashMap multipleValues = new IntKeyLongValueHashMap(5);
multipleValues.toObject(resultSet.getString(valueCName));
for (int i = 0; i < intValuesArray.length; i++) {
KVInt kv = new KVInt();
kv.setId(id);
kv.setValue(multipleValues.get(i).getValue());
intValuesArray[i].addKVInt(kv);
}
}
}
} catch (SQLException e) {
throw new IOException(e);
}
return orderWithDefault0(intValuesArray, ids);
}
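
Editor's note: for readers following the H2 path, the statement assembled by the method above has roughly the following shape; the table name, column name, and ids below are hypothetical examples, not values from this commit.

```java
// Illustrative only: the kind of SQL the H2 implementation builds before decoding each row.
public class PercentileSqlSketch {
    public static void main(String[] args) {
        String tableName = "service_percentile_minute"; // assumed model/table name
        String valueCName = "value";                    // assumed value column name
        String idValues = "'201912301200_5','201912301201_5'"; // hypothetical ids
        System.out.println(
            "select id, " + valueCName + " from " + tableName + " where id in (" + idValues + ")");
    }
}
```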
/**
* Make sure the order is same as the expected order, and keep default value as 0.
*
......@@ -149,7 +205,22 @@ public class H2MetricsQueryDAO extends H2SQLExecutor implements IMetricsQueryDAO
return intValues;
}
@Override public Thermodynamic getThermodynamic(String indName, Downsampling downsampling, List<String> ids, String valueCName) throws IOException {
/**
* Make sure the order is same as the expected order, and keep default value as 0.
*
* @param origin
* @param expectedOrder
* @return
*/
private IntValues[] orderWithDefault0(IntValues[] origin, List<String> expectedOrder) {
for (int i = 0; i < origin.length; i++) {
origin[i] = orderWithDefault0(origin[i], expectedOrder);
}
return origin;
}
@Override public Thermodynamic getThermodynamic(String indName, Downsampling downsampling, List<String> ids,
String valueCName) throws IOException {
String tableName = ModelName.build(downsampling, indName);
StringBuilder idValues = new StringBuilder();
......
......@@ -22,6 +22,7 @@ import com.google.common.io.Resources;
import org.apache.skywalking.e2e.metrics.Metrics;
import org.apache.skywalking.e2e.metrics.MetricsData;
import org.apache.skywalking.e2e.metrics.MetricsQuery;
import org.apache.skywalking.e2e.metrics.MultiMetricsData;
import org.apache.skywalking.e2e.service.Service;
import org.apache.skywalking.e2e.service.ServicesData;
import org.apache.skywalking.e2e.service.ServicesQuery;
......@@ -225,4 +226,29 @@ public class SimpleQueryClient {
return Objects.requireNonNull(responseEntity.getBody()).getData().getMetrics();
}
public List<Metrics> multipleLinearMetrics(final MetricsQuery query, String numOfLinear) throws Exception {
final URL queryFileUrl = Resources.getResource("metrics-multiLines.gql");
final String queryString = Resources.readLines(queryFileUrl, Charset.forName("UTF8"))
.stream()
.filter(it -> !it.startsWith("#"))
.collect(Collectors.joining())
.replace("{step}", query.step())
.replace("{start}", query.start())
.replace("{end}", query.end())
.replace("{metricsName}", query.metricsName())
.replace("{id}", query.id())
.replace("{numOfLinear}", numOfLinear);
final ResponseEntity<GQLResponse<MultiMetricsData>> responseEntity = restTemplate.exchange(
new RequestEntity<>(queryString, HttpMethod.POST, URI.create(endpointUrl)),
new ParameterizedTypeReference<GQLResponse<MultiMetricsData>>() {
}
);
if (responseEntity.getStatusCode() != HttpStatus.OK) {
throw new RuntimeException("Response status != 200, actual: " + responseEntity.getStatusCode());
}
return Objects.requireNonNull(responseEntity.getBody()).getData().getMetrics();
}
}
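
Editor's note: a hedged usage sketch of the new `multipleLinearMetrics` helper. The `queryClient`, `serviceId`, and `minutesAgo` variables are assumed to come from the surrounding e2e test scaffolding; the metric name and the `"5"` linear count mirror how `verifyPercentileMetrics` (shown further below) drives it.

```java
// Illustrative fragment, not part of the commit: query the five percentile series of one service.
List<Metrics> percentiles = queryClient.multipleLinearMetrics(
    new MetricsQuery()
        .stepByMinute()
        .metricsName("service_percentile")
        .start(minutesAgo)
        .end(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(1))
        .id(serviceId),
    "5" // one series each for p50, p75, p90, p95, p99
);
```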
......@@ -18,24 +18,14 @@
package org.apache.skywalking.e2e.metrics;
import lombok.Data;
import lombok.ToString;
/**
* @author kezhenxu94
*/
@Data
@ToString
public class MetricsData {
private Metrics metrics;
public Metrics getMetrics() {
return metrics;
}
public void setMetrics(Metrics metrics) {
this.metrics = metrics;
}
@Override
public String toString() {
return "MetricsData{" +
"metrics=" + metrics +
'}';
}
}
......@@ -18,13 +18,13 @@
package org.apache.skywalking.e2e.metrics;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.util.List;
import org.apache.skywalking.e2e.SimpleQueryClient;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
/**
* @author zhangwei
*/
......@@ -32,23 +32,22 @@ public class MetricsMatcher {
private static final Logger LOGGER = LoggerFactory.getLogger(MetricsMatcher.class);
public static void verifyMetrics(SimpleQueryClient queryClient, String metricName, String id,
final LocalDateTime minutesAgo) throws Exception {
final LocalDateTime minutesAgo) throws Exception {
verifyMetrics(queryClient, metricName, id, minutesAgo, 0, null);
}
public static void verifyMetrics(SimpleQueryClient queryClient, String metricName, String id,
final LocalDateTime minutesAgo, long retryInterval, Runnable generateTraffic) throws Exception {
final LocalDateTime minutesAgo, long retryInterval, Runnable generateTraffic) throws Exception {
boolean valid = false;
while (!valid) {
Metrics metrics = queryClient.metrics(
new MetricsQuery()
.stepByMinute()
.metricsName(metricName)
.start(minutesAgo)
.end(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(1))
.id(id)
new MetricsQuery()
.stepByMinute()
.metricsName(metricName)
.start(minutesAgo)
.end(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(1))
.id(id)
);
LOGGER.info("{}: {}", metricName, metrics);
AtLeastOneOfMetricsMatcher instanceRespTimeMatcher = new AtLeastOneOfMetricsMatcher();
......@@ -68,4 +67,36 @@ public class MetricsMatcher {
}
}
}
public static void verifyPercentileMetrics(SimpleQueryClient queryClient, String metricName, String id,
final LocalDateTime minutesAgo, long retryInterval, Runnable generateTraffic) throws Exception {
boolean valid = false;
while (!valid) {
List<Metrics> metricsArray = queryClient.multipleLinearMetrics(
new MetricsQuery()
.stepByMinute()
.metricsName(metricName)
.start(minutesAgo)
.end(LocalDateTime.now(ZoneOffset.UTC).plusMinutes(1))
.id(id),
"5"
);
LOGGER.info("{}: {}", metricName, metricsArray);
AtLeastOneOfMetricsMatcher matcher = new AtLeastOneOfMetricsMatcher();
MetricsValueMatcher greaterThanZero = new MetricsValueMatcher();
greaterThanZero.setValue("gt 0");
matcher.setValue(greaterThanZero);
try {
metricsArray.forEach(matcher::verify);
valid = true;
} catch (Throwable e) {
if (generateTraffic != null) {
generateTraffic.run();
Thread.sleep(retryInterval);
} else {
throw e;
}
}
}
}
}
......@@ -24,32 +24,32 @@ import org.apache.skywalking.e2e.AbstractQuery;
* @author kezhenxu94
*/
public class MetricsQuery extends AbstractQuery<MetricsQuery> {
public static String SERVICE_P99 = "service_p99";
public static String SERVICE_P95 = "service_p95";
public static String SERVICE_P90 = "service_p90";
public static String SERVICE_P75 = "service_p75";
public static String SERVICE_P50 = "service_p50";
public static String SERVICE_SLA = "service_sla";
public static String SERVICE_CPM = "service_cpm";
public static String SERVICE_RESP_TIME = "service_resp_time";
public static String SERVICE_APDEX = "service_apdex";
public static String[] ALL_SERVICE_METRICS = {
SERVICE_P99,
SERVICE_P95,
SERVICE_P90,
SERVICE_P75,
SERVICE_P50,
SERVICE_SLA,
SERVICE_CPM,
SERVICE_RESP_TIME,
SERVICE_APDEX
};
public static String SERVICE_PERCENTILE = "service_percentile";
public static String[] ALL_SERVICE_MULTIPLE_LINEAR_METRICS = {
SERVICE_PERCENTILE
};
public static String ENDPOINT_P99 = "endpoint_p99";
public static String ENDPOINT_P95 = "endpoint_p95";
public static String ENDPOINT_P90 = "endpoint_p90";
public static String ENDPOINT_P75 = "endpoint_p75";
public static String ENDPOINT_P50 = "endpoint_p50";
public static String ENDPOINT_CPM = "endpoint_cpm";
public static String ENDPOINT_AVG = "endpoint_avg";
public static String ENDPOINT_SLA = "endpoint_sla";
public static String[] ALL_ENDPOINT_METRICS = {
ENDPOINT_P99,
ENDPOINT_P95,
ENDPOINT_P90,
ENDPOINT_P75,
ENDPOINT_P50
ENDPOINT_CPM,
ENDPOINT_AVG,
ENDPOINT_SLA,
};
public static String ENDPOINT_PERCENTILE = "endpoint_percentile";
public static String[] ALL_ENDPOINT_MULTIPLE_LINEAR_METRICS = {
ENDPOINT_PERCENTILE
};
public static String SERVICE_INSTANCE_RESP_TIME = "service_instance_resp_time";
......
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package org.apache.skywalking.e2e.metrics;
import lombok.Data;
import lombok.ToString;
import java.util.List;
/**
* @author kezhenxu94
*/
@Data
@ToString
public class MultiMetricsData {
private List<Metrics> metrics;
}
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{
"query":"query ($id: ID!, $duration: Duration!, $numOfLinear: Int!) {
metrics: getMultipleLinearIntValues(metric: {
name: \"{metricsName}\"
id: $id
}, duration: $duration, numOfLinear: $numOfLinear) {
values {
value
}
}
}",
"variables": {
"duration": {
"start": "{start}",
"end": "{end}",
"step": "{step}"
},
"id": "{id}",
"numOfLinear": "{numOfLinear}"
}
}
......@@ -55,6 +55,7 @@ import java.util.Map;
import java.util.concurrent.TimeUnit;
import static org.apache.skywalking.e2e.metrics.MetricsMatcher.verifyMetrics;
import static org.apache.skywalking.e2e.metrics.MetricsMatcher.verifyPercentileMetrics;
import static org.apache.skywalking.e2e.metrics.MetricsQuery.*;
import static org.assertj.core.api.Assertions.fail;
......@@ -293,6 +294,10 @@ public class ClusterVerificationITCase {
verifyMetrics(queryClient, metricName, endpoint.getKey(), minutesAgo, retryInterval, this::generateTraffic);
}
for (String metricName : ALL_ENDPOINT_MULTIPLE_LINEAR_METRICS) {
verifyPercentileMetrics(queryClient, metricName, endpoint.getKey(), minutesAgo, retryInterval, this::generateTraffic);
}
}
}
......@@ -302,6 +307,9 @@ public class ClusterVerificationITCase {
verifyMetrics(queryClient, metricName, service.getKey(), minutesAgo, retryInterval, this::generateTraffic);
}
for (String metricName : ALL_SERVICE_MULTIPLE_LINEAR_METRICS) {
verifyPercentileMetrics(queryClient, metricName, service.getKey(), minutesAgo, retryInterval, this::generateTraffic);
}
}
private void verifyTraces(LocalDateTime minutesAgo) throws Exception {
......
......@@ -20,4 +20,4 @@
// In TTL ES7 testing, using full oal config makes the test very unstable, so only service_resp_time is reserved for testing.
// Service scope metrics
service_p99 = from(Service.latency).p99(10);
\ No newline at end of file
service_resp_time = from(Service.latency).longAvg();
\ No newline at end of file
......@@ -43,7 +43,7 @@ import java.time.ZoneOffset;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import static org.apache.skywalking.e2e.metrics.MetricsQuery.SERVICE_P99;
import static org.apache.skywalking.e2e.metrics.MetricsQuery.SERVICE_RESP_TIME;
import static org.assertj.core.api.Assertions.assertThat;
/**
......@@ -243,7 +243,7 @@ public class StorageTTLITCase {
return queryClient.metrics(
new MetricsQuery()
.id(serviceId)
.metricsName(SERVICE_P99)
.metricsName(SERVICE_RESP_TIME)
.step(step)
.start(queryStart)
.end(queryEnd)
......
......@@ -16,7 +16,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
export MAVEN_OPTS='-Dmaven.repo.local=.m2/repository -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx3g'
export MAVEN_OPTS='-XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -Xmx3g'
base_dir=$(pwd)
build=0
......