- 29 Jan 2022, 1 commit
Committed by xu1009
Co-authored-by: litexu <litexu@tencent.com>
Co-authored-by: 吴晟 Wu Sheng <wu.sheng@foxmail.com>

- 27 Jan 2022, 1 commit
Committed by wankai123
E2E verify OAP cluster model data aggregation and fix `SelfRemoteClient` self observing metrics. (#8481)
* E2E: verify OAP cluster model data aggregation.
* Fix `SelfRemoteClient` self observing metrics.
* Remove unnecessary storage cases in the cluster e2e.
* Add steps in the cluster e2e to verify the whole cluster is up.
* Update the doc.

- 07 Dec 2021, 1 commit
Committed by Jared Tan

- 27 Nov 2021, 1 commit
Committed by 刘威
Support Apache IoTDB as a storage option, mostly referring to the previous InfluxDB storage option.
* The design of the Apache IoTDB storage option: https://skywalking.apache.org/blog/2021-11-23-design-of-iotdb-storage-option/

- 22 Oct 2021, 1 commit
Committed by wankai123
* Replace e2e cases with e2e-v2: Kafka: Base, Meter, Log, Profile.
* Set `SW_KAFKA_FETCHER_ENABLE_NATIVE_PROTO_LOG` and `SW_KAFKA_FETCHER_ENABLE_NATIVE_JSON_LOG` to default `true`.
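The two environment variables map onto the `kafka-fetcher` module configuration. A minimal sketch of the relevant `application.yml` fragment, assuming SkyWalking's usual `${ENV_VAR:default}` placeholder syntax (exact key names may vary by version):

```yaml
kafka-fetcher:
  selector: ${SW_KAFKA_FETCHER:default}
  default:
    bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:localhost:9092}
    # Both switches now default to true per this commit.
    enableNativeProtoLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_PROTO_LOG:true}
    enableNativeJsonLog: ${SW_KAFKA_FETCHER_ENABLE_NATIVE_JSON_LOG:true}
```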

- 17 Sep 2021, 1 commit
Committed by nicolchen

- 10 Sep 2021, 1 commit
Committed by wankai123

- 08 Sep 2021, 1 commit
Committed by wankai123

- 07 Sep 2021, 1 commit
Committed by wankai123

- 06 Sep 2021, 1 commit
Committed by kezhenxu94

- 03 Sep 2021, 1 commit
Committed by Daming

- 30 Aug 2021, 1 commit
Committed by HendSame

- 18 Aug 2021, 1 commit
Committed by kezhenxu94

- 27 Jul 2021, 1 commit
Committed by pkxiuluo

- 19 Jul 2021, 1 commit
Committed by lengyueqiufeng

- 17 Jul 2021, 1 commit
Committed by wu-sheng
Logically revert #6642 and part of #7153 to reduce unnecessary threads and concurrency processing (#7318). The key logic behind all of this is that metrics persistence is fully asynchronous.
* The core/maxSyncOperationNum setting (added in 8.5.0) is removed, because metrics persistence is fully asynchronous.
* The core/syncThreads setting (added in 8.5.0) is removed, because metrics persistence is fully asynchronous.
* Optimization: the concurrency mode of the execution stage for metrics (added in 8.5.0) is removed. Only concurrency in the prepare stage is meaningful, and it is kept.
* Remove the outside preparedRequest list initialization; the worker instance can always build a suitably sized list in the first place (reduces Array.copy and GC load a little).

- 16 Jul 2021, 1 commit
Committed by wu-sheng
Adjust the index refresh period to INT(flushInterval * 2/3); it used to be the same as the bulk flush period. In the edge case of low traffic (traffic < bulkActions within the whole period), the bulks of two periods could be included in one index refresh rebuild operation, which could cause version conflicts. This case can't be fixed through core/persistentPeriod, because bulk flushing is no longer controlled by the persistence timer. This change should avoid version-conflict exceptions in the low-load case, especially when bulkActions is set larger than the number of a metric type.
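The arithmetic behind the new refresh period can be sketched as follows (the helper name is hypothetical, not the actual OAP code; integer division matches the INT(...) truncation described above):

```java
public class RefreshPeriod {
    // Derive the index refresh period as INT(flushInterval * 2/3),
    // so one refresh rebuild can never span two full bulk flush periods.
    static int refreshPeriod(int flushIntervalSeconds) {
        return flushIntervalSeconds * 2 / 3; // integer division truncates
    }

    public static void main(String[] args) {
        // With the 15s flush interval adopted around the same time,
        // the refresh period becomes 10s.
        System.out.println(refreshPeriod(15)); // prints 10
    }
}
```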

- 14 Jul 2021, 1 commit
Committed by wu-sheng
* Performance: remove the synchronous persistence mechanism from the batch ElasticSearch DAO, because the enhanced persistent session mechanism no longer requires data to be queryable immediately after insert and update.
* Performance: share the `flushInterval` setting for both metrics and record data, since the synchronous persistence mechanism was removed. The record flush interval used to be hardcoded as 10s.
* Remove `syncBulkActions` from the ElasticSearch storage options.
* Increase the default bulkActions (env, SW_STORAGE_ES_BULK_ACTIONS) to 5000 (from 1000).
* Increase the flush interval of ElasticSearch indices to 15s (from 10s).
Two references; according to these, operations with the same _index, _type, and _id within the same bulk are applied in order:
1. https://github.com/elastic/elasticsearch/issues/50199
2. https://discuss.elastic.co/t/order-of--bulk-request-operations/98124
Note that the order of different bulks is not guaranteed by the ElasticSearch cluster, so there is a risk of dirty writes. But considering we set an over-20s period between flushes and the index refresh period is 10s, we should be safe. Recommend a 5000 bulk size and 15s flush interval only. The persistence period has been set to 25s.
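The new defaults land in the `storage` module configuration. A minimal sketch of the relevant `application.yml` fragment, assuming SkyWalking's usual `${ENV_VAR:default}` placeholder syntax (exact key names may vary by version):

```yaml
storage:
  selector: ${SW_STORAGE:elasticsearch}
  elasticsearch:
    # New defaults per this commit: 5000 actions per bulk, 15s flush interval,
    # shared by both metrics and record data.
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000}
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
```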

- 06 Jul 2021, 1 commit
Committed by Sergi Castro
* Allow configuring the max request header size. This makes the HTTP max request header size of the Jetty server configurable. By default it uses 8192, the same as the Jetty default.
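A minimal sketch of how such a default might be wired up (the property key `sw.jetty.maxRequestHeaderSize` is hypothetical, not SkyWalking's actual setting name; on a real server the resulting value would be passed to Jetty's `HttpConfiguration#setRequestHeaderSize`):

```java
public class JettyHeaderConfig {
    // Read the configured max request header size, falling back to
    // 8192 bytes, which is the Jetty default.
    static int maxRequestHeaderSize() {
        return Integer.parseInt(
            System.getProperty("sw.jetty.maxRequestHeaderSize", "8192"));
    }

    public static void main(String[] args) {
        System.out.println(maxRequestHeaderSize());
    }
}
```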

- 05 Jul 2021, 1 commit
Committed by Daming

- 01 Jul 2021, 2 commits

- 30 Jun 2021, 1 commit
Committed by wu-sheng
1. Performance: add an L1 aggregation flush period, which reduces CPU load and helps young GC.
2. Stop sending directly after the first aggregation, to reduce network load (#6400).
3. Enhance the DataCarrier to notify the consumer when there is no enqueue event in the short term.
4. The L1 aggregation flush period still works even when no further metrics are generated, powered by (3).
5. Fix a gRPC remote client OOM; the concurrency control mechanism had failed.
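The idea behind item 1 can be sketched as follows. This is an illustrative toy, not the actual DataCarrier or aggregation worker code: metrics are merged into a local buffer, and only the merged batch is sent downstream when the flush period elapses, instead of sending after every first-level aggregation.

```java
import java.util.ArrayList;
import java.util.List;

public class L1FlushSketch {
    private final List<Integer> buffer = new ArrayList<>();
    private final int flushPeriodTicks;
    private int ticks;
    int sends; // counts downstream (network) sends

    L1FlushSketch(int flushPeriodTicks) {
        this.flushPeriodTicks = flushPeriodTicks;
    }

    void accept(int metric) {
        buffer.add(metric); // aggregate locally; no immediate send
    }

    void onTick() { // driven by a periodic timer in a real system
        if (++ticks >= flushPeriodTicks && !buffer.isEmpty()) {
            sends++;        // one send for the whole merged batch
            buffer.clear();
            ticks = 0;
        }
    }

    public static void main(String[] args) {
        L1FlushSketch sketch = new L1FlushSketch(2);
        sketch.accept(1);
        sketch.accept(2);
        sketch.onTick(); // period not yet reached: nothing sent
        sketch.onTick(); // period reached: one batched send
        System.out.println(sketch.sends);
    }
}
```

Many accepted metrics thus collapse into one send per flush period, which is the network reduction the commit describes.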

- 29 Jun 2021, 1 commit
Committed by Alvin

- 20 Jun 2021, 1 commit
Committed by wankai123

- 11 Jun 2021, 1 commit
Committed by 静夜思朝颜

- 28 May 2021, 1 commit
Committed by Zhenxu

- 27 May 2021, 1 commit
Committed by Daming

- 20 May 2021, 1 commit
Committed by hailin0

- 07 May 2021, 1 commit
Committed by Zhenxu Ke

- 25 Apr 2021, 1 commit
Committed by liqiangz

- 23 Apr 2021, 1 commit
Committed by Darcy

- 19 Apr 2021, 1 commit
Committed by kl

- 02 Apr 2021, 1 commit
Committed by Evan
* Make sync metrics concurrent.
* Add changelog.
* Polish code.
* Change default value.
* Remove unnecessary code.
Co-authored-by: Evan <evanljp@outlook.com>
Co-authored-by: Zhenxu Ke <kezhenxu94@apache.org>
Co-authored-by: 吴晟 Wu Sheng <wu.sheng@foxmail.com>

- 31 Mar 2021, 1 commit
Committed by Gao Hongtao

- 27 Feb 2021, 1 commit
Committed by wu-sheng

- 23 Feb 2021, 1 commit
Committed by Zhenxu Ke

- 21 Feb 2021, 1 commit
Committed by 静夜思朝颜

- 20 Feb 2021, 1 commit
Committed by haoyann

- 18 Feb 2021, 1 commit
Committed by 静夜思朝颜