- 15 Apr 2020, 3 commits
-
-
Committed by Guangning
* Delete unused docker images
* Add a step to list docker images before deleting them
* Add a command to report disk space usage
* Add the docker image deletion step
* Fix errors
* Upgrade PostgreSQL from version 9.6 to 11
* Delete unused images first
* Fix image deletion
* Fix docker image ordering
* Add a maven install step
* Add the option docker.nocache=true
* Fix to use the correct docker image
* Fix integration test failures
* Delete debug info

(cherry picked from commit 04ed23c0)
Fixed maven download
-
Committed by 冉小龙
Signed-off-by: xiaolong.ran <rxl@apache.org>

### Motivation

Maven broken link: Maven now requires HTTPS links to download the required libraries. Same issue referenced at: https://support.sonatype.com/hc/en-us/articles/360041287334

Looking up the pulsar-io-solr package fails as follows:

```
2020-01-16T11:39:08.1688765Z [ERROR] Failed to execute goal on project pulsar-io-solr: Could not resolve dependencies for project org.apache.pulsar:pulsar-io-solr:jar:2.5.0: Failed to collect dependencies at org.apache.solr:solr-core:jar:7.5.0 -> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor for org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-restlet (http://maven.restlet.org): Authorization failed for http://maven.restlet.org/org/restlet/jee/org.restlet/2.3.0/org.restlet-2.3.0.pom 403 Forbidden -> [Help 1]
```

### Modifications

- Replace http://central.maven.org/maven2/org/apache/pulsar/pulsar-common/1.22.0-incubating/pulsar-common-1.22.0-incubating.jar with https://repo1.maven.org/maven2/org/apache/pulsar/pulsar-common/2.4.2/pulsar-common-2.4.2.jar in FunctionCommonTest.java
- Add a new repository address for pulsar-io-solr

* Fix maven broken link
* Fix CI error
* Fix repository id of spring
* Fix other links

Signed-off-by: xiaolong.ran <rxl@apache.org>
(cherry picked from commit 8cf56f25)
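Replacing a plain-HTTP repository with an HTTPS one is a small pom.xml change. A minimal sketch of such a repository declaration, using the repo1.maven.org URL mentioned above (the `id` here is illustrative, not necessarily the one used in the patch):

```xml
<!-- Hypothetical sketch: resolving artifacts over HTTPS instead of plain HTTP. -->
<repositories>
  <repository>
    <!-- illustrative id -->
    <id>central-https</id>
    <url>https://repo1.maven.org/maven2</url>
  </repository>
</repositories>
```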
- 13 Apr 2020, 37 commits
-
-
Committed by guangning
-
Committed by guangning
(cherry picked from commit 8d971346415033f5e6a2daccda7a0821028d6b6c)
-
Committed by Yang Yang
Fixes https://github.com/apache/pulsar/issues/5920

### Motivation

The current RabbitMQ connector lacks an integration test.

### Modifications

Adds an integration test for the RabbitMQ connector (both source and sink):
- A container definition using `rabbitmq:3.8-management` as suggested.
- A RabbitMQ source tester, which provides configurations for the RabbitMQ source connector and publishes messages to RabbitMQ.
- A RabbitMQ sink tester, which provides configurations for the RabbitMQ sink connector and consumes messages from RabbitMQ for verification.
- A test case that invokes the testers within the current test framework.

### Verifying this change

- Added integration tests for the RabbitMQ connector

(cherry picked from commit 4cfb9528)
-
Committed by lipenghui
### Motivation

When all subscriptions have no backlog, the backlog size in the topic stats is still not 0. This PR improves the backlog size calculation of the managed ledger.

### Modifications

If all entries are consumed, return the ledger size as the consumed size.

### Verifying this change

A new unit test was added.

(cherry picked from commit d72e383b)
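The rule above can be sketched in a few lines. This is a hypothetical illustration of the idea only (the real logic lives in the managed ledger and works on ledger/cursor positions, not plain counters):

```java
// Hypothetical sketch of the backlog-size rule described in the commit:
// when every entry has been consumed, report the whole ledger as consumed
// so the backlog is exactly 0 instead of a stale positive value.
final class BacklogEstimator {
    private final long totalLedgerSize;     // bytes written to the ledger
    private final long consumedSize;        // bytes acked by the slowest cursor
    private final boolean allEntriesConsumed;

    BacklogEstimator(long totalLedgerSize, long consumedSize, boolean allEntriesConsumed) {
        this.totalLedgerSize = totalLedgerSize;
        this.consumedSize = consumedSize;
        this.allEntriesConsumed = allEntriesConsumed;
    }

    long backlogSize() {
        long consumed = allEntriesConsumed ? totalLedgerSize : consumedSize;
        return totalLedgerSize - consumed;
    }
}
```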
-
Committed by Mike Russell
- Update README.md to call out that JDK 11 is supported, instead of only listing JDK 1.8 / Java 8. Resolves: readme-update-java-11 (cherry picked from commit 576080016b3e670042b5bb810252db9f6fe9bd21)
-
Committed by Jia Zhai
### Motivation

Disallow subscription auto-creation via the admin API when topic auto-creation is disabled.

### Modifications

- Change the admin code to disallow subscription auto-creation when topic auto-creation is disabled
- Add tests

### Verifying this change

Unit tests passed.

* Fix subscriptions being auto-created by the admin API
* Fix test error: create sub-partitions when updating a topic
* Fix flaky test

(cherry picked from commit 21f6dcdd)
-
Committed by hrsakai
Co-authored-by: Sijie Guo <sijie@apache.org> (cherry picked from commit cf045e41)
-
Committed by ran
Fixes #5503
From #6445

In functions, a `ClassCastException` is thrown when using the Avro schema for topics:

```
Exception in thread "main" java.lang.ClassCastException: org.apache.pulsar.shade.org.apache.avro.generic.GenericData$Record cannot be cast to io.streamnative.KeyValueSchemaTest$Foo2
	at io.streamnative.KeyValueSchemaTest.testConsumerByPythonProduce(KeyValueSchemaTest.java:412)
	at io.streamnative.KeyValueSchemaTest.main(KeyValueSchemaTest.java:305)
```

In functions, specify the ClassLoader for `ReflectDatumReader` when using an Avro schema. Adds the integration test `testAvroSchemaFunction` to the `PulsarFunctionsTest` class.

(cherry picked from commit 52ae1823)
Handle conflict
-
Committed by Jia Zhai
### Motivation

Make it possible for users to use both "org.bouncycastle.jce.provider.BouncyCastleProvider" and "org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider". In the current code, the BouncyCastle (bc) jars are used in, and strongly tied to, both the broker and the client. We need to make this easy to configure. This change splits bc out from the modules that depend on it, so users can freely include or exclude it.

### Changes

- Build a shaded jar for the BouncyCastle non-FIPS version; other modules depend on this module.
- Build nars for both the FIPS and non-FIPS versions of BouncyCastle; users can load BouncyCastle through these 2 nars.
- Split MessageCrypto out of the client and make it an individual module, so the client can exclude BouncyCastle.
- Add 2 test examples: 1. exclude the bc non-FIPS version and include the bc FIPS version; 2. exclude the bc non-FIPS version and load the bc FIPS version through a nar.

(cherry picked from commit 181e5e7f)
-
Committed by Sanjeev Kulkarni
* Do not retry on authorization failure
* Address feedback
* Fix logic
* Fix test
* Fix more tests

Co-authored-by: Sanjeev Kulkarni <sanjeevk@splunk.com> (cherry picked from commit 6cb0d25a)
-
Committed by Fangbin Sun
### Motivation

Fixes #6561

### Modifications

Initialize `BatchMessageAckerDisabled` with a `new BitSet()` object.

(cherry picked from commit 2007de6d)
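The shape of the fix is to back the disabled acker with a real, empty `BitSet` instead of leaving it unset, so callers that inspect the ack state never hit an NPE. A minimal sketch; the class name mirrors the commit message but the method bodies here are illustrative, not the actual Pulsar implementation:

```java
import java.util.BitSet;

// Hypothetical sketch: a "disabled" batch-message acker that still owns a
// real (empty) BitSet, so querying the ack state is always safe.
final class BatchMessageAckerDisabled {
    private final BitSet bitSet = new BitSet(); // the fix: never null

    int getOutstandingAcks() {
        return bitSet.cardinality(); // always 0 for the disabled acker
    }

    boolean ackIndividual(int batchIndex) {
        return true; // disabled acker treats every individual ack as complete
    }
}
```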
-
Committed by Boyang Jerry Peng
* Fix: topic with one partition cannot be updated (cherry picked from commit 9602c9bd)
-
Committed by lipenghui
Don't increment unacked messages for consumers with the Exclusive/Failover subscription mode. (#6558)

Fixes #6552

### Motivation

#6552 was introduced by #5929, so this PR stops incrementing unacked messages for consumers with the Exclusive/Failover subscription mode.

(cherry picked from commit 2449696a)
-
Committed by lipenghui
### Motivation

Disable channel auto-read when the publish rate or publish buffer is exceeded. Currently, ServerCnx sets channel auto-read to false when it gets a new message and the publish rate or publish buffer is exceeded, so it depends on reading one more message. If there are too many ServerCnx instances (too many topics or clients), the publish rate limitation will have a large deviation.

Here is an example to show the problem. Enable the publish rate limit in broker.conf:

```
brokerPublisherThrottlingTickTimeMillis=1
brokerPublisherThrottlingMaxByteRate=10000000
```

Use Pulsar perf to test publishing to 100 partitions:

```
bin/pulsar-perf produce -s 500000 -r 100000 -t 1 100p
```

The test result:

```
10:45:28.844 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 367.8 msg/s --- 1402.9 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 710.008 ms - med: 256.969 - 95pct: 2461.439 - 99pct: 3460.255 - 99.9pct: 4755.007 - 99.99pct: 4755.007 - Max: 4755.007
10:45:38.919 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 456.6 msg/s --- 1741.9 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 2551.341 ms - med: 2347.599 - 95pct: 6852.639 - 99pct: 9630.015 - 99.9pct: 10824.319 - 99.99pct: 10824.319 - Max: 10824.319
10:45:48.959 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 432.0 msg/s --- 1648.0 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 4373.505 ms - med: 3972.047 - 95pct: 11754.687 - 99pct: 15713.663 - 99.9pct: 17638.527 - 99.99pct: 17705.727 - Max: 17705.727
10:45:58.996 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 430.6 msg/s --- 1642.6 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 5993.563 ms - med: 4291.071 - 95pct: 18022.527 - 99pct: 21649.663 - 99.9pct: 24885.375 - 99.99pct: 25335.551 - Max: 25335.551
10:46:09.195 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 403.2 msg/s --- 1538.3 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 7883.304 ms - med: 6184.159 - 95pct: 23625.343 - 99pct: 29524.991 - 99.9pct: 30813.823 - 99.99pct: 31467.775 - Max: 31467.775
10:46:19.314 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 401.1 msg/s --- 1530.1 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 9587.407 ms - med: 6907.007 - 95pct: 28524.927 - 99pct: 34815.999 - 99.9pct: 36759.551 - 99.99pct: 37581.567 - Max: 37581.567
10:46:29.389 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 372.8 msg/s --- 1422.0 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 11984.595 ms - med: 10095.231 - 95pct: 34515.967 - 99pct: 40754.175 - 99.9pct: 43553.535 - 99.99pct: 43603.199 - Max: 43603.199
10:46:39.459 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 374.6 msg/s --- 1429.1 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 12208.459 ms - med: 7807.455 - 95pct: 38799.871 - 99pct: 46936.575 - 99.9pct: 50500.095 - 99.99pct: 50500.095 - Max: 50500.095
10:46:49.537 [main] INFO org.apache.pulsar.testclient.PerformanceProducer - Throughput produced: 295.6 msg/s --- 1127.5 Mbit/s --- failure 0.0 msg/s --- Latency: mean: 14503.565 ms - med: 10753.087 - 95pct: 45041.407 - 99pct: 54307.327 - 99.9pct: 57786.623 - 99.99pct: 57786.623 - Max: 57786.623
```

The reason for such a large deviation is that the producer sends batched messages and each ServerCnx reads one more message. This PR cannot completely solve the problem, but it can alleviate it: when the message publish rate is exceeded, the broker sets channel auto-read to false for all topics. This avoids having every ServerCnx read one more message.

### Does this pull request potentially affect one of the following parts:

- Dependencies (does it add or upgrade a dependency): (no)
- The public API: (no)
- The schema: (no)
- The default values of configurations: (no)
- The wire protocol: (no)
- The rest endpoints: (no)
- The admin cli options: (no)
- Anything that affects deployment: (no)

### Documentation

- Does this pull request introduce a new feature? (no)

(cherry picked from commit ec31d549)
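The broker-wide policy described above, pause reads on every connection as soon as the shared budget for the current tick is exhausted, can be sketched as follows. This is a minimal illustration, not the Pulsar implementation; all class and method names here are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a shared per-tick byte budget that, once exceeded,
// disables auto-read on ALL registered connections at once (instead of each
// connection pausing itself only after reading one more message).
final class PublishThrottle {
    interface Connection { void setAutoRead(boolean autoRead); }

    private final long maxBytesPerTick;
    private long bytesThisTick;
    private final List<Connection> connections = new ArrayList<>();

    PublishThrottle(long maxBytesPerTick) { this.maxBytesPerTick = maxBytesPerTick; }

    void register(Connection c) { connections.add(c); }

    /** Called for each incoming publish; returns true once the budget is exceeded. */
    boolean recordPublish(long bytes) {
        bytesThisTick += bytes;
        if (bytesThisTick > maxBytesPerTick) {
            connections.forEach(c -> c.setAutoRead(false)); // pause all topics
            return true;
        }
        return false;
    }

    /** Called by the tick timer (e.g. every brokerPublisherThrottlingTickTimeMillis). */
    void onTick() {
        bytesThisTick = 0;
        connections.forEach(c -> c.setAutoRead(true)); // resume reading
    }
}
```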
-
Committed by Yijie Shen
Currently, many modules depend on `managed-ledger-test.jar` just because they want to use MockBookkeeper and MockZookeeper. This made module dependencies hard to understand and track. This PR introduces a new `testmocks` module and pulls all mocks from managed-ledger tests into the new module. (cherry picked from commit efe19e02)
-
Committed by Sanjeev Kulkarni
Co-authored-by: Sanjeev Kulkarni <sanjeevk@splunk.com> (cherry picked from commit 36ea153c)
-
Committed by lipenghui
### Motivation

Once the broker service is started, clients can connect to the broker and send requests that depend on the namespace service, so we should create the namespace service before starting the broker service. Otherwise, an NPE occurs.

![image](https://user-images.githubusercontent.com/12592133/76090515-a9961400-5ff6-11ea-9077-cb8e79fa27c0.png)
![image](https://user-images.githubusercontent.com/12592133/76099838-b15db480-6006-11ea-8f39-31d820563c88.png)

### Modifications

Move the creation of the namespace service and the schema registry service before the broker service starts.

(cherry picked from commit 5285c68b)
-
Committed by k2la
### Motivation

Because of #6391, acked messages were counted as unacked messages. Although messages from brokers were acknowledged, the following log was output:

```
2020-03-06 19:44:51.790 INFO ConsumerImpl:174 | [persistent://public/default/t1, sub1, 0] Created consumer on broker [127.0.0.1:58860 -> 127.0.0.1:6650]
my-message-0: Fri Mar 6 19:45:05 2020
my-message-1: Fri Mar 6 19:45:05 2020
my-message-2: Fri Mar 6 19:45:05 2020
2020-03-06 19:45:15.818 INFO UnAckedMessageTrackerEnabled:53 | [persistent://public/default/t1, sub1, 0] : 3 Messages were not acked within 10000 time
```

This behavior happened on the master branch.

(cherry picked from commit 67f8cf30)
-
Committed by Rajan Dhabalia
### Motivation

Remove the duplicate `cnx()` method for `producer`.

(cherry picked from commit f9ada107)
-
Committed by Addison Higham
See #6416. This change ensures that all futures within BrokerService have a guaranteed timeout. As stated in #6416, we see cases where loading or creating a topic fails to resolve the future for unknown reasons. It appears that these futures *may* not be returning. This seems like a sane change to make to ensure that these futures finish; however, it still isn't understood under what conditions these futures may not return, so this fix is mostly a workaround for some underlying issue. Co-authored-by: Addison Higham <ahigham@instructure.com> (cherry picked from commit 4a4cce9c)
-
Committed by Rajan Dhabalia
(cherry picked from commit ad5415ab)
-
Committed by Addison Higham
### Motivation

Currently, the proxy only proxies the v1/v2 functions routes to the function worker.

### Modifications

This changes the code to proxy all routes for the function worker when those routes match. At the moment this is still a static list of prefixes, but in the future it may be possible to fetch this list of prefixes dynamically from the REST routes.

### Verifying this change

- Added some tests to ensure the routing works as expected

(cherry picked from commit 329e2310)
-
Committed by Rolf Arne Corneliussen
Fixes #6482

### Motivation

Prevent topic compaction from leaking direct memory.

### Modifications

Several leaks were discovered using Netty leak detection and code review.

* `CompactedTopicImpl.readOneMessageId` would get an `Enumeration` of `LedgerEntry` but did not release the underlying buffers. Fix: iterate through the `Enumeration` and release the underlying buffers. Instead of logging the case where the `Enumeration` did not contain any elements, complete the future exceptionally with the message (it will be logged by Caffeine).
* There were two main sources of leaks in `TwoPhaseCompactor`. First, `RawBatchConverter.rebatchMessage` failed to close/release a `ByteBuf` (uncompressedPayload); second, the `ByteBuf` returned by `RawBatchConverter.rebatchMessage` was never closed. The first was easy to fix (release the buffer). To fix the second and make the code easier to read, `RawBatchConverter.rebatchMessage` no longer closes the message read from the topic; instead, that message is closed in a try/finally clause surrounding most of the method body that handles a message from a topic (in the phase-two loop). If `RawBatchConverter.rebatchMessage` produced a new message, it is released after being added to the compacted ledger.

### Verifying this change

Modified `RawReaderTest.testBatchingRebatch` to show the new contract. One can run the described test to reproduce the issue and verify that no leak is detected.

(cherry picked from commit f2ec1b4e)
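The ownership pattern the fix adopts, the caller keeps responsibility for releasing both the message it read and any buffer it produced, in a single try/finally, can be sketched with a tiny stand-in for Netty's reference-counted `ByteBuf` (real code would call `ReferenceCountUtil.release()`; all names here are illustrative):

```java
// Minimal stand-in for a reference-counted buffer.
final class RefCountedBuf {
    private int refCnt = 1;
    void release() { refCnt--; }
    int refCnt() { return refCnt; }
}

// Hypothetical sketch of the try/finally ownership pattern described above:
// the loop body owns both the message it read and the rebatched buffer it
// may have produced, and releases both on every path.
final class CompactionStep {
    static void handleMessage(RefCountedBuf message) {
        RefCountedBuf rebatched = null;
        try {
            rebatched = rebatch(message);   // may produce a new buffer
            // ... add `rebatched` to the compacted ledger ...
        } finally {
            if (rebatched != null) {
                rebatched.release();        // release the produced buffer
            }
            message.release();              // always release the message we read
        }
    }

    private static RefCountedBuf rebatch(RefCountedBuf in) {
        return new RefCountedBuf();         // illustrative only
    }
}
```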
-
Committed by Matteo Merli
(cherry picked from commit 6604f540)
-
Committed by Sijie Guo
### Motivation

Proxy logging fetches an incorrect producerId for the `Send` command; because of that, logging always gets producerId 0 and fetches an invalid topic name.

### Modification

Fixed topic logging by fetching the correct producerId for the `Send` command.

(cherry picked from commit 65cc3031)
-
Committed by Rajan Dhabalia
### Motivation

Use the correct name for the proxy thread executor.

(cherry picked from commit 5c2c058f)
-
Committed by Jia Zhai
Fix #6439

We shouldn't statically link libssl into libpulsar.a, as this is a security red flag; we should just use whatever libssl the system provides. If there is a security problem in libssl, all machines can then update their own libssl library without rebuilding libpulsar.a.

As suggested, this change does not alter the old behavior. It mainly provides 2 additional Pulsar C++ client libraries in the deb/rpm packages, and adds docs on how to use the 4 libs. The 2 additional libs:

- pulsarSharedNossl (libpulsarnossl.so): similar to pulsarShared (libpulsar.so), with no ssl statically linked.
- pulsarStaticWithDeps (libpulsarwithdeps.a): similar to pulsarStatic (libpulsar.a), but with the dependency libraries `libboost_regex`, `libboost_system`, `libcurl`, `libprotobuf`, `libzstd` and `libz` statically archived in.

Passed rpm/deb build, install, and compilation of a pulsar-client example against all 4 libs.

* Also add libpulsarwithdeps.a together with libpulsar.a into the cpp client release
* Add documentation for libpulsarwithdeps.a, add g++ build examples
* Add the pulsarSharedNossl target to build libpulsarnossl.so
* Update doc
* Verify the 4 libs in rpm/deb build and install; all good

(cherry picked from commit 33eea888)
-
Committed by Yijie Shen
When starting Pulsar in standalone mode with TLS enabled, it fails to create two namespaces during startup. This is because it uses the unencrypted URL/port while constructing the PulsarAdmin client. (cherry picked from commit 3e1b8f64)
-
Committed by Rolf Arne Corneliussen
Fixes #6453

### Motivation

`ConsumerBase` and `ProducerImpl` use `System.currentTimeMillis()` to measure elapsed time in the 'operations' inner classes (`ConsumerBase$OpBatchReceive` and `ProducerImpl$OpSendMsg`). An instance variable `createdAt` is initialized with `System.currentTimeMillis()`, but the variable is never used to read wall-clock time, only to compute elapsed time (e.g. the timeout for a batch). When a variable is only used to compute elapsed time, it makes more sense to use `System.nanoTime()`.

### Modifications

The instance variable `createdAt` in `ConsumerBase$OpBatchReceive` and `ProducerImpl$OpSendMsg` is initialized with `System.nanoTime()`. Usage of the variable is updated to reflect that it holds nano time; elapsed time is computed as the difference between the current system nano time and the `createdAt` variable. The `createdAt` field is package-protected and is currently only used in the declaring class and the outer class, limiting the chances of unwanted side effects.

(cherry picked from commit 459ec6e8)
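The pattern the fix moves to, store `System.nanoTime()` at creation and only ever take differences, never interpreting the stored value as wall-clock time, can be sketched as follows. The class and method names here are illustrative, not the actual Pulsar fields:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of monotonic elapsed-time measurement: nanoTime() is
// unaffected by wall-clock adjustments (NTP, manual changes), so differences
// are a reliable basis for timeouts, unlike currentTimeMillis().
final class OpTimer {
    private final long createdAtNanos = System.nanoTime();

    /** Elapsed time since creation, in milliseconds. */
    long elapsedMillis() {
        // Only differences of nanoTime() values are meaningful.
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - createdAtNanos);
    }

    boolean hasTimedOut(long timeoutMillis) {
        return elapsedMillis() >= timeoutMillis;
    }
}
```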