- 28 Aug, 2020 1 commit
-
-
Committed by jianyun
### Motivation *Flink 1.11 compatibility requires support for the Java 8 date API in Pulsar's primitive schemas.* ### Modifications *Add Instant, LocalDate, LocalTime, LocalDateTime to Pulsar's primitive schemas* ### Verifying this change Add the Instant, LocalDate, LocalTime, and LocalDateTime types to the Schema type test
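As a rough illustration of what such a date/time schema does, the sketch below round-trips a `LocalDateTime` through an assumed millisecond-epoch encoding; Pulsar's actual primitive schema wire format may differ.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class TimeSchemaSketch {
    // Assumed encoding: UTC epoch millis. Pulsar's real primitive schemas
    // may use a different wire representation.
    public static long encode(LocalDateTime value) {
        return value.toInstant(ZoneOffset.UTC).toEpochMilli();
    }

    public static LocalDateTime decode(long millis) {
        return LocalDateTime.ofInstant(Instant.ofEpochMilli(millis), ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2020, 8, 28, 12, 30);
        // Round-trips losslessly at millisecond precision.
        System.out.println(decode(encode(t)));
    }
}
```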
-
- 10 Aug, 2020 1 commit
-
-
Committed by ran
### Motivation Currently, the transaction components are all independent; the relationship between the transaction client and the transaction server needs to be established. The target of this PR is to enable the Pulsar client to send transaction messages to the Pulsar broker and execute the commit command.
-
- 17 Jul, 2020 1 commit
-
-
Committed by lipenghui
* Handle NotAllowed Exception at the client side.
-
- 16 Jul, 2020 1 commit
-
-
Committed by Sijie Guo
*Motivation* The code generation for `repeated long` is not handled properly. (I am not sure how changes were made to PulsarApi.proto) *Modification* This pull request adds the code to handle generating code for `repeated long`. *Test* Add unit test to ensure `repeated long` is processed. Add test cases to cover both packed and non-packed serialization for `repeated long`. See more details about packed serialization: https://developers.google.com/protocol-buffers/docs/encoding#optional
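The difference between packed and non-packed serialization of a `repeated long` field can be sketched in plain Java (helper names are assumptions for illustration; this is not Pulsar's generated code):

```java
import java.io.ByteArrayOutputStream;

public class PackedRepeatedSketch {
    // Base-128 varint, as used by protobuf.
    public static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }

    // Packed: one key (wire type 2), one length, then the varints back to back.
    public static byte[] packed(int field, long[] values) {
        ByteArrayOutputStream payload = new ByteArrayOutputStream();
        for (long v : values) writeVarint(payload, v);
        byte[] body = payload.toByteArray();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeVarint(out, (field << 3) | 2);
        writeVarint(out, body.length);
        out.write(body, 0, body.length);
        return out.toByteArray();
    }

    // Non-packed: the key (wire type 0) is repeated before every value.
    public static byte[] nonPacked(int field, long[] values) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (long v : values) {
            writeVarint(out, field << 3);
            writeVarint(out, v);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        long[] vals = {1, 2, 300};
        System.out.println(packed(1, vals).length);     // 6 bytes: key + len + 4 payload bytes
        System.out.println(nonPacked(1, vals).length);  // 7 bytes: a key byte per value
    }
}
```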
-
- 07 Jun, 2020 1 commit
-
-
Committed by Matteo Merli
### Motivation In certain cases, it is useful to use the key-shared dispatcher just to have the same key go to the same consumer, even though ordering is not required. In this case, if we relax the ordering requirement, we can avoid new consumers getting stuck while an existing consumer is going through a prefetched queue of existing messages. This is especially relevant if the processing time is high.
-
- 04 Jun, 2020 1 commit
-
-
Committed by ran
Fixes #4804 Thanks to @nlu90 for the earlier work in #6384. ### Motivation Currently, the KeyValue schema doesn't handle `null` keys and `null` values well.
-
- 03 Jun, 2020 1 commit
-
-
Committed by luceneReader
* PIP-61: 1. resolve broker.conf, validate `advertisedListeners` and `internalListenerName` 2. register the `advertisedListeners` to ZooKeeper 3. client finds the target broker by listenerName 4. add test case PulsarMultiListenersTest 5. add test case MultipleListenerValidatorTest
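A minimal sketch of such a broker configuration; the listener names and addresses below are made up for illustration:

```properties
# broker.conf (PIP-61) -- listener names and addresses are illustrative
advertisedListeners=internal:pulsar://broker-1.local:6650,external:pulsar://203.0.113.10:6651
internalListenerName=internal
```

A client would then pick an address by name, e.g. via the Java client's `listenerName("external")` builder option, so brokers can be reached through different networks.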
-
- 02 Jun, 2020 2 commits
-
-
Committed by Rajan Dhabalia
* PIP 37: [pulsar-client] support large message size; fix producer; fix ref counts; add timeouts; add validation; fix recycling; fix stats; review fixes; fix send message and expiry-consumer-config; fix schema test; fix chunk properties * fix test
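The chunk split/reassemble idea behind large-message support can be sketched as follows; this illustration omits the chunk metadata (message UUID, chunk index, total chunk count) that the real protocol carries alongside each chunk:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkingSketch {
    // Producer side: split a payload into fixed-size chunks; the last may be shorter.
    public static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < payload.length; off += chunkSize) {
            chunks.add(Arrays.copyOfRange(payload, off, Math.min(off + chunkSize, payload.length)));
        }
        return chunks;
    }

    // Consumer side: concatenate the chunks back into the original payload.
    public static byte[] join(List<byte[]> chunks) {
        int total = 0;
        for (byte[] c : chunks) total += c.length;
        byte[] out = new byte[total];
        int off = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, out, off, c.length);
            off += c.length;
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[10];
        for (int i = 0; i < 10; i++) payload[i] = (byte) i;
        List<byte[]> chunks = split(payload, 4);
        System.out.println(chunks.size() + " " + Arrays.equals(join(chunks), payload));  // 3 true
    }
}
```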
-
Committed by lipenghui
Master issue: #6253 Fixes #5969 ### Motivation Add support for acknowledging batch messages by local index. This can be disabled on the broker side by setting batchIndexAcknowledgeEnable=false in broker.conf. PIP-54 documentation will be created soon. ### Modifications 1. Managed cursor supports tracking and persisting the local index of batch messages. 2. Client supports sending batch index acks to the broker. 3. Batch messages are dispatched to the client together with index ack information. 4. Client skips the acked indexes. ### Verifying this change New unit tests added
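The per-index tracking can be pictured with a bit set; this is a sketch of the idea, not Pulsar's actual managed-cursor code:

```java
import java.util.BitSet;

public class BatchIndexAckSketch {
    // One bit per message index inside a batch; a set bit means "still unacked".
    private final BitSet unacked;

    public BatchIndexAckSketch(int batchSize) {
        unacked = new BitSet(batchSize);
        unacked.set(0, batchSize);
    }

    public void ackIndex(int index) {
        unacked.clear(index);
    }

    // The whole batch entry can be deleted once every index has been acked.
    public boolean fullyAcked() {
        return unacked.isEmpty();
    }

    public static void main(String[] args) {
        BatchIndexAckSketch batch = new BatchIndexAckSketch(3);
        batch.ackIndex(0);
        batch.ackIndex(2);
        System.out.println(batch.fullyAcked());  // false: index 1 is still outstanding
        batch.ackIndex(1);
        System.out.println(batch.fullyAcked());  // true
    }
}
```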
-
- 19 May, 2020 1 commit
-
-
Committed by Neng Lu
Fixes #4803 ### Motivation Allow the typed consumer to receive messages with a `null` value if the producer sends a message without a payload. ### Modifications - add a flag in `MessageMetadata` to indicate whether the payload is set when the message is created - check and return `null` if the flag is not set when reading data from a message
-
- 11 Feb, 2020 1 commit
-
-
Committed by Matteo Merli
* PIP-55: Refresh Authentication Credentials * Fixed import order * Do not check for original client credential if it's not coming through proxy * Fixed import order * Fixed mocked test assumption * Addressed comments * Avoid printing an NPE on auth refresh check if auth is disabled
-
- 09 Dec, 2019 1 commit
-
-
Committed by lipenghui
### Motivation Implement transaction coordinator client. ### Modifications Add transaction coordinator client. Add transaction meta store handler to handle meta store request and response.
-
- 19 Nov, 2019 1 commit
-
-
Committed by lipenghui
Fixes #5535 Motivation Currently, if a create-producer request times out, the producer's connection handler will reconnect to the broker later; but if the broker has already completed the previous create-producer request, the reconnection will fail with "producer with name xxx is already connected". So this PR introduces an epoch for the connection handler and adds a field named isGeneratedName to the producer to handle the above problem. This PR only handles the generated-producer-name scenario, since many users hit errors like #5535, so we need to fix the generated-name scenario first. For the scenario of a user-specified producer name, we can discuss later and find a simple approach to handle it. I left my idea here: use producer id and producer name together as the identity of the producer; the producer name is used for EO producers, and the producer id can be used on producer reconnect, but this approach depends on a globally unique producer id generator. Modifications If the producer has a generated producer name and its epoch is bigger than the existing producer's, the new producer will overwrite the old producer, so the reconnecting producer will be created successfully. Verifying this change Add unit tests to simulate producer timeout and reconnection
-
- 15 Nov, 2019 1 commit
-
-
Committed by lipenghui
## Motivation Since #5491 was merged, when a user uses a new Pulsar client to produce batch messages to an older-version broker (e.g. 2.4.0), a send ack error occurs: ``` [pulsar-client-io-8-2] WARN org.apache.pulsar.client.impl.ProducerImpl - [persistent://sandbox/pressure-test/test-A-partition-11] [pulsar-cluster-test-13-294] Got ack for msg. expecting: 13 - got: 224 - queue-size: 9 ``` The problem is that the client uses the highest sequence id to match the response sequence id, but an old-version broker cannot return the highest id. So this PR tries to fix producing batch messages with a new-version client against an old-version broker. ### Modifications Add the highest sequence id to CommandSendReceipt. If the highest sequence id of the send receipt is greater than the lowest sequence id, the broker is a new-version broker, so we need to verify the highest sequence id; otherwise we only verify the lowest sequence id.
-
- 13 Nov, 2019 1 commit
-
-
Committed by lipenghui
[Issue 5476] Fix message deduplication issue when using an external sequence id with batch produce (#5491) Fixes #5476 ### Motivation Fix #5476 ### Modifications 1. Add `last_sequence_id` in MessageMetadata and CommandSend; use sequence id and last_sequence_id to indicate the batch's `lowest_sequence_id` and `highest_sequence_id`. 2. Handle the batch message deduplication check in MessageDeduplication 3. Return the `last_sequence_id` to the client and add a message deduplication check in the client
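The deduplication rule can be sketched as follows; `BatchDedupSketch` is a hypothetical name, and the real MessageDeduplication handles more (per-producer maps, snapshots, persistence):

```java
public class BatchDedupSketch {
    // Highest sequence id the broker has persisted for this producer so far.
    private long lastPersistedHighest = -1;

    // A batch [lowest..highest] is a duplicate if even its highest sequence id
    // does not advance past what was already persisted.
    public boolean isDuplicate(long highestSequenceId) {
        return highestSequenceId <= lastPersistedHighest;
    }

    public void markPersisted(long highestSequenceId) {
        lastPersistedHighest = Math.max(lastPersistedHighest, highestSequenceId);
    }

    public static void main(String[] args) {
        BatchDedupSketch dedup = new BatchDedupSketch();
        System.out.println(dedup.isDuplicate(4));  // false: nothing persisted yet
        dedup.markPersisted(4);
        System.out.println(dedup.isDuplicate(4));  // true: this batch was already persisted
    }
}
```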
-
- 08 Nov, 2019 1 commit
-
-
Committed by lipenghui
### Motivation Introduce the sticky consumer; users can enable it by ```java client.newConsumer() .keySharedPolicy(KeySharedPolicy.exclusiveHashRange().hashRangeTotal(10).ranges(Range.of(0, 10))) .subscribe(); ``` ### Modifications Add a new consumer selector named HashRangeExclusiveStickyKeyConsumerSelector to support the sticky consumer. This change added tests and can be verified as follows: Add new unit tests.
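Exclusive hash-range selection can be sketched as below; the real selector hashes keys with Murmur3, and plain `Arrays.hashCode` stands in for it here, so this is only an illustration of the range lookup:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.TreeMap;

public class HashRangeSelectorSketch {
    // Upper bound of each exclusive hash range -> consumer name.
    private final TreeMap<Integer, String> rangeEnds = new TreeMap<>();
    private final int hashRangeTotal;

    public HashRangeSelectorSketch(int hashRangeTotal) {
        this.hashRangeTotal = hashRangeTotal;
    }

    public void addConsumer(String consumer, int rangeEnd) {
        rangeEnds.put(rangeEnd, consumer);
    }

    // Every message with the same key hashes into the same range, hence the same consumer.
    public String select(byte[] key) {
        int hash = (Arrays.hashCode(key) & Integer.MAX_VALUE) % hashRangeTotal;
        Map.Entry<Integer, String> e = rangeEnds.ceilingEntry(hash);
        return e == null ? null : e.getValue();
    }

    public static void main(String[] args) {
        HashRangeSelectorSketch selector = new HashRangeSelectorSketch(10);
        selector.addConsumer("consumer-a", 4);
        selector.addConsumer("consumer-b", 9);
        byte[] key = "order-123".getBytes();
        // The same key always lands on the same consumer.
        System.out.println(selector.select(key).equals(selector.select(key)));  // true
    }
}
```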
-
- 25 Oct, 2019 2 commits
-
-
Committed by Yi Tang
Master Issue: #5141 ### Motivation Implement part-1 of [PIP-43](https://github.com/apache/pulsar/wiki/PIP-43%3A-producer-send-message-with-different-schema#changespart-1). ### Modifications * New message api to specify message schema explicitly; * Mechanism of registering schema on producing; * Batch message container support to check message schema; * Configuration for seamless introduction of this feature;
-
Committed by Rajan Dhabalia
Fix formatting; rename field
-
- 27 Sep, 2019 1 commit
-
-
Committed by Matteo Merli
* Allow for topic deletions with regex consumers * Fixed test compilation * One more compile fix * Fixed BrokerServiceAutoTopicCreationTest
-
- 05 Aug, 2019 1 commit
-
-
Committed by Yong Zhang
*Motivation* Add new commands for the transaction. *Modifications* - Add new property for `CommandSend` - Add new command `CommandEndTxnOnPartition`
-
- 24 Jul, 2019 1 commit
-
-
Committed by Yong Zhang
* [Transaction][Buffer] Add new marker to show which message belongs to a transaction --- *Motivation* Add new message types to the transaction, including data, and the commit and abort markers in the transaction log. *Modifications* Add two new types of transaction messages: TXN_COMMIT is the commit marker of the transaction; TXN_ABORT is the abort marker of the transaction.
-
- 03 Jul, 2019 1 commit
-
-
Committed by lipenghui
* Allows the consumer to retrieve the sequence id that the producer set. * fix comments.
-
- 30 May, 2019 1 commit
-
-
Committed by Matteo Merli
* Delayed message delivery implementation * Fixed compilation * Allow to configure the delayed tracker implementation * Use int64 for timestamp * Address comments * More tests for TripleLongPriorityQueue * Removing useless sync block that causes deadlock with consumer close * Fixed merge conflict * Avoid new list when passing entries to consumer * Fixed test. Since entries are evicted from cache, they might be resent in diff order * Fixed context message builder * Fixed triggering writePromise when last entry was nullified * Moved entries filtering from consumer to dispatcher * Added Javadocs * Reduced synchronized scope to minimum
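The dispatch side of delayed delivery can be sketched with an in-memory priority queue ordered by delivery time; the broker's actual tracker uses the specialised TripleLongPriorityQueue mentioned above, and this is only a plain-Java illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class DelayedDeliverySketch {
    // A (deliverAt, ledgerId, entryId) triple, ordered by delivery time.
    static final class Delayed {
        final long deliverAt, ledgerId, entryId;
        Delayed(long deliverAt, long ledgerId, long entryId) {
            this.deliverAt = deliverAt; this.ledgerId = ledgerId; this.entryId = entryId;
        }
    }

    private final PriorityQueue<Delayed> queue =
            new PriorityQueue<>((a, b) -> Long.compare(a.deliverAt, b.deliverAt));

    public void add(long deliverAt, long ledgerId, long entryId) {
        queue.add(new Delayed(deliverAt, ledgerId, entryId));
    }

    // Drain every message whose scheduled delivery time has passed.
    public List<long[]> pollDue(long now) {
        List<long[]> due = new ArrayList<>();
        while (!queue.isEmpty() && queue.peek().deliverAt <= now) {
            Delayed d = queue.poll();
            due.add(new long[]{d.ledgerId, d.entryId});
        }
        return due;
    }

    public static void main(String[] args) {
        DelayedDeliverySketch tracker = new DelayedDeliverySketch();
        tracker.add(100, 7, 1);
        tracker.add(50, 7, 2);
        System.out.println(tracker.pollDue(60).size());   // 1: only the entry due at t=50
        System.out.println(tracker.pollDue(200).size());  // 1: the remaining entry
    }
}
```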
-
- 24 May, 2019 1 commit
-
-
Committed by Matteo Merli
* Replicated subscriptions - Markers protobuf * Added license check exclusions for generated code
-
- 21 May, 2019 2 commits
-
-
Committed by Fangbin Sun
* Support Snappy compression for Java. * Some minor fixes to pass unit tests * Format the cpp code * Added support for c++ client * Format the cpp code
-
Committed by Matteo Merli
* Replicated subscriptions - Configuration and client changes * Added missing header * Fixed mocked methods for tests * Fixed typo
-
- 16 May, 2019 1 commit
-
-
Committed by Yong Zhang
* Support setting the message size --- *Motivation* Currently Pulsar only supports messages up to 5MB, but there are many cases that need to transfer messages larger than 5MB. https://github.com/apache/pulsar/wiki/PIP-36%3A-Max-Message-Size *Modifications* - Add message size to the protocol - Automatically adjust the client message size from the server * Use `maxMessageSize` to set `nettyFrameSize` in the bookie client --- *Motivation* When the broker specifies a `maxMessageSize`, the bookie should accept this value as `nettyFrameSize` *Modifications* - Use `cnx().getMaxMessageSize` - The discovery service only redirects, so use the constant value `5 * 1024 * 1024` as the message size - Put `MAX_METADATA_SIZE` as a constant value in `InternalConfigurationData` * Use `Commands` to store the message setting --- *Modifications* - use `Commands` to store the default `MAX_MESSAGE_SIZE` and `MESSAGE_SIZE_FRAME_PADDING` - replace `LengthFieldBasedFrameDecoder` when the message size has been set - replace `PulsarDecoder.MaxMessageSize` * Fix some errors * Fix license header * Add test and make `ClientCnx.maxMessageSize` static --- *Motivation* - Even if the cnx can't be used, `maxMessageSize` should be used to compare message sizes, so it should be a static variable * fix code style * Fix license header
-
- 22 Apr, 2019 1 commit
-
-
Committed by lipenghui
## Motivation This is the core implementation for PIP-34; there is a task tracker, ISSUE-4077, for this PIP ## Modifications Add a new subscription type named Key_Shared Add PersistentStickyKeyDispatcherMultipleConsumers to handle the message dispatch Add a simple hash-range-based consumer selector Verifying this change Add new unit tests to verify the hash range selector and Key_Shared mode message consumption. * PIP-34 Key_Shared subscription core implementation. * PIP-34 Add more unit tests. 1.test redelivery with Key_Shared subscription 2.test none key dispatch with Key_Shared subscription 3.test ordering key dispatch with Key_Shared subscription * PIP-34 Fix alignment issue of Pulsar.proto * PIP-34 Fix TODO: format * PIP-34 Fix hash and ordering key issues * PIP-34 documentation for Key_Shared subscription * PIP-34 Fix cpp test issue. * PIP-34 Fix cpp format issue.
-
- 30 Mar, 2019 1 commit
-
-
Committed by Sijie Guo
*Motivation* Fixes #3925 We have 3 places defining schema type enums. We kept adding new schema types in pulsar-common; however, we didn't update the schema types in the wire protocol and schema storage. This causes `SchemaType.NONE` to be stored in the SchemaRegistry, which fails the Debezium connector on restart. *Modifications* Make sure all 3 places have consistent schema type definitions. Record the correct schema type.
-
- 13 Mar, 2019 1 commit
-
-
Committed by Jia Zhai
This implements the mutual auth API discussed in "PIP-30: change authentication provider API to support mutual authentication". It mainly provides 2 new commands, CommandAuthResponse and CommandAuthChallenge, in the proto to support this.
-
- 28 Feb, 2019 1 commit
-
-
Committed by 冉小龙
Signed-off-by: xiaolong.ran <ranxiaolong716@gmail.com> Fixes #3446 #3565 Motivation Reset the subscription associated with this consumer to a specific publish time.
-
- 25 Jan, 2019 1 commit
-
-
Committed by Matteo Merli
* Added support for ZSTD compression * Fixed C++ formatting * Added warning in javadoc * Fixed comment format * Fixed exception include * Fixed exception mistake * Added zstd to presto license file
-
- 10 Jan, 2019 1 commit
-
-
Committed by Matteo Merli
* Propagate specific Schema error to client * Handling new enums in C++ * Fixed formatting
-
- 27 Sep, 2018 1 commit
-
-
Committed by Ivan Kelly
Sometimes it can be useful to send something more complex than a string as the key of a message. However, early on Pulsar chose to make String the only way to send a key, and this permeates throughout the code, so we can't very well change it now. This patch adds rudimentary byte[] key support. If a user adds a byte[] key, the byte[] is base64 encoded and stored in the normal key field. We also send a flag to denote that it is base64 encoded, so the receiving end knows to decode it correctly. There's no schema or anything attached to this. Any SerDe has to be handled manually by the client.
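The base64 round trip described above can be sketched in a few lines; the accompanying "is base64" flag is elided here:

```java
import java.util.Arrays;
import java.util.Base64;

public class ByteKeySketch {
    // Producer side: store the byte[] key base64-encoded in the normal string key field,
    // alongside a flag marking it as base64 (the flag itself is elided in this sketch).
    public static String encodeKey(byte[] key) {
        return Base64.getEncoder().encodeToString(key);
    }

    // Consumer side: the flag tells us to decode the string back into the original bytes.
    public static byte[] decodeKey(String storedKey) {
        return Base64.getDecoder().decode(storedKey);
    }

    public static void main(String[] args) {
        byte[] key = {0x00, 0x7F, (byte) 0xFF};
        System.out.println(Arrays.equals(decodeKey(encodeKey(key)), key));  // true
    }
}
```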
-
- 25 Sep, 2018 1 commit
-
-
Committed by Gordeev Boris
Allow non-persistent topics to be retrieved along with persistent ones from the "GetTopicsOfNamespace" method (#2025) ### Motivation Please see issue #2009 for a detailed bug report. In our use case we require using the Java client with pattern subscription to read from a set of non-persistent topics. Unfortunately, right now this feature doesn't work. After researching the cause I found that under the hood the client requests a list of topics by namespace from the server, filters them by the pattern, and subscribes to them. The method in the Pulsar broker's NamespaceService class that is responsible for searching for the required topics only uses ledgers, thus returning only persistent topics to the client. The goal of this pull request is to provide a solution for that problem. ### Modifications This pull request updates the `getListOfTopics` method of the NamespaceService class to also include active non-persistent topics from the local broker cache (the `multiLayerTopicsMap` collection of BrokerService) in the result. ### Result As a result, requesting a list of topics by namespace using the HTTP API or binary API (and thus via the clients) will add non-persistent topics to the search result, allowing pattern subscription to be used with non-persistent topics. ### Considerations 1. Since this method pulls non-persistent topics from the local broker cache, this probably means that this solution will only work for Pulsar installations with a single broker, and if there are multiple brokers, results might be inconsistent. Unfortunately I don't really know how non-persistent topics themselves work in multi-broker setups. I recently asked on Slack if non-persistent topics are replicated in any way, and @merlimat's response was that they aren't. Also it seems that some other methods working with non-persistent topics use this very same collection. 2. It seems to me that unit tests have made sure that the Java client can work with this setup, but this might still be a breaking change for other clients, or if applications working with this API are not expecting non-persistent topics in the result. 3. I have made sure that old unit tests inside the `pulsar-broker` subproject still work, and I updated some old tests for this particular use case. Are there any more tests I can add? Overall, we really need this and I would appreciate it if maintainers could share their opinion. Thanks in advance.
-
- 14 Sep, 2018 1 commit
-
-
Committed by penghui
### Motivation Fixes #189 When a consumer gets messages from Pulsar, it's difficult to ensure every message is consumed successfully. Pulsar supports message redelivery by setting an acknowledge timeout when creating a new consumer. This is a good feature to guarantee the consumer will not lose messages. However, some messages may be redelivered many times, possibly without ever stopping. So it's necessary for Pulsar to support a feature to control this; users can use and customize it to control the message redelivery behavior. The feature is named Dead Letter Topic. ### Modifications The consumer can set a maximum number of redeliveries via the Java client. The consumer can set the name of the Dead Letter Topic via the Java client; this is optional. A message exceeding the maximum number of redeliveries should be sent to the Dead Letter Topic and acknowledged automatically. ### Result If the consumer enables the dead letter topic feature, a message exceeding the maximum number of redeliveries will be sent to the Dead Letter Topic and acknowledged automatically.
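The redelivery-count rule can be sketched as follows; this is a hypothetical counter for illustration, not the client's actual dead-letter implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class DeadLetterSketch {
    private final int maxRedeliverCount;
    private final Map<String, Integer> redeliveries = new HashMap<>();

    public DeadLetterSketch(int maxRedeliverCount) {
        this.maxRedeliverCount = maxRedeliverCount;
    }

    // Returns true when the message has exhausted its redeliveries and should be
    // routed to the dead letter topic (and acked on the original topic).
    public boolean recordRedelivery(String messageId) {
        int count = redeliveries.merge(messageId, 1, Integer::sum);
        return count > maxRedeliverCount;
    }

    public static void main(String[] args) {
        DeadLetterSketch dlq = new DeadLetterSketch(2);
        System.out.println(dlq.recordRedelivery("msg-1"));  // false: 1st redelivery
        System.out.println(dlq.recordRedelivery("msg-1"));  // false: 2nd redelivery
        System.out.println(dlq.recordRedelivery("msg-1"));  // true: exceeds max of 2
    }
}
```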
-
- 23 Jul, 2018 1 commit
-
-
Committed by Matteo Merli
-
- 06 Jul, 2018 1 commit
-
-
Committed by Boyang Jerry Peng
Change JSONSchema to generate an Avro schema from the POJO so we can standardize on using Avro schemas
-
- 12 Jun, 2018 1 commit
-
-
Committed by Boyang Jerry Peng
* adding avro schema * improving implementation * finishing implementation * remove unnecessary newlines * fixing poms * adding avro schema check * add missing license header * Add types to proto definitions * adding compatibility unit tests * shade avro dependencies * add shading to pulsar client kafka
-
- 01 Jun, 2018 1 commit
-
-
Committed by Matteo Merli
* Fixed event time metadata on batched messages * Fixed test * Fixed cpp test topic name
-