1. 31 March 2020, 1 commit
    • [pulsar-broker] add flag to skip broker shutdown on transient OOM (#6634) · 2ea27982
      Committed by Rajan Dhabalia
      ### Motivation
      
      Sometimes a high dispatch rate on one topic can temporarily push the broker into OOM; this is a transient error, and the broker can recover within a few seconds as soon as some memory is released. However, the 2.4 release includes change #4196, which restarts the broker on OOM. That can cause major instability in the cluster: the topic moves from one broker to another, multiple brokers are restarted, and other topics are disrupted as well. We have seen a similar issue reported in #5896. Since this can be a transient error, we need a way to ignore it, so we add a dynamic flag to skip broker shutdown on OOM and avoid instability in the cluster.
      ```
      01:48:49.549 [pulsar-io-22-37] ERROR org.apache.pulsar.PulsarBrokerStarter - -- Shutting down - Received OOM exception: Direct buffer memory
      java.lang.OutOfMemoryError: Direct buffer memory
              at java.nio.Bits.reserveMemory(Bits.java:175) ~[?:?]
              at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118) ~[?:?]
              at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[?:?]
              at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:769) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:745) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at org.apache.bookkeeper.common.allocator.impl.ByteBufAllocatorImpl.newDirectBuffer(ByteBufAllocatorImpl.java:164) ~[bookkeeper-common-allocator-4.9.4.2-yahoo.jar:4.9.4.2-yahoo]
              at org.apache.bookkeeper.common.allocator.impl.ByteBufAllocatorImpl.newDirectBuffer(ByteBufAllocatorImpl.java:158) ~[bookkeeper-common-allocator-4.9.4.2-yahoo.jar:4.9.4.2-yahoo]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.handler.ssl.SslHandler.allocate(SslHandler.java:1912) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.handler.ssl.SslHandler.allocateOutNetBuf(SslHandler.java:1923) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.handler.ssl.SslHandler.wrap(SslHandler.java:826) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.handler.ssl.SslHandler.wrapAndFlush(SslHandler.java:797) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.handler.ssl.SslHandler.flush(SslHandler.java:778) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:802) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:814) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(AbstractChannelHandlerContext.java:794) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at org.apache.pulsar.broker.service.Consumer.lambda$sendMessages$51(Consumer.java:265) ~[pulsar-broker-2.4.6-yahoo.jar:2.4.6-yahoo]
              at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [netty-all-4.1.32.Final.jar:4.1.32.
      :
      : 
      01:48:49.549 [pulsar-io-22-39] ERROR org.apache.pulsar.PulsarBrokerStarter - -- Shutting down - Received OOM exception: Direct buffer memory
      java.lang.OutOfMemoryError: Direct buffer memory
              at java.nio.Bits.reserveMemory(Bits.java:175) ~[?:?]
              at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118) ~[?:?]
              at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317) ~[?:?]
              at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:769) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:745) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) ~[netty-all-4.1.32.Final.jar:4.1.32.Final]
              at org.apache.bookkeeper.common.allocator.impl.ByteBufAllocatorImpl.newDirectBuffer(ByteBufAllocatorImpl.java:164) [bookkeeper-common-allocator-4.9.4.2-yahoo.jar:4.9.4.2-yahoo]
              at org.apache.bookkeeper.common.allocator.impl.ByteBufAllocatorImpl.newDirectBuffer(ByteBufAllocatorImpl.java:158) [bookkeeper-common-allocator-4.9.4.2-yahoo.jar:4.9.4.2-yahoo]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185) [netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176) [netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.unix.PreferredDirectByteBufAllocator.ioBuffer(PreferredDirectByteBufAllocator.java:53) [netty-all-4.1.32.Final.jar:4.1.32.Final]
              at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) [netty-all-4.1.32.Final.jar:4.1.32.Final]
      ```
      
      ### Modifications
      Add a dynamic flag to skip broker shutdown on OOM.
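
      A minimal sketch of how such a flag could be used; the flag name `skipBrokerShutdownOnOOM` and the pulsar-admin invocation below are assumptions for illustration, not taken verbatim from this change:
      ```
      # broker.conf (sketch, flag name assumed)
      # When true, the broker logs the OutOfMemoryError and keeps running instead of shutting down
      skipBrokerShutdownOnOOM=true

      # Since the flag is dynamic, it could also be toggled at runtime, e.g.:
      # pulsar-admin brokers update-dynamic-config --config skipBrokerShutdownOnOOM --value true
      ```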
  2. 05 March 2020, 1 commit
    • [Issue 6394] Add configuration to disable auto creation of subscriptions (#6456) · c3292a61
      Committed by Sijie Guo
      ### Motivation
      
      Fixes #6394
      
      ### Modifications
      
      - Provide a flag `allowAutoSubscriptionCreation` in `ServiceConfiguration`, defaulting to `true` (a hedged config sketch follows after this list).
      - When `allowAutoSubscriptionCreation` is disabled and the specified (`Durable`) subscription does not exist on the topic, a consumer's subscribe request is rejected directly in `handleSubscribe` in `ServerCnx`.
      - Create the subscription on the coordination topic if it does not exist when initializing `WorkerService`.
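
      A minimal sketch of disabling the behaviour, assuming only the `allowAutoSubscriptionCreation` flag described above (the value shown is illustrative):
      ```
      # broker.conf (sketch)
      # Defaults to true; when false, subscribing to a non-existent durable subscription is rejected
      allowAutoSubscriptionCreation=false
      ```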
  3. 17 February 2020, 1 commit
  4. 14 February 2020, 1 commit
    • Supports evenly distribute topics count when splits bundle (#6241) · 1c099da5
      Committed by lipenghui
      ### Motivation
      
      Currently, a bundle split divides the bundle into two parts of the same size. When there are only a few topics this does not work well: topics are assigned to brokers according to the hash of the topic name, and hashing is not effective at splitting a bundle that contains a small number of topics.

      So, this PR introduces an option (`-balance-topic-count`) for bundle split. When it is set to true, the given bundle is split into two parts, each containing the same number of topics.

      It also introduces a new load manager implementation named `org.apache.pulsar.broker.loadbalance.impl.BalanceTopicCountModularLoadManager`. The new implementation splits bundles with balanced topic counts; otherwise it does not differ from `ModularLoadManagerImpl`.
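
      For illustration, a hedged sketch of switching to the new implementation; `loadManagerClassName` is the usual broker.conf switch for the load manager and is an assumption here rather than something stated in this change:
      ```
      # broker.conf (sketch)
      # Use the load manager whose bundle split keeps the same number of topics in each half
      loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.BalanceTopicCountModularLoadManager
      ```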
  5. 01 February 2020, 1 commit
  6. 19 January 2020, 1 commit
    • [docs] Update configuration information for 2.4.x releases (#6047) · 6f713a13
      Committed by Jennifer Huang
      ### Motivation
      Some parameters were added to the `broker.conf` and `standalone.conf` files, but they have not been updated in the docs.
      See the following PRs for details: #4150, #4066, #4197, #3819, #4261, #4273, #4320.
      
      ### Modifications
      Add the information for those parameters and sync the docs with the code.

      The descriptions are not updated much, for two reasons:
      1. Keep doc content consistent with the code: the descriptions for those parameters need to be updated in the code first and then synced into the docs.
      2. A generator will be adopted to produce this content automatically in the near future.
  7. 09 January 2020, 1 commit
  8. 01 January 2020, 1 commit
    • [broker] Allow for namespace default of offload threshold (#5872) · 1aa32994
      Committed by Addison Higham
      Most namespace-level configurations have a corresponding cluster
      configuration that sets a namespace default.

      The offload threshold does not, which makes it more difficult to ensure
      that namespaces pick up the cluster-wide namespace default.
      
      There is one small wrinkle with this commit: `-1` is used as a
      sentinel value to indicate that the cluster default should be used. This means that
      if the cluster default turns offloading on and you want to
      disable it for a specific namespace, that namespace needs to set the value to
      some negative number other than `-1`.
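
      A hedged sketch of the resulting semantics; the broker setting name and the pulsar-admin command below are assumptions for illustration, not taken verbatim from this change:
      ```
      # broker.conf (sketch, setting name assumed)
      # Cluster-wide namespace default for the offload threshold, in bytes
      managedLedgerOffloadAutoTriggerSizeThresholdBytes=10737418240

      # Per-namespace override (sketch): -1 falls back to the cluster default,
      # so another negative value is needed to actually disable offloading for one namespace:
      # pulsar-admin namespaces set-offload-threshold --size -2 my-tenant/my-namespace
      ```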
  9. 28 November 2019, 1 commit
  10. 27 September 2019, 1 commit
  11. 17 September 2019, 1 commit
    • Add more config for auto-topic-creation (#4963) · 547c4218
      Committed by Xiaobing Fang
      Master Issue:  #4926
      
      ### Motivation
      
      
      Currently, the distinction between partitioned and non-partitioned topics is a little confusing for users. PR #3450 added a config for auto-topic-creation.
      We can leverage this config to provide some more configuration for auto-topic-creation.
      
      ### Modifications
      
      - Add `allowAutoTopicCreationType` and `allowAutoTopicCreationNumPartitions` to the configuration (a hedged sketch follows after this list).
      - Users can use both configurations to control how topics are created automatically.
      - Add tests.
      - Update docs.
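
      A minimal sketch using the two flag names listed above; the values shown are illustrative assumptions:
      ```
      # broker.conf (sketch)
      # Type of topic to create automatically (e.g. partitioned or non-partitioned)
      allowAutoTopicCreationType=partitioned
      # Number of partitions to use when a partitioned topic is created automatically
      allowAutoTopicCreationNumPartitions=3
      ```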
  12. 12 August 2019, 1 commit
    • Add option to disable authentication for proxy /metrics (#4921) · be7b24f9
      Committed by Addison Higham
      This commit adds a new option to optionally disable authentication for the
      `/metrics` endpoint in the pulsar-proxy.

      Currently, authentication is required for the metrics endpoint whenever
      authentication is enabled, which makes monitoring more difficult.
      However, rather than disabling it completely and exposing metrics to
      any unknown user, this makes the exemption opt-in.
      
      It could be argued that it should default to false, but as it is likely
      that the proxy is the only component potentially exposed to the public internet, we
      default to not exposing data.
      
      Fixes #4920
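
      For illustration, a hedged sketch of the opt-in in proxy.conf; the setting name is an assumption and should be checked against the merged change:
      ```
      # proxy.conf (sketch, setting name assumed)
      # When false, /metrics can be scraped without authentication even if authentication is enabled
      authenticateMetricsEndpoint=false
      ```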
  13. 10 July 2019, 1 commit
  14. 14 June 2019, 1 commit
    • [pulsar-broker] Add support for other algorithms in token auth (#4528) · 04e5fee6
      Committed by Addison Higham
      Before this patch, all keys were read as RSA, which meant that only RSA-compatible
      JWT signing algorithms could be used; in particular, this
      ruled out the ECDSA family of JWT keys.

      This patch changes the signature used to parse keys to also
      take a SignatureAlgorithm, and adds a new config option
      `tokenPublicAlg` which can be used to indicate what algorithm the
      broker/proxy should use when reading public keys. These all
      default to RS256, which indicates the key should be decoded as RSA (even if
      another RS/PS algorithm is used).

      This also adds some new options to the token CLI tool for the commands
      that weren't respecting the algorithm; these are defaulted to RS256
      as well.
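
      A minimal sketch of the new option in broker.conf; `tokenPublicAlg` is named in this change, while the ES256 value and the key path are illustrative assumptions:
      ```
      # broker.conf (sketch)
      # Algorithm used when parsing the configured public key (defaults to RS256, i.e. RSA)
      tokenPublicAlg=ES256
      tokenPublicKey=file:///path/to/ecdsa-public.key
      ```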
  15. 04 June 2019, 1 commit
    • [tiered-storage] Add support for AWS instance and role creds (#4433) · 176c901a
      Committed by Addison Higham
      * Add support for AWS instance and role creds
      
      This commit makes changes to the tiered-storage support for S3
      to allow for EC2 instance-metadata credentials as well as
      additional config options for assuming a role to get credentials.

      This works by changing the way we provide credentials to use the
      functional `Supplier` interface, and by using the AWS-specific
      `SessionCredentials` object when we detect that the
      `CredentialProvider` is providing credentials that have a session token.
      
      * [tiered_storage] Tweak s3 credential handling to check on boot
      
      This changes the S3 handling slightly: instead of falling back to static
      credentials, we now fail if no S3 credentials can be found, and the
      unit tests are changed to start a broker with S3 credentials.

      With the new Supplier API, we now fetch credentials on every request.
      Because of this, the failure and subsequent try/catch is costly; the
      integration tests were relying on it, which made them significantly
      slower.

      Instead, we just check whether we can fetch credentials, and if we can't,
      we treat it as an error condition and exit the app, since it is unlikely in a
      production scenario to have no credentials at all (a hedged config sketch follows below).
      
      * fix s3 test for missing creds
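
      For illustration only, a hedged sketch of an offload configuration that relies on instance/role credentials instead of static keys; the setting names below are assumptions, not taken verbatim from this change:
      ```
      # broker.conf (sketch, setting names assumed)
      managedLedgerOffloadDriver=aws-s3
      s3ManagedLedgerOffloadBucket=my-offload-bucket
      # With no static keys configured, credentials come from the EC2 instance metadata;
      # optionally assume a role to obtain session credentials
      s3ManagedLedgerOffloadRole=arn:aws:iam::123456789012:role/pulsar-offload
      s3ManagedLedgerOffloadRoleSessionName=pulsar-offload
      ```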
  16. 18 May 2019, 2 commits
  17. 04 May 2019, 1 commit
  18. 02 May 2019, 1 commit
    • Allow to configure the managed ledger cache eviction frequency (#4066) · f5c7b22f
      Committed by Matteo Merli
      * Allow to configure the managed ledger cache eviction frequency (a hedged config sketch follows after this list)
      
      * Fixed test
      
      * Simplified the cache eviction to make it predictable at the configured frequency
      
      * Address comments
      
      * Apply eviction on slowest active reader by preference
      
      * Re-introduced backlogged subscriptions test
      
      * Addressed comments
      
      * Use config option
      
      * Fixed active/inactive logic and read position
      
      * Use dedicated thread for cache evictions
      
      * Added config options in docs
      
      * Fixed tests
      
      * Added time triggered eviction test
      
      * Fixed flaky test
      
      * Fixed tests
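
      A hedged sketch of the knob introduced by this change; the setting name and value are assumptions for illustration:
      ```
      # broker.conf (sketch, setting name assumed)
      # How many times per second the dedicated eviction thread runs cache eviction
      managedLedgerCacheEvictionFrequency=100
      ```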
  19. 26 March 2019, 1 commit
  20. 15 March 2019, 1 commit
  21. 02 March 2019, 1 commit
  22. 27 February 2019, 1 commit
  23. 14 February 2019, 1 commit
  24. 12 February 2019, 2 commits
  25. 20 January 2019, 1 commit
  26. 13 December 2018, 1 commit
    • Add bookkeeperClientRegionawarePolicyEnabled and bookkeeperClientReor… (#3171) · 24cc4bbb
      Committed by Christophe Bornet
      ## Motivation
      Fix #3119. This allows configuring the region-aware placement policy and read reordering so that brokers read first from bookies in their own region.
      
      ## Modifications
      1. Added parameters:
      ```
      // Enable region-aware bookie selection policy. BK will choose bookies from
      // different regions and racks when forming a new bookie ensemble.
      // If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored.
      bookkeeperClientRegionawarePolicyEnabled=false
      
      // Enable/disable reordering read sequence on reading entries.
      bookkeeperClientReorderReadSequenceEnabled=false
      ```
      
      2. Fixed a bug in ZkBookieRackAffinityMapping: the value set to racksWithHost by deserialize() was overridden by the assignment in setConf(). The fix just moves the hostname workaround into setConf().
      
      ## Result
      Users can enable bookkeeperClientRegionawarePolicyEnabled and bookkeeperClientReorderReadSequenceEnabled to make brokers read from bookies in their own region.
  27. 29 November 2018, 1 commit
    • PIP-25: Token based authentication (#2888) · a99f7332
      Committed by Matteo Merli
      * PIP-25: Token based authentication
      
      * Addressed comments
      
      * Use Authorization header
      
      * Update to support env:, data: and file: as sources for keys and tokens (see the sketch after this list)
      
      * Fixed cli description
      
      * Updated broker.conf
      
      * Improved consistency in reading keys and CLI tools
      
      * Fixed check for http headers
      
      * Accept rel time with no specified unit
      
      * Fixed reading data: URL
      
      * Addressed comments
      
      * Added integration tests
      
      * Addressed comments
      
      * Added CLI command to validate token against key
      
      * Fixed integration tests
      
      * Removed env:
      
      * Fixed rel time parsing
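
      For illustration, a hedged sketch of enabling token authentication on the broker; the provider class and setting names are assumptions based on the description above, and keys can be supplied via data: or file: sources:
      ```
      # broker.conf (sketch, setting names assumed)
      authenticationEnabled=true
      authenticationProviders=org.apache.pulsar.broker.authentication.AuthenticationProviderToken
      # Symmetric secret key used to validate tokens, read from a file: source
      tokenSecretKey=file:///path/to/secret.key
      ```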
  28. 27 November 2018, 1 commit
  29. 25 September 2018, 1 commit
  30. 29 August 2018, 1 commit
  31. 16 August 2018, 1 commit
    • Increased default brokerShutdownTimeout to 60 seconds (#2377) · 7416fc0c
      Committed by Matteo Merli
      ### Motivation
      
      The default timeout for graceful broker shutdown is set to 3 seconds. This leaves little room for a graceful shutdown when the broker is serving a lot of topics.

      There is no big downside to increasing the timeout to a much larger value.
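
      A hedged sketch of the corresponding broker.conf entry; the exact setting name (here assumed to be expressed in milliseconds) should be verified against the merged change:
      ```
      # broker.conf (sketch, setting name assumed)
      # Time to wait for graceful broker shutdown before forcing the process to exit (previously 3000)
      brokerShutdownTimeoutMs=60000
      ```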
  32. 02 August 2018, 1 commit
  33. 23 July 2018, 1 commit
    • Pulsar website using docusaurus (#2206) · 7d75fd28
      Committed by cckellogg
      ### Motivation
      
      Improve the documentation and usability of the Pulsar website. This moves the website and documentation to a new framework (https://docusaurus.io/), which will make it easier to maintain going forward.
      
      ### Modifications
      
      A new version of the website in the site2 directory. Also updates the Pulsar build Docker image to add the new website build dependencies.
      
      ### Result
      
      A more usable website and documentation.
      
      A preview of the site can be seen here: https://cckellogg.github.io/incubator-pulsar
      *All the links and images might not work on this site since it is a test-only site.*